Spring Boot import additional extensionless properties - java

I'm running my Spring Boot application with Kubernetes. Part of the architecture involves mounting secrets as file volumes under /opt/config.
$ ls -l
SECRET_1
SECRET_2
SECRET_3
Each file contains .properties-like syntax (key=value).
I've been attempting to make Spring Boot load these as per the documentation: spring.config.import=file:/etc/config/myconfig[.yaml]
However, I just can't get this to work. My full command is: java -Dspring.config.location=file:./opt/application.properties -Dspring.config.additional-location=file:./opt/config/*[.properties] -jar target/test.war --debug --server.port=8080
The ./opt/application.properties file is loaded correctly.
I have also attempted to rename all the files to include .properties:
$ ls -l
SECRET_1.properties
SECRET_2.properties
SECRET_3.properties
And load them via java -Dspring.config.location=file:./opt/application.properties -Dspring.config.import=file:./opt/config/*/ -jar target/test.war --debug --server.port=8080; however, this also did not import any of the properties under ./opt/config.
I've read through the documentation dozens of times now, and this should be working, but it's not. Did I miss something obvious? Why is it not loading any of the files under ./opt/config?

I think the problem is that the locations you specify are where Spring looks for application.properties, application.yml, and profile-specific variants (application-prod.properties, etc.). I think you are better off loading all the files in a configured location programmatically; the sketch below might provide some ideas.
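A minimal sketch of that approach using Spring Boot's EnvironmentPostProcessor hook (the class name and property-source naming are mine, and I'm assuming the secrets stay mounted under /opt/config as in your setup):

import java.io.IOException;
import java.io.InputStream;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Properties;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.env.EnvironmentPostProcessor;
import org.springframework.core.env.ConfigurableEnvironment;
import org.springframework.core.env.PropertiesPropertySource;

// Loads every file in /opt/config as a property source, regardless of extension.
public class SecretsDirectoryPostProcessor implements EnvironmentPostProcessor {

    private static final Path SECRETS_DIR = Paths.get("/opt/config");

    @Override
    public void postProcessEnvironment(ConfigurableEnvironment environment, SpringApplication application) {
        if (!Files.isDirectory(SECRETS_DIR)) {
            return; // nothing mounted; leave the environment untouched
        }
        try (DirectoryStream<Path> files = Files.newDirectoryStream(SECRETS_DIR)) {
            for (Path file : files) {
                Properties props = new Properties();
                try (InputStream in = Files.newInputStream(file)) {
                    props.load(in); // each file uses key=value syntax
                }
                // addLast gives these sources lower precedence than application.properties
                environment.getPropertySources().addLast(
                        new PropertiesPropertySource("secret:" + file.getFileName(), props));
            }
        } catch (IOException e) {
            throw new IllegalStateException("Could not load secrets from " + SECRETS_DIR, e);
        }
    }
}

Register it in src/main/resources/META-INF/spring.factories so Spring Boot runs it at startup (the package is a placeholder):
org.springframework.boot.env.EnvironmentPostProcessor=com.example.SecretsDirectoryPostProcessor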

Related

How can I export traces generated by the OpenTelemetry Java agent to Google Cloud Trace?

I've got a Spring Boot application that I'd like to automatically generate traces for using the OpenTelemetry Java agent, and subsequently upload those traces to Google Cloud Trace.
I've added the following code to the entry point of my application for sending traces:
import com.google.cloud.opentelemetry.trace.TraceExporter;
import io.opentelemetry.sdk.OpenTelemetrySdk;
import io.opentelemetry.sdk.trace.SdkTracerProvider;
import io.opentelemetry.sdk.trace.export.SimpleSpanProcessor;

OpenTelemetrySdk.builder()
    .setTracerProvider(
        SdkTracerProvider.builder()
            .addSpanProcessor(
                SimpleSpanProcessor.create(TraceExporter.createWithDefaultConfiguration()))
            .build())
    .buildAndRegisterGlobal();
...and I'm running my application with the following JVM arguments:
-javaagent:path/to/opentelemetry-javaagent-all.jar \
-jar myapp.jar
...but I don't know how to connect the two.
Is there some agent configuration I can apply? Something like:
-Dotel.traces.exporter=google_cloud_trace
I ended up resolving this as follows:
1. Clone the GoogleCloudPlatform/opentelemetry-operations-java repo:
git clone git@github.com:GoogleCloudPlatform/opentelemetry-operations-java.git
2. Build the exporter-auto project:
./gradlew clean :exporter-auto:shadowJar
3. Copy the jar produced in exporter-auto/build/libs to my target project.
4. Run the application with the following arguments:
-javaagent:path/to/opentelemetry-javaagent-all.jar
-Dotel.javaagent.experimental.extensions=[artifact-from-step-3].jar
-Dotel.traces.exporter=google_cloud_trace
-Dotel.metrics.exporter=none
-jar myapp.jar
Note: This setup does not require any explicit code changes in the target code base.

How to define datasource properties for WildFly bootable JAR without OpenShift CLI?

Normally you could use standalone.xml to do this, but the WildFly bootable JAR doesn't seem to have a standalone.xml, since everything is packed into a single JAR.
The examples that JBoss provides assume you'll only ever use OpenShift and rely on an arcane OpenShift CLI command (below) that just somehow creates the right file in the right spot. https://github.com/wildfly-extras/wildfly-jar-maven-plugin/tree/4.0.0.Final/examples/postgresql
oc new-app --name database-server \
--env POSTGRESQL_USER=admin \
--env POSTGRESQL_PASSWORD=admin \
--env POSTGRESQL_DATABASE=sampledb \
postgresql
However, there is no config file created with that command (or they didn't check it in), and the documentation doesn't say anything about how to do the same for non-OpenShift projects.
I'm trying to find any info on how to configure a (Postgres) datasource for a non-OpenShift deployment.
I figured this out on my own with some experimentation. WildFly documentation on bootable JARs is still really minimal and lacking lots of detail, which required a lot of guessing and experimenting.
While there is an overlay that lets you specify DB info via environment variables, that's a bit hacky: it doesn't allow you to define more than one datasource, nor can you specify the JNDI name. Instead, I used a CLI script which gets fed into the jar builder plugin.
datasource.cli
data-source add --name=<name> --jndi-name=java:jboss/datasources/<schema> --driver-name=postgresql --connection-url=jdbc:postgresql://localhost:5432/<db> --user-name=<user> --password=<pass>
Make sure to use your own values for placeholders <name>, <schema>, <db>, <user>, <pass> and swap out the hostname / port if needed.
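For illustration, a filled-in version of that line (every value below is made up; substitute your own):
data-source add --name=AppDS --jndi-name=java:jboss/datasources/AppDS --driver-name=postgresql --connection-url=jdbc:postgresql://localhost:5432/sampledb --user-name=admin --password=admin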
pom.xml (snippet)
<configuration>
    <cli-sessions>
        <cli-session>
            <script-files>
                <script>scripts/datasource.cli</script>
            </script-files>
            <resolve-expressions>true</resolve-expressions>
        </cli-session>
    </cli-sessions>
    <feature-packs>
        <feature-pack>
            <location>wildfly@maven(org.jboss.universe:community-universe)#23.0.0.Final</location>
        </feature-pack>
        <feature-pack>
            <groupId>org.wildfly</groupId>
            <artifactId>wildfly-datasources-galleon-pack</artifactId>
            <version>1.2.2.Final</version>
        </feature-pack>
    </feature-packs>
    <layers>
        <layer>jaxrs-server</layer>
        <layer>postgresql-driver</layer>
    </layers>
</configuration>
In the above XML config, the items of note are:
<script> tells the builder where to find the CLI script that adds the datasource.
<feature-pack> pulls in the datasource pack. I didn't test whether this is strictly required; maybe the driver layer is all that's needed. Worth trying if you have time.
<layer> specifies the postgresql-driver value. This is required; without it, bootup will complain about a missing driver when the CLI script runs.
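Once the bootable JAR starts with this configuration, the datasource can be consumed via its JNDI name. A minimal sketch from the application side (the class is made up, and the JNDI name matches the hypothetical CLI example above; @Resource injection only applies to container-managed components such as JAX-RS resources or CDI beans):

import java.sql.Connection;
import java.sql.SQLException;
import javax.annotation.Resource;
import javax.sql.DataSource;

public class DatabaseHealthCheck {

    // Injects the datasource registered by the CLI script, looked up by JNDI name
    @Resource(lookup = "java:jboss/datasources/AppDS")
    private DataSource dataSource;

    public boolean isDatabaseReachable() throws SQLException {
        try (Connection connection = dataSource.getConnection()) {
            return connection.isValid(2); // two-second validation timeout
        }
    }
}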

Using tinylog to write logs into Tomcat's log folder

I want to write logging messages to a defined file in Tomcat's log folder, using Eclipse, Maven, and tinylog.
Problem: There is no webapp.log as soon as I run the app in tomcat.
In eclipse everything works fine.
What I did:
add the Maven dependency tinylog-1.2.jar
set the configuration parameter in the Run Configuration (Main tab) so the tinylog properties can be found for the build process:
name: -Dtinylog.configuration
value: C:\Program Files\Tomcat\apache-tomcat-9.0.0.M13\webapps\folder\subfolder\tinylog.properties
in Java-Class:
import org.pmw.tinylog.Logger;
...
Logger.info(message);
tinylog.properties looks like:
tinylog.writer = file
tinylog.writer.filename = webapp.log
tinylog.writer.buffered = true
tinylog.writer.append = true
tinylog.level = info
I also tried different file references, but none of them worked:
tinylog.writer.file = C:\Program Files\Tomcat\apache-tomcat-9.0.0.M13\logs\webapp.log
tinylog.writer.file= "C:\Program Files\Tomcat\apache-tomcat-9.0.0.M13\logs\webapp.log"
Does anybody know how to write the logs into the named file path?
Thanks for any valuable hint.
I propose using the tinylog-jul artifact instead of the usual tinylog artifact. tinylog-jul provides the tinylog API but uses the Tomcat logging back end, so you don't need to configure tinylog at all. All log entries will automatically be output just as you are used to with other logging APIs on Tomcat.
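A sketch of the dependency swap in the pom.xml (the coordinates are my assumption, mirroring the regular tinylog 1.x artifact; verify them against the tinylog release you actually use):

<dependency>
    <!-- assumed coordinates: check group ID and version for your tinylog release -->
    <groupId>org.tinylog</groupId>
    <artifactId>tinylog-jul</artifactId>
    <version>1.2</version>
</dependency>

The application code stays exactly the same (import org.pmw.tinylog.Logger; Logger.info(message);); only the back end changes.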

Hadoop log4j cannot find KafkaLog4JAppender.class

I added KafkaLog4JAppender functionality to my MR job.
Locally, the job runs and sends the formatted logs into my Kafka cluster.
When I try to run it from the YARN server, using:
jar [jar-name].jar [DriverClass].class [job-params] -Dlog4j.configuration=log4j.xml -libjars
I get the following exception:
log4j:ERROR Could not create an Appender. Reported error follows.
java.lang.ClassNotFoundException: kafka.producer.KafkaLog4jAppender
The KafkaLog4JAppender class is on the classpath; running
jar tvf [my-jar].jar | grep KafkaLog4J
finds the class.
I'm kinda lost and would appreciate any helpful input.
Thanks in advance!
If it works in local mode but not in YARN/distributed mode, the jar is probably not being distributed properly. You might want to check "Using third-party jars and files in your MapReduce application (distributed cache)" for details on how to distribute the jar containing KafkaLog4jAppender.class.
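For example, something along these lines (the driver class and jar path are illustrative, and -libjars is only honored when the driver parses generic options via ToolRunner):

hadoop jar [jar-name].jar com.example.MyDriver -libjars /path/to/kafka-log4j-appender.jar [job-params]

The jar named by -libjars is shipped through the distributed cache and added to the classpath of every task JVM, which is where the ClassNotFoundException is being thrown.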

How can I run Play Framework over HTTPS only in dev mode?

I'd like to run Play Framework over HTTPS only in development mode, and I've done so using the following bit of configuration:
https.port=9443
trustmanager.algorithm=JKS
keystore.file=conf/certificate.jks
keystore.password=password
certificate.password=password
application.mode=dev
%prodenv.application.mode=prod
This works when I run play run, but in production we run play run --%prodenv, and there I want to disable HTTPS, as HTTPS is handled by Nginx. I'm lost as to how to do this. I would like to do it via the configuration file and not via additional command-line arguments, as that defeats the purpose of having all my application configuration in the application.conf file.
One way to do it is to have two conf files: application.conf and prod.conf.
application.conf stays the way it is, and prod.conf would look something like:
include "application.conf"
https.port = myProdPort
### other params to be overwritten
When launching your application in prod, you can do:
play run -Dconfig.file=/mypath/prod.conf
sbt run -Dhttps.port=9443 -Dhttp.port=disabled
Rather than have two configuration files, I achieved this using just one. To run the app, I run play run --%dev, and this is what the configuration looks like:
%dev.https.port=9443
%dev.trustmanager.algorithm=JKS
%dev.keystore.file=conf/certificate.jks
%dev.keystore.password=password
%dev.certificate.password=password
Similar to the other answer by Johan, I do it the reverse way: my application.conf is for prod, and I use a dev.conf just in development:
include "application.conf"
https.port = devPort
And run locally like so:
play run -Dconfig.file=dev.conf
This way you don't have to change any configuration on your prod server.
You could remove the https.port param from your conf file and pass it in via the command line when you run in development mode:
play run -Dhttps.port=9443
See: Specifying server address and port
Play Framework runs on the Netty server; you can override the server configuration using -D parameters.
In sbt it can be done like:
sbt "project pepe-grillo-server" "run -Dhttps.port=42443 -Dhttp.port=disabled"
If you are using a custom SSL engine provider (CustomSSLEngineProvider), you can use the command below to run Netty in SSL mode:
./sbt "-Dhttps.port=9443" "-Dplay.server.https.engineProvider=services.https.CustomSSLEngineProvider" "-Dconfig.resource=<config file>" run
Once the server is up and running, you can curl the endpoint to check certificate validity:
curl -v https://127.0.0.1:9443
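If the certificate in conf/certificate.jks is self-signed, as dev certificates usually are, curl will reject the TLS handshake unless you let it skip verification:
curl -vk https://127.0.0.1:9443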
