We run our Micronaut integration tests in the cloud, in a Docker container.
We're setting MICRONAUT_ENVIRONMENTS=staging in the Docker environment variables to force our application to read the config values from application-staging.yaml.
However, Micronaut automatically adds "test" as an environment and then reads the config values from application-test.yaml.
According to the docs (https://docs.micronaut.io/2.2.1/guide/index.html#propertySource), environment variables should take priority over deduced environments when loading the config.
Is there any reason why Micronaut is giving priority to the application-test.yaml values here?
The test environment is added when Micronaut tests are running, even when the MICRONAUT_ENVIRONMENTS environment variable is set.
After a bit of digging, it seems the "test" environment is added before the DefaultEnvironment class is initialized, so it's added even if micronaut.env.deduction is set to false.
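For what it's worth, a minimal sketch of pinning the environments by hand when bootstrapping a context (ApplicationContext.builder(), deduceEnvironment(..) and environments(..) are standard Micronaut APIs; whether this wins over the environment that micronaut-test itself adds is exactly the open question above):

import io.micronaut.context.ApplicationContext;

public class StagingContextSketch {
    public static void main(String[] args) throws Exception {
        // Sketch only: disable deduction and force "staging" so that
        // application-staging.yaml is the file being read.
        try (ApplicationContext ctx = ApplicationContext.builder()
                .deduceEnvironment(false)
                .environments("staging")
                .start()) {
            System.out.println(ctx.getEnvironment().getActiveNames());
        }
    }
}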
I have the following property in my application.properties:
# application.properties
LOCAL.JWT_PUBLIC_KEY=${JWT_PUBLIC_KEY:KEY_JWT}
And I'm injecting that property using the MicroProfile Config annotation @ConfigProperty:
// OnPremiseSecrets.java
public JwtConfig(@ConfigProperty(name = "LOCAL.JWT_PUBLIC_KEY") String jwtPublicKey)
It works perfectly when running the program on the JVM (compiled jar / quarkusDev),
but it doesn't work in a native GraalVM build; in the logs it returns KEY_JWT as the jwtPublicKey value, which is the default.
I tried reading environment variables directly using
System.getenv
and it returns the right value, so the environment variable is configured.
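To narrow down whether it is the placeholder expansion or the environment lookup that breaks in the native image, here is a small hedged sketch (ConfigProvider is the standard MicroProfile Config entry point; the property and variable names simply mirror the ones above):

import org.eclipse.microprofile.config.ConfigProvider;

public class ConfigDebug {
    // Logs the value resolved by MicroProfile Config next to the raw
    // environment variable, so a native run shows which lookup goes wrong.
    public static void dump() {
        String resolved = ConfigProvider.getConfig()
                .getOptionalValue("LOCAL.JWT_PUBLIC_KEY", String.class)
                .orElse("<not resolved>");
        String raw = System.getenv("JWT_PUBLIC_KEY");
        System.out.println("config = " + resolved + ", env = " + raw);
    }
}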
I developed the integration tests using Testcontainers. A few fields are set from environment variables (e.g. quarkus.datasource.username=${SER_DB_USERNAME:postgres}) in the application.properties file.
When setting the environment variable through Testcontainers like this:
GenericContainer<?> someService = new GenericContainer<>(img)
        .withEnv("SER_DB_USERNAME", DataLayer.DB_USERNAME);
the value is picked up successfully. But for the environment variable below, defined in the application.properties file as
app.security.enabled=${SER_SEC_ENABLE:true}
and consumed by
@IfBuildProperty(name = "app.security.enabled", stringValue = "true")
the value works when set from the command prompt using -DSER_SEC_ENABLED=true, yet when trying to pass the same value through Testcontainers, it's always null:
GenericContainer<?> someService = new GenericContainer<>(img)
        .withEnv("SER_SEC_ENABLE", "true");
Without more context on the project, I can at least observe that app.security.enabled is a build-time property rather than a runtime property, so it might already be evaluated at build time. If you start the container with an already built image/application, it is very likely that the environment variable has no effect.
Furthermore, setting a property on the JVM using the -D flag does not result in an environment variable; it is explicitly a system property of the JVM.
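As a quick illustration of that last point (plain JDK calls, nothing Quarkus-specific): a value passed with -D is only visible as a system property, while a value set via Testcontainers' withEnv ends up in the container's process environment:

public class PropertyVsEnv {
    public static void main(String[] args) {
        // e.g. started with: java -DSER_SEC_ENABLE=true PropertyVsEnv
        // -D populates a JVM system property only ...
        System.out.println("system property: " + System.getProperty("SER_SEC_ENABLE"));
        // ... while withEnv("SER_SEC_ENABLE", "true") on the container would
        // surface here instead.
        System.out.println("environment var: " + System.getenv("SER_SEC_ENABLE"));
    }
}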
I have a Spring Boot REST API app. I am using environment variables in the application.properties file. Some settings are shown below:
logging.level.springframework.web=${WEB_LOG_LEVEL}
logging.level.org.hibernate=${HIBERNATE_LOG_LEVEL}
In my unit test, I use the annotation @TestPropertySource("classpath:application-test.properties"). However, when I run mvn clean install, the build fails because of a unit test failure. I have provided the error log below. When I run it in the IDE, I can provide those environment variables. Any suggestions on how to pass them in mvn clean install? Or any other approaches you would recommend? Thanks much in advance!
***************************
APPLICATION FAILED TO START
***************************
Description:
Failed to bind properties under 'logging.level.springframework.web' to org.springframework.boot.logging.LogLevel:
Property: logging.level.springframework.web
Value: ${WEB_LOG_LEVEL}
Origin: class path resource [application.properties] - 44:35
Reason: failed to convert java.lang.String to org.springframework.boot.logging.LogLevel (caused by java.lang.IllegalArgumentException: No enum constant org.springframework.boot.logging.LogLevel.${WEB_LOG_LEVEL})
Action:
Update your application's configuration. The following values are valid:
DEBUG
ERROR
FATAL
INFO
OFF
TRACE
WARN
We have many options!
Best is that we (roughly) understand the two relevant docs, "2. Externalized Configuration" and PropertySource:
Leaving our application.properties as it is, we can:
(as tgdavies commented) introduce src/test/resources/application...
Here we can:
call it application.properties, and it will override the existing ("sensible") settings of src/main/resources/application.properties; then we don't need @PropertySource or @Profile on our test.
call it application-test.properties, and then rather work with @Profile("test") + @ActiveProfiles("test") (on our test class(es)), which has even higher precedence than the above.
don't use @PropertySource (with a some_custom_name.properties file) for this use case; it has too low precedence!
...in these properties files we will write (without placeholders):
logging.level.springframework.web=warn
logging.level.org.hibernate=warn
# or the log level(s) of our choice, overriding(!) the "main ones"
Or SET/EXPORT these environment variables in our (dev) environment! (via our CLI / OS dialog / MAVEN_OPTS / ...)
Using @TestPropertySource (the 2nd-highest precedence in the Spring Boot configuration hierarchy!; no profiles involved), with a complete test class sketch after these variants:
like (overriding the property):
@TestPropertySource(properties = "logging.level.springframework.web=warn", ...)
or (using/trying relaxed binding):
@TestPropertySource(properties = "web.log.level=warn", ...)
or (using a file):
@TestPropertySource(locations = "classpath:/some/properties.properties", ...)
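Put together, a minimal test class using the override variant could look roughly like this (class and test names are placeholders):

import org.junit.jupiter.api.Test;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.TestPropertySource;

// Overrides the placeholder-based logging properties for this test only, so no
// WEB_LOG_LEVEL/HIBERNATE_LOG_LEVEL environment variables are needed in the build.
@SpringBootTest
@TestPropertySource(properties = {
        "logging.level.springframework.web=WARN",
        "logging.level.org.hibernate=WARN"
})
class ApiSmokeTest {

    @Test
    void contextLoads() {
        // passes once the application context starts without the
        // "Failed to bind properties" error from the question
    }
}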
But a slight modification of our (src/main/...) application.properties can also be very helpful: a fallback! It looks like:
logging.level.springframework.web=${WEB_LOG_LEVEL:warn}
logging.level.org.hibernate=${HIBERNATE_LOG_LEVEL:warn}
It tries the environment variables and falls back to warn. With this, we can omit @PropertySource/@Profile and/or an additional test properties file.
And even better with relaxed binding:
logging.level.springframework.web=${web.log.level:warn}
logging.level.org.hibernate=${hibernate.log.level:warn}
This will accept the above environment variables, but also (previously defined) "properties", and fall back to "warn".
Conflict-free combinations of the proposed approaches are also possible.
See also: Chapter 2 (Externalized Configuration), Relaxed Binding (and Profiles!), and the Spring Boot How-to on Properties and Configuration.
In the Solr logs I see this error:
java.lang.UnsupportedOperationException: Serialization support for
org.apache.commons.collections.functors.InvokerTransformer is disabled for
security reasons. To enable it set system property
'org.apache.commons.collections.enableUnsafeSerialization' to 'true',
but you must ensure that your application does not de-serialize
objects from untrusted sources.
I am trying to add the flag -Dorg.apache.commons.collections.enableUnsafeSerialization=true, but it doesn't help.
How do I correctly enable this property? (I don't have access to solrconfig.xml.)
You can add it to the SOLR_OPTS environment variable or pass it directly to the start script:
bin/solr start -Dorg.apache.commons.collections.enableUnsafeSerialization=true
As per Configuring solrconfig.xml docs:
In general, any Java system property that you want to set can be passed through the bin/solr script using the standard -Dproperty=value syntax. Alternatively, you can add common system properties to the SOLR_OPTS environment variable defined in the Solr include file (bin/solr.in.sh or bin/solr.in.cmd).
I have a remote host set up with a Spark standalone instance (one master and one slave on the same machine for now). I also have local Java code with the spark-core dependency and a packaged jar with the actual Spark application. I'm trying to start it using the SparkLauncher class as described in its Javadoc.
Here is the dependency:
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-core_2.10</artifactId>
    <version>${spark.version}</version>
</dependency>
And here is the code of the launcher:
new SparkLauncher()
    .setVerbose(true)
    .setDeployMode("cluster")
    .setSparkHome("/opt/spark/current")
    .setAppResource(Resources.getResource("validation.jar").getPath())
    .setMainClass("com.blah.SparkTestApplication")
    .setMaster("spark://" + sparkMasterHostWithPort)
    .startApplication();
The error I'm getting is either path not found /opt/spark/current/ or, if I remove the setSparkHome call, Spark home not found; set it explicitly or use the SPARK_HOME environment variable.
Here are my naive questions: is there any workaround allowing me not to have the Spark binaries installed on the local host where I only want to run the launcher? Why is the Spark Java code referenced in the dependencies not capable of, or not enough for, connecting to a configured remote Spark master and submitting the application jar? Even if I put the Spark binaries, the application code and, if needed, even the Spark Java jar in an HDFS location and use another deployment approach, like YARN, would it be enough to use the launcher just to trigger the submission and start it remotely?
The reason is that I want to avoid installing the Spark binaries on multiple client nodes only to submit and start dynamically created/modified Spark applications from there; that sounds like a waste to me. Not to mention the need to package the application in a jar for each submission.
Short answer: you must have the Spark binaries on the client machine and the SPARK_HOME environment variable pointing to them.
Long answer: however, if you want to launch the job on a remote cluster, then you could make use of the following configuration in your Spark job:
val spark = SparkSession.builder.master("yarn")
.config("spark.submit.deployMode", "cluster")
.config("spark.driver.host", "remote.spark.driver.host.on.the.cluster")
.config("spark.driver.port", "35000")
.config("spark.blockManager.port", "36000")
.getOrCreate()
spark.driver.port and spark.blockManager.port are not mandatory, but they are needed if you are working in a closed environment, such as a Kubernetes network, and have some port gateway service defined for the Spark client pod.
Having the remote host defined in the master setting of the SparkLauncher will not work. You need to get the Hadoop configuration from the cluster; usually it is located in /etc/hadoop/conf on the cluster nodes. Place the Hadoop config directory on the client machine and point the HADOOP_CONF_DIR environment variable to it. This should be enough to get started.
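To make the YARN route concrete, here is a hedged sketch of the launcher side once HADOOP_CONF_DIR points at the copied cluster config (all paths are placeholders and the main class is taken from the question; the SparkLauncher(Map) constructor and the setters used are part of the public launcher API):

import java.util.HashMap;
import java.util.Map;
import org.apache.spark.launcher.SparkAppHandle;
import org.apache.spark.launcher.SparkLauncher;

public class RemoteSubmitSketch {
    public static void main(String[] args) throws Exception {
        Map<String, String> env = new HashMap<>();
        // Placeholder path: the Hadoop config directory copied from the cluster.
        env.put("HADOOP_CONF_DIR", "/etc/hadoop/conf-copy");

        SparkAppHandle handle = new SparkLauncher(env)
                .setSparkHome("/opt/spark/current")           // local Spark distribution
                .setMaster("yarn")
                .setDeployMode("cluster")
                .setAppResource("/path/to/validation.jar")    // placeholder path
                .setMainClass("com.blah.SparkTestApplication")
                .setVerbose(true)
                .startApplication();

        System.out.println("state: " + handle.getState());
    }
}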