I have an application that I have to deliver as a packaged JAR, which is then run by the client in a complicated environment that makes editing environment variables or JVM arguments very cumbersome (additionally, the client is not too technical). Currently we are using some external property files for configuring the database and so on, and this is going well so far.
I would like to allow the client to configure some aspects of Log4j2 using these properties files. I can see that Log4j2 allows multiple ways of performing property substitution: https://logging.apache.org/log4j/log4j-2.1/manual/configuration.html#PropertySubstitution
I can also see that it is possible to load property bundles, but if I understand the docs correctly these bundles need to be loaded from the classpath, and I have not stumbled upon a way to define this properties file by giving its direct path (such as "load the properties file /tmp/myconfig.properties").
So ultimately: is it possible to use variables from an external .properties file that is NOT on the classpath but in a specified filesystem location? Or does some other way exist to load this data from an external file? (I already noted that using environment variables or JVM arguments is out of the question in my case.)
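One workaround I am considering (just a sketch, not something I found in the docs; the path and key names below are made up) is to read the external file myself at startup and copy its entries into system properties before the first logger is obtained, so that ${sys:...} substitutions in log4j2.xml can pick them up:

import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class Bootstrap {
    public static void main(String[] args) throws IOException {
        // Read the client-editable file from a fixed filesystem location (hypothetical path).
        Properties external = new Properties();
        try (FileInputStream in = new FileInputStream("/opt/myapp/conf/logging.properties")) {
            external.load(in);
        }
        // Copy the entries into system properties BEFORE the first logger is created,
        // so that lookups like ${sys:log.dir} resolve when Log4j2 builds its configuration.
        external.forEach((key, value) -> System.setProperty(key.toString(), value.toString()));

        // Only now touch Log4j2; the first getLogger() call triggers configuration.
        Logger log = LogManager.getLogger(Bootstrap.class);
        log.info("Logging configured from external properties");
    }
}

I do not know whether this is the intended approach, which is why I am asking whether Log4j2 can reference such a file directly.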
Related
I have a Spring Boot (2.6.5) application that has a built-in default property file (contained in the jar). I also specify an external property file by using --spring.config.additional-location=/data/config/myapp.properties.
This works as expected: both files are loaded and properties from both files are used, with the external values overriding the internal ones in the case of duplicate keys.
Now I want to split up myapp.properties into multiple smaller property files. The reason for this is that they are created by a Kubernetes ConfigMap, and some parts are used in multiple applications, so it makes sense to split those into smaller files instead of duplicating everything.
I tried setting the parameter to a directory, like --spring.config.additional-location=/data/config/, and then placing the various .properties files in this directory. However, when I do this, Spring no longer loads the files. So it seems that when I don't specify a specific filename, it only scans for application.properties.
How can I configure Spring to look for and load all files of type .properties in the given directory? I could specify each file individually, but that would require me to always keep track of the files that are present which is difficult to maintain.
Reference: features.external-config.files.wildcard-locations
In your case, if you use --spring.config.additional-location=file:data/config/*/ then files like data/config/first/application.properties and data/config/second/application.properties will be loaded.
Note: if you use a different config name, like --spring.config.name=myapp, then all the directories should contain a myapp.properties file.
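For illustration (the paths below are just an example), the layout and invocation would then look something like this:

/data/config/first/myapp.properties
/data/config/second/myapp.properties

java -jar myapp.jar --spring.config.name=myapp --spring.config.additional-location=file:/data/config/*/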
I have two modules that will use ESAPI with the same properties files (ESAPI.properties and validation.properties).
These modules are packaged as WARs that are contained in an EAR.
I have the properties files inside one of the WAR files, where they are found at server start. The other WAR file seems to work fine and does not complain in the log that it can't find the properties files.
I am using ESAPI to sanitize HTML and URL parameters. I wonder if I even need these property files to be accessible to the second module, or to either one, since there is no custom configuration and everything is being done with defaults.
First, let me describe how ESAPI 2.x goes about finding its ESAPI.properties file.
The reference implementation class for ESAPI's SecurityConfiguration interface is
org.owasp.esapi.reference.DefaultSecurityConfiguration
With this default implementation, resources like ESAPI.properties and
Validation.properties can be put in several locations, which are searched in the following order:
1) Inside a directory set with a call to SecurityConfiguration.setResourceDirectory(). E.g.,
ESAPI.securityConfiguration().setResourceDirectory("C:\\myApp\\resources");
Of course, if you use this technique, it must be done before any other ESAPI calls are made that use ESAPI.properties (which are most of them).
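For instance, a rough sketch of doing this early in a web application via a ServletContextListener (the listener class and the directory path are hypothetical, not part of ESAPI):

import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import org.owasp.esapi.ESAPI;

public class EsapiResourceDirectoryListener implements ServletContextListener {
    @Override
    public void contextInitialized(ServletContextEvent sce) {
        // Must run before any other ESAPI call that reads ESAPI.properties.
        ESAPI.securityConfiguration().setResourceDirectory("/opt/myapp/esapi");
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        // Nothing to clean up.
    }
}

The listener would still need to be registered (e.g., in web.xml) so that it runs at startup of each WAR.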
2) Inside the directory defined by the System property "org.owasp.esapi.resources". You can set this on the java command line as follows (for example):
java -Dorg.owasp.esapi.resources="C:\temp\resources" ...
You may have to add this to the start-up script that starts your web server. For example, for Tomcat, in the "catalina" script that starts Tomcat, you can set the JAVA_OPTS variable to the '-D' string above.
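For example, something along these lines (the directory is just a placeholder):

export JAVA_OPTS="$JAVA_OPTS -Dorg.owasp.esapi.resources=/opt/shared/esapi"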
3) Inside the
System.getProperty( "user.home" ) + "/.esapi"
directory (supported for backward compatibility) or inside the
System.getProperty( "user.home" ) + "/esapi"
directory.
4) The first ".esapi" or "esapi" directory encountered on the classpath. Note this may be complicated by the fact that Java uses multiple class loaders, and if you have multiple applications in a given application server, they may be using different classpaths. For this reason, this option is not generally recommended, but it is offered for reasons of backward compatibility with earlier ESAPI 1.4.x versions.
Once ESAPI finds a valid property file (e.g., ESAPI.properties) that it can read, it stops searching for others.
Now, that said, if you want to share a single ESAPI.properties file across all of your .war files, I would recommend going with option #2 and setting the System property "org.owasp.esapi.resources" to some common secured directory that both of them can access. Also, you should use a full path name.
The answer was to place the esapi directory containing the properties files in
src/main/application
in one of the modules. This path puts its contents at the root of the EAR.
I'm running ESAPI in a Maven project with Java 1.8.0_71. I've put ESAPI.properties and validation.properties in
src/main/resources
This worked for me:
Attempting to load ESAPI.properties via the classpath.
SUCCESSFULLY LOADED ESAPI.properties via the CLASSPATH from '/ (root)' using current thread context class loader!
Attempting to load validation.properties via the classpath.
SUCCESSFULLY LOADED validation.properties via the CLASSPATH from '/ (root)' using current thread context class loader!
How to specify custom log4j appender in Hadoop 2 (amazon emr)?
Hadoop 2 ignores my log4j.properties file that contains a custom appender, overriding it with its internal log4j.properties file. There is a flag, -Dhadoop.root.logger, that specifies the logging threshold, but it does not help with custom appenders.
I know this question has been answered already, but there is a better way of doing this, and this information isn't easily available anywhere. There are actually at least two log4j.properties that get used in Hadoop (at least for YARN). I'm using Cloudera, but it will be similar for other distributions.
Local properties file
Location: /etc/hadoop/conf/log4j.properties (on the client machines)
This is the log4j.properties that gets used by the normal Java process.
It affects the logging of all the stuff that happens in the Java process but not inside of YARN/MapReduce. So all your driver code, anything that plugs MapReduce jobs together (e.g., Cascading initialization messages), will log according to the rules you specify here. This is almost never the logging properties file you care about.
As you'd expect, this file is parsed after invoking the hadoop command, so you don't need to restart any services when you update your configuration.
If this file exists, it will take priority over the one sitting in your jar (because it's usually earlier in the classpath). If this file doesn't exist the one in your jar will be used.
Container properties file
Location: /etc/hadoop/conf/container-log4j.properties (on the data node machines)
This file decides the properties of the output from all the map and reduce tasks, and is nearly always what you want to change when you're talking about hadoop logging.
In newer versions of Hadoop/YARN someone caught a dangerously virulent strain of logging fever, and now the default logging configuration ensures that single jobs can generate several hundred megs of unreadable junk, making your logs quite hard to read. I'd suggest putting something like this at the bottom of the container-log4j.properties file to get rid of most of the extremely helpful messages about how many bytes have been processed:
log4j.logger.org.apache.hadoop.mapreduce=WARN
log4j.logger.org.apache.hadoop.mapred=WARN
log4j.logger.org.apache.hadoop.yarn=WARN
log4j.logger.org.apache.hadoop.hive=WARN
log4j.security.logger=WARN
By default this file usually doesn't exist, in which case the copy of this file found in hadoop-yarn-server-nodemanager-stuff.jar (as mentioned by uriah kremer) will be used. However, as with the other log4j.properties file, if you do create /etc/hadoop/conf/container-log4j.properties it will be used for all your YARN stuff. Which is good!
Note: No matter what you do, a copy of container-log4j.properties in your jar will not be used for these properties, because the YARN nodemanager jars are higher in the classpath. Similarly, despite what the internet tells you, -Dlog4j.configuration=PATH_TO_FILE will not alter your container logging properties, because the option doesn't get passed on to YARN when the container is initialized.
1. In order to change log4j.properties at the name node, you can change /home/hadoop/log4j.properties.
2. In order to change log4j.properties for the container logs, you need to change it in the YARN container jar, since loading that file directly from the project resources is hard-coded.
2.1 SSH to the slave node (on EMR you can also simply add this as a bootstrap action, so you don't need to SSH to each of the nodes).
ssh to hadoop slave
2.2 Override container-log4j.properties in the jar's resources (run from the directory that contains your edited container-log4j.properties):
jar uf /home/hadoop/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.2.0.jar container-log4j.properties
Look for hadoop-config.sh in the deployment. That is the script that gets sourced before executing the hadoop command. I see the following code in hadoop-config.sh; see if modifying that helps.
HADOOP_OPTS="$HADOOP_OPTS -Dhadoop.root.logger=${HADOOP_ROOT_LOGGER:-INFO,console}"
So I'm beginning to use the Java Properties class to set key-value pairs for my project. The way I'm designing the project, there are default properties created from one config file, plus another config file for either overriding or adding properties. The default config file will be in my Eclipse MainFramework project, while the other config file will be in the local project where the tests are stored.
MainFramework
Validation
TestProject1
TestProject2
In this example, MainFramework has the default config file and each TestProject may or may not have its own local config file. Is there a way to get my desired functionality through Java's Properties class?
The Java Properties object is a Hashtable. If you read your main configuration file into a Properties object and then read a second properties file into the same object, it will override the existing properties where keys exist in both places and add new ones where they don't already exist. Properties that are only found in the original file will remain as well.
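A minimal sketch of that approach (the file names are hypothetical):

import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

public class ConfigLoader {
    public static Properties load() throws IOException {
        Properties props = new Properties();
        // 1. Read the framework-wide defaults first.
        try (FileInputStream defaults = new FileInputStream("mainframework-default.properties")) {
            props.load(defaults);
        }
        // 2. Read the project-local file into the same object: keys present in both
        //    files are overwritten, new keys are added, defaults-only keys remain.
        try (FileInputStream local = new FileInputStream("testproject.properties")) {
            props.load(local);
        }
        return props;
    }
}

If the local file is optional, you could check whether it exists before attempting the second load.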
How about using a 3rd party configuration library to achieve this?
Typesafe's config supports the use of properties files and can handle merging a global configuration with a sub-configuration, among many other features.
Apache commons configuration also supports property files as configuration sources and mechanisms for combining different sources.
I personally found Typesafe a bit easier to understand and use, but have a look at some examples to see what fits your style. They are both available through maven.
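As a rough sketch of the Typesafe approach (the file names are made up; withFallback merges the two configs, with the first one winning for duplicate keys):

import java.io.File;
import com.typesafe.config.Config;
import com.typesafe.config.ConfigFactory;

public class AppConfig {
    public static Config load() {
        Config projectLocal = ConfigFactory.parseFile(new File("testproject.properties"));
        Config defaults = ConfigFactory.parseFile(new File("mainframework-default.properties"));
        // Keys defined in the project-local file win; anything missing falls back to the defaults.
        return projectLocal.withFallback(defaults).resolve();
    }
}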
How can I load the configuration information for Hibernate dynamically from a config file? NetBeans currently hard-codes that information into an XML file that is then compiled into the jar. I'm a newbie to Java/NetBeans coming from PHP land and am used to a central bootstrap that pulls from a .ini file or something similar, but NetBeans tends to hardcode this information when it generates the models, etc. I'm looking for conventional methods of setting up configuration for various client machines using various database configurations. I don't want to have to compile the app on each machine it must be installed on.
The configuration file is read using the Configuration class. By default, it uses the hibernate.cfg.xml file found on the classpath, but you can use the configure method that takes a File as a parameter, and store the config file on the file system rather than in the jar.
You can also put the static mapping, which never changes between configs, in a file inside the jar, and put the varying config inside an external file. Look at the javadoc for Configuration to know how to add resources and config files to the configuration.
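A minimal sketch of pointing Hibernate at a config file on disk instead of one on the classpath (the path is just an example):

import java.io.File;
import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

public class HibernateBootstrap {
    public static SessionFactory buildSessionFactory() {
        // configure(File) reads the given file from the filesystem instead of
        // looking up hibernate.cfg.xml on the classpath.
        return new Configuration()
                .configure(new File("/etc/myapp/hibernate.cfg.xml"))
                .buildSessionFactory();
    }
}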