Log4j different .property files for Appenders - java

I was wondering if there is a way to define the appenders (file, console, etc.) in a different file from the one that defines the actual logging properties.
The idea comes from a system I am developing, where we have the following requirement:
Different versions of the system will be deployed on the same server. We don't want to maintain several log4j properties files that all set the same properties and differ only in their file appenders (which we need so we can tell which version of the system produced which log).
Thank you in advance

You can use DOMConfigurator or PropertyConfigurator to load your log4j settings from an external file. You can invoke either API several times during a run to load settings from different sources.
In your case, you can load the appender definitions alone from another properties file chosen by version: for example, suffix a version id to the file name and load it from your code in a generic way.
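As a sketch of that idea (the file names here are hypothetical), using log4j 1.x's PropertyConfigurator; successive configure() calls add to the existing configuration rather than resetting it, so shared settings and per-version appenders can live in separate files:

```java
import org.apache.log4j.PropertyConfigurator;

public class LogSetup {
    public static void init(String version) {
        // Shared settings (levels, layouts) live in one common file...
        PropertyConfigurator.configure("log4j-common.properties");
        // ...and only the appender definitions differ per version.
        // e.g. version "1.0.1" -> log4j-appenders-1.0.1.properties
        PropertyConfigurator.configure("log4j-appenders-" + version + ".properties");
    }
}
```

Calling LogSetup.init(version) once at startup, before the first logger is used, keeps a single shared file plus one small appender file per deployed version.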

If each version is running in a different VM process (on different ports), you can add an argument to the virtual machine, e.g.:
-Dmysystem.version=1.0.1
If you are using the XML configuration:
<param name="file" value="/logs/system.v${mysystem.version}.log" />
Or if you are using the properties format:
log4j.appender.ROLL.File=/logs/system.v${mysystem.version}.log
In both cases, the generated file might be:
/logs/system.v1.0.1.log
This way, you can maintain a single configuration file with dynamic file names.

Related

Log4j2 read properties from an external property file

I have an application that I have to deliver as a packaged JAR, which is then run by the client in a complicated environment where editing environment variables or JVM arguments is very cumbersome (additionally, the client is not too technical). Currently we are using some external property files for configuring the database and so on, and this has been going well so far.
I would like to allow the client to configure some aspects of Log4j2 using these property files. I can see that Log4j2 allows multiple ways of performing property substitution: https://logging.apache.org/log4j/log4j-2.1/manual/configuration.html#PropertySubstitution
I can also see that it is possible to load property bundles, but if I understand the docs correctly these bundles need to be loaded from the classpath, and I have not stumbled upon a way to specify the properties file by a direct path (such as "load the properties file /tmp/myconfig.properties").
So ultimately: is it possible to use variables from an external .properties file that is NOT on the classpath but at a specified filesystem location? Or does some other way exist to load this data from an external file? (I already noted that using environment variables or JVM arguments is out of the question in my case.)
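One workaround (a sketch, not a built-in Log4j2 feature; class and key names below are hypothetical) is to read the external file yourself at startup and publish its entries as system properties before the first logger is obtained. The ${sys:...} substitution in log4j2.xml can then pick them up, e.g. ${sys:log.level}:

```java
import java.io.IOException;
import java.io.Reader;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Properties;

public class ExternalLogProps {
    /**
     * Copies key=value pairs from an external file into system properties,
     * prefixed with "log." so log4j2.xml can reference them as ${sys:log.<key>}.
     * Returns the number of properties loaded.
     */
    static int loadIntoSystemProps(Path file) throws IOException {
        Properties props = new Properties();
        try (Reader in = Files.newBufferedReader(file)) {
            props.load(in);
        }
        for (String name : props.stringPropertyNames()) {
            System.setProperty("log." + name, props.getProperty(name));
        }
        return props.stringPropertyNames().size();
    }

    public static void main(String[] args) throws IOException {
        // Must run before the first LogManager.getLogger(...) call, so that
        // Log4j2 sees the values when it initializes its configuration.
        Path external = Path.of("/tmp/myconfig.properties"); // path from the question
        if (Files.exists(external)) {
            loadIntoSystemProps(external);
        }
    }
}
```

The important constraint is ordering: the copy must happen before Log4j2 initializes, so this belongs at the very top of main (or in a static initializer of the main class).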

Add and Replace placeholder in XML from property files

I am developing a web application which will be deployed in different environments like Dev, Test and Production. Some of the configurations will be specific to environments and so I need to change the configurations in the property files with respect to the environments where the project is deployed.
I added an environment variable on the machine and read it using
System.getenv("Environment")
which returns a value like dev, test, or prod, and I load the respective property file (the file names are dev-config.properties, test-config.properties, and so on).
I am using the PicketLink libraries in JBoss AS to integrate the application with SSO/SAML. One of the configuration steps is to add a file named picketlink-idfed.xml, in which I specify the IDP URL and the application URL. Both URLs differ between environments. I cannot have different file names like dev-picketlink-idfed.xml, so I cannot follow the above approach of using different files.
The question is: I need to use the same file, picketlink-idfed.xml, adding placeholders and replacing them with the respective values from the property files. Below is sample content from picketlink-idfed.xml.
<IdentityURL>https://idfed.test.com.au/adfs/ls/idpinitiatedsignon.aspx?LoginToRP=https%3A%2F%2Fapp.test.com.au%2Fdisplays%2FHome.action
</IdentityURL>
<ServiceURL>https://app.test.com.au/displays/Home.action
</ServiceURL>
In the lines above, the values of the IdentityURL and ServiceURL tags change per environment. Can you please tell me how to address this?
Note: for the time being, I am replacing the contents on every deployment, which is not good practice. Also, I am not using Maven or Ant build files, and cannot adopt them due to insufficient time.
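One Maven/Ant-free approach is a small filter run at startup or deployment: keep a template copy of picketlink-idfed.xml containing ${...} placeholders and expand them from the env-specific properties file. A sketch, with hypothetical file and key names:

```java
import java.io.IOException;
import java.io.Reader;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Properties;

public class PlaceholderFilter {
    /** Replaces every ${key} token in the template with the value from props. */
    static String expand(String template, Properties props) {
        String out = template;
        for (String key : props.stringPropertyNames()) {
            out = out.replace("${" + key + "}", props.getProperty(key));
        }
        return out;
    }

    public static void main(String[] args) throws IOException {
        // The template ships with placeholders such as ${identity.url};
        // the values come from dev-config.properties / test-config.properties etc.
        String env = System.getenv().getOrDefault("Environment", "dev");
        Path propsFile = Path.of(env + "-config.properties");
        Path template = Path.of("picketlink-idfed-template.xml");
        if (Files.exists(propsFile) && Files.exists(template)) {
            Properties props = new Properties();
            try (Reader in = Files.newBufferedReader(propsFile)) {
                props.load(in);
            }
            Files.writeString(Path.of("picketlink-idfed.xml"),
                    expand(Files.readString(template), props));
        }
    }
}
```

Running this once per deployment (or from a servlet context listener before PicketLink initializes) replaces the current manual editing step.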

Custom log4j appender in Hadoop 2

How to specify custom log4j appender in Hadoop 2 (amazon emr)?
Hadoop 2 ignores my log4j.properties file containing a custom appender, overriding it with its internal log4j.properties file. There is a flag, -Dhadoop.root.logger, that specifies the logging threshold, but it does not help with a custom appender.
I know this question has been answered already, but there is a better way of doing this, and this information isn't easily available anywhere. There are actually at least two log4j.properties that get used in Hadoop (at least for YARN). I'm using Cloudera, but it will be similar for other distributions.
Local properties file
Location: /etc/hadoop/conf/log4j.properties (on the client machines)
This is the log4j.properties that gets used by the normal Java process.
It affects the logging of all the stuff that happens in the java process but not inside of YARN/Map Reduce. So all your driver code, anything that plugs map reduce jobs together, (e.g., cascading initialization messages) will log according to the rules you specify here. This is almost never the logging properties file you care about.
As you'd expect, this file is parsed after invoking the hadoop command, so you don't need to restart any services when you update your configuration.
If this file exists, it will take priority over the one sitting in your jar (because it's usually earlier in the classpath). If this file doesn't exist the one in your jar will be used.
Container properties file
Location: etc/hadoop/conf/container-log4j.properties (on the data node machines)
This file decides the properties of the output from all the map and reduce tasks, and is nearly always what you want to change when you're talking about hadoop logging.
In newer versions of Hadoop/YARN someone caught a dangerously virulent strain of logging fever, and the default logging configuration now ensures that single jobs can generate several hundred megs of unreadable junk, making your logs quite hard to read. I'd suggest putting something like this at the bottom of the container-log4j.properties file to get rid of most of the extremely helpful messages about how many bytes have been processed:
log4j.logger.org.apache.hadoop.mapreduce=WARN
log4j.logger.org.apache.hadoop.mapred=WARN
log4j.logger.org.apache.hadoop.yarn=WARN
log4j.logger.org.apache.hadoop.hive=WARN
log4j.security.logger=WARN
By default this file usually doesn't exist, in which case the copy of it found in hadoop-yarn-server-nodemanager-*.jar (as mentioned by uriah kremer) will be used. However, as with the other log4j.properties file, if you do create /etc/hadoop/conf/container-log4j.properties it will be used for all your YARN stuff. Which is good!
Note: no matter what you do, a copy of container-log4j.properties in your jar will not be used for these properties, because the YARN nodemanager jars are higher in the classpath. Similarly, despite what the internet tells you, -Dlog4j.configuration=PATH_TO_FILE will not alter your container logging properties, because the option doesn't get passed on to YARN when the container is initialized.
1. In order to change log4j.properties on the name node, you can change /home/hadoop/log4j.properties.
2. In order to change log4j.properties for the container logs, you need to change it inside the YARN containers jar, since loading the file directly from the project resources is hard-coded.
2.1 SSH to the slave (on EMR you can also simply add this as a bootstrap action, so you don't need to SSH to each of the nodes):
ssh to hadoop slave
2.2 Override container-log4j.properties in the jar resources:
jar uf /home/hadoop/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.2.0.jar container-log4j.properties
Look for hadoop-config.sh in the deployment. That is the script sourced before the hadoop command executes. I see the following code in hadoop-config.sh; see if modifying it helps.
HADOOP_OPTS="$HADOOP_OPTS -Dhadoop.root.logger=${HADOOP_ROOT_LOGGER:-INFO,console}"

logback xml configuration for web application

I am using logback (via SLF4J) for logging debug/error statements. Could you please let me know how to use a single logback.xml configuration file for multiple environments (dev/qa/prod)? Right now, I am editing the XML file for each environment to specify the db name. I appreciate your help.
A couple of options (most of them documented here):
Use properties in the log configuration which are set externally (either java properties or OS environment variables)
Use JNDI settings (creating db datasources is pretty common)
Generate a logback.xml file as part of the deployment process
JMX configurator which allows you to reload the configuration from a named file
Package a WAR file per environment (I don't really recommend this; it's included for completeness)
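For the first option, a minimal logback.xml sketch (the variable name dbname is hypothetical): logback resolves ${dbname} from a Java system property (-Ddbname=proddb) or an OS environment variable, and the :- syntax supplies a default when neither is set:

```xml
<configuration>
  <!-- "dbname" comes from -Ddbname=... or the OS environment;
       ":-devdb" is the fallback when neither is set -->
  <appender name="FILE" class="ch.qos.logback.core.FileAppender">
    <file>/logs/${dbname:-devdb}.log</file>
    <encoder>
      <pattern>%d %-5level %logger - %msg%n</pattern>
    </encoder>
  </appender>
  <root level="DEBUG">
    <appender-ref ref="FILE"/>
  </root>
</configuration>
```

The same logback.xml then ships unchanged to dev/qa/prod, with only the launch command differing per environment.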

netbeans and hibernate configuration

How can I load the configuration information for Hibernate dynamically from a config file? NetBeans currently hard-codes that information into an XML file that is then compiled into the jar. I'm a newbie to Java/NetBeans coming from PHP land, and I'm used to a central bootstrap that pulls from a .ini or something similar, but NetBeans tends to hard-code this information in an XML file when it generates the models etc., and that file is then compiled into the jar. I'm looking for conventional ways to set up configuration for various client machines using various database configurations. I don't want to have to compile the app on each machine it must be installed on.
The configuration file is read using the Configuration class. By default, it uses the hibernate.cfg.xml file found in the classpath, but you can use the configure method taking a file as parameter, and store the config file on the file system rather than in the jar.
You can also put the static mapping, which never changes between configs, in a file inside the jar, and put the varying config inside an external file. Look at the javadoc for Configuration to know how to add resources and config files to the configuration.
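A sketch of that approach (the external path is hypothetical), using Configuration.configure(File), which Hibernate provides alongside the no-argument classpath variant:

```java
import java.io.File;
import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

public class HibernateBootstrap {
    public static SessionFactory build() {
        // Hypothetical install location; point it wherever the client keeps config.
        File external = new File("/etc/myapp/hibernate.cfg.xml");
        Configuration cfg = new Configuration();
        if (external.exists()) {
            cfg.configure(external);   // external file on the filesystem wins...
        } else {
            cfg.configure();           // ...else fall back to the bundled hibernate.cfg.xml
        }
        return cfg.buildSessionFactory();
    }
}
```

With this, each client machine edits /etc/myapp/hibernate.cfg.xml rather than recompiling the jar, and developer machines without the file still work off the bundled copy.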
