I'm trying to configure logging for my web application (WAR), in log4j.properties:
log4j.rootCategory=WARN, RF
log4j.appender.RF=org.apache.log4j.RollingFileAppender
log4j.appender.RF.File=???/my-app.log
What file path should I specify for my-app.log? Where should I keep this file? Currently I'm deploying my application to Tomcat 6, but who knows what will happen in the future. And who knows how exactly Tomcat will be configured/installed on another machine in the future.
What I finally did is this:
In the continuous integration settings.xml I define a property log.dir
In log4j.properties I define: log4j.appender.RF.File=${log.dir}/my-app.log
In pom.xml I instruct Maven to filter .properties files
That's it. Now I can control the location of my log files on the destination container without any changes to the source code.
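For illustration, a minimal sketch of this setup (the property name log.dir, the profile id and the paths are just placeholders):

In pom.xml, turn on filtering for the resources that contain log4j.properties:

    <build>
      <resources>
        <resource>
          <directory>src/main/resources</directory>
          <filtering>true</filtering>
        </resource>
      </resources>
    </build>

In the settings.xml on the build machine, define the property inside an active profile:

    <profiles>
      <profile>
        <id>ci</id>
        <properties>
          <log.dir>/var/log/my-app</log.dir>
        </properties>
      </profile>
    </profiles>
    <activeProfiles>
      <activeProfile>ci</activeProfile>
    </activeProfiles>

At build time Maven then replaces ${log.dir} in log4j.properties, so the packaged file ends up with a concrete path like /var/log/my-app/my-app.log.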
Logging is part of the deployment configuration, so you really cannot generalize. The configuration depends on the host machine and on other, non-functional requirements of the project.
Generally, in Tomcat I log to ${catalina.home}/logs/myapp.log, but as you can imagine, if I deploy on WebLogic there is no catalina.home, so the log will go to something like c:\logs\myapp.log.
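For example, a sketch assuming log4j 1.x and the standard Tomcat startup scripts (which expose catalina.home as a system property that log4j can substitute):

    log4j.appender.RF=org.apache.log4j.RollingFileAppender
    log4j.appender.RF.File=${catalina.home}/logs/myapp.log
    log4j.appender.RF.MaxFileSize=10MB
    log4j.appender.RF.MaxBackupIndex=5
    log4j.appender.RF.layout=org.apache.log4j.PatternLayout
    log4j.appender.RF.layout.ConversionPattern=%d %-5p [%t] %c - %m%n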
I agree with @cherouvim. In general, you should put the log file outside of the webapp, and preferably in the same place that the container puts its log files.
You don't want to put them in the webapp tree, because they will get clobbered if your webapp is redeployed.
What file path should I specify for my-app.log? Where should I keep this file?
If the question is about your personal machine, it doesn't really matter. Put them where it's handy for you (e.g. next to the server logs).
If the question is about a development, IST, UAT, etc. environment, logs should typically be written to a separate/dedicated partition. But you should ask this question of the sysadmins; many companies have operational standards and standardized layouts.
Currently I'm deploying my application to Tomcat 6, but who knows what will happen in the future.
This is a shot in the dark, but here is a normalized path I've used in the past: /var/log/tomcat/<PROJECTNAME>/myApp-<instance-#>.log.
And because I'm not better than you at fortune-telling, yeah, who knows what will happen in the future :)
And who knows how exactly Tomcat will be configured/installed on another machine in the future.
That's the beauty of a configuration file, you can configure it as required... and even change it :)
Related
There are multiple applications deployed on my Tomcat server.
At first, every application had its own logback.xml file packaged with it in WEB-INF/classes.
Then I put another directory, outside Tomcat's deploy directory, on the common classpath, placed a single logback.xml there and removed the individual ones from the applications. The reason for that was that I wanted logging to be conveniently configurable in one place.
Unfortunately, there is now a requirement to log every application to its own file.
Since I think that this is not so easy to achieve with this setup, I'm wondering whether this setup is any good at all. What do you think?
Unfortunately, there is now a requirement to log every application to its own file.
I think that this is the only correct way to do it. It is OK to have several log files for a single application, but having many applications write to the same log file is bad practice.
What you want, in order to keep a single configuration file, is to use a SiftingAppender.
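A rough sketch of what that could look like in the shared logback.xml; the discriminator key appName is an assumption, and each webapp would need to put its name into the MDC (or you could discriminate on a context property instead):

    <configuration>
      <appender name="SIFT" class="ch.qos.logback.classic.sift.SiftingAppender">
        <discriminator>
          <key>appName</key>
          <defaultValue>unknown</defaultValue>
        </discriminator>
        <sift>
          <appender name="FILE-${appName}" class="ch.qos.logback.core.FileAppender">
            <file>${catalina.home}/logs/${appName}.log</file>
            <encoder>
              <pattern>%d %-5level [%thread] %logger{36} - %msg%n</pattern>
            </encoder>
          </appender>
        </sift>
      </appender>
      <root level="INFO">
        <appender-ref ref="SIFT" />
      </root>
    </configuration>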
Logs need to be easy to read and easy to parse by any user. If multiple applications write to the same log file, the various log entries get jumbled together. Since you are the developer with knowledge of all 7 applications you might be able to follow it, but a new developer will have a difficult time understanding the logs. Logs should be concise and easy to decipher so that support issues can be analyzed just by reading the log entries.
I would suggest you follow these tips
Massively messed up production issue:
I have inherited a massive (1 million line code base) web application that my predecessors botched up completely.
They thought it would be a wonderful idea to just add the WEB-INF/classes directory to the system classpath in the startWebLogic script, instead of properly packaging up the application in an EAR or WAR file, and to manually point all the paths in the console to the various non-standard locations they conceived of themselves.
Now my problem is that I have to install another application, as a proper WAR file, that uses classes with the same packages and names, just even older code, into the same WebLogic 10.3.6 instances. But as you can imagine, the stuff that is hacked into the system classpath takes precedence over everything in the additional webapp, even with the prefer-web-inf-classes preference set in the weblogic.xml file.
Notes:
Repackaging the offending application is not an option on my timeline; it is going to be done, just not in the timeframe I have to meet. Running on other instances of WebLogic isn't in my timeline either; I don't have the time to go through the provisioning process to get the assets in time.
Given this, how can I get this additional webapp to play nicely and deploy in the same WebLogic instance as the one that is hacked into the system classpath?
If someone can give me an answer that solves this issue, I will make sure to put a massive bounty on this when I am able to, and award it to you after the fact. The sooner the answer, the bigger the bounty will be!
Did you try prefer-application-packages within the weblogic-application.xml as well?
This is the mechanism that WebLogic calls the FilteringClassLoader; here are the links:
http://docs.oracle.com/cd/E15051_01/wls/docs103/programming/classloading.html#wp1097187
http://hasamali.blogspot.in/2011/08/weblogic-identifying-class-conflict-and.html
http://atheek.wordpress.com/2011/12/20/weblogic-filtering-classloaders/
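A sketch of such a filtering entry in weblogic-application.xml; the package com.example.shared is only a placeholder for whichever packages actually clash:

    <?xml version="1.0" encoding="UTF-8"?>
    <weblogic-application xmlns="http://xmlns.oracle.com/weblogic/weblogic-application">
      <prefer-application-packages>
        <package-name>com.example.shared.*</package-name>
      </prefer-application-packages>
    </weblogic-application>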
I have a webapp in a war archive which is deployed on cloudfoundry.
One of the libraries ("somelib.jar") used by the app is made by another developer.
I would like a way for him to upload several different versions of somelib.jar and test the behaviour of the app.
I have managed to get the jar uploaded to WEB-INF/lib directory of the deployment. I have also managed to unpack the jar into WEB-INF/classes. However, I have not managed to get the new version of the jar to be used. I tried various hacks such as those described in this question and this question without any luck.
Every time, the classes/jars that get loaded the first time are used from then on, even if we replace the actual .class or .jar file in the above directories.
Is there any easy way to achieve what I want?
Note: since I don't have control over Tomcat (where it runs), I cannot configure Tomcat or make any changes to the server. I just have control over my WAR file, so everything needs to be done programmatically.
EDIT: the reason I want this is to reduce our testing time. Currently someone gives me a new version of somelib.jar, I repackage it into my application, upload to CF, send him a notification, then he tests the behavior of the new jar. What I would have preferred is that he upload his jar directly to CF and do the testing whenever he has a new version without the unnecessary intermediate delay.
In Tomcat 7, you can version your WAR file and the new versions will gradually kick in.
http://www.tomcatexpert.com/blog/2011/05/31/parallel-deployment-tomcat-7
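Roughly, parallel deployment works by appending a version to the WAR file name while keeping the same context path, for example (myapp is just a placeholder name):

    webapps/myapp##001.war
    webapps/myapp##002.war

Existing sessions keep using the old version while new sessions are routed to the new one.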
In order for you to control the application server yourself, you would need to deploy a standalone app into Cloud Foundry.
This blog should help you out with that:
http://blog.cloudfoundry.com/2012/05/11/running-standalone-web-applications-on-cloud-foundry/
This way you can custom-configure your Tomcat.
Every time, the classes/jars that get loaded the first time are used from then on, even if we replace the actual .class or .jar file in the above directories
That's the way that normal Tomcat (Java EE) classloading works. Your classes are loaded when first deployed, and any changes will be ignored (JSPs are managed slightly differently, but only in a development environment).
You should be able to solve this problem by using the Equinox OSGi bridge servlet. I haven't done this myself, but here's a writeup by a person that I respect.
We have developed a web-based application in Java (Struts 2.0). Now we want to deploy the application. The client has a pre-UAT environment, a UAT environment and a production environment.
For the pre-UAT deployment we have created a copy of our project and renamed it to pre-UAT. Similarly we are planning one for the UAT environment, and one we already have for development. So in all we will be having 3 copies of our code.
I want to ask: is this approach correct, or what is the standard approach to follow? This is not our final release, as we are first releasing one version and will then work on the other modules.
So please, can anyone guide me on the approach to follow for creating these 3 different environments? Thanks in advance.
I am not sure what you refer to by "we will be having 3 copies of our code". If you are implying that you actually copied the code-base around multiple times, please stop reading and refer to this:
Why is "copy and paste" of code dangerous?
And once you finish reading, do some research about source control and how to use branching/tagging for concurrent development.
If you were referring to multi-environment deployment:
Assuming your application is designed correctly (and I'm treading very carefully here), one WAR file (you were mentioning you're using Tomcat, so I am concluding that your application is packaged as a WAR) should be sufficient. The application code should be environment-independent and should read its environment-specific configuration from external resources, such as a database, configuration files or JNDI.
If your application code is environment-independent, then all you need to do is simply deploy the WAR file to each of the environments (the same WAR file), plus the environment-specific set of external artifacts (such as configuration files).
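As a minimal sketch of that idea (the system property name app.config and the default path are assumptions; a JNDI entry or a database table would work just as well):

    import java.io.FileInputStream;
    import java.io.IOException;
    import java.util.Properties;

    public class AppConfig {

        private final Properties props = new Properties();

        public AppConfig() throws IOException {
            // Each environment (pre-UAT, UAT, production) starts the container with
            // -Dapp.config=/path/to/its/own/app.properties; the WAR itself never changes.
            String path = System.getProperty("app.config", "/etc/myapp/app.properties");
            try (FileInputStream in = new FileInputStream(path)) {
                props.load(in);
            }
        }

        public String get(String key) {
            return props.getProperty(key);
        }
    }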
In my source code, I'd like to get programmatically, the last modified date of the current EAR from which my code is deployed.
I'm using Oracle WebLogic.
How could I do that?
Thx for your answers
I'd suggest stepping back and looking at the problem you're trying to solve, Eric.
Do you want to know when the application was built, or which particular version of the application you've got deployed? If that's the case, you're probably best served by incorporating something into the build process to set this: ideally, a manifest of the specific component versions used to package up your application.
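For example, if you build with Maven, something along these lines could stamp the build time into the EAR's MANIFEST.MF (a sketch; the entry name Build-Time is your own choice):

    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-ear-plugin</artifactId>
      <configuration>
        <archive>
          <manifestEntries>
            <Build-Time>${maven.build.timestamp}</Build-Time>
            <Implementation-Version>${project.version}</Implementation-Version>
          </manifestEntries>
        </archive>
      </configuration>
    </plugin>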
If you want to know when the application was first deployed by an administrator, or most recently deployed that gets more tricky. Relying on the filesystem to solve this problem is a bad idea because you're at the mercy of whatever WebLogic Server is doing, which is admittedly more than a bit opaque.
If you absolutely need to do this, WebLogic Server's standard staging behaviour puts a version of the file in a particular subdirectory on each server instance, then very quickly pulls it apart (it's the 'servers/<server-name>/stage' subdirectory underneath the root directory of the domain, $DOMAIN_HOME). $DOMAIN_HOME is the current directory for all server processes at runtime, so the relative path should work fine.
That should give you the time that file was deployed across the network, but you'd definitely want to test the observed behaviour from rebooting your server instance.
The problem with that is that it doesn't give you anything you couldn't determine more elegantly via either the build process, or WLST scripting around the deployment process.
If it's the last time the application itself was deployed (regardless of the version) then application lifecycle event listeners are definitely the best way to go. Unfortunately there's no MBean that gives you the uptime of an individual application.
There's a great reference on lifecycle listeners here:
http://download.oracle.com/docs/cd/E17904_01/web.1111/e13712/app_events.htm#i178290
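A bare-bones sketch of such a listener (the class name is made up; it would be registered in weblogic-application.xml via a listener element):

    import weblogic.application.ApplicationLifecycleEvent;
    import weblogic.application.ApplicationLifecycleListener;

    public class DeployTimeListener extends ApplicationLifecycleListener {

        private static volatile long deployedAt;

        @Override
        public void postStart(ApplicationLifecycleEvent evt) {
            // Invoked once the application has finished (re)deploying and starting.
            deployedAt = System.currentTimeMillis();
        }

        public static long getDeployedAt() {
            return deployedAt;
        }
    }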
You could either check the file properties or see inside the MANIFEST.MF present inside the EAR.
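If you go the MANIFEST.MF route, a rough sketch for reading a custom attribute (here Build-Time, assuming something stamped it in at build time; note that the classloader may see manifests from several archives, so you may need to be more selective about which URL you open):

    import java.io.IOException;
    import java.io.InputStream;
    import java.net.URL;
    import java.util.jar.Manifest;

    public class BuildInfo {

        public static String readBuildTime() throws IOException {
            URL url = BuildInfo.class.getClassLoader().getResource("META-INF/MANIFEST.MF");
            if (url == null) {
                return null;
            }
            try (InputStream in = url.openStream()) {
                Manifest mf = new Manifest(in);
                return mf.getMainAttributes().getValue("Build-Time");
            }
        }
    }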