Log rotation in Spring Boot service - java

I am deploying a Spring Boot 2.0.0-RC1 application as an init.d service, but I can't figure out how to configure the log rotation.
The app logs to /var/log/appname.log, but if I configure logrotate, logging stops after a rotation: a new file is created, and the stdout/stderr redirection defined in the embedded launch script no longer works.
If I configure the log rotation in my logging system instead, there are two problems: I can't create the files in /var/log, and I still have the redirection defined in the embedded script.
What is the proper solution for this?

I'm facing the same problem in several applications, and adding the copytruncate parameter is the solution: your Spring Boot application cannot be told that the file has been rotated and simply keeps writing to the same file handle, much like a tail -f command does (see How does the "tail" command's "-f" parameter work? for details).
Example:
/opt/payara41/glassfish/domains/domain1/logs/* {
daily
copytruncate
rotate 3
dateext
notifempty
}
copytruncate
Truncate the original log file to zero size in place after creating a copy, instead of moving the old log file and optionally creating a new one. It can be used when some program cannot be told to close its logfile and thus might continue writing (appending) to the previous log file forever.
Note that there is a very small time slice between copying the file and truncating it, so some logging data might be lost. When this option is used, the create option will have no effect, as the old log file stays in place.
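For the original /var/log/appname.log setup, a minimal logrotate entry along these lines should keep the init.d redirection working; the daily schedule and retention count below are placeholder values, not something from the question:
/var/log/appname.log {
daily
rotate 7
missingok
notifempty
compress
copytruncate
}
Because copytruncate truncates the file in place, the file descriptor held by the launch script's stdout/stderr redirection stays valid across rotations.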

I found the solution: it's the copytruncate option in logrotate.

Related

Server log rotation

I have an application running in Wildfly 8.2.1. In addition to the server.log file in the log directory, my application creates and uses other log files too (also in the log directory). They all end in .log. This is done dynamically and programmatically using org.apache.log4j.FileAppender, since the names, contents, and number of files differ from one client to the next.
What I'd like is for Wildfly to automatically rotate these log files too, in addition to its own (i.e. server.log). I see in standalone.xml there is a periodic-rotating-file-handler tag with a file subtag that has a path attribute. From reading the Wildfly logging documentation, it seems I can't use wildcards here, so path="*.log" won't work? Is this true? If so, how can I achieve the end goal of Wildfly automatically rotating my log files instead of doing it myself?
If you'd like to rotate log files, you need to use a rotating file handler. The periodic-rotating-file-handler will only rotate its own file, not files associated with other file handlers.
Since you seem to be creating a log4j file appender, have a look at org.apache.log4j.RollingFileAppender.
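Since the appenders are created programmatically, a rough sketch of swapping the plain FileAppender for a RollingFileAppender could look like the following; the class name, logger naming scheme, file location, and size/backup values are assumptions, not anything from the question:
import java.io.IOException;

import org.apache.log4j.Logger;
import org.apache.log4j.PatternLayout;
import org.apache.log4j.RollingFileAppender;

public class ClientLogSetup {

    // Creates a size-based rolling appender for one client's log file
    // instead of a plain FileAppender, so log4j rotates the file itself.
    public static void addRollingAppender(String clientName, String logDir) throws IOException {
        PatternLayout layout = new PatternLayout("%d{ISO8601} %-5p [%c] %m%n");

        // Opens (or creates) logDir/clientName.log and appends to it.
        RollingFileAppender appender =
                new RollingFileAppender(layout, logDir + "/" + clientName + ".log", true);

        appender.setMaxFileSize("10MB");   // roll once the file reaches about 10 MB
        appender.setMaxBackupIndex(5);     // keep at most 5 rolled-over files

        // Attach it to a per-client logger, mirroring the existing per-client setup.
        Logger.getLogger("client." + clientName).addAppender(appender);
    }
}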

Log4j not recreate log file after deleting it

My app deployed on Tomcat uses log4j to write a log file. If I delete that file, the app does not recreate it. I also tried to recreate it manually, but it always remains empty. Is there any way to delete the log file (not from the app), create a new one at the same path with the same name, and have the application write to it?
Is there any way to delete the log file (not from the app), create a new one at the same path with the same name, and have the application write to it?
Nope. You need to get the application itself to restart logging.
The problem is that the log4j appender still has a handle for the deleted file, and will continue to write to it ... unaware that it has been deleted.
A better approach would be to have the application itself take care of "rotating" the logfile. Look at the classes that implement the log4j Appender interface for some ideas.
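As one way of doing that, a size-based RollingFileAppender declared in log4j.properties lets log4j roll and recreate the file itself, so nobody needs to delete it by hand; the appender name, path, and limits here are assumptions:
# Sketch of application-side rotation; path and limits are assumptions.
log4j.rootLogger=INFO, FILE
log4j.appender.FILE=org.apache.log4j.RollingFileAppender
log4j.appender.FILE.File=/var/log/tomcat/myapp.log
log4j.appender.FILE.MaxFileSize=10MB
log4j.appender.FILE.MaxBackupIndex=5
log4j.appender.FILE.layout=org.apache.log4j.PatternLayout
log4j.appender.FILE.layout.ConversionPattern=%d{ISO8601} %-5p [%c] %m%n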

More than 20 minutes to start Spring application. How to reduce it?

I'm maintaining an old Spring project and its startup time is very long. The app is running under Tomcat 7 and using Hibernate 4.3 + PostgreSQL 9.5. This is the log from IntelliJ IDEA that is written on app start (uploaded to Pastebin due to SO post length limitations). Notice the time gap between the 3rd and 2nd lines from the bottom. It seems that nothing is happening during that time. I've tried setting all log levels to TRACE but still haven't seen any other output in the log. The question is: how can I reduce the startup time? These are the things I've already tried:
set default-lazy-init=true on the main context configuration file;
set hibernate.hbm2ddl.auto to none on the persistence config;
set hibernate.temp.use_jdbc_metadata_defaults to false on the persistence config.
None of them produced any measurable result. What else can I try? At the very least, how can I understand what is happening during all this time?
I would suggest copying your application's .war file to the webapps folder and letting Tomcat do the deployment; it is quicker and will save you time.
Tomcat supports hot deployment: it monitors the deployment directory for changes, so you can just copy the .war file into that directory and the server will undeploy/redeploy.
You can write a script using ANT to automate the deployment process.
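If you go the ANT route, a minimal sketch of such a script is just a copy task; the war name and the webapps path are assumptions:
<project name="deploy-war" default="deploy" basedir=".">
    <!-- Adjust these two paths to your build output and Tomcat installation. -->
    <property name="war.file" value="target/myapp.war"/>
    <property name="tomcat.webapps" value="/opt/tomcat7/webapps"/>

    <!-- Copy the war into webapps; Tomcat's hot deployment picks it up. -->
    <target name="deploy">
        <copy file="${war.file}" todir="${tomcat.webapps}" overwrite="true"/>
    </target>
</project>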

will multiple weblogic managed nodes with same log4j result in file lock?

My production setup has 1 physical server with 2 WebLogic managed nodes running, deployed with a packaged war file.
The packaged war file contains the log4j configuration file, which specifies the log file to be written to /log/mypath/mylogfile.log.
Will multiple weblogic managed nodes attempting to read/write to the same log file result in file lock/IO issues?
Yes, you will have issues that will prevent the logs from rolling. Adding the server name as a variable in the file name will alleviate this, but will give you two log files instead of one. The log path will look like this:
/log/mypath/mylogfile.${weblogic.Name}.log
I find that if there is too much logging going on, such as using full debug logging to troubleshoot a high-volume production system, we can get stuck threads. I have seen this happen with just one managed server, let alone with several. It might depend on the log4j version, but it has been a recurring problem for us at high log levels.
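In a log4j 1.x properties file the per-server variable can go straight into the File setting, since log4j substitutes system properties such as weblogic.Name when it parses the configuration; the appender name and rolling limits below are assumptions:
log4j.appender.FILE=org.apache.log4j.RollingFileAppender
log4j.appender.FILE.File=/log/mypath/mylogfile.${weblogic.Name}.log
log4j.appender.FILE.MaxFileSize=50MB
log4j.appender.FILE.MaxBackupIndex=10
log4j.appender.FILE.layout=org.apache.log4j.PatternLayout
log4j.appender.FILE.layout.ConversionPattern=%d %-5p [%c] %m%n
With this in place, each managed server writes to its own file, so there is no contention between the two nodes.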

Filter log4j 2.0 messages to separate log files per-webapp

Executive Summary
How do I filter by the servlet in which the log message was invoked? (presently using 2.0 beta8)
Why on earth I would want to do that...
I have several existing web applications. They were written to rely on a proprietary logging system. I have re-implemented a key class from the proprietary system from scratch and deployed it, together with the proprietary system's jar and the log4j 2.0 jars, in Tomcat, thereby using Tomcat's class loading order to divert the proprietary system into log4j. This succeeds and my log4j config now controls everything (Yay!).
But... (There's always a "But"!)
I was very pleased until I discovered that, with all 4 applications deployed in the same container, they were not coordinating their writes to the single log file in the single configuration I had placed in conf/log4j2.xml (and specified by passing -Dlog4j.configurationFile=/mnt/kui/tomcat/conf/log4j2.xml on the command line). I found some log messages with much earlier timestamps (hours earlier) in the middle of the log file. Out-of-order logs (and possibly overwritten log lines?) are of course not desirable.
I actually don't want them all in one file anyway and would prefer a log per application, controlled by a single config file. Initially I thought this would be easy to achieve, since log4j automatically sets up a LoggerContext with the name of the web application.
However, I can't seem to find a filter implementation that will allow me to filter on the LoggerContext. I understand that from each application's perspective there is only one logging context (I think), but the same config file is read by 4 applications, so from the config perspective the LoggerContext is not unique.
I'm looking for a way to route each application to its own file without having a config file for every application, or having to add classes to all the applications or edit war files (including web.xml). I'm sooo... close but it's not working.
Just to complicate matters, there is a jar file we wrote that is shared among all 4 applications that uses this logging too, and one application has converted to using log4j directly in its classes (but it still uses proprietary classes that reference the proprietary logging class that I replaced).
I have already seen http://logging.apache.org/log4j/2.x/manual/logsep.html and my case seems closest to '"Shared" Web Applications and REST Service Containers' but that case doesn't seem very well covered by that page.
You may want to look at the RoutingAppender, which can be used to separate log files based on data in your ThreadContextMap. You could use the web app name as a unique key.
About the out of order logs, there was an issue with FastFileAppender in older betas. If append was false, the old file was not truncated but new log events would start to overwrite the old file from the beginning. (So after your most recent log event you would see yesterday's log events, for example). What version are you using?
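A rough log4j2.xml sketch of that RoutingAppender idea is below. It assumes each web application puts its name into the ThreadContext under a key such as appName (for example from a small servlet filter or context listener), which does mean touching each application; the key name and file paths are assumptions:
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="warn">
  <Appenders>
    <!-- Routes each event by the appName value in the ThreadContext. -->
    <Routing name="PerAppRouting">
      <Routes pattern="$${ctx:appName}">
        <Route>
          <RollingFile name="Rolling-${ctx:appName}"
                       fileName="logs/${ctx:appName}.log"
                       filePattern="logs/${ctx:appName}-%d{yyyy-MM-dd}.log.gz">
            <PatternLayout pattern="%d %-5p [%t] %c - %m%n"/>
            <Policies>
              <TimeBasedTriggeringPolicy/>
            </Policies>
          </RollingFile>
        </Route>
      </Routes>
    </Routing>
  </Appenders>
  <Loggers>
    <Root level="info">
      <AppenderRef ref="PerAppRouting"/>
    </Root>
  </Loggers>
</Configuration>
The double $$ in the Routes pattern defers the lookup until each event is routed, so every application's events end up in their own rolling file.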
