How to roll a log file programmatically in Logback - java

According to the Logback manual (http://logback.qos.ch/manual/appenders.html, under RollingFileAppender),
it seems to me that Logback provides only limited control over when a log file is rolled.
Based on time, it can roll a log file only at fixed intervals, such as once per hour or
once per minute. It does not mention how to roll a log file programmatically, which is what I
need: some way to let users decide when to roll the log file, so the file can be used
later by the user.
I did some research using Google but found nothing.
Could you please tell me how to roll a log file programmatically?
Thanks in advance.
Edit: At the very least, I need some way to specify an interval, like rolling a log file once every ten minutes.

I suggest making your own implementation of TriggeringPolicy. Your implementation can check a global variable set by the user; then configure Logback with your class.
I'm not sure about your "Edit:" — that sounds like a standard TimeBasedRollingPolicy configuration.
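A minimal sketch of that idea (class and method names are hypothetical; the policy would be wired into the RollingFileAppender via the triggeringPolicy element in logback.xml):

```java
import java.io.File;
import ch.qos.logback.core.rolling.TriggeringPolicyBase;

// Hypothetical policy: rolls the file the next time a logging event arrives
// after application code has requested a rollover.
public class OnDemandTriggeringPolicy<E> extends TriggeringPolicyBase<E> {

    private static volatile boolean rollRequested = false;

    // Called by application code (e.g. from a "roll now" action) to request a roll.
    public static void requestRollover() {
        rollRequested = true;
    }

    @Override
    public boolean isTriggeringEvent(File activeFile, E event) {
        if (rollRequested) {
            rollRequested = false; // consume the request
            return true;           // tells the appender to roll over now
        }
        return false;
    }
}
```

Note that with this approach the actual rollover still happens only when the next event is appended, since Logback consults the policy per event.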

Within logback.groovy (at app start) one could, for example, roll over with arbitrarily complex Groovy code based on this idea:
appender('FILE', RollingFileAppender) {
    ...
    if (myConditionTrue)
        component.rollover() // roll over directly on app start / logback.groovy load
}
If one wanted to roll over periodically based on some checked condition, one could do it with scan(interval):
appender(...) {
    ...
    if (myConditionTrue)
        component.rollover()
}
scan('30 seconds') // reparse logback.groovy every 30 s if the setup has changed
Some other nice things mentioned there:
SiftingAppender/MDC usage based on a discriminator id: MDC.put('id', '<some-id>'), then log.info(ClassicConstants.FINALIZE_SESSION_MARKER)
calling rollover directly from within the running app, e.g. log.getAppender('FILE').rollover()
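That last point can be written out in plain Java as well. A sketch (the appender name "FILE" is whatever your configuration declares, and the appender is assumed to be attached to the root logger):

```java
import ch.qos.logback.classic.LoggerContext;
import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.core.Appender;
import ch.qos.logback.core.rolling.RollingFileAppender;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public final class LogRoller {

    // Triggers an immediate rollover of the named RollingFileAppender.
    // Returns false if the appender is missing or not a rolling appender.
    public static boolean rollover(String appenderName) {
        if (!(LoggerFactory.getILoggerFactory() instanceof LoggerContext)) {
            return false; // Logback is not the active slf4j binding
        }
        LoggerContext ctx = (LoggerContext) LoggerFactory.getILoggerFactory();
        Appender<ILoggingEvent> appender =
                ctx.getLogger(Logger.ROOT_LOGGER_NAME).getAppender(appenderName);
        if (appender instanceof RollingFileAppender) {
            ((RollingFileAppender<ILoggingEvent>) appender).rollover();
            return true;
        }
        return false;
    }
}
```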


Mute Stanford coreNLP logging

First of all, Java is not my usual language, so I'm quite basic at it. I need to use it for this particular project, so please be patient; if I have omitted any relevant information, please ask for it and I will be happy to provide it.
I have been able to implement CoreNLP and seemingly have it working right, but it generates lots of messages like:
ene 20, 2017 10:38:42 AM edu.stanford.nlp.process.PTBLexer next
ADVERTENCIA: Untokenizable: 【 (U+3010, decimal: 12304)
After some research (documentation, Google, other threads here), I think (sorry, I don't know how to tell for sure) CoreNLP is finding slf4j-api.jar on my classpath and logging through it.
Which JVM properties can I use to set the logging level of the messages that will be printed?
Also, in which .properties file could I set them? (I already have a commons-logging.properties, a simplelog.properties and a StanfordCoreNLP.properties in my project's resources folder to set properties for other packages.)
Om's answer is good, but here are two other possibly useful approaches:
If it is just these warnings from the tokenizer that are annoying you, you can (in code or in StanfordCoreNLP.properties) set a property so they disappear: props.setProperty("tokenize.options", "untokenizable=noneKeep");.
If slf4j is on the classpath, then, by default, our own Redwood logger will indeed log through slf4j, so you can also set the logging level using slf4j.
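For example, assuming Logback is the slf4j backend in use, a logger entry in logback.xml (the level shown is just an illustration) would raise the threshold for all CoreNLP classes:

```xml
<configuration>
    <!-- Silence Stanford CoreNLP below ERROR -->
    <logger name="edu.stanford.nlp" level="ERROR"/>
</configuration>
```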
If I understand your problem, you want to disable all Stanford NLP logging messages while the program is executing.
You can disable them. Stanford NLP uses the Redwood logging framework. First clear Redwood's default configuration (which displays log messages), then create the StanfordCoreNLP pipeline:
import edu.stanford.nlp.util.logging.RedwoodConfiguration;
RedwoodConfiguration.current().clear().apply();
StanfordCoreNLP pipeline = new StanfordCoreNLP(props);
Hope it helps.
In accordance with Christopher Manning's suggestion, I followed this link:
How to configure slf4j-simple
I created a file src/simplelogger.properties with the line org.slf4j.simpleLogger.defaultLogLevel=warn.
I was able to solve it by setting a blank output stream as the error stream while the annotators load. Note that you must keep a reference to the original stream first, otherwise it cannot be restored afterwards:
PrintStream originalErr = System.err; // keep the original to restore later
System.setErr(new PrintStream(new BlankOutputStream())); // silence stderr
// ... Add annotators ...
System.setErr(originalErr); // reset to the original stream
The accompanying class is:
import java.io.IOException;
import java.io.OutputStream;

public class BlankOutputStream extends OutputStream {
    @Override
    public void write(int b) throws IOException {
        // Discard everything
    }
}
Om's answer disables all logging. However, if you wish to keep logging errors, use:
RedwoodConfiguration.errorLevel().apply();
I also use JDK logging instead of slf4j logging, to avoid loading the slf4j dependencies:
RedwoodConfiguration.javaUtilLogging().apply();
Both options can be used together, in any order. The required import is:
import edu.stanford.nlp.util.logging.RedwoodConfiguration;

How to configure log4j to only keep log files for the last n days?

My name is Luis Ribeiro and I am trying to configure log4j so that it deletes the older rotated logs.
The solution we have for now is to use cron with a script,
for example as in: How to configure log4j to only keep log files for the last seven days?
But there are some major problems with that:
We work with hundreds of machines (n)
We work with many crons on many machines (n * m)
We work with different directory structures and OSes (n * m * z)
Cron deletes even when the application is stopped, so information is lost
Ideally, when the application runs, log4j would take complete care of the log rotation.
It should rotate once a day: RollingFile with filePattern="logs/$(unknown).[ %d{yyyy-MM-dd} | -%i | any type of counter ].log.gz" and a TimeBasedTriggeringPolicy
The current log and n rotated files are kept; older ones are deleted: app.log, app.{-1 day}.log.gz, ..., app.{-n days}.log.gz
The pattern name isn't important; it can be a number in the file name
We cannot use size as the trigger: we don't know how much the program will log during the day, and log size varies very, very much
It should be structure- and OS-independent. We prefer enhancing the log4j properties or XML file to adding scripts and cron triggers.
I tried to use DefaultRolloverStrategy with TimeBasedTriggeringPolicy, but there are problems:
filePattern=$(unknown).%d{yyyy-MM-dd}-%i.log.gz results in: app.log, app.{-1 day}-1.log.gz, app.{-2 day}-1.log.gz, ..., app.{-(n + 1) days}-1.log.gz, ... => old files are never deleted
filePattern=$(unknown)-%i.log.gz results in java.lang.IllegalStateException: Pattern does not contain a date
Is there any way to enhance log4j so it will take care of all these tasks?
With best regards,
Luis
Because DailyRollingFileAppender does not have a MaxBackupIndex attribute, you have to remove old logs yourself.
Alternatively, you can set up a crontab entry for housecleaning, e.g.:
find /path/to/logs -type f -mtime +dayToKeep -exec rm -f {} \;
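If upgrading is an option: since Log4j 2.5, DefaultRolloverStrategy can embed a Delete action that removes old archives at rollover time, which covers the "keep only the last n days" requirement without cron. A sketch (file names, paths and the glob are assumptions to adapt to your layout):

```xml
<RollingFile name="File" fileName="logs/app.log"
             filePattern="logs/app.%d{yyyy-MM-dd}.log.gz">
    <PatternLayout pattern="%d %p %c{1.} [%t] %m%n"/>
    <Policies>
        <TimeBasedTriggeringPolicy/>
    </Policies>
    <DefaultRolloverStrategy>
        <!-- At each rollover, delete archives older than 7 days -->
        <Delete basePath="logs" maxDepth="1">
            <IfFileName glob="app.*.log.gz"/>
            <IfLastModified age="7d"/>
        </Delete>
    </DefaultRolloverStrategy>
</RollingFile>
```

Since the deletion runs inside the application at rollover time, it is OS-independent and stops when the application stops, which addresses the cron drawbacks listed above.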

vertx LoggerHandler not adding logback

I am trying to use LoggerHandler to log all incoming requests. I am using logback.xml to specify appenders, and I am setting the system property for logging:
System.setProperty("org.vertx.logger-delegate-factory-class-name",
        "org.vertx.java.core.logging.impl.SLF4JLogDelegateFactory");
Still, it logs everything to the console, not to the file.
This worked for me with Vert.x 3.4.1:
import static io.vertx.core.logging.LoggerFactory.LOGGER_DELEGATE_FACTORY_CLASS_NAME;
import io.vertx.core.logging.LoggerFactory;
// ...
System.setProperty(LOGGER_DELEGATE_FACTORY_CLASS_NAME, SLF4JLogDelegateFactory.class.getName());
LoggerFactory.getLogger(LoggerFactory.class); // Required for Logback to work in Vert.x
The key is to get a logger, which I guess initializes the logging subsystem; the class you use to get the Logger seems irrelevant, as I tried with two different ones.
I run these lines first in the program, both in production code and in the tests, so it works properly in both contexts.
I was able to get it to work by setting the VM options as such:
-Dvertx.logger-delegate-factory-class-name=io.vertx.core.logging.Log4jLogDelegateFactory
Then in my log4j.properties, I had to add this:
log4j.category.io.vertx = TRACE
I know this question is getting a bit old, but the only way I was able to get the Vert.x LoggerHandler to not use JUL was to call LoggerFactory.initialise() after setting the system property as described in the question.
Even better, I set the property in my build.gradle, like so:
run {
    systemProperty(
        "vertx.logger-delegate-factory-class-name",
        "io.vertx.core.logging.SLF4JLogDelegateFactory"
    )
    args = ['run', mainVerticleName, "--redeploy=$watchForChange", "--launcher-class=$mainClassName", "--on-redeploy=$doOnChange",
            "-Dvertx.logger-delegate-factory-class-name=io.vertx.core.logging.SLF4JLogDelegateFactory"]
}
And then at the very top of my MainVerticle::start I have:
LoggerFactory.initialise()
And, boom. Everything is now formatted correctly, including all the startup output.

How can I tell Camel to only copy a file if the Last Modified date is past a certain time?

I'm wondering if this is possible with Apache Camel. I would like Camel to look at a directory of files and copy only the ones whose "Last Modified" date is more recent than a certain date; for example, only files modified AFTER February 7, 2014. Basically, I want to update a "Last Run Date" variable every time Camel runs, and then check whether files were modified after the last run.
I would like to use the actual timestamp on the file, not anything provided by Camel. It is my understanding that there is a deprecated method in Camel that used to stamp files when Camel looked at them, letting you know whether they had already been processed; since it is deprecated, I need an alternative.
The Camel documentation recommends moving or deleting the file after processing to know whether it has been processed, but that is not an option for me. Any ideas? Thanks in advance.
SOLVED (2014-02-10):
import java.util.Date;
import org.apache.camel.builder.RouteBuilder;

public class TestRoute extends RouteBuilder {
    static final long A_DAY = 86400000L;

    @Override
    public void configure() throws Exception {
        Date yesterday = new Date(System.currentTimeMillis() - A_DAY);
        from("file://C:\\TestOutputFolder?noop=true")
            .filter(header("CamelFileLastModified").isGreaterThan(yesterday))
            .to("file://C:\\TestInputFolder");
    }
}
No XML configuration required. Thanks for the answers below.
Yes, you can implement a filter and return true or false depending on whether you want to include the file. In that logic you can check the file's modification time and see whether the file is more than X days old, etc.
See the Camel file documentation at
http://camel.apache.org/file2
and look for the filter option, i.e. where you implement the org.apache.camel.component.file.GenericFileFilter interface.
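A minimal sketch of such a filter (the class name and the 24-hour cutoff are hypothetical; adapt the condition to your "Last Run Date" logic):

```java
import org.apache.camel.component.file.GenericFile;
import org.apache.camel.component.file.GenericFileFilter;

// Hypothetical filter: accept only files modified within the last 24 hours.
public class ModifiedAfterFilter<T> implements GenericFileFilter<T> {

    private static final long A_DAY = 86_400_000L;

    @Override
    public boolean accept(GenericFile<T> file) {
        long cutoff = System.currentTimeMillis() - A_DAY;
        return file.getLastModified() > cutoff;
    }
}
```

The filter is then referenced from the endpoint via the registry, e.g. file://C:\\TestOutputFolder?filter=#myFilter, where myFilter is the bean name under which the filter is registered.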
Take a look at Camel's File Language. It looks like file:modified might be what you are looking for.
Example:
filterFile=${file:modified} < ${date:now-24h}

Unable to set log level in a Java web start application?

Some logging levels appear to be broken?
I run a Java Web Start application (which I will call JWS from now on) straight from a GlassFish 3.1.2.2 instance. The client has a static logger like so:
private final static Logger LOGGER;
static {
LOGGER = Logger.getLogger(App.class.getName());
// Not sure how one externalize this setting or even if we want to:
LOGGER.setLevel(Level.FINER);
}
In the main method, I begin my logic with some simple testing of the logging feature:
alert("isLoggable() INFO? " + LOGGER.isLoggable(Level.INFO)); // Prints TRUE!
alert("isLoggable() FINE? " + LOGGER.isLoggable(Level.FINE)); // ..TRUE
alert("isLoggable() FINER? " + LOGGER.isLoggable(Level.FINER)); // ..TRUE
alert("isLoggable() FINEST? " + LOGGER.isLoggable(Level.FINEST)); // ..FALSE
My alert methods display a JOptionPane dialog box for "true GUI logging". Anyway, you can see the printouts in the comments I added to the code snippet. As expected, the logger is enabled for the levels INFO, FINE and FINER, but not FINEST.
After my alert methods, I type:
// Info
LOGGER.info("Level.INFO");
LOGGER.log(Level.INFO, "Level.INFO");
// Fine
LOGGER.fine("Level.FINE");
LOGGER.log(Level.FINE, "Level.FINE");
// Finer
LOGGER.finer("Level.FINER");
LOGGER.log(Level.FINER, "Level.FINER");
LOGGER.entering("", "Level.FINER", args); // <-- Uses Level.FINER!
// Finest
LOGGER.finest("Level.FINEST");
LOGGER.log(Level.FINEST, "Level.FINEST");
I go to my Java console, click the "Advanced" tab, and tick "Enable logging". Then I start the application. Guess what happens? Only Level.INFO prints! Here's my proof (look at the bottom):
I've done my best to google for log files on my computer to see whether the Level.FINE and Level.FINER messages end up somewhere on the file system, but I cannot find the log messages anywhere.
Summary of Questions
Why does it appear that logging of Level.FINE and Level.FINER does not work in the example provided?
I set the logging level in my static initializing block, but I'd sure like to externalize this setting to a configuration file of some sort, perhaps packaged together with the EAR file I deploy on GlassFish. Or why not manually write in some property in the JNLP file we download from the server. Is this possible somehow?
Solution for problem no 1.
After doing a bit more reading on the topic, I concluded that a logger in Java uses a handler to publish its logs, and this handler in turn has its own set of level thresholds for what it handles. But this handler need not be attached directly to our logger! You see, loggers are organized in a hierarchical namespace, and a child logger may inherit its parent's handlers. If so, then "by default a Logger will log any output messages to its parent's handlers, and so on recursively up the tree" (see Java Logging Overview, Oracle).
I'm not saying I get the full picture just yet, and I sure didn't find anything about how all of this relates to a Java Web Start application; surely there have to be some differences. Anyway, I did manage to write this static initializing block that solves my immediate problem:
static {
    LOGGER = Logger.getLogger(App.class.getName());
    /*
     * This logic can be externalized. See the next solution!
     */
    // DEPRECATED: LOGGER.setLevel(Level.FINER);
    if (LOGGER.getUseParentHandlers())
        LOGGER.getParent().getHandlers()[0].setLevel(Level.FINER);
    else
        LOGGER.setLevel(Level.FINER);
}
Solution for problem no 2.
The LogManager API docs provided much-needed information for the following solution. In a subdirectory of your JRE installation called "lib", you will find a "logging.properties" file. This is the full path to the file on my Windows machine:
C:\Program Files (x86)\Java\jre7\lib\logging.properties
In here you can change a lot of things. One cool thing you could do is change the global logging level. In my file, this was done on row 29 (why is there only a dot in front of "level"? Because the root parent of all loggers is named ""!). That will produce a whole lot of output; on my machine I received about one thousand log messages per second. Thus changing the global level isn't even plausible enough to be considered an option. Instead, add a new row where you specify the level of your own logger. In my case, I added this row:
martinandersson.com.malivechat.app.App.level = FINER
However, chances are you still won't see any results. In solution no. 1, I talked about how loggers are connected to handlers. The default handler is specified in logging.properties, most likely on row 18. Here's how my line reads:
handlers= java.util.logging.ConsoleHandler
As mentioned earlier, these handlers in turn use levels to decide what they publish. So find the line that reads something like this (should be on row 44?):
java.util.logging.ConsoleHandler.level = INFO
..and in my case I swapped "INFO" for "FINER". Problem solved.
But!
My original inquiry into this matter still hasn't yielded an answer for how to set these properties closer in par with the application deployment. More specifically, I would like to ship these properties in a separate file, bundled with the application EAR file I deploy on GlassFish, or something like that. Do you have more information? Please share!
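One option in that direction (a sketch, not JWS-specific; the resource path and class name are assumptions) is to bundle a logging.properties inside the application jar and feed it to the LogManager at startup, instead of editing the JRE-wide file:

```java
import java.io.InputStream;
import java.util.logging.LogManager;

public final class LoggingSetup {

    // Reads a logging.properties bundled on the classpath (path is an assumption)
    // and applies it to the JVM-wide LogManager, overriding the JRE default file.
    public static void configure() throws Exception {
        try (InputStream in = LoggingSetup.class.getResourceAsStream("/logging.properties")) {
            if (in != null) {
                LogManager.getLogManager().readConfiguration(in);
            }
        }
    }
}
```

The same effect is available without code via the system property java.util.logging.config.file, though passing system properties through a JNLP file has its own restrictions.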
