I have three questions about logging inside Spring.
First:
spring documentation:
By default, If you use the ‘Starter POMs’, Logback will be used for
logging. Appropriate Logback routing is also included to ensure that
dependent libraries that use Java Util Logging, Commons Logging, Log4J
or SLF4J will all work correctly.
I don't understand this: if a third-party library uses a different logger, what problem does that actually create in the program? If that library uses another logger, that logger is listed as a dependency of its jar, so when the library is added its logger is added as well, and there should be no problem.
Second:
I saw in a tutorial that TRACE and DEBUG are disabled by default in Spring because they cause performance problems. I understand why TRACE is a problem, because it has to report everything that happens in the program. But why does DEBUG cause performance problems? When I set debug=true, it didn't slow things down that much. So what's the problem?
Third:
In the same tutorial it says that Logback does not have a FATAL level. Why not? Is it possible that a Spring Boot application is missing some required settings but can still start, so that FATAL is simply not needed?
Different logging implementations require different configuration. Log4j uses XML and java.util.logging (JUL) uses properties files; the XML semantics also differ between frameworks.

So you do not want to configure every logging implementation individually. You want one logging configuration to rule them all, a single source of truth for logging config. This has nothing to do with the main intent of the software you are running. Newer logging frameworks generalize the older ones, so you need the latest logging framework to rule them all.
Let me rephrase the first question: why do we distinguish between debug and trace at all? Debug (or de-bug) is a special condition that lets you inspect a bug for debugging purposes. Debug output may show a client's real-world first name and family name; you only need to output that information under debugging circumstances. Logging it may even cause legal problems, because you are processing/storing personal information in log files without permission. To de-bug a piece of software you need the debug log in 90% of all cases; only in rare cases do you need the trace log. That is why they differ.
That's a good one. Fatal, to me, means the server has hardware problems (a burning hard drive, loss of power supply), and that is indicated by errors. Seriously? I have no idea. I would argue that everything that is fatal-worthy should be an error.
We are planning to use OpenTelemetry as the logging tool in a new Spring Boot based microservice. We have explored two ways to use OTel: the first is using the manual instrumentation provided for Java, and the other is using Spring Cloud Sleuth with the @NewSpan annotation.
However, I could not find a way to specify different logging levels like INFO/ERROR/DEBUG. We need the ability to control and restrict excessive logging, while additional debug logs should still be available to help troubleshooting if required.
How can we set the logging levels with OpenTelemetry Java or Spring Sleuth?
Update: As commented by @Jan, I was getting confused between tracing and logging. However, I cannot find any documentation about support for logging with opentelemetry-java. Is there any way to do it? Also, if we use different tools for tracing and logging, will it be considered bad practice?
So generally, how do we use a combination of tracing and logging? Can we use different frameworks for both? For example, if we use OpenTelemetry for tracing and slf4j/log4j for logging, will it be considered bad practice?
OpenTelemetry will never have a "logging" API meant to be used by ordinary users; it is supposed to capture logs produced by actual logging libraries (like log4j or logback).
OpenTelemetry support for logs is still in the alpha state (meaning that the logging SPI/SDK might still change in non-backward-compatible ways), but we already have dedicated OpenTelemetry appenders for several of the most commonly used logging libraries, like log4j and logback. You should be able to use these and expect relatively few breaking changes.
I have been working on a project that logs using what are essentially just println statements with a prefixed string tag. I have been looking into implementing support for an actual logging library such as Logback the past few days and had some questions relating to best-practices about logging in general. I know a lot of what I'm doing is probably stupid, but I want to change :)
When I'm extending the code and adding new features, such as testing a new codec, I have been using liberal logging to ensure the code behaves as expected (instead of actual unit tests), and then using constant booleans at the top to disable that logging when the codec is finished (in case it's needed again or a bug is found, I can flip the boolean while testing). I don't know if the granularity that debug level provides would be enough and would prefer some way to define levels differently for different features. Leaving these enabled by default would really bloat the console and probably effect performance -- is this what filters are usually used for?
I've also found myself in more than one case prepending spaces to my messages so that I can better follow the flow of the code. I've found this to be really helpful. In a way, the tabbed messages are like a debug-debug level.
Doing something
    Reading a file
        header of file: ...
        body of file: ...
Back at main
What are good practices for logging? Can someone refer me to a good resource that I can dig into, or explain whether what I'm doing is stupid and why? What are some alternatives? An open source project as an example would be extremely helpful. Thanks, I appreciate any guidance.
Some advice:
Never use logging as a substitute for unit tests.
Regarding logging, you should log whatever helps you find bugs quicker.
Logging libraries support asynchronous logging, which keeps the impact on your application's performance low (see log4j2 async logging). Logback supports async appenders too.
Do not use booleans inside your code to decide whether to log or not. Use the logging levels (TRACE, DEBUG, INFO, WARN, ERROR) and set them accordingly: usually in a PROD environment you will use the WARN level, and in DEV you can set it to DEBUG (see the sketch after this list).
Logging at different levels depending on the package is quite useful (you can use logger and appender configuration to customize this and other things).
In summary, the important thing is to read the documentation of whatever library you are using.
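To make the booleans-versus-levels point concrete, here is a minimal SLF4J sketch (the class, method and codec names are invented for the example); the effective level is then set per logger/package in your backend's configuration, not in code:

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    public class CodecTester {
        // One logger per class; its effective level is controlled by configuration,
        // not by a hard-coded boolean.
        private static final Logger log = LoggerFactory.getLogger(CodecTester.class);

        void decode(byte[] frame) {
            // Printed only when the DEBUG level is enabled for this logger.
            log.debug("decoding frame of {} bytes", frame.length);

            // Guard only genuinely expensive computations.
            if (log.isTraceEnabled()) {
                log.trace("frame dump: {}", java.util.Arrays.toString(frame));
            }
        }
    }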
I was wondering if I can create a project in Eclipse (or, for that matter, any Java IDE) in which I write my log4j initialization code and save it as a project, so I can just import it into any workspace. I know how to configure a servlet in which I initialize the logger in the init() method and load the servlet on startup, but that requires an entry in web.xml which changes depending on the application.

Is there any way I can create a reusable project with no dependency on the deployment descriptor (DD)?
I know how to configure a servlet in which I initialize the logger in the init() method and load the servlet on startup ...
There is probably a better way.
For instance, the way I do log4j configuration on a servlet (using Tomcat) is to simply put the "log4j.properties" file on the classpath; e.g. in ".../webapps/MyApp/WEB-INF/classes/". Log4j's default strategy for locating the logging properties will find it there ... with no need for you to write any Java code.
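For illustration, a minimal log4j.properties along those lines might look as follows (the appender name, pattern and the com.example package are just placeholders):

    # Root logger: INFO and above, written to the console.
    log4j.rootLogger=INFO, console

    log4j.appender.console=org.apache.log4j.ConsoleAppender
    log4j.appender.console.layout=org.apache.log4j.PatternLayout
    log4j.appender.console.layout.ConversionPattern=%d{ISO8601} %-5p [%c] %m%n

    # Example of a per-package override.
    log4j.logger.com.example.myapp=DEBUG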
Configuring the logging system from Java code is (IMO) a bad idea because it means that you have to change, rebuild and redeploy Java code in order to tweak the logging.
In addition to @Stephen C's answer, some application servers already come with a predefined logging configuration. If you're using JBoss, it already has its own log4j configuration in a file called jboss-log4j.xml, which defines a standard configuration that you can adapt to your needs.
Other than that, I recommend bundling a configuration with your application, as described in the other answer.
Even smarter and more flexible is to use a log wrapper in your application, which will abstract from the underlying logging framework. Take a look at these:
SLF4J (preferred): http://www.slf4j.org/
Commons Logging: http://commons.apache.org/logging/
If you use one of these, you can then configure them to use the logging framework of the server you're deploying to. Many of the popular open-source frameworks use these log wrapper frameworks for similar reasons. Take a look at them, then adopt one as your standard - I strongly recommend SLF4J.
One of the major problems with using a logging framework inside your web application is that you usually end up wanting to write to a file, and this is not allowed within the servlet API, causing you to become subtly vendor-dependent and to not work well with multi-computer deployments.
I would strongly suggest that you consider converting your code to use SLF4J for your actual logging statements, as it allows you to:
Become back-end independent.
Use the "{}" placeholders to write simply log.debug("a={}, b={}", a, b) and avoid the possibly expensive construction of the log message when debug logging is not enabled, without having to add a guarding if (log.isDebugEnabled()) statement (see the sketch below).
These two alone were reason enough for me to switch.
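To make the placeholder point concrete, a small sketch (the class name and values are arbitrary):

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    public class PlaceholderDemo {
        private static final Logger log = LoggerFactory.getLogger(PlaceholderDemo.class);

        void compare(int a, Object b) {
            // Eager concatenation: the String is built even if DEBUG is disabled.
            log.debug("a=" + a + ", b=" + b);

            // Parameterized form: the message is only formatted if DEBUG is enabled,
            // so no guard such as if (log.isDebugEnabled()) is needed.
            log.debug("a={}, b={}", a, b);
        }
    }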
A very interesting way to handle logging then is to use the java.util.logging bridge to send all log statements to the Java logging system, which most web containers handle. Then the web container does all the work for you, and you can use the vendor's tooling for investigating log files. Very useful!
I am using SLF4J as a logging facade and let users decide where and what to log. Now, in case of a crash, I want to send a file to the server that contains debugging information -- which basically means a log file. And since we already have all those log statements scattered through the code, why not use them?
So basically, I want to create a log file programmatically via SLF4J, transparent for the user who still can plug in his own logging backend and configuration.
My first idea was to implement org.slf4j.impl.StaticLoggerBinder, deliver my own implementation of a logger that does its own logging and then delegates to the user-configured logger. However, I see certain issues with this: if the user plugs in a normal logging backend, then multiple instances of org.slf4j.impl.StaticLoggerBinder are on the classpath. This will issue a warning AND I might not be able to ensure that my implementation is the one that gets called.
Are there better solutions to this? A whole different approach? Is the idea inherently bad? How to accomplish this?
The point of SLF4J is not to let application end-users choose their logging framework (why would they care?) but to let developers include a library without being tied into the library's choice of logging framework.
So if you want to upload debug information from a deployed application, it's fine to fix the logging implementation. The user can still edit the implementation's configuration file, if they want.
Since SLF4J is open-source, you can modify it to use another class than org.slf4j.impl.StaticLoggerBinder. Then your custom StaticLoggerBinder class could load the original, user-provided org.slf4j.impl.StaticLoggerBinder (if it exists).
Another idea is using a custom LoggerFactory (not org.slf4j.LoggerFactory) in your application which returns a Logger delegate. This delegate class delegates logging method calls to the original Logger implementation and also sends the logs to the server if necessary.
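A rough sketch of that delegate idea, with a made-up CrashLoggers factory and an in-memory buffer standing in for the upload mechanism (only a couple of logging methods are shown, not the full org.slf4j.Logger interface):

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    public final class CrashLoggers {

        // In-memory buffer that would be sent to the server after a crash.
        private static final StringBuilder crashBuffer = new StringBuilder();

        public static CrashLogger getLogger(Class<?> clazz) {
            return new CrashLogger(LoggerFactory.getLogger(clazz));
        }

        public static final class CrashLogger {
            private final Logger delegate;

            private CrashLogger(Logger delegate) {
                this.delegate = delegate;
            }

            public void info(String format, Object... args) {
                delegate.info(format, args);   // user-configured backend
                record("INFO", format, args);  // our crash-report copy
            }

            public void error(String format, Object... args) {
                delegate.error(format, args);
                record("ERROR", format, args);
            }

            private void record(String level, String format, Object... args) {
                String msg = org.slf4j.helpers.MessageFormatter
                        .arrayFormat(format, args).getMessage();
                synchronized (crashBuffer) {
                    crashBuffer.append(level).append(' ').append(msg).append('\n');
                }
            }
        }
    }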
Anyway, both look like awkward hacks to me; creating two artifacts (one for end users and another one for developers) smells better.

(Finally, I don't know what kind of library/application it is, but in my working environment it would not be acceptable if a library sent data to a third-party server. Are you sure that you really need to do this?)
Should new projects use logback instead of log4j as a logging framework?
Or, in other words: is logback better than log4j (leaving the SLF4J 'feature' of logback aside)?
You should use SLF4J+Logback for logging.
It provides neat features like parameterized messages and, in contrast to commons-logging, a Mapped Diagnostic Context (MDC).
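As a small illustration of the MDC (the requestId key and value are arbitrary; with Logback you could print it via %X{requestId} in the pattern layout):

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;
    import org.slf4j.MDC;

    public class MdcDemo {
        private static final Logger log = LoggerFactory.getLogger(MdcDemo.class);

        void handleRequest(String requestId) {
            MDC.put("requestId", requestId);    // attached to every log event on this thread
            try {
                log.info("processing request"); // %X{requestId} in the pattern shows the id
            } finally {
                MDC.remove("requestId");        // always clean up thread-local state
            }
        }
    }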
Using SLF4J makes the logging backend exchangeable in a quite elegant way.
Additionally, SLF4J supports bridging of other logging frameworks to the actual SLF4J implementation you'll be using so logging events from third party software will show up in your unified logs - with the exception of java.util.logging that can't be bridged the same way that other logging frameworks are.
Bridging jul is explained in the javadocs of SLF4JBridgeHandler.
I've had a very good experience using the SLF4J+Logback combination in several projects and LOG4J development has pretty much stalled.
SLF4J has the following remaining downsides:
It does not support varargs to stay compatible with Java < 1.5
It does not support using both parametrized message and an exception at the same time.
It does not contain support for a Nested Diagnostic Context (NDC), which LOG4J has.
The author (of both Logback and Log4j) has a list of reasons to change at http://logback.qos.ch/reasonsToSwitch.html.
Here are a few that stuck out at me:
Faster implementation

Based on our previous work on log4j, logback internals have been re-written to perform about ten times faster on certain critical execution paths. Not only are logback components faster, they have a smaller memory footprint as well.

Automatic reloading of configuration files

Logback-classic can automatically reload its configuration file upon modification. The scanning process is both fast and safe as it does not involve the creation of a separate thread for scanning. This technical subtlety ensures that logback plays well within application servers and more generally within the JEE environment.

Stack traces with packaging data

When logback prints an exception, the stack trace will include packaging data. Here is a sample stack trace generated by the logback-demo web-application.
    14:28:48.835 [btpool0-7] INFO  c.q.l.demo.prime.PrimeAction - 99 is not a valid value
    java.lang.Exception: 99 is invalid
        at ch.qos.logback.demo.prime.PrimeAction.execute(PrimeAction.java:28) [classes/:na]
        at org.apache.struts.action.RequestProcessor.processActionPerform(RequestProcessor.java:431) [struts-1.2.9.jar:1.2.9]
        at org.apache.struts.action.RequestProcessor.process(RequestProcessor.java:236) [struts-1.2.9.jar:1.2.9]
        at org.apache.struts.action.ActionServlet.doPost(ActionServlet.java:432) [struts-1.2.9.jar:1.2.9]
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:820) [servlet-api-2.5-6.1.12.jar:6.1.12]
        at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:502) [jetty-6.1.12.jar:6.1.12]
        at ch.qos.logback.demo.UserServletFilter.doFilter(UserServletFilter.java:44) [classes/:na]
        at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1115) [jetty-6.1.12.jar:6.1.12]
        at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:361) [jetty-6.1.12.jar:6.1.12]
        at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:417) [jetty-6.1.12.jar:6.1.12]
        at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230) [jetty-6.1.12.jar:6.1.12]
From the above, you can recognize that the application is using Struts version 1.2.9 and was deployed under Jetty version 6.1.12. Thus, stack traces quickly inform the reader not only about the classes intervening in the exception but also about the packages and package versions they belong to. When your customers send you a stack trace, as a developer you will no longer need to ask them for information about the versions of the packages they are using; the information will be part of the stack trace. See the "%xThrowable" conversion word for details.

This feature can be quite helpful, to the point that some users mistakenly consider it a feature of their IDE.
Automatic removal of old log archives

By setting the maxHistory property of TimeBasedRollingPolicy or SizeAndTimeBasedFNATP, you can control the maximum number of archived files. If your rolling policy calls for monthly rollover and you wish to keep one year's worth of logs, simply set the maxHistory property to 12. Archived log files older than 12 months will be automatically removed.
There may be a bias there, but the same guy did write both frameworks, and if he says to use Logback over Log4j, he's probably worth listening to.
I would use slf4j for logging in all cases. This allows you to choose which actual logging backend you want to use at deploy time instead of at coding time.
This has proven to be very valuable to me. It allows me to use log4j in old JVMs, logback in 1.5+ JVMs, and also java.util.logging if needed.
Logback is more Java EE aware: in general (from code to documentation) it keeps containers in mind -- how multiple applications coexist, how class loaders are implemented, and so on. Contexts for loggers, JNDI and JMX configuration are included, etc.

From a developer's perspective both are almost the same; Logback adds parameterized logging (no need to use if (logger.isDebugEnabled()) to avoid string concatenation overhead).

Log4j's only giant plus is old JVM support; the small (IMO) ones are NDC (Logback has only MDC) and some extensions. For example, I wrote an extension for configureAndWatch for Log4j; there is no such thing for Logback.
The original log4j and logback were designed and implemented by the same guy.

Several open source tools use SLF4J. I don't see any significant deficiencies in this tool, so unless you have a lot of extensions to log4j in your codebase, I would go ahead with logback.
I would think that your decision should come down to the same one it would if you were deciding between using log4j or Jakarta Commons Logging - are you developing a library which will be included in other applications? If so, then it doesn't seem fair to force users of your library to also use your logging library of choice.
If the answer is no, I would just go with what is simpler to add and what you are more comfortable with. Sounds like logback is just as extensible and reliable as log4j, so if you're comfortable using it, go ahead.
I'm not familiar with SLF4J, and I've only taken a brief look at logback, but two things come to mind.
First, why are you excluding a tool from examination? I think it's important to keep an open mind and examine all possibilities to choose the best one.
Second, I think that in some projects one tool is better than another, and the opposite might be true in a different project. I don't think that one tool is always better than another. There is, after all, no silver bullet.
To answer your question - Yes and no. It depends on the project, and how familiar the team is with one tool. I wouldn't say "don't use log4j" if the entire team is very comfortable with it, it meets all the needs, and logback doesn't offer anything that we need to complete the task.