How to set different logging levels in opentelemetry-java? - java

We are planning to use OpenTelemetry as the logging tool in a new Spring Boot based microservice. We have explored two ways to use OTel: the first is manual instrumentation with the Java SDK, and the other is Spring Cloud Sleuth with the @NewSpan annotation.
However, I could not find a way to specify logging levels like INFO/ERROR/DEBUG. We need the ability to control and restrict excessive logging, while still being able to turn on additional debug logs for troubleshooting when required.
How can we set logging levels with opentelemetry-java or Spring Sleuth?
Update: As @Jan commented, I was confusing tracing with logging. However, I cannot find any documentation about logging support in opentelemetry-java. Is there any way to do it? Also, if we use different tools for tracing and logging, will that be considered bad practice?

So, generally, how do we use tracing and logging in combination? Can we use different frameworks for each? For example, if we use OpenTelemetry for tracing and SLF4J/Log4j for logging, will that be considered bad practice?
OpenTelemetry will never have a "logging" API meant to be used directly by ordinary users; it is supposed to capture logs produced by actual logging libraries (like Log4j or Logback).
OpenTelemetry support for logs is still in the alpha state (meaning that the logging SPI/SDK might still change in non-backward-compatible ways), but we already have dedicated OpenTelemetry appenders for several of the most commonly used logging libraries, like Log4j and Logback. You should be able to use these and expect relatively few breaking changes.
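To make this concrete, here is a minimal sketch of what that looks like from the application side. It assumes an OpenTelemetry Logback appender is wired up in logback.xml (the appender artifact and class names vary between alpha releases, so treat them as placeholders); the application code itself only ever talks to SLF4J, and the class below is hypothetical.

// Nothing OpenTelemetry-specific appears in application code.
// The OpenTelemetry Logback appender, configured in logback.xml, forwards
// whatever records the logging framework actually emits to the OTel SDK.
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class OrderService {

    private static final Logger log = LoggerFactory.getLogger(OrderService.class);

    public void placeOrder(String orderId) {
        // Levels are still controlled by Logback's configuration (e.g. INFO in
        // production, DEBUG while troubleshooting); OpenTelemetry only exports
        // what passes that threshold.
        log.info("placing order {}", orderId);
        log.debug("order {} payload validated", orderId);
    }
}

In other words, level control stays with the logging framework (Logback/Log4j), while OpenTelemetry handles tracing and the export of the captured log records.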

Related

Three questions about logging in Spring

I have 3 questions about logging inside Spring.
First:
The Spring documentation says:
By default, If you use the ‘Starter POMs’, Logback will be used for logging. Appropriate Logback routing is also included to ensure that dependent libraries that use Java Util Logging, Commons Logging, Log4J or SLF4J will all work correctly.
I don't understand this: if a third-party library uses a different logger, what problem does that create in the program? If that library uses another logger, the logger comes along as a dependency of the library's jar, so when the library is added the logger is added too, and there should be no problem.
Second:
I saw a tutorial that says TRACE and DEBUG are disabled by default in Spring because they cause performance problems. I understand why TRACE is a problem, because it has to report everything that happens in the program. But why does DEBUG cause performance problems? When I set debug=true, it didn't slow things down that much. So what's the problem?
Third:
In this tutorial it says that Logback does not have a FATAL level. Why not? Is it possible that a Spring Boot program is missing some required settings but can still start, without needing FATAL?
Different logging implementations require different configuration: Log4j uses XML and java.util.logging (JUL) uses properties files, and the XML semantics also differ between frameworks.
So you do not want to configure every logging implementation individually; you want one logging configuration to rule them all, a single source of truth for logging config. This has nothing to do with the main intent of the software you are running. Later logging frameworks generalize older ones, so you need the latest logging framework to rule them all.
Let me rephrase first: why do we distinguish between DEBUG and TRACE? Debug (or de-bug) is a special condition that lets you inspect a bug for debugging purposes. Debug output may expose a client's real first and family name, and you only want the code that outputs such information to run under debugging circumstances. Logging it may even cause legal problems, because you would be processing/storing personal information in log files without permission. To de-bug a piece of software you need the debug log in 90% of all cases; only in rare cases do you need the trace log. That is why they differ.
That's a good one. FATAL, for me, means the server has hardware problems (a burning hard drive, loss of the power supply), and those are indicated by errors anyway. Seriously? I have no idea. I would argue that everything FATAL-worthy should simply be an ERROR.

Logging best practices: structure / conditional logging / filtering

I have been working on a project that logs using what are essentially just println statements with a prefixed string tag. I have been looking into adding support for an actual logging library such as Logback over the past few days and had some questions relating to best practices about logging in general. I know a lot of what I'm doing is probably stupid, but I want to change :)
When I'm extending the code and adding new features, such as testing a new codec, I have been using liberal logging to ensure the code behaves as expected (instead of actual unit tests), and then using constant booleans at the top of the file to disable that logging once the codec is finished (so that if it's needed again or a bug is found, I can flip the boolean while testing). I don't know if the granularity the DEBUG level provides would be enough, and I would prefer some way to define levels differently for different features. Leaving these enabled by default would really bloat the console and probably affect performance; is this what filters are usually used for?
I've also found myself in more than one case prepending spaces to my messages so that I can better follow the flow of the code. I've found this to be really helpful. In a way, the tabbed messages are like a debug-debug level.
Doing something
  Reading a file
    header of file: ...
    body of file: ...
Back at main
What are good practices for logging? Can someone refer me a good resource that I can dig into or explain if what I'm doing is stupid and why it's stupid? What are some alternatives? An open source project as an example would be extremely helpful. Thanks, I appreciate any guidance.
Some advice:
Never use logging as a substitute for unit tests.
Beyond that, log whatever helps you find bugs more quickly.
Logging libraries support asynchronous logging, which keeps the impact on your application's performance low (see Log4j 2 async logging); Logback supports async appenders too.
Do not use booleans inside your code to decide whether to log. Use the logging levels (TRACE, DEBUG, INFO, WARN, ERROR) and set them accordingly: usually in a PROD environment you will use WARN, and in DEV you can set it to DEBUG (see the sketch after this list).
Logging at different levels depending on the package is quite useful (loggers and appenders can be configured per package to customize this and more).
To sum up, the important thing is to read the documentation of whatever library you are using.
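As a minimal sketch of the level-based approach (the class, package and messages are made up), the ad-hoc booleans disappear and the framework's configuration decides what actually gets written:

// Hypothetical codec class logging through SLF4J.
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class CodecTester {

    private static final Logger log = LoggerFactory.getLogger(CodecTester.class);

    void decode(byte[] frame) {
        // Emitted only when the effective level for this logger is DEBUG or
        // lower, e.g. DEBUG in DEV and WARN in PROD, chosen in the logging
        // configuration rather than with a boolean in the code.
        log.debug("decoding frame of {} bytes", frame.length);

        if (frame.length == 0) {
            // Still visible under the usual WARN production threshold.
            log.warn("empty frame received, skipping");
        }
    }
}

Per-feature granularity then falls out of per-package logger configuration, for example setting only the codec package to DEBUG while the rest of the application stays at INFO.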

Is there any way I can create a reusable logging project for use in web-applications?

I was wondering if I can create a project in Eclipse, or for that matter any Java IDE, in which I write my log4j initialization code and save it as a project, so I can just import it into any workspace. I know how to configure a servlet in which I initialize the logger in the init() method and load the servlet on startup, but that requires an entry in the web.xml which changes depending on the application.
Is there any way I can create a reusable project with no dependency on the deployment descriptor?
I know how to configure a servlet in which I initialize the logger in the init() method and load the servlet on startup ...
There is probably a better way.
For instance, the way I do log4j configuration on a servlet (using Tomcat) is to simply put the "log4j.properties" file on the classpath; e.g. in ".../webapps/MyApp/WEB-INF/classes/". Log4j's default strategy for locating the logging properties will find it there ... with no need for you to write any Java code.
Configuring the logging system from Java code is (IMO) a bad idea because it means that you have to change, rebuild and redeploy Java code in order to tweak the logging.
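For illustration, under that approach the servlet contains no logging configuration code at all (the class below is hypothetical); Log4j 1.x picks up log4j.properties from the classpath, e.g. WEB-INF/classes, the first time a logger is used:

import org.apache.log4j.Logger;

import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class MyServlet extends HttpServlet {

    // No init() logging setup and no web.xml entry needed for logging;
    // configuration comes entirely from log4j.properties on the classpath.
    private static final Logger log = Logger.getLogger(MyServlet.class);

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
        log.info("Handling request for " + req.getRequestURI());
    }
}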
In addition to @Stephen C's answer, some application servers already come with a predefined logging configuration. If you're using JBoss, it already has its own log4j configuration in a file called jboss-log4j.xml, which defines a standard configuration that you can adapt to your needs.
Other than that, I recommend to bundle a configuration with your application like described in the other answer.
Even smarter and more flexible is to use a log wrapper in your application, which will abstract from the underlying logging framework. Take a look at these:
SLF4J (preferred): http://www.slf4j.org/
Commons Logging: http://commons.apache.org/logging/
If you use one of these, you can then configure them to use the logging framework of the server you're deploying to. Many of the popular open-source frameworks use these log wrapper frameworks for similar reasons. Take a look at them, then adopt one as your standard - I strongly recommend SLF4J.
One of the major problems with using a logging framework inside your web application is that you usually end up wanting to write to a file, and this is not allowed within the servlet API, causing you to become subtly vendor-dependent and to not work well with multi-machine deployments.
I would strongly suggest that you consider converting your code to use SLF4J for your actual logging statements, as it allows you to:
Become backend independent.
Use the "{}" placeholders to simply write log.debug("a={}, b={}", a, b) and avoid the possibly expensive construction of the logging string when debug logging is not enabled, without having to add a guarding if (log.isDebugEnabled()) statement (see the sketch below).
These two alone were reason enough for me to switch.
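As a small sketch of the difference (the class and variable names are made up):

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class PlaceholderExample {

    private static final Logger log = LoggerFactory.getLogger(PlaceholderExample.class);

    void compare(Object a, Object b) {
        // Pre-SLF4J style: without the guard, the message string is built
        // even when DEBUG is disabled.
        if (log.isDebugEnabled()) {
            log.debug("a=" + a + ", b=" + b);
        }

        // SLF4J style: the message is only formatted if DEBUG is enabled,
        // so no guard is needed.
        log.debug("a={}, b={}", a, b);
    }
}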
A very interesting way to handle logging then is to use the java.util.logging bridge to send all log statements to the Java logging system, which most web containers handle. Then the web container does all the work for you, and you can use the vendor's tooling for investigating log files. Very useful!

What is the issue with the runtime discovery algorithm of Apache Commons Logging

Dave Syer (SpringSource) writes in his blog:
Unfortunately, the worst thing about commons-logging, and what has made it unpopular with new tools, is also the runtime discovery algorithm.
Why? What is the issue with its runtime discovery algorithm? Performance?
Why? What is the issue with its runtime discovery algorithm? Performance?
No, it's not performance, it's classloader pain. The JCL discovery process relies on classloader hacks to find the logging framework at runtime, but this mechanism leads to numerous problems, including unexpected behavior and hard-to-debug classloading issues, resulting in increased complexity. This is nicely captured by Ceki (the author of Log4j, SLF4J and Logback) in Think again before adopting the commons-logging API (which also mentions the memory leak problems observed with JCL).
And this is why SLF4J, which uses static bindings, was created.
Ceki being the author of SLF4J, you might think his articles are biased but, believe me, they are not, and he provides lots of references (evidence) to prove his point.
To sum up:
Yes, JCL is known to be broken, better stay away from it.
If you want to use a logging facade (not all projects need that), use SLF4J.
SLF4J provides a JCL-to-SLF4J bridge for frameworks still using JCL like Spring :(
I find Logback, Log4J's successor, to be a superior logging implementation.
Logback natively implements the SLF4J API. This means that if you are using Logback, you are actually using the SLF4J API.
See also
Commons Logging was my fault
Think again before adopting the commons-logging API
SLF4J Vs JCL / Dynamic Binding Vs Static Binding
Commons Logging is a lightweight logging facade that is placed on top of a heavyweight logging API, be that Log4j, java.util.logging or another supported logging API.
The discovery algorithm is what Commons Logging uses to determine which logging API you are using at runtime, so it can direct log calls made through its API to the underlying implementation. The benefit is that if you create a library that does logging, you don't want to tie the users of your library to any particular heavyweight logging system. Callers of your code can configure logging via Log4j, java.util.logging, etc., and Commons Logging will forward to that API at runtime.
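For illustration, library code written against the facade looks something like this (the class is hypothetical); which concrete framework receives the calls is decided by JCL's discovery in the host application:

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

public class LibraryComponent {

    // The library only depends on the commons-logging API; at runtime JCL's
    // discovery picks Log4j, java.util.logging, or whatever else it finds.
    private static final Log log = LogFactory.getLog(LibraryComponent.class);

    public void doWork() {
        log.debug("starting work");
        log.info("work finished");
    }
}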
Common gripes for commons logging:
Even if you don't use it yourself, a library you depend on might, so you have to include it in your classpath anyway.
It runs the discovery algorithm for each classloader you want to log from, which can produce unwanted results, so make sure you put commons-logging.jar in the right classloader.
Greater complexity than the underlying logging framework.
Fewer features than the underlying logging framework.
Greater perceived complexity, as well as unpredictability in complex classpath hierarchies without any perceived benefit, makes users of commons-logging agitated. The fact that this choice may be forced on you does not make users any more sympathetic. See this article for a compelling argument against using commons-logging.
I can't speak to the "believed unpopular" aspect; I can only speak for myself:
Commons Logging is a facade over top of whatever your "real" logging framework may be: Log4j, Logback or whatever.
The idea of a logging facade is that your app gains the flexibility to decide at runtime which logging implementation it wants to work with. The facades are clever enough to find logging implementations at runtime.
My older Java apps use Log4j directly. They work fine; I see no need to change them. My newer Java apps will probably use Logback. I think the ability to dynamically choose a logging framework is something none of my apps will ever need. Of course, other people's mileage may vary.
EDIT: Looks like I was wrong about the rationale for Commons Logging. The links given by @Pascal Thivent, especially the first one, explain this far better.
Commons Logging contains logic to determine at runtime whether to use log4j or java.util.logging.*.
That code used to be seriously broken, essentially only working with JUL.
Based on the experience with this, SLF4J was written. It uses static binding (or used to; I'm not sure about version 1.6) to choose the appropriate framework among Log4j, JUL or the Log4j successor Logback (and more), and it includes a bridge that allows existing Commons Logging code to use SLF4J transparently.
If you can, then go for slf4j.

Are there technical reasons to prefer using logback instead of log4j?

Should new projects use Logback instead of Log4j as their logging framework?
Or in other words: "Is Logback better than Log4j (leaving the SLF4J feature of Logback aside)?"
You should use SLF4J+Logback for logging.
It provides neat features like parametrized messages and (in contrast to commons-logging) a Mapped Diagnostic Context (MDC, javadoc, documentation).
Using SLF4J makes the logging backend exchangeable in a quite elegant way.
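As a small sketch of those two features (the class, key and messages are made up), a value put into the MDC can be printed by the backend's pattern, e.g. via %X{requestId} in a Logback pattern:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

public class RequestHandler {

    private static final Logger log = LoggerFactory.getLogger(RequestHandler.class);

    void handle(String requestId, String user) {
        // The MDC value is attached to every log statement on this thread
        // until it is removed, without being repeated in each message.
        MDC.put("requestId", requestId);
        try {
            // Parametrized message: arguments are only formatted if INFO is enabled.
            log.info("handling request for user {}", user);
        } finally {
            MDC.remove("requestId");
        }
    }
}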
Additionally, SLF4J supports bridging other logging frameworks to the actual SLF4J implementation you'll be using, so logging events from third-party software will show up in your unified logs. The exception is java.util.logging, which can't be bridged the same way the other logging frameworks are.
Bridging JUL is explained in the javadocs of SLF4JBridgeHandler.
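In short, the JUL bridge has to be installed programmatically (or via JUL's own configuration) rather than by just swapping a jar. A minimal sketch of the programmatic install described in those javadocs, assuming the jul-to-slf4j artifact is on the classpath:

import org.slf4j.bridge.SLF4JBridgeHandler;

public class JulBridgeBootstrap {

    public static void main(String[] args) {
        // Remove JUL's default handlers from the root logger, then install
        // the bridge so JUL records are forwarded to SLF4J (and Logback).
        SLF4JBridgeHandler.removeHandlersForRootLogger();
        SLF4JBridgeHandler.install();

        java.util.logging.Logger.getLogger("demo").info("now routed through SLF4J");
    }
}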
I've had a very good experience using the SLF4J+Logback combination in several projects, and Log4j development has pretty much stalled.
SLF4J has the following remaining downsides:
It does not support varargs, in order to stay compatible with Java < 1.5.
It does not support using a parametrized message and an exception at the same time.
It does not contain support for a Nested Diagnostic Context (NDC, javadoc) which LOG4J has.
The author (of both Logback and Log4j) has a list of reasons to change at http://logback.qos.ch/reasonsToSwitch.html.
Here are a few that stuck out to me:
Faster implementation
Based on our previous work on log4j, logback internals have been re-written to perform about ten times faster on certain critical execution paths. Not only are logback components faster, they have a smaller memory footprint as well.
Automatic reloading of configuration files
Logback-classic can automatically reload its configuration file upon modification. The scanning process is both fast and safe as it does not involve the creation of a separate thread for scanning. This technical subtlety ensures that logback plays well within application servers and more generally within the JEE environment.
Stack traces with packaging data
When logback prints an exception, the stack trace will include packaging data. Here is a sample stack trace generated by the logback-demo web-application.
14:28:48.835 [btpool0-7] INFO c.q.l.demo.prime.PrimeAction - 99 is not a valid value
java.lang.Exception: 99 is invalid
    at ch.qos.logback.demo.prime.PrimeAction.execute(PrimeAction.java:28) [classes/:na]
    at org.apache.struts.action.RequestProcessor.processActionPerform(RequestProcessor.java:431) [struts-1.2.9.jar:1.2.9]
    at org.apache.struts.action.RequestProcessor.process(RequestProcessor.java:236) [struts-1.2.9.jar:1.2.9]
    at org.apache.struts.action.ActionServlet.doPost(ActionServlet.java:432) [struts-1.2.9.jar:1.2.9]
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:820) [servlet-api-2.5-6.1.12.jar:6.1.12]
    at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:502) [jetty-6.1.12.jar:6.1.12]
    at ch.qos.logback.demo.UserServletFilter.doFilter(UserServletFilter.java:44) [classes/:na]
    at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1115) [jetty-6.1.12.jar:6.1.12]
    at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:361) [jetty-6.1.12.jar:6.1.12]
    at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:417) [jetty-6.1.12.jar:6.1.12]
    at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230) [jetty-6.1.12.jar:6.1.12]
From the above, you can recognize that the application is using Struts version 1.2.9 and was deployed under jetty version 6.1.12. Thus, stack traces will quickly inform the reader not only about the classes intervening in the exception but also about the packages and package versions they belong to. When your customers send you a stack trace, as a developer you will no longer need to ask them for information about the versions of the packages they are using; the information will be part of the stack trace. See the "%xThrowable" conversion word for details.
This feature can be quite helpful, to the point that some users mistakenly consider it a feature of their IDE.
Automatic removal of old log archives
By setting the maxHistory property of TimeBasedRollingPolicy or SizeAndTimeBasedFNATP, you can control the maximum number of archived files. If your rolling policy calls for monthly rollover and you wish to keep one year's worth of logs, simply set the maxHistory property to 12. Archived log files older than 12 months will be automatically removed.
There may be some bias there, but the same guy wrote both frameworks, and if he says to use Logback over Log4j he's probably worth listening to.
I would use SLF4J for logging in all cases. This allows you to choose which actual logging backend you want to use at deploy time instead of at code time.
This has proven to be very valuable to me. It allows me to use Log4j on old JVMs, Logback on 1.5+ JVMs, and also java.util.logging if needed.
Logback is more Java EE aware:
In general (from code to documentation) it keeps containers in mind: how multiple apps coexist, how class loaders are implemented, etc. Contexts for loggers, JNDI and JMX configuration are included, and so on.
From a developer's perspective it is almost the same; Logback adds:
Parameterized logging (no need to use if (logger.isDebugEnabled()) to avoid string concatenation overhead).
Log4j's only giant plus is old JVM support; its remaining advantages are small (IMO):
NDC (Logback has only MDC) and some extensions. For example, I wrote an extension for configureAndWatch for Log4j; there is no such thing for Logback.
The original Log4j and Logback were designed and implemented by the same guy.
Several open source tools use SLF4J. I don't see any significant deficiencies in this tool. So unless you have a lot of extensions to Log4j in your codebase, I would go ahead with Logback.
I would think that your decision should come down to the same one you would make if you were deciding between using Log4j and Jakarta Commons Logging: are you developing a library which will be included in other applications? If so, then it doesn't seem fair to force users of your library to also use your logging library of choice.
If the answer is no, I would just go with what is simpler to add and what you are more comfortable with. Sounds like logback is just as extensible and reliable as log4j, so if you're comfortable using it, go ahead.
I'm not familiar with SLF4J, and I've only taken a brief look at logback, but two things come to mind.
First, why are you excluding a tool from examination? I think it's important to keep an open mind and examine all possibilities to choose the best one.
Second, I think that in some projects one tool is better than another, and the opposite might be true in a different project. I don't think that one tool is always better than another. There is, after all, no silver bullet.
To answer your question: yes and no. It depends on the project, and on how familiar the team is with a given tool. I wouldn't say "don't use log4j" if the entire team is very comfortable with it, it meets all the needs, and logback doesn't offer anything that we need to complete the task.
