I'm aware that by using the flag verbose:class, we can get the JVM to log when a class is loaded and from where. However, I want to see some additional information: which class loader loaded the class, and ideally the class whose execution triggered the loading. (Not entirely sure that latter part even makes sense!)
Is there any way to get the JVM to log this information, or any other suggestions of how to get it? Thanks.
You can see what triggered a class load in some cases. If you use -XX:+TraceClassLoading and -XX:+TraceClassResolution, you'll see a collection of Loading messages (when the .class bytes get loaded) and subsequent RESOLVE messages (when the classes themselves get resolved). So by figuring out which RESOLVE messages you're seeing, you should be able to determine which class is causing a dependent class to be loaded.
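For example, a run might look like this (a sketch only: the class names and paths are placeholders, the exact output format varies by JVM version, and on JDK 9+ these flags were replaced by -Xlog:class+load and -Xlog:class+resolve):

java -XX:+TraceClassLoading -XX:+TraceClassResolution -cp app.jar com.example.Main
[Loaded com.example.util.Helper from file:/home/user/app.jar]
RESOLVE com.example.Main com.example.util.Helper Main.java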
Unfortunately, this doesn't tell you anything about your classloaders. Although it will print the JAR a class is loaded from, if that doesn't uniquely identify your classloader then it may not be possible to answer the question using standard tools. However, if you're using an embedded engine such as Tomcat or an OSGi container that provides its own classloaders, there may be additional debugging flags you can turn on to identify which classloader instance is being used.
If your problem is debugging classloading, I would consider using a debugger.
Using IntelliJ, I was able to set a breakpoint in URLClassLoader.
You can configure this breakpoint to log a custom message instead of breaking.
If you want to be able to turn this on in production you could of course write your own classloader.
This isn't difficult, but you will have to figure out how to log to the logging framework without logging the loading of the logging framework itself. I guess the easiest way would be to ignore some predefined packages when logging.
If you choose this route, a shell of a solution might look like the sketch below.
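This is a minimal sketch only, assuming java.util.logging as the output channel (JUL is loaded by the bootstrap loader, so logging through it cannot recurse into this loader) and a hard-coded ignore list; both are illustrative choices:

import java.util.logging.Logger;

public class LoggingClassLoader extends ClassLoader {

    private static final Logger LOG = Logger.getLogger("classloading");

    // Packages whose loading we skip, to avoid logging the logging framework.
    private static final String[] IGNORED = { "java.", "javax.", "sun." };

    public LoggingClassLoader(ClassLoader parent) {
        super(parent);
    }

    @Override
    protected Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException {
        if (!ignored(name)) {
            LOG.info("loading " + name + " via " + this);
        }
        return super.loadClass(name, resolve);
    }

    private static boolean ignored(String name) {
        for (String prefix : IGNORED) {
            if (name.startsWith(prefix)) {
                return true;
            }
        }
        return false;
    }
}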
I have 3 questions about logging inside Spring.
First:
The Spring documentation says:
By default, if you use the 'Starter POMs', Logback will be used for logging. Appropriate Logback routing is also included to ensure that dependent libraries that use Java Util Logging, Commons Logging, Log4J or SLF4J will all work correctly.
I don't understand this: if a third-party library uses a different logger, what problem does that create in the program? If that library uses another logger, the logger is declared as a dependency of its JAR file, so when the library is added the logger is added too, and there should be no problem.
Second:
I saw in a tutorial that trace and debug are disabled by default in Spring because they cause performance problems. I understand why trace is a problem, because it must report everything that happens in the program. But why does debug cause performance problems? When I set debug=true, it didn't cost me that much time. So what's the problem?
Third:
In this tutorial, it says that Logback does not have a FATAL level. Why not? Is it possible that a Spring Boot program is missing some of its required settings but can still start, without any need for FATAL?
Different logging implementations require different configuration. Log4j uses XML and java.util.logging (JUL) uses properties files; and even where XML is used, the semantics differ.
So you do not want to configure each logging implementation individually. You want one logging configuration to rule them all, a single source of truth for logging config; this has nothing to do with the main intent of the software you are running. Newer logging frameworks generalize older ones, so you need the latest logging framework to rule them all.
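As an illustration of that routing, here is a minimal sketch using the jul-to-slf4j bridge (the artifact Spring Boot's default logging starter pulls in; the setup calls below are normally done for you by Spring Boot):

import java.util.logging.Logger;
import org.slf4j.bridge.SLF4JBridgeHandler;

public class BridgeDemo {
    public static void main(String[] args) {
        // Route java.util.logging records to SLF4J (Spring Boot arranges
        // this automatically when the starters are used).
        SLF4JBridgeHandler.removeHandlersForRootLogger();
        SLF4JBridgeHandler.install();

        // This JUL call now ends up in Logback and obeys its single config.
        Logger.getLogger(BridgeDemo.class.getName()).info("routed through SLF4J");
    }
}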
Let me rephrase first: why do we distinguish between debug and trace? Debug (or de-bug) is a special condition that lets you inspect a bug for debugging purposes. Debug output may show a client's real-world first name and family name; you only need to output that information under debugging circumstances. Logging it routinely may even cause legal problems, because you would be processing/storing personal information in log files without permission. To de-bug a software problem you need the debug log in 90% of all cases; only in rare cases do you need the trace log. That is why they differ.
That's a good one. Fatal, for me, means the server has hardware problems (a burning hard drive, loss of the power supply), and those are indicated by errors. Seriously? I have no idea. I would argue that everything that is fatal-worthy should simply be an error.
I am using SLF4J as a logging facade and let users decide where and what to log. Now, in case of a crash, I want to send a file to the server that contains debugging information, which basically means a log file. And since we already have all those log statements scattered through the code, why not use them?
So basically, I want to create a log file programmatically via SLF4J, transparently for the user, who can still plug in his own logging backend and configuration.
My first idea was to implement org.slf4j.impl.StaticLoggerBinder and deliver my own implementation of a logger that does its own logging and then delegates to the user-configured logger. However, I see certain issues with this: if the user adds a normal logging backend, multiple instances of org.slf4j.impl.StaticLoggerBinder end up on the classpath. This will issue a warning, AND I might not be able to make sure that my implementation is the one that gets called.
Are there better solutions to this? A whole different approach? Is the idea inherently bad? How to accomplish this?
The point of SLF4J is not to let application end-users choose their logging framework (why would they care?) but to let developers include a library without being tied into the library's choice of logging framework.
So if you want to upload debug information from a deployed application, it's fine to fix the logging implementation. The user can still edit the implementation's configuration file, if they want.
Since SLF4J is open source, you can modify it to use a class other than org.slf4j.impl.StaticLoggerBinder. Then your custom StaticLoggerBinder class could load the original, user-provided org.slf4j.impl.StaticLoggerBinder (if it exists).
Another idea is using a custom LoggerFactory (not org.slf4j.LoggerFactory) in your application which returns a Logger delegate. This delegate class delegates logging method calls to the original Logger implementation and also sends the logs to the server when necessary.
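A minimal sketch of that factory idea, using a JDK dynamic proxy so you don't have to hand-write every method of org.slf4j.Logger; CrashBuffer is a hypothetical sink that would keep recent calls for the crash report:

import java.lang.reflect.Proxy;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public final class CapturingLoggerFactory {
    public static Logger getLogger(Class<?> type) {
        Logger real = LoggerFactory.getLogger(type);
        return (Logger) Proxy.newProxyInstance(
                Logger.class.getClassLoader(),
                new Class<?>[] { Logger.class },
                (proxy, method, args) -> {
                    // Record the call for the crash report (hypothetical sink)...
                    CrashBuffer.record(method.getName(), args);
                    // ...then delegate to the user-configured backend.
                    return method.invoke(real, args);
                });
    }
}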
Anyway, both of these look like awkward hacks to me; creating two artifacts (one for end users and another for developers) smells better.
(Finally, I don't know what kind of library/application this is, but in my working environment it would not be acceptable for a library to send data to a third-party server. Are you sure that you really need to do this?)
I have a new puzzle for you :-).
I was thinking about how an application should handle its own start-up, like checking for required libraries, correct versions, database connectivity, database compatibility, etc. To be specific, here is the test case: I use SWT and Log4J, for obvious reasons. Now, the questions:
Should the app check itself for the required dependencies? If yes, should the user be given specific details of what is missing? Or just a message, with details in the logs?
What if the Log4J library is unavailable?
What is the best way to do the test? Verifying file existence (using file.exists() at a specified path), or loading a class, say Class.forName("org.apache.log4j.Logger")? And what is the proper order for the checks? For instance, if I test for SWT first, I have no idea whether the logger is available, and the error will occur when I try to access it. Backwards, if I test for the logger first: a) the lib could be unavailable, so I cannot log the error; b) SWT could be unavailable, so I am unable to display the user message.
I've discovered the Apache Commons Lang framework today, and I find the method org.apache.commons.lang.SystemUtils.isJavaVersionAtLeast(Float value) very useful, and many others, I am sure. However, doesn't importing too many libraries into your project make it hard to maintain? Versions change, compatibilities are lost, and one cannot control a third party's development style or direction.
Thank you for your answers.
I agree with your need. Checking for the required runtime environment provides:
immediate feedback, instead of random breakage when accessing some functionality
a hopefully more skilled audience, as the immediate feedback goes to the person installing the software, hopefully more skilled than an average user, or at least less confident (installing is always a special operation). A more skilled user is less disturbed if the error comes on the console; he doesn't depend on a graphical interface.
improved reporting: the error message can be explicit (you're in charge), while default error messages come in many flavours (they are not always that helpful on 1. what's wrong and 2. suggesting a fix).
But please note that the runtime requirements could be checked in two situations:
when installing: long verifications are always acceptable; if a library is not there, or a required database or web service is not accessible, it won't be there at runtime either, so you can complain immediately.
when starting the execution: you can verify again (and some verifications may only be possible at that point)
This suggests creating an installer for your application.
Potentially, errors would not all be blocking for the installation. Some would rather accumulate as a list of tasks to be done after installation, maybe nicely formatted in a file with all reference information.
Here we once again hit the notion of error levels in validation (similar to what happens with Log4J): some validation errors are at fatal level, others are errors, possibly also warnings...
In our projects, we have some sort of initialization and validation going on on startup. Based on our day-to-day experience, I would suggest the following:
When the application gets big, you don't want to have all init centralized in one class, so we have a modular structure.
A small kernel is configured with a list of module classes. Its whole init sequence is under strict control, ready for any exception (translating them to appropriate messages, but memorizing the stack traces that are so useful to developers), making no assumption about the available libraries and so on. CheckStyle can be configured specially for this code.
The interface (an abstract class is of course possible) that the modules implement typically has several initialization methods (a sketch follows the list). They could be:
getDependencies: returns a list of the modules that this one depends on.
startup: when the whole application is starting. This will be called only once during startup, and cannot be called again.
start: when the module gets ready for regular operation.
stop: reverse of start.
shutdown: reverse of startup.
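A minimal sketch of such a module contract, assuming the method names above (the error-reporting types the kernel would define are left out for brevity):

import java.util.List;

public interface Module {

    // Modules that must be initialized before this one.
    List<Class<? extends Module>> getDependencies();

    // Called exactly once while the whole application is starting.
    void startup() throws Exception;

    // Called when the module should become ready for regular operation.
    void start() throws Exception;

    // Reverse of start.
    void stop() throws Exception;

    // Reverse of startup; called once during shutdown.
    void shutdown() throws Exception;
}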
The kernel instantiates each of the modules in turn. Then it calls one init method on all of them, then another init method, and so on as needed. Each init method can:
signal error conditions (using levels, like Log4J).
an exception thrown would be caught by the kernel, and translated to an error condition
consult another module for its status (because dependencies are the general case), and react accordingly. If needed, the dependencies could be made declarative.
The kernel takes care of module dependencies generically:
It sorts the modules so that dependencies are respected.
It doesn't initialize a module if one of its dependencies couldn't make it.
If asked to stop a module, it will first stop the modules that depend on it.
A nice feature of this kernel approach is that it is easy to aggregate the errors at various levels (although a fatal one could stop it) and report all of them at the end, using whatever means are available (SWT or not, Log4J or not...). So instead of discovering the problems one after another, and having to start again each time, you can deliver them in one blow (nicely prioritized, of course).
Concerning your precise questions:
Should the app check itself for the required dependencies?
Yes (see above).
If yes, should the user be given specific details of what it's missing? Or just a message, and details to the logs?
As said above, when installing, the user is better prepared to deal with this.
When starting, we use an easy message for the end user, but give access to the full stack traces for the developer (we have a button that copies the application environment, the stack traces and so on to the clipboard).
What if the Log4J library is unavailable?
Log without it (see above).
What is the best to do the test? Verifying the file existance (using file.exists(), at specified path), or loading a class, say Class.forName("org.apache.log4j.Logger")?
I would load a class. But if that failed, I might check the file's existence on disk to give an improved message, including "how to fix".
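A minimal sketch of that load-then-diagnose check; the class name, JAR path and message wording below are illustrative only:

public final class DependencyCheck {
    public static void requireClass(String className, java.io.File expectedJar, String hint) {
        try {
            Class.forName(className);
        } catch (ClassNotFoundException e) {
            // Fall back to a file check so the message can suggest a fix.
            String detail = expectedJar.exists()
                    ? "found " + expectedJar + " but it does not contain the class"
                    : "expected JAR " + expectedJar + " is missing";
            throw new IllegalStateException(
                    "Missing dependency " + className + " (" + detail + "). " + hint);
        }
    }
}

// Usage (illustrative):
// DependencyCheck.requireClass("org.apache.log4j.Logger",
//         new java.io.File("lib/log4j.jar"), "Put log4j.jar in lib/.");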
What should be the proper order for the checks? For instance, if I test for SWT first, I have no idea whether the logger is available, and the error will occur when I try to access it. Backwards, if I test for the logger first: a) the lib could be unavailable, so I cannot log the error; b) SWT could be unavailable, so I am unable to display the user message.
As I said above, I suggest these low-level errors get accumulated in a small area of code (the kernel), where you can use anything that is available to display them. If nothing is available, you can simply log to the console without Log4J.
The short answer is no. The JVM handles this appropriately at initialization or at runtime. If a required class is not found on the classpath, a ClassNotFoundException (or NoClassDefFoundError) will be thrown. If a class was found but a required method was not, a NoSuchMethodError is thrown.
Regarding 1 through 3, there are 2 main use cases here:
application packaging is under your control, and you can make sure that all required dependencies are packaged properly. Run-time validations are not useful here.
application packaging is not under your control, and you deliver the main JAR plus instructions on what the requirements are. Run-time validations might be useful, but someone who wants to package your application usually has enough skill to understand what a ClassNotFoundException: org.apache.logging.LogManager means.
Regarding 4, as long as you keep the same version of the dependency included in your project, you will have no problems keeping control. Upgrading to a newer version is a conscious decision, which requires thought and testing.
Is there a way to determine which classes are loaded from which JARs at runtime?
I'm sure we've all been in JAR hell before. I've run across this problem a lot while troubleshooting ClassNotFoundExceptions and NoClassDefFoundErrors on projects. I'd like to avoid finding all instances of a class in JARs and using process of elimination on the code causing a CNFE to find the culprit.
Will any profiling or management tools give you this kind of information?
This problem is super annoying purely because we should have this information at the time the class gets loaded. There has to be a way to get at it, or to record it and find it, yet I know of nothing that will do this. Do you?
I know OSGi and versioned bundles/modules aim to make this a non-issue... but it doesn't seem to be going away any time soon.
Note: I found this question is a subset of my question related to classes loaded from versioned jars.
Somewhat related, this post explains a strategy for searching for a class within JARs, either under the current directory or in your M2_REPO: JarScan, scan all JAR files in all subfolders for specific class
Also somewhat related, JBoss Tattletale
Passing the -verbose:class switch to the java command will print each class loaded and where it was loaded from.
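For example (the paths are placeholders and the output format varies across JDK versions; on JDK 9+ the equivalent is -Xlog:class+load):

java -verbose:class -cp app.jar com.example.Main
[Loaded java.lang.Object from /usr/lib/jvm/java-8/jre/lib/rt.jar]
[Loaded com.example.Main from file:/home/user/app.jar]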
Joops is also a nice tool for finding missing classes ahead of time.
From code you can call:
myObject.getClass().getProtectionDomain().getCodeSource()
(Note that getProtectionDomain may unfortunately return null (bad design), so proper code would check for that.)
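A sketch with those checks spelled out; note that the location can also be null for classes loaded by the bootstrap loader:

java.security.ProtectionDomain pd = myObject.getClass().getProtectionDomain();
java.security.CodeSource cs = (pd == null) ? null : pd.getCodeSource();
java.net.URL location = (cs == null) ? null : cs.getLocation();
// Null here usually means a bootstrap class with no code source.
System.out.println(location != null ? location : "no code source (bootstrap class?)");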
There is an MBean for the JVM flag mentioned by Jason Day above.
If you are using JBoss, you can twiddle this on demand using JMX, if you add the native JMX MBean server to your config. Add the following -D's:
-Dcom.sun.management.jmxremote.port=3333
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false
-Djboss.platform.mbeanserver
-Djavax.management.builder.initial=org.jboss.system.server.jmx.MBeanServerBuilderImpl
-DJBOSS_CLASSPATH="../lib/jboss-system-jmx.jar"
And then you can see this setting under the java.lang:type=ClassLoading MBean and toggle it on/off on the fly. This is helpful if you only want it on while executing a certain piece of code.
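The same attribute is reachable from code via the platform MBean server, which is handy for scoping it to one code path; a minimal sketch:

import java.lang.management.ManagementFactory;

public class VerboseScope {
    public static void run(Runnable body) {
        // Flip the ClassLoading MBean's Verbose attribute around the body.
        ManagementFactory.getClassLoadingMXBean().setVerbose(true);
        try {
            body.run();
        } finally {
            ManagementFactory.getClassLoadingMXBean().setVerbose(false);
        }
    }
}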
There is also an MBean which will allow you to enter a fully qualified classname and see where it was loaded from in the class hierarchy. The MBean is called LoaderRepository and you'll want to invoke the displayClassInfo() operation, passing in the FQCN.
In WebSphere (WAS) you can use a feature called "Class Loader Viewer"
Enable the class loader viewer first by clicking Servers > Server Types > WebSphere application servers > server_name > Class loader viewer service, enabling the service, and restarting the server.
Then you can go to Troubleshooting > Class Loader Viewer and search for your class or package name.
https://www-01.ibm.com/support/knowledgecenter/SSAW57_8.5.5/com.ibm.websphere.nd.doc/ae/ttrb_classload_viewer.html?lang=en
You can easily export a JMX operation to access package info for any loaded class in your process, like:
public static final class Jmx {

    @JmxExport
    public static Reflections.PackageInfo getPackageInfo(@JmxExport("className") final String className) {
        return Reflections.getPackageInfo(className);
    }
}
and here is a simple unit test to export and invoke it:
@Test
public void testClassLocator() throws IOException, InstanceNotFoundException, MBeanException, ReflectionException {
    Registry.export(Jmx.class);
    Reflections.PackageInfo info = (Reflections.PackageInfo) Client.callOperation(
            "service:jmx:rmi:///jndi/rmi://:9999/jmxrmi",
            Jmx.class.getPackage().getName(),
            Jmx.class.getSimpleName(), "getPackageInfo", Registry.class.getName());
    System.out.println(info);
    Assert.assertNotNull(info);
}
This is all done using a small utilities library from spf4j (http://www.spf4j.org).
you can see this code at and the test at
I wanted to add to my jdk6\jre\lib\security\java.policy file a prohibition against creating certain classes that are blacklisted by App Engine. For example, I want my local JVM to throw an exception when the application tries to instantiate javax.naming.NamingException.
Is it possible?
I will try to explain my specific problem here. Google offers a service (GAE, Google App Engine) that has some limitations on which classes can be used. For example, you cannot instantiate JNDI classes from the javax.naming package. They also offer a testing server that can be used to test the application on my machine, but this server allows such classes and will execute the code. You find out that you used a blacklisted class only after you upload your application to Google. I was wondering whether such blacklist enforcement could be done on the development JVM. And since this would be easy, I was thinking they might already provide such a policy file.
You could write a small loader application that creates a new, custom classloader. Your application classes could then be loaded using this classloader.
In the custom classloader, you can then throw ClassNotFoundException when your application tries to access a class that you want to blacklist.
You will need to override the loadClass() method. This method will be responsible for throwing the exception for your blacklisted classes or delegating to the parent classloader if the class is allowed. A sample implementation:
@Override
public Class<?> loadClass(String name) throws ClassNotFoundException {
    if (name.equals("javax.lang.ClassIDontLike")) {
        throw new ClassNotFoundException("I'm sorry, Dave. I'm afraid I can't do that.");
    }
    return super.loadClass(name, false);
}
(Of course, a real implementation can be way more sophisticated than this)
Because the classes of your application are loaded through this classloader, and you only delegate the loadClass() invocations to the parent classloader when you want to, you can blacklist any classes you need.
I am pretty sure this is the method Google uses to blacklist classes on their server: they load every app in a specific classloader. This is also similar to the way Tomcat isolates different web applications.
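For completeness, the small loader application described above might look like this; BlacklistingClassLoader is a hypothetical URLClassLoader subclass containing the loadClass() override shown earlier, and the JAR path and main class name are placeholders:

import java.net.URL;
import java.net.URLClassLoader;

public class Launcher {
    public static void main(String[] args) throws Exception {
        // Load the application's classes through the blacklisting loader.
        URLClassLoader appLoader = new BlacklistingClassLoader(
                new URL[] { new URL("file:app.jar") });
        Class<?> main = appLoader.loadClass("com.example.app.Main");
        main.getMethod("main", String[].class).invoke(null, (Object) args);
    }
}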
Wouldn't you rather get compilation errors than runtime errors while testing your program? You could configure your IDE or compiler to warn you when an undesired class is used. I know AspectJ has some nice features for this: you can define compilation warnings/errors on join points and get feedback in e.g. Eclipse. To use this in Eclipse, you simply install the AspectJ plugin and write a suitable aspect. To get the errors while compiling from a command line or script, you would have to use the AspectJ compiler, but I doubt that you would need that.
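Such an aspect might look like the sketch below (AspectJ's declare error form; the pointcut only covers method calls and constructions, and the package pattern is illustrative):

public aspect GaeBlacklist {
    // Flag any use of javax.naming types as a compile-time error.
    declare error
        : call(* javax.naming..*.*(..)) || call(javax.naming..*.new(..))
        : "javax.naming is blacklisted on Google App Engine";
}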
The Java documentation lists all possible policy permissions here:
http://java.sun.com/javase/6/docs/technotes/guides/security/permissions.html
Class creation / loading is not mentioned, so I believe you cannot enforce this using a policy.
At any rate, why do you want to throw an exception when an exception class is loaded? Maybe you could explain your problem, then someone might be able to propose a solution.
Edit:
One way to prevent loading of certain classes would be to remove them from the JRE installation. Most system classes are contained in rt.jar in your JDK/JRE installation. You should be able to modify it with any ZIP tool.
Just create a special installation of your JRE, and modify its rt.jar. That is an ugly hack, but should be OK for testing purposes...