I'm running two Java programs from within MATLAB, which means they both share the same JVM instance with MATLAB. This is a problem when trying to use Log4J, because it's statically configured. If I run PROG1 (and it configures Log4J), then run PROG2 (which also tries to configure Log4J), then PROG2 subsequently fails to log its output to the correct place in certain versions of MATLAB.
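To illustrate the clash, here is a minimal sketch (assuming the log4j 1.x API; the property file names are hypothetical):

    // Both "programs" run inside MATLAB's single JVM, so they share one static Log4J configuration.
    org.apache.log4j.PropertyConfigurator.configure("prog1-log4j.properties"); // PROG1 startup
    // ... later, in the same JVM, PROG2 configures again:
    org.apache.log4j.PropertyConfigurator.configure("prog2-log4j.properties");
    // configure() does not reset the existing configuration, so PROG2's appenders
    // are piled on top of PROG1's and output can end up in the wrong place.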
I want to somehow configure PROG1's project in Eclipse to forever forbid the use of Log4J, but it should still be allowed in PROG2. Just removing log4j.jar from the PROG1 project's classpath doesn't solve the problem, because there's no way to prevent it from being re-added as a direct or indirect dependency.
Ideally, it would be nice if I could configure the project so Eclipse will always check for this at compile-time and add markers (like its existing compile warnings & errors), and tag any Log4J accesses (including our own utility classes for configuring Log4J) in that project with a brief message explaining this problem.
The next best thing I can think of is to further complicate the Ant script, but that still leaves open the possibility that I or someone else might add Log4J logging to the PROG1 project in the future and not realize the consequences until significant time has been spent and a build is eventually run.
Any ideas how I can forever forbid the use of Log4J in a specific project, such that it's also adequately protected from the same problem in the future?
This is regarding the vulnerability reported as CVE-2021-44228 against the log4j-core jar, which has been fixed in Log4J v2.15.0.
We use the Logback API via slf4j. This is confirmed with the code below.
    import org.slf4j.impl.StaticLoggerBinder;

    final StaticLoggerBinder binder = StaticLoggerBinder.getSingleton();
    System.out.println(binder.getLoggerFactory());
    System.out.println(binder.getLoggerFactoryClassStr());
    // output:
    // ch.qos.logback.classic.LoggerContext[default]
    // ch.qos.logback.classic.util.ContextSelectorStaticBinder
mvn dependency:tree shows log4j-core (version < 2.15) on the classpath, as both a direct and a transitive dependency.
Is the application still vulnerable just by keeping log4j-core on the classpath? Thank you!
In order for a vulnerability to be a risk to you, several things need to come together:
1. the corresponding library exists in your environment
2. the corresponding library calls do happen in your environment at runtime
3. third-party users figure out a way to get their (unchecked) input to that library call
Nobody here can tell you whether 2. and 3. are applicable in your environment.
But: when you eliminate 1., you know that 2. and 3. are no longer possible. Or the other way round: as long as you are 100% convinced that there is no path by which a user can enter data into your system that makes it to the corresponding API, you should be fine even with leaving the library sitting in your environment. But as said, having the library is the mandatory first element of the chain. So while it is present, it is possible that somebody writes code tomorrow that gets you to 2. and 3.!
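As a quick runtime check of element 1, you can probe for the class that CVE-2021-44228 actually exploits (a minimal sketch; JndiLookup is the class that the common mitigation removes from the jar):

    // Probe whether a log4j-core 2.x with the JNDI lookup is reachable on the classpath.
    try {
        Class.forName("org.apache.logging.log4j.core.lookup.JndiLookup");
        System.out.println("JndiLookup is present -- element 1 of the chain exists");
    } catch (ClassNotFoundException e) {
        System.out.println("JndiLookup is not on the classpath");
    }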
Thus, keep in mind the perspective of higher management: most likely the business decision will be to reduce the risk to zero, so make sure you don't even have the corresponding JAR sitting on your machines.
In my bigcorp environment, orders were pretty simple: don't waste any time analysing whether your code uses the corresponding interfaces. When your projects contain the vulnerable JAR, upgrade it immediately. Period.
I am coding a java web app.
When I started, every time I needed to use an external package, I would download the jars manually and download all dependencies of each jar manually and place them in the libraries folder (in Netbeans).
As time went on, I started using a dependency manager (Ant).
Now, I would like to use my dependency manager for all of my external libraries.
If, after executing this change I run my application and it successfully deploys (no ClassNotFoundExceptions and no NoClassDefFoundErrors), is it safe to assume that I have not missed anything and that my application will run smoothly as far as the external packages go?
Or, do I need to individually test out each functionality in my web app to confirm that the changes I made to the libraries didn't change how the application runs?
It actually depends on the code inside these libraries. Only part of the classes are loaded at startup, so you can miss something. There is also the possibility that you load some classes at runtime manually, e.g. via Class.forName(String), and that this code has simply not been triggered at startup; see the sketch below. Thus, I would say you can't be 100% sure.
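For example (a sketch; the class and method names are hypothetical), a reflective load like this only fails when the branch actually runs, not at deployment time:

    public void exportReport(boolean asPdf) throws Exception {
        if (asPdf) {
            // Resolved only when a PDF export is actually requested; a missing jar
            // surfaces as a ClassNotFoundException here, not at startup.
            Class<?> renderer = Class.forName("com.example.pdf.PdfRenderer");
            renderer.getDeclaredConstructor().newInstance();
        }
    }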
Generally, in Java there are three build approaches:
Imperative - you say how to assemble your code. The typical example of this is Apache Ant.
Declarative - you say which code you want to assemble. The typical example of this is Apache Maven.
Mixed - which takes the benefits of both previous systems. This is Gradle.
Hope it helps!
I'm using WebSphere Application Server 8.5.5.6 and 8.5.5.8 and from time to time run into problems when some jar or other in my application conflicts with one that is already present on the WAS. It's easy to fix, of course: simply mark the dependency as "provided" in Maven and there you go. But since IBM seemingly chose to write the AS with the most obscure error messages possible, it takes ages to find something like that out.
My question which google hasn't been able to answer so far:
Is there a complete list somewhere of which libraries, in which versions, are provided with WebSphere?
Assuming you're referring primarily to Open Source packages, the official list is here: https://www.ibm.com/support/knowledgecenter/en/SSAW57_8.5.5/com.ibm.websphere.nd.multiplatform.doc/ae/opensourcesoftwareapis.html
Beyond that, most of the stuff visible to apps should be Java EE/SE APIs, which I assume you were already expecting, and IBM-specific implementations (things in com.ibm.* packages), which are hopefully at low risk of collision.
At least if you are on Windows: take Process Monitor (not Process Explorer) and fire it up, filtering on Path contains ".jar". Then start WebSphere. At some point it will start loading jars from various directories. Process Monitor will show you which jars those are and where they are being loaded from.
This should provide you with first-hand information without reading IBM documents.
Besides, you are probably aware of this, but in any case: you should be careful with marking a dependency as "provided", since the version of the library used by your application might differ from the version used by WebSphere.
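A runtime trick that complements this (a sketch; the class name is only an example): ask a conflicting class where it was actually loaded from, which tells you whether WebSphere's copy or your own won:

    public static void main(String[] args) throws Exception {
        // Pick the class you suspect is being duplicated.
        Class<?> c = Class.forName("org.apache.commons.logging.LogFactory");
        java.security.CodeSource src = c.getProtectionDomain().getCodeSource();
        // null usually means a bootstrap/JVM-internal class; otherwise this prints the jar's location.
        System.out.println(src != null ? src.getLocation() : "bootstrap class loader");
    }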
This is a follow-up (of sorts) to my question Third-party dependencies to an OSGi application, where it was suggested that some libraries, e.g. log4j, are already available as bundles.
In Eclipse Indigo I could not find a log4j bundle available to Import Package as part of my installation, so I created a Plug-in Project from a JAR archive to bundle log4j, and also a Fragment Project to bundle the log4j.xml configuration, following this post.
To be honest I don't understand why the fragment project is needed, but this process works.
So my question now is:
Since the log4j.xml is delivered in the export as part of the feature jar, it requires some "effort" for someone to find it and update the debug levels, so I was wondering: is this indeed the correct process?
I had in mind that the final exported product would deliver the log4j configuration in an easy-to-find location, but now (although the logging works) I am concerned whether what I am doing is indeed correct.
Any help here?
If you really need to expose the file, you could put it anywhere you want, and then make sure your program calls one of these methods at startup:
    org.apache.log4j.xml.DOMConfigurator.configure(String filename)
    org.apache.log4j.PropertyConfigurator.configure(String configFilename)
Or use the configureAndWatch variants if you would like to make changes to the config without restarting your application; see the sketch below.
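A minimal sketch of that startup call, assuming the file is shipped to a well-known external location (the path and system property are hypothetical):

    // Let operators override the location; fall back to a conventional path.
    String cfg = System.getProperty("myapp.log4j.config", "/etc/myapp/log4j.xml");
    // Re-reads the file every 30 seconds, so debug levels can be changed without a restart.
    org.apache.log4j.xml.DOMConfigurator.configureAndWatch(cfg, 30000L);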
Edit: I write "if you really need to" because, in my experience, I never need to turn on debug logging after deployment: it is always turned on! This is OK for applications where I have normal (but not extreme) requirements on response time and throughput. Logging to a UDP appender is fast (and does not fill up the disk), and using a rolling file appender is quite safe and fast enough for my use. Always having the debug log available is a life-saver when nailing down those hard-to-reproduce bugs.
I suggest taking a look at Pax Logging. This will give you all kinds of logging frameworks for use in an OSGi environment, and you're able to use an external configuration file (no extender needed) to configure your logging.
The fragment is one option to extend the log4j bundle's classpath to include the required configuration file. It is probably the simplest way of configuring application-wide properties.
This is not meant to be altered after deployment, though, as the file will be embedded within a jar. You will have to come up with a different approach if you expect to make it configurable after deployment.
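For reference, the fragment itself is tiny: besides the log4j.xml at its root, it essentially only needs a manifest declaring its host (a sketch, assuming the log4j bundle's symbolic name is org.apache.log4j, as in Eclipse Orbit):

    Manifest-Version: 1.0
    Bundle-ManifestVersion: 2
    Bundle-SymbolicName: log4j.config.fragment
    Bundle-Version: 1.0.0
    Fragment-Host: org.apache.log4j

At resolve time the fragment's contents are appended to the host bundle's classpath, which is how log4j finds the configuration.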
NOTE:
I am afraid you misunderstood the answer about the jars that are already available as bundles. This does not mean that they are part of your OSGi platform of choice (Indigo), only that they are ready to be deployed to an OSGi platform as is. Your creation of a plugin project was unnecessary, you simply needed to add the jar to your target platform to resolve your missing imports.
I guess this is kind of a follow-on to question 1522329.
That question talked about getting a list of all classes used at runtime via the java -verbose:class option.
What I'm interested in is automating the build of a JAR file which contains my class(es), and all other classes they rely on. Typically, this would be where I am using code from some third party open source product's "client logic" but they haven't provided a clean set of client API objects. Their complete set of code goes server-side, but I only need the necessary client bits.
This would seem a common issue but I haven't seen anything (e.g. in Eclipse) which helps with this. Am I missing something?
Of course I can still do it manually by:
biting the bullet and including all the third-party code in a massive JAR (offending my purist sensibilities)
a source walkthrough
trial and error
-verbose:class type stuff (but the latter wouldn't work where, say, my code runs as part of a J2EE servlet, and thus I only want to see this for a given Tomcat webapp and, ideally, only for classes related to my classes therein)
I would recommend using a build system such as Ant or Maven. Maven is designed with Java in mind, and is what I use pretty much exclusively. You can even have Maven assemble (using the assembly plugin) all of the dependent classes into one large jar file, so you don't have to worry about dependencies.
http://maven.apache.org/
Edit:
Regarding the servlet, you can also define which dependencies you want packaged up with your jar, and if you are making a stand-alone application you can have the jar tool make an executable jar.
Note: yes, I am a bit of a Maven advocate, as it has made the project I work on much easier. (No, I do not work on the Maven project personally. :)
Take a look at ProGuard.
ProGuard is a free Java class file shrinker, optimizer, obfuscator, and preverifier. It detects and removes unused classes, fields, methods, and attributes. It optimizes bytecode and removes unused instructions. It renames the remaining classes, fields, and methods using short meaningless names. Finally, it preverifies the processed code for Java 6 or for Java Micro Edition.
What you want is not only to include the classes you rely on, but also the classes that the classes you rely on rely on. And so on, and so forth.
So that's not really a build problem, but more a dependency one. To answer your question, you can either solve this with Maven (apparently) or Ant + Ivy.
I work with Ivy and I sometimes build "ueber-jars" using the zipgroupfileset functionality of the Ant jar task; see the sketch below. Not very elegant, some would say, but it's done in 10 seconds :-)
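A minimal sketch of such a jar task (the directory names are hypothetical; zipgroupfileset unpacks every jar in lib/ into the output jar alongside your own classes):

    <jar destfile="dist/ueber.jar">
        <!-- your own compiled classes -->
        <fileset dir="build/classes"/>
        <!-- merge the contents of every dependency jar -->
        <zipgroupfileset dir="lib" includes="*.jar"/>
    </jar>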