javax.xml.parsers.SAXParserFactory ClassCastException - java

On my local machine I get the following exception when running the tests with Maven (mvn test).
ch.qos.logback.core.joran.event.SaxEventRecorder@195ed659 - Parser configuration error occured
java.lang.ClassCastException: com.sun.org.apache.xerces.internal.jaxp.SAXParserFactoryImpl cannot be cast to javax.xml.parsers.SAXParserFactory
After googling around I came across several pages which describe the main problem behind it (several SAXParserFactoryImpl in different classloaders).
-> http://www.xinotes.org/notes/note/702/
My question is: how can I figure out which library also provides a SAXParserFactoryImpl, so that I can exclude it? I am using Maven, IntelliJ and JDK 1.6.0_23. The issue occurs on the command line as well as when running the tests from IntelliJ.
Strangely, the issue doesn't occur on the build server.
Update 1
Just figured out: the first time I run mvn test after an mvn clean, the error doesn't appear. But as soon as I run mvn test again (without clean) from IntelliJ, the exception occurs.
On the command line, several consecutive mvn test calls work fine.

I found the issue. It was related to PowerMockito, which tried to load the SAXParserFactory. The reason I hadn't figured that out earlier is that the stack trace mentioned PowerMockito only twice, and in the middle at that :-)
So if you run into this problem in IntelliJ and you use PowerMockito, annotate your test class with the following annotation:
@PowerMockIgnore({"javax.management.*", "javax.xml.parsers.*",
"com.sun.org.apache.xerces.internal.jaxp.*", "ch.qos.logback.*", "org.slf4j.*"})
This has solved the problem in my case.

Your JDK probably ships its own SAXParserFactoryImpl.
Check your dependencies for jars like xercesImpl, xml-apis and sax.
On your server, the implementation provided by the server is probably the one being used.
You can use a jar finder: http://www.jarfinder.com/index.php/java/search/~SAXParserFactoryImpl~
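To see which implementation actually wins at runtime, and where it was loaded from, you can ask the resolved factory class for its code source. This is a stdlib-only sketch (class name is made up); a null code source typically means the class came from the JDK's own bootstrap classloader:

```java
import java.security.CodeSource;
import javax.xml.parsers.SAXParserFactory;

public class WhichSaxFactory {
    public static void main(String[] args) {
        SAXParserFactory factory = SAXParserFactory.newInstance();
        // The concrete implementation class that JAXP resolved
        System.out.println(factory.getClass().getName());
        // The jar it was loaded from; null usually means the bootstrap
        // classloader, i.e. the JDK's built-in copy
        CodeSource source = factory.getClass().getProtectionDomain().getCodeSource();
        System.out.println(source == null ? "bootstrap (JDK built-in)" : source.getLocation());
    }
}
```

Running this in the same context as the failing tests (e.g. from a small test method) shows which of the competing jars is being picked up there.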

I encountered the same error today. After a lot of digging, I found that the solutions here and elsewhere were not helpful.
However, after playing around, I found a solution that works deterministically, unlike the accepted answer, which does not apply to all cases.
The answer: look through the stack trace for any ClassCastExceptions, and add the offending packages to the @PowerMockIgnore list. Keep repeating until the issue is solved. Worked like magic for me.

Related

Jar hell for missing classes folder when running ESRestTestCase

So I'm running into a jar hell problem when trying to run individual integration tests (using -Dtest=) that extend ESRestTestCase (ESTestCase). The issue seems to be that some Elasticsearch classpath-validation class requires target/classes to exist. However, this project is only for testing, so that requirement doesn't make sense.
This happened with Elasticsearch 7.0.0 and Java 1.8.0_251. Not sure if this is a problem with later versions.
java.lang.RuntimeException: found jar hell in test classpath
at org.elasticsearch.bootstrap.BootstrapForTesting.<clinit>(BootstrapForTesting.java:98)
at org.elasticsearch.test.ESTestCase.<clinit>(ESTestCase.java:229)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at com.carrotsearch.randomizedtesting.RandomizedRunner$2.run(RandomizedRunner.java:623)
Caused by: java.nio.file.NoSuchFileException: <MY PROJECT FOLDER PATH HERE>/target/classes
at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
at sun.nio.fs.UnixFileAttributeViews$Basic.readAttributes(UnixFileAttributeViews.java:55)
at sun.nio.fs.UnixFileSystemProvider.readAttributes(UnixFileSystemProvider.java:144)
at java.nio.file.Files.readAttributes(Files.java:1737)
at java.nio.file.FileTreeWalker.getAttributes(FileTreeWalker.java:219)
at java.nio.file.FileTreeWalker.visit(FileTreeWalker.java:276)
at java.nio.file.FileTreeWalker.walk(FileTreeWalker.java:322)
at java.nio.file.Files.walkFileTree(Files.java:2662)
at java.nio.file.Files.walkFileTree(Files.java:2742)
at org.elasticsearch.bootstrap.JarHell.checkJarHell(JarHell.java:199)
at org.elasticsearch.bootstrap.JarHell.checkJarHell(JarHell.java:86)
at org.elasticsearch.bootstrap.BootstrapForTesting.<clinit>(BootstrapForTesting.java:96)
... 4 more
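The Caused by above is just Files.walkFileTree failing on a directory that does not exist; the jar-hell check walks every classpath entry, including target/classes. A minimal stdlib sketch reproducing the same failure mode (the directory name here is made up):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.NoSuchFileException;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.SimpleFileVisitor;

public class WalkMissingDir {
    public static void main(String[] args) throws IOException {
        // A classpath entry that was never built, like target/classes above
        Path missing = Paths.get("target", "no-such-classes-dir");
        try {
            // SimpleFileVisitor.visitFileFailed rethrows by default, which
            // is why the exception escapes from the tree walk
            Files.walkFileTree(missing, new SimpleFileVisitor<Path>() {});
            System.out.println("walked ok");
        } catch (NoSuchFileException e) {
            System.out.println("NoSuchFileException: " + e.getFile());
        }
    }
}
```

This is why simply ensuring the directory exists (even empty) can make the check pass.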
At first I tried various methods for making sure the classes folder was still created. I ran into new problems that seemed fixed in later versions, so I upgraded to Elasticsearch 7.6.2 (and Lucene 8.4.0). Upgrading removed the need for the folder to exist but triggered other issues (similar to this). I worked around those with -Dtests.security.manager=false. Then I got thread-leak issues, which I "solved" by setting @ThreadLeakScope(ThreadLeakScope.Scope.NONE) on the test class.
Then finally it seemed to work when running individual tests with Maven on the command line. But when trying to run the Maven task with the debugger in IntelliJ, the debugger does not seem to attach.
So I tried running it as a JUnit templated test in IntelliJ. There I get jar hell, but debugging works. The jar hell was later solved by setting idea.no.launcher=true in idea.properties.
Alright, good enough for now. These workarounds are far from ideal but it works for my current purposes. A lot of things will change before moving into mainline anyway. And maybe this will help someone else. Hopefully I can post a better solution later.

Cobertura - classes are not instrumented anymore

I am using the gradle-cobertura-plugin in my Jenkins build. Yesterday I fixed an issue in this plugin that overwrote the configured auxiliaryClasspath; it prevented some classes from being present in the coverage report. The fix is quite simple:
I changed the following:
auxiliaryClasspath = project.files("${project.buildDir.path}/intermediates/classes/${classesDir}")
to
if (auxiliaryClasspath != null) {
    auxiliaryClasspath += project.files("${project.buildDir.path}/intermediates/classes/${classesDir}")
} else {
    auxiliaryClasspath = project.files("${project.buildDir.path}/intermediates/classes/${classesDir}")
}
Running the build locally with gradle cobertura, everything works fine and the missing classes show up in the report.
After installing the patched version of the plugin on Jenkins, the coverage on Jenkins went to zero.
Looking into what happened, I found that the classes in the instrumented_classes folder are not instrumented anymore! Rolling everything back (build.gradle, uninstalling my plugin, clearing the Gradle cache, etc.) didn't change the behaviour. As it works locally, I'm wondering what causes this issue.
I assume something is going awfully wrong that is perhaps logged and silently ignored, but I don't have a clue where to look for this information. The Jenkins logs are clean, so I think enabling a logger for the code responsible for instrumenting might help. Unfortunately I have no idea which loggers to enable; org.sourceforge.cobertura didn't output anything.
So my question is: Did anybody else see this behaviour and might throw in a clue how to resolve this issue?
OK, I figured it out. After much trial and error I found that a little change in the coverageExcludes property was the culprit. After changing it several times, the classes are instrumented once again. Funny that it worked locally but not on Jenkins. I'll have to dive a little deeper into this if it occurs again.
For now I'm happy that it works. :-)

NoSuchMethodError with Camel RouteDefinition class

I am trying to debug a Java / Maven project with a lot of dependencies on various libraries.
When I run it on a Linux server the program starts up fine, but when I try to run it in Eclipse it throws the following exception:
Exception in thread "main" java.lang.NoSuchMethodError: org.apache.camel.model.RouteDefinition.getErrorHandlerBuilder()Lorg/apache/camel/ErrorHandlerFactory;
at org.apache.camel.spring.spi.SpringTransactionPolicy.wrap(SpringTransactionPolicy.java:69)
at org.apache.camel.model.PolicyDefinition.createProcessor(PolicyDefinition.java:133)
at org.apache.camel.model.ProcessorDefinition.makeProcessor(ProcessorDefinition.java:437)
at org.apache.camel.model.ProcessorDefinition.addRoutes(ProcessorDefinition.java:183)
at org.apache.camel.model.RouteDefinition.addRoutes(RouteDefinition.java:817)
at org.apache.camel.model.RouteDefinition.addRoutes(RouteDefinition.java:165)
at org.apache.camel.impl.DefaultCamelContext.startRoute(DefaultCamelContext.java:697)
at org.apache.camel.impl.DefaultCamelContext.startRouteDefinitions(DefaultCamelContext.java:1654)
at org.apache.camel.impl.DefaultCamelContext.doStartCamel(DefaultCamelContext.java:1441)
at org.apache.camel.impl.DefaultCamelContext.doStart(DefaultCamelContext.java:1338)
at org.apache.camel.impl.ServiceSupport.start(ServiceSupport.java:67)
at org.apache.camel.impl.ServiceSupport.start(ServiceSupport.java:54)
at org.apache.camel.impl.DefaultCamelContext.start(DefaultCamelContext.java:1316)
Now, I can see that the RouteDefinition class is in camel-core-2.9.3.jar, and I can see that this library is imported. So how come it doesn't see that method?
How do I go about debugging this?
Could I get info from the process running on the Linux server? For example can I get the list of Jars that are imported and the order in which they are imported?
Many thanks!
The error you're getting is caused by Maven pulling in the wrong version. Try deleting all versions from your local repo, add the right one explicitly to your pom, clean out all of your builds, pray to the Eclipse gods, etc. If it still gives you the error, check your local repo to see which wrong versions were pulled in, figure out what depends on them, and add explicit exclusions for them while keeping the explicit include.
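Regarding the question about the running process: the classpath and its order are visible from inside the JVM via the java.class.path system property (jinfo &lt;pid&gt; can dump system properties of a live process, too). A small stdlib-only sketch; the RouteDefinition line is left as a comment because it needs Camel on the classpath:

```java
import java.io.File;

public class ClasspathOrder {
    public static void main(String[] args) {
        // Entries are listed in search order, which is the order that
        // decides which copy of a duplicated class wins
        String classpath = System.getProperty("java.class.path");
        for (String entry : classpath.split(File.pathSeparator)) {
            System.out.println(entry);
        }
        // To see which jar a specific class was actually loaded from:
        // RouteDefinition.class.getProtectionDomain().getCodeSource().getLocation()
    }
}
```

Running this both in Eclipse and on the Linux server should quickly show where the two environments disagree.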

Missing Annotation Processor with -XDdev leads to successful build without building class files

I was running into a curious problem today and would like to get some more info on that, as my google-fu proved to be insufficient for that.
What happened
The scenario is as follows: I have a straightforward Netbeans project, which contains a .java file that makes use of some annotations, which are handled by the Netbeans annotation processor (org.openide.filesystems.annotations.LayerGeneratingProcessor to be precise) to create a .xml file during compilation.
All of this worked fine up until today, when I accidentally forgot to add the dependency for the annotation processor to my new project (i.e. core/org-openide-filesystems.jar). Without that dependency present I witnessed the strangest behavior: a build (via Netbeans as well as directly via ant on the command line) would report success, yet no .class files were generated at all.
What really threw me off was that the build came back successful. Not a single warning or other indicator that something was amiss... just no classes generated, and a tiny little .jar file that contained only the Bundle.properties files, but again no .class files.
The workaround
So much for the scenario itself. After a while I eventually found a javac option that led the compiler to finally tell me that something went wrong: -XDdev. I have never seen this option before, and from my googling all I could find was that these kinds of options are referred to as Hidden Options. But I haven't found a good listing of what hidden options are available and what they're good for. Any reference on that would be much appreciated.
Anyway, with this option added to the compile, the actual javac call spat out a large stack trace that eventually boils down to a ClassNotFoundException for the LayerGeneratingProcessor class. Lo and behold, once I added the dependency for that class to the project, everything built fine again.
The remaining problem
What is funny (as in scary) is that despite this exception being printed to stderr and indicating that annotation processing failed, the overall javac call succeeds! It still comes back with build successful and acts as if everything were fine. Without the -XDdev option, there is no indication at all in the output that something went wrong.
Finally, my actual question: is there some way to turn this behavior into a proper error? While -XDdev is fine to find out the problem, it requires you to look at the build output, which especially in a CI context will not be feasible. I would like to protect others and myself from accidentally forgetting the dependency in the future by somehow switching this behavior to a proper build error such that we are also notified by the CI system in those cases.
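For background: javac discovers annotation processors through the standard ServiceLoader mechanism (a META-INF/services/javax.annotation.processing.Processor entry), and iterating a ServiceLoader whose registered provider class is missing from the classpath raises a ServiceConfigurationError. A stdlib-only sketch of that discovery step:

```java
import java.util.ServiceLoader;
import javax.annotation.processing.Processor;

public class DiscoverProcessors {
    public static void main(String[] args) {
        // Lists every processor registered on the current classpath;
        // prints nothing if none are registered. A registration whose
        // class file is absent would throw ServiceConfigurationError here.
        for (Processor p : ServiceLoader.load(Processor.class)) {
            System.out.println(p.getClass().getName());
        }
        System.out.println("discovery finished");
    }
}
```

Running a check like this as an early build step is one (hedged) way to fail fast when a registered processor's jar is missing, independent of how javac itself reports it.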

IntelliJ - Failed to start: 0 passed, 1 not started

Just been playing around for the first time with IntelliJ IDEA Community edition, first time I have worked with it so if I'm missing something, please excuse me.
I have a bunch of unit tests which I run, however, when running them in IntelliJ (with the standard setup out of the box), I intermittently get the following error in the console:
03:14:17 Failed to start: 58 passed, 1 not started
I have searched the web but to no avail. If I run just the test that failed, it may or may not print out a similar error:
03:19:54 Failed to start: 0 passed, 1 not started
If I keep trying, eventually it works and tells me that all of my tests have passed.
The icon is not the usual exclamation mark; it is a different error icon which I do not recognise. The error in the Event Log window appears as red text.
It always appears to happen with only one test, and it is always the same test for any given set of tests. In a different project the same issue also appears, but for a different test (it's always the same one within each project or set of tests).
One more thing to note is that this ONLY happens when debugging and not when running, so it may be something to do with connecting the debugger?
It all works perfectly fine with Eclipse.
Any ideas what could be causing this?
The issue for me was Failed to start: 1, passed: 0. I'm using Spring Boot 2.4.0 with JUnit 5 to test the controller class. I just commented out the version tag in the junit-jupiter-engine dependency. Then it worked. Really strange. It might be helpful for someone.
I got the same error. It was something weird sent to System.out that made IntelliJ IDEA mark the test "not started".
I've created a ticket for IntelliJ IDEA, you can vote for it if you still encounter this problem.
In my case the problem was in the pom.
I moved a fully working application to a spring-boot implementation and had only imported spring-boot-starter-test as a test dependency.
I solved it by excluding the junit part from spring-boot-starter-test and adding a junit dependency of the latest version in a separate block.
Sometimes a similar error happens with Scala code when you mix scalamock's MockFactory with scalatest's AsyncFlatSpec.
So be sure to use AsyncMockFactory, like below:
class ExampleSpec extends AsyncFlatSpec with AsyncMockFactory
Looks like this may have been a bug on IntelliJ, it has been raised with them.
I had this problem (in Android Studio, but it's a customised IntelliJ) and the reason was WHERE the cursor was when I ran tests using CTRL-SHIFT-F10. It failed with the cursor on:
@Parameterized.Parameters
public static Collection data()
Once I moved the cursor into a test method, or outside any method, it worked.
I had the same issue. Whatever the number of scenarios, it showed one extra scenario in the NOT STARTED stage. I was using a Scenario Outline to run tests and had commented out rows in the Examples tables.
I later found out that commenting out the whole Examples table (which I didn't want to run) resolved the issue, rather than commenting out each row.
I had the same issue, which puzzled me a little in IntelliJ IDEA 2017.2.1. The test case ran without any recognizable errors or irregularities, but in the end JUnit claimed the case was not started.
Figured out it was caused by trying to print into a PrintWriter that had already been closed.
In my case I was trying to mock a class with a public static method. The problem was solved once everything was freed from the static context.
I came across not-started tests when attempting to test code that called System.exit(1). IntelliJ would not start my tests until I removed the exiting behavior, like this:
First I replaced every direct call to
System.exit(1)
in the code with
onFailure.run();
where onFailure is defined as
Runnable onFailure = () -> System.exit(1);
In the test code I replaced the Runnable with a testable mock Runnable
Runnable mockOnFailure =
() -> {
throw new CustomError(
"Some descriptive message here.");
};
and then I expected that Error to be thrown, like so (using AssertJ for nice assertion statements):
import static org.assertj.core.api.Assertions.assertThat;
import static org.assertj.core.api.Assertions.assertThatExceptionOfType;
assertThatExceptionOfType(CustomError.class).isThrownBy(
() -> {
callingCodeThatCallsOnFailure();
}
);
Now the tests are all being started by the IDE as desired.
Feel free to reuse this if it helps you. I do not claim any ownership or copyright over any of these lines of code.
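The pattern above can be shown end-to-end without PowerMock or AssertJ; all names here (Worker, CustomError, process) are made up for illustration:

```java
public class ExitPatternDemo {
    // Error type thrown by the test double instead of exiting the JVM
    static class CustomError extends Error {
        CustomError(String message) { super(message); }
    }

    // Production code takes the failure action as a Runnable instead of
    // calling System.exit(1) directly
    static class Worker {
        private final Runnable onFailure;
        Worker(Runnable onFailure) { this.onFailure = onFailure; }
        void process(String input) {
            if (input == null) {
                onFailure.run(); // in production: () -> System.exit(1)
            }
        }
    }

    public static void main(String[] args) {
        // Test double: throws instead of exiting, so the test runner survives
        Runnable mockOnFailure = () -> { throw new CustomError("failure path hit"); };
        Worker worker = new Worker(mockOnFailure);
        boolean thrown = false;
        try {
            worker.process(null);
        } catch (CustomError expected) {
            thrown = true;
        }
        System.out.println("CustomError thrown: " + thrown); // prints "CustomError thrown: true"
    }
}
```

The key design point is that the exit behavior is injected, so tests never give the JVM a chance to terminate underneath the runner.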
