Why does the path on the server throw a NullPointerException? - java

I have a problem with the path to a file. Locally (Windows), the tests pass, but when I upload to the server (Linux), I get:
NullPointerException: Cannot get property 'text' on null object
Sample code:
public final String MAIN = "dir1/dir2/dir3/"
public final String CAT_1 = MAIN + "subdirectory/"

// somewhere in the method ...
def object = Utils.class.getClassLoader().getResource(CAT_1 + "file.xml").text
unmarshaller.unmarshal(new StringSource(object), SomeClass.class).value

Why does the path on the server throw a NullPointerException?
Utils.class.getClassLoader().getResource(CAT_1 + "file.xml").text will throw a NullPointerException if CAT_1 + "file.xml" cannot be loaded, because getResource returns null when the resource is not found.
Apparently that resource is not available at the path you are expecting it to be. A common cause when code works on Windows but fails on Linux is a case mismatch in a directory or file name: resource lookups against a filesystem classpath are case-insensitive on Windows but case-sensitive on Linux, so double-check the exact casing of every segment of dir1/dir2/dir3/subdirectory/file.xml.
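To fail fast with a clear message instead of the NPE, check the URL for null before reading it. A minimal Java sketch (assuming java.net.URL and java.util.Scanner; the caller must handle the IOException from openStream()):

URL url = Utils.class.getClassLoader().getResource(CAT_1 + "file.xml");
if (url == null) {
    // getResource returned null: the resource is not on the classpath under that name
    throw new IllegalStateException("Classpath resource not found: " + CAT_1 + "file.xml");
}
try (Scanner scanner = new Scanner(url.openStream(), "UTF-8")) {
    String object = scanner.useDelimiter("\\A").next(); // read the entire stream
    // ... unmarshal object as before
}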

Using ClassLoader when loading .jar resources

I just don't understand why using the ClassLoader causes these two cases to act differently. Can someone explain how/why the ClassLoader changes the search such that it requires a full path into the jar?
package com.example;

import java.io.InputStream;

public class SomeClass {
    private static InputStream stm;

    static {
        for (final String s : new String[] { "file.png", "com/example/file.png", "/com/example/file.png" }) {
            // case 1 - w/o classLoader
            stm = SomeClass.class.getResourceAsStream(s);
            System.out.println("w/o : " + (stm == null ? "FAILED to load" : "loaded") + " " + s);
            // case 2 - w/ classLoader
            stm = SomeClass.class.getClassLoader().getResourceAsStream(s);
            System.out.println("w/classloader: " + (stm == null ? "FAILED to load" : "loaded") + " " + s);
        }
    }

    public static void main(final String[] args) {}
}
Produces:
w/o : loaded file.png
w/classloader: FAILED to load file.png
w/o : FAILED to load com/example/file.png
w/classloader: loaded com/example/file.png
w/o : loaded /com/example/file.png
w/classloader: FAILED to load /com/example/file.png
Because the path is evaluated differently depending on which getResourceAsStream() method you call.
If you call Class.getResourceAsStream() on the class com.example.SomeClass, then the path is evaluated relative to SomeClass if it does not start with a /, so file.png becomes /com/example/file.png and com/example/file.png becomes /com/example/com/example/file.png.
If you call ClassLoader.getResourceAsStream() on its ClassLoader, then the path is implicitly absolute and must not start with a /, so file.png becomes /file.png and com/example/file.png becomes /com/example/file.png.
If you use an absolute path like /com/example/file.png, you have to use the Class method.
Unfortunately, this implicit absoluteness of ClassLoader.getResourceAsStream() is only documented indirectly, in the documentation of Class.getResourceAsStream(), which states that it strips the leading slash before delegating to the ClassLoader's method.
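The resolution rule Class.getResourceAsStream() applies before delegating can be summarized in a few lines. This is a simplified sketch of the documented behavior, not the actual JDK source:

// How Class.getResourceAsStream(name) turns a name into the absolute,
// slash-free form that ClassLoader.getResourceAsStream(name) expects.
static String resolve(Class<?> c, String name) {
    if (name.startsWith("/")) {
        return name.substring(1); // absolute: strip the leading slash
    }
    // relative: prefix the package path, e.g. "com.example" -> "com/example/"
    return c.getPackage().getName().replace('.', '/') + "/" + name;
}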

Eclipse XML Parser "Providers" conflicting with rt.jar

Please note: Although there are several questions on SO that paste in a similar exception & stack trace, this question is definitely not a dupe of any of them, as I'm trying to understand where my classloading is going awry.
Hi, Java 8/Groovy 2.4.3/Eclipse Luna here. I'm using the BigIP iControl Java client (for controlling a powerful load balancer programmatically), which in turn uses Apache Axis 1.4. In its use of Axis 1.4 I am getting the following stack trace (from the Eclipse console):
Caught: javax.xml.parsers.FactoryConfigurationError: Provider for javax.xml.parsers.DocumentBuilderFactory cannot be found
javax.xml.parsers.FactoryConfigurationError: Provider for javax.xml.parsers.DocumentBuilderFactory cannot be found
at org.apache.axis.utils.XMLUtils.getDOMFactory(XMLUtils.java:221)
at org.apache.axis.utils.XMLUtils.<clinit>(XMLUtils.java:83)
at org.apache.axis.configuration.FileProvider.configureEngine(FileProvider.java:179)
at org.apache.axis.AxisEngine.init(AxisEngine.java:172)
at org.apache.axis.AxisEngine.<init>(AxisEngine.java:156)
at org.apache.axis.client.AxisClient.<init>(AxisClient.java:52)
at org.apache.axis.client.Service.getAxisClient(Service.java:104)
at org.apache.axis.client.Service.<init>(Service.java:113)
at iControl.LocalLBPoolLocator.<init>(LocalLBPoolLocator.java:21)
at iControl.Interfaces.getLocalLBPool(Interfaces.java:351)
at com.me.myapp.F5Client.run(F5Client.groovy:27)
Hmmm, let's have a look at that XMLUtils.getDOMFactory method:
private static DocumentBuilderFactory getDOMFactory() {
    DocumentBuilderFactory dbf;
    try {
        dbf = DocumentBuilderFactory.newInstance();
        dbf.setNamespaceAware(true);
    }
    catch (Exception e) {
        log.error(Messages.getMessage("exception00"), e);
        dbf = null;
    }
    return (dbf);
}
OK, LN 221 is that call to DocumentBuilderFactory.newInstance() so let's have a look at it:
public static DocumentBuilderFactory newInstance() {
    return FactoryFinder.find(
        /* The default property name according to the JAXP spec */
        DocumentBuilderFactory.class, // "javax.xml.parsers.DocumentBuilderFactory"
        /* The fallback implementation class name */
        "com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderFactoryImpl");
}
The plot thickens! Now let's take a final look at FactoryFinder.find:
static <T> T find(Class<T> type, String fallbackClassName)
    throws FactoryConfigurationError
{
    final String factoryId = type.getName();
    dPrint("find factoryId =" + factoryId);

    // lots of nasty cruft omitted for brevity...

    // Try Jar Service Provider Mechanism
    T provider = findServiceProvider(type);
    if (provider != null) {
        return provider;
    }
    if (fallbackClassName == null) {
        throw new FactoryConfigurationError(
            "Provider for " + factoryId + " cannot be found"); // <<-- Ahh, here we go
    }
    dPrint("loaded from fallback value: " + fallbackClassName);
    return newInstance(type, fallbackClassName, null, true);
}
So if I'm interpreting this right, it's throwing the FactoryConfigurationError because it can't find the main "provider class" (whatever that means) and no fallback has been specified. But hasn't it?!? The call to FactoryFinder.find included the non-null "com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderFactoryImpl" string argument. This has me suspicious that something is really wonky with my classpath, and that I have a rogue DocumentBuilderFactory (not the one defined in rt.jar/javax/xml/parsers) somewhere in my code that is passing a NULL arg to this finder method.
But that doesn't make sense either, because Axis 1.4 doesn't appear (at least according to Maven repo) to have any dependencies...which means the only "provider" for javax.xml.* would be the rt.jar. Unless, perhaps, Eclipse is mucking things up somehow? I'm so confused, please help :-/
Update
This is definitely an Eclipse issue. If I package my app as an executable JAR and run it from the command line I don't get this exception.
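For anyone hitting something similar: a quick way to see which jar is actually supplying the factory in each environment is to ask the implementation class for its CodeSource (a diagnostic sketch; classes loaded from rt.jar report a null CodeSource):

import java.security.CodeSource;
import javax.xml.parsers.DocumentBuilderFactory;

public class FactorySourceCheck {
    public static void main(String[] args) {
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        Class<?> impl = dbf.getClass();
        System.out.println("Implementation: " + impl.getName());
        CodeSource src = impl.getProtectionDomain().getCodeSource();
        // Bootstrap classes (rt.jar) have no CodeSource; anything else prints its jar URL
        System.out.println("Loaded from: " + (src == null ? "bootstrap class path (rt.jar)" : src.getLocation()));
    }
}

Running it both from Eclipse and from the command line should show where the two environments diverge.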

Getting Spring-XD and the hdfs sink to work for maprfs

This is a question about spring-xd release 1.0.1 working together with maprfs, which is officially not yet supported. Still I would like to get it to work.
So this is what we did:
1) adjusted the xd-shell and xd-worker and xd-singlenode shell scripts to accept the parameter --hadoopDistro mapr
2) added libraries to the new directory $XD_HOME/lib/mapr
avro-1.7.4.jar
hadoop-annotations-2.2.0.jar
hadoop-core-1.0.3-mapr-3.0.2.jar
hadoop-distcp-2.2.0.jar
hadoop-hdfs-2.2.0.jar
hadoop-mapreduce-client-core-2.2.0.jar
hadoop-streaming-2.2.0.jar
hadoop-yarn-api-2.2.0.jar
hadoop-yarn-common-2.2.0.jar
jersey-core-1.9.jar
jersey-server-1.9.jar
jetty-util-6.1.26.jar
maprfs-1.0.3-mapr-3.0.2.jar
protobuf-java-2.5.0.jar
spring-data-hadoop-2.0.2.RELEASE-hadoop24.jar
spring-data-hadoop-batch-2.0.2.RELEASE-hadoop24.jar
spring-data-hadoop-core-2.0.2.RELEASE-hadoop24.jar
spring-data-hadoop-store-2.0.2.RELEASE-hadoop24.jar
3) run bin/xd-singlenode --hadoopDistro mapr and shell/bin/xd-shell --hadoopDistro mapr.
When creating and deploying a stream via stream create foo --definition "time | hdfs" --deploy, data is written to a file tmp/xd/foo/foo-1.txt.tmp on maprfs. Yet when undeploying the stream, the following exception appears:
org.springframework.data.hadoop.store.StoreException: Failed renaming from /xd/foo/foo-1.txt.tmp to /xd/foo/foo-1.txt; nested exception is java.io.FileNotFoundException: Requested file /xd/foo/foo-1.txt does not exist.
at org.springframework.data.hadoop.store.support.OutputStoreObjectSupport.renameFile(OutputStoreObjectSupport.java:261)
at org.springframework.data.hadoop.store.output.TextFileWriter.close(TextFileWriter.java:92)
at org.springframework.xd.integration.hadoop.outbound.HdfsDataStoreMessageHandler.doStop(HdfsDataStoreMessageHandler.java:58)
at org.springframework.xd.integration.hadoop.outbound.HdfsStoreMessageHandler.stop(HdfsStoreMessageHandler.java:94)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:317)
at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:201)
at com.sun.proxy.$Proxy120.stop(Unknown Source)
at org.springframework.integration.endpoint.EventDrivenConsumer.doStop(EventDrivenConsumer.java:64)
at org.springframework.integration.endpoint.AbstractEndpoint.stop(AbstractEndpoint.java:100)
at org.springframework.integration.endpoint.AbstractEndpoint.stop(AbstractEndpoint.java:115)
at org.springframework.integration.config.ConsumerEndpointFactoryBean.stop(ConsumerEndpointFactoryBean.java:303)
at org.springframework.context.support.DefaultLifecycleProcessor.doStop(DefaultLifecycleProcessor.java:229)
at org.springframework.context.support.DefaultLifecycleProcessor.access$300(DefaultLifecycleProcessor.java:51)
at org.springframework.context.support.DefaultLifecycleProcessor$LifecycleGroup.stop(DefaultLifecycleProcessor.java:363)
at org.springframework.context.support.DefaultLifecycleProcessor.stopBeans(DefaultLifecycleProcessor.java:202)
at org.springframework.context.support.DefaultLifecycleProcessor.stop(DefaultLifecycleProcessor.java:106)
at org.springframework.context.support.AbstractApplicationContext.stop(AbstractApplicationContext.java:1186)
at org.springframework.xd.module.core.SimpleModule.stop(SimpleModule.java:234)
at org.springframework.xd.dirt.module.ModuleDeployer.destroyModule(ModuleDeployer.java:132)
at org.springframework.xd.dirt.module.ModuleDeployer.handleUndeploy(ModuleDeployer.java:111)
at org.springframework.xd.dirt.module.ModuleDeployer.undeploy(ModuleDeployer.java:83)
at org.springframework.xd.dirt.server.ContainerRegistrar.undeployModule(ContainerRegistrar.java:261)
at org.springframework.xd.dirt.server.ContainerRegistrar$StreamModuleWatcher.process(ContainerRegistrar.java:884)
at org.apache.curator.framework.imps.NamespaceWatcher.process(NamespaceWatcher.java:67)
at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:522)
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:498)
Caused by: java.io.FileNotFoundException: Requested file /xd/foo/foo-1.txt does not exist.
at com.mapr.fs.MapRFileSystem.getMapRFileStatus(MapRFileSystem.java:805)
at com.mapr.fs.MapRFileSystem.delete(MapRFileSystem.java:629)
at org.springframework.data.hadoop.store.support.OutputStoreObjectSupport.renameFile(OutputStoreObjectSupport.java:258)
... 29 more
I had a look at the OutputStoreObjectSupport.renameFile() function. When a file on hdfs is finished, this method tries to rename the file /xd/foo/foo-1.txt.tmp to /xd/foo/foo-1.txt. This is the relevant code:
try {
    FileSystem fs = path.getFileSystem(getConfiguration());
    boolean succeed;
    try {
        fs.delete(toPath, false);
        log.info("Renaming path=[" + path + "] toPath=[" + toPath + "]");
        succeed = fs.rename(path, toPath);
    } catch (Exception e) {
        throw new StoreException("Failed renaming from " + path + " to " + toPath, e);
    }
    if (!succeed) {
        throw new StoreException("Failed renaming from " + path + " to " + toPath + " because hdfs returned false");
    }
}
When the target file does not exist on hdfs, maprfs seems to throw an exception when fs.delete(toPath, false) is called, yet throwing an exception in this case does not make sense. I assume that other FileSystem implementations behave differently, but that is a point I still need to verify. Unfortunately I cannot find the sources for MapRFileSystem.java. Is this closed source? Having them would help me understand the issue better. Does anybody have experience with writing from spring-xd to maprfs, or with renaming files on maprfs via spring-data-hadoop?
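A defensive variant of the rename would only delete the target when it actually exists, which sidesteps implementations that throw instead of returning false. This is a hypothetical sketch against the Hadoop FileSystem API, not the actual spring-hadoop fix:

// Tolerate FileSystem implementations (maprfs, apparently) that throw
// FileNotFoundException from delete() on a missing path.
if (fs.exists(toPath)) {
    fs.delete(toPath, false);
}
boolean succeed = fs.rename(path, toPath);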
Edit
I managed to reproduce the issue outside of spring XD with a simple test case (see below). Note that this exception is only thrown if the inWritingSuffix or the inWritingPrefix is set; otherwise spring-hadoop will not attempt to rename the file. So the (still somewhat unsatisfactory) workaround for me is to refrain from using inWritingPrefixes and inWritingSuffixes.
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.hadoop.store.DataStoreWriter;
import org.springframework.data.hadoop.store.output.TextFileWriter;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

@ContextConfiguration("context.xml")
@RunWith(SpringJUnit4ClassRunner.class)
public class MaprfsSinkTest {

    @Autowired
    Configuration configuration;

    @Autowired
    FileSystem filesystem;

    @Autowired
    DataStoreWriter<String> storeWriter;

    @Test
    public void testRenameOnMaprfs() throws IOException, InterruptedException {
        Path testPath = new Path("/tmp/foo.txt");
        filesystem.delete(testPath, true);
        TextFileWriter writer = new TextFileWriter(configuration, testPath, null);
        writer.setInWritingSuffix("tmp");
        writer.write("some entity");
        writer.close();
    }

    @Test
    public void testStoreWriter() throws IOException {
        this.storeWriter.write("something");
    }
}
I created a new branch for spring-hadoop which supports maprfs:
https://github.com/blinse/spring-hadoop/tree/origin/2.0.2.RELEASE-mapr
Building this release and using the resulting jar works fine with the hdfs sink.

How can I get spock to execute a different method at runtime using an Annotation Extension?

First, in case there is a simpler way to solve this problem, here is an outline of what I am trying to accomplish. I want to annotate my test methods with a KnownIssue annotation (backed by an extension derived from AbstractAnnotationDrivenExtension) that takes a defect ID as a parameter and checks the status of the defect before executing the tests. If the defect is fixed, execution continues; if it is not fixed, the test should be ignored; but if it is closed or deleted, I want to induce a test failure, with logging stating that the test should be updated or removed and the annotation deleted, since the defect is now closed or deleted.
I have everything working up until inducing a test failure. What I have tried that doesn't work:
Throwing an exception in the visitFeatureAnnotation method, which causes a failure but also prevents all tests thereafter from executing.
Creating a class that extends Spec with a test method that logs a message and fails, then using feature.featureMethod.setReflection() to point the feature at that method. In this case I get a java.lang.IllegalArgumentException: object is not an instance of declaring class.
I then tried using ExpandoMetaClass to add a method directly to the declaringClass and pointing feature.featureMethod.setReflection at it, but I still get the same IllegalArgumentException.
Here is what I have inside of my visitFeatureAnnotation method for my latest attempt:
def myMetaClass = feature.getFeatureMethod().getReflection().declaringClass.metaClass
myMetaClass.KnownIssueMethod = { -> return false }
feature.featureMethod.setReflection(myMetaClass.methods[0].getDoCall().getCachedMethod());
Any other ideas on how I could accomplish this, and either induce a test failure, or replace the method with another that will fail?
OK... I finally came up with a solution. Here is what I got working: within the visitFeatureAnnotation method I add a CauseFailureInterceptor that I created.
Here is the full source in case anyone is interested. It just requires you to extend KnownIssueExtension and implement the abstract method getDefectStatus:
public abstract class KnownIssueExtension extends AbstractAnnotationDrivenExtension<KnownIssue> {

    private static final org.slf4j.Logger LOGGER = LoggerFactory.getLogger(KnownIssueExtension.class)

    public void visitFeatureAnnotation(KnownIssue knownIssue, FeatureInfo feature) {
        DefectStatus status = null
        try {
            status = getDefectStatus(knownIssue.value())
        } catch (Exception ex) {
            LOGGER.warn("Unable to determine defect status for defect ID '{}', test case {}", knownIssue.value(), feature.getName())
            // If we can't get info from the defect repository, just skip it; it should not
            // cause failures or cause us not to execute tests.
        }
        if (status != null) {
            if (!status.open && !status.fixed) {
                LOGGER.error("Defect with ID '{}' and title '{}' is no longer in an open status and is not fixed, for test case '{}'. Update or remove test case.", knownIssue.value(), status.defectTitle, feature.getName())
                feature.addInterceptor(new CauseFailureInterceptor("Defect with ID '" + knownIssue.value() + "' and title '" + status.defectTitle + "' is no longer in an open status and is not fixed, for test case '" + feature.getName() + "'. Update or remove test case."))
            } else if (status.open && !status.fixed) {
                LOGGER.warn("Defect with ID '{}' and title '{}' is still open and has not been fixed. Not executing test '{}'", knownIssue.value(), status.defectTitle, feature.getName())
                feature.setSkipped(true)
            } else if (!status.open && status.fixed) {
                LOGGER.error("Defect with ID '{}' and title '{}' has been fixed and closed. Remove KnownIssue annotation from test '{}'.", knownIssue.value(), status.defectTitle, feature.getName())
                feature.addInterceptor(new CauseFailureInterceptor("Defect with ID '" + knownIssue.value() + "' and title '" + status.defectTitle + "' has been fixed and closed. Remove KnownIssue annotation from test '" + feature.getName() + "'."))
            } else { // status.open && status.fixed
                LOGGER.warn("Defect with ID '{}' and title '{}' has recently been fixed. Remove KnownIssue annotation from test '{}'", knownIssue.value(), status.defectTitle, feature.getName())
            }
        }
    }

    public abstract DefectStatus getDefectStatus(String defectId)
}

public class CauseFailureInterceptor extends AbstractMethodInterceptor {

    public String failureReason

    public CauseFailureInterceptor(String failureReason = "") {
        this.failureReason = failureReason
    }

    @Override
    public void interceptFeatureExecution(IMethodInvocation invocation) throws Throwable {
        throw new Exception(failureReason)
    }
}

class DefectStatus {
    boolean open
    boolean fixed
    String defectTitle
}
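For reference, the KnownIssue annotation itself is not shown above; it would look roughly like this, where the concrete extension class name is a placeholder for your subclass of KnownIssueExtension:

// Hypothetical sketch; MyKnownIssueExtension is an assumed name.
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import org.spockframework.runtime.extension.ExtensionAnnotation;

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@ExtensionAnnotation(MyKnownIssueExtension.class)
public @interface KnownIssue {
    String value(); // the defect ID to look up
}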

Java System Parameter causing NoClassDefFoundError

I have a class which takes in various system parameters and prints them out:
public class Test_Class {
    public static void main(String[] args) {
        String fooA = System.getProperty("fooA");
        String fooB = System.getProperty("fooB");
        String fooC = System.getProperty("fooC");
        System.out.println("System Properties:\n" + fooA + "\n" + fooB + "\n" + fooC + "\n");
    }
}
Then, using IntelliJ, pass in the VM Parameters as such:
-DfooA="StringA" -DfooB="StringB" -DfooC="String C"
Upon running my program I get the following output:
System Properties:
StringA
StringB
String C
Now, if I run the same program through a UNIX server by running the following command:
java -DfooA="StringA" -DfooB="StringB" -DfooC="String C" com.foo.fooUtil.Test_Class
I get the following error:
Exception in thread "main" java.lang.NoClassDefFoundError: C
I have tried a bunch of different ways to pass in fooC, such as -DfooC=\"String C\", -DfooC='String C', -DfooC=\'String C\', basically any idea that came to mind. I have done some research and have been unable to find any solid solution.
For reference, I found the following link online where another person seems to have the same issue but, unfortunately, none of the suggestions work.
http://www.unix.com/shell-programming-scripting/157761-issue-spaces-java-command-line-options.html
How can I pass in a System Parameter with spaces in UNIX? Thank you.
Here is my approach: why not use a .properties file for storing the system properties instead of passing them through the command line? You can access the properties using:
Properties properties = new Properties();
try {
    properties.load(new FileInputStream("path/filename"));
} catch (IOException e) {
    ...
}
And you may iterate as:
for (String key : properties.stringPropertyNames()) {
    String value = properties.getProperty(key);
    System.out.println(key + " => " + value);
}
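For example, with a properties file like the following (the file name and location are up to you), the value of fooC keeps its space without any shell quoting:

# app.properties (hypothetical example)
fooA=StringA
fooB=StringB
fooC=String C

Loading that file and calling properties.getProperty("fooC") then returns String C, spaces intact.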
Hope that helps!!!
