No suitable native library found. native.libpath.* vs java.library.path - java

I encounter "No suitable native library found" when running some library (HDF5).
Full trace is follows:
java.lang.ExceptionInInitializerError
at ch.systemsx.cisd.hdf5.hdf5lib.HDF5Constants.javaToC(HDF5Constants.java:1938)
at ch.systemsx.cisd.hdf5.hdf5lib.HDF5Constants.<clinit>(HDF5Constants.java:982)
at ch.systemsx.cisd.hdf5.CharacterEncoding.<clinit>(CharacterEncoding.java:29)
at ch.systemsx.cisd.hdf5.HDF5BaseReader.<init>(HDF5BaseReader.java:137)
at ch.systemsx.cisd.hdf5.HDF5BaseWriter.<init>(HDF5BaseWriter.java:147)
at ch.systemsx.cisd.hdf5.HDF5WriterConfigurator.writer(HDF5WriterConfigurator.java:133)
at ch.systemsx.cisd.hdf5.HDF5FactoryProvider$HDF5Factory.open(HDF5FactoryProvider.java:48)
at ch.systemsx.cisd.hdf5.HDF5Factory.open(HDF5Factory.java:47)
at tests.jhdf5.TestHDF5.testReaderWriter(TestHDF5.java:49)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
...
Caused by: java.lang.UnsupportedOperationException: No suitable HDF5 native library found for this platform.
at ch.systemsx.cisd.hdf5.hdf5lib.H5.<clinit>(H5.java:41)
... 30 more
As you can see, the native library is called from inside the HDF5 Java library.
The Java library is attached as a bunch of jars in the global libraries, and the native library jar is included among them.
The question is whether this setting is sufficient, or do I need to add some directories to the PATH variable?
UPDATE
I found that the HDF5 code expects the library location in the native.libpath.jhdf5 system property. The code below is from ch.systemsx.cisd.base.utilities.NativeLibraryUtilities#loadNativeLibrary():
public static boolean loadNativeLibrary(String libraryName) {
    // 1. Explicit library file given for this library, e.g. -Dnative.libpath.jhdf5=...
    String linkLibNameOrNull = System.getProperty("native.libpath." + libraryName);
    if (linkLibNameOrNull != null) {
        return loadLib(linkLibNameOrNull);
    } else {
        // 2. Root directory given via -Dnative.libpath=..., resolved per library
        String linkLibPathOrNull = System.getProperty("native.libpath");
        if (linkLibPathOrNull != null) {
            linkLibNameOrNull = getLibPath(linkLibPathOrNull, libraryName);
            return loadLib(linkLibNameOrNull);
        } else {
            // 3. Unpack the library from a jar on the classpath to a temp file,
            //    otherwise fall back to the system loader (java.library.path)
            linkLibNameOrNull = tryCopyNativeLibraryToTempFile(libraryName);
            return linkLibNameOrNull != null ? loadLib(linkLibNameOrNull) : loadSystemLibrary(libraryName);
        }
    }
}
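Given this lookup order, one possible workaround is to set the property before any HDF5 class is initialized, either as a JVM option or programmatically. A minimal sketch, assuming the native library has been unpacked from hdf5-windows-intel.jar to a local file (the path below is hypothetical):
// Sketch: point the jHDF5 loader at an unpacked native library.
// Must run before H5/HDF5Constants are first touched.
// Equivalent JVM option: -Dnative.libpath.jhdf5=D:/native/jhdf5.dll
System.setProperty("native.libpath.jhdf5", "D:/native/jhdf5.dll");
// Alternative: a root directory that loadNativeLibrary() resolves via getLibPath(root, "jhdf5"):
// System.setProperty("native.libpath", "D:/native");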
While IntelliJ sets java.library.path:
public static void main(String[] args) {
    System.out.println("native.libpath.jhdf5 = " + System.getProperty("native.libpath.jhdf5"));
    System.out.println("java.library.path = " + System.getProperty("java.library.path"));
}
prints:
native.libpath.jhdf5 = null
java.library.path = D:\Users\Dims\Design\!Lib\sis-jhdf5-SNAPSHOT-r32355\sis-jhdf5\lib\nativejar\hdf5-windows-intel.jar
Why?

You may have the same problem as me.
In my case the problem was that I had included the native jars for all three platforms (Linux, Windows and macOS), so there was a conflict and this error occurred.
If that does not solve your case (as it did mine), I suggest consulting the JHDF5 FAQ: https://wiki-bsse.ethz.ch/display/JHDF5/JHDF5+FAQ

Related

Can't use "Handler" approach to adding a URLStreamHandler in AWS Lambda

I'm currently trying to add a URLStreamHandler so I can handle URLs with custom protocols. This works fine when run locally. When deployed to AWS Lambda I get:
java.net.MalformedURLException: unknown protocol: baas
I'm following the "Handler" approach to registering the URLStreamHandler.
I even went as far as copying the code from URL.getURLStreamHandler(String) and added logging into my own code that is run by Lambda:
(Note: this is from the Java 8 source - I realise now that this might not be representative because AWS Lambda uses a Java 11 runtime).
URLStreamHandler handler = null;
String packagePrefixList = null;

packagePrefixList = java.security.AccessController.doPrivileged(
        new sun.security.action.GetPropertyAction("java.protocol.handler.pkgs", ""));
if (packagePrefixList != "") {
    packagePrefixList += "|";
}

// REMIND: decide whether to allow the "null" class prefix
// or not.
packagePrefixList += "sun.net.www.protocol";
LOG.debug("packagePrefixList: " + packagePrefixList);

StringTokenizer packagePrefixIter = new StringTokenizer(packagePrefixList, "|");

while (handler == null && packagePrefixIter.hasMoreTokens()) {
    String packagePrefix = packagePrefixIter.nextToken().trim();
    try {
        String clsName = packagePrefix + "." + "baas" + ".Handler";
        Class<?> cls = null;
        LOG.debug("Try " + clsName);
        try {
            cls = Class.forName(clsName);
        } catch (ClassNotFoundException e) {
            ClassLoader cl = ClassLoader.getSystemClassLoader();
            if (cl != null) {
                cls = cl.loadClass(clsName);
            }
        }
        if (cls != null) {
            LOG.debug("Instantiate " + clsName);
            handler = (URLStreamHandler) cls.newInstance();
        }
    } catch (Exception e) {
        // any number of exceptions can get thrown here
        LOG.debug(e);
    }
}
This prints (in Cloudwatch logs):
packagePrefixList: com.elsten.bliss|sun.net.www.protocol (BaasDriver.java:94, thread main)
Try com.elsten.bliss.baas.Handler (BaasDriver.java:108, thread main)
Instantiate com.elsten.bliss.baas.Handler (BaasDriver.java:118, thread main)
com.elsten.bliss.baas.Handler constructor (Handler.java:55, thread main)
So, when run from my own code, in Lambda, it works.
However, the very next line of logging:
java.lang.IllegalArgumentException: URL is malformed: baas://folder: java.lang.RuntimeException
java.lang.RuntimeException: java.lang.IllegalArgumentException: URL is malformed: baas://folder
...
Caused by: java.net.MalformedURLException: unknown protocol: baas
at java.base/java.net.URL.<init>(Unknown Source)
at java.base/java.net.URL.<init>(Unknown Source)
at java.base/java.net.URL.<init>(Unknown Source)
So it seems odd that the same code fails when run inside URL itself. The main difference I can think of is that URL and my code are loaded by different classloaders, so there may be some sort of class loading issue.
The SPI approach can't be used because Lambda doesn't extract META-INF folders!
Initially I thought the old URL.setURLStreamHandlerFactory(URLStreamHandlerFactory) approach was to be avoided, but it turns out this has been improved in recent Java versions, and so I have fallen back to that.
Specifically, the built-in handling for http, https, file et al is used as a fallback whenever the custom factory cannot handle a given protocol.
This is a workaround though - it would be interesting to know why the class cannot be loaded.
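For reference, a minimal sketch of that factory-based approach (assuming a Handler class as above; because URL falls back to its built-in handler lookup when the factory returns null, http, https, file et al keep working):
import java.net.URLStreamHandler;
import java.net.URLStreamHandlerFactory;

public class BaasStreamHandlerFactory implements URLStreamHandlerFactory {
    @Override
    public URLStreamHandler createURLStreamHandler(String protocol) {
        if ("baas".equals(protocol)) {
            return new com.elsten.bliss.baas.Handler();
        }
        // null means: let URL use its default handler lookup for this protocol
        return null;
    }
}
It is installed once per JVM, e.g. at startup: URL.setURLStreamHandlerFactory(new BaasStreamHandlerFactory());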

Eclipse XML Parser "Providers" conflicting with rt.jar

Please note: Although there are several questions on SO that paste in a similar exception & stack trace, this question is definitely not a dupe of any of them, as I'm trying to understand where my classloading is going awry.
Hi, Java 8/Groovy 2.4.3/Eclipse Luna here. I'm using the BigIP iControl Java client (for controlling a powerful load balancer programmatically) which in turn uses Apache Axis 1.4. In its use of Axis 1.4 I am getting the following stacktrace (from Eclipse console):
Caught: javax.xml.parsers.FactoryConfigurationError: Provider for javax.xml.parsers.DocumentBuilderFactory cannot be found
javax.xml.parsers.FactoryConfigurationError: Provider for javax.xml.parsers.DocumentBuilderFactory cannot be found
at org.apache.axis.utils.XMLUtils.getDOMFactory(XMLUtils.java:221)
at org.apache.axis.utils.XMLUtils.<clinit>(XMLUtils.java:83)
at org.apache.axis.configuration.FileProvider.configureEngine(FileProvider.java:179)
at org.apache.axis.AxisEngine.init(AxisEngine.java:172)
at org.apache.axis.AxisEngine.<init>(AxisEngine.java:156)
at org.apache.axis.client.AxisClient.<init>(AxisClient.java:52)
at org.apache.axis.client.Service.getAxisClient(Service.java:104)
at org.apache.axis.client.Service.<init>(Service.java:113)
at iControl.LocalLBPoolLocator.<init>(LocalLBPoolLocator.java:21)
at iControl.Interfaces.getLocalLBPool(Interfaces.java:351)
at com.me.myapp.F5Client.run(F5Client.groovy:27)
Hmmm, let's have a look at that XMLUtils.getDOMFactory method:
private static DocumentBuilderFactory getDOMFactory() {
    DocumentBuilderFactory dbf;
    try {
        dbf = DocumentBuilderFactory.newInstance();
        dbf.setNamespaceAware(true);
    }
    catch (Exception e) {
        log.error(Messages.getMessage("exception00"), e);
        dbf = null;
    }
    return (dbf);
}
OK, line 221 is the call to DocumentBuilderFactory.newInstance(), so let's have a look at it:
public static DocumentBuilderFactory newInstance() {
    return FactoryFinder.find(
        /* The default property name according to the JAXP spec */
        DocumentBuilderFactory.class, // "javax.xml.parsers.DocumentBuilderFactory"
        /* The fallback implementation class name */
        "com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderFactoryImpl");
}
The plot thickens! Now let's take a final look at FactoryFinder.find:
static <T> T find(Class<T> type, String fallbackClassName)
    throws FactoryConfigurationError
{
    final String factoryId = type.getName();
    dPrint("find factoryId =" + factoryId);

    // lots of nasty cruft omitted for brevity...

    // Try Jar Service Provider Mechanism
    T provider = findServiceProvider(type);
    if (provider != null) {
        return provider;
    }
    if (fallbackClassName == null) {
        throw new FactoryConfigurationError(
            "Provider for " + factoryId + " cannot be found"); // <<-- Ahh, here we go
    }
    dPrint("loaded from fallback value: " + fallbackClassName);
    return newInstance(type, fallbackClassName, null, true);
}
So if I'm interpreting this right, it's throwing the FactoryConfigurationError because it can't find the main "provider class" (whatever that means) and no fallback has been specified. But hasn't it?!? The call to FactoryFinder.find included the non-null "com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderFactoryImpl" string argument. This has me suspicious that something is really wonky with my classpath, and that I have a rogue DocumentBuilderFactory (not the one defined in rt.jar/javax/xml/parsers) somewhere in my code that is passing a NULL arg to this finder method.
But that doesn't make sense either, because Axis 1.4 doesn't appear (at least according to Maven repo) to have any dependencies...which means the only "provider" for javax.xml.* would be the rt.jar. Unless, perhaps, Eclipse is mucking things up somehow? I'm so confused, please help :-/
Update
This is definitely an Eclipse issue. If I package my app as an executable JAR and run it from the command line I don't get this exception.
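One way to narrow down where the lookup goes wrong inside Eclipse (a diagnostic sketch, not a fix for the underlying classpath issue): the full FactoryFinder consults the javax.xml.parsers.DocumentBuilderFactory system property before the service-provider scan shown above, so pinning it to the JDK implementation forces the rt.jar factory:
// Equivalent JVM option:
//   -Djavax.xml.parsers.DocumentBuilderFactory=com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderFactoryImpl
System.setProperty("javax.xml.parsers.DocumentBuilderFactory",
        "com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderFactoryImpl");
javax.xml.parsers.DocumentBuilderFactory dbf = javax.xml.parsers.DocumentBuilderFactory.newInstance();
System.out.println("Loaded factory: " + dbf.getClass().getName());
If this makes the FactoryConfigurationError disappear, the problem lies in the service-provider scan picking up (or failing to load) a provider on the Eclipse classpath.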

Getting Spring-XD and the hdfs sink to work for maprfs

This is a question about spring-xd release 1.0.1 working together with maprfs, which is officially not yet supported. Still I would like to get it to work.
So this is what we did:
1) adjusted the xd-shell and xd-worker and xd-singlenode shell scripts to accept the parameter --hadoopDistro mapr
2) added libraries to the new directory $XD_HOME/lib/mapr
avro-1.7.4.jar
hadoop-annotations-2.2.0.jar
hadoop-core-1.0.3-mapr-3.0.2.jar
hadoop-distcp-2.2.0.jar
hadoop-hdfs-2.2.0.jar
hadoop-mapreduce-client-core-2.2.0.jar
hadoop-streaming-2.2.0.jar
hadoop-yarn-api-2.2.0.jar
hadoop-yarn-common-2.2.0.jar
jersey-core-1.9.jar
jersey-server-1.9.jar
jetty-util-6.1.26.jar
maprfs-1.0.3-mapr-3.0.2.jar
protobuf-java-2.5.0.jar
spring-data-hadoop-2.0.2.RELEASE-hadoop24.jar
spring-data-hadoop-batch-2.0.2.RELEASE-hadoop24.jar
spring-data-hadoop-core-2.0.2.RELEASE-hadoop24.jar
spring-data-hadoop-store-2.0.2.RELEASE-hadoop24.jar
3) run bin/xd-singlenode --hadoopDistro mapr and shell/bin/xd-shell --hadoopDistro mapr.
When creating and deploying a stream via stream create foo --definition "time | hdfs" --deploy, data is written to a file tmp/xd/foo/foo-1.txt.tmp on maprfs. Yet when undeploying the stream, the following exception appears:
org.springframework.data.hadoop.store.StoreException: Failed renaming from /xd/foo/foo-1.txt.tmp to /xd/foo/foo-1.txt; nested exception is java.io.FileNotFoundException: Requested file /xd/foo/foo-1.txt does not exist.
at org.springframework.data.hadoop.store.support.OutputStoreObjectSupport.renameFile(OutputStoreObjectSupport.java:261)
at org.springframework.data.hadoop.store.output.TextFileWriter.close(TextFileWriter.java:92)
at org.springframework.xd.integration.hadoop.outbound.HdfsDataStoreMessageHandler.doStop(HdfsDataStoreMessageHandler.java:58)
at org.springframework.xd.integration.hadoop.outbound.HdfsStoreMessageHandler.stop(HdfsStoreMessageHandler.java:94)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:317)
at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:201)
at com.sun.proxy.$Proxy120.stop(Unknown Source)
at org.springframework.integration.endpoint.EventDrivenConsumer.doStop(EventDrivenConsumer.java:64)
at org.springframework.integration.endpoint.AbstractEndpoint.stop(AbstractEndpoint.java:100)
at org.springframework.integration.endpoint.AbstractEndpoint.stop(AbstractEndpoint.java:115)
at org.springframework.integration.config.ConsumerEndpointFactoryBean.stop(ConsumerEndpointFactoryBean.java:303)
at org.springframework.context.support.DefaultLifecycleProcessor.doStop(DefaultLifecycleProcessor.java:229)
at org.springframework.context.support.DefaultLifecycleProcessor.access$300(DefaultLifecycleProcessor.java:51)
at org.springframework.context.support.DefaultLifecycleProcessor$LifecycleGroup.stop(DefaultLifecycleProcessor.java:363)
at org.springframework.context.support.DefaultLifecycleProcessor.stopBeans(DefaultLifecycleProcessor.java:202)
at org.springframework.context.support.DefaultLifecycleProcessor.stop(DefaultLifecycleProcessor.java:106)
at org.springframework.context.support.AbstractApplicationContext.stop(AbstractApplicationContext.java:1186)
at org.springframework.xd.module.core.SimpleModule.stop(SimpleModule.java:234)
at org.springframework.xd.dirt.module.ModuleDeployer.destroyModule(ModuleDeployer.java:132)
at org.springframework.xd.dirt.module.ModuleDeployer.handleUndeploy(ModuleDeployer.java:111)
at org.springframework.xd.dirt.module.ModuleDeployer.undeploy(ModuleDeployer.java:83)
at org.springframework.xd.dirt.server.ContainerRegistrar.undeployModule(ContainerRegistrar.java:261)
at org.springframework.xd.dirt.server.ContainerRegistrar$StreamModuleWatcher.process(ContainerRegistrar.java:884)
at org.apache.curator.framework.imps.NamespaceWatcher.process(NamespaceWatcher.java:67)
at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:522)
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:498)
Caused by: java.io.FileNotFoundException: Requested file /xd/foo/foo-1.txt does not exist.
at com.mapr.fs.MapRFileSystem.getMapRFileStatus(MapRFileSystem.java:805)
at com.mapr.fs.MapRFileSystem.delete(MapRFileSystem.java:629)
at org.springframework.data.hadoop.store.support.OutputStoreObjectSupport.renameFile(OutputStoreObjectSupport.java:258)
... 29 more
I had a look at the OutputStoreObjectSupport.renameFile() function. When a file on hdfs is finished, this method tries to rename the file /xd/foo/foo-1.txt.tmp to /xd/foo/foo-1.txt. This is the relevant code:
try {
    FileSystem fs = path.getFileSystem(getConfiguration());
    boolean succeed;
    try {
        fs.delete(toPath, false);
        log.info("Renaming path=[" + path + "] toPath=[" + toPath + "]");
        succeed = fs.rename(path, toPath);
    } catch (Exception e) {
        throw new StoreException("Failed renaming from " + path + " to " + toPath, e);
    }
    if (!succeed) {
        throw new StoreException("Failed renaming from " + path + " to " + toPath + " because hdfs returned false");
    }
}
When the target file does not exist on hdfs, maprfs seems to throw an exception when fs.delete(toPath, false) is called. Yet throwing an exception in this case does not make sense. I assume that other FileSystem implementations behave differently, but this is a point I still need to verify. Unfortunately I cannot find the sources for MapRFileSystem.java. Is this closed source? That would help me to better understand the issue. Has anybody experience with writing from spring-xd to maprfs? Or with renaming files on maprfs with spring-data-hadoop?
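To illustrate the difference being described, a defensive variant of the rename (a sketch only, reusing the path/toPath/fs variables from the excerpt above; this is not actual spring-data-hadoop code) would avoid deleting a target that is not there:
FileSystem fs = path.getFileSystem(getConfiguration());
// Only delete the target if it exists, so FileSystem implementations that
// throw FileNotFoundException on delete of a missing file are not a problem.
if (fs.exists(toPath)) {
    fs.delete(toPath, false);
}
if (!fs.rename(path, toPath)) {
    throw new StoreException("Failed renaming from " + path + " to " + toPath + " because hdfs returned false");
}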
Edit
I managed to reproduce the issue outside of Spring XD with a simple test case (see below). Note that this exception is only thrown if the inWritingSuffix or the inWritingPrefix is set. Otherwise spring-hadoop will not attempt to rename the file. So this is the still somewhat unsatisfactory workaround for me: refrain from using inWritingPrefixes and inWritingSuffixes.
#ContextConfiguration("context.xml")
#RunWith(SpringJUnit4ClassRunner.class)
public class MaprfsSinkTest {
#Autowired
Configuration configuration;
#Autowired
FileSystem filesystem;
#Autowired
DataStoreWriter<String >storeWriter;
#Test
public void testRenameOnMaprfs() throws IOException, InterruptedException {
Path testPath = new Path("/tmp/foo.txt");
filesystem.delete(testPath, true);
TextFileWriter writer = new TextFileWriter(configuration, testPath, null);
writer.setInWritingSuffix("tmp");
writer.write("some entity");
writer.close();
}
#Test
public void testStoreWriter() throws IOException {
this.storeWriter.write("something");
}
}
I created a new branch for spring-hadoop which supports maprfs:
https://github.com/blinse/spring-hadoop/tree/origin/2.0.2.RELEASE-mapr
Building this release and using the resulting jar works fine with the hdfs sink.

How to set Hadoop DistributedCache?

When I run the Hadoop code to add a third-party jar, like the following:
public static void addTmpJar(String jarPath, JobConf conf) throws IOException {
    System.setProperty("path.separator", ":");
    FileSystem fs = FileSystem.getLocal(conf);
    String newJarPath = new Path(jarPath).makeQualified(fs).toString();
    String tmpjars = conf.get("tmpjars");
    if (tmpjars == null || tmpjars.length() == 0) {
        conf.set("tmpjars", newJarPath);
    } else {
        conf.set("tmpjars", tmpjars + "," + newJarPath);
    }
}
I get the following exception:
Error initializing attempt_201405281453_0053_m_000002_0:
org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find any valid local directory for taskTracker/hadoop/distcache/-7315515059647727905_-860888033_1107570546/nn.hadoop.dev/tmp/hadoop-hadoop/mapred/staging/hadoop/.staging/job_201405281453_0053/libjars/mahout-core-0.8-job.jar
at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:381)
at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:146)
at org.apache.hadoop.filecache.TrackerDistributedCacheManager.getLocalCache(TrackerDistributedCacheManager.java:173)
at org.apache.hadoop.filecache.TaskDistributedCacheManager.setupCache(TaskDistributedCacheManager.java:187)
at org.apache.hadoop.mapred.TaskTracker$4.run(TaskTracker.java:1320)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
at org.apache.hadoop.mapred.TaskTracker.initializeJob(TaskTracker.java:1311)
at org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:1226)
at org.apache.hadoop.mapred.TaskTracker$5.run(TaskTracker.java:2603)
at java.lang.Thread.run(Thread.java:744)
Can anyone tell me how to solve this problem? Thanks!
From the command line you can add a jar to the distributed cache using -libjars; the only prerequisite is that your MR program implements Tool, which uses GenericOptionsParser, and the latter takes care of adding the jar to the cache.
This page explains the above in more detail
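As an illustration of that prerequisite, a minimal driver shaped for -libjars might look like this (a sketch; the job wiring is a placeholder, not code from the question):
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

// Implementing Tool and launching through ToolRunner lets GenericOptionsParser
// handle options such as -libjars, which places the listed jars on the
// distributed cache and the task classpath.
public class MyJobDriver extends Configured implements Tool {

    @Override
    public int run(String[] args) throws Exception {
        Configuration conf = getConf();
        // ... build and submit the job with this conf (placeholder) ...
        return 0;
    }

    public static void main(String[] args) throws Exception {
        // e.g. hadoop jar myjob.jar MyJobDriver -libjars mahout-core-0.8-job.jar <other args>
        System.exit(ToolRunner.run(new Configuration(), new MyJobDriver(), args));
    }
}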

Bean validation - Hibernate error

I am getting following exception when trying to run my command line application:
java.lang.ExceptionInInitializerError
at org.hibernate.validator.engine.ConfigurationImpl.<clinit>(ConfigurationImpl.java:52)
at org.hibernate.validator.HibernateValidator.createGenericConfiguration(HibernateValidator.java:43)
at javax.validation.Validation$GenericBootstrapImpl.configure(Validation.java:269)
Caused by: java.lang.StringIndexOutOfBoundsException: String index out of range: -2
at java.lang.String.substring(String.java:1937)
at org.hibernate.validator.util.Version.<clinit>(Version.java:39)
... 34 more
Am I doing anything wrong? Please suggest.
This is strange. I pasted the relevant parts of the static initialization block of o.h.v.u.Version in a class with a main and added some poor man's logging traces:
public class VersionTest {
    public static void main(String[] args) {
        Class clazz = org.hibernate.validator.util.Version.class;

        String classFileName = clazz.getSimpleName() + ".class";
        System.out.println(String.format("%-16s: %s", "classFileName", classFileName));

        String classFilePath = clazz.getCanonicalName().replace('.', '/') + ".class";
        System.out.println(String.format("%-16s: %s", "classFilePath", classFilePath));

        String pathToThisClass = clazz.getResource(classFileName).toString();
        System.out.println(String.format("%-16s: %s", "pathToThisClass", pathToThisClass));

        // This is line 39 of `org.hibernate.validator.util.Version`
        String pathToManifest = pathToThisClass.substring(0, pathToThisClass.indexOf(classFilePath) - 1)
                + "/META-INF/MANIFEST.MF";
        System.out.println(String.format("%-16s: %s", "pathToManifest", pathToManifest));
    }
}
And here the output I get when running it:
classFileName : Version.class
classFilePath : org/hibernate/validator/util/Version.class
pathToThisClass : jar:file:/home/pascal/.m2/repository/org/hibernate/hibernate-validator/4.0.2.GA/hibernate-validator-4.0.2.GA.jar!/org/hibernate/validator/util/Version.class
pathToManifest : jar:file:/home/pascal/.m2/repository/org/hibernate/hibernate-validator/4.0.2.GA/hibernate-validator-4.0.2.GA.jar!/META-INF/MANIFEST.MF
In your case, the StringIndexOutOfBoundsException: String index out of range: -2 suggests that:
pathToThisClass.indexOf( classFilePath )
is returning -1, making the pathToThisClass.substring(0, -2) call indeed erroneous.
And this means that org/hibernate/validator/util/Version.class is somehow not part of the pathToThisClass that you get. I don't have a full explanation but this must be related to the fact that you're using One-Jar.
Could you run the above test class and update your question with the output?
So, as you use One-JAR, the problem is probably an incompatibility between One-JAR and Hibernate Validator. However, the latest version of One-JAR (0.97) works fine, so use the latest version.
