I made a program that parses Turtle files with the Jena library. These are the dependencies I use:
<dependency>
<groupId>org.apache.jena</groupId>
<artifactId>jena-iri</artifactId>
<version>3.10.0</version>
</dependency>
<dependency>
<groupId>org.apache.jena</groupId>
<artifactId>jena-core</artifactId>
<version>3.10.0</version>
</dependency>
<dependency>
<groupId>org.apache.jena</groupId>
<artifactId>jena-arq</artifactId>
<version>3.10.0</version>
</dependency>
<dependency>
<groupId>org.apache.jena</groupId>
<artifactId>jena-tdb</artifactId>
<version>3.10.0</version>
</dependency>
The parsing works well when I run my Java program directly, but when I create my JAR and try to run it, I get these kinds of errors:
ERROR JenaService:146 - org.apache.jena.n3.turtle.TurtleParseException: Line 28015, column 79: org.apache.jena.iri.impl.IRIImplException:
<http://www.reussir.fr,> Code: 28/NOT_DNS_NAME in HOST: The host component did not meet the restrictions on DNS names.
Any ideas?
EDIT
When I run my program from the IDE, I only get a warning for the invalid IRI problem, but the generated JAR still gives me errors.
<http://www.reussir.fr,>
There is a comma in the IRI in a place where commas are not allowed.
It is better to find and fix the data problem, because it can lead to other problems later if left unfixed.
I found the problem: the only dependency I really needed was jena-arq, so I removed the others (especially jena-iri, which was throwing the TurtleParseException), and the bad-IRI errors became warnings, just like in the IDE execution logs.
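For reference, here is a minimal sketch of the parsing code with only jena-arq on the classpath (data.ttl is a placeholder path):

import org.apache.jena.rdf.model.Model;
import org.apache.jena.riot.RDFDataMgr;

public class TurtleParseExample {
    public static void main(String[] args) {
        // With jena-arq, the RIOT parser reports recoverable bad-IRI
        // problems as warnings instead of failing with a TurtleParseException.
        Model model = RDFDataMgr.loadModel("data.ttl"); // placeholder path
        System.out.println("Parsed " + model.size() + " triples");
    }
}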
I tried to list the ECS clusters using the following code:
AmazonECS amazonECS = AmazonECSClientBuilder.standard().withRegion(region).withCredentials(new AWSStaticCredentialsProvider(awsCredentials)).build();
amazonECS.listClusters();
However, it gave the error:
java.lang.NoSuchFieldError: CLIENT_ENDPOINT
The error stack is something like this:
com.amazonaws.services.ecs.AmazonECSClient in executeListClusters at line 2220
com.amazonaws.services.ecs.AmazonECSClient in listClusters at line 2202
com.amazonaws.services.ecs.AmazonECSClient in listClusters at line 2245
I am not too sure why this error occurred, as the other Amazon services did not give me any similar error, and I had already set the region based on the client's preference. Any ideas?
Thanks to Nagaraj Trantri: the error is caused by a version mismatch among the AWS SDK artifacts I was using, according to https://github.com/aws/aws-sdk-java/issues/2509#issuecomment-779370672
I had different versions for SQS and S3 in my pom.xml. After I updated them to the same version, it worked.
It depends on where these version mismatches can occur.
I am using Spark to connect to Secrets Manager, so there are two places to look at:
My application dependencies (build.gradle)
spark.yarn.jars
Once the versions in the above two places matched, it started working.
Use this in the pom.xml file. The error is caused by a mismatch in the com.amazonaws dependency versions declared in the POM.
<dependencyManagement>
<dependencies>
<dependency>
<groupId>com.amazonaws</groupId>
<artifactId>aws-java-sdk-bom</artifactId>
<version>1.11.739</version>
<type>pom</type>
<scope>import</scope>
</dependency>
</dependencies>
</dependencyManagement>
<dependencies>
<dependency>
<groupId>com.amazonaws</groupId>
<artifactId>aws-java-sdk-sts</artifactId>
</dependency>
</dependencies>
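As an extra sanity check (my own suggestion, not part of the original fix), you can print the resolved SDK version and the JAR the ECS client class is loaded from at runtime:

import com.amazonaws.services.ecs.AmazonECSClient;
import com.amazonaws.util.VersionInfoUtils;

public class SdkVersionCheck {
    public static void main(String[] args) {
        // Version of aws-java-sdk-core on the classpath:
        System.out.println("SDK core version: " + VersionInfoUtils.getVersion());
        // JAR the ECS client class was actually loaded from:
        System.out.println(AmazonECSClient.class
                .getProtectionDomain().getCodeSource().getLocation());
    }
}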
When doing:
new MiniDFSCluster.Builder(config).build()
I get this exception:
java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z
at org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Native Method)
at org.apache.hadoop.io.nativeio.NativeIO$Windows.access(NativeIO.java:557)
at org.apache.hadoop.fs.FileUtil.canWrite(FileUtil.java:996)
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:490)
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:308)
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:202)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1020)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:739)
at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:536)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:595)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:762)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:746)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1438)
at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1107)
at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:978)
at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:807)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:467)
at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:426)
I want to use the Hadoop mini cluster to test my Hadoop HDFS code (which itself does not throw this exception; see java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.createDirectoryWithMode0).
In my Maven pom.xml I have these dependencies:
<dependency>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-common</artifactId>
<version>2.6.0</version>
</dependency>
<!-- for unit testing -->
<dependency>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-common</artifactId>
<version>2.6.0</version>
<type>test-jar</type>
</dependency>
<dependency>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-hdfs</artifactId>
<version>2.6.0</version>
</dependency>
<!-- for unit testing -->
<dependency>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-hdfs</artifactId>
<version>2.6.0</version>
<scope>test</scope>
<classifier>tests</classifier>
</dependency>
As I understand it, I do not need the separate 'hadoop-minicluster' dependency, as it already comes with the hadoop-hdfs artifacts included above.
I am trying to build the MiniDFSCluster in my @BeforeAll.
I have used different configs for the builder:
config = new HdfsConfiguration(); / config = new Configuration();
And different ways to create a path for the baseDir:
config.set(MiniDFSCluster.HDFS_MINIDFS_BASEDIR, baseDir);
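Putting these pieces together, here is a minimal sketch of the setup I am aiming for (the target/hdfs base directory is just a placeholder):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.HdfsConfiguration;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.junit.jupiter.api.BeforeAll;

class HdfsClusterTest {
    private static MiniDFSCluster cluster;

    @BeforeAll
    static void startCluster() throws Exception {
        Configuration config = new HdfsConfiguration();
        // Scratch directory for NameNode/DataNode storage:
        config.set(MiniDFSCluster.HDFS_MINIDFS_BASEDIR, "target/hdfs");
        cluster = new MiniDFSCluster.Builder(config).build();
        cluster.waitClusterUp(); // block until the cluster is up
    }
}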
Also, I downloaded hadoop.dll, hdfs.dll, and winutils.exe in v2.6.0 and added their location to my environment variables.
I studied all the related issues I could find on Stack Overflow (without success, obviously) and all the guides and code examples I could find on the internet (there are a few, and they all do it differently).
Can somebody please help me? I am out of ideas.
UPDATE:
I am running the test with the following VM options (which should not be necessary, I think):
-Dhadoop.home.dir=C:/Temp/hadoop
-Djava.library.path=C:/Temp/hadoop/bin
I also tried to set the system properties directly (which should not be necessary when using the VM options):
System.setProperty("hadoop.home.dir", "C:\\Temp\\hadoop-2.6.0");
System.setProperty("java.library.path", "C:\\Temp\\hadoop-2.6.0\\bin");
I resolved this issue by downloading the source file (org.apache.hadoop.io.nativeio.NativeIO.java) and modifying the line in the access function (in your case, line 557) from:
return access0(path, desiredAccess.accessRight());
to
return true;
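For context, the patched method ends up looking roughly like this (reconstructed from the Hadoop 2.x sources; treat the exact signature as an approximation for your version). Note that this simply bypasses the native Windows access check, so it is a test-environment workaround rather than a real fix:

// In org.apache.hadoop.io.nativeio.NativeIO, inner class Windows:
public static boolean access(String path, AccessRight desiredAccess)
        throws IOException {
    // Original: return access0(path, desiredAccess.accessRight());
    // Patched to skip the native access check entirely:
    return true;
}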
I'm trying to create a StandardQueryParser in order to query an index I've already created. I do so in the following line of code:
StandardQueryParser queryParserHelper = new StandardQueryParser();
Which causes the following exception to occur at runtime:
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/lucene/search/LegacyNumericRangeQuery
at org.apache.lucene.queryparser.flexible.standard.builders.StandardQueryTreeBuilder.<init>(StandardQueryTreeBuilder.java:63)
at org.apache.lucene.queryparser.flexible.standard.StandardQueryParser.<init>(StandardQueryParser.java:110)
at analysis.Main.main(Main.java:67)
Note that line 67 is the line of code included above.
I'm using Maven and IntelliJ.
I specify Lucene as a dependency through the following, in my pom:
<dependency>
<groupId>org.apache.lucene</groupId>
<artifactId>lucene-core</artifactId>
<version>7.1.0</version>
</dependency>
You need to add the lucene-queryparser jar as well to get StandardQueryParser.
<dependency>
<groupId>org.apache.lucene</groupId>
<artifactId>lucene-queryparser</artifactId>
<version>7.1.0</version>
</dependency>
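With lucene-queryparser on the classpath, a minimal usage sketch looks like this (the analyzer choice and field names are placeholders):

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.queryparser.flexible.standard.StandardQueryParser;
import org.apache.lucene.search.Query;

public class QueryParserExample {
    public static void main(String[] args) throws Exception {
        // Constructing the parser no longer throws NoClassDefFoundError
        // once lucene-queryparser is on the classpath.
        StandardQueryParser parser = new StandardQueryParser(new StandardAnalyzer());
        Query query = parser.parse("title:lucene AND year:[2000 TO 2020]", "body");
        System.out.println(query);
    }
}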
I'm trying to build a Mule project in Maven which uses a library that in turn uses Apache Commons Codec 1.8. Mule 3.5 currently supports only version 1.3.
To get around this, I've implemented classloader control in Mule and blocked Mule from loading its version of the library by adding the following to mule-deploy.properties:
loader.override=-org.apache.commons.codec
In addition, I've updated my pom.xml to include version 1.9 of the library. Here is a snapshot of running mvn dependency:tree on the project.
However, when I run my test method I get a runtime exception:
java.lang.NoSuchMethodError: org.apache.commons.codec.binary.Base64.encodeBase64URLSafeString([B)Ljava/lang/String;
at com.nimbusds.jose.util.Base64URL.encode(Base64URL.java:64)
at com.nimbusds.jose.util.Base64URL.encode(Base64URL.java:91)
at com.nimbusds.jose.Header.toBase64URL(Header.java:238)
at com.nimbusds.jose.JWSObject.<init>(JWSObject.java:101)
at com.package.components.lastmile.originator.TokenSignerTemplate.sign(TokenSignerTemplate.java:109)
at com.package.components.lastmile.originator.TokenSignerTemplate.signClaim(TokenSignerTemplate.java:122)
at com.package.orchestration.LMSFakeClaimsHandler.testSignParse_Positive(LMSFakeClaimsHandler.java:120)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
which is clearly because it's referencing the older version of Commons Codec. How do I make sure that it references only the newer version and not the older one?
mule-deploy.properties
#Fri Dec 12 09:58:12 PST 2014
loader.override=-org.apache.commons.codec
redeployment.enabled=true
encoding=UTF-8
domain=default
config.resources=..flows.
.
Relevant portions of pom.xml:
<dependencies>
....
<dependency>
<groupId>commons-codec</groupId>
<artifactId>commons-codec</artifactId>
<version>1.9</version>
</dependency>
....
<!-- Test to check commons-codec works -->
<dependency>
<groupId>org.mule.transports</groupId>
<artifactId>mule-transport-http</artifactId>
<version>${mule.version}</version>
<scope>provided</scope>
<exclusions>
<exclusion>
<groupId>commons-codec</groupId>
<artifactId>commons-codec</artifactId>
</exclusion>
</exclusions>
</dependency>
</dependencies>
P.S.: The same snippet works fine in a non-Mule project, indicating this is a Mule-related issue.
If you are running the Mule app in a Mule server, excluding the lib from the POM will not work, since the codec lib is present in the server itself.
Try putting the newest codec lib version in the server's shared lib folder (keeping the property loader.override=-org.apache.commons.codec).
Add the following exclusion to your pom.
<dependency>
<groupId>org.mule.transports</groupId>
<artifactId>mule-transport-http</artifactId>
<version>${mule.version}</version>
<scope>provided</scope>
<exclusions>
<exclusion>
<groupId>commons-codec</groupId>
<artifactId>commons-codec</artifactId>
</exclusion>
</exclusions>
</dependency>
And then add the dependency on Commons Codec 1.9.
Then your override property in mule-deploy.properties will work as expected.
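To confirm which codec version the application actually sees at runtime, a small diagnostic like this can help (my own suggestion, not part of the original answer):

import org.apache.commons.codec.binary.Base64;

public class CodecVersionCheck {
    public static void main(String[] args) {
        // encodeBase64URLSafeString was added in commons-codec 1.4, so on
        // the bundled 1.3 this line throws NoSuchMethodError.
        System.out.println(Base64.encodeBase64URLSafeString("test".getBytes()));
        // Print which JAR Base64 was loaded from to confirm the override works:
        System.out.println(Base64.class
                .getProtectionDomain().getCodeSource().getLocation());
    }
}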
Update: 12/30:
The override property seems to be the problem.
loader.override=-org.apache.commons.codec is not correct.
Try the following
loader.override=org.apache.commons.codec
Hope this helps.
I have a similar issue with jackson-xc. I don't know why Mule 3.5 comes with a mix of Jackson 1 and 2 libraries:
jackson-annotations-2.1.1.jar
jackson-core-2.1.1.jar
jackson-databind-2.1.1.jar
jackson-core-asl-1.9.11.jar
jackson-jaxrs-1.9.11.jar
jackson-mapper-asl-1.9.11.jar
jackson-xc-1.7.1.jar
And it ships jackson-xc-1.7.1 instead of jackson-xc-1.9.11, which would be aligned with the version of the other Jackson 1 libraries.
In my application it produces the "classic" library-conflict exception:
Caused by: java.lang.AbstractMethodError
at org.codehaus.jackson.map.AnnotationIntrospector$Pair.findDeserializer(AnnotationIntrospector.java:1335)
Since using loader.override=... in mule-deploy.properties didn't work (with either overriding or blocking, on both the package org.codehaus.jackson.xc and the class org.codehaus.jackson.xc.JaxbAnnotationIntrospector), the only solution I have found goes in the direction of Nuno's answer: put the JAR we want to use in a lib folder with higher priority than lib/opt.
lib/shared has been deprecated but you can use lib/user to override.
I would prefer to use loader.override (classloader control in Mule 3.5) and avoid modifying the installation, but for now this is the only solution that works for me.
I'm trying to use Hibernate Validator in my project, but it isn't working. On the following line:
SessionFactory sessions = config.buildSessionFactory(builder.build());
I get the following exception:
org.hibernate.cfg.beanvalidation.IntegrationException: Error activating Bean Validation integration
at org.hibernate.cfg.beanvalidation.BeanValidationIntegrator.integrate(BeanValidationIntegrator.java:154)
at org.hibernate.internal.SessionFactoryImpl.<init>(SessionFactoryImpl.java:311)
at org.hibernate.cfg.Configuration.buildSessionFactory(Configuration.java:1857)
at net.myProject.server.util.HibernateUtil.<clinit>(HibernateUtil.java:32)
... 36 more
Caused by: java.lang.NoSuchMethodError: javax.validation.spi.ConfigurationState.getParameterNameProvider()Ljavax/validation/ParameterNameProvider;
at org.hibernate.validator.internal.engine.ValidatorFactoryImpl.<init>(ValidatorFactoryImpl.java:119)
at org.hibernate.validator.HibernateValidator.buildValidatorFactory(HibernateValidator.java:45)
at org.hibernate.validator.internal.engine.ConfigurationImpl.buildValidatorFactory(ConfigurationImpl.java:217)
at javax.validation.Validation.buildDefaultValidatorFactory(Validation.java:111)
I found this question, which seems quite similar to my problem. The author describes his solution as:
I had yet another bean validator jar in the class path. But not from
Maven, so I didn't realize it. Removing that solved the problem.
I think my problem is the same. On http://hibernate.org/validator/documentation/getting-started/ it says:
This transitively pulls in the dependency to the Bean Validation API
(javax.validation:validation-api:1.1.0.Final)
That must be what is causing this, since reverting to an older version (4.3.1.Final) fixes the issue. Is there a way to force Hibernate not to pull in the Bean Validation API?
Edit: I've tried to exclude the javax.validation API:
<dependency>
<groupId>org.hibernate</groupId>
<artifactId>hibernate-validator</artifactId>
<version>5.0.3.Final</version>
<exclusions>
<exclusion>
<groupId>javax.validation</groupId>
<artifactId>validation-api</artifactId>
</exclusion>
</exclusions>
</dependency>
But it didn't seem to have any effect.
Try adding this dependency to your pom.xml:
<dependency>
<groupId>javax.validation</groupId>
<artifactId>validation-api</artifactId>
<version>1.0.0.GA</version>
</dependency>
If not, consider using hibernate-validator 4.2.0.Final; I have that one in my configuration and it is working fine.
For me, version 1.1.0.Final of javax.validation:validation-api worked, because the javax.validation.spi.ConfigurationState interface in 1.1.0.Final has the getParameterNameProvider method, which was absent in 1.0.0.GA.
I added the dependency below to my pom.xml:
<dependency>
<groupId>javax.validation</groupId>
<artifactId>validation-api</artifactId>
<version>1.1.0.Final</version>
<scope>test</scope>
</dependency>
I had the problem again. This is how I fixed it:
1. Exclude hibernate-validator from the 'web' starter dependency:
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
<exclusions>
<exclusion>
<groupId>org.hibernate.validator</groupId>
<artifactId>hibernate-validator</artifactId>
</exclusion>
</exclusions>
</dependency>
2. Then insert the dependency with a previous version:
<dependency>
<groupId>org.hibernate</groupId>
<artifactId>hibernate-validator</artifactId>
<version>5.1.3.Final</version>
</dependency>
In my case I just deleted hibernate-validator and it worked (I also had a combination of both validation-api and hibernate-validator and had tried everything). Alternatively, you can go to the org directory in your local Maven repository, delete the hibernate folder, and rebuild your project.
Hope it helps.
I thought it would be useful to explain what is going on here.
Hibernate is calling ConfigurationState.getParameterNameProvider:
ValidatorFactoryImpl.java:
public ValidatorFactoryImpl(ConfigurationState configurationState) {
...
configurationState.getParameterNameProvider()
...
}
You can find the documentation of getParameterNameProvider:
getParameterNameProvider
ParameterNameProvider getParameterNameProvider()
Returns the parameter name provider for this configuration.
Returns:
parameter name provider instance or null if not defined
Since:
1.1
So what's the problem? The problem is that the method didn't always exist; it was added later, in version 1.1 of the Bean Validation API.
The usual rule when creating interfaces is that they are set in concrete: you should never change an interface. Instead, the javax.validation API changed the ConfigurationState interface and added a few new methods over the years.
The Java validation code is passing Hibernate an outdated ConfigurationState implementation, one that doesn't implement the newer methods.
You need to ensure that javax.validation.Validation.buildDefaultValidatorFactory is updated to support version 1.1.
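A quick way to check which validation API is actually on the classpath (a diagnostic sketch of my own, not from the original answer):

import javax.validation.spi.ConfigurationState;

public class ValidationApiCheck {
    public static void main(String[] args) {
        // JAR the ConfigurationState interface was loaded from:
        System.out.println(ConfigurationState.class
                .getProtectionDomain().getCodeSource().getLocation());
        try {
            // Present in validation-api 1.1.0.Final, absent in 1.0.0.GA:
            ConfigurationState.class.getMethod("getParameterNameProvider");
            System.out.println("Bean Validation 1.1 API found");
        } catch (NoSuchMethodException e) {
            System.out.println("Bean Validation 1.0 API found (too old for Hibernate Validator 5.x)");
        }
    }
}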
Removing this jar, javax.validation:validation-api:1.1.0.Final, solved my problem.
Make sure you have only one validation jar on the classpath; if there are two, they may conflict, resulting in this error.
Go to the project's dependencies, delete hibernate-validator, and reinstall it in the most recent version. That solved the problem for me.