When adding db2jcc4.jar to the system class path, Tomcat 8.0 raises a FileNotFoundException on a jar file that has no apparent reference to my project, pdq.jar.
I couldn't find pdq.jar anywhere on my system, and nothing in my project references it; a web search eventually turned up the answer below.
In this case, I have my CATALINA_HOME pointed to C:\tomcat8.0\apache-tomcat-8.0.41 and my project has the following maven dependency defined:
<dependency>
    <groupId>com.ibm.db2.jcc</groupId>
    <artifactId>db2jcc4</artifactId>
    <version>10.1</version>
    <scope>system</scope>
    <systemPath>${env.CATALINA_HOME}/lib/db2jcc4-10.1.jar</systemPath>
</dependency>
This can happen with newer versions of the Db2 JCC driver:
Beginning with version 4.16 of the IBM Data Server Driver for JDBC and SQLJ, which is shipped with Db2 10.5 on Linux, UNIX, or Windows operating systems, the MANIFEST.MF file for db2jcc4.jar contains a reference to pdq.jar.
IBM Support offers two options:
Resolving the problem
To prevent the java.io.FileNotFoundException, you can take one of the following actions:
Edit the MANIFEST.MF file, and remove this line: Class-Path: pdq.jar
Edit the context.xml file for Apache Tomcat, and add an entry like the following one to set the value of scanClassPath to false.
Personally, I prefer the second approach, which can be done as follows:
<Context>
...
<JarScanner scanClassPath="false" />
...
</Context>
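If you prefer the first option instead, the edit boils down to deleting the Class-Path line from the manifest extracted from db2jcc4.jar. A minimal sketch of just that step (the sample manifest below stands in for the real one; unpacking and repacking the jar are not shown):

```shell
# stand-in for the MANIFEST.MF extracted from db2jcc4.jar (assumption)
printf 'Manifest-Version: 1.0\nClass-Path: pdq.jar\n' > MANIFEST.MF

# drop the Class-Path line that points at the missing pdq.jar
sed -i '/^Class-Path: pdq.jar/d' MANIFEST.MF

cat MANIFEST.MF
```

After editing the real file, the updated manifest still has to be written back into the jar (for example with zip, run from the jar's extraction root) before Tomcat sees the change.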
According to this IBM KB article, the problem comes from the manifest, which lists pdq.jar, a third-party optimization tool.
I had both db2jcc4.jar and db2jcc4-10.1.jar in my lib folder.
While the article suggests editing the MANIFEST file in db2jcc4.jar, version 10.1 does not include this entry at all.
Removing db2jcc4.jar solved my problem, so a solution in this case could also be to upgrade db2jcc4 from an older version to version 10.1, or if that is not possible, edit the manifest file as instructed.
You just need to update the jar db2jcc4.jar to db2jcc4-10.1.jar.
You can find the Maven dependency / JAR at that link.
Kayvan Tehrani's answer explains what's going on here and that this error can be ignored.
Another alternative to clean up the logs is to create a dummy pdq.jar and place it in Tomcat's lib folder.
jar -cf pdq.jar ""
(The ": no such file or directory" message from this command is expected.)
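If the jar tool is not on the PATH, the same dummy jar can be produced programmatically. A sketch with java.util.jar (the output file name pdq.jar matches the manifest reference; everything else is arbitrary):

```java
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.jar.JarOutputStream;
import java.util.jar.Manifest;

public class DummyPdqJar {
    public static void main(String[] args) throws IOException {
        // A minimal manifest keeps the jar well-formed.
        Manifest mf = new Manifest();
        mf.getMainAttributes().putValue("Manifest-Version", "1.0");

        // Write an otherwise empty pdq.jar; dropping it into Tomcat's lib
        // folder satisfies the Class-Path reference in db2jcc4.jar.
        try (JarOutputStream out =
                 new JarOutputStream(new FileOutputStream("pdq.jar"), mf)) {
            // no entries needed
        }
    }
}
```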
Related
I'm setting up a SQL connector for my Bukkit plugin, but whenever I compile it into a jar file and try running it from the server I get
java.sql.SQLException: No suitable driver found.
I have tried adding in the line
Class.forName("com.microsoft.sqlserver.jdbc.SQLServerDriver");
Whenever I add that line I get
SQLException: com.microsoft.sqlserver.jdbc.SQLServerDriver
This problem only occurs when I put it into a jar file and not when I test it in my IDE.
I currently use Intellij.
This is my current Jar setup:
https://gyazo.com/94341b7bb47121a0416deaee6279dd30
public ConnectionUtils(String url, String us, String pa) throws SQLException
{
    SQLServerDriver dr = new SQLServerDriver();
    user = us;
    pass = pa;
    con = DriverManager.getConnection(url, us, pa);
    isConnected = true;
    this.url = url;
}
Solution using Project Structure
Outside of using Maven, you can look at your Project Structure in IntelliJ and examine your dependencies. Make sure the "export" box next to that jar is checked.
Maven Solution
I recommend using Maven to handle your dependencies, as you can define the scope of each dependency, as explained here.
For the JDBC dependency, you could use the following dependency declaration in your pom:
<dependency>
    <groupId>mysql</groupId>
    <artifactId>mysql-connector-java</artifactId>
    <version>8.0.17</version>
</dependency>
This would inherit the default scope which, as stated in the article, is compile.
3.1. Compile
This is the default scope when no other scope is provided.
Dependencies with this scope are available on the classpath of the
project in all build tasks and they're propagated to the dependent
projects.
Manifest
It seems some people online are talking about exactly this issue, and one solution is the one @Bohemian mentioned: ensuring that the required class is packaged with the jar. However, that solution only works if you are executing the jar from the command line, which is not the case with Spigot plugins. I suggest creating a MANIFEST.txt and including the driver class-path in there, as suggested by Terence Gronowski on CodeRanch:
Creating a Manifest.txt file with the following contents in the program folder:

Manifest-Version: 1.0
Class-Path: sqljdbc4.jar
Main-Class: ParkplatzVerwaltung

Don't forget to end with a newline. The important thing is "Class-Path: sqljdbc4.jar", showing where the driver is.
Source: https://coderanch.com/t/529484/databases/Jdbc-driver-putting-application-jar
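Another route, if you build the plugin with Maven, is to bundle the driver into the plugin jar itself so that no Class-Path entry is needed at all. A sketch using the maven-shade-plugin (the plugin version here is an assumption; pick whatever is current for you):

```xml
<build>
    <plugins>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-shade-plugin</artifactId>
            <version>3.2.4</version>
            <executions>
                <execution>
                    <phase>package</phase>
                    <goals>
                        <goal>shade</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>
```

With this in place, mvn package produces a jar that already contains your compile-scope dependencies, including the JDBC driver.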
I am trying to deploy a connect-standalone job to stream from an MSSQL server, but I am facing an issue (Kafka Connect is part of my Ambari deployment, not Docker). This is the properties file I am using:
name=JdbcSourceConnector
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
connection.user=ue
connection.password=pw
tasks.max=1
connection.url=jdbc:sqlserver://servername
topic.prefix=iblog
query=SELECT * FROM IB_WEBLOG_DUMMY_small
value.converter=org.apache.kafka.connect.json.JsonConverter
key.converter=org.apache.kafka.connect.json.JsonConverter
poll.interval.ms=5000
table.poll.interval.ms=120000
mode=incrementing
incrementing.column.name=ID
I have added the jar file sqljdbc42.jar to /usr/share/java
and have run export CLASSPATH=/usr/share/java/*
however I still run into the error Failed to find any class that implements Connector and which name matches io.confluent.connect.jdbc.JdbcSourceConnector
Am I doing anything wrong or can I check something else?
Kafka-Connect is part of my Ambari deployment
That would imply you are using a Hortonworks (HDP) installation.
You need to:
git clone https://github.com/confluentinc/kafka-connect-jdbc/
Check out a release branch that ideally matches your Kafka version. For example, branch v3.1.2 corresponds to Kafka 0.10.1.1.
Running mvn clean package will generate some folders in target/ of that project.
SCP those files to all Kafka Connect workers in your cluster, into /usr/hdp/current/kafka/.../share/java/kafka-connect-jdbc (create this directory if it does not exist).
Restart Kafka processes to pick up the new CLASSPATH settings
You may need some extra Confluent packages that JDBC connect depends on
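On Kafka 0.11.0 and later, Connect can also discover connectors through the plugin.path worker setting instead of CLASSPATH exports. A sketch of the relevant line (the directory is an assumption; point it at the folder holding kafka-connect-jdbc and its dependencies):

```
# worker config (e.g. connect-standalone.properties), not the
# connector's own properties file
plugin.path=/usr/share/java
```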
You need to include the kafka-connect-jdbc jar file, which contains the io.confluent.connect.jdbc.JdbcSourceConnector class.
If you are using maven, you can add it as a dependency:
Add the following repository to your project if you haven't done so yet:
<repository>
    <id>confluent</id>
    <url>http://packages.confluent.io/maven/</url>
</repository>
After this, add the following dependency:
<dependency>
    <groupId>io.confluent</groupId>
    <artifactId>kafka-connect-jdbc</artifactId>
    <version>3.3.0</version> <!-- or whichever version you want -->
</dependency>
https://github.com/confluentinc/kafka-connect-jdbc/issues/356
I too had the same problem, with the Couchbase connector not found:
ERROR Stopping after connector error (org.apache.kafka.connect.cli.ConnectStandalone:113) java.util.concurrent.ExecutionException: org.apache.kafka.connect.errors.ConnectException: Failed to find any class that implements Connector and which name matches com.couchbase.connect.kafka.CouchbaseSourceConnector
Setting CLASSPATH replaced the existing classpath, and I could not append to it.
I moved the required jar files from kafka-connect-couchbase/*.jar to /path/kafka_version/libs/, the folder where all the jar files are stored.
I met the same issue, and resolved it by running connect-standalone from the root folder of Confluent; in my case that was /opt/confluent-5.0.1.
I have used the JDBC API to connect to HiveServer2, referring here; it was successful, so for ease of access I thought of creating a webapp around it, using a JSP page as the front end to enter the server name and query. While all parameters are resolved correctly from the JSP page to the servlet, it throws an error while connecting to the Hive server. It is required
to place the libthrift and Hive JARs in the WEB-INF/lib directory; I placed them in both WEB-INF/lib and the classpath.
The issue is that the Hive jar comes first in WEB-INF/lib, and as it does not have the "org.apache.thrift.protocol.TProtocol.getScheme()" method I keep getting a no-such-method error. I referred here and here and moved the libthrift jar to WEB-INF/classes, but it didn't help:
Jar versions: libthrift-0.9.3 and hive-0.4.1
If only your Hive version were more recent, you could...
get a recent driver such as
http://central.maven.org/maven2/org/apache/hive/hive-jdbc/2.0.0/
I mean the standalone JAR, to get rid of your CLASSPATH issues
although that "standalone" promise is not 100% true because you still need a couple of extra Hadoop JARS, cf. Where is Apache Hive JDBC driver for download?
But alas, you cannot use a driver that is more recent than your server -- here V0.13, i.e. the last version without a "standalone" driver JAR.
So you've got a whole bunch of Hive JARs to collect, plus a couple of Hadoop JARs and various dependencies such as libfb303-*.jar and libthrift-*.jar
$ unzip -l libthrift-0.9.2.jar | grep org.apache.thrift.protocol.TProtocol.class
     2958  11-05-2014 03:47   org/apache/thrift/protocol/TProtocol.class
I have many jar files in my directory:
some-lib-2.0.jar
some-lib-2.1-SNAPSHOT.jar
some-lib-3.RELEASE.jar
some-lib-R8.jar
some-lib-core-1.jar
some-lib-2.patch2.jar
some-lib-2-alpha-4.jar
some-lib.jar
some-lib2-4.0.jar
How can I get library name and version from file name?
Is the regex ((?:(?!-\d)\S)+)-(\S*\d\S*(?:-SNAPSHOT)?).jar$ valid for extracting the name and version?
The version number in a JAR file name is merely a convention and a default for Maven-built JARs. It may have been overridden, and reading the version number from the file name alone is not always reliable.
A more reliable way to read the version number is to look inside the JAR file. Here you have a couple of options, depending on how the JAR was built:
look at META-INF/maven/.../pom.properties and pom.xml and read the version from there - this should be present for Maven-built binaries
sometimes the version number is present in META-INF/MANIFEST.MF under the Specification-Version or Implementation-Version attributes
If this fails, fall back to reading the version number from the JAR name, since no other information is available.
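A sketch of the manifest-based option using java.util.jar; to stay self-contained it first writes a small sample jar with a version attribute and then reads it back (the file name and version value are made up):

```java
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.jar.Attributes;
import java.util.jar.JarFile;
import java.util.jar.JarOutputStream;
import java.util.jar.Manifest;

public class JarVersionReader {
    public static String readVersion(String path) throws IOException {
        try (JarFile jar = new JarFile(path)) {
            Attributes attrs = jar.getManifest().getMainAttributes();
            // Prefer Implementation-Version, fall back to Specification-Version.
            String v = attrs.getValue(Attributes.Name.IMPLEMENTATION_VERSION);
            if (v == null) {
                v = attrs.getValue(Attributes.Name.SPECIFICATION_VERSION);
            }
            return v;
        }
    }

    public static void main(String[] args) throws IOException {
        // Build a throwaway jar with a version attribute so the reader
        // has something to inspect.
        Manifest mf = new Manifest();
        mf.getMainAttributes().putValue("Manifest-Version", "1.0");
        mf.getMainAttributes().putValue("Implementation-Version", "2.1.0");
        try (JarOutputStream out =
                 new JarOutputStream(new FileOutputStream("sample.jar"), mf)) {
            // no entries needed
        }
        System.out.println(readVersion("sample.jar"));
    }
}
```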
Naming policies can differ across libraries, so you cannot extract name/version from every file name with one rule; for details you should check the project docs.
In the case of Maven, you can configure the final name of the built artifact with the finalName option in pom.xml. The Maven docs provide a nice introduction to the POM structure. Below is the example from the docs:
<build>
...
<finalName>${artifactId}-${version}</finalName>
...
</build>
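As a quick check of the regex from the question against a few of the listed names, here is a sketch; note the dot before jar is escaped here, which the original pattern left as a wildcard, and as said above no single rule covers every naming scheme:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class JarNameParser {
    // Pattern from the question: name up to the first "-<digit>",
    // then a version that must contain at least one digit.
    private static final Pattern P =
        Pattern.compile("((?:(?!-\\d)\\S)+)-(\\S*\\d\\S*(?:-SNAPSHOT)?)\\.jar$");

    // Returns {name, version}, or null when the name has no version part.
    public static String[] parse(String fileName) {
        Matcher m = P.matcher(fileName);
        return m.matches() ? new String[] {m.group(1), m.group(2)} : null;
    }

    public static void main(String[] args) {
        System.out.println(java.util.Arrays.toString(parse("some-lib-2.1-SNAPSHOT.jar")));
        System.out.println(java.util.Arrays.toString(parse("some-lib2-4.0.jar")));
        System.out.println(parse("some-lib.jar")); // no version in the name -> null
    }
}
```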
I'm trying to load test data into a test DB during a Maven build for integration testing. persistence.xml is copied to target/test-classes/META-INF/ correctly, but I get this exception when the test is run:
javax.persistence.PersistenceException: No Persistence provider for EntityManager named aimDatabase
It looks like it's not finding or loading persistence.xml.
Just solved the same problem with a Maven/Eclipse based JPA project.
I had my META-INF directory under src/main/java, with the consequence that it was not copied to the target directory before the test phase. I think the JPA facet put my META-INF/persistence.xml file there, which turned out to be the root of my problem.
Moving this directory to src/main/resources solved the problem and ensured that META-INF/persistence.xml was present in target/classes when the tests were run.
I'm using Maven 2, and I had forgotten to add this dependency to my pom.xml file:
<dependency>
    <groupId>org.hibernate</groupId>
    <artifactId>hibernate-entitymanager</artifactId>
    <version>3.4.0.GA</version>
</dependency>
If this is on Windows, you can use Sysinternals' procmon to find out if it's checking the right path.
Just filter by Path -> Contains -> persistence.xml. Procmon will pick up any attempt to open a file named persistence.xml, and you can check the path or paths that get tried.
See here for more detail on procmon: http://technet.microsoft.com/en-us/sysinternals/bb896645.aspx
I had the same problem, and it wasn't that it couldn't find the persistence.xml file, but that it couldn't find the provider specified in the XML.
Ensure that you have the correct JPA provider dependencies and the correct provider definition in your XML file,
e.g. <provider>oracle.toplink.essentials.PersistenceProvider</provider>
With Maven, I had to install the two toplink-essentials jars locally, as there were no public repositories that held the dependencies.
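For reference, a minimal persistence.xml sketch with the provider element in place, using the unit name from the question and the TopLink provider mentioned above (substitute your own provider and add your classes and properties):

```xml
<persistence xmlns="http://java.sun.com/xml/ns/persistence" version="1.0">
    <persistence-unit name="aimDatabase">
        <provider>oracle.toplink.essentials.PersistenceProvider</provider>
        <!-- entity classes and provider properties go here -->
    </persistence-unit>
</persistence>
```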
Is your persistence.xml located in src/test/resources? Because I was facing similar problems.
Everything works fine as long as my persistence.xml is located in src/main/resources.
If I move persistence.xml to src/test/resources, nothing works anymore.
The only helpful but sad answer is here: http://jira.codehaus.org/browse/SUREFIRE-427
Seems like it is not possible right now for unclear reasons. :-(
We got the same problem, did some tweaking on the project, and finally found the following (a clearer error description):
at oracle.toplink.essentials.ejb.cmp3.persistence.
PersistenceUnitProcessor.computePURootURL(PersistenceUnitProcessor.java:248)
With that information we recalled a primary rule:
NO WHITE SPACES IN PATH NAMES!!!
Try this. It works for us.
Maybe some day this will be fixed.
Hope this works for you. Good luck.