I want to add my company's Artifactory to the Zeppelin Spark interpreter and am trying to follow this document.
The URL of our Artifactory looks like this:
http://artifactory.thecompany.com:8081/artifactory/
Access is not restricted to a specific user, and artifacts are downloadable both from my machine and from the machine where Zeppelin is running (I verified this with curl).
I've copied the artifact ID from my build.gradle, so I am pretty sure it is correct. However, when I try to add the artifact that should be found in my company's Artifactory, I get the error:
Error setting properties for interpreter 'spark.spark': Could not find
artifact
com.feedvisor.dataplatform:data-platform-schema-scala:jar:3.0.19-SNAPSHOT
in central (http://repo1.maven.org/maven2/)
This error message suggests that Zeppelin did not even try to look for my dependency in the custom repository.
I tried playing with the Artifactory URL, using:
http://artifactory.thecompany.com:8081/artifactory/
http://artifactory.thecompany.com:8081/
as well as with "snapshot" property of "Add New Repository" form (using true and false) but nothing helped. The error message does not disappear and classes from the referenced artifact are not found.
Thanks in advance.
For Zeppelin to use your company's repo by default you can set ZEPPELIN_INTERPRETER_DEP_MVNREPO in your ${Z_HOME}/conf/zeppelin-env.sh:
export ZEPPELIN_INTERPRETER_DEP_MVNREPO=http://artifactory.thecompany.com:8081/artifactory/
Alternatively, you can use the Dynamic Dependency Loading feature of the notebook:
%dep
z.reset()
z.addRepo("Artifactory").url("http://artifactory.thecompany.com:8081/artifactory/").snapshot()
z.load("com.feedvisor.dataplatform:data-platform-schema-scala:3.0.19-SNAPSHOT")
Related
I am trying to deploy a connect-standalone job to stream from an MSSQL server, but I am facing an issue (Kafka Connect is part of my Ambari deployment, not Docker). This is the properties file I am using:
name=JdbcSourceConnector
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
connection.user=ue
connection.password=pw
tasks.max=1
connection.url=jdbc:sqlserver://servername
topic.prefix=iblog
query=SELECT * FROM IB_WEBLOG_DUMMY_small
value.converter=org.apache.kafka.connect.json.JsonConverter
key.converter=org.apache.kafka.connect.json.JsonConverter
poll.interval.ms=5000
table.poll.interval.ms=120000
mode=incrementing
incrementing.column.name=ID
I have added the jar file sqljdbc42.jar to /usr/share/java
and have run export CLASSPATH=/usr/share/java/*
but I still run into the error: Failed to find any class that implements Connector and which name matches io.confluent.connect.jdbc.JdbcSourceConnector
Am I doing something wrong, or is there something else I can check?
Kafka-Connect is part of my Ambari deployment
That would imply you are using a Hortonworks (HDP) installation.
You need to:
git clone https://github.com/confluentinc/kafka-connect-jdbc/
Check out a release branch that ideally matches your Kafka version. For example, branch v3.1.2 corresponds to Kafka 0.10.1.1.
Run mvn clean package, which will generate some folders under target/ in that project.
SCP those files to all Kafka Connect workers in your cluster, into /usr/hdp/current/kafka/.../share/java/kafka-connect-jdbc (create this directory if it does not exist).
Restart the Kafka processes to pick up the new CLASSPATH settings.
You may also need some extra Confluent packages that the JDBC connector depends on. A rough, consolidated sketch of these steps is shown below.
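A minimal shell sketch of those steps, assuming the v3.1.2 branch and two illustrative worker hostnames; the exact HDP path segment is elided above, so TARGET_DIR must be adjusted to your actual Kafka install layout:
git clone https://github.com/confluentinc/kafka-connect-jdbc/
cd kafka-connect-jdbc
git checkout v3.1.2                    # pick the branch matching your Kafka version
mvn clean package                      # build output lands under target/
TARGET_DIR=/usr/hdp/current/kafka/.../share/java/kafka-connect-jdbc   # the "..." segment depends on your layout
for host in worker1 worker2; do        # hypothetical Kafka Connect worker hostnames
  ssh "$host" "mkdir -p $TARGET_DIR"
  scp target/kafka-connect-jdbc-*.jar "$host:$TARGET_DIR/"
done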
You need to include the kafka-connect-jdbc jar file, which contains the io.confluent.connect.jdbc.JdbcSourceConnector class.
If you are using Maven, you can add it as a dependency.
Add the following repository to your project if you haven't done so yet:
<repository>
<id>confluent</id>
<url>http://packages.confluent.io/maven/</url>
</repository>
After this, add the following dependency:
<dependency>
<groupId>io.confluent</groupId>
<artifactId>kafka-connect-jdbc</artifactId>
<version>3.3.0 (or whatever version you want)</version>
</dependency>
https://github.com/confluentinc/kafka-connect-jdbc/issues/356
I had the same problem too, with the Couchbase connector not being found:
ERROR Stopping after connector error (org.apache.kafka.connect.cli.ConnectStandalone:113) java.util.concurrent.ExecutionException: org.apache.kafka.connect.errors.ConnectException: Failed to find any class that implements Connector and which name matches com.couchbase.connect.kafka.CouchbaseSourceConnector
Setting CLASSPATH was overwriting the existing classpath, and I couldn't get appending to it to work.
Instead, I moved the required jar files (kafka-connect-couchbase/*.jar) to /path/kafka_version/libs/.
libs is the folder where all the jar files are stored.
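In other words, something along these lines (the paths are the placeholders used above; substitute your actual install locations):
cp /path/kafka-connect-couchbase/*.jar /path/kafka_version/libs/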
I ran into the same issue and resolved it by running connect-standalone from the root folder of the Confluent installation, which in my case was /opt/confluent-5.0.1.
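For example (a sketch assuming the stock Confluent 5.x directory layout; the connector properties file name is hypothetical):
cd /opt/confluent-5.0.1
bin/connect-standalone etc/kafka/connect-standalone.properties /path/to/jdbc-source.properties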
I am not able to add the Maven artifact "ru.stqa.selenium" in Eclipse.
I downloaded the catalog file from https://github.com/barancev/webdriver-testng-archetype.
The steps I followed are: Eclipse -> Window -> Preferences -> Maven -> Archetypes -> Add Local Catalog.
On the Local Archetype Catalog popup I entered:
Catalog file location: the local path "D:\Software\Selenium\webdriver-testng-archetype-master\src\main\resources\archetype-resources".
Description: some name
Now I am getting the warning message "Archetype catalog is empty".
If I go with Add Remote Catalog with the remote location "http://repo1.maven.org/maven2/archetype-catalog.xml", it works fine.
I'm curious to know the reason for this strange behavior.
Effective January 15, 2020, the Central Repository no longer supports insecure communication over plain HTTP and requires that all requests to the repository are encrypted over HTTPS.
If you're receiving this error, then you need to replace all URL references to Maven Central with their canonical HTTPS counterparts:
Replace http://repo1.maven.org/maven2/ with https://repo1.maven.org/maven2/
Replace http://repo.maven.apache.org/maven2/ with https://repo.maven.apache.org/maven2/
If for any reason your environment cannot support HTTPS, you have the option of using our dedicated insecure endpoint at http://insecure.repo1.maven.org/maven2/
For further context around the move to HTTPS, please see https://blog.sonatype.com/central-repository-moving-to-https.
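For example, a <repository> entry in a pom.xml that still points at the old address would become something like this (the id shown is just illustrative):
<repository>
  <id>central</id>
  <url>https://repo1.maven.org/maven2/</url>
</repository>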
If I have a Maven Artifact information (GroupId, ArtifactId, Version) how can I programmatically (using Java) retrieve that Artifact from my local repository?
Specifically, I need to be able to connect to the Maven repository and create/retrieve an org.apache.maven.artifact.Artifact so I can retrieve the file associated with the Artifact.
I have looked into the m2e source code, but MavenImpl.java (which provides artifact resolution) is far more complex than what I need, and it is difficult to understand how the connection to the repository works.
You'll probably want to look at Aether. See the Wiki for examples.
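A minimal sketch of resolving from the local repository with Eclipse Aether (note that Aether uses its own Artifact type rather than org.apache.maven.artifact.Artifact; the local repository path and coordinates below are just examples, and the maven-resolver/aether API, basic connector, and file transport jars are assumed to be on the classpath):
import java.io.File;

import org.apache.maven.repository.internal.MavenRepositorySystemUtils;
import org.eclipse.aether.DefaultRepositorySystemSession;
import org.eclipse.aether.RepositorySystem;
import org.eclipse.aether.artifact.DefaultArtifact;
import org.eclipse.aether.connector.basic.BasicRepositoryConnectorFactory;
import org.eclipse.aether.impl.DefaultServiceLocator;
import org.eclipse.aether.repository.LocalRepository;
import org.eclipse.aether.resolution.ArtifactRequest;
import org.eclipse.aether.resolution.ArtifactResult;
import org.eclipse.aether.spi.connector.RepositoryConnectorFactory;
import org.eclipse.aether.spi.connector.transport.TransporterFactory;
import org.eclipse.aether.transport.file.FileTransporterFactory;

public class LocalArtifactResolver {
    public static void main(String[] args) throws Exception {
        // Wire up the repository system via the service locator, as in the Aether demos.
        DefaultServiceLocator locator = MavenRepositorySystemUtils.newServiceLocator();
        locator.addService(RepositoryConnectorFactory.class, BasicRepositoryConnectorFactory.class);
        locator.addService(TransporterFactory.class, FileTransporterFactory.class);
        RepositorySystem system = locator.getService(RepositorySystem.class);

        // Point the session at the local repository (~/.m2/repository is an assumption).
        DefaultRepositorySystemSession session = MavenRepositorySystemUtils.newSession();
        LocalRepository localRepo = new LocalRepository(
            System.getProperty("user.home") + "/.m2/repository");
        session.setLocalRepositoryManager(system.newLocalRepositoryManager(session, localRepo));

        // Resolve by coordinates; with no remote repositories given, only the local repo is used.
        ArtifactRequest request = new ArtifactRequest();
        request.setArtifact(new DefaultArtifact("junit:junit:4.10"));
        ArtifactResult result = system.resolveArtifact(session, request);

        File file = result.getArtifact().getFile();
        System.out.println("Resolved to " + file);
    }
}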
You can construct a URL from the given information and download the file (note: replace the '.' in the <groupId> with '/'):
<repositoryUrl>/<groupId>/<artifactId>/<version>/<artifactId>-<version>.<type>
This is how we do it in jcabi-aether:
// Local repository taken from the current Maven session
final File repo = this.session.getLocalRepository().getBasedir();
// Resolve the artifact (and its runtime dependencies) through Aether
final Collection<Artifact> deps = new Aether(this.getProject(), repo).resolve(
    new DefaultArtifact("junit", "junit-dep", "", "jar", "4.10"),
    JavaScopes.RUNTIME
);
Give it a list of remote repositories, the location of a local repo, and the Maven coordinates of the artifact. As the name suggests, the library uses Apache Aether from Sonatype.
I've noticed that a Maven build is failing because a particular dependency cannot be downloaded. In the build log I get a lot of error messages like this:
[WARNING] *** CHECKSUM FAILED - Checksum failed on download: local = '10c2aa7dfc8577cb32ee654d2cd5b270d478b823'; remote = '<!DOCTYPE' - RETRYING
Downloading: http://maven.glassfish.org/content/groups/public//org/apache/maven/maven-archiver/2.0.1/maven-archiver-2.0.1.pom
If I inspect the URL it's trying to fetch from, I can see that this resource does indeed seem to be broken. I guess this is what passes for a 404 in Oracle-Land. The odd thing is that if I look at the parent directory I can see an entry for that missing file, but with no file size.
Clearly something is wrong on the remote server, and that's causing all my downloaded artifacts to contain a load of HTML junk. Deleting the files and re-running just causes the exact same garbage to be downloaded from Oracle again.
The question is: how can I work around this problem? Unfortunately this is a project that I've just dredged up from the murky mists of time, and none of my colleagues have the components on their PCs, otherwise I'd just copy them from there. Can I instruct Maven to use some alternative source to obtain the missing jars?
You could probably try one of these three options:
Remove the offending repository declaration if you can edit the POM it is defined in. This will solve the problem for everyone who needs to develop on the project.
Override that specific repository's URL in your settings.xml. You will need to find which POM the repository is defined in and note its id. You can then use something like:
<mirror>
<id>temporary-override</id>
<url>http://repo1.maven.org/maven2/</url>
<mirrorOf>ID_OF_REPO</mirrorOf>
</mirror>
If all else fails, override all repositories:
<mirror>
<id>temporary-override</id>
<url>http://repo1.maven.org/maven2/</url>
<mirrorOf>*</mirrorOf>
</mirror>
The Maven site has some documentation about repository mirrors.
I'm trying to write a custom Maven plugin that will parse the SCM changelog of the current Maven project, as well as any of its direct dependencies.
I know that MavenProject.getScm().getConnection() returns the connection URL of the current project.
However, I would also like to retrieve the connection URL of any direct dependencies. (They are already defined in each dependency's pom.xml)
I looked at MavenProject.getDependencies(), but it returns a List of Dependency objects, which don't seem to contain the information I need.
Does anyone know how I can retrieve this information?
You will have to get an instance of MavenProject for each of the dependencies, e.g. obtain an instance of MavenProjectBuilder and build a MavenProject instance with it.
See the following question for a sample code snippet for resolving an individual dependency.
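A rough sketch of that approach inside a Mojo, using the Maven 2-era MavenProjectBuilder API (the goal name and the javadoc-tag parameter/component wiring below are assumptions; newer Maven versions would use ProjectBuilder and annotations instead):
import java.util.List;

import org.apache.maven.artifact.Artifact;
import org.apache.maven.artifact.factory.ArtifactFactory;
import org.apache.maven.artifact.repository.ArtifactRepository;
import org.apache.maven.model.Dependency;
import org.apache.maven.plugin.AbstractMojo;
import org.apache.maven.plugin.MojoExecutionException;
import org.apache.maven.project.MavenProject;
import org.apache.maven.project.MavenProjectBuilder;

/**
 * Logs the SCM connection URL of each direct dependency.
 *
 * @goal scm-report
 */
public class DependencyScmMojo extends AbstractMojo {

    /**
     * @parameter expression="${project}"
     * @required
     * @readonly
     */
    private MavenProject project;

    /** @component */
    private MavenProjectBuilder projectBuilder;

    /** @component */
    private ArtifactFactory artifactFactory;

    /** @parameter expression="${localRepository}" */
    private ArtifactRepository localRepository;

    /** @parameter expression="${project.remoteArtifactRepositories}" */
    private List remoteRepositories;

    public void execute() throws MojoExecutionException {
        try {
            for (Object o : project.getDependencies()) {
                Dependency dep = (Dependency) o;
                // Build a "project" artifact (the dependency's POM) and read it from the repositories.
                Artifact pom = artifactFactory.createProjectArtifact(
                        dep.getGroupId(), dep.getArtifactId(), dep.getVersion());
                MavenProject depProject = projectBuilder.buildFromRepository(
                        pom, remoteRepositories, localRepository);
                if (depProject.getScm() != null) {
                    getLog().info(dep.getArtifactId() + " -> " + depProject.getScm().getConnection());
                }
            }
        } catch (Exception e) {
            throw new MojoExecutionException("Could not build dependency POMs", e);
        }
    }
}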