Exception when running Maven - Java

I imported the example from the "how to use Storm" book, and when I run it I get this:
[INFO] An exception occured while executing the Java class. null 0
I used this command in the terminal:
mvn -f pom.xml compile exec:java -Dstorm.topology=TopologyMain
Code:
import spouts.WordReader;
import backtype.storm.Config;
import backtype.storm.LocalCluster;
import backtype.storm.topology.TopologyBuilder;
import backtype.storm.tuple.Fields;
import bolts.WordCounter;
import bolts.WordNormalizer;

public class TopologyMain {
    public static void main(String[] args) throws InterruptedException {
        // Topology definition
        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("word-reader", new WordReader());
        builder.setBolt("word-normalizer", new WordNormalizer())
               .shuffleGrouping("word-reader");
        builder.setBolt("word-counter", new WordCounter(), 1)
               .fieldsGrouping("word-normalizer", new Fields("word"));

        // Configuration
        Config conf = new Config();
        conf.put("wordsFile", args[0]);
        conf.setDebug(false);

        // Topology run
        conf.put(Config.TOPOLOGY_MAX_SPOUT_PENDING, 1);
        LocalCluster cluster = new LocalCluster();
        cluster.submitTopology("Getting-Started-Toplogie", conf, builder.createTopology());
        Thread.sleep(1000);
        cluster.shutdown();
    }
}
I also tried this:
mvn -f pom.xml clean install
and then this:
mvn exec:java -Dexec.mainClass="TopologyMain" -Dexec.args="src/main/resources/words.txt"
which fails with this error:
[0] Inside the definition for plugin 'exec-maven-plugin' specify the following:
<configuration> ... <mainClass>VALUE</mainClass></configuration>

You're using the -D argument incorrectly. It should instead be:
mvn -f pom.xml compile exec:java -Dexec.mainClass=storm.topology.TopologyMain
This specifies the main class to execute. For that fully qualified name to resolve, the class must be declared in package storm.topology, which is not evident in the code you pasted.
Also, I don't know why you're explicitly specifying your POM file. You should create a pom.xml file in the project's root directory, and then you won't have to specify it on the command line. Ideally you should be typing:
mvn clean install
mvn exec:java -Dexec.mainClass="storm.topology.TopologyMain"
This will clean your project, compile it, install it into your local repository (downloading any dependencies), and then execute the project with TopologyMain as the entry point.
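Alternatively, the main class can live in the POM so the command line stays short; this is exactly what the error message quoted above ("Inside the definition for plugin 'exec-maven-plugin' specify the following...") is asking for. A minimal sketch of the exec-maven-plugin configuration, assuming the storm.topology package as above:

<build>
  <plugins>
    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>exec-maven-plugin</artifactId>
      <configuration>
        <!-- assumed fully qualified name; use your class's actual package -->
        <mainClass>storm.topology.TopologyMain</mainClass>
      </configuration>
    </plugin>
  </plugins>
</build>

With that in your pom.xml, mvn exec:java alone picks up the main class.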

Related

Cucumber options are ignored when run via docker run in a Jenkinsfile, but work locally

When I launch Cucumber Appium tests locally with these arguments:
-ea -Dplatform=android -Dcucumber.options="--tags @mytag"
it works, but when I launch the same from a docker run, it ignores the cucumber options.
I need to launch it from a Jenkins job and run it in docker.
In my Jenkinsfile:
sh "docker run --env JAVA_OPTS='-ea -Dplatform=$platform -Dcucumber.options=$cucumberOptions'...
In the Jenkins pipeline log:
docker run --env 'JAVA_OPTS=-ea -Dplatform=android -Dcucumber.options="--tags @Check_Pricing_Payment_Org"'...
My TestRunner class:
@RunWith(Cucumber.class)
@CucumberOptions(
    features = {"src/test/resources/functionalTests/"},
    plugin = {"pretty", "json:target/cucumber-reports/Cucumber.json", "html:target/cucumber-reports/htmlReports.html"},
    glue = {"stepDefinitions"}
)
public class TestRunner {
}
Any help or clue will be appreciated. Thank you!
EDIT
As a workaround, we are now using this in our Jenkinsfile:
script {
    if (env.STEP_TO_RUN.toBoolean()) {
        stage('First stage') {
            withMaven(maven: 'mvn') {
                sh 'export GOOGLE_APPLICATION_CREDENTIALS="whatevercredentials.json" && mvn clean test -Dcucumber.options="--tags @MyTag --tags @OtherTag"'
            }
        }
    }
}
And in our CI job, STEP_TO_RUN was added as a boolean parameter.
This way we made it work in gcloud. Hope this helps someone!
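For anyone who wants to keep the docker run approach instead: Maven itself reads MAVEN_OPTS, not JAVA_OPTS (unless the image's entrypoint forwards it), and Cucumber picks up cucumber.options as a system property, which the workaround above passes directly on the mvn command line. A sketch of the same idea inside docker run (the image name is a placeholder):

docker run my-test-image mvn clean test -Dplatform=android -Dcucumber.options="--tags @mytag"

This sidesteps the nested quoting through --env entirely, which is one likely place the options were being lost.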

Jenkins - mvn not found

Hello, I'm new to Jenkins and am getting this issue. I'm using Jenkins on Azure.
mvn clean package
/var/lib/jenkins/workspace/vcc#tmp/durable-b5407f14/script.sh: 2:
/var/lib/jenkins/workspace/vcc#tmp/durable-b5407f14/script.sh: mvn: not found
My Jenkinsfile:
node {
    stage('init') {
        checkout scm
    }
    stage('build') {
        sh '''
        mvn clean package
        cd target
        cp ../src/main/resources/web.config web.config
        cp todo-app-java-on-azure-1.0-SNAPSHOT.jar app.jar
        zip todo.zip app.jar web.config
        '''
    }
    stage('deploy') {
        azureWebAppPublish azureCredentialsId: env.AZURE_CRED_ID,
            resourceGroup: env.RES_GROUP, appName: env.WEB_APP, filePath: "**/todo.zip"
    }
}
Can anybody help me resolve this mvn issue?
P.S. I'm following this tutorial:
https://learn.microsoft.com/en-us/azure/jenkins/tutorial-jenkins-deploy-web-app-azure-app-service
You may try adding the Maven tool to your pipeline. The tools directive is declarative-pipeline syntax, so it needs the pipeline/stages/steps wrappers:
pipeline {
    agent any
    tools {
        // 'M3' must match a Maven installation name configured in Jenkins
        maven 'M3'
    }
    stages {
        stage('init') {
            steps {
                checkout scm
            }
        }
        stage('build') {
            steps {
                sh '''
                mvn clean package
                cd target
                cp ../src/main/resources/web.config web.config
                cp todo-app-java-on-azure-1.0-SNAPSHOT.jar app.jar
                zip todo.zip app.jar web.config
                '''
            }
        }
        stage('deploy') {
            steps {
                azureWebAppPublish azureCredentialsId: env.AZURE_CRED_ID,
                    resourceGroup: env.RES_GROUP, appName: env.WEB_APP, filePath: "**/todo.zip"
            }
        }
    }
}
I added this line right before the sh command in the build stage: def mvnHome = tool name: 'Apache Maven 3.6.0', type: 'maven'
and instead of mvn you should use ${mvnHome}/bin/mvn.
Thanks to this YouTube video for the help.
node {
    stage('com') {
        // 'Apache Maven 3.6.0' must match the Maven tool name configured in Jenkins
        def mvnHome = tool name: 'Apache Maven 3.6.0', type: 'maven'
        sh "${mvnHome}/bin/mvn -B -DskipTests clean package"
    }
}
You may want to check whether Jenkins has the pipeline-maven plugin installed.
If you don't have it, search for and install the pipeline-maven plugin.
Once the plugin is installed, you can use Maven as follows:
node {
    stage('init') {
        // init sample
    }
    stage('build') {
        withMaven(maven: 'mvn') {
            sh "mvn clean package"
        }
    }
}

How to build a Hello World project with the Maven structure?

I am new to Java and have started reading about Maven, but the documentation is not clear to me. I have a simple Hello World project like so:
package main;

public class Hello {
    public static void main(String[] args) {
        System.out.println("Hello World");
    }
}
I want to restructure this as a Maven project. What should I do?
I downloaded and installed apache-maven-3.3.3-bin.zip and set the environment variables.
See this page.
Use this command:
mvn -B archetype:generate \
-DarchetypeGroupId=org.apache.maven.archetypes \
-DgroupId=com.mycompany.app \
-DartifactId=my-app
It will generate a new directory, my-app, containing a complete Maven project with the recommended layout:
$ find my-app/
my-app/
my-app//pom.xml
my-app//src
my-app//src/main
my-app//src/main/java
my-app//src/main/java/com
my-app//src/main/java/com/mycompany
my-app//src/main/java/com/mycompany/app
my-app//src/main/java/com/mycompany/app/App.java
my-app//src/test
my-app//src/test/java
my-app//src/test/java/com
my-app//src/test/java/com/mycompany
my-app//src/test/java/com/mycompany/app
my-app//src/test/java/com/mycompany/app/AppTest.java
Customize the groupId and artifactId according to your needs.
See Introduction to the Standard Directory Layout for more details about the layout.
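If you want to carry over your existing Hello class instead of the generated App.java, a sketch (the target path follows from your package main declaration; mvn resolves exec:java from the default org.codehaus.mojo plugin group):

cd my-app
mkdir -p src/main/java/main
cp ../Hello.java src/main/java/main/
mvn compile exec:java -Dexec.mainClass=main.Hello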

Java won't recognize classpath set by command line

Here is my code:
import java.net.URL;
import java.net.URLClassLoader;

public class App {
    public static void main(String[] args) {
        System.out.println("java.class.path=" + System.getProperty("java.class.path"));
        ClassLoader cl = ClassLoader.getSystemClassLoader();
        URL[] urls = ((URLClassLoader) cl).getURLs();
        for (URL url : urls) {
            System.out.println(url.getFile());
        }
    }
}
When I run this in Eclipse with the LWJGL and Slick2d libraries, I get, as expected:
java.class.path=/home/the-genius/workspace/classpath/bin:/home/the-geniu/workspace
/libs/slick/lib/slick.jar:/home/the-genius/workspace/libs/lwjgl/lwjgl-2.9.2/jar
/lwjgl.jar:/home/the-genius/workspace/libs/lwjgl/lwjgl-2.9.2/jar/lwjgl_util.jar:
/home/the-genius/workspace/libs/lwjgl/lwjgl-2.9.2/jar/jinput.jar
/home/the-genius/workspace/classpath/bin/
/home/the-genius/workspace/libs/slick/lib/slick.jar
/home/the-genius/workspace/libs/lwjgl/lwjgl-2.9.2/jar/lwjgl.jar
/home/the-genius/workspace/libs/lwjgl/lwjgl-2.9.2/jar/lwjgl_util.jar
/home/the-genius/workspace/libs/lwjgl/lwjgl-2.9.2/jar/jinput.jar
However, when I export it as a runnable jar and execute via
java -cp /home/the-genius/workspace/classpath/bin:/home/the-geniu/workspace
/libs/slick/lib/slick.jar:/home/the-genius/workspace/libs/lwjgl/lwjgl-2.9.2/jar
/lwjgl.jar:/home/the-genius/workspace/libs/lwjgl/lwjgl-2.9.2/jar/lwjgl_util.jar: -jar app.jar
I get
java.class.path=classpath.jar
/home/the-genius/classpath.jar
Is there any reason for this to be happening? How do I fix it?
I'm running on Ubuntu, if that makes a difference. I've also tried it using both OpenJDK-7 and Sun Java-7.
If you use the -cp and -jar options at the same time, the former is ignored. To fix it, you can either run without -jar (add your jar file to the classpath and call the main class: java -cp app.jar App) or add the classpath to the jar's manifest file.
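For the manifest route, a sketch of the relevant MANIFEST.MF entries (the jar paths are assumptions based on your Eclipse output; Class-Path entries are space-separated and resolved relative to app.jar's own directory, and long lines must be wrapped per the jar manifest spec):

Main-Class: App
Class-Path: libs/slick/lib/slick.jar libs/lwjgl/lwjgl-2.9.2/jar/lwjgl.jar libs/lwjgl/lwjgl-2.9.2/jar/lwjgl_util.jar libs/lwjgl/lwjgl-2.9.2/jar/jinput.jar

With that in place, a plain java -jar app.jar sees all four libraries without any -cp flag.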

Setting external jars to hadoop classpath

I am trying to add external jars to the hadoop classpath, but no luck so far.
I have the following setup
$ hadoop version
Hadoop 2.0.6-alpha
Subversion https://git-wip-us.apache.org/repos/asf/bigtop.git -r ca4c88898f95aaab3fd85b5e9c194ffd647c2109
Compiled by jenkins on 2013-10-31T07:55Z
From source with checksum 95e88b2a9589fa69d6d5c1dbd48d4e
This command was run using /usr/lib/hadoop/hadoop-common-2.0.6-alpha.jar
Classpath
$ echo $HADOOP_CLASSPATH
/home/tom/workspace/libs/opencsv-2.3.jar
I can see that the above HADOOP_CLASSPATH has been picked up by hadoop:
$ hadoop classpath
/etc/hadoop/conf:/usr/lib/hadoop/lib/:/usr/lib/hadoop/.//:/home/tom/workspace/libs/opencsv-2.3.jar:/usr/lib/hadoop-hdfs/./:/usr/lib/hadoop-hdfs/lib/:/usr/lib/hadoop-hdfs/.//:/usr/lib/hadoop-yarn/lib/:/usr/lib/hadoop-yarn/.//:/usr/lib/hadoop-mapreduce/lib/:/usr/lib/hadoop-mapreduce/.//
Command
$ sudo hadoop jar FlightsByCarrier.jar FlightsByCarrier /user/root/1987.csv /user/root/result
I tried the -libjars option as well:
$ sudo hadoop jar FlightsByCarrier.jar FlightsByCarrier /user/root/1987.csv /user/root/result -libjars /home/tom/workspace/libs/opencsv-2.3.jar
The stack trace:
14/11/04 16:43:23 INFO mapreduce.Job: Running job: job_1415115532989_0001
14/11/04 16:43:55 INFO mapreduce.Job: Job job_1415115532989_0001 running in uber mode : false
14/11/04 16:43:56 INFO mapreduce.Job: map 0% reduce 0%
14/11/04 16:45:27 INFO mapreduce.Job: map 50% reduce 0%
14/11/04 16:45:27 INFO mapreduce.Job: Task Id : attempt_1415115532989_0001_m_000001_0, Status : FAILED
Error: java.lang.ClassNotFoundException: au.com.bytecode.opencsv.CSVParser
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
at FlightsByCarrierMapper.map(FlightsByCarrierMapper.java:19)
at FlightsByCarrierMapper.map(FlightsByCarrierMapper.java:10)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:757)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:339)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:158)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:153)
Any help is highly appreciated.
Your external jar is missing on the nodes running the map tasks. You have to add it to the distributed cache to make it available. Try:
DistributedCache.addFileToClassPath(new Path("pathToJar"), conf);
I'm not sure in which version DistributedCache was deprecated, but from Hadoop 2.2.0 onward you can use:
job.addFileToClassPath(new Path("pathToJar"));
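In context, a sketch of where that call goes in a Tool-based driver's run() method (the HDFS path is an assumption; the jar must already be on HDFS so the task nodes can fetch it):

// in run(), before submitting; conf/job are the same objects you submit with
Configuration conf = getConf();                                // Configured.getConf()
Job job = Job.getInstance(conf, "FlightsByCarrier");
job.addFileToClassPath(new Path("/libs/opencsv-2.3.jar"));     // assumed HDFS location
// ... set mapper/reducer and input/output paths, then job.waitForCompletion(true)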
If you are adding the external JAR to the Hadoop classpath, it's better to copy your JAR into one of the existing directories that hadoop already looks at. On the command line, run "hadoop classpath", find a suitable folder, and copy your jar file to that location; hadoop will pick up the dependencies from there. This won't work on Cloudera and similar distributions, as you may not have read/write rights to the hadoop classpath folders.
It looks like you tried the -libjars option as well. Did you edit your driver class to implement the Tool interface? First make sure that you edit your driver class as shown below:
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class myDriverClass extends Configured implements Tool {

    public static void main(String[] args) throws Exception {
        int res = ToolRunner.run(new Configuration(), new myDriverClass(), args);
        System.exit(res);
    }

    public int run(String[] args) throws Exception {
        // Configuration processed by ToolRunner
        Configuration conf = getConf();
        Job job = new Job(conf, "My Job");
        ...
        ...
        return job.waitForCompletion(true) ? 0 : 1;
    }
}
Now edit your "hadoop jar" command as shown below; generic options that GenericOptionsParser understands, such as -libjars, must come before your application's own arguments:
hadoop jar YourApplication.jar [myDriverClass] -libjars path/to/jar/file args
Now let's understand what happens underneath. Basically, we handle the new command-line arguments by implementing the Tool interface. ToolRunner is used to run classes implementing the Tool interface. It works in conjunction with GenericOptionsParser to parse the generic hadoop command-line arguments and modify the Configuration of the Tool.
Within our main() we call ToolRunner.run(new Configuration(), new myDriverClass(), args) - this runs the given Tool via Tool.run(String[]), after parsing the given generic arguments. It uses the given Configuration, or builds one if it's null, and then sets the Tool's configuration to the possibly modified version of the conf.
Now, within the run method, calling getConf() gives us that modified version of the Configuration. So make sure that you have the line below in your code. If you implement everything else but still use Configuration conf = new Configuration(), nothing will work.
Configuration conf = getConf();
I tried setting the opencsv jar in the hadoop classpath, but it didn't work. We need to explicitly copy the jar into the classpath for this to work. It worked for me.
Below are the steps I followed on an HDP cluster: I copied my opencsv jar into the HBase libs and exported the classpath before running my jar.
Copy the external jars to the HDP libs. To run the opencsv jar:
1. Copy the opencsv jar into the directories /usr/hdp/2.2.9.1-11/hbase/lib/ and /usr/hdp/2.2.9.1-11/hadoop-yarn/lib:
sudo cp /home/sshuser/Amedisys/lib/opencsv-3.7.jar /usr/hdp/2.2.9.1-11/hbase/lib/
2. Give the file permissions:
sudo chmod 777 opencsv-3.7.jar
3. List the files to verify:
ls -lrt
4. Export the hadoop classpath so it includes the hbase classpath (e.g. export HADOOP_CLASSPATH=$(hbase classpath)).
5. Now run your jar. It will pick up the opencsv jar and execute properly.
I found another workaround: implementing ToolRunner, as below. With this approach hadoop accepts the command-line options, so we can avoid hard-coding the files added to the DistributedCache.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class FlightsByCarrier extends Configured implements Tool {

    public int run(String[] args) throws Exception {
        // Configuration processed by ToolRunner
        Configuration conf = getConf();

        // Create a JobConf using the processed conf
        JobConf job = new JobConf(conf, FlightsByCarrier.class);

        // Process custom command-line options
        Path in = new Path(args[1]);
        Path out = new Path(args[2]);

        // Specify various job-specific parameters
        job.setJobName("my-app");
        FileInputFormat.setInputPaths(job, in);
        FileOutputFormat.setOutputPath(job, out);
        job.setMapperClass(MyMapper.class);
        job.setReducerClass(MyReducer.class);

        // Submit the job, then poll for progress until the job is complete
        JobClient.runJob(job);
        return 0;
    }

    public static void main(String[] args) throws Exception {
        // Let ToolRunner handle generic command-line options
        int res = ToolRunner.run(new Configuration(), new FlightsByCarrier(), args);
        System.exit(res);
    }
}
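With that driver in place, the original command would look something like this (a sketch: -libjars must precede the application arguments, and note that after ToolRunner strips the generic options, the input and output paths may land at args[0] and args[1] rather than args[1] and args[2], depending on how the jar's main class is resolved):

hadoop jar FlightsByCarrier.jar FlightsByCarrier -libjars /home/tom/workspace/libs/opencsv-2.3.jar /user/root/1987.csv /user/root/result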
I found a very easy solution to the problem. Log in as root:
cd /usr/lib
find . -name "opencsv*.jar"
Note the location of the file. In my case, I found it under /usr/lib/hive/lib/opencsv*.jar.
Now run the command
hadoop classpath
The result shows the directories where hadoop searches for jar files.
Pick one directory and copy opencsv*.jar there.
In my case it worked.
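Put together as commands, assuming /usr/lib/hadoop/lib is one of the directories your hadoop classpath output lists (it appears in the question's output above; any writable directory from that list works):

cd /usr/lib
find . -name "opencsv*.jar"
hadoop classpath
sudo cp /usr/lib/hive/lib/opencsv*.jar /usr/lib/hadoop/lib/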
