I am using Spark 1.5 on Windows. I haven't installed any separate Hadoop binaries.
I am running a Master and a single Worker.
It's a simple HelloWorld program, shown below:
package com.java.spark;

import java.io.Serializable;
import java.util.Arrays;
import java.util.List;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.VoidFunction;

public class HelloWorld implements Serializable {

    private static final long serialVersionUID = -7926281781224763077L;

    public static void main(String[] args) {
        // Local mode
        //SparkConf sparkConf = new SparkConf().setAppName("HelloWorld").setMaster("local");
        SparkConf sparkConf = new SparkConf().setAppName("HelloWorld")
                .setMaster("spark://192.168.1.106:7077")
                .set("spark.eventLog.enabled", "true")
                .set("spark.eventLog.dir", "file:///D:/SparkEventLogsHistory");
                //.set("spark.eventLog.dir", "/work/");
                // tried many combinations above, but all give errors

        JavaSparkContext ctx = new JavaSparkContext(sparkConf);
        String[] arr = new String[] { "John", "Paul", "Gavin", "Rahul", "Angel" };
        List<String> inputList = Arrays.asList(arr);
        JavaRDD<String> inputRDD = ctx.parallelize(inputList);
        inputRDD.foreach(new VoidFunction<String>() {
            public void call(String input) throws Exception {
                System.out.println(input);
            }
        });
    }
}
The exception I am getting is:
Exception in thread "main" java.io.IOException: Cannot run program "cygpath": CreateProcess error=2, The system cannot find the file specified
at java.lang.ProcessBuilder.start(Unknown Source)
at org.apache.hadoop.util.Shell.runCommand(Shell.java:206)
at org.apache.hadoop.util.Shell.run(Shell.java:188)
at org.apache.hadoop.fs.FileUtil$CygPathCommand.<init>(FileUtil.java:412)
at org.apache.hadoop.fs.FileUtil.makeShellPath(FileUtil.java:438)
at org.apache.hadoop.fs.FileUtil.makeShellPath(FileUtil.java:465)
at org.apache.hadoop.fs.RawLocalFileSystem.execCommand(RawLocalFileSystem.java:592)
at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:584)
at org.apache.hadoop.fs.FilterFileSystem.setPermission(FilterFileSystem.java:420)
at org.apache.spark.scheduler.EventLoggingListener.start(EventLoggingListener.scala:130)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:541)
at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:61)
at com.java.spark.HelloWorld.main(HelloWorld.java:28)
Caused by: java.io.IOException: CreateProcess error=2, The system cannot find the file specified
at java.lang.ProcessImpl.create(Native Method)
at java.lang.ProcessImpl.<init>(Unknown Source)
at java.lang.ProcessImpl.start(Unknown Source)
... 13 more
16/04/01 20:13:24 INFO ShutdownHookManager: Shutdown hook called
Does anyone have any idea how to resolve this exception, so that Spark can pick up the event logs from a local directory?
If I don't configure eventLog.dir at all, the exception changes to:
Exception in thread "main" java.io.FileNotFoundException: File file:/H:/tmp/spark-events does not exist
at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:468)
at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:373)
at org.apache.spark.scheduler.EventLoggingListener.start(EventLoggingListener.scala:100)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:541)
at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:61)
at com.java.spark.HelloWorld.main(HelloWorld.java:28)
Java code to convert a Word document to PDF
package com.sonakshi;

import com.documents4j.api.DocumentType;
import com.documents4j.api.IConverter;
import com.documents4j.job.LocalConverter;
import org.apache.commons.io.output.ByteArrayOutputStream;

import java.io.*;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class Hello {
    public static void main(String[] args) throws IOException, ExecutionException, InterruptedException {
        ByteArrayOutputStream bo = new ByteArrayOutputStream();
        InputStream in = new BufferedInputStream(
                new FileInputStream("/home/sonakshi_user/Documents/WordDocToConvert.docx"));
        IConverter converter = LocalConverter.builder()
                .baseFolder(new File("/home/sonakshi_user/Documents/"))
                .workerPool(20, 25, 2, TimeUnit.SECONDS)
                .processTimeout(5, TimeUnit.SECONDS)
                .build();
        Future<Boolean> conversion = converter
                .convert(in).as(DocumentType.MS_WORD)
                .to(bo).as(DocumentType.PDF)
                .prioritizeWith(1000) // optional
                .schedule();
        conversion.get();
        try (OutputStream outputStream = new FileOutputStream("/home/sonakshi_user/Documents/Output.pdf")) {
            bo.writeTo(outputStream);
        } catch (IOException e) {
            e.printStackTrace();
        }
        in.close();
        bo.close();
    }
}
I am getting the exception below during the conversion:
log4j:WARN No appenders could be found for logger (org.zeroturnaround.exec.ProcessExecutor).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Exception in thread "main" java.lang.IllegalStateException: class com.documents4j.conversion.msoffice.MicrosoftWordBridge could not be created by a (File, long, TimeUnit) constructor
at com.documents4j.conversion.ExternalConverterDiscovery.make(ExternalConverterDiscovery.java:33)
at com.documents4j.conversion.ExternalConverterDiscovery.makeAll(ExternalConverterDiscovery.java:43)
at com.documents4j.conversion.ExternalConverterDiscovery.loadConfiguration(ExternalConverterDiscovery.java:86)
at com.documents4j.conversion.DefaultConversionManager.<init>(DefaultConversionManager.java:22)
at com.documents4j.job.LocalConverter.makeConversionManager(LocalConverter.java:79)
at com.documents4j.job.LocalConverter.<init>(LocalConverter.java:51)
at com.documents4j.job.LocalConverter$Builder.build(LocalConverter.java:186)
at com.sonakshi.Hello.main(Hello.java:20)
Caused by: java.lang.reflect.InvocationTargetException
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
at com.documents4j.conversion.ExternalConverterDiscovery.make(ExternalConverterDiscovery.java:31)
... 7 more
Caused by: com.documents4j.throwables.ConverterAccessException: Unable to run script: /home/sonakshi_user/Documents/word_start1713780447.vbs
at com.documents4j.conversion.AbstractExternalConverter.runNoArgumentScript(AbstractExternalConverter.java:76)
at com.documents4j.conversion.msoffice.AbstractMicrosoftOfficeBridge.runNoArgumentScript(AbstractMicrosoftOfficeBridge.java:51)
at com.documents4j.conversion.msoffice.AbstractMicrosoftOfficeBridge.tryStart(AbstractMicrosoftOfficeBridge.java:34)
at com.documents4j.conversion.msoffice.MicrosoftWordBridge.startUp(MicrosoftWordBridge.java:44)
at com.documents4j.conversion.msoffice.MicrosoftWordBridge.<init>(MicrosoftWordBridge.java:39)
... 12 more
Caused by: org.zeroturnaround.exec.ProcessInitException: Could not execute [cmd, /S, /C, ""/home/sonakshi_user/Documents/word_start1713780447.vbs""] in /home/sonakshi_user/Documents. Error=2, No such file or directory
at org.zeroturnaround.exec.ProcessInitException.newInstance(ProcessInitException.java:80)
at org.zeroturnaround.exec.ProcessExecutor.invokeStart(ProcessExecutor.java:1002)
at org.zeroturnaround.exec.ProcessExecutor.startInternal(ProcessExecutor.java:970)
at org.zeroturnaround.exec.ProcessExecutor.execute(ProcessExecutor.java:906)
at com.documents4j.conversion.AbstractExternalConverter.runNoArgumentScript(AbstractExternalConverter.java:72)
... 16 more
Caused by: java.io.IOException: Cannot run program "cmd" (in directory "/home/sonakshi_user/Documents"): error=2, No such file or directory
at java.base/java.lang.ProcessBuilder.start(ProcessBuilder.java:1128)
at java.base/java.lang.ProcessBuilder.start(ProcessBuilder.java:1071)
at org.zeroturnaround.exec.ProcessExecutor.invokeStart(ProcessExecutor.java:997)
... 19 more
Caused by: java.io.IOException: error=2, No such file or directory
at java.base/java.lang.ProcessImpl.forkAndExec(Native Method)
at java.base/java.lang.ProcessImpl.<init>(ProcessImpl.java:340)
at java.base/java.lang.ProcessImpl.start(ProcessImpl.java:271)
at java.base/java.lang.ProcessBuilder.start(ProcessBuilder.java:1107)
... 21 more
Exception in thread "Shutdown hook: com.documents4j.job.LocalConverter" java.lang.NullPointerException
at com.documents4j.job.LocalConverter.shutDown(LocalConverter.java:100)
at com.documents4j.job.ConverterAdapter$ConverterShutdownHook.run(ConverterAdapter.java:134)
I have included many JAR files as well.
Kindly specify if I need to use a specific one.
I am trying to submit a Spark program from cmd in Windows 10 with the command below:
spark-submit --class abc.Main --master local[2] C:\Users\arpitbh\Desktop\AmdocsIDE\workspace\Line_Count_Spark\target\Line_Count_Spark-0.0.1-SNAPSHOT.jar
but after running it I am getting an error:
17/05/02 11:56:57 INFO ShutdownHookManager: Deleting directory C:\Users\arpitbh\AppData\Local\Temp\spark-03f14dbe-1802-40ca-906c-af8de0f462f9
17/05/02 11:56:57 ERROR ShutdownHookManager: Exception while deleting Spark temp dir: C:\Users\arpitbh\AppData\Local\Temp\spark-03f14dbe-1802-40ca-906c-af8de0f462f9
java.io.IOException: Failed to delete: C:\Users\arpitbh\AppData\Local\Temp\spark-03f14dbe-1802-40ca-906c-af8de0f462f9
at org.apache.spark.util.Utils$.deleteRecursively(Utils.scala:1010)
at org.apache.spark.util.ShutdownHookManager$$anonfun$1$$anonfun$apply$mcV$sp$3.apply(ShutdownHookManager.scala:65)
at org.apache.spark.util.ShutdownHookManager$$anonfun$1$$anonfun$apply$mcV$sp$3.apply(ShutdownHookManager.scala:62)
at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
at org.apache.spark.util.ShutdownHookManager$$anonfun$1.apply$mcV$sp(ShutdownHookManager.scala:62)
at org.apache.spark.util.SparkShutdownHook.run(ShutdownHookManager.scala:216)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ShutdownHookManager.scala:188)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply(ShutdownHookManager.scala:188)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply(ShutdownHookManager.scala:188)
at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1951)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply$mcV$sp(ShutdownHookManager.scala:188)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply(ShutdownHookManager.scala:188)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply(ShutdownHookManager.scala:188)
at scala.util.Try$.apply(Try.scala:192)
at org.apache.spark.util.SparkShutdownHookManager.runAll(ShutdownHookManager.scala:188)
at org.apache.spark.util.SparkShutdownHookManager$$anon$2.run(ShutdownHookManager.scala:178)
at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:54)
I have also checked the Apache Spark JIRA; this defect is marked as resolved, but no solution is mentioned. Please help.
package abc;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class Main {

    /**
     * @param args
     */
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("Line_Count").setMaster("local[2]");
        JavaSparkContext ctx = new JavaSparkContext(conf);
        JavaRDD<String> textLoadRDD = ctx.textFile("C:/spark/README.md");
        System.out.println(textLoadRDD.count());
        System.getProperty("java.io.tmpdir");
    }
}
This is probably because you are instantiating the SparkContext without a SPARK_HOME or HADOOP_HOME set, which would let the program find winutils.exe in the bin directory. I found that when I went from
SparkConf conf = new SparkConf();
JavaSparkContext sc = new JavaSparkContext(conf);
to
JavaSparkContext sc = new JavaSparkContext("local[*]", "programname",
        System.getenv("SPARK_HOME"), System.getenv("JARS"));
the error went away.
I believe you are trying to execute the program without setting up the user variables HADOOP_HOME or SPARK_LOCAL_DIRS.
I had the same issue and resolved it by creating the variables, e.g. HADOOP_HOME -> C:\Hadoop, SPARK_LOCAL_DIRS -> C:\tmp\spark.
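The same effect can be had from code instead of user variables. A minimal sketch, assuming winutils.exe was unpacked under C:\Hadoop\bin and that C:\tmp\spark exists and is writable (both paths are illustrative, adjust them to your machine), set before the context is created:
// Hedged sketch: both paths below are assumptions about your setup.
System.setProperty("hadoop.home.dir", "C:\\Hadoop");   // folder containing bin\winutils.exe
SparkConf conf = new SparkConf()
        .setAppName("Line_Count")
        .setMaster("local[2]")
        .set("spark.local.dir", "C:\\tmp\\spark");      // writable scratch directory
JavaSparkContext ctx = new JavaSparkContext(conf);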
I have a pseudo-distributed Hadoop setup on a Linux machine. I have run a few examples in Eclipse, which is also installed on that Linux machine, and they worked fine. Now I want to run MapReduce jobs through Eclipse (installed on a Windows machine) and access the HDFS that is already present on my Linux machine. I have written the following driver code:
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class Windows_Driver extends Configured implements Tool {

    public static void main(String[] args) throws Exception {
        int exitcode = ToolRunner.run(new Windows_Driver(), args);
        System.exit(exitcode);
    }

    @Override
    public int run(String[] arg0) throws Exception {
        JobConf conf = new JobConf(Windows_Driver.class);
        conf.set("fs.defaultFS", "hdfs://<Ip address>:50070");

        FileInputFormat.setInputPaths(conf, new Path("sample"));
        FileOutputFormat.setOutputPath(conf, new Path("sam"));

        conf.setMapperClass(Win_Mapper.class);
        conf.setMapOutputKeyClass(Text.class);
        conf.setMapOutputValueClass(Text.class);
        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(Text.class);

        JobClient.runJob(conf);
        return 0;
    }
}
And the Mapper code:
import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

public class Win_Mapper extends MapReduceBase implements Mapper<LongWritable, Text, Text, Text> {

    @Override
    public void map(LongWritable key, Text value, OutputCollector<Text, Text> o, Reporter arg3) throws IOException {
        ...
        o.collect(... , ...);
    }
}
When I run this, I get the following error:
SEVERE: PriviledgedActionException as:miracle cause:java.io.IOException: Failed to set permissions of path: \tmp\hadoop-miracle\mapred\staging\miracle1262421749\.staging to 0700
Exception in thread "main" java.io.IOException: Failed to set permissions of path: \tmp\hadoop-miracle\mapred\staging\miracle1262421749\.staging to 0700
at org.apache.hadoop.fs.FileUtil.checkReturnValue(FileUtil.java:691)
at org.apache.hadoop.fs.FileUtil.setPermission(FileUtil.java:664)
at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:514)
at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:349)
at org.apache.hadoop.fs.FilterFileSystem.mkdirs(FilterFileSystem.java:193)
at org.apache.hadoop.mapreduce.JobSubmissionFiles.getStagingDir(JobSubmissionFiles.java:126)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:942)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:936)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:936)
at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:910)
at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1353)
at Windows_Driver.run(Windows_Driver.java:41)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
at Windows_Driver.main(Windows_Driver.java:16)
How can I rectify the error? And how can I access my HDFS remotely from Windows?
The submit() method on the Job creates an internal JobSubmitter instance, which performs all the validations, including input path and output path availability, file/directory creation permissions, and other checks. During the different phases of the MR job it creates temporary directories, under which it puts temporary files. The temporary directory is taken from core-site.xml via the property hadoop.tmp.dir. The issue on your system seems to be that the temporary directory is /tmp/ and the user running the MR job doesn't have permission to change its rwx status to 700. Provide appropriate permissions and rerun the job.
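If changing the permissions on the default temporary directory is not an option, a hedged alternative is to point hadoop.tmp.dir at a directory your user owns; the path below is only an assumption, any local directory with rwx for the submitting user will do:
JobConf conf = new JobConf(Windows_Driver.class);
// Assumed path: the .staging directory is created under hadoop.tmp.dir and
// chmod'ed to 700, which only succeeds when your user owns that path.
conf.set("hadoop.tmp.dir", "C:/hadoop-tmp");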
I'm running Hadoop on a school cluster. I get an exception in thread "main" with a ClassNotFoundException:
Exception in thread "main" java.lang.ClassNotFoundException: movielens.MovieLensDriver
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:423)
at java.lang.ClassLoader.loadClass(ClassLoader.java:356)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:264)
at org.apache.hadoop.util.RunJar.main(RunJar.java:153)
But I'm aware that I have to use the full package name in the command, and I have done so. Following is the command I used:
hadoop jar movielens.jar movielens.MovieLensDriver input output
Following is the code for my driver class:
package movielens;

import java.io.IOException;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.KeyValueTextInputFormat;
import org.apache.hadoop.mapred.jobcontrol.Job;
import org.apache.hadoop.mapred.jobcontrol.JobControl;

public class MovieLensDriver {

    public static class JobRunner implements Runnable {
        private JobControl control;

        public JobRunner(JobControl _control) {
            this.control = _control;
        }

        public void run() {
            this.control.run();
        }
    }

    public static void handleRun(JobControl control) throws InterruptedException {
        JobRunner runner = new JobRunner(control);
        Thread t = new Thread(runner);
        t.start();
        while (!control.allFinished()) {
            System.out.println("Still running...");
            Thread.sleep(5000);
        }
    }

    public static void main(String args[]) throws IOException, InterruptedException {
        System.out.println("Program started");
        if (args.length != 2) {
            System.err.println("Usage: MovieLensDriver <input path> <output path>");
            System.exit(-1);
        }

        JobConf conf1 = new JobConf(movielens.MovieLensDriver.class);
        conf1.setMapperClass(MoviePairsMapper.class);
        conf1.setReducerClass(MoviePairsReducer.class);
        conf1.setJarByClass(MovieLensDriver.class);
        FileInputFormat.addInputPath(conf1, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf1, new Path("temp"));
        conf1.setMapOutputKeyClass(Text.class);
        conf1.setMapOutputValueClass(Text.class);
        conf1.setOutputKeyClass(Text.class);
        conf1.setOutputValueClass(IntWritable.class);

        JobConf conf2 = new JobConf(MovieLensDriver.class);
        conf2.setMapperClass(MoviePairsCoOccurMapper.class);
        conf2.setReducerClass(MoviePairsCoOccurReducer.class);
        conf2.setJarByClass(MovieLensDriver.class);
        FileInputFormat.addInputPath(conf2, new Path("temp"));
        FileOutputFormat.setOutputPath(conf2, new Path(args[1]));
        conf2.setInputFormat(KeyValueTextInputFormat.class);
        conf2.setMapOutputKeyClass(Text.class);
        conf2.setMapOutputValueClass(IntWritable.class);
        conf2.setOutputKeyClass(Text.class);
        conf2.setOutputValueClass(IntWritable.class);

        Job job1 = new Job(conf1);
        Job job2 = new Job(conf2);

        JobControl jobControl = new JobControl("jobControl");
        jobControl.addJob(job1);
        jobControl.addJob(job2);
        job2.addDependingJob(job1);
        handleRun(jobControl);

        System.out.println("Program complete.");
        System.exit(0);
    }
}
It has been a frustrating search for the bug for the last 3 hours, and any help is appreciated.
You can try the -libjars option, which takes a jar and places it in the distributed cache. This makes the jar available to all of the job's task attempts. Notice that the -libjars argument takes a comma-separated list, not a colon- or semicolon-separated list.
export LIBJARS=/path/jars1,/path/jars2,/path/movielens.jar
hadoop jar movielens.jar movielens.MovieLensDriver -libjars ${LIBJARS} input output
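One caveat: -libjars is a generic option parsed by Hadoop's GenericOptionsParser, and it is only honored when the driver runs through ToolRunner. A minimal sketch of that wiring, offered as an assumption about what your driver would need (the two-job setup stays exactly as in your code, it just moves into run()):
package movielens;

import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class MovieLensDriver extends Configured implements Tool {

    @Override
    public int run(String[] args) throws Exception {
        // Start each JobConf from getConf() so that -libjars, -D and other
        // generic options survive into the submitted jobs.
        JobConf conf1 = new JobConf(getConf(), MovieLensDriver.class);
        // ... configure conf1, conf2 and the JobControl as before ...
        return 0;
    }

    public static void main(String[] args) throws Exception {
        // ToolRunner strips the generic options before calling run(),
        // so args here contains only "input output".
        System.exit(ToolRunner.run(new MovieLensDriver(), args));
    }
}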
I am an absolute newbie to Asterisk. I am trying to use asterisk-java for event listening through AMI. I am currently using Asterisk version 11.2.1. When I compile the code with
javac -cp asterisk-java-0.3.jar HelloEvents.java
it completes successfully. But when I try to execute the file, I get the following error:
Exception in thread "main" java.lang.NoClassDefFoundError: HelloEvents
Caused by: java.lang.ClassNotFoundException: HelloEvents
at java.net.URLClassLoader$1.run(URLClassLoader.java:217)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:205)
at java.lang.ClassLoader.loadClass(ClassLoader.java:321)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:294)
at java.lang.ClassLoader.loadClass(ClassLoader.java:266)
Could not find the main class: HelloEvents. Program will exit.
The code is:
import java.io.IOException;

import org.asteriskjava.manager.AuthenticationFailedException;
import org.asteriskjava.manager.ManagerConnection;
import org.asteriskjava.manager.ManagerConnectionFactory;
import org.asteriskjava.manager.ManagerEventListener;
import org.asteriskjava.manager.TimeoutException;
import org.asteriskjava.manager.action.StatusAction;
import org.asteriskjava.manager.event.ManagerEvent;

public class HelloEvents implements ManagerEventListener
{
    private ManagerConnection managerConnection;

    public HelloEvents() throws IOException
    {
        ManagerConnectionFactory factory = new ManagerConnectionFactory(
                "localhost", "manager", "password");
        this.managerConnection = factory.createManagerConnection();
    }

    public void run() throws IOException, AuthenticationFailedException,
            TimeoutException, InterruptedException
    {
        // register for events
        managerConnection.addEventListener(this);

        // connect to Asterisk and log in
        managerConnection.login();

        // request channel state
        managerConnection.sendAction(new StatusAction());

        // wait 10 seconds for events to come in
        Thread.sleep(10000);

        // and finally log off and disconnect
        managerConnection.logoff();
    }

    public void onManagerEvent(ManagerEvent event)
    {
        // just print received events
        System.out.println(event);
    }

    public static void main(String[] args) throws Exception
    {
        HelloEvents helloEvents;
        helloEvents = new HelloEvents();
        helloEvents.run();
    }
}
java -cp ".;asterisk-java.jar" HelloEvents
works fine.
And the class path separator is OS-dependent: if you are using Linux or macOS, use : (colon) instead of ; (semicolon).
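For example, on Linux the same run command becomes:
java -cp ".:asterisk-java.jar" HelloEvents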
You can avoid adding the class path every time you compile or execute the code. Go to the location of the jar file, then run:
For Linux: export CLASSPATH=$CLASSPATH:asterisk-java-2.0.3.jar:.
For Windows: set CLASSPATH=%CLASSPATH%;asterisk-java-2.0.3.jar;.
Now compile the code with
javac HelloEvents.java
and execute it with
java HelloEvents