"How to copy a file using Jsch?" was the question first in place. As using Jsch is complicated and error-prone and also works very low-level, you need to program several lines to get a simple scp working.
So, how do I implement a scp (or even sftp) with as few lines of code as possible in Java and not violate the DRY principle?
You can use the libraries used by the Ant scp task:
package org.example.scp;

import org.apache.tools.ant.Project;
import org.apache.tools.ant.taskdefs.optional.ssh.Scp;

public class ScpCopyExample {

    public void downloadFile(String remoteFilePath, String localFilePath) {
        Scp scp = new Scp();
        scp.setFile("username:password@host.example.org:" + remoteFilePath);
        scp.setLocalTofile(localFilePath);
        scp.setProject(new Project()); // prevent an NPE (Ant works with projects)
        scp.setTrust(true); // workaround for not supplying a known-hosts file
        scp.execute();
    }

    public static void main(String[] args) {
        ScpCopyExample scpDemo = new ScpCopyExample();
        scpDemo.downloadFile("~/test.txt", "testlocal.txt");
    }
}
I did this with the following jars on my classpath:
jsch-0.1.48.jar
ant-jsch-1.6.5.jar
ant-1.7.0.jar
ant-launcher-1.7.0.jar
This example can easily be extended to upload files or use SFTP instead.
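For instance, here is a minimal upload sketch, under the assumption that the Ant task's todir attribute maps to a setTodir() setter the same way localTofile maps to setLocalTofile() above:

public void uploadFile(String localFilePath, String remoteDirPath) {
    Scp scp = new Scp();
    scp.setFile(localFilePath); // the local source file this time
    scp.setTodir("username:password@host.example.org:" + remoteDirPath);
    scp.setProject(new Project()); // prevent an NPE (Ant works with projects)
    scp.setTrust(true); // workaround for not supplying a known-hosts file
    scp.execute();
}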
As few lines as possible? Try this Groovy example, which leverages the Ant scp task.
@Grapes([
    @Grab(group='org.apache.ant', module='ant-jsch', version='1.8.4'),
    @GrabConfig(systemClassLoader=true)
])
def ant = new AntBuilder()
ant.scp(file:"helloworld.doc", todir:"mark@remotehost:/home/mark/docs", password:"sEcReT")
The Grape annotations will download the jar dependencies at run-time.
So right now I have two classes, one of which creates an instance of the other:
import java.io.*;

public class PostfixConverter {
    public static void main(String args[]) throws IOException, OperatorException {
        ...
        String postfixLine;
        while ((postfixLine = br.readLine()) != null) {
            // write some guard clauses for edge cases
            if (postfixLine.equals("")) {
                ...
                Cpu cpu = new Cpu();
and
public class Cpu {
    Cpu() {
        // this linkedListStack is for processing the postfix
        ...
    }
Currently I'm running javac PostfixConverter.java to compile the class, but it cannot find the Cpu symbol. What can I do so that the compiler can find the missing symbol? Shouldn't everything end up in the default package by default and therefore be able to find each other?
javac PostfixConverter.java
This command should work if both files are located in the current directory, since the default classpath (the -cp option) is the current directory (.).
You should compile both files, so that the Cpu class file will be available to PostfixConverter:
javac Cpu.java PostfixConverter.java
Keep in mind that in general it is not desirable to build an application where everything sits in the default package. Consider creating an appropriate package here. Also, you may want to use either an IDE and/or Maven, which would make the build process easier for you.
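As a sketch of that suggestion, with a hypothetical package name converter: put the same package declaration at the top of both source files, then compile and run from the source root with javac converter/*.java and java converter.PostfixConverter.

package converter; // first line of both Cpu.java and PostfixConverter.java

public class Cpu {
    Cpu() {
        // ...
    }
}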
As the other answer mentions, javac will also automatically compile source files for referenced classes that it finds on the sourcepath (which defaults to the classpath, here the current directory), which might explain why other source files are also getting built.
I want to use the Soot library to build an SSA from a *.java file. But all the examples I have found use Soot as a standalone tool, not as a library. Can anyone give me an example of how to do it programmatically?
For a start, I am just trying to load my class from the source file and print it (TestClass.class is in the directory /home/abdullin/workspace/test):
import soot.G
import soot.Scene
import soot.options.Options

fun main(args: Array<String>) {
    G.reset()
    Options.v().set_whole_program(true)
    Scene.v().loadBasicClasses()
    Scene.v().sootClassPath = "${Scene.v().defaultClassPath()}:/home/abdullin/workspace/test"
    val sc = Scene.v().getSootClass("TestClass")
    Scene.v().loadNecessaryClasses()
    sc.setApplicationClass()
    println(sc.name)
    sc.methods.forEach {
        println(it)
    }
}
But when I run this, I get the runtime exception Aborting: can't find classfile TestClass. If I change Scene.v().getSootClass("TestClass") to Scene.v().loadClassAndSupport("TestClass"), as they do in some of their tutorials, Soot finds my class, but it is incomplete: it prints the signatures of the class methods, but can't find their bodies; the activeBody field is null.
TestClass
<TestClass: void <init>()>
<TestClass: void main(java.lang.String[])>
<TestClass: void f1()>
First, make sure that the Soot jar is in the classpath.
Then, set up Soot using the classes soot.G and soot.options.Options (G.reset() and Options.v().parse() are methods of interest; see also the command-line options).
Using soot.Scene.v().setSootClassPath() and similar you can tell Soot where to find the class files of the code you want to analyze.
You can then use Scene.v().getSootClass() to obtain SootClass objects. Make sure that Soot loads all classes after setting the class you want to analyze as the main class:
mySootClass.setApplicationClass();
Scene.v().loadNecessaryClasses();
After this, you can use Soot to obtain various types of graphs and run your analyses, as described in the Survivor's guide.
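Putting those steps together, a minimal sketch in Java (the method names come from Soot's public API; the classpath entry and class name are taken from the question and are otherwise placeholders):

import soot.G;
import soot.Scene;
import soot.SootClass;
import soot.options.Options;

public class SootSetupSketch {
    public static void main(String[] args) {
        G.reset();
        Options.v().set_whole_program(true);
        // tell Soot where the classes under analysis live
        Scene.v().setSootClassPath(
                Scene.v().defaultClassPath() + ":/home/abdullin/workspace/test");
        // load the class together with its dependencies, then mark it for analysis
        SootClass sc = Scene.v().loadClassAndSupport("TestClass");
        sc.setApplicationClass();
        Scene.v().loadNecessaryClasses();
        System.out.println(sc.getName());
        sc.getMethods().forEach(System.out::println);
    }
}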
You can read this post (https://o2lab.github.io/710/p/a1.html). But if you want to analyze a jar file, you should unzip it to get a set of class files, and then add that classes directory to the Soot class path.
Demo:
public static void main(String[] args) {
    // spotbugs -- testing
    String classesDir = "D:\\wkspace\\seed8\\dir\\spotbugs";
    String mainClass = "edu.umd.cs.findbugs.LaunchAppropriateUI";

    // set the classpath
    String jceJar = System.getProperty("java.home") + "\\lib\\jce.jar";
    String rtJar = System.getProperty("java.home") + "\\lib\\rt.jar";
    String path = rtJar + File.pathSeparator + jceJar + File.pathSeparator + classesDir;
    Scene.v().setSootClassPath(path);

    // register the analysis in Soot's whole-program transformation pack (wjtp)
    TestCallGraphSootJar_3 analysis = new TestCallGraphSootJar_3();
    PackManager.v().getPack("wjtp").add(new Transform("wjtp.TestSootCallGraph", analysis));

    excludeJDKLibrary();
    Options.v().set_process_dir(Arrays.asList(classesDir));
    Options.v().set_whole_program(true);
    //Options.v().set_app(true);

    SootClass appClass = Scene.v().loadClassAndSupport(mainClass);
    Scene.v().setMainClass(appClass);
    Scene.v().loadNecessaryClasses();

    //enableCHACallGraph();
    enableSparkCallGraph();
    PackManager.v().runPacks();
}
If you replace

SootClass appClass = Scene.v().loadClassAndSupport(mainClass);
Scene.v().setMainClass(appClass);
Scene.v().loadNecessaryClasses();

with

Scene.v().loadNecessaryClasses();
SootClass appClass = Scene.v().getSootClass(mainClass);
Scene.v().setMainClass(appClass);

the program also works.
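The helper methods excludeJDKLibrary() and enableSparkCallGraph() are not shown in the demo; here is a hypothetical reconstruction based on Soot's standard Options API, not on the original post:

private static void excludeJDKLibrary() {
    // exclude JDK packages from the analysis to keep the call graph small
    Options.v().set_exclude(Arrays.asList("java.*", "javax.*", "sun.*"));
    Options.v().set_no_bodies_for_excluded(true);
    Options.v().set_allow_phantom_refs(true);
}

private static void enableSparkCallGraph() {
    // build the call graph with the Spark points-to framework
    Options.v().setPhaseOption("cg.spark", "on");
}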
I'm trying to write a UDF for Hadoop Hive that parses user agents. The following code works fine on my local machine, but on Hadoop I'm getting:
org.apache.hadoop.hive.ql.metadata.HiveException: Unable to execute method public java.lang.String MyUDF.evaluate(java.lang.String) throws org.apache.hadoop.hive.ql.metadata.HiveException on object MyUDF@64ca8bfb of class MyUDF with arguments {All Occupations:java.lang.String} of size 1
Code:
import java.io.IOException;
import org.apache.hadoop.hive.ql.exec.UDF;
import org.apache.hadoop.hive.ql.metadata.HiveException;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.*;
import com.decibel.uasparser.OnlineUpdater;
import com.decibel.uasparser.UASparser;
import com.decibel.uasparser.UserAgentInfo;

public class MyUDF extends UDF {
    public String evaluate(String i) {
        UASparser parser = null;
        parser = new UASparser();
        String key = "";
        OnlineUpdater update = new OnlineUpdater(parser, key);
        UserAgentInfo info = null;
        info = parser.parse(i);
        return info.getDeviceType();
    }
}
Some facts that come to mind which I should mention:
I'm compiling in Eclipse with "export runnable jar file" and the "extract required libraries into generated jar" option
I'm uploading this "fat jar" file with Hue
The minimal working example I managed to run:
public String evaluate(String i) {
    return "hello" + i;
}
I guess the problem lies somewhere in the library I'm using (downloaded from https://udger.com), but I have no idea where.
Any suggestions?
Thanks, Michal
It could be a few things. The best thing is to check the logs, but here's a list of a few quick things you can check in a minute.
The jar does not contain all dependencies. I am not sure how Eclipse builds a runnable jar, but it may not include all dependencies. You can run
jar tf your-udf-jar.jar
to see what was included. You should see stuff from com.decibel.uasparser. If not, you have to build the jar with the appropriate dependencies (usually using Maven).
Different JVM versions. If you compile with JDK 8 and the cluster runs JDK 7, it would also fail.
Hive version. Sometimes the Hive APIs change slightly, enough to be incompatible. Probably not the case here, but make sure to compile the UDF against the same versions of Hadoop and Hive that you have in the cluster.
You should always check whether info is null after the call to parse().
It looks like the library uses a key, meaning that it actually gets data from an online service (udger.com), so it may not work without a valid key. Even more importantly, the library updates itself online, contacting the online service for each record. Looking at the code, this means it will create one update thread per record. You should change the code to do that only once, in the constructor, like the following:
public class MyUDF extends UDF {
    UASparser parser = new UASparser();

    public MyUDF() {
        super();
        String key = "PUT YOUR KEY HERE";
        // update only once, when the UDF is instantiated
        OnlineUpdater update = new OnlineUpdater(parser, key);
    }

    public String evaluate(String i) {
        UserAgentInfo info = parser.parse(i);
        // return null if the string is unparseable; otherwise one bad
        // record will stop your processing with an exception
        if (info != null) {
            return info.getDeviceType();
        }
        return null;
    }
}
But to know for sure, you have to look at the logs: the YARN logs, but also the Hive logs on the machine you're submitting the job from (probably in /var/log/hive, but it depends on your installation).
Such a problem can probably be solved with the following steps:
1. Override the method UDF.getRequiredJars() so that it returns a list of HDFS file paths, determined by where you put the xxx_lib folder (from step 2) into HDFS. Note that the list must contain the full HDFS path string of every jar, such as hdfs://yourcluster/some_path/xxx_lib/some.jar (a sketch of this override follows the list).
2. Export your UDF code with the "Runnable jar file" export wizard (choose "copy required libraries into a sub folder next to the generated jar"). This results in an xxx.jar and a lib folder xxx_lib next to xxx.jar.
3. Put xxx.jar and the xxx_lib folder into HDFS according to the paths you returned in step 1.
4. Create the UDF using: add jar ${the-xxx.jar-hdfs-path}; create function your-function as '${qualified name of udf class}';
I tested this and it works.
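Step 1 could look like the following sketch (getRequiredJars() is a hook on Hive's UDF base class; the HDFS paths here are placeholders):

import org.apache.hadoop.hive.ql.exec.UDF;

public class MyUDF extends UDF {
    // tell Hive which extra jars each task needs; adjust the paths to
    // wherever you uploaded xxx_lib in step 3
    @Override
    public String[] getRequiredJars() {
        return new String[] {
            "hdfs://yourcluster/some_path/xxx_lib/some.jar",
            "hdfs://yourcluster/some_path/xxx_lib/another.jar"
        };
    }

    // ... evaluate(...) as before ...
}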
I ran into library loading problems after creating a jar from my code via Maven. I use IntelliJ IDEA on Ubuntu. I boiled the problem down to this situation:
Calling the following code from within IDEA, it prints the path correctly.
package com.myproject;

import java.io.File;

public class Starter {
    public static void main(String[] args) {
        File classpathRoot = new File(Starter.class.getResource("/").getPath());
        System.out.println(classpathRoot.getPath());
    }
}
Output is:
/home/ted/java/myproject/target/classes
When I call mvn install and try to run it from the command line using the following command, I get a NullPointerException since class.getResource() returns null:
cd /home/ted/java/myproject/target/
java -cp myproject-0.1-SNAPSHOT.jar com.myproject.Starter
The same happens when calling:
cd /home/ted/java/myproject/target/
java -Djava.library.path=. -cp myproject-0.1-SNAPSHOT.jar com.myproject.Starter
It doesn't matter if I use class.getClassLoader().getResource("") instead. The same problem occurs when accessing single files inside the target directory via class.getClassLoader().getResource("file.txt").
I want to use this approach to load native files that sit in the same directory (not inside the jar). What's wrong with my approach?
The classpath loading mechanism in the JVM is highly extensible, so it's often hard to find a single method that works in all cases. For example, what works in your IDE may not work when running in a container, because your IDE and your container probably have highly specialized class loaders with different requirements.
You could take a two-tiered approach: if the method above fails, get the classpath from the system properties, scan it for the jar file you're interested in, and then extract the directory from that entry.
e.g.
public static void main(String[] args) {
    File f = findJarLocation("jaxb-impl.jar");
    System.out.println(f);
}

public static File findJarLocation(String entryName) {
    String pathSep = System.getProperty("path.separator");
    String[] pathEntries = System.getProperty("java.class.path").split(pathSep);
    for (String entry : pathEntries) {
        File f = new File(entry);
        if (f.getName().equals(entryName)) {
            return f.getParentFile();
        }
    }
    return null;
}
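Since the original goal was to load native libraries sitting next to the jar, the directory returned by findJarLocation() can be fed to System.load(); a hedged sketch, where the jar and library names are placeholders:

File jarDir = findJarLocation("myproject-0.1-SNAPSHOT.jar");
if (jarDir != null) {
    // System.load() requires an absolute path
    System.load(new File(jarDir, "libnative.so").getAbsolutePath());
}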
I am building a Gradle plugin in Java because of some Java libraries I want to take advantage of. As part of the plugin, I need to list and process folders of files. I can find many examples of how to do this in gradle build files:
FileTree tree = fileTree(dir: stagingDirName)
tree.include '**/*.md'
tree.each { File file ->
    compileThis(file)
}
But how would I do this in Java using Gradle's Java API?
The underlying FileTree Java class has very flexible input parameters, which makes it very powerful, but it's devilishly difficult to figure out what kind of input will actually work.
Here's how I did this in my Java-based Gradle task:
import java.io.File;
import java.util.Iterator;

import org.gradle.api.DefaultTask;
import org.gradle.api.file.ConfigurableFileTree;
import org.gradle.api.tasks.TaskAction;

public class MyPluginTask extends DefaultTask {

    @TaskAction
    public void action() throws Exception {
        // sourceDir can be a String or a File
        File sourceDir = new File(getProject().getProjectDir(), "src/main/html");
        // or:
        // String sourceDir = "src/main/html";

        ConfigurableFileTree cft = getProject().fileTree(sourceDir);
        cft.include("**/*.html");

        // Make sure we have some input. If not, throw an exception.
        if (cft.isEmpty()) {
            // Nothing to process. Input settings are probably bad. Warn the user.
            throw new Exception("Error: No processable files found in sourceDir: "
                    + sourceDir.getPath());
        }

        Iterator<File> it = cft.iterator();
        while (it.hasNext()) {
            File f = it.next();
            System.out.println("File: " + f.getPath());
        }
    }
}
It's virtually the same, e.g. project.fileTree(someMap). There's even an overload of the fileTree method that takes just the base dir (instead of a map). Instead of each you can use a for-each loop, instead of closures you can typically use anonymous inner classes implementing the Action interface (although fileTree seems to be missing these method overloads). The Gradle Build Language Reference has the details. PS: You can also take advantage of Java libraries from Groovy.
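For instance, a minimal sketch of those two points (the base-dir overload plus a for-each loop), reusing the hypothetical src/main/html layout from the task above:

// inside a task or plugin method that has access to the Project
ConfigurableFileTree tree = getProject().fileTree("src/main/html");
tree.include("**/*.html");
for (File f : tree) { // FileTree is Iterable<File>, so for-each works
    System.out.println("File: " + f.getPath());
}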