I'm looking to use jshell to replace bash for command line processing.
I've created a simple class fs in the file fs.jsh (yes poor naming) that has a number of utility functions like:
// file fs.jsh
class fs
{
    static void println(String line)
    {
        System.out.println(line);
    }
}
I now want to include fs.jsh from another file:
e.g.
// helloworld.jsh
import fs.jsh
fs.println("Hello World");
The above code gives the error:
package fs does not exist
| import fs.jsh;
I've also tried:
import fs;
Which gives:
Error:
| '.' expected
| import fs;
So how do I import one script file from another?
One thing you can make sure of is to create an instance of the class before you access its method:
new fs().println("Hello World");
Another thing: make sure the order in which the scripts are executed is fixed if one relies on the code of the other.
Scripts are run in the order in which they’re entered on the command
line. Command-line scripts are run after startup scripts. To run a
script after JShell is started, use the /open command.
Additionally, an import without a package doesn't make much sense, and you cannot have packages in JShell snippets.
The way you can do it is roughly:
Have one script file, some.jsh.
Have another script file, sometwo.jsh, which uses code from the first.
Finally, open some.jsh and then sometwo.jsh, in that order.
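A minimal sketch using the file names from the question (the import line is simply dropped; /open loads the definitions from fs.jsh instead):
// helloworld.jsh
/open fs.jsh
fs.println("Hello World");
/exit
Run it with jshell helloworld.jsh from the directory containing both files, or equivalently drop the /open line and pass both scripts in order: jshell fs.jsh helloworld.jsh.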
I am trying to import a jar file. My file "Test.java" contains the line:
"import org.jfugue.*;"
When I run the command "javac -classpath .:jfugue-5.0.9.jar Test.java", I get the error "package org.jfugue does not exist". How do I fix this?
Note: I am using a Mac machine.
Actually, if you inspect the jar file "jfugue-5.0.9.jar", there are no class files directly in the package "org.jfugue". Instead it contains subpackages such as org.jfugue.devices, org.jfugue.integration, org.jfugue.parser, etc.
Try something like this,
import org.jfugue.devices.*;
public class Hello {
    public static void main(String[] args) {
        System.out.println("Hi");
    }
}
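Then compile it against the jar as in your original command, e.g.: javac -classpath .:jfugue-5.0.9.jar Hello.java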
Starting point
You want to compile code using the contents of a jar file, specifically "jfugue-5.0.9.jar", and you have a "Test" class with an import statement, like this:
import org.jfugue.*;
public class Test {
}
If you compile that code, you get an error like this:
% javac -classpath .:jfugue-5.0.9.jar Test.java
Test.java:1: error: package org.jfugue does not exist
import org.jfugue.*;
^
1 error
What's going on?
You're doing the right steps, mostly, but the import statement isn't correct. Syntax-wise, it's fine, but it does not align with the contents of the jar file. The structure of the jar contents (which you can see by running: jar tf jfugue-5.0.9.jar) shows that there is a directory for "org/jfugue/", but there are no classes or interfaces there; it's just a directory.
Below is a view of the first 9 lines of jar contents, sorted. It shows several directories without file
contents – "org/" and "org/jfugue/" – but "org/jfugue/devices/" for example has four files present.
% jar tf jfugue-5.0.9.jar | sort | head -9
META-INF/
META-INF/MANIFEST.MF
org/
org/jfugue/
org/jfugue/devices/
org/jfugue/devices/MidiParserReceiver.class
org/jfugue/devices/MusicReceiver.class
org/jfugue/devices/MusicTransmitterToParserListener.class
org/jfugue/devices/MusicTransmitterToSequence.class
So if you were to change the import statement to "org.jfugue.devices.*" – which would match those four files
("MusicReceiver", etc) – then compilation would work fine (no errors).
import org.jfugue.devices.*;
public class Test {
}
% javac -classpath .:jfugue-5.0.9.jar Test.java
%
Solution
Following
JLS 7.5.1,
you can import each specific class one by one, such as:
import org.jfugue.devices.MidiParserReceiver;
import org.jfugue.devices.MusicReceiver;
import org.jfugue.devices.MusicTransmitterToParserListener;
import org.jfugue.devices.MusicTransmitterToSequence;
Or following
JLS 7.5.2,
you can import all classes and interfaces matching a wildcard pattern (so
long as there are actually classes or interfaces matching that pattern)
such as:
import org.jfugue.devices.*;
It's not allowed to import a subpackage, so "import org.jfugue;" (without the .* wildcard) would not work
(see Example 7.5.1-3 No Import of a Subpackage in JLS).
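As a quick sanity check, here is a minimal sketch that actually references one of those classes (only a field declaration, so no JFugue constructors or methods are assumed); it compiles cleanly with the jar on the classpath:
import org.jfugue.devices.MidiParserReceiver;

public class Test {
    // Declaring a field of an imported type is enough to prove the import resolves.
    MidiParserReceiver receiver;
}
Compile it the same way: javac -classpath .:jfugue-5.0.9.jar Test.java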
I am new to Pig. I wrote a UDF for Pig and used it in my Pig script. But it gives the following error
ERROR org.apache.pig.tools.grunt.Grunt - ERROR 1070: Could not resolve UserDefined.PartsOfSpeech using imports: [, java.lang., org.apache.pig.builtin., org.apache.pig.impl.builtin.]
Here is my UDF code
public String exec(Tuple input) throws IOException {
    //my code here
}
Here is my pig script
REGISTER /home/bigdata/NetBeansProjects/UserDefined/dist/UserDefined.jar
a = load '/user/bigdata/json' using TextLoader() as (input:chararray);
b = foreach a GENERATE UserDefined.PartsOfSpeech(input);
In the above code UserDefined is my package name and PartsOfSpeech is my class name
The error message says that Pig cannot find UserDefined.PartsOfSpeech.
What package declaration does PartsOfSpeech.java have at the top of the file?
If the package declaration is package com.my.company; try this instead:
REGISTER /home/bigdata/NetBeansProjects/UserDefined/dist/UserDefined.jar
a = load '/user/bigdata/json' using TextLoader() as (input:chararray);
b = foreach a GENERATE com.my.company.PartsOfSpeech(input);
That is, replace UserDefined.PartsOfSpeech(input) with com.my.company.PartsOfSpeech(input) since the UDF is located in the package com.my.company.
Also, consider using the DEFINE keyword in your Pig script so you don't need to repeat com.my.company every time you use PartsOfSpeech.
DEFINE PartsOfSpeech com.my.company.PartsOfSpeech();
REGISTER /home/bigdata/NetBeansProjects/UserDefined/dist/UserDefined.jar
a = load '/user/bigdata/json' using TextLoader() as (input:chararray);
b = foreach a GENERATE PartsOfSpeech(input);
There is more information about DEFINE in Chapter 5 of Alan Gates' Programming Pig: http://chimera.labs.oreilly.com/books/1234000001811/ch05.html#udf_define.
Here is an example of DEFINE from Gates' book:
--define.pig
register 'your_path_to_piggybank/piggybank.jar';
define reverse org.apache.pig.piggybank.evaluation.string.Reverse();
divs = load 'NYSE_dividends' as (exchange:chararray, symbol:chararray,
date:chararray, dividends:float);
backwards = foreach divs generate reverse(symbol);
Before compiling your UDF (Java class), make sure you have declared the package name properly. For example, if you have declared the package name
package com.pig.udf;
it means you need to create the matching directory structure on your Linux box as well.
You can follow the steps below to create the jar:
Create the directory using
mkdir -p com/pig/udf
Create your Java class, declared with package com.pig.udf, inside that directory.
Compile your Java source code using the command
javac -cp /usr/lib/pig-0.12.0.2.0.6.0-76.jar YourClass.java
Then go back to the directory where you want to create the jar:
cd ../../..
Now create the jar using the command below:
jar -cvf yourJarName.jar com/
Register the jar in your script using the keyword "register" followed by the path of the jar.
Now refer to your UDF by its fully qualified name, com.pig.udf.YourJavaClassName.
For your scenario:
REGISTER /home/bigdata/NetBeansProjects/UserDefined/dist/UserDefined.jar
a = load '/user/bigdata/json' using TextLoader() as (input:chararray);
b = foreach a GENERATE com.pig.udf.PartsOfSpeech(input);
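For completeness, here is a minimal sketch of what the UDF source could look like with that package declaration (the class name comes from the question; the body of exec is only a placeholder, since the real part-of-speech logic isn't shown):
package com.pig.udf;

import java.io.IOException;

import org.apache.pig.EvalFunc;
import org.apache.pig.data.Tuple;

// Skeleton Pig UDF; what matters here is the package line and the EvalFunc contract.
public class PartsOfSpeech extends EvalFunc<String> {
    @Override
    public String exec(Tuple input) throws IOException {
        if (input == null || input.size() == 0) {
            return null;
        }
        // Placeholder: simply echo the first field; the real tagging logic goes here.
        return (String) input.get(0);
    }
}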
I am using Cygwin on Windows 7 to try and set up a single node in Hadoop 1.2.1. I am following this tutorial. I am able to create the input directory fine, as well as copy the .xml files to the input directory. The trouble seems to be that when I execute $ bin/hadoop jar hadoop-examples-*.jar grep input output 'dfs [a-z.]+' it throws "Error: Could not find or load main class work\work" in the command line. I have checked the source code (listed below, it looks like Python) and there is a main method defined. I have also tried variations on the original command line call, such as $ bin/hadoop jar hadoop-examples-*.jar main input output 'dfs [a-z.]+', etc.
My question is: Why is Hadoop not reading this main method? And how do I get it to read this main method? What is Cygwin telling me when it says "work/work"? Does the fact that the source was written in Python and compiled to a .jar format have any significance?
from org.apache.hadoop.fs import Path
from org.apache.hadoop.io import *
from org.apache.hadoop.mapred import *

import sys
import getopt

class WordCountMap(Mapper, MapReduceBase):
    one = IntWritable(1)
    def map(self, key, value, output, reporter):
        for w in value.toString().split():
            output.collect(Text(w), self.one)

class Summer(Reducer, MapReduceBase):
    def reduce(self, key, values, output, reporter):
        sum = 0
        while values.hasNext():
            sum += values.next().get()
        output.collect(key, IntWritable(sum))

def printUsage(code):
    print "wordcount [-m <maps>] [-r <reduces>] <input> <output>"
    sys.exit(code)

def main(args):
    conf = JobConf(WordCountMap);
    conf.setJobName("wordcount");
    conf.setOutputKeyClass(Text);
    conf.setOutputValueClass(IntWritable);
    conf.setMapperClass(WordCountMap);
    conf.setCombinerClass(Summer);
    conf.setReducerClass(Summer);
    try:
        flags, other_args = getopt.getopt(args[1:], "m:r:")
    except getopt.GetoptError:
        printUsage(1)
    if len(other_args) != 2:
        printUsage(1)
    for f,v in flags:
        if f == "-m":
            conf.setNumMapTasks(int(v))
        elif f == "-r":
            conf.setNumReduceTasks(int(v))
    conf.setInputPath(Path(other_args[0]))
    conf.setOutputPath(Path(other_args[1]))
    JobClient.runJob(conf);

if __name__ == "__main__":
    main(sys.argv)
I have been looking for an answer for how to execute a java jar file through python and after looking at:
Execute .jar from Python
How can I get my python (version 2.5) script to run a jar file inside a folder instead of from command line?
How to run Python egg files directly without installing them?
I tried to do the following (both my jar and python file are in the same directory):
import os

if __name__ == "__main__":
    os.system("java -jar Blender.jar")
and
import subprocess
subprocess.call(['(path)Blender.jar'])
Neither has worked. So, I was thinking that I should use Jython instead, but I think there must be an easier way to execute jar files through Python.
Do you have any idea what I may be doing wrong? Or is there any other site where I can learn more about my problem?
I would use subprocess this way:
import subprocess
subprocess.call(['java', '-jar', 'Blender.jar'])
But if you have a properly configured /proc/sys/fs/binfmt_misc/jar, you should be able to run the jar directly, as you wrote.
So, what exactly is the error you are getting?
Please post all the output you are getting from the failed execution somewhere.
This always works for me:
from subprocess import *
def jarWrapper(*args):
    process = Popen(['java', '-jar'] + list(args), stdout=PIPE, stderr=PIPE)
    ret = []
    while process.poll() is None:
        line = process.stdout.readline()
        if line != '' and line.endswith('\n'):
            ret.append(line[:-1])
    stdout, stderr = process.communicate()
    ret += stdout.split('\n')
    if stderr != '':
        ret += stderr.split('\n')
    ret.remove('')
    return ret
args = ['myJarFile.jar', 'arg1', 'arg2', 'argN'] # Any number of args to be passed to the jar file
result = jarWrapper(*args)
print result
I used the following way to execute the Tika jar to extract the content of a Word document. It worked and I got the output as well. The command I'm trying to run is "java -jar tika-app-1.24.1.jar -t 42250_EN_Upload.docx"
from subprocess import PIPE, Popen
process = Popen(['java', '-jar', 'tika-app-1.24.1.jar', '-t', '42250_EN_Upload.docx'], stdout=PIPE, stderr=PIPE)
result = process.communicate()
print(result[0].decode('utf-8'))
Here I got the result as a tuple, hence "result[0]". Also, the string was in binary format (a bytes string); to convert it into a normal string we need to decode it with 'utf-8'.
With args: concrete example using Closure Compiler (https://developers.google.com/closure/) from python
import os
import re
src = 'test.js'
os.execlp("java", 'blablabla', "-jar", './closure_compiler.jar', '--js', src, '--js_output_file', '{}'.format(re.sub('.js$', '.comp.js', src)))
(also see here When using os.execlp, why `python` needs `python` as argv[0])
How about using os.system() like:
os.system('java -jar blabla...')
os.system(command)
Execute the command (a string) in a subshell. This is implemented by calling the Standard C function system(), and has the same limitations. Changes to sys.stdin, etc. are not reflected in the environment of the executed command.
This is a follow-up to my own previous question and I'm kind of embarrassed to ask this... But anyway: how would you start a second JVM from a standalone Java program in a system-independent way? And without relying on, for instance, an environment variable like JAVA_HOME, as that might point to a different JRE than the one that is currently running. I came up with the following code, which actually works but feels just a little awkward:
public static void startSecondJVM() throws Exception {
    String separator = System.getProperty("file.separator");
    String classpath = System.getProperty("java.class.path");
    String path = System.getProperty("java.home")
            + separator + "bin" + separator + "java";
    ProcessBuilder processBuilder =
            new ProcessBuilder(path, "-cp",
                    classpath,
                    AnotherClassWithMainMethod.class.getName());
    Process process = processBuilder.start();
    process.waitFor();
}
Also, the currently running JVM might have been started with some other parameters (-D, -X..., ...) that the second JVM would not know about.
I think that the answer is "yes". This is probably as good as you can do in Java using system-independent code. But be aware that even this is only relatively system-independent. For example, on some systems:
the JAVA_HOME variable may not have been set,
the command name used to launch a JVM might be different (e.g. if it is not a Sun JVM), or
the command line options might be different (e.g. if it is not a Sun JVM).
If I was aiming for maximum portability in launching a (second) JVM, I think I would do it using wrapper scripts.
It's not clear to me that you would always want to use exactly the same parameters, classpath or whatever (especially the -X kind of stuff; for example, why would the child need the same heap settings as its parent?) when starting a secondary process.
I would prefer to use an external configuration of some sort to define these properties for the children. It's a bit more work, but I think in the end you will need the flexibility.
To see the extent of possible configuration settings, you might look at the "Run Configurations" settings in Eclipse. There are quite a few tabs' worth of configuration there.
To find the java executable that your code is currently running under (i.e. the 'path' variable in your question's sample code), there is a utility method within Apache Ant that can help you. You don't have to build your code with Ant; just use it as a library, for this one method.
It is:
org.apache.tools.ant.util.JavaEnvUtils.getJreExecutable("java")
It takes care of the sort of special cases with different JVM vendors that others have mentioned. (And looking at the source code for it, there are more special cases than I would have imagined.)
It's in ant.jar. Ant is distributed under the Apache license, so hopefully you can use it how you want without hassle.
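For example, here is a minimal sketch combining that helper with the ProcessBuilder code from the question (ant.jar must be on the classpath; AnotherClassWithMainMethod is the class from the question, and the wrapper class name here is made up for the example):
import org.apache.tools.ant.util.JavaEnvUtils;

public class LaunchWithAntHelper {
    public static void main(String[] args) throws Exception {
        // Resolve the launcher of the JRE we are currently running on.
        String java = JavaEnvUtils.getJreExecutable("java");
        ProcessBuilder processBuilder = new ProcessBuilder(java, "-cp",
                System.getProperty("java.class.path"),
                AnotherClassWithMainMethod.class.getName());
        processBuilder.inheritIO(); // let the child share this JVM's stdout/stderr
        Process process = processBuilder.start();
        process.waitFor();
    }
}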
Here's a way that determines the java executable which runs the current JVM using ProcessHandle.current().info().command().
The ProcessHandle API should also allow getting the arguments. This code uses them for the new JVM if available, only replacing the current class name with another sample class. (Finding the current main class inside the arguments gets harder if you don't know its name, but in this demo it's simply "this" class. And maybe you want to reuse the same JVM options, or some of them, but not the program arguments.)
However, for me (openjdk version 11.0.2, Windows 10), the ProcessInfo.arguments() is empty, so the fallback else path gets executed.
package test;

import java.lang.ProcessBuilder.Redirect;
import java.lang.management.ManagementFactory;
import java.util.LinkedList;
import java.util.List;
import java.util.Optional;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class TestStartJvm {
    public static void main(String[] args) throws Exception {
        ProcessHandle.Info currentProcessInfo = ProcessHandle.current().info();
        List<String> newProcessCommandLine = new LinkedList<>();
        newProcessCommandLine.add(currentProcessInfo.command().get());

        Optional<String[]> currentProcessArgs = currentProcessInfo.arguments();
        if (currentProcessArgs.isPresent()) { // I know about orElse, but sometimes isPresent + get is handy
            for (String arg : currentProcessArgs.get()) {
                newProcessCommandLine.add(TestStartJvm.class.getName().equals(arg) ? TargetMain.class.getName() : arg);
            }
        } else {
            System.err.println("don't know all process arguments, falling back to passed args array");
            newProcessCommandLine.add("-classpath");
            newProcessCommandLine.add(ManagementFactory.getRuntimeMXBean().getClassPath());
            newProcessCommandLine.add(TargetMain.class.getName());
            newProcessCommandLine.addAll(List.of(args));
        }

        ProcessBuilder newProcessBuilder = new ProcessBuilder(newProcessCommandLine).redirectOutput(Redirect.INHERIT)
                .redirectError(Redirect.INHERIT);
        Process newProcess = newProcessBuilder.start();
        System.out.format("%s: process %s started%n", TestStartJvm.class.getName(), newProcessBuilder.command());
        System.out.format("process exited with status %s%n", newProcess.waitFor());
    }

    static class TargetMain {
        public static void main(String[] args) {
            System.out.format("in %s: PID %s, args: %s%n", TargetMain.class.getName(), ProcessHandle.current().pid(),
                    Stream.of(args).collect(Collectors.joining(", ")));
        }
    }
}
Before ProcessHandle was added in Java 9, I did something like this to query the current JVM's command-line:
Let the user pass or configure a "PID to command-line" command template; under Windows, this could be wmic process where 'processid=%s' get commandline /format:list.
Determine the PID of the current JVM, e.g. by parsing java.lang.management.ManagementFactory.getRuntimeMXBean().getName(), which has the form pid@hostname (RuntimeMXBean.getPid() only exists from Java 10 onwards).
Expand the command template; execute it; parse its output (see the sketch below).
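A rough sketch of that fallback (Windows only, using the wmic template above; the class name QueryOwnCommandLine is made up for this example):
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.lang.management.ManagementFactory;

public class QueryOwnCommandLine {
    public static void main(String[] args) throws Exception {
        // Pre-Java-9 JVMs: RuntimeMXBean.getName() is "pid@hostname", so parse the PID out of it.
        String pid = ManagementFactory.getRuntimeMXBean().getName().split("@")[0];
        // Expand the (Windows-specific) command template from the list above.
        String[] command = {"wmic", "process", "where", "processid=" + pid,
                "get", "commandline", "/format:list"};
        Process process = new ProcessBuilder(command).redirectErrorStream(true).start();
        try (BufferedReader reader = new BufferedReader(new InputStreamReader(process.getInputStream()))) {
            String line;
            while ((line = reader.readLine()) != null) {
                // wmic's list format prints the value as "CommandLine=..."
                if (line.startsWith("CommandLine=")) {
                    System.out.println(line.substring("CommandLine=".length()));
                }
            }
        }
        process.waitFor();
    }
}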