I use Nashorn scripts from a Java application. Java sets the context (including the errorWriter) and everything works fine...
But I haven't found a way to write to the error stream from the Nashorn script. Does anyone know how?
I tried throwing an error, but it ends up in the ScriptException, not in the error output stream.
Thanks for any idea.
It looks like there is no built-in function to write to stderr the way print writes to stdout.
However, you can set an attribute in the ScriptContext holding a function that writes to its error writer.
ScriptEngine se = ...;
ScriptContext sc = se.getContext();
sc.setAttribute("stderr",                  // name is 'stderr'
    (Consumer<String>) str -> {            // object is a Consumer<String>
        try {
            Writer err = sc.getErrorWriter();
            err.write(str);
            err.flush();
        } catch (Exception e) {
            throw new Error(e);
        }
    },
    ScriptContext.ENGINE_SCOPE             // i.e. don't share with other engines
);
The reason to do it this way, instead of binding the error writer object directly, is that you can still change the error writer later, and it will still work.
Afterwards you could do this:
sc.setErrorWriter(new PrintWriter(new OutputStream() {
    @Override
    public void write(int b) throws IOException {
        System.err.write(b);
    }
}));
This writes all the errors to Java's stderr (which is not the default, by the way).
Then from javascript you can do:
var error = function(message) { // a nice wrapper function
    stderr.accept(message);     // 'accept', since we passed a `Consumer<...>`
};
error('error'); // Will now print to Java's stderr
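For what it's worth, the attribute-plus-Consumer indirection can be exercised with just the java.scripting API and no script engine at all (Nashorn itself was removed in JDK 15). This is a self-contained sketch; the class name StderrAttributeDemo and the "boom" message are made up for illustration:

```java
import javax.script.ScriptContext;
import javax.script.SimpleScriptContext;
import java.io.PrintWriter;
import java.io.StringWriter;
import java.io.Writer;
import java.util.function.Consumer;

public class StderrAttributeDemo {

    /** Installs the 'stderr' attribute, swaps the error writer, and invokes the function. */
    public static String demo() {
        ScriptContext sc = new SimpleScriptContext();

        // Same trick as in the answer: the Consumer looks up the writer at call time,
        // so a writer installed later is still picked up.
        sc.setAttribute("stderr", (Consumer<String>) str -> {
            try {
                Writer err = sc.getErrorWriter();
                err.write(str);
                err.flush();
            } catch (Exception e) {
                throw new Error(e);
            }
        }, ScriptContext.ENGINE_SCOPE);

        // Swap the error writer *after* the attribute was set.
        StringWriter captured = new StringWriter();
        sc.setErrorWriter(new PrintWriter(captured));

        @SuppressWarnings("unchecked")
        Consumer<String> stderr = (Consumer<String>) sc.getAttribute("stderr");
        stderr.accept("boom"); // what the script would do via stderr.accept(...)

        return captured.toString();
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints "boom"
    }
}
```

The output lands in the writer installed after the attribute was created, which is exactly why the indirection beats binding the writer object directly.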
I need to run the following command, 'hdfs dfs -cat /user/username/data/20220815/EDHSB.CSV', which shows the contents of the CSV file (present in remote HDFS).
To implement the above I have used the code below:
try {
    String shpath = "hdfs dfs -cat /user/username/data/20220815/EDHSB.CSV";
    Process ps = Runtime.getRuntime().exec(shpath);
    ps.waitFor();
} catch (Exception e) {
    e.printStackTrace();
}
The next step is to read the CSV file produced by the above code.
Is the first step good enough, or is there a better way for the entire flow...
You should use java.lang.ProcessBuilder (together with java.lang.Process) instead, as that allows you to intercept the output directly in your Java code.
Basically, it looks like this:
final var process = new ProcessBuilder( "hdfs", "dfs", "-cat", "/user/username/data/20220815/EDHSB.CSV" )
    .start();
final String csvFileContents;
try( var inputStream = process.getInputStream();
     var reader = new BufferedReader( new InputStreamReader( inputStream ) ) )
{
    csvFileContents = reader.lines().collect( Collectors.joining( "\n" ) );
}
All necessary error handling was omitted for readability …
Two things about your code:
It's better not to call printStackTrace(), because its output is too easy to miss. Do something meaningful with exceptions; if you can't, let the exception propagate out of your method by adding a throws clause to its signature.
Do you really want to wait for the process to finish by calling waitFor() before you start reading? If you do, and the file is very big, you might lose some content because the Java runtime has a limited buffer. Instead, get the process's input stream and start processing it straight away. You'll get an EOF condition when the process exits.
void processCSV() throws IOException {
    String shpath = "hdfs dfs -cat /user/username/data/20220815/EDHSB.CSV";
    Process ps = Runtime.getRuntime().exec(shpath);
    try (Stream<String> lines = ps.inputReader().lines()) {
        lines.forEach(line -> processCSVLine(line));
    }
}
I'm trying to upgrade from docker-java 0.10.3 to 3.2.7. This line has me completely stumped:
InputStream response =
dockerClient.attachContainerCmd(container.getId())
.withLogs(true)
.withStdErr(true)
.withStdOut(true)
.withFollowStream(true)
.exec();
I have managed to get round one error by changing it to
InputStream response =
dockerClient.attachContainerCmd(container.getId())
.withLogs(true)
.withStdErr(true)
.withStdOut(true)
.withFollowStream(true)
.exec(new AttachContainerResultCallback());
(but my IDE says that AttachContainerResultCallback is deprecated.) The problem is that .exec() used to return an InputStream. Now it returns void. I need the InputStream, because the output of the commands running in the container needs to find its way to the screen. This needs to be real-time, because the user needs to see the output of the commands as they are running; I can't just copy a file at the end.
How can I get hold of this InputStream?
The error is:
java: incompatible types: inference variable T has incompatible bounds
lower bounds: java.io.InputStream,com.github.dockerjava.api.async.ResultCallback<com.github.dockerjava.api.model.Frame>
lower bounds: com.github.dockerjava.core.command.AttachContainerResultCallback
Try this:
var outputStream = new PipedOutputStream();
var inputStream = new PipedInputStream(outputStream);
var response =
    dockerClient.attachContainerCmd(container.getId())
        .withLogs(false)
        .withStdErr(true)
        .withStdOut(true)
        .withFollowStream(true)
        .exec(new ResultCallback.Adapter<>() {
            @Override
            public void onNext(Frame object) {
                System.out.println(object); // for example
                try {
                    outputStream.write(object.getPayload());
                } catch (IOException e) {
                    throw new RuntimeException(e);
                }
            }
        });
try {
    response.awaitCompletion();
} catch (InterruptedException e) {
    throw new RuntimeException(e);
}
The variable inputStream will be what you are looking for.
P.S. In my case I use withLogs(false), because otherwise it blocks the current output and I get only part of the log. It probably has something to do with the container I'm connecting to.
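The piped-stream pairing used above works the same way in isolation (plain java.io, no Docker involved; PipedDemo and the payload string are invented for this sketch): whatever the callback writes into the PipedOutputStream becomes readable from the connected PipedInputStream, typically on another thread:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PipedInputStream;
import java.io.PipedOutputStream;
import java.io.UncheckedIOException;

public class PipedDemo {

    /** A producer thread writes into the pipe; the caller reads the other end. */
    public static String demo() {
        try {
            PipedOutputStream out = new PipedOutputStream();
            PipedInputStream in = new PipedInputStream(out); // connect the two ends

            Thread producer = new Thread(() -> {
                try (out) { // closing signals EOF to the reading side
                    out.write("frame-payload\n".getBytes());
                } catch (IOException e) {
                    throw new UncheckedIOException(e);
                }
            });
            producer.start();

            BufferedReader reader = new BufferedReader(new InputStreamReader(in));
            String line = reader.readLine(); // blocks until the producer has written
            producer.join();
            return line;
        } catch (IOException | InterruptedException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints "frame-payload"
    }
}
```

In the docker-java case the producer role is played by onNext(), so reads from inputStream should happen on a different thread than the callback.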
I am creating a Java class using JNI that allows various IPC mechanisms between separate Java programs.
I've created a class called WindowsIPC that contains a native method that can access Windows' named pipes. I have a native function called createNamedPipeServer() that calls CreateNamedPipe. It appears to have created the pipe correctly as I can view it using a tool such as Process Explorer.
My problem is that when I use this in a separate Java program, with a separate thread to read and write data using Java's standard input and output streams, it fails. I can write data to the pipe successfully, but I cannot read its contents; the read side fails with a FileNotFoundException (All pipe instances are busy).
I am struggling to wrap my head around this, as I cannot figure out what other process is using the pipe, especially since I specified PIPE_UNLIMITED_INSTANCES when the pipe was created.
I have read up extensively on how reads work, and my hunch is that the way Java's input/output streams open the pipe is what triggers the error mentioned above.
Any insights would be greatly appreciated.
Here is the code:
WindowIPC.java
public class WindowsIPC {
public native int createNamedPipeServer(String pipeName);
static {
System.loadLibrary("WindowsIPC");
}
public static void main(String[] args) {
// some testing..
}
}
WindowsIPC.c
const jbyte *nameOfPipe; // global variable representing the named pipe
HANDLE pipeHandle; // global handle..
JNIEXPORT jint JNICALL Java_WindowsIPC_createNamedPipeServer
(JNIEnv * env, jobject obj, jstring pipeName) {
jint retval = 0;
char buffer[1024]; // data buffer of 1K
DWORD cbBytes;
// Get the name of the pipe
nameOfPipe = (*env)->GetStringUTFChars(env, pipeName, NULL);
pipeHandle = CreateNamedPipe (
nameOfPipe, // name of the pipe
PIPE_ACCESS_DUPLEX,
PIPE_TYPE_BYTE |
PIPE_READMODE_BYTE |
PIPE_NOWAIT, // forces a return, so thread doesn't block
PIPE_UNLIMITED_INSTANCES,
1024,
1024,
0,
NULL
);
// error creating server
if (pipeHandle == INVALID_HANDLE_VALUE) retval = -1;
else printf("Server created successfully: name:%s\n", nameOfPipe);
// waits for a client -- currently in ASYC mode so returns immediately
jboolean clientConnected = ConnectNamedPipe(pipeHandle, NULL);
(*env)->ReleaseStringUTFChars(env, pipeName, nameOfPipe);
return retval;
}
And finally TestWinIPC.java
import java.io.*;
import java.util.Scanner;
public class TestWinIPC {
    public static void main(String[] args) {
        WindowsIPC winIPC = new WindowsIPC();
        // TEST NAMED PIPES
        final String pipeName = "\\\\.\\Pipe\\JavaPipe";
        if (winIPC.createNamedPipeServer(pipeName) == 0) {
            System.out.println("named pipe creation succeeded");
            Thread t = new Thread(new NamedPipeThread(pipeName));
            t.start();
            try {
                System.out.println("opening pipe for input");
                BufferedReader br = new BufferedReader(new InputStreamReader(new FileInputStream(pipeName)));
                System.out.println("waiting to read");
                String line = br.readLine();
                System.out.println("Read from pipe OK: " + line);
                br.close();
            } catch (IOException exc) {
                System.err.println("I/O Error: " + exc);
                exc.printStackTrace();
            }
        }
    } // main

    private static class NamedPipeThread implements Runnable {
        private String pipeName;

        public NamedPipeThread(String pipeName) {
            this.pipeName = pipeName;
        } // constructor

        public void run() {
            try {
                PrintWriter pw = new PrintWriter(new FileOutputStream(pipeName));
                pw.println("Hello Pipe");
                System.out.println("Wrote to named pipe OK");
                pw.close();
            } catch (IOException exc) {
                System.err.println("I/O Error: " + exc);
                exc.printStackTrace();
            }
        } // run
    }
}
The reason you are getting an "all pipe instances are busy" error is that you're connecting to the pipe twice (once for reading and once for writing) but you've only created one instance. (Note that using the PIPE_UNLIMITED_INSTANCES option allows you to create as many instances as you like, but you still have to create them yourself.)
From the looks of it, you were expecting the call to FileInputStream to open the server end of the pipe. That's not how it works; it opens a client end. You must use the handle returned from CreateNamedPipe in order to access the server end of the pipe.
Whether there is a straightforward, supported way to convert a handle into a stream in JNI I have no idea (there doesn't seem to be so far as I can tell) but note that the fact that it is a nonblocking handle is likely to be a complication, since Java almost certainly won't be expecting that.
A more promising approach would be to implement InputStream and/or OutputStream classes that call JNI methods to do the actual I/O.
Addendum: if you don't want to use JNI, and can't find any more acceptable way of converting a native handle into a stream, you could in principle launch a (native) thread to tie the server ends of two separate pipes together, allowing the client ends to talk to one another. I'm not sure that would perform any better than using JNI, but I suppose it might be worth trying.
There is one other technical problem with your code: in non-blocking mode, you are expected to call ConnectNamedPipe repeatedly until it reports that a pipe is connected:
Note that a good connection between client and server exists only after the ERROR_PIPE_CONNECTED error is received.
In practice, you can probably get away without doing this, provided you're not planning to reuse the pipe instance for another client. Windows implicitly connects the first client for any given pipe instance, so you don't need to call ConnectNamedPipe at all. However, you should note that this is an undocumented feature.
It probably makes more sense to use normal I/O and issue the call to ConnectNamedPipe the first time the Java code asks you to do I/O; presumably, the programmer will be expecting the read and/or write operations to block anyway.
If you do not want to use normal I/O, you should prefer asynchronous I/O to nonblocking I/O:
Nonblocking mode is supported for compatibility with Microsoft LAN Manager version 2.0, and it should not be used to achieve asynchronous input and output (I/O) with named pipes.
(Both quotes from the MSDN page on ConnectNamedPipe.)
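The InputStream-subclass idea suggested above might be sketched as follows. Here the would-be native read is stubbed with an in-memory buffer purely so the sketch compiles and runs; NamedPipeInputStream is an invented name, and a real version would declare readPipe as a native method whose JNI side calls ReadFile on the pipe handle:

```java
import java.io.InputStream;

/** Sketch: an InputStream whose reads delegate to a (would-be native) pipe read. */
public class NamedPipeInputStream extends InputStream {

    // A real implementation would declare:
    //     private native int readPipe(byte[] buf, int off, int len);
    // Here the call is stubbed with an in-memory buffer so the sketch is runnable.
    private final byte[] stubData;
    private int pos = 0;

    public NamedPipeInputStream(byte[] stubData) {
        this.stubData = stubData;
    }

    private int readPipe(byte[] buf, int off, int len) {
        if (pos >= stubData.length) return -1; // pipe closed: report EOF
        int n = Math.min(len, stubData.length - pos);
        System.arraycopy(stubData, pos, buf, off, n);
        pos += n;
        return n;
    }

    @Override
    public int read() {
        byte[] one = new byte[1];
        return readPipe(one, 0, 1) == -1 ? -1 : one[0] & 0xFF;
    }

    @Override
    public int read(byte[] buf, int off, int len) {
        return readPipe(buf, off, len);
    }
}
```

With the native method in place, the Java side could wrap the server handle in this stream and read it like any other InputStream, blocking semantics included.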
I am running an .exe file from Java code using ProcessBuilder; the code I have written is given below. The .exe file takes Input.txt (placed in the same directory) as input and produces 3 output files in the same directory.
public void ExeternalFileProcessing() throws IOException, InterruptedException {
    String executableFileName = "I:/Rod/test.exe";
    ProcessBuilder processBuilderObject = new ProcessBuilder(executableFileName, "Input.txt");
    File absoluteDirectory = new File("I:/Rod");
    processBuilderObject.directory(absoluteDirectory);
    Process process = processBuilderObject.start();
    process.waitFor();
}
This process works fine when I call ExeternalFileProcessing(). Now I am adding validation: if there is a crash, or the .exe file doesn't run, how can I get the error message?
Note: it would be better if the error message were simple, like "ran successfully"/"didn't run successfully" or simply true/false, so that I can put it in an if condition to continue the remaining process.
You can add exception handlers to get the error message.
public void externalFileProcessing() {
    String executableFileName = "I:/Rod/test.exe";
    ProcessBuilder processBuilderObject = new ProcessBuilder(
            executableFileName, "Input.txt");
    File absoluteDirectory = new File("I:/Rod");
    processBuilderObject.directory(absoluteDirectory);
    try {
        Process process = processBuilderObject.start();
        process.waitFor();
        // this code will be executed if the process works
        System.out.println("works");
    } catch (IOException e) {
        // this code will be executed if an IOException happens; e.getMessage() will contain the error
        e.printStackTrace();
    } catch (InterruptedException e) {
        // this code will be executed if the thread is interrupted
        e.printStackTrace();
    }
}
But it would be better to handle the exceptions in the calling function: put a try-catch block there and handle them at that level.
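To get the simple true/false the question asks for, one option is to combine the exception handling above with the process exit code. This is a sketch, assuming the .exe follows the convention that exit code 0 means success; ExeRunner and runsSuccessfully are made-up names:

```java
import java.io.File;
import java.io.IOException;
import java.util.List;

public class ExeRunner {

    /** Returns true only if the process could be started and exited with code 0. */
    public static boolean runsSuccessfully(List<String> command, File workingDir) {
        try {
            ProcessBuilder pb = new ProcessBuilder(command);
            if (workingDir != null) {
                pb.directory(workingDir);
            }
            Process process = pb.start();
            return process.waitFor() == 0;   // 0 is the conventional "success" exit code
        } catch (IOException e) {
            return false;                    // the executable could not be launched at all
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;
        }
    }

    public static void main(String[] args) {
        // For the original example this would be something like:
        // runsSuccessfully(List.of("I:/Rod/test.exe", "Input.txt"), new File("I:/Rod"));
    }
}
```

The boolean result can then go straight into the if condition that continues the remaining process.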
Is it a third-party .exe, or do you have access to its sources? If so, you could work with basic system outputs (for example, couts to the console).
Those outputs can be redirected to your java app using something like this:
InputStream is = process.getInputStream();
InputStreamReader isr = new InputStreamReader(is);
BufferedReader br = new BufferedReader(isr);
String line = "";
while ((line = br.readLine()) != null) {
    if (line.equals("something")) {
        // do something
    }
}
br.close();
This is how I do things like that, and it works very well in general. But I must admit that I cannot say/guarantee that this is THE way to do it. A more advanced approach might be the use of a StreamGobbler (see Listing 4.5) to handle the outputs of the .exe.
Let me know whether it helped you.
I'd like to rewrite and simplify my code to cut down on the number of methods in a class that do exactly the same thing but write either to a file or to the console, so I can do things like:
PrintFlightSchedule(String aFileName); // prints to a file
PrintFlightSchedule(); // writes to console.
I've tried creating the following test method just to demonstrate what I'm trying to achieve, by declaring a variable of the abstract type OutputStream, then assigning it either a FileOutputStream or the console stream (via System.out):
public static void testOutputStream(String fileNm, String msg) {
    OutputStream os;
    if (fileNm.equals("")) { // no file name provided, write to console
        os = System.out;
    }
    // file name provided, write to this file name
    else {
        try {
            os = new FileOutputStream(fileNm);
        } catch (FileNotFoundException fe) {
            System.out.println("File not found " + fe.toString());
        }
    }
    // use the output stream here - ideally a println method?
    // os.println or write(6);
}
This is admittedly half-assed, but it gives you an idea what I'd like to achieve.
Is there a way in Java to define the output method (file or console) at run-time, so I can use the same methods to do either, at runtime? I guess a simple way would be to redirect the FileOutputStream to the console - is that possible?
Basically, you need to create a method that simply takes an OutputStream and writes all the details to it...
Then you create some helper methods that call it with the appropriate stream...
public void printFlightSchedule(OutputStream os) throws IOException {
    // Write...
}

public void printFlightSchedule(File file) throws IOException {
    FileOutputStream fos = null;
    try {
        fos = new FileOutputStream(file);
        printFlightSchedule(fos);
    } finally {
        try {
            if (fos != null) {
                fos.close();
            }
        } catch (Exception e) {
        }
    }
}

public void printFlightSchedule() throws IOException {
    printFlightSchedule(System.out);
}
You may also want to take a look at the Code Conventions for the Java Language... it will make it easier for people to read and understand your code ;)
java.io.OutputStream is already an abstraction of "something you can write bytes to". If your class interacts with an OutputStream, and the clients of your class can choose what that OutputStream actually is (a file, the console, a null device, ...), then your class won't need to care about which type of OutputStream is actually needed in a given context.
So instead of your class trying to do what it needs to do and also create OutputStreams for its clients, let it focus on its true responsibility, and let clients provide the OutputStream they desire.
So keep only one constructor:
/**
* Constructs a new instance that will print to the given OutputStream
*/
PrintFlightSchedule(OutputStream stream);
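A minimal sketch of that single-constructor design might look like this; the class name, the schedule line, and the print method are all invented for illustration. The same instance code runs unchanged against System.out or any other OutputStream, such as an in-memory buffer:

```java
import java.io.ByteArrayOutputStream;
import java.io.OutputStream;
import java.io.PrintStream;

public class FlightSchedulePrinter {

    private final PrintStream out;

    /** The single constructor: the caller decides where the output goes. */
    public FlightSchedulePrinter(OutputStream stream) {
        this.out = new PrintStream(stream, true); // autoflush on println
    }

    public void print() {
        out.println("AA101 JFK -> LAX 08:00"); // hypothetical schedule line
    }

    public static void main(String[] args) {
        new FlightSchedulePrinter(System.out).print();   // console

        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        new FlightSchedulePrinter(buf).print();          // in-memory "file"
        System.out.print("captured: " + buf);
    }
}
```

Passing a FileOutputStream to the same constructor would give the file variant, with no second code path inside the class.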
Don't provide a filename String as a parameter, but a Writer.
Your method's signature becomes
void PrintFlightSchedule(Writer writer);
The code you show would be the bit that creates the Writer on startup depending on runtime parameters:
public static Writer createOutputWriter(String fileNm) {
    OutputStream os;
    if (fileNm.equals("")) { // no file name provided, write to console
        os = System.out;
    }
    // file name provided, write to this file name
    else {
        try {
            os = new FileOutputStream(fileNm);
        } catch (FileNotFoundException fe) {
            System.out.println("File not found " + fe.toString());
            os = System.out; // fall back to the console so 'os' is definitely assigned
        }
    }
    return new BufferedWriter(new OutputStreamWriter(os));
}
Don't forget to flush the writer after output.
How to write to Standard Output using BufferedWriter
You can create a FileOutputStream with a FileDescriptor instead of a string.
public FileOutputStream(FileDescriptor fdObj)
Creates a file output stream to write to the specified file descriptor, which represents an existing connection to an actual file in the file system.
First, if there is a security manager, its checkWrite method is called with the file descriptor fdObj argument as its argument.
If fdObj is null then a NullPointerException is thrown.
This constructor does not throw an exception if fdObj is invalid. However, if the methods are invoked on the resulting stream to attempt I/O on the stream, an IOException is thrown.
And default FileDescriptors are:
static FileDescriptor err
A handle to the standard error stream.
static FileDescriptor in
A handle to the standard input stream.
static FileDescriptor out
A handle to the standard output stream.
So the equivalent should be:
public static void testOutputStream(String fileNm, String msg) {
    FileOutputStream os;
    if (fileNm.equals("")) { // no file name provided, write to console
        os = new FileOutputStream(FileDescriptor.out);
    }
    // file name provided, write to this file name
    else {
        try {
            os = new FileOutputStream(fileNm);
        } catch (FileNotFoundException fe) {
            System.out.println("File not found " + fe.toString());
            os = new FileOutputStream(FileDescriptor.out); // fall back to stdout so 'os' is assigned
        }
    }
    // use the output stream here - ideally a println method?
    // os.println or write(6);
}