Dynamically run Java code with Process - java

I have created a class that dynamically compiles an in-memory Java source (i.e. without class files), loads it in a CustomClassLoader, and executes it by invoking its main method.
I need to capture StdOut, StdIn, and StdErr, but that isn't possible with my current code (Compiler API + ClassLoader + Reflection).
My requirements might be the same as in this question, whose accepted answer suggests using java.lang.Process. That would be easier if I had physical files available in the file system, but I don't in this case.
I am planning to remove the ClassLoader + Reflection strategy and use the suggestion instead; however, I'm not familiar with actually redirecting the Std* using the Process class.
How can I do this in Java 7? (Snippets are highly appreciated) Or more importantly, is there a better approach?

Take a backup of the existing output stream:
PrintStream realSystemOut = System.out;
Point it at another output stream (a FileOutputStream, or some other stream):
PrintStream overridePrintStream = new PrintStream(new FileOutputStream("log.txt"));
System.setOut(overridePrintStream);
----- process -----
Then place the original stream back into System.out:
System.setOut(realSystemOut);
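If you want to capture the output in memory instead of writing it to a file, the same swap works with a ByteArrayOutputStream. A minimal, self-contained sketch of the idea (the class name is just for illustration):

```java
import java.io.ByteArrayOutputStream;
import java.io.PrintStream;

public class CaptureStdOut {
    public static void main(String[] args) {
        PrintStream realSystemOut = System.out;          // keep a reference to restore later
        ByteArrayOutputStream captured = new ByteArrayOutputStream();
        System.setOut(new PrintStream(captured, true));  // autoflush so the buffer stays current

        // anything the invoked code prints now lands in the buffer
        System.out.println("hello from the sandboxed code");

        System.setOut(realSystemOut);                    // always restore, ideally in a finally block
        realSystemOut.print("captured: " + captured.toString());
    }
}
```

In a real setup you would do the restore in a finally block, so an exception in the invoked code cannot leave System.out pointing at your buffer.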
Thanks

Java allows you to supply your own PrintStream to override stdout and stderr, and an InputStream for stdin.
Personally, I don't like simply throwing away the original stream, because I tend to only want to redirect or parse it, not stop it (although you could do that as well).
Here is a simple example of the idea...
import java.io.IOException;
import java.io.OutputStream;
import java.io.PrintStream;
public class RedirectStdOut {
public static void main(String[] args) {
Consumer stdConsumer = new Consumer() {
@Override
public void processLine(StreamCapturer capturer, String text) {
}
@Override
public void processCharacter(StreamCapturer capturer, char character) {
capturer.getParent().print(character);
}
};
StreamCapturer stdout = new StreamCapturer(stdConsumer, System.out);
StreamCapturer stderr = new StreamCapturer(stdConsumer, System.err);
System.setOut(new PrintStream(stdout));
System.setErr(new PrintStream(stderr));
System.out.println("This is a test");
System.err.println("This is an err");
}
public static interface Consumer {
public void processLine(StreamCapturer capturer, String text);
public void processCharacter(StreamCapturer capturer, char character);
}
public static class StreamCapturer extends OutputStream {
private StringBuilder buffer;
private Consumer consumer;
private PrintStream parent;
private boolean echo = false;
public StreamCapturer(Consumer consumer, PrintStream parent) {
buffer = new StringBuilder(128);
this.parent = parent;
this.consumer = consumer;
}
public PrintStream getParent() {
return parent;
}
public boolean shouldEcho() {
return echo;
}
public void setEcho(boolean echo) {
this.echo = echo;
}
@Override
public void write(int b) throws IOException {
char c = (char) b;
String value = Character.toString(c);
buffer.append(value);
if (value.equals("\n")) {
consumer.processLine(this, buffer.toString());
buffer.delete(0, buffer.length());
}
consumer.processCharacter(this, c);
if (shouldEcho()) {
parent.print(c);
}
}
}
}
Now the StreamCapturer has the ability to echo the output if you want; I've turned it off to demonstrate the use of the Consumer. I would normally use the Consumer to process what is coming through the stream. Depending on your needs, you can wait for the complete line or process the individual characters...

How to write data in file in specific format with FileOutputStream and PrintWriter

I have a class Artical:
The first field is the article's code, the second is the article's name, and the third is the article's price.
public class Artical {
private final String codeOfArtical;
private final String nameOfArtical;
private double priceOfArtical;
public Artical(String codeOfArtical, String nameOfArtical, double priceOfArtical) {
this.codeOfArtical= codeOfArtical;
this.nameOfArtical= nameOfArtical;
this.priceOfArtical= priceOfArtical;
}
public void setPriceOfArtical(double priceOfArtical) {
this.priceOfArtical= priceOfArtical;
}
public String getCodeOfArtical() {
return codeOfArtical;
}
public String getNameOfArtical() {
return nameOfArtical;
}
public double getPriceOfArtical() {
return priceOfArtical;
}
}
I want in main class to write something like:
Artical a1 = new Artical("841740102156", "LG Monitor", 600.00);
new ShowArticalClass(a1).do();
new WriteArticalInFileClass(new File("baza.csv"), a1).do();
so that data in file will be written in format like this:
841740102156; Monitor LG; 600.00;
914918414989; Intel CPU; 250.00;
Those 2 classes, ShowArticalClass and WriteArticalInFileClass, aren't important; they are abstract classes.
So my question is: how do I set the format to look like this, where every line is a new Artical?
A very naive implementation can be the following:
Create a class that in turn creates a CSVWriter (assuming you want to write to a CSV). That class will expose a public method allowing you to pass in a path where the desired csv file lives as well as the Artical object you want to write to this file. Using that class you will format your data and write them to the file. An example of this could be:
public class CsvWriter {
private static final Object LOCK = new Object();
private static CsvWriter writer;
private CsvWriter() {}
public static CsvWriter getInstance() {
synchronized (LOCK) {
if (null == writer) {
writer = new CsvWriter();
}
return writer;
}
}
public void writeCsv(String filePath, Artical content) throws IOException {
try (var writer = createWriter(filePath)) {
writer.append(getDataline(content)).append("\n");
}
}
private String getDataline(Artical content) {
return String.join("; ", content.getCodeOfArtical(), content.getNameOfArtical(), Double.toString(content.getPriceOfArtical())) + ";";
}
private PrintWriter createWriter(String stringPath) throws IOException {
var path = Paths.get(stringPath);
try {
if (Files.exists(path)) {
System.out.printf("File under path %s exists. Will append to it%n", stringPath);
return new PrintWriter(new FileWriter(path.toFile(), true));
}
return new PrintWriter(path.toFile());
} catch (Exception e) {
System.out.println("An error has occurred while writing to a file");
throw e;
}
}
}
Note that this takes into account the case where the provided file is already in place (thus appending to it). In any other case the file will be created and written to directly.
Call this write method in a fashion similar to this:
public static void main(String... args) throws IOException {
var artical = new Artical("1", "Test", 10.10);
CsvWriter.getInstance().writeCsv("/tmp/test1.csv", artical);
var artical2 = new Artical("2", "Test", 11.14);
CsvWriter.getInstance().writeCsv("/tmp/test1.csv", artical2);
}
With that as a starting point you can go ahead and modify the code to be able to handle list of Artical objects.
If you really need to support CSV files, though, I would strongly recommend looking into the various CSV-related libraries that are out there instead of implementing your own code.
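If you only need append-or-create semantics without a dedicated writer class, java.nio.file.Files can do it in one call. A minimal sketch under that assumption (the class name is illustrative; the file name and values come from the question):

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;
import java.util.List;

public class AppendLine {
    public static void main(String[] args) throws Exception {
        Path path = Paths.get("baza.csv");
        // Matches the "code; name; price;" format from the question
        String line = String.join("; ", "841740102156", "LG Monitor", "600.00") + ";";
        // CREATE makes the file if it is missing, APPEND adds a line otherwise
        Files.write(path, List.of(line), StandardOpenOption.CREATE, StandardOpenOption.APPEND);
    }
}
```

Each call appends one record as a new line, so calling it once per Artical produces the file layout shown above.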

Writing in same file from different classes in java

How do I write to the same text file from different classes in Java?
One of the classes calls a method from another class.
I do not want to open a BufferedWriter in each class, so I'm wondering if there is a cleaner way to do this.
So essentially, I want to avoid writing the following code in each class:
Path path = Paths.get("c:/output.txt");
try (BufferedWriter writer = Files.newBufferedWriter(path)) {
writer.write("Hello World !!");
}
A good way of doing this is to create a central writing class that maps a file name to a reader/writer object. For example:
public class FileHandler {
private static final Map<String, FileHandler> m_handlers = new HashMap<>();
private final String m_path;
private BufferedWriter m_writer; // not final, since opening the writer can fail
// private final BufferedReader m_reader; this one is optional, and I did not instantiate it in this example.
public FileHandler(String path) {
m_path = path;
try {
m_writer = Files.newBufferedWriter(Paths.get(path));
} catch (Exception e) {
m_writer = null;
// some exception handling here...
}
}
public void write(String toWrite) {
if (m_writer != null) {
try {
m_writer.write(toWrite);
} catch (IOException e) {
// some more exception handling...
}
}
}
public static synchronized void write(String path, String toWrite) {
FileHandler handler = m_handlers.get(path);
if (handler == null) {
handler = new FileHandler(path);
m_handlers.put(path, handler);
}
handler.write(toWrite);
}
}
Be aware that this behavior does not close the file writers at any point, because you don't know who else is currently (or later on) writing. This is not a complete solution, just a strong hint in a good direction.
This is cool, because now you can "always" call FileHandler.write("c:/output.txt", "Hello something!?$");. The FileHandler class could be extended (as hinted) to read files too, and to do other things you might need later (like buffering the content, so you don't have to read a file every time you access it).

How to use Input/OutputStream to read/write data without creating new objects LibGDX

I'm using LibGDX, and since I started using this library I've followed its advice and never created new objects at runtime, using Pools and member variables to avoid triggering the garbage collector. However, when it comes to networking, I'm having difficulties reading from an InputStream (and writing to an OutputStream) without creating new Data(Input/Output)Stream objects.
I need a way to make those objects reusable, to avoid creating a new object every time I receive a player movement, for instance.
Or I need a way to read ints, floats and UTF strings from an Input/OutputStream without using these objects.
For now, my inefficient code looks like this:
/**
* Represents a packet sent from client to server
* Used when a client sends a chat message to the server
* Created by winter on 25/03/16.
*/
public class PacketInChat extends Packet
{
private String message;
public PacketInChat() { }
public PacketInChat(String message) {
this.message = message;
}
@Override
public void readFrom(InputStream stream) throws IOException {
message = new DataInputStream(stream).readUTF(); //problem here
}
@Override
public void writeTo(OutputStream stream) throws IOException {
new DataOutputStream(stream).writeUTF(message); //problem here
}
//getters/setters for fields
}
I'm also detecting which packet it is by reading its name, so it's also problematic here:
String packetName = new DataInputStream(socket.getInputStream()).readUTF();
Packet packet = Pools.obtain((Class<? extends Packet>) Class.forName("me.winter.socialplatformer.server.packet." + packetName));
packet.readFrom(socket.getInputStream());
Any ideas ? Thanks
Edit: EJP pointed out that I can't read a String from a DataInputStream without creating a new object, so my example is pretty much useless. I managed to read integers and floating-point numbers from bytes without DataInputStream, and I could read Strings with StringBuffer (or StringBuilder), but the only important case in which I have to do this kind of optimization is the packet for player movement, which does not contain any String. I also ended up getting the packet type from an enum by reading an id instead of the name, so no more problem there.
However, I'm still curious about how to reuse DataInputStream/DataOutputStream and will accept an answer that can explain me how to do so.
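For reference, the kind of bit-shifting that replaces DataInputStream for primitives looks like this; a sketch (class and method names are illustrative):

```java
public class RawInt {
    // Reads a 4-byte big-endian int starting at offset, the same layout DataInputStream.readInt uses
    static int readInt(byte[] b, int off) {
        return ((b[off] & 0xFF) << 24)
             | ((b[off + 1] & 0xFF) << 16)
             | ((b[off + 2] & 0xFF) << 8)
             |  (b[off + 3] & 0xFF);
    }

    // A float is just the same 4 bytes reinterpreted
    static float readFloat(byte[] b, int off) {
        return Float.intBitsToFloat(readInt(b, off));
    }
}
```

This allocates nothing per packet, which is the property the question is after.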
As per my comment, just use a WeakHashMap<InputStream, DataInputStream>, and similarly for the output streams:
public class PacketInChat extends Packet
{
private Map<InputStream, DataInputStream> map = new WeakHashMap<>();
private String message;
public PacketInChat() { }
public PacketInChat(String message) {
this.message = message;
}
@Override
public void readFrom(InputStream stream) throws IOException {
DataInputStream din;
synchronized (map)
{
if ((din = map.get(stream)) == null)
{
map.put(stream, din = new DataInputStream(stream));
}
}
message = din.readUTF();
}
}
and similarly for output. The WeakHashMap will ensure the data streams get released when the underlying streams they wrap disappear.
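On newer Java versions the get-or-create step collapses into a single computeIfAbsent call; a compact sketch of the same caching idea (the class name is illustrative):

```java
import java.io.DataInputStream;
import java.io.InputStream;
import java.util.Map;
import java.util.WeakHashMap;

public class StreamCache {
    // Weak keys: an entry vanishes once the underlying stream is unreachable
    private final Map<InputStream, DataInputStream> cache = new WeakHashMap<>();

    // Returns the same wrapper for the same underlying stream, so no new
    // DataInputStream is allocated per packet; synchronized because
    // WeakHashMap is not thread-safe
    synchronized DataInputStream wrap(InputStream in) {
        return cache.computeIfAbsent(in, DataInputStream::new);
    }
}
```

A packet's readFrom would then call wrap(stream).readUTF() instead of constructing a new DataInputStream each time.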
Or just restructure:
public class PacketInChat
{
DataInputStream din;
DataOutputStream dout;
public PacketInChat(InputStream in, OutputStream out)
{
this.din = new DataInputStream(in);
this.dout = new DataOutputStream(out);
}
public String readMessage() throws IOException { return din.readUTF(); }
public int readInt() throws IOException { return din.readInt(); }
// etc
public void writeMessage(String msg) throws IOException
{
dout.writeUTF(msg);
}
// etc
}
so that you create one of these per input/output stream pair, and keep them in a map somewhere.

Cloud Dataflow: reading entire text files rather than lines by line

I'm looking for a way to read ENTIRE files, so that every file is read entirely into a single String.
I want to pass a pattern of JSON text files like gs://my_bucket/*/*.json and then have a ParDo process each and every file in its entirety.
What's the best approach to it?
I am going to give the most generally useful answer, even though there are special cases [1] where you might do something different.
I think what you want to do is to define a new subclass of FileBasedSource and use Read.from(<source>). Your source will also include a subclass of FileBasedReader; the source contains the configuration data and the reader actually does the reading.
I think a full description of the API is best left to the Javadoc, but I will highlight the key override points and how they relate to your needs:
FileBasedSource#isSplittable() you will want to override and return false. This will indicate that there is no intra-file splitting.
FileBasedSource#createForSubrangeOfFile(String, long, long) you will override to return a sub-source for just the file specified.
FileBasedSource#createSingleFileReader() you will override to produce a FileBasedReader for the current file (the method should assume it is already split to the level of a single file).
To implement the reader:
FileBasedReader#startReading(...) you will override to do nothing; the framework will already have opened the file for you, and it will close it.
FileBasedReader#readNextRecord() you will override to read the entire file as a single element.
[1] One example easy special case is when you actually have a small number of files, you can expand them prior to job submission, and they all take the same amount of time to process. Then you can just use Create.of(expand(<glob>)) followed by ParDo(<read a file>).
I was looking for a similar solution myself. Following Kenn's recommendations and a few other references, such as XMLSource.java, I created the following custom source, which seems to be working fine.
I am not a developer, so if anyone has suggestions on how to improve it, please feel free to contribute.
public class FileIO {
// Match TextIO.
public static Read.Bounded<KV<String,String>> readFilepattern(String filepattern) {
return Read.from(new FileSource(filepattern, 1));
}
public static class FileSource extends FileBasedSource<KV<String,String>> {
private String filename = null;
public FileSource(String fileOrPattern, long minBundleSize) {
super(fileOrPattern, minBundleSize);
}
public FileSource(String filename, long minBundleSize, long startOffset, long endOffset) {
super(filename, minBundleSize, startOffset, endOffset);
this.filename = filename;
}
// This will indicate that there is no intra-file splitting.
@Override
public boolean isSplittable(){
return false;
}
@Override
public boolean producesSortedKeys(PipelineOptions options) throws Exception {
return false;
}
@Override
public void validate() {}
@Override
public Coder<KV<String,String>> getDefaultOutputCoder() {
return KvCoder.of(StringUtf8Coder.of(),StringUtf8Coder.of());
}
@Override
public FileBasedSource<KV<String,String>> createForSubrangeOfFile(String fileName, long start, long end) {
return new FileSource(fileName, getMinBundleSize(), start, end);
}
@Override
public FileBasedReader<KV<String,String>> createSingleFileReader(PipelineOptions options) {
return new FileReader(this);
}
}
/**
* A reader that should read an entire file of text from a {@link FileSource}.
*/
private static class FileReader extends FileBasedSource.FileBasedReader<KV<String,String>> {
private static final Logger LOG = LoggerFactory.getLogger(FileReader.class);
private ReadableByteChannel channel = null;
private long nextOffset = 0;
private long currentOffset = 0;
private boolean isAtSplitPoint = false;
private final ByteBuffer buf;
private static final int BUF_SIZE = 1024;
private KV<String,String> currentValue = null;
private String filename;
public FileReader(FileSource source) {
super(source);
buf = ByteBuffer.allocate(BUF_SIZE);
buf.flip();
this.filename = source.filename;
}
private int readFile(ByteArrayOutputStream out) throws IOException {
int byteCount = 0;
while (true) {
if (!buf.hasRemaining()) {
buf.clear();
int read = channel.read(buf);
if (read < 0) {
break;
}
buf.flip();
}
byte b = buf.get();
byteCount++;
out.write(b);
}
return byteCount;
}
@Override
protected void startReading(ReadableByteChannel channel) throws IOException {
this.channel = channel;
}
@Override
protected boolean readNextRecord() throws IOException {
currentOffset = nextOffset;
ByteArrayOutputStream buf = new ByteArrayOutputStream();
int offsetAdjustment = readFile(buf);
if (offsetAdjustment == 0) {
// EOF
return false;
}
nextOffset += offsetAdjustment;
isAtSplitPoint = true;
currentValue = KV.of(this.filename,CoderUtils.decodeFromByteArray(StringUtf8Coder.of(), buf.toByteArray()));
return true;
}
@Override
protected boolean isAtSplitPoint() {
return isAtSplitPoint;
}
@Override
protected long getCurrentOffset() {
return currentOffset;
}
@Override
public KV<String,String> getCurrent() throws NoSuchElementException {
return currentValue;
}
}
}
A much simpler method is to generate the list of filenames and write a function to process each file individually. I'm showing Python, but Java is similar:
def generate_filenames():
for shard in xrange(0, 300):
yield 'gs://bucket/some/dir/myfilname-%05d-of-00300' % shard
with beam.Pipeline(...) as p:
(p | beam.Create(list(generate_filenames()))
| beam.FlatMap(lambda filename: readfile(filename))
| ...)
FileIO does that for you without the need to implement your own FileBasedSource.
Create matches for each of the files that you want to read:
mypipeline.apply("Read files from GCS", FileIO.match().filepattern("gs://mybucket/myfilles/*.txt"))
Also, you can read like this if you do not want Dataflow to throw exceptions when no file is found for your file pattern:
mypipeline.apply("Read files from GCS", FileIO.match().filepattern("gs://mybucket/myfilles/*.txt").withEmptyMatchTreatment(EmptyMatchTreatment.ALLOW))
Read your matches using FileIO:
.apply("Read file matches", FileIO.readMatches())
The above code returns a PCollection of the type FileIO.ReadableFile (PCollection<FileIO.ReadableFile>). Then you create a DoFn that process these ReadableFiles to meet your use case.
.apply("Process my files", ParDo.of(MyCustomDoFnToProcessFiles.create()))
You can read the entire documentation for FileIO here.

PrintWriter to multiple files

I need to write the same text to multiple files (or streams).
Sometimes I need to use a Writer, sometimes a PrintWriter, sometimes an OutputStream...
One way to do this would be to extend PrintWriter to hold an array of PrintWriters and override each method as follows:
class MultiplePrintWriter extends PrintWriter {
private PrintWriter[] outs;
public MultiplePrintWriter(PrintWriter[] outs) { this.outs = outs; }
public void print(boolean b) { for (PrintWriter out : outs) out.print(b); }
public void print(char c) { for (PrintWriter out : outs) out.print(c); }
public void print(char[] s) { for (PrintWriter out : outs) out.print(s); }
...
}
(and the same for Writer, OutputStream...)
Is there a better alternative?
Is this already implemented in a library?
There are libraries out there already for this. If you can use OSS then grab Apache Commons IO and take a look at the TeeOutputStream class. Here's some sample code illustrating its use:
public class TeeOutputStreamTest {
@Test
public void testPrintToMultipleStreams() throws Exception {
final String fileName1 = "/tmp/fileOne.txt";
final String fileName2 = "/tmp/fileTwo.txt";
final String fileName3 = "/tmp/fileThree.txt";
final TeeOutputStream tos = new TeeOutputStream(new FileOutputStream(
fileName1), new TeeOutputStream(new FileOutputStream(fileName2),
new FileOutputStream(fileName3)));
final PrintWriter writer = new PrintWriter(tos);
writer.println("Hello World");
writer.close();
}
}
You can use TeeOutputStream anywhere a regular OutputStream is accepted, or wrap it in a Writer, depending on what's needed.
You don't need to override all methods in PrintWriter, since all printXyz methods delegate to the basic write methods of the Writer API.
Moreover, PrintWriter has a constructor PrintWriter(Writer out), so you only need to extend Writer and implement its abstract methods (write(char[], int, int), flush() and close()), like this:
public class MultiWriter extends Writer {
private List<Writer> delegates;
public MultiWriter(List<Writer> delegates) {
this.delegates = delegates;
}
public void write(char[] cbuf, int off, int len) throws IOException {
for (Writer w : delegates) {
w.write(cbuf, off, len);
}
}
public void flush() throws IOException {
for (Writer w : delegates) w.flush();
}
public void close() throws IOException {
for (Writer w : delegates) w.close();
}
}
Use it like this:
PrintWriter myPrintWriter = new PrintWriter(new MultiWriter(listOfDelegates));
myPrintWriter.println("Hello World!");
This will take care of the Writers.
You can do the same trick using OutputStream. You can also just implement MultiOutputStream, omit MultiWriter and use a delegation chain PrintWriter->OutputStreamWriter->MultiOutputStream. This would be just one method in one class to implement and you get PrintWriter, Writer and OutputStream just for free.
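The delegation chain just described (PrintWriter -> OutputStreamWriter -> MultiOutputStream) could be sketched like this; only write(int) is strictly required, the flush() and close() overrides simply forward the calls:

```java
import java.io.IOException;
import java.io.OutputStream;
import java.util.List;

public class MultiOutputStream extends OutputStream {
    private final List<OutputStream> delegates;

    public MultiOutputStream(List<OutputStream> delegates) {
        this.delegates = delegates;
    }

    @Override
    public void write(int b) throws IOException {
        for (OutputStream out : delegates) out.write(b);   // every byte goes to every delegate
    }

    @Override
    public void flush() throws IOException {
        for (OutputStream out : delegates) out.flush();
    }

    @Override
    public void close() throws IOException {
        for (OutputStream out : delegates) out.close();
    }
}
```

Usage would then be something like new PrintWriter(new OutputStreamWriter(new MultiOutputStream(streams))), giving you PrintWriter, Writer, and OutputStream behaviour from one class. For throughput you would also override write(byte[], int, int) to avoid the per-byte loop, but the sketch above is the minimal version.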
If you can use a Logging Library...
Use a logging library and define multiple appenders in its configuration.
You could use Apache Log4J or LogBack to that effect (I'd recommend LogBack, but to each their own).
If you are forced to use PrintWriter...
Then, unfortunately, your solution is the best.
There's an alternative, but it's not pretty. If you are forced to pass a PrintWriter, short of providing an extension to add to the JRE's trusted libs at load time to replace PrintWriter with a class doing in effect what you suggest, I don't think you have much choice.
Well, actually you could do this easily:
instead of overriding all methods of PrintWriter class, you could override only one method of class Writer:
public void write(char cbuf[], int off, int len);
because all other methods (print(...) and other write(...)-s) use this one:
public void write(char cbuf[], int off, int len) {
for (Writer out : outs) out.write(cbuf, off, len);
}
UPD: also flush() and close() methods.
For OutputStream it is the same situation, with the method public void write(int b).
I prefer composition to extension. Here is another option:
public class MultiWriter
{
List<PrintWriter> listPrintWriter = new LinkedList<>();
List<Writer> listWriter = new LinkedList<>();
public void addPrintWriter(final PrintWriter newPrintWriter)
{
listPrintWriter.add(newPrintWriter);
}
public void addWriter(final Writer newWriter)
{
listWriter.add(newWriter);
}
public void write(char[] cbuf, int off, int len)
{
if (listPrintWriter != null)
{
for (PrintWriter printWriter : listPrintWriter)
{
printWriter.write(cbuf, off, len);
}
}
if (listWriter != null)
{
for (Writer writer : listWriter)
{
writer.write(cbuf, off, len);
}
}
}
}
