Thread race condition just hangs while using PipedOutputStream - java

I am using piped output streams to convert an OutputStream to an InputStream, because the AWS Java SDK does not allow putting objects on S3 using OutputStreams.
I'm using the code below; however, it will intermittently just hang. This code is in a web application. Currently there is no load on the application...I am just trying it out on my personal computer.
final ByteArrayOutputStream os = new ByteArrayOutputStream();
PipedInputStream inpipe = new PipedInputStream();
final PipedOutputStream out = new PipedOutputStream(inpipe);
try {
    String xmpXml = "<dc:description>somedesc</dc:description>";
    JpegXmpRewriter rewriter = new JpegXmpRewriter();
    rewriter.updateXmpXml(isNew1, os, xmpXml);
    new Thread(new Runnable() {
        public void run() {
            try {
                // write the original OutputStream to the PipedOutputStream
                System.out.println("starting writeto");
                os.writeTo(out);
                out.close();
                System.out.println("ending writeto");
            } catch (IOException e) {
                System.out.println("Some exception");
            }
        }
    }).start();
    ObjectMetadata metadata1 = new ObjectMetadata();
    metadata1.setContentLength(os.size());
    client.putObject(new PutObjectRequest("test-bucket", "167_sample.jpg", inpipe, metadata1));
}
catch (Exception e) {
    System.out.println("Some exception");
}
finally {
    isNew1.close();
    os.close();
}

Instead of bothering with the complexities of starting another thread, instantiating the two piped-stream classes, and then passing data from thread to thread, all to work around a minor limitation in the JDK API, you should just create a simple specialization of ByteArrayOutputStream:
class BetterByteArrayOutputStream extends ByteArrayOutputStream {
    // buf and count are protected fields of ByteArrayOutputStream,
    // so the internal buffer can be handed to the input stream directly.
    public ByteArrayInputStream toInputStream() {
        return new ByteArrayInputStream(buf, 0, count);
    }
}
This converts it to an input stream with no copying.
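Applied to the question's code, the extra thread and the piped streams disappear entirely. A sketch, reusing the question's isNew1, xmpXml, and client variables:

BetterByteArrayOutputStream os = new BetterByteArrayOutputStream();
JpegXmpRewriter rewriter = new JpegXmpRewriter();
rewriter.updateXmpXml(isNew1, os, xmpXml);

ObjectMetadata metadata1 = new ObjectMetadata();
metadata1.setContentLength(os.size());
// No second thread, no pipes: hand the buffer to the SDK directly.
client.putObject(new PutObjectRequest("test-bucket", "167_sample.jpg", os.toInputStream(), metadata1));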

Related

Is it possible to load Executable once and keep calling it multiple times in Java?

I am working on a prototype where, from my Java API, I have to run an executable written in C#, which in turn calls a Matlab function.
Following is the Java code that calls the executable (an example):
ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
ByteArrayOutputStream errorStream = new ByteArrayOutputStream();
matProcessWrapper = new ExecWrapper.ExecWrapperBuilder
        ("C:\\Matlab\\HelloWorld\\bin\\Release\\netcoreapp3.1\\HelloWorld.exe")
        .setErrorStream(errorStream)
        .setOutputStream(outputStream)
        .setTimeOutMilliSeconds(30 * 1000L)
        .build();
try {
    matProcessWrapper.executeProcessSync();
} catch (IOException e) {
    e.printStackTrace();
}
The executable is loaded each time. Is it possible to load this executable only once, then call its method again and again, and exit the model once all the calling is done?
You could check whether your process wrapper is already loaded, or load it in your constructor.
public void execute() {
    try (ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
         ByteArrayOutputStream errorStream = new ByteArrayOutputStream()) {
        if (matProcessWrapper == null) {
            // streams are passed in so they are in scope for the builder
            loadMatProcessWrapper(outputStream, errorStream);
        }
        matProcessWrapper.executeProcessSync();
    } catch (IOException e) {
        e.printStackTrace();
    }
}

public void loadMatProcessWrapper(ByteArrayOutputStream outputStream,
                                  ByteArrayOutputStream errorStream) {
    matProcessWrapper = new ExecWrapper.ExecWrapperBuilder
            ("C:\\Matlab\\HelloWorld\\bin\\Release\\netcoreapp3.1\\HelloWorld.exe")
            .setErrorStream(errorStream)
            .setOutputStream(outputStream)
            .setTimeOutMilliSeconds(30 * 1000L)
            .build();
}
Also, don't forget to close your streams; I did this in my snippet with try-with-resources.

Java : ObjectInputStream, 3 messages sent, only 2 received

Welcome to the fabulous world of networks. I discovered my passion. :)
However, I am seeing some very strange behavior in my app, and I need your help to solve it.
I made a simple server-client app.
The sending Thread :
new Thread(new Runnable() {
    public void run() {
        try {
            ObjectOutputStream objectOutputStream = new ObjectOutputStream(new BufferedOutputStream(socket.getOutputStream()));
            objectOutputStream.writeObject(message);
            objectOutputStream.flush();
        } catch (Exception exception) {
            exception.printStackTrace();
        }
    }
}).start();
The receiving Thread :
new Thread(new Runnable() {
    public void run() {
        try {
            while (!Thread.currentThread().isInterrupted()) {
                ObjectInputStream objectInputStream = new ObjectInputStream(new BufferedInputStream(socket.getInputStream()));
                Message message = (Message) objectInputStream.readObject();
                Log.i("DEBUG", message.toString());
            }
        } catch (Exception exception) {
            try {
                socket.close();
            } catch (Exception closeException) {
                closeException.printStackTrace();
            }
        }
    }
}).start();
It works just fine; however, if I send 3 messages simultaneously, my receiving thread only receives the first 2. It does not matter if I change the order: the third is always lost.
I think it's a buffer size problem. How can I deal with that? Thank you. :)
BufferedInputStream buffers the input, just as the name says. This means that it reads from the input source into a buffer before passing it on to you. The buffer size here refers to the number of bytes it buffers.
ObjectInputStream objectInputStream = new ObjectInputStream(new BufferedInputStream(socket.getInputStream(), size));
You can pass an explicit size to the BufferedInputStream; if reading from the buffer is slow, send the data with some delay.

converting a java.util.stream.Stream<String> into a java.io.Reader

Part of my application is given an InputStream and wants to do some processing on this to produce another InputStream.
try (
    final BufferedReader inputReader = new BufferedReader(new InputStreamReader(inputStream, UTF_8), BUFFER_SIZE);
    final Stream<String> resultLineStream = inputReader.lines().map(lineProcessor::processLine);
    final InputStream resultStream = new ReaderInputStream(new StringStreamReader(resultLineStream), UTF_8);
) {
    s3Client.putObject(targetBucket, s3File, resultStream, new ObjectMetadata());
} catch (IOException e) {
    throw new RuntimeException("Exception", e);
}
I am using the new Java 8 BufferedReader.lines() to get a Stream onto which I can easily map my processing function.
The only thing still lacking is the class StringStreamReader, which is supposed to turn my Stream into a Reader from which Apache commons-io's ReaderInputStream can create an InputStream again. (The detour through readers and back seems reasonable to deal with encodings and line breaks.)
To be very clear, the code above assumes
public class StringStreamReader extends Reader {
    public StringStreamReader(Stream<String> stringStream) { ... }

    @Override
    public int read(char cbuf[], int off, int len) throws IOException { ... }

    // possibly override other methods to avoid bad performance or high resource consumption
}
So is there any library that offers such a StringStreamReader class? Or is there another way to write the application code above without implementing a custom Reader or InputStream subclass?
You can do something like this:
PipedWriter writer = new PipedWriter();
PipedReader reader = new PipedReader();
reader.connect(writer);
strings.stream().forEach(string -> {
    try {
        writer.write(string);
        writer.write("\n");
    } catch (Exception e) {
        e.printStackTrace();
    }
});
But I guess you want some form of lazy processing. The Stream API does not really help in that case; you need a dedicated thread plus some buffer to do that.
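A minimal sketch of that dedicated-thread variant, assuming the inputStream, lineProcessor, statically imported UTF_8, and commons-io ReaderInputStream from the question: a producer thread writes the processed lines into a PipedWriter while the caller lazily consumes the connected PipedReader.

PipedReader reader = new PipedReader();
PipedWriter writer = new PipedWriter(reader);

new Thread(() -> {
    // try-with-resources closes the writer so the reader sees end-of-stream
    try (BufferedReader in = new BufferedReader(new InputStreamReader(inputStream, UTF_8));
         PipedWriter out = writer) {
        for (String line : (Iterable<String>) in.lines().map(lineProcessor::processLine)::iterator) {
            out.write(line);
            out.write('\n');
        }
    } catch (IOException e) {
        e.printStackTrace(); // real code should surface this to the reading side
    }
}).start();

// The reading side sees the processed data as an InputStream again.
InputStream resultStream = new ReaderInputStream(reader, UTF_8);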

PDF file download using BlockingQueue

I'm trying to download a PDF file using URLConnection. Here's how I set up the connection object.
URL serverUrl = new URL(url);
urlConnection = (HttpURLConnection) serverUrl.openConnection();
urlConnection.setDoInput(true);
urlConnection.setRequestMethod("GET");
urlConnection.setRequestProperty("Content-Type", "application/pdf");
urlConnection.setRequestProperty("ENCTYPE", "multipart/form-data");
String contentLength = urlConnection.getHeaderField("Content-Length");
I obtained an InputStream from the connection object.
bufferedInputStream = new BufferedInputStream(urlConnection.getInputStream());
And the output stream to write the file contents.
File dir = new File(context.getFilesDir(), mFolder);
if(!dir.exists()) dir.mkdir();
final File f = new File(dir, String.valueOf(documentName));
f.createNewFile();
final BufferedOutputStream bufferedOutputStream = new BufferedOutputStream(new FileOutputStream(f, true)); //true for appendMode
A BlockingQueue is created so that the threads performing the read and write operations can share data through it.
final BlockingQueue<ByteArrayWrapper> blockingQueue = new ArrayBlockingQueue<ByteArrayWrapper>(MAX_VALUE,true);
final byte[] dataBuffer = new byte[MAX_VALUE];
Now a thread is created to read data from the InputStream.
Thread readerThread = new Thread(new Runnable() {
    @Override
    public void run() {
        try {
            int count = 0;
            while ((count = bufferedInputStream.read(dataBuffer, 0, dataBuffer.length)) != -1) {
                ByteArrayWrapper byteArrayWrapper = new ByteArrayWrapper(dataBuffer);
                byteArrayWrapper.setBytesReadCount(count);
                blockingQueue.put(byteArrayWrapper);
            }
            blockingQueue.put(null); //end of file
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            try {
                bufferedInputStream.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
});
Now the writer thread takes those contents from the queue and writes them to the file.
Thread writerThread = new Thread(new Runnable() {
    @Override
    public void run() {
        try {
            while (true) {
                ByteArrayWrapper byteWrapper = blockingQueue.take();
                if (null == byteWrapper) break;
                bufferedOutputStream.write(byteWrapper.getBytesRead(), 0, byteWrapper.getBytesReadCount());
            }
            bufferedOutputStream.flush();
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            try {
                bufferedOutputStream.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
});
Finally, threads are started.
readerThread.start();
writerThread.start();
Theoretically it should read the file from the InputStream and save it to the target file. In reality, however, it produces a blank PDF file. At other times, it throws an invalid PDF format exception. The file size matches the content length of the InputStream. Is there anything I'm missing?
I'm not familiar with ByteArrayWrapper. Does it just hold a reference to the array, like this?
public class ByteArrayWrapper {
    private final byte[] data;

    public ByteArrayWrapper(byte[] data) {
        this.data = data;
    }

    public byte[] getBytesRead() {
        return data;
    }

    /*...etc...*/
}
If so, that would be the problem: all of the ByteArrayWrapper objects are backed by the same array, which is repeatedly overwritten by the reading thread, even though BlockingQueue did the hard work of safely publishing each object from one thread to the other.
The simplest fix might be to make the ByteArrayWrapper effectively immutable i.e. don't change it after publishing it to another thread. Taking a copy of the array on construction would be simplest:
public ByteArrayWrapper(byte[] data) {
    this.data = Arrays.copyOf(data, data.length);
}
One other problem is that "BlockingQueue does not accept null elements" (see BlockingQueue docs), and so the "end of input" sentinel value doesn't work. Replacing null with a
private static ByteArrayWrapper END = new ByteArrayWrapper(new byte[]{});
in the appropriate places will fix that.
By making those changes to a copy of the code I was able to retrieve a faithful copy of a PDF file.
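For reference, a sketch of the two loop bodies with both fixes applied (the copying constructor above plus the END sentinel):

// Reader thread: publish a fresh copy of each chunk, then the sentinel.
int count;
while ((count = bufferedInputStream.read(dataBuffer, 0, dataBuffer.length)) != -1) {
    ByteArrayWrapper byteArrayWrapper = new ByteArrayWrapper(dataBuffer); // constructor copies
    byteArrayWrapper.setBytesReadCount(count);
    blockingQueue.put(byteArrayWrapper);
}
blockingQueue.put(END); // BlockingQueue rejects null, so use the sentinel instead

// Writer thread: stop when the sentinel arrives (reference comparison is intended).
ByteArrayWrapper byteWrapper;
while ((byteWrapper = blockingQueue.take()) != END) {
    bufferedOutputStream.write(byteWrapper.getBytesRead(), 0, byteWrapper.getBytesReadCount());
}
bufferedOutputStream.flush();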
Try Android's DownloadManager (http://developer.android.com/reference/android/app/DownloadManager.html); it is designed to handle long-running HTTP downloads in the background.
With it you don't need to think about received bytes, and the progress is displayed in the notification bar.
There is a good tutorial here: http://blog.vogella.com/2011/06/14/android-downloadmanager-example/
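For what it's worth, a minimal sketch of the DownloadManager route (assuming the url, context, and documentName variables from the question; the system service downloads the file and shows progress itself):

DownloadManager dm = (DownloadManager) context.getSystemService(Context.DOWNLOAD_SERVICE);
DownloadManager.Request request = new DownloadManager.Request(Uri.parse(url))
        .setTitle(String.valueOf(documentName))
        .setMimeType("application/pdf")
        .setNotificationVisibility(DownloadManager.Request.VISIBILITY_VISIBLE)
        .setDestinationInExternalFilesDir(context, null, String.valueOf(documentName));
long downloadId = dm.enqueue(request); // listen for ACTION_DOWNLOAD_COMPLETE to know when it's done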

Exception propagation within PipedInputStream and PipedOutputStream

I have a data producer that runs in a separate thread and pushes generated data into PipedOutputStream which is connected to PipedInputStream. A reference of this input stream is exposed via public API so that any client can use it. The PipedInputStream contains a limited buffer which, if full, blocks the data producer. Basically, as the client reads data from the input stream, new data is generated by the data producer.
The problem is that the data producer may fail and throw an exception. But as the producer is running in a separate thread, there is no nice way to get the exception to the client.
What I do is catch that exception and close the input stream. That results in an IOException with the message "Pipe closed" on the client side, but I would really like to give the client the real reason behind it.
This is a rough code of my API:
public InputStream getData() {
    final PipedInputStream inputStream = new PipedInputStream(config.getPipeBufferSize());
    final PipedOutputStream outputStream = new PipedOutputStream(inputStream);
    Thread thread = new Thread(() -> {
        try {
            // Start producing the data and push it into the output stream.
            // The production may fail and throw an exception with the reason.
        } catch (Exception e) {
            try {
                // What to do here?
                outputStream.close();
                inputStream.close();
            } catch (IOException e1) {
            }
        }
    });
    thread.start();
    return inputStream;
}
I have two ideas for how to fix this:
Store the exception in the parent object and expose it to the client via the API, i.e. if reading fails with an IOException, the client could ask the API for the reason.
Extend / re-implement the piped streams so that I could pass a reason to the close() method. Then the IOException thrown by the stream could contain that reason as its message.
Any better ideas?
Coincidentally, I just wrote similar code to allow GZip compression of a stream. You don't need to extend PipedInputStream; a FilterInputStream will do, and you return the wrapped version, e.g.:
final PipedInputStream in = new PipedInputStream();
final InputStreamWithFinalExceptionCheck inWithException = new InputStreamWithFinalExceptionCheck(in);
final PipedOutputStream out = new PipedOutputStream(in);
Thread thread = new Thread(() -> {
    try {
        // Start producing the data and push it into the output stream.
        // The production may fail and throw an exception with the reason.
    } catch (final IOException e) {
        inWithException.fail(e);
    } finally {
        inWithException.countDown();
    }
});
thread.start();
return inWithException;
And then InputStreamWithFinalExceptionCheck is just
private static final class InputStreamWithFinalExceptionCheck extends FilterInputStream {

    private final AtomicReference<IOException> exception = new AtomicReference<>(null);
    private final CountDownLatch complete = new CountDownLatch(1);

    public InputStreamWithFinalExceptionCheck(final InputStream stream) {
        super(stream);
    }

    @Override
    public void close() throws IOException {
        try {
            complete.await();
            final IOException e = exception.get();
            if (e != null) {
                throw e;
            }
        } catch (final InterruptedException e) {
            throw new IOException("Interrupted while waiting for synchronised closure");
        } finally {
            in.close(); // 'in' is the wrapped stream inherited from FilterInputStream
        }
    }

    public void fail(final IOException e) {
        exception.set(Preconditions.checkNotNull(e));
    }

    public void countDown() {
        complete.countDown();
    }
}
This is my implementation, taken from the accepted answer above (https://stackoverflow.com/a/33698661/5165540), except that I don't use the CountDownLatch complete.await(), as it would cause a deadlock if the InputStream gets abruptly closed before the writer has finished writing the full content.
I still record the exception caught while the PipedOutputStream is being used, and I create the PipedOutputStream in the spawned thread, using try-with-resources to ensure it gets closed, while the Supplier waits until the two streams are piped.
Supplier<InputStream> streamSupplier = new Supplier<InputStream>() {
    @Override
    public InputStream get() {
        final AtomicReference<IOException> osException = new AtomicReference<>();
        final CountDownLatch piped = new CountDownLatch(1);
        final PipedInputStream is = new PipedInputStream();
        FilterInputStream fis = new FilterInputStream(is) {
            @Override
            public void close() throws IOException {
                try {
                    IOException e = osException.get();
                    if (e != null) {
                        // Exception thrown by the writer bubbles up to the InputStream reader
                        throw new IOException("IOException in writer", e);
                    }
                } finally {
                    super.close();
                }
            }
        };
        Thread t = new Thread(() -> {
            try (PipedOutputStream os = new PipedOutputStream(is)) {
                piped.countDown();
                writeIozToStream(os, projectFile, dataFolder);
            } catch (final IOException e) {
                osException.set(e);
            }
        });
        t.start();
        try {
            piped.await();
        } catch (InterruptedException e) {
            t.interrupt(); // Thread has no cancel(); interrupt it instead
            Thread.currentThread().interrupt();
        }
        return fis;
    }
};
Calling code is something like:
try (InputStream is = streamSupplier.get()) {
    // Read stream in full
}
So when the InputStream is closed, this is signaled to the PipedOutputStream, eventually causing a "Pipe closed" IOException there, which is ignored at that point.
If I instead kept the complete.await() line in FilterInputStream.close(), I could suffer a deadlock (the PipedInputStream trying to close, waiting on complete.await(), while the PipedOutputStream waits forever in PipedInputStream.awaitSpace).
