Always-blocking input stream for testing? (Java)

I'm doing some unit tests where essentially I need the input stream to block forever. Right now I'm using this to construct the input stream
InputStream in = new ByteArrayInputStream("".getBytes());
While it works some of the time, other times the input stream is read before the output stream (the thing I'm testing) is finished, causing all sorts of havoc.
Essentially I need this input stream to block forever when read. The only solution I can think of is to set up the InputStream with a massive buffer so that the other threads finish first, but that's a really hackish and brittle solution. I do have Mockito, but I'm very new at it and not sure if I can get away with mocking only read() without mocking anything else.
Does anyone know of a better solution?
EDIT:
This is my new attempt. It works most of the time, but other times the input thread dies early, which causes the output thread to die (that behavior is intentional). I can't seem to figure out why this sometimes fails.
This is the general test under TestNG, simplified for clarity.
protected CountDownLatch inputLatch;
@BeforeMethod
public void botSetup() throws Exception {
//Setup streams for bot
PipedOutputStream out = new PipedOutputStream();
//Create an input stream that we'll kill later
inputLatch = new CountDownLatch(1);
in = new AutoCloseInputStream(new ByteArrayInputStream("".getBytes()) {
@Override
public synchronized int read() {
try {
//Block until we're killed
inputLatch.await();
} catch (InterruptedException ex) {
//Wrap in a RuntimeException so whatever was using this fails
throw new RuntimeException("Interrupted while waiting for input", ex);
}
//No more input
return -1;
}
});
Socket socket = mock(Socket.class);
when(socket.getInputStream()).thenReturn(in);
when(socket.getOutputStream()).thenReturn(out);
//Setup ability to read from bots output
botOut = new BufferedReader(new InputStreamReader(new PipedInputStream(out)));
...
}
@AfterMethod
public void cleanUp() {
inputLatch.countDown();
bot.dispose();
}
For the test I use readLine() on botOut to get the appropriate number of lines. The issue though is that when the output thread dies, readLine() blocks forever, which hangs up TestNG. I've tried a timeout with mixed results: most of the time it would work, but other times it would kill tests that just took a little longer than normal to run.
My only other option is to just not use streams for this kind of work. The output thread relies on an output queue, so I could just run off of that. The issue though is that I'm not actually testing writing to the stream, just what is going to be sent, which does bother me.

Mockito is great - I am personally a huge fan!
With Mockito, you can do something like the code below. You basically set up a stream mock and tell it to sleep for a very long time when the read() method is invoked on it. You can then pass this mock into the code you want to test to simulate a stream that hangs.
import static org.mockito.Mockito.*;
import org.mockito.invocation.InvocationOnMock;
import org.mockito.stubbing.Answer;
//...
@Test
public void testMockitoSleepOnInputStreamRead() throws Exception{
InputStream is = mock(InputStream.class);
when(is.read()).thenAnswer(new Answer<Integer>() {
@Override
public Integer answer(InvocationOnMock invocation) {
try {
Thread.sleep(10000000000L);
return -1; // effectively unreachable; avoids unboxing a null return
} catch (InterruptedException ie) {
throw new RuntimeException(ie);
}
}
});
//then use this input stream for your testing.
}

I'd make an InputStream that, when read() is called, does a wait() on something that's held locked until you're done with the rest of the test. Subclass FilterInputStream to get everything else for free.
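A minimal sketch of that idea, with my own class name and lock handling (the test holds the lock object and calls notifyAll() on it when finished):
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.InterruptedIOException;

class BlockUntilReleasedInputStream extends FilterInputStream {
    private final Object testLock; // the test notifies this when it is done

    BlockUntilReleasedInputStream(InputStream in, Object testLock) {
        super(in);
        this.testLock = testLock;
    }

    // read(byte[], int, int) should be overridden the same way if callers use it,
    // since FilterInputStream does not route it through read().
    @Override
    public int read() throws IOException {
        synchronized (testLock) {
            try {
                testLock.wait(); // blocks until the test calls testLock.notifyAll()
            } catch (InterruptedException e) {
                throw new InterruptedIOException("interrupted while blocking read()");
            }
        }
        return -1; // report end of stream once released
    }
}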

There doesn't seem to be any reliable way to do this. My code in the question only works sometimes, @Moe's doesn't work at all, @Ed's suggestion is what I was originally doing, and @SJuan's is close to what I'm already doing.
There just seems to be too much going on. The input stream I give to the class is wrapped in an InputStreamReader, then a BufferedReader. Suggestions for other streams inside of other streams just complicate the issue further.
To fix the problem I did what I should have done originally: create a factory method for the InputThread (the thread that actually does the reading), then override it in my tests. Simple, effective, and 100% reliable.
I suggest that anyone who runs into this problem first try to override the part of the program that does the reading (sketched below). If you can't, the code I posted is the only semi-reliable approach that worked in my situation.
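For illustration, a rough sketch of that factory-method seam; Bot and InputThread stand in for the real classes:
// Production class exposes a factory method for the reading thread.
public class Bot {
    protected InputThread createInputThread(InputStream in) {
        return new InputThread(in); // real reader
    }
}

// The test subclasses the bot and swaps in a no-op reader,
// so no InputStream ever needs to block.
Bot testBot = new Bot() {
    @Override
    protected InputThread createInputThread(InputStream in) {
        return new InputThread(in) {
            @Override
            public void run() { /* never reads; the test drives output directly */ }
        };
    }
};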

Then you need another InputStream flavour. read() blocks when no more bytes are available, but with a ByteArrayInputStream bytes are always available until the end of the stream is reached, so read() never blocks.
I would extend ByteArrayInputStream, changing read() so it checks a boolean flag (if true, read; if false, wait a second and loop), as sketched below. Then flip that flag from your test code when the time is right.
Hope that helps
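A small sketch of that suggestion; the flag name and polling interval are my additions:
import java.io.ByteArrayInputStream;

class GatedByteArrayInputStream extends ByteArrayInputStream {
    volatile boolean unblocked = false; // flipped by the test when reading may proceed

    GatedByteArrayInputStream(byte[] buf) {
        super(buf);
    }

    @Override
    public synchronized int read() {
        while (!unblocked) {
            try {
                Thread.sleep(1000); // "wait a second and loop"
            } catch (InterruptedException e) {
                throw new RuntimeException(e);
            }
        }
        return super.read();
    }
}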

I created a helper class that extends ByteArrayInputStream for my unit tests. It pipes a given byte[] through, but at the end of the stream, instead of returning -1 immediately, it waits until close() is called. If ten seconds pass it gives up.
If you want it to unblock earlier you can call latch.countDown() yourself.
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
public class BlockingByteArrayInputStream extends ByteArrayInputStream {
private CountDownLatch latch;
public BlockingByteArrayInputStream(byte[] buf) {
super(buf);
latch = new CountDownLatch(1);
}
@Override
public synchronized int read() {
int read = super.read();
if (read == -1) {
waitForUnblock();
}
return read;
}
@Override
public int read(byte[] b) throws IOException {
int read = super.read(b);
if (read == -1) {
waitForUnblock();
}
return read;
}
@Override
public synchronized int read(byte[] b, int off, int len) {
int read = super.read(b, off, len);
if (read == -1) {
waitForUnblock();
}
return read;
}
private void waitForUnblock() {
try {
latch.await(10, TimeUnit.SECONDS);
} catch (InterruptedException e) {
throw new RuntimeException("safeAwait interrupted");
}
}
@Override
public void close() throws IOException {
super.close();
latch.countDown();
}
}

Related

How does the java.util.logging.Handler lifecycle work?

I'm trying to implement a custom handler that logs parsed LogRecord objects into a file (basically what FileHandler or StreamHandler does). My current implementation is shown below:
public final class ErrorHandler extends Handler {
private static final String OUTPUT_FILE = ".output";
private final Formatter formatter = new CustomFormatter();
private BufferedWriter writer;
@Override
public void publish(LogRecord record) {
if (record.getLevel() == SEVERE || record.getLevel() == WARNING) {
writeToOutput(record);
}
}
void writeToOutput(LogRecord log) {
try {
if (writer == null) {
writer = new BufferedWriter(new FileWriter(OUTPUT_FILE, true));
}
writer.write(formatter.format(log));
} catch (Exception e) {
e.printStackTrace();
}
}
@Override
public void flush() {
}
@Override
public void close() {
try {
writer.close();
} catch (IOException e) {
e.printStackTrace();
}
}
}
P.S.: I know that we can achieve the same as the code above just by setting a filter and formatter on a FileHandler or StreamHandler; however, I'll need the hook points later in the future.
My problem is, if I leave flush() with no implementation, although the output file gets created, no log is written there. If I call writer.flush() inside flush(), the log is duplicated. Any thoughts on why this might be happening?
OK, after two days fighting against this I came to realize that the process was running as a daemon; therefore, the handler's close() was only called when the daemon was killed. I believe this was leading to multiple calls to flush() at almost the same time. Running the process without the daemon solved the issue.
My problem is, if I leave flush() with no implementation, although the output file gets created, no log is written there.
This is because the bytes are cached in the BufferedWriter. Flush sends those bytes to the wrapped FileWriter. If you collect enough bytes it will flush to the target file, but you risk losing that information if you have some sort of process crash or disk issue.
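For reference, a minimal flush() override along those lines, assuming the writer field from the question:
@Override
public void flush() {
    try {
        if (writer != null) {
            writer.flush(); // push buffered bytes through to the FileWriter
        }
    } catch (IOException e) {
        e.printStackTrace();
    }
}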
If I call writer.flush() inside flush(), the log is duplicated. Any thoughts on why this might be happening?
Perhaps you have added two instances of this handler to the logger and both are appending to the same file. Logger.addHandler works like a List and not like a Set. Add code to print the logger tree, which will tell you how many handler instances are installed (see the sketch below).
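A quick sketch of such a check; the logger name is a placeholder:
import java.util.logging.Handler;
import java.util.logging.Logger;
// ...
Logger logger = Logger.getLogger("my.app.SomeClass"); // placeholder name
for (Logger l = logger; l != null; l = l.getParent()) {
    System.out.println("Logger '" + l.getName() + "':");
    for (Handler h : l.getHandlers()) {
        System.out.println("    handler: " + h); // duplicates show up here
    }
}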
I'm sure I have no process crash nor disk issue, and I believe that close() calls flush(). Yet I don't see why nothing is being logged - and it happens only when the file has not been created yet.
close() is only implicitly called when the Java virtual machine shuts down and the handler is visible from the LogManager. If the shutdown is not clean, as described in the documentation, then the contents of the buffered writer are not flushed.

Java 8 equivalent of (RxJava) Observable#onComplete()

I'm getting to know Java 8 Stream API and I am unsure how to signal to a consumer of a stream that the stream is completed.
In my case the results of the stream-pipeline will be written in batches to a database or published on a messaging service. In such cases the stream-pipeline should invoke a method to "flush" and "close" the endpoints once the stream is closed.
I had a bit of exposure to the Observable pattern as implemented in RxJava and remember the Observer#onComplete method is used there for this purpose.
On the other hand Java 8 Consumer only exposes an accept method but no way to "close" it. Digging in the library I found a sub-interface of Consumer called Sink which offers an end method, but it's not public. Finally I thought of implementing a Collector which seems to be the most flexible consumer of a stream, but isn't there any simpler option?
The simplest way of doing a final operation is by placing the appropriate statement right after the terminal operation of the stream, for example:
IntStream.range(0, 100).parallel().forEach(System.out::println);
System.out.println("done");
This operation will be performed only in the successful case, where a commit is appropriate. While the Consumers run concurrently, in unspecified order, it is still guaranteed that all of them have finished their work upon normal return.
Defining an operation that is also performed in the exceptional case is not that easy. Have a look at the following example:
try(IntStream is=IntStream.range(0, 100).onClose(()->System.out.println("done"))) {
is.parallel().forEach(System.out::println);
}
This works like the first one but if you test it with an exceptional case, e.g.
try(IntStream is=IntStream.range(0, 100).onClose(()->System.out.println("done"))) {
is.parallel().forEach(x -> {
System.out.println(x);
if(Math.random()>0.7) throw new RuntimeException();
});
}
you might encounter printouts of numbers after done. This applies to all kinds of cleanup in the exceptional case. When you catch the exception or process a finally block, there might still be running asynchronous operations. While it is no problem to roll back a transaction in the exceptional case at this point, as the data is incomplete anyway, you have to be prepared for still-running attempts to write items to the now-rolled-back resource.
Note that Collector-based solutions, which you thought about, can only define a completion action for successful completion, so they are equivalent to the first example; simply placing the completing statement after the terminal operation is the easier alternative to a Collector.
If you want to define operations which implement both the item processing and the cleanup steps, you may create your own interface for it and encapsulate the necessary Stream setup into a helper method. Here is how it might look:
Operation interface:
interface IoOperation<T> {
void accept(T item) throws IOException;
/** Called after successful completion of <em>all</em> items */
default void commit() throws IOException {}
/**
* Called on failure; for parallel streams it must put the accept()
* method into a silent state or handle concurrent invocations in
* some other way
*/
default void rollback() throws IOException {}
}
Helper method implementation:
public static <T> void processAllItems(Stream<T> s, IoOperation<? super T> c)
throws IOException {
Consumer<IoOperation> rollback=io(IoOperation::rollback);
AtomicBoolean success=new AtomicBoolean();
try(Stream<T> s0=s.onClose(() -> { if(!success.get()) rollback.accept(c); })) {
s0.forEach(io(c));
c.commit();
success.set(true);
}
catch(UncheckedIOException ex) { throw ex.getCause(); }
}
private static <T> Consumer<T> io(IoOperation<T> c) {
return item -> {
try { c.accept(item); }
catch (IOException ex) { throw new UncheckedIOException(ex); }
};
}
Using it without error handling might be as easy as
class PrintNumbers implements IoOperation<Integer> {
public void accept(Integer i) {
System.out.println(i);
}
@Override
public void commit() {
System.out.println("done.");
}
}
processAllItems(IntStream.range(0, 100).parallel().boxed(), new PrintNumbers());
Dealing with errors is possible, but as said, you have to handle the concurrency here. The following example also just prints numbers, but it uses an output stream that should be closed at the end; therefore, the concurrent accept() calls have to deal with concurrently closed streams in the exceptional case.
class WriteNumbers implements IoOperation<Integer> {
private Writer target;
WriteNumbers(Writer writer) {
target=writer;
}
public void accept(Integer i) throws IOException {
try {
final Writer writer = target;
if(writer!=null) writer.append(i+"\n");
//if(Math.random()>0.9) throw new IOException("test trigger");
} catch (IOException ex) {
if(target!=null) throw ex;
}
}
@Override
public void commit() throws IOException {
target.append("done.\n").close();
}
@Override
public void rollback() throws IOException {
System.err.print("rollback");
Writer writer = target;
target=null;
writer.close();
}
}
FileOutputStream fos = new FileOutputStream(FileDescriptor.out);
FileChannel fch = fos.getChannel();
Writer closableStdIO=new OutputStreamWriter(fos);
try {
processAllItems(IntStream.range(0, 100).parallel().boxed(),
new WriteNumbers(closableStdIO));
} finally {
if(fch.isOpen()) throw new AssertionError();
}
Terminal operations on Java 8 streams (like collect(), forEach() etc) will always complete the stream.
If you have something that is processing objects from the Stream, you know the stream has ended when the Collector returns.
If you just have to close your processor, you can wrap it in a try-with-resources and perform the terminal operation inside the try block:
try(BatchWriter writer = new ....){
MyStream.forEach( o-> writer.write(o));
}//autoclose writer
You can use Stream#onClose(Runnable) to specify a callback to invoke when the stream is closed. Streams are pull-based (contrary to push-based rx.Observable), so the hook is associated with the stream, not its consumers.
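A minimal example: the callback runs when close() is invoked, here implicitly via try-with-resources:
try (Stream<String> s = Stream.of("a", "b")
        .onClose(() -> System.out.println("stream closed"))) {
    s.forEach(System.out::println);
} // "stream closed" is printed here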
This is a bit of a hack but works well.
Create a stream concatenation of the original + a unique object.
Using peek(), see if the new object is encountered, and call the onFinish action.
Return the stream with filter, so that the unique object won't be returned.
This preserves the onClose event of the original stream.
public static <T> Stream<T> onFinish(Stream<T> stream, Runnable action) {
final Object end = new Object(); // unique object
Stream<Object> withEnd = Stream.concat(stream.sequential(), Stream.of(end));
Stream<Object> withEndAction = withEnd.peek(item -> {
if (item == end) {
action.run();
}
});
Stream<Object> withoutEnd = withEndAction.filter(item -> item != end);
return (Stream<T>) withoutEnd;
}
Another option is to wrap the original spliterator, and when its tryAdvance() returns false, call the action.
public static <T> Stream<T> onFinishWithSpliterator(Stream<T> source, Runnable onFinishAction) {
Spliterator<T> spliterator = source.spliterator();
Spliterator<T> result = new Spliterators.AbstractSpliterator<T>(spliterator.estimateSize(), spliterator.characteristics()) {
@Override
public boolean tryAdvance(Consumer<? super T> action) {
boolean didAdvance = spliterator.tryAdvance(action);
if (!didAdvance) {
onFinishAction.run();
}
return didAdvance;
}
};
// wrap the new spliterator with a stream and keep the onClose event
return StreamSupport.stream(result, false).onClose(source::close);
}

How to get the progress of reading a file

I'm trying to read a large object from a file in my application. Since this can take some time, I'd like to somehow connect the reading of the file to a JProgressBar. Is there any easy way to find the progress of reading a file? (The loading itself is done in a SwingWorker thread, so updating a progress bar should not be a problem.) I've been thinking about overriding the read() method in FileInputStream to return a progress value of sorts, but that seems like a devious way to do it. Any suggestions on how to realize this are more than welcome.
Here is the code for reading the file:
public class MapLoader extends SwingWorker<Void, Integer> {
String path;
WorldMap map;
public void load(String mapName) {
this.path = Game.MAP_DIR + mapName + ".map";
this.execute();
}
public WorldMap getMap() {
return map;
}
@Override
protected Void doInBackground() throws Exception {
File f = new File(path);
if (! f.exists())
throw new IllegalArgumentException(path + " is not a valid map name.");
try {
FileInputStream fs = new FileInputStream(f);
ObjectInputStream os = new ObjectInputStream(fs);
map = (WorldMap) os.readObject();
os.close();
fs.close();
} catch (IOException | ClassCastException | ClassNotFoundException e) {
e.printStackTrace();
}
return null;
}
@Override
protected void done() {
firePropertyChange("map", null, map);
}
}
If it were me, I would not mess with overriding FileInputStream. I think the decorator pattern might be a good fit here. The idea is that you create a decorator input stream that you pass to your ObjectInputStream. The decorator takes care of updating the progress of your read, then delegates to the real input stream.
Perhaps the easiest solution is to use CountingInputStream from Apache commons-io. The basic steps would be:
Create a subclass of CountingInputStream as a non-static inner class of your map loader.
Override the afterRead method: call super.afterRead, then publish your updated status.
Pass an instance of your new decorator to the ObjectInputStream, handing the FileInputStream to the constructor of your decorator (see the sketch below).
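A sketch of what that subclass might look like (an assumption-laden sketch: it presumes commons-io's CountingInputStream is on the classpath and sits inside MapLoader so it can call the SwingWorker's publish() directly, and that the total file length is known up front):
import java.io.InputStream;
import org.apache.commons.io.input.CountingInputStream;

private class ProgressInputStream extends CountingInputStream {
    private final long totalBytes;

    ProgressInputStream(InputStream in, long totalBytes) {
        super(in);
        this.totalBytes = totalBytes;
    }

    @Override
    protected synchronized void afterRead(int n) {
        super.afterRead(n);
        publish((int) (100 * getByteCount() / totalBytes)); // percent read so far
    }
}
In doInBackground() you would then wrap the stream, e.g. new ObjectInputStream(new ProgressInputStream(fs, f.length())), and let process() update the JProgressBar.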
Using RandomAccessFile you may call getFilePointer() to know how many bytes have been read; a sketch follows.
The time-consuming operation may be executed in a background thread; remember to use SwingUtilities.invokeLater() to communicate between the background task and the GUI thread.
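A tiny sketch of that approach, using the path field from the MapLoader above (exception handling omitted for brevity):
RandomAccessFile raf = new RandomAccessFile(path, "r");
long total = raf.length();
byte[] buf = new byte[8192];
while (raf.read(buf) != -1) {
    int percent = (int) (100 * raf.getFilePointer() / total);
    // report 'percent' to the progress bar on the EDT, e.g. via invokeLater()
}
raf.close();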
If you are considering overriding read() in FileInputStream, what you could legitimately consider is making your own wrapper InputStream class that accepts a progress-monitor callback. However, you'll find that it is not as easy as implementing read(), since it is very inefficient to spend a method invocation on each byte. Instead you'll need to deal with read(byte[], int, int), which is a bit more involved; see the sketch below.
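A sketch of such a wrapper; the listener interface is my own invention:
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;

interface ReadProgressListener {
    void bytesRead(long totalSoFar); // hypothetical callback
}

class ProgressTrackingInputStream extends FilterInputStream {
    private final ReadProgressListener listener;
    private long count;

    ProgressTrackingInputStream(InputStream in, ReadProgressListener listener) {
        super(in);
        this.listener = listener;
    }

    @Override
    public int read() throws IOException {
        int b = super.read();
        if (b != -1) listener.bytesRead(++count);
        return b;
    }

    @Override
    public int read(byte[] b, int off, int len) throws IOException {
        int n = super.read(b, off, len);
        if (n > 0) listener.bytesRead(count += n);
        return n;
    }
}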

Synchronisation on java.io.File Object

Is it good to use synchronized on a java.io.File object when you want to alternately read and write that file using two threads: one for reading and one for writing?
public class PrintChar {
File fileObj;
public void read() {
while (true) {
synchronized (this) {
readFile();
notifyAll();
try {
wait();
} catch (InterruptedException e) {
System.out.println(Thread.currentThread().getName()
+ " throws Exception");
e.printStackTrace();
}
}
}
}
public void write(String temp) {
while (true) {
synchronized (this) {
writeFile(temp);
notifyAll();
try {
wait();
} catch (InterruptedException e) {
System.out.println(Thread.currentThread().getName()
+ " throws Exception");
e.printStackTrace();
}
}
}
}
public void setFileObj(File fileObj) {
this.fileObj = fileObj;
}
public void readFile() {
InputStream inputStream;
try {
inputStream = new FileInputStream(fileObj);
// Get the object of DataInputStream
DataInputStream in = new DataInputStream(inputStream);
BufferedReader br = new BufferedReader(new InputStreamReader(in));
String strLine;
// Read File Line By Line
while ((strLine = br.readLine()) != null) {
// Print the content on the console
System.out.println(strLine);
}
in.close();
} catch (FileNotFoundException e) {
e.printStackTrace();
} catch (IOException e) {
e.printStackTrace();
}
}
public void writeFile(String temp) {
BufferedWriter bw;
try {
bw = new BufferedWriter(new FileWriter(fileObj, true));
bw.write(temp);
bw.newLine();
bw.close();
} catch (IOException e) {
e.printStackTrace();
}
}
public static void main(String args[]) {
final PrintChar p = new PrintChar();
p.setFileObj(new File("C:\\sunny.txt"));
Thread readingThread = new Thread(new Runnable() {
@Override
public void run() {
p.read();
}
});
Thread writingThread = new Thread(new Runnable() {
@Override
public void run() {
p.write("hello");
}
});
Thread Randomizer = new Thread(new Runnable() {
@Override
public void run() {
while (true)
try {
Thread.sleep(500000);
} catch (InterruptedException e) {
System.out.println(Thread.currentThread().getName()
+ " throws Exception");
e.printStackTrace();
}
}
});
readingThread.start();
writingThread.start();
Randomizer.start();
}
}
In the code above I have used synchronized(this). Can I use synchronized(fileObj)?
One more solution I got from one of my professors is to encapsulate the reads and writes in objects and push them onto a FIFO after every operation. Could anybody elaborate on this?
Edit:
Now that you have added your code, you can lock on fileObj but only if it is not changed. I would move it to the constructor and make it final to make sure that someone doesn't call setFileObj inappropriately. Either that or throw an exception if this.fileObj is not null.
Couple other comments:
Don't use notifyAll() unless you really need to notify multiple threads.
If you catch InterruptedException, I'd quit the thread instead of looping. Always make good decisions around catching InterruptedException and don't just print and loop.
Your in.close(); should be in a finally block.
You can lock on any object you want as long as both threads are locking on the same constant object. It is typical to use a private final object for example:
private final File sharedFile = new File(...);
// reader
synchronized (sharedFile) {
// read from file
}
...
// writer
synchronized (sharedFile) {
// write to file
}
What you can't do is lock on two different File objects, even if they both point to the same file. The following will not work for example:
private static final String SHARED_FILE_NAME = "/tmp/some-file";
// reader
File readFile = new File(SHARED_FILE_NAME);
synchronized (readFile) {
...
}
// writer
File writeFile = new File(SHARED_FILE_NAME);
synchronized (writeFile) {
...
}
Also, just because you are locking on the same File object does not mean that the reading and writing code will work between the threads. You will need to make sure that in the writer that all updates are flushed in the synchronized block. In the reader you probably do not want to use buffered streams otherwise you will have stale data.
In general, locking across I/O is not a great idea. It's better to construct your program such that you guarantee by design that usually a given section of the file is not being concurrently written and read, and only lock if you absolutely must mediate between reads and writes of a given piece of the file.
Usually not. There are much better ways: use a ReentrantReadWriteLock.
This class already offers the "lock for reading/writing" metaphor. It also correctly handles the case that many threads can read at the same time but only one thread can write, as sketched below.
As other people already mentioned, locking will only work if all threads use the same File instance.
Make sure you flush the output buffers after each write; this will cost some performance, but otherwise you'll get stale reads (the read thread won't find data that you expect to be there).
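A minimal sketch of that read/write locking; the method bodies are placeholders:
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

private final ReadWriteLock rwLock = new ReentrantReadWriteLock();

public void readFile() {
    rwLock.readLock().lock(); // many readers may hold this at once
    try {
        // read from the file
    } finally {
        rwLock.readLock().unlock();
    }
}

public void writeFile(String temp) {
    rwLock.writeLock().lock(); // exclusive
    try {
        // write to the file, then flush
    } finally {
        rwLock.writeLock().unlock();
    }
}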
If you want to simplify the code, add a third thread which accepts commands from the other two. The commands are READ and WRITE. Put the commands in a queue and let the third thread wait for entries in the queue. Each command should have a callback method (like success()) which the third thread will call when the command has been executed.
This way, you don't need any locking at all. The code for each thread will be much simpler and easier to test; see the sketch below.
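A rough sketch of that design; the command type and callback method are made up for illustration:
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

interface FileCommand {
    void execute();  // the actual read or write
    void success();  // callback invoked by the I/O thread when done
}

BlockingQueue<FileCommand> commands = new LinkedBlockingQueue<>();

// The single I/O thread owns the file; nobody else touches it.
Thread ioThread = new Thread(() -> {
    try {
        while (true) {
            FileCommand cmd = commands.take(); // waits for work
            cmd.execute();
            cmd.success();
        }
    } catch (InterruptedException e) {
        // asked to shut down
    }
});
ioThread.start();

// Reader and writer threads just enqueue commands:
commands.offer(new FileCommand() {
    public void execute() { /* e.g. writeFile("hello") */ }
    public void success() { /* e.g. wake the submitting thread */ }
});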
[EDIT] Answer based on your code: it would work in your case because everyone uses the same instance of fileObj, but it would mix several things into one field. People reading your code would expect the file object to be just the path to the file to read, so the solution would violate the principle of least astonishment.
If you'd argue that it would save memory, then I'd reply with "premature optimization".
Try to find a solution which clearly communicates your intent. "Clever" coding is good for your ego but that's about the only positive thing that one can say about it (and it's not good for your ego to learn what people will say about you after they see your "clever" code for the first time...) ;-)
Queueing read/write objects to one thread that then performs the operations is a valid approach to something, but I'm not sure what.
What it would not do, for example, is enforce the read/write/read/write order you specified in your earlier question. There is nothing to stop the read thread from queueing up 100 read requests.
That could be prevented by making the thread that submits an object wait on it until it is signaled by the read/write thread, but this seems a very complex way of just enforcing read/write order (assuming that's what you still want).
I'm getting to the point now where I'm not sure what it is you need/want.

PrintWriter to JTextArea, nothing displays until called method closes

Context: I am reading data from a serial port at 115.2 Kbaud. The read data is printed using a PrintWriter that I then have appending to a JTextArea.
Everything works well, but the text in the JTextArea does not appear until the method sending the stream from the serial port to my PrintWriter finishes. I'd like it to display closer to real time, as I will at times be receiving upwards of 20-30 MB of text at a time, and seeing how the general flow of text changes as the program executes would be valuable.
I am using the PrintWriter-to-JTextArea method described here. I think the solution probably has to do with threads and PipedWriter/PipedReader, but every attempt I've made to implement that has failed miserably.
Thank you for your help.
//code calling method; VerifierReader does not inherit from Reader
//or any such class. it's wholly homegrown. I send it the PrintWriter
//as out, telling it to output there
verifierInstance=new VerifierReader("COM3", verifierOutputLocString.getText());
verifierInstance.setSysOutWriter(out);
verifierInstance.readVerifierStream();
// and the relevant code from VerifierReader
public void setSysOutWriter (PrintWriter outWriter) {
sysOutWriter=new PrintWriter(outWriter);
}
public void readVerifierStream() throws SerialPortException,
InterruptedException{
try{
sysOutWriter.println("Listening for verifier...");
//sysOutWriter.flush();
verifierPort.addEventListener(new verifierListener());
lastReadTimer=System.currentTimeMillis();
while(verifierPort.isOpened()) {
Thread.sleep(1000);
//System.out.println(timeOut);
if( ((long)(System.currentTimeMillis()-lastReadTimer))>timeOut){
sysOutWriter.println("Finished");
verifierPort.removeEventListener();
verifierPort.closePort();
}
}
}
finally {
if (verifierPort.isOpened()) {
verifierPort.closePort();
}
bfrFile.close();
}
}
private class verifierListener implements SerialPortEventListener{
String outBuffer;
public void serialEvent(SerialPortEvent event) {
if(event.isRXCHAR()){//If data is available
timeOut=200;
lastReadTimer=System.currentTimeMillis();
if(event.getEventValue() > 0){//Check bytes count in the input buffer
try {
byte[] buffer = verifierPort.readBytes(event.getEventValue());
outBuffer=new String(buffer);
bfrFile.print(outBuffer);
sysOutWriter.print(outBuffer);
//bfrFile.flush();
//sysOutWriter.flush();
}
catch (SerialPortException ex) {
sysOutWriter.println(ex);
}
}
}
}
}
Edit:
I've attempted what was recommended below, and have made the following changes:
private class VerifierTask extends SwingWorker<Void, String> {
public VerifierTask() throws IOException, SerialPortException, InterruptedException{
verifierInstance= new VerifierReader(streamReader);
verifierInstance.setReaderIO("COM3", verifierOutputLocString.getText());
verifierInstance.readVerifierStream();
}
@Override
protected Void doInBackground() throws IOException{
int charItem;
char[] charBuff = new char[10];
String passString;
while ((charItem = streamReader.read(charBuff, 0, 10)) !=-1) {
passString = new String(charBuff);
publish(passString);
}
return null;
}
@Override
protected void process(List<String> outList) {
for (String output : outList) {
outputArea.append(output);
}
}
}
was added, and I changed my button to immediately invoke a new instance of the VerifierTask class, in addition to making VerifierReader write its output to a PipedWriter (with all of it being Strings).
I'm not sure what I'm doing wrong here. When this code is executed, the Java process just freezes indefinitely.
Am I assuming correctly that a VerifierReader created in any VerifierTask thread is tied to that thread, and thus my thread.sleep and while(true) statements no longer pose a problem?
Don't call Thread.sleep or run while (true) loops on the main Swing event thread, the EDT. Ever. Instead, do this sort of thing in a background thread such as one provided by a SwingWorker. Use the publish/process method pair to get intermediate results to your JTextArea.
For more on this, please check out the tutorial: Concurrency in Swing.
