I am trying to interrupt a thread that is running AES encryption on a file. That can take a while, and so far I have come up with this.
This body is inside a button's action event. When the user clicks the button again (the else clause), the thread should be interrupted. I would be happier if I could stop the thread completely, but that is deprecated.
Anyway, the thread ignores the .interrupt() and continues to execute aes256File. The fileEncryptThread.isInterrupted() flag does get raised, but from the CPU usage I can see it still continues to crunch the file.
I have read the guide on safely stopping threads, but I have no desire to completely redesign my already slow AES implementation to check for interrupt flags set from outside the class...
fileEncryptThread = new FileThread() // new thread please
{
    @Override
    public void run()
    {
        String result = "";
        result = MyCrypto.aes256File(enInPath,
                                     enOutPath,
                                     charsToString(passT.getPassword()),
                                     sec);
        if (!"".equals(result)) // error handling
        {
            JOptionPane.showMessageDialog(null, result);
        }
    }
};
fileEncryptThread.start();
}
else // if stop clicked
{
    fileEncryptThread.interrupt();
In order to effectively interrupt a thread, that thread has to be written in an interruptible way. That is, it must periodically check the
Thread.currentThread().isInterrupted()
boolean and act on it.
In your app, you should verify that
result = MyCrypto.aes256File(enInPath,
                             enOutPath,
                             charsToString(passT.getPassword()),
                             sec);
acts in such a manner (if it's a 3rd-party library, this should be documented in its Javadoc). If it's not interruptible, you'd have to choose another implementation for the encryption.
AFAIK, the only safe way for a thread to terminate is to return from the "main" method of the thread (usually run() in Runnable or Thread). You could, for example, use a while(<some class member boolean>) loop inside your MyCrypto.aes256File method and set the boolean to false so the thread falls out of the loop and exits, returning a value indicating that the process was not completed.
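A minimal sketch of what that could look like inside aes256File, assuming the file is processed chunk by chunk; the class name, the cancelled flag, the encryptChunk helper and the parameter types are all invented for illustration:

import java.io.*;

class InterruptibleAesSketch {
    private volatile boolean cancelled = false; // set to true from the UI thread to abort

    public String aes256File(String inPath, String outPath, String pass, byte[] sec) {
        try (InputStream in = new FileInputStream(inPath);
             OutputStream out = new FileOutputStream(outPath)) {
            byte[] buffer = new byte[8192];
            int read;
            while ((read = in.read(buffer)) != -1) {
                // check the flag and the interrupt status once per chunk, not per byte
                if (cancelled || Thread.currentThread().isInterrupted()) {
                    return "Encryption aborted by user";
                }
                out.write(encryptChunk(buffer, read, pass, sec)); // hypothetical per-chunk AES helper
            }
            return ""; // empty string = success, as in the question's error handling
        } catch (IOException e) {
            return e.getMessage();
        }
    }

    public void cancel() { cancelled = true; }

    // placeholder standing in for the real AES work on one chunk
    private byte[] encryptChunk(byte[] data, int len, String pass, byte[] sec) {
        return java.util.Arrays.copyOf(data, len);
    }
}

Checking once per chunk keeps the overhead negligible next to the encryption work itself.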
One other approach that may be possible (I don't know the AES algorithm well enough) would be to split the file reading from the encryption. A read thread would fill large-ish buffers from the file and queue them to the encryption thread. The encryption thread would process the buffers and queue the 'used' ones back to the reader thread. This allows easy stopping of both threads while also probably improving performance, especially on multi-core machines, by moving I/O waits out of the encrypter. The encryption thread would probably never have to wait for a disk read: a temporary wait by the reader thread for a disk head move would not matter as long as the encryption thread had enough buffers to work on in the queue (even on a single-core machine). The fixed number of buffers and the two (blocking, thread-safe) queues provide flow control should the reader thread get ahead of the encrypter.
The actual stopping mechanism then becomes somewhat trivial, as shown in the sketch below. The gain from avoiding disk latency would overwhelm the time wasted occasionally checking a flag, e.g. just before going to the queue for the next buffer.
Queueing buffers then also allows the possibility of adding sequence numbers to the buffers and so allowing all cores to work on the encryption.
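If you go that route, a rough sketch of the buffer-recycling pipeline could look like the following, with two ArrayBlockingQueues and a fixed pool of buffers; all names are invented and the 'encryption' is just a placeholder:

import java.io.*;
import java.util.concurrent.*;

class PipelineSketch {
    // a filled buffer plus how many bytes of it are valid
    static class Chunk {
        final byte[] data; final int len;
        Chunk(byte[] data, int len) { this.data = data; this.len = len; }
    }

    static final int BUFFERS = 8, BUF_SIZE = 64 * 1024;
    static final Chunk EOF = new Chunk(new byte[0], -1); // end-of-stream marker

    public static void encryptStream(InputStream in, OutputStream out) throws Exception {
        BlockingQueue<Chunk> filled = new ArrayBlockingQueue<>(BUFFERS);
        BlockingQueue<byte[]> free = new ArrayBlockingQueue<>(BUFFERS);
        for (int i = 0; i < BUFFERS; i++) free.put(new byte[BUF_SIZE]);

        Thread reader = new Thread(() -> {
            try {
                int n;
                byte[] buf;
                while ((n = in.read(buf = free.take())) != -1) {
                    filled.put(new Chunk(buf, n)); // blocks if the encrypter falls behind
                }
                filled.put(EOF);
            } catch (IOException | InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        reader.start();

        Chunk c;
        while ((c = filled.take()) != EOF && !Thread.currentThread().isInterrupted()) {
            out.write(encrypt(c.data, c.len)); // hypothetical per-chunk encryption
            free.put(c.data);                  // recycle the buffer back to the reader
        }
        reader.interrupt();
        reader.join();
    }

    private static byte[] encrypt(byte[] data, int len) { // placeholder for the real AES work
        return java.util.Arrays.copyOf(data, len);
    }
}

The fixed number of buffers is what provides the flow control described above, and the cheap interrupt check before each take() is the whole stopping mechanism.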
Related
I read that a single user thread can deadlock with a system thread.
My question is: can this system thread be any thread (not necessarily a Java thread) that shares resources with the Java thread? For example, I/O on two files after taking locks on the files.
So unless the system thread shares a resource with the Java thread, it can't create a deadlock.
Is there any other example that illustrates the statement above?
Another question:
If two functions use two locks, they should acquire them in the same order. But is it mandatory to release them in the reverse order? Can the lock release order differ between the two functions?
E.g.:
function1() {
    try {
        lock1.lock();
        lock2.lock();
    } finally {
        lock2.unlock();
        lock1.unlock();
    }
}

function2() {
    try {
        lock1.lock();
        lock2.lock();
    } finally {
        lock1.unlock();
        lock2.unlock();
    }
}
Link for reference: if a single user thread deadlocks, a system thread must also be involved
It's correct that a single Java thread cannot deadlock against itself if only Java object monitor locks are involved.
It's not entirely clear what you mean by "system thread". Even when running a simple program, a JVM will have several threads running, such as a finalizer thread, or for GUI applications, an event distribution thread (EDT). These threads can potentially take Java object monitor locks and therefore deadlock against a single application thread.
A single Java thread can deadlock against external processes, not other Java threads. For example, consider this program:
public static void main(String[] args) throws Exception {
    Process proc = Runtime.getRuntime().exec("cat");
    byte[] buffer = new byte[100_000];
    OutputStream out = proc.getOutputStream();
    out.write(buffer);
    out.close();
    InputStream in = proc.getInputStream();
    int count = in.read(buffer);
    System.out.println(count);
}
This runs "cat" which simply copies from stdin to stdout. This program will usually deadlock, since it writes a large amount of data to the subprocess. The subprocess will block writing to its output, since the parent hasn't read it yet. This prevents the subprocess from reading all its input. Thus the Java thread has deadlocked against the subprocess. (The usual way to deal with this situation is to have another Java thread read the subprocess output.)
A single Java thread can deadlock if it's waiting for a notification that never occurs. Consider:
public static void main(String[] args) throws InterruptedException {
Object obj = new Object();
synchronized (obj) {
obj.wait();
}
}
This program will never terminate since nothing will ever notify obj or interrupt the thread. This may seem a bit contrived, but instances of this "lost wakeup problem" do occur in practice. A system with bugs may fail to set state properly, or call notify at the wrong time, or call notify instead of notifyAll, leaving a thread blocked in a wait call awaiting a notification that will never occur. In such cases it might be hard to identify another thread that this thread is deadlocked against, since that thread might have died in the past, or it might not have been created yet. But it is surely deadlock.
UPDATE
I ran across another example of a single-threaded deadlock. Goetz et al., Java Concurrency in Practice, p. 215, describes thread-starvation deadlock. Consider an example where
a task that submits a task and waits for its result executes in a single-threaded Executor. In that case, the first task will wait forever, permanently stalling that task and all others waiting to execute in that Executor.
(A single-threaded Executor is basically a single thread processing a queue of tasks, one at a time.)
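A short, self-contained illustration of that thread-starvation deadlock (this is my own demonstration code, not an example from the book):

import java.util.concurrent.*;

public class StarvationDeadlock {
    public static void main(String[] args) throws Exception {
        ExecutorService exec = Executors.newSingleThreadExecutor();
        Future<String> outer = exec.submit(() -> {
            // the inner task can never start: the only worker thread is busy
            // running this outer task, which in turn waits for the inner result
            Future<String> inner = exec.submit(() -> "inner result");
            return inner.get(); // blocks forever
        });
        System.out.println(outer.get()); // never prints
    }
}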
UPDATE 2
I've found another example in the literature of a single-thread deadlock:
There are three patterns of pairwise deadlock that can occur using monitors. In practice, of course, deadlocks often involve more than two processes, in which case the actual patterns observed tend to be more complicated; conversely, it is also possible for a single process to deadlock with itself (for example, if an entry procedure is recursive).
Lampson, Butler W., and David D. Redell. Experience with Processes and Monitors in Mesa. CACM Vol. 23 No. 2, Feb 1980.
Note that in this paper, a "process" refers to what we'd call a thread and an "entry procedure" is like a synchronized method. However, in Mesa, monitors are not re-entrant, so a single thread can deadlock itself if it attempts to enter the same monitor a second time.
The same is true in Posix threads. If a thread calls pthread_mutex_lock a second time on a normal (i.e., not recursive) mutex, the thread will deadlock on itself.
From these examples, I conclude that "deadlock" does not strictly require two or more threads.
For the first question: think of any Swing application. The main thread can easily interfere with the Event Dispatch Thread, for example (since all the event handling takes place in that specific thread). You could also play around with the finalizer thread.
For the second question: yes, you can release the locks in any order.
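For illustration, a small sketch with two ReentrantLocks: both methods acquire the locks in the same order but release them in different orders, and neither can deadlock against the other (note the usual idiom of calling lock() before the try block, which differs slightly from the snippet in the question):

import java.util.concurrent.locks.*;

class LockOrderDemo {
    private final Lock lock1 = new ReentrantLock();
    private final Lock lock2 = new ReentrantLock();

    void function1() {
        lock1.lock();
        lock2.lock();
        try {
            // ... critical section ...
        } finally {
            lock2.unlock(); // released in reverse order
            lock1.unlock();
        }
    }

    void function2() {
        lock1.lock();
        lock2.lock();
        try {
            // ... critical section ...
        } finally {
            lock1.unlock(); // released in acquisition order: still fine
            lock2.unlock();
        }
    }
}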
I have a thread which basically looks like this:
public void run() {
    while (true) {
        try {
            String curUrl = taskQueue.take();
            ...
        } catch (InterruptedException e) {
            return; // stop if interrupted
        }
    }
}
I know I can change true to a volatile variable, and set it to false when I no longer need it, but the thread is used throughout my application, so it's difficult to tell when it is no longer needed.
I am wondering if having an infinite loop will have a visible effect on the performance of the rest of the Android VM if it's left in a blocked state even when my app is not running.
EDIT 1: The code that starts the thread will only start it if it's not already running.
As the taskQueue is a blocking queue, no. Leaving this thread running throughout the application's life will not cause any noticeable effect.
If taskQueue weren't a blocking queue, then it would: your application would consume nearly a whole core of processing power looping as fast as it can.
A blocking queue causes a thread invoking the take method to wait until there is some result available to return. A non-blocking queue would return null or throw an exception instead.
Android will terminate your application's process when it isn't being used, and that will include terminating your waiting thread, so in general it won't hang around for long after users switch away to other tasks. I wouldn't worry about it, if it's difficult to know when it won't be needed again.
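If you do later want a clean way to shut the loop down without threading a shared boolean through the application, one common option is a "poison pill" placed on the queue; a rough sketch (the POISON sentinel and class name are made up for illustration):

import java.util.concurrent.*;

class UrlWorker implements Runnable {
    static final String POISON = "__STOP__"; // hypothetical sentinel value
    private final BlockingQueue<String> taskQueue;

    UrlWorker(BlockingQueue<String> taskQueue) { this.taskQueue = taskQueue; }

    public void run() {
        try {
            while (true) {
                String curUrl = taskQueue.take();
                if (POISON.equals(curUrl)) return; // clean shutdown requested
                // ... process curUrl ...
            }
        } catch (InterruptedException e) {
            // interrupted while waiting: also treat as shutdown
        }
    }
}

// shutting down from elsewhere: taskQueue.put(UrlWorker.POISON);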
In my Java web app I have a method which sends out about 200 emails. Because of email server delay the whole process takes about 7 minutes. This bulk email sending has to take place as the result of a user action. I of course don't want the user to have to wait that long before they are forwarded to the next page, not to mention that Apache times out anyway, so I am attempting to use FutureTask to run the process in a separate thread while proceeding with the rest of the code, like this:
// some code
Runnable r = (Runnable) new sendEmails(ids);
FutureTask task = new FutureTask(r, null);
Thread t = new Thread(task);
t.start();
// some more code
The app, however, still waits for the FutureTask to finish before proceeding. I am open to the idea that this is also not the best way to run some code on the side in another thread while continuing with the rest of the script. Are there better ways? How do I make this one work?
It looks like you are spinning up 200+ threads in a for loop. That will place a high burden on the machine, and due to the size of each stack that is allocated with each thread it will not take too many threads before the JVM runs out of memory, initially causing much GC and JVM locking up and then potentially under high enough load, a crash.
Sadly this may or may not explain why your code is waiting for the FutureTask to complete. It may only appear to be waiting due to thrashing from creating/scheduling so many threads; but then again it may not. There could very well be something else synchronizing your code that has been cut out of the snippet above.
A way for you to find if there is a tricksy synchronisation hiding somewhere would be to hit ctrl-break while running the code (assuming that you are running from a command line, intellij/eclipse both have a stack dump icon that is handy). This will cause a stack dump for every thread in the system to appear. By doing this you will be able to find the user thread that is waiting for the future tasks to complete, and it will say which monitor it is waiting on. If it is not waiting, then you have a different problem. For example the system thrashes creating so many threads in short order that it appears to lock up or some such for a short period of time.
But first I would avoid the excessive Thread creation part, as that could be masking the issue. I suggest using code similar to the following:
ExecutorService scheduler = Executors.newCachedThreadPool();
scheduler.submit(task);
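A fuller sketch of how that might look around your action code, assuming sendEmails(ids) is the long-running bulk send; the pool size and class names are arbitrary, and in a real web app the executor would be created once and shared, not per request:

import java.util.concurrent.*;

class BulkMailExample {
    // in a real web app this would be created once (e.g. in a ServletContextListener)
    private static final ExecutorService MAIL_POOL = Executors.newFixedThreadPool(4);

    void onUserAction(java.util.List<Long> ids) {
        // submit the slow bulk send and return immediately
        Future<?> pending = MAIL_POOL.submit(() -> sendEmails(ids));
        // ... continue with the rest of the request handling; optionally keep
        // 'pending' around if you later want to check completion or cancel it
    }

    private void sendEmails(java.util.List<Long> ids) {
        // placeholder for the real 7-minute bulk mail job
    }
}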
Like many others, I have a problem killing my thread without using stop().
I have tried using a volatile variable with a while loop in my thread's run() routine.
The problem, as far as I can see, is that the while loop only checks the variable once per iteration. The complex routine I'm running takes a long time, and because of that the thread is not terminated immediately.
The thread I want to terminate is a routine that connects to another server, and it takes a looooong time. I want to have an abort button for this (terminating the thread). I'll try to explain with some code.
class MyConnectClass {
    Thread conThread;
    volatile boolean threadTerminator = false;

    // ..some code with connect and abort button..

    public void actionPerformed(ActionEvent e) {
        String btnName = e.getActionCommand();
        if (btnName.equalsIgnoreCase("terminate")) {
            threadTerminator = true;
            conThread.interrupt();
            System.out.println("#INFO# USER ABORTED CURRENT OPERATION!");
        } else if (btnName.equalsIgnoreCase("connectToServer")) {
            conThread = new Thread() {
                public void run() {
                    while (threadTerminator == false) {
                        doComplexConnect(); // takes a loooong time
                    }
                }
            };
            conThread.start();
        }
    }
}
How can I kill my "connection" thread instantly?
Thanks.
Java abandoned the stop() approach in Threads a while back because killing a Thread ungracefully caused huge problems in the JVM. From the Javadoc for stop():
Stopping a thread with Thread.stop causes it to unlock all of the monitors that it has locked (as a natural consequence of the unchecked ThreadDeath exception propagating up the stack). If any of the objects previously protected by these monitors were in an inconsistent state, the damaged objects become visible to other threads, potentially resulting in arbitrary behavior. Many uses of stop should be replaced by code that simply modifies some variable to indicate that the target thread should stop running. The target thread should check this variable regularly, and return from its run method in an orderly fashion if the variable indicates that it is to stop running. If the target thread waits for long periods (on a condition variable, for example), the interrupt method should be used to interrupt the wait.
In most cases, it is up to you to check the threadTerminator var whenever it is safe for you to terminate, and handle the thread exit gracefully. See http://docs.oracle.com/javase/6/docs/technotes/guides/concurrency/threadPrimitiveDeprecation.html
If you are doing long I/O, you may be in trouble. Some I/O operations throw an InterruptedException, in which case you can interrupt the thread and, if you were in that I/O, the exception will be thrown more or less instantly, so you can abort and clean up the thread. For this reason, interrupting a thread is preferable to using a special custom threadTerminator variable; it's much more standard. In your main code outside of the I/O, check interrupted() or isInterrupted() periodically (instead of threadTerminator == false).
If you are doing I/O that doesn't throw InterruptedException, sometimes you can close the Socket or similar, and catch the IOException. And sometimes you are stuck.
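A sketch combining both ideas, checking the interrupt status between steps and closing the socket from the aborting thread so a blocked read fails fast (the contents of doComplexConnect and the socket field are assumptions about your code):

import java.io.*;
import java.net.*;

class AbortableConnection {
    private volatile Socket socket; // shared so the UI thread can close it

    void doComplexConnect(String host, int port) {
        try {
            socket = new Socket(host, port);
            InputStream in = socket.getInputStream();
            byte[] buf = new byte[8192];
            while (!Thread.currentThread().isInterrupted()) {
                int n = in.read(buf); // blocks; throws IOException if abort() closes the socket
                if (n == -1) break;
                // ... handle the data ...
            }
        } catch (IOException e) {
            // closed from abort() or a real network error: fall through and exit
        }
    }

    // called from the "terminate" button handler
    void abort(Thread conThread) {
        conThread.interrupt();          // wakes interruptible waits
        Socket s = socket;
        if (s != null) {
            try { s.close(); } catch (IOException ignored) { } // unblocks a stuck read
        }
    }
}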
Why don't you interrupt the thread and just move on, letting it hang until it finishes? The user could initiate a different action (thread) while the old thread finishes gracefully (which, from what I see you are pretty much doing already btw)
The downside of this is that you get into trouble when the user starts clicking "connectToServer" a lot (many threads), or when threads fail to terminate (hung threads). But maybe it's sufficient for your purpose?
Edit:
It would be simple to implement a mechanism that prevents spawning a new conThread unless "it's good to go" (e.g., use a semaphore).
The tricky part will be deciding whether it's good to open a new connection. You could ask the original thread (i.e., an isAlive() check), or the party you are trying to connect to. Or you could go for a timeout solution. For example, you could let conThread update a timestamp and decide it's dead if the timestamp isn't updated for 1 minute, etc. The most generally applicable solution would probably be the timeout one.
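A small sketch of that "good to go" gate using an AtomicBoolean (a Semaphore with a single permit would work the same way); the class and method names are invented for illustration:

import java.util.concurrent.atomic.AtomicBoolean;

class SingleConnectionGate {
    private final AtomicBoolean connecting = new AtomicBoolean(false);

    void onConnectClicked() {
        // only one connection thread may be active at a time
        if (!connecting.compareAndSet(false, true)) {
            System.out.println("A connection attempt is already running");
            return;
        }
        new Thread(() -> {
            try {
                doComplexConnect(); // the long-running work
            } finally {
                connecting.set(false); // release the gate even on failure
            }
        }).start();
    }

    private void doComplexConnect() { /* placeholder */ }
}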
Suppose you need to deal with 2 threads, a Reader and a Processor.
Reader will read a portion of the stream data and will pass it to the Processor, that will do something.
The idea is to not stress the Reader with too much of data.
In the setup, I have:
// Processor will pick up data from pipeIn and will place the output in pipeOut
Thread p = new Thread(new Processor(pipeIn, pipeOut));
p.start();
// Reader will pick a bunch of bits from the InputStream and place it to pipeIn
Thread r = new Thread(new Reader(inputStream, pipeIn));
r.start();
Needless to say, neither pipe is null when initialized.
I am thinking ... When Processor has been started it attempts to read from the pipeIn, in the following loop:
while (readingShouldContinue) {
    Thread.sleep(1); // to avoid a tight loop
    byte[] justRead = readFrom.getDataCurrentlyInQueue();
    writeDataToPipe(processData(justRead));
}
If there is no data to write, it will write nothing, should be no problem.
The Reader comes alive and picks up some data from a stream:
while ((in.read(buffer)) != -1) {
    // writes to what the Processor considers pipeIn
    writeTo.addDataToQueue(buffer);
}
In the Pipe itself, I synchronize access to the data.
public byte[] getDataCurrentlyInQueue() {
    synchronized (q) {
        byte[] a = q.peek();
        q.clear();
        return a;
    }
}
I expect the two threads to run more or less in parallel, interleaving the Reader and Processor activities. What happens, however, is that:
- the Reader reads all blocks up front
- the Processor treats everything as one single block
What am I missing, please?
What am I missing please?
(First I should point out that you've left out some critical bits of the code and other information that is needed for a specific fact-based answer.)
I can think of a number of possible explanations:
- There may simply be a bug in your application. There's not a lot of point guessing what that bug might be, but if you showed us more code ...
- The OS thread scheduler will tend to let an active thread keep running until it blocks. If your processor has only one core (or if the OS only allows your application to use one core), then the second thread may starve ... long enough for the first one to finish.
- Even if you have multiple cores, the OS thread scheduler may be slow to assign extra cores, especially if the 2nd thread starts and then immediately blocks.
- It is possible that there is some "granularity" effect in the buffering that is causing work not to appear in the queue. (You could view this as a bug ... or as a tuning issue.)
- It could simply be that you are not giving the application enough load for multi-threading to kick in.
Finally, I can't figure out the Thread.sleep stuff either. A properly written multi-threaded application does not use Thread.sleep for anything but long term delays; e.g. threads that do periodic house-keeping tasks in the background. If you use sleep instead of blocking, then 1) you risk making the application non-responsive, and 2) you may encourage the OS thread scheduler to give the thread fewer time slices. It could well be that this is the source of your trouble vis-a-vis thread starvation.
You have reinvented parts of the java.util.concurrent library. It would make things a lot easier if you modeled your threads with a BlockingQueue instead of synchronizing things yourself.
Basically, your producer would put chunks on the BlockingQueue and your consumer would loop over the queue in a while(true) and call take(). That way the consumer blocks/waits until there is a new chunk on the queue (and the producer blocks when the queue is full).
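A minimal sketch of that reader/processor pair built on an ArrayBlockingQueue; the buffer size, queue capacity, stand-in input stream and end-of-stream marker are arbitrary choices for illustration:

import java.io.*;
import java.util.Arrays;
import java.util.concurrent.*;

class ReaderProcessor {
    private static final byte[] EOF = new byte[0]; // empty chunk marks end of stream

    public static void main(String[] args) throws Exception {
        BlockingQueue<byte[]> queue = new ArrayBlockingQueue<>(16);
        InputStream in = new ByteArrayInputStream(new byte[1_000_000]); // stand-in source

        Thread reader = new Thread(() -> {
            try {
                byte[] buffer = new byte[8192];
                int n;
                while ((n = in.read(buffer)) != -1) {
                    queue.put(Arrays.copyOf(buffer, n)); // blocks when the queue is full
                }
                queue.put(EOF);
            } catch (IOException | InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        reader.start();

        long total = 0;
        byte[] chunk;
        while ((chunk = queue.take()).length > 0) { // blocks until a chunk arrives
            total += chunk.length;                   // real code would process the chunk here
        }
        System.out.println("processed " + total + " bytes");
        reader.join();
    }
}

put() blocks when the queue is full and take() blocks when it is empty, which gives you the flow control for free and removes the sleep(1) polling.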
The reader is reading everything before its first time-slice. This means that the reading is finishing before the processor ever gets a chance to run.
Try increasing the amount of bytes that are being read, or slow down the reader somehow; maybe with a sleep() call every once in a while.
Btw. Don't poll. It is a horrendous waste of CPU cycles, and it doesn't scale at all.
Also use a synchronized queue and forget the manual locking. http://docs.oracle.com/javase/tutorial/collections/implementations/queue.html
When using multiple threads you need to determine whether you:
- have work which can be performed in parallel efficiently,
- are not adding more overhead than the improvement you are likely to achieve,
- are not duplicating something the OS, or some library, is already optimised to do.
In your case, you have a good example of when not to use multiple threads. The OS is already tuned to read ahead and buffer data before you ask for it. The work the Reader does is relatively trivial. The overhead of creating new buffers, adding them to a queue and passing the data between threads is likely to be greater than the amount of work you are performing in parallel.
When you try to use multiple threads to do a task best done by a single thread, you will get strange profiling/tuning results.
+1 For a good question.