Java multithreading, getting threads to work in parallel

Suppose you need to deal with 2 threads, a Reader and a Processor.
Reader will read a portion of the stream data and pass it to the Processor, which will do something with it.
The idea is not to stress the Reader with too much data.
In the setup, I start the two threads:
// Processor will pick up data from pipeIn and will place the output in pipeOut
Thread p = new Thread(new Processor(pipeIn, pipeOut));
p.start();
// Reader will pick a bunch of bits from the InputStream and place it to pipeIn
Thread r = new Thread(new Reader(inputStream, pipeIn));
r.start();
Needless to say, neither pipe is null once initialized.
I am thinking: when the Processor has been started, it attempts to read from pipeIn in the following loop:
while (readingShouldContinue) {
    Thread.sleep(1); // to avoid a tight loop
    byte[] justRead = readFrom.getDataCurrentlyInQueue();
    writeDataToPipe(processData(justRead));
}
If there is no data to write, it writes nothing, which should be no problem.
The Reader comes alive and picks up some data from a stream:
while ((in.read(buffer)) != -1) {
    // writes to what the Processor considers pipeIn
    writeTo.addDataToQueue(buffer);
}
In the Pipe itself, I synchronize access to the data.
public byte[] getDataCurrentlyInQueue() {
    synchronized (q) {
        byte[] a = q.peek();
        q.clear();
        return a;
    }
}
I expect the 2 threads to run roughly in parallel, interleaving activity between the Reader and the Processor. What happens, however, is that
Reader reads all blocks up front
Processor treats everything as 1 single block
What am I missing, please?

What am I missing, please?
(First I should point out that you've left out some critical bits of the code and other information that is needed for a specific fact-based answer.)
I can think of a number of possible explanations:
There may simply be a bug in your application. There's not a lot of point guessing what that bug might be, but if you showed us more code ...
The OS thread scheduler will tend to let an active thread keep running until it blocks. If your processor has only one core (or if the OS only allows your application to use one core), then the second thread may starve ... long enough for the first one to finish.
Even if you have multiple cores, the OS thread scheduler may be slow to assign extra cores, especially if the 2nd thread starts and then immediately blocks.
It is possible that there is some "granularity" effect in the buffering that is causing work not to appear in the queue. (You could view this as a bug ... or as a tuning issue.)
It could simply be that you are not giving the application enough load for multi-threading to kick in.
Finally, I can't figure out the Thread.sleep stuff either. A properly written multi-threaded application does not use Thread.sleep for anything but long term delays; e.g. threads that do periodic house-keeping tasks in the background. If you use sleep instead of blocking, then 1) you risk making the application non-responsive, and 2) you may encourage the OS thread scheduler to give the thread fewer time slices. It could well be that this is the source of your trouble vis-a-vis thread starvation.

You reinvented parts of the java.util.concurrent library. It would make things a lot easier if you modeled your threads with a BlockingQueue instead of synchronizing things yourself.
Basically, your producer would put chunks on the BlockingQueue and your consumer would while(true) loop over the queue and call take(). That way the consumer would block/wait until there is a new chunk on the queue.
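A rough sketch of what that could look like for the Reader/Processor pair above (the class name, buffer size and queue capacity are made up; a zero-length array is used here as an end-of-stream marker):
import java.io.IOException;
import java.io.InputStream;
import java.util.Arrays;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class PipeSketch {

    private static final byte[] EOF = new byte[0];   // "poison pill" end-of-stream marker

    public static void run(InputStream in) {
        // Bounded queue: its capacity throttles the Reader so it cannot
        // run arbitrarily far ahead of the Processor.
        BlockingQueue<byte[]> pipe = new ArrayBlockingQueue<>(16);

        Thread reader = new Thread(() -> {
            byte[] buffer = new byte[8192];
            try {
                int n;
                while ((n = in.read(buffer)) != -1) {
                    // copy, because the buffer is reused on the next read
                    pipe.put(Arrays.copyOf(buffer, n));   // blocks while the queue is full
                }
            } catch (IOException | InterruptedException e) {
                e.printStackTrace();
            } finally {
                try {
                    pipe.put(EOF);                        // tell the Processor we are done
                } catch (InterruptedException ignored) {
                }
            }
        });

        Thread processor = new Thread(() -> {
            try {
                while (true) {
                    byte[] chunk = pipe.take();           // blocks until data arrives
                    if (chunk == EOF) {
                        break;
                    }
                    process(chunk);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        reader.start();
        processor.start();
    }

    private static void process(byte[] chunk) {
        // placeholder for the real processing step
    }
}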

The reader is reading everything within its first time-slice. This means that the reading finishes before the processor ever gets a chance to run.
Try increasing the amount of data being read, or slow down the reader somehow, maybe with a sleep() call every once in a while.
Btw. Don't poll. It is a horrendous waste of CPU cycles, and it doesn't scale at all.
Also use a synchronized queue and forget the manual locking. http://docs.oracle.com/javase/tutorial/collections/implementations/queue.html

When using multiple threads you need to determine whether:
you have work which can be performed in parallel efficiently;
you are not adding more overhead than the improvement you are likely to achieve;
the OS, or some library, is not already optimised to do what you are trying to do.
In your case, you have a good example of when not to use multi-threads. The OS is already tuned to read ahead and buffer data before you ask for it. The work the Reader does is relatively trivial. The overhead of creating new buffers, adding them to a queue and passing the data between threads is likely to be greater than the amount of work you are performing in parallel.
When you try to use multiple threads to do a task best done by a single thread, you will get strange profiling/tuning results.
+1 For a good question.

Related

Poor Multi-threading performance compared to Multi-processing in Java

Assume that we have several million long lines of text that must be parsed.
On my i7 2600 CPU it takes about 13 milliseconds to parse every 1000 lines.
Therefore, parsing 1,000,000 lines takes around 13 seconds.
To decrease execution time, I have managed using multiple threads.
Using a blocking queue, I push 1,000,000 lines as a set of 1000 chunks, each containing 1000 lines, and consume the chunks using 8 threads. The code is simple and seems to be working; however, the performance is not encouraging and it takes around 11 seconds.
Here is the main fraction of multi-threaded code:
for (int i = 0; i < threadCount; i++)
{
    Runnable r = new Runnable() {
        public void run() {
            try {
                while (true) {
                    InputType chunk = inputQ.poll(10, TimeUnit.MILLISECONDS);
                    if (chunk == null) {
                        if (inputRemains.get())
                            continue;
                        else
                            return;
                    }
                    processItem(chunk);
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    };
    Thread t = new Thread(r);
    t.start();
    threadList.add(t);
}

for (Thread t : threadList)
    t.join();
I have used ExecutorService too but the performance is worse!
Changing the chunk size does not help either, and the performance does not improve.
It means that the blocking queue is not a bottleneck.
On the other hand, when I run 4 instances of the serial program concurrently, it takes just 15 seconds for all 4 instances to finish. This means that I can process 4,000,000 lines using 4 processes in 15 seconds, and hence the speed-up is around 3.4, which is very promising compared to the 1.2 speed-up of multi-threading.
I am wondering if anyone has any idea about this?
The problem is very straightforward: a set of lines in a blocking queue, and several threads that poll items from the queue and process them in parallel. The queue is filled initially, so the threads are fully busy.
I have had similar experiences before, but I cannot figure out why multi-processing is better.
I should also mention that I ran the test on Windows 7 using a 1.7 JRE.
Any idea is welcomed, and thanks beforehand.
Edit:
So I initially thought that your timing was around your entire program. If you are just timing the processing of the lines after they have been read into memory, then it may be that your processItem(chunk) method is either doing IO of its own, or it is writing information into a synchronized object or other shared variable that is stopping it from being able to fully run concurrently.
I am wondering if anyone has any idea about this?
Your problem may be that you are IO bound and not CPU bound. The only way you will get a large speed improvement by adding more threads is if you are doing more CPU processing than you are doing reading from (or writing to) disk. Once you have maxed out the IO capabilities of your disk subsystem, there is not much that you can do to improve the speed of the processing. As you have demonstrated, adding more threads can actually slow down an IO-bound program.
I'd add a single extra thread (i.e. 2 processing threads) to see if that helps. If all you are getting is a 2-second speed improvement, then you are going to have to divide the file up over multiple drives, or move it to a memory drive if this is a repeated task, to be able to read it faster.
I have used ExecutorService too but the performance is worse!
This might happen because you are using too many threads or maybe processing too few lines per iteration/chunk.
On the other hand, when I run 4 instances of the serial program concurrently, it takes just 15 seconds for all 4 instances to finish
I suspect this is because each of them can use each other's disk cache from the OS. When the first application reads block #1, the other 3 applications don't have to. Try copying the file 4 times and try 4 serial applications running at the same time each on their own file. You should see the difference.
I would blame the parallelisation of your code. If items are available to process, then several threads will be competing for the same resource (the queue). Contention for synchronisation locks is a bit of a performance killer. If items are being processed faster than they are being added to the queue, then the threads that are being starved are pretty much just busy loops, e.g. while (true) {}. This is because your poll time is very short, and when the polling fails you simply try again immediately.
A little note on synchronisation. To begin with, the JVM uses busy loops to wait for a resource to become available, as (in general) code is written to release synchronisation locks as quickly as possible and the alternative (doing a context switch) is quite expensive. Eventually, if the JVM finds it is spending most of its time waiting for synchronisation locks, it will default to switching out to a different thread if it cannot acquire a lock.
A better solution is to have one thread reading in the data and dispatching a new thread whenever there is both an available slot for a thread and data for a new thread. Here Executor would be useful as it can keep track of which threads have finished and which are still busy. But the pseudo-code would look something like:
int charsRead;
char[] buffer = new char[BUF_SIZE];
int startIndex = 0;
while ((charsRead = inputStreamReader.read(buffer, startIndex,
                                           buffer.length - startIndex)) != -1) {
    // find the last newline so we don't give a thread any partial lines
    int lastNewLine = findFirstNewLineBeforeIndex(buffer, startIndex + charsRead);
    waitForAvailableThread(); // if not at max threads, this should return immediately
    Thread t = new Thread(createRunnable(buffer, lastNewLine));
    t.start();
    addRunningThread(t);
    // copy any overshoot to the start of a new buffer; use a new buffer
    // because another thread is now reading from the previous one
    char[] newBuffer = new char[BUF_SIZE];
    int overshoot = startIndex + charsRead - lastNewLine - 1;
    System.arraycopy(buffer, lastNewLine + 1, newBuffer, 0, overshoot);
    buffer = newBuffer;
    startIndex = overshoot;
}
waitForRemainingThreadsToTerminate();
it takes about 13 milliseconds to parse every 1000 lines.
Therefore, parsing 1,000,000 lines takes around 13 seconds.
The JVM doesn't warm up a piece of code until it has executed it around 10,000 times, after which it can be 10-100x faster, so it could be 13 seconds or it could be 130 ms or less.
Using a blocking queue, I push 1,000,000 lines as a set of 1000 chunks, each containing 1000 lines, and consume the chunks using 8 threads. The code is simple and seems to be working; however, the performance is not encouraging and it takes around 11 seconds.
I suggest you retest with one thread; you are likely to find it takes less than 11 seconds.
The bottleneck is the time it takes to parse the text into a line and create the String object; the rest is just overhead which doesn't address the true bottleneck.
If you read different files, one per CPU, you can get close to linear speed-up. The problem with reading lines from one file is that you have to read them one after the other, so you get little benefit from concurrency.
The i7 2600 uses HT (Hyper-Threading) to provide 8 threads, and parsing is mainly memory work, so there is little benefit from HT.

Spawning tons of threads without running out of memory

I have a multi-threaded application which creates hundreds of threads on the fly. When the JVM has less memory available than is necessary to create the next Thread, it's unable to create more threads. Every thread lives for 1-3 minutes. Is there a way that, if I create a thread and don't start it, the application can be made to start it automatically when it has the resources, and otherwise wait until existing threads die?
You're responsible for checking your available memory before allocating more resources, if you're running close to your limit. One way to do this is to use the MemoryUsage class, or use one of:
Runtime.getRuntime().totalMemory()
Runtime.getRuntime().freeMemory()
...to see how much memory is available. To figure out how much is used, of course, you just subtract free from total. Then, in your app, simply set a MAX_MEMORY_USAGE value; when your app has used that amount of memory or more, it stops creating more threads until the amount of used memory has dropped back below the threshold. This way you're always running with the maximum number of threads, and not exceeding the memory available.
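For reference, a rough sketch of that check; MAX_MEMORY_USAGE here stands for whatever threshold you pick, it is not an existing constant:
Runtime rt = Runtime.getRuntime();
// used = total - free: the part of the currently allocated heap that is in use
long usedBytes = rt.totalMemory() - rt.freeMemory();
boolean roomForAnotherThread = usedBytes < MAX_MEMORY_USAGE;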
Finally, instead of trying to create threads without starting them (because once you've created the Thread object, you're already taking up the memory), simply do one of the following:
Keep a queue of things that need to be done, and create a new thread for those things as memory becomes available
Use a "thread pool", let's say a max of 128 threads, as all your "workers". When a worker thread is done with a job, it simply checks the pending work queue to see if anything is waiting to be done, and if so, it removes that job from the queue and starts work.
I ran into a similar issue recently and I used the NotifyingBlockingThreadPoolExecutor solution described at this site:
http://today.java.net/pub/a/today/2008/10/23/creating-a-notifying-blocking-thread-pool-executor.html
The basic idea is that this NotifyingBlockingThreadPoolExecutor will execute tasks in parallel like the ThreadPoolExecutor, but if you try to add a task and there are no threads available, it will wait. It allowed me to keep the code with the simple "create all the tasks I need as soon as I need them" approach while avoiding the huge overhead of having all the waiting tasks instantiated at once.
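If you would rather stay inside the JDK, a similar throttling effect can be had from a plain ThreadPoolExecutor with a bounded queue and CallerRunsPolicy. This is not the same class as the NotifyingBlockingThreadPoolExecutor above, just a rough standard-library substitute:
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ThrottledPoolSketch {
    // Bounded work queue plus CallerRunsPolicy: when the queue is full, submit()
    // runs the task on the calling thread, which naturally slows the producer down
    // instead of letting pending tasks pile up in memory.
    static ThreadPoolExecutor newThrottledPool(int workers, int queueCapacity) {
        return new ThreadPoolExecutor(
                workers, workers,
                0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(queueCapacity),
                new ThreadPoolExecutor.CallerRunsPolicy());
    }
}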
It's unclear from your question, but if you're using straight threads instead of Executors and Runnables, you should be learning about java.util.concurrent package and using that instead: http://docs.oracle.com/javase/tutorial/essential/concurrency/executors.html
Just write code to do exactly what you want. Your question describes a recipe for a solution, just implement that recipe. Also, you should give serious thought to re-architecting. You only need a thread for things you want to do concurrently and you can't usefully do hundreds of things concurrently.
This is an alternative, lower-level solution than the above-mentioned NotifyingBlocking executor; it is probably not as ideal, but it will be simple to implement.
If you want a lot of threads on standby, then you ultimately need a mechanism for them to know when it's okay to "come to life". This sounds like a case for semaphores.
Make sure that each thread allocates no unnecessary memory before it starts working. Then implement as follows:
1) Create n threads on startup of the application, stored in a queue. You can base this n on the result of Runtime.getRuntime().freeMemory(), rather than hard-coding it.
2) Also, create a semaphore with n-k permits. Again, base this on the amount of memory available.
3) Now, have each of the threads periodically check if the semaphore has permits, calling Thread.sleep(...) in between checks, for example.
4) If a thread notices a permit, then it acquires the permit, updating the semaphore, and starts working.
If this satisfies your needs, you can go on to manage your threads using a more sophisticated polling or wait/lock mechanism later.
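A compact sketch of the semaphore idea (the numbers are arbitrary, and a blocking acquire() is used here instead of sleep-and-poll, which is the simpler variant):
import java.util.concurrent.Semaphore;

public class SemaphoreGateSketch {
    public static void main(String[] args) {
        int totalThreads = 32;          // illustrative; the answer suggests deriving
        int permits = 8;                // these from the available memory
        Semaphore gate = new Semaphore(permits);

        for (int i = 0; i < totalThreads; i++) {
            final int id = i;
            new Thread(() -> {
                try {
                    gate.acquire();     // blocks until one of the permits is free
                    try {
                        doWork(id);     // at most 'permits' threads are in here at once
                    } finally {
                        gate.release();
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }).start();
        }
    }

    private static void doWork(int id) {
        System.out.println("working: " + id);
    }
}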

Multithreading help in Java

I'm new to Java, and I need some help working on this program. This is a small part of a large class project, and I must use multithreading.
Here's what I want to do algorithmically:
while (there is still input left, store chunk of input in <chunk>)
{
    if there is not a free thread in my array then
        wait until a thread finishes
    else there is a free thread then
        apply the free thread to <chunk> (which will do something to chunk and output it).
        Note: The ordering of the chunks being output must be the same as input
}
So, the main things I don't know how to do:
How can I check whether or not there's a free thread in the array? I know that there is the method Thread.isAlive(), but it seems super inefficient to poll every single thread every time in my loop.
If there is no free thread, how can I wait until one has finished?
The ordering is important. How can I preserve the ordering in which the threads output? As in, the order of the output needs to match the order of the input. How can I guarantee this synchronization?
How do I even pass the chunk to my thread? Can I just use the Runnable interface to do this?
Any help with these four bullets is greatly appreciated. Since I'm a super noob, code samples would help significantly.
(side-note: Making an array of threads was just an idea of mine to handle the user defined number of threads. If you have a better way to handle this you're welcome to suggest it!)
Sounds like you basically have a producer/consumer model, which can be solved with an ExecutorService and a BlockingQueue. Here is a similar question with a similar answer:
producer/consumer work queues
As #altaiojok mentioned, you want to use an ExecutorService and BlockingQueue. The basic algorithm works like this:
ExecutorService executor = Executors.newFixedThreadPool(...); // or newCachedThreadPool, etc...
BlockingQueue<Future<?>> outputQueue = new LinkedBlockingQueue<Future<?>>();

// To be run by an input processing thread
void submitTasks() throws IOException {
    BufferedReader input = ... // going to assume you have a file that you want to read;
                               // this could be any method of input (file, keyboard, network, etc...)
    String line = input.readLine();
    while (line != null) {
        outputQueue.add(executor.submit(new YourCallableImplementation(line)));
        line = input.readLine();
    }
}

// To be run by a different output processing thread
void processTaskOutput() {
    try {
        while (true) {
            Future<?> resultFuture = outputQueue.take();
            Object result = resultFuture.get(); // the actual type depends on your Callable
            // process the output (write to file, send to network, print to screen, etc...)
        }
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    } catch (ExecutionException e) {
        e.printStackTrace(); // the task itself threw an exception
    }
}
I'll leave it to you to figure out how to implement Runnable to make the input and output thread as well as how to implement Callable for the tasks you need to process.
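For example, a hypothetical Callable for the line-processing tasks might look like this (the class name and the transformation are placeholders):
import java.util.concurrent.Callable;

// Placeholder task: whatever "do something to chunk" means in your project goes
// inside call(). The return value is what the output thread gets from resultFuture.get().
public class LineTask implements Callable<String> {
    private final String line;

    public LineTask(String line) {
        this.line = line;
    }

    @Override
    public String call() {
        return line.toUpperCase(); // replace with the real processing
    }
}
Submitting new LineTask(line) instead of new YourCallableImplementation(line) then gives you a Future<String> to put on the output queue.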
I would suggest using commons-pool, which offers pooling of threads, so you can easily limit the number of threads used; it also offers some other helper methods.
Concerning the ordering: have a look at the synchronized keyword.
And I would suggest having a look at the Java tutorial (the part about concurrency): http://download.oracle.com/javase/tutorial/essential/concurrency/index.html
Streams might come in handy:
List<Chunk> chunks = new ArrayList<>();
//....
Function<Chunk, String> toWeightInfo = (chunk) -> "weight = " + (chunk.size() * chunk.prio());
List<String> results = chunks.parallelStream()
                             .map(toWeightInfo)
                             .collect(Collectors.toList());
System.out.println(results);
The parallel stream uses the system's default "fork/join" thread pool, which should be sized to the number of available logical CPUs, and it processes your stuff in parallel. It also guarantees the same order of results.
The parallel streams API hides all the complexity of assigning free threads to jobs and optimizations like work-stealing away from you. Just give it something to chew on and it will work its magic.
If you need to use a thread pool of a custom size, please refer to the
Custom thread pool in Java 8 parallel stream question.
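For completeness, the widely used (though not officially specified) trick from that question is to run the terminal operation inside your own ForkJoinPool; the stream then uses that pool's workers instead of the shared common pool. Reusing chunks and toWeightInfo from the snippet above:
import java.util.List;
import java.util.concurrent.ForkJoinPool;
import java.util.stream.Collectors;

ForkJoinPool customPool = new ForkJoinPool(4);        // 4 worker threads
List<String> results = customPool.submit(() ->
        chunks.parallelStream()
              .map(toWeightInfo)
              .collect(Collectors.toList()))
        .join();                                      // join() rethrows failures unchecked
customPool.shutdown();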
You might also have a look at this good Java 8 Stream Tutorial.
If your case is rather complex and you're streaming chunks into your program, and you've got multiple stages of work, where some must be serial and some can be parallel and some depend on each other, you might have a look at the Disruptor framework from LMAX.
Use ExecutorCompletionService and Future<T>. Together they provide a threadpool based task framework that takes care of all your concerns.
How can I check whether or not there's a free thread in the array? I know that there is the method Thread.isAlive(), but it seems super inefficient to poll every single thread every time in my loop.
You don't have to. The executor will do this for you in a (super)efficient manner. You just have to submit tasks to it and sit back.
If there is no free thread, how can I wait until one has finished?
Again, you really don't have to. This is taken care of by the executor.
The ordering is important. How can I preserve the ordering in which the threads output? As in, the order of the output needs to match the order of the input. How can I guarantee this synchronization?
This is a concern. If you want the processed output (of chunks, in your words) to arrive in the same order as the chunks are present in the initial input, you have to address a few points:
Is it just the order of arrival of the results that matters, or do the processing tasks themselves have dependencies on the order? If it is the former, it is easily done, but if it is the latter, then you have problems (which I think are very hard to begin with; considering your admission of being new to Java, I would recommend more learning on your part before attempting this).
Assuming it is the former case, what you can do is this: submit the chunks to the executor in some order, and each submission will give you a handle (called a Future<Result>) to the task's processed output. Store these handles in an ordered queue, and when you want the results, call get() on these Futures. Note that if some task in the middle of the order takes a long time to complete, then the results of the following tasks will also be delayed.
How do I even pass the chunk to my thread? Can I just use the Runnable interface to do this?
Create a Callable instance wrapping one chunk each. This represents the task that you will submit() to the ExecutorService.
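A minimal sketch of the "ordered queue of Futures" idea described above (the input data and the process step are stand-ins):
import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.List;
import java.util.Queue;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class OrderedResultsSketch {
    public static void main(String[] args) throws InterruptedException, ExecutionException {
        List<String> chunks = Arrays.asList("a", "b", "c");   // stand-in for your input chunks
        ExecutorService executor = Executors.newFixedThreadPool(4);

        // Submit in input order and remember the Futures in that same order.
        Queue<Future<String>> pending = new ArrayDeque<>();
        for (String chunk : chunks) {
            pending.add(executor.submit(() -> process(chunk)));
        }

        // Drain in submission order: get() blocks until that particular task is done,
        // so the output order matches the input order regardless of which thread ran what.
        while (!pending.isEmpty()) {
            System.out.println(pending.poll().get());
        }
        executor.shutdown();
    }

    private static String process(String chunk) {
        return chunk.toUpperCase();   // placeholder for the real work
    }
}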

Java - how to interrupt a busy thread

I am trying to interrupt a thread that is running AES encryption on a file. That can take a while, so far I have come up with this.
This body is inside a button's action event handler. When the user clicks the button again (the else clause), the thread should be interrupted. I would be happier if I could stop the thread completely, but that is deprecated.
Anyway, the thread ignores the .interrupt() and continues to execute aes256File. It does raise the fileEncryptThread.isInterrupted() flag, but from CPU usage I can see it still continues to crunch the file.
I have read the guide on safe stopping of threads, but I have no desire to completely redesign my already slow AES implementation to check for out-of-class interrupt flags...
fileEncryptThread = new FileThread() // new thread please
{
    @Override
    public void run()
    {
        String result = "";
        result = MyCrypto.aes256File(enInPath,
                                     enOutPath,
                                     charsToString(passT.getPassword()),
                                     sec);
        if (!"".equals(result)) // error handling
        {
            JOptionPane.showMessageDialog(null, result);
        }
    }
};
fileEncryptThread.start();
}
else // if stop clicked
{
    fileEncryptThread.interrupt();
In order to effectively interrupt a thread, that thread has to be written in an interruptible way. That is, check the
Thread.currentThread().isInterrupted()
boolean and act thereupon.
In your app, you should verify that
result = MyCrypto.aes256File(enInPath,
enOutPath,
charsToString(passT.getPassword()),
sec);
acts in such a manner (if it's a 3rd-party library, it should be javadoc'ed). If it's not interruptible, you'd choose another implementation for encryption.
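For illustration only (the real internals of aes256File are not shown in the question), an interruptible version of such a method would contain a loop along these lines, where in, out and encryptBlock are hypothetical names:
byte[] buffer = new byte[8192];
int n;
while ((n = in.read(buffer)) != -1) {
    if (Thread.currentThread().isInterrupted()) {
        // stop early: clean up any partial output, then report cancellation
        return "Encryption cancelled";
    }
    out.write(encryptBlock(buffer, n));   // hypothetical per-chunk encryption step
}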
AFAIK, the only safe way for a thread to terminate is to return from the "main" method of the thread (usually run() in Runnable or Thread). You could, for example, use a while(<some class member boolean>) loop inside your MyCrypto.aes256File method and set the boolean to false so that the thread falls out of the loop and exits, returning a value indicating that the process was not completed.
One other approach that may be possible (I don't know the AES algorithm well enough) would be to split the file reading from the encryption. A read thread would fill large-ish buffers from the file and queue them to the encryption thread. The encryption thread would process the buffers and queue the 'used' ones back to the reader thread. This allows easy stopping of both threads while also probably improving performance, especially on multi-core machines, by moving I/O waits out of the encrypter. The encryption thread would probably never have to wait for a disk read; a temporary wait by the reader thread for a disk head move would not matter as long as the encryption thread had enough buffers to work on in the queue (even on a single-core machine). The fixed number of buffers and the two (blocking, thread-safe) queues provide flow control should the reader thread get ahead of the encrypter.
The actual stopping mechanism then becomes somewhat trivial. The gain in avoiding disk latency would overwhelm the time wasted checking a flag occasionally, eg. just before going to the queue for the next buffer.
Queueing buffers then also allows the possibility of adding sequence numbers to the buffers and so allowing all cores to work on the encryption.

Java threads query

I'm working on a Java application that involves threads, so I just wrote a piece of code to familiarize myself with the execution of multiple, concurrent threads:
public class thready implements Runnable {
    private int num;

    public thready(int a) {
        this.num = a;
    }

    public void run() {
        System.out.println("This is thread num" + num);
        for (int i = num; i < 100; i++) {
            System.out.println(i);
        }
    }

    public static void main(String[] args) {
        Runnable runnable = new thready(1);
        Runnable run = new thready(2);
        Thread t1 = new Thread(runnable);
        Thread t2 = new Thread(run);
        t1.start();
        t2.start();
    }
}
Now, from the output of this code, I think that at any point in time only one thread is executing, and the execution seems to alternate between the threads. I would like to know if my understanding of the situation is correct. And if it is, I would like to know if there is any way in which I could get both threads to execute simultaneously, as I wish to incorporate this scenario in a situation where I want to write a TCP/IP socket listener that listens on 2 ports at the same time. And such a scenario can't have any downtime.
Any suggestions/advice would be of great help.
Cheers
How many processors does your machine have? If you have multiple cores, then both threads should be running at the same time. However, console output may well be buffered and will require locking internally - that's likely to be the effect you're seeing.
The easiest way to test this is to make the threads do some real work, and time them. First run the two tasks sequentially, then run them in parallel on two different threads. If the two tasks don't interact with each other at all (including "hidden" interactions like the console) then you should see a roughly 2x performance improvement using two threads - if you have two cores or more.
As Thilo said though, this may well not be relevant for your real scenario anyway. Even a single-threaded system can still listen on two sockets, although it's easier to have one thread responsible for each socket. In most situations where you're listening on sockets, you'll spend a lot of the time waiting for more data anyway - in which case it doesn't matter whether you've got more than one core or not.
EDIT: As you're running on a machine with a single core (and assuming no hyperthreading) you will only get one thread executing at a time, pretty much by definition. The scheduler will make sure that both threads get CPU time, but they'll basically have to take turns.
If you have more than one CPU, both threads can run simultaneously. Even if you have only one CPU, as soon as one of the threads waits for I/O, the other can use the CPU. The JVM will most likely also try to dice out CPU time slices fairly. So for all practical purposes (unless all they do is use the CPU), your threads will run simultaneously (as in: within a given second, each of them had access to the CPU).
So even with a single CPU, you can have two threads listening on a TCP/IP socket each.
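A small sketch of that, with one listener thread per port (the port numbers and the handler are made up):
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public class TwoPortListener {
    public static void main(String[] args) {
        // One listener thread per port; even on a single core both ports are
        // served, because each thread spends most of its time blocked in accept().
        startListener(8080);
        startListener(8081);
    }

    private static void startListener(int port) {
        new Thread(() -> {
            try (ServerSocket server = new ServerSocket(port)) {
                while (true) {
                    Socket client = server.accept();   // blocks until a connection arrives
                    handle(client);                    // in a real server, hand off to a worker
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
        }, "listener-" + port).start();
    }

    private static void handle(Socket client) throws IOException {
        client.close();   // placeholder: real handling would read/write here
    }
}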
Make the threads sleep in between the println statements. What you have executes too fast for you to see the effect.
Threads are just a method of virtualizing the CPU so that it can be used by several applications/threads simultaneously. But as the CPU can only execute one program at a time, the Operating System switches between the different threads/processes very fast.
If you have a CPU with just one core (leaving aside hyperthreading) then your observation, that only one thread is executing at a time, is completely correct. And it's not possible in any other way, you're not doing anything wrong.
If the threads each take less than a single CPU quantum, they will appear to run sequentially. Printing 100 numbers in a row is probably not intensive enough to use up an entire quantum, so you're probably seeing sequential running of threads.
As well, like others have suggested, you probably have two CPU, or a hyperthreaded CPU at least. The last pure single core systems were produced around a decade ago, so it's unlikely that your threads aren't running side-by-side.
Try increasing the amount of processing that you do, and you might see the output intermingle. Be aware that when you do, System.out.println is NOT threadsafe, as far as I know. You'll get one thread interrupting the output of another mid-line.
They do run simultaneously, they just can't use the outputstream at the same time.
Replace your run method with this:
public void run() {
for (int i=num;i<100;i++) {
try {
Thread.sleep(100);
System.out.println("Thread " + num + ": " + i);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
}
If you are getting many messages per second and processing each piece of data takes a few milliseconds to a few seconds, it is not a good idea to start one thread per message. Ultimately, the number of threads spawned is limited by the underlying OS. You may get an out-of-threads error or something like that.
Java 5 introduced the thread pool framework, where you can allocate a fixed number of threads and submit a job (an instance of Runnable). The framework will run the job in one of the available threads in the pool. It is more efficient, as there is not much context switching done. I wrote a blog entry to jump-start on this framework.
http://dudefrommangalore.blogspot.com/2010/01/concurrency-in-java.html
For the question on listening on 2 ports: clients have to send messages to one of them. But since both ports are opened to accept connections within a single JVM, if the JVM fails, having 2 ports does not provide you with high availability.
The usual pattern for writing a server which listens on a port is to have one thread listen on the port. As soon as data arrives, spawn another thread, hand over the content as well as the client socket to the newly spawned thread, and continue accepting new messages.
Another pattern is to have multiple threads listening on the same server socket. When a client connects, the connection is made to one of the threads.
Two ways this could go wrong:
System.out.println() may use a buffer; you should call flush() to get it to the screen.
There has to be some synchronisation built into the System.out object, or you couldn't use it in a multithreaded application without messing up the output, so it is likely that one thread holds a lock for most of the time, making the other thread wait. Try using System.out in one thread and System.err in the other.
Go and read up on multitasking and multiprogramming. http://en.wikipedia.org/wiki/Computer_multitasking
