I'm facing a problem. I have a Java client-server application that manages a restaurant, and I have to manage it using only keyboard input and console output.
My waiter class has two threads: one that reads orders from the input, and one that continuously listens (using multicast) for a ready dish that needs to be delivered to a table.
Since everything is meant to be done with the keyboard, my "ordering" thread writes "Do you wanna make an order? [y/n]" to standard output and waits for an answer, while the "delivery" thread listens for something to be delivered.
If the waiter chooses to order something, the second thread doesn't show anything until the order is finished (handled with a status boolean); if the waiter is free (meaning he is looking at the prompt "Do you wanna order?") and a ready dish arrives, he will see "There is something to be delivered. Wanna deliver it? [y/n]" on standard output, and the program waits for an answer.
My problem is that, whichever option he chooses, I have no control over which thread reads the answer: did he mean to deliver or to order?
I've tried several approaches, none of which worked:
- closing the ordering Scanner on standard input, but it can't be closed;
- pausing the first thread from the other one, but you can't do that in Java;
- synchronizing everything, which doesn't work because the threads are meant to run concurrently, not one at a time;
- using semaphores/status booleans, but then I'd have to rewrite the whole "ordering" part, including an infinite loop that polls those semaphores (I can't use acquire or release without blocking everything).
Any ideas/hints on how to solve the problem?
Have one thread, and only one thread, listening to UDP. It can store the results in a thread-safe collection for the keyboard thread to read or wait on.
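Here's a minimal sketch of that idea (class names, the multicast group, and the port are made up for illustration): one listener thread drains the multicast socket into a BlockingQueue, and the single keyboard thread is the only reader of System.in, checking the queue between prompts to decide which question to ask.

```java
import java.net.DatagramPacket;
import java.net.InetAddress;
import java.net.MulticastSocket;
import java.util.Scanner;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class WaiterConsole {
    private static final BlockingQueue<String> readyDishes = new LinkedBlockingQueue<>();

    public static void main(String[] args) throws Exception {
        Thread listener = new Thread(() -> {
            try (MulticastSocket socket = new MulticastSocket(4446)) {
                socket.joinGroup(InetAddress.getByName("230.0.0.1"));
                byte[] buf = new byte[256];
                while (true) {
                    DatagramPacket packet = new DatagramPacket(buf, buf.length);
                    socket.receive(packet);
                    // Never touch System.in/out here: just hand the event over.
                    readyDishes.put(new String(packet.getData(), 0, packet.getLength()));
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        });
        listener.setDaemon(true);
        listener.start();

        Scanner in = new Scanner(System.in);        // the ONLY reader of the keyboard
        while (true) {
            String dish = readyDishes.poll();       // non-blocking check for a ready dish
            if (dish != null) {
                System.out.println(dish + " is ready. Deliver it? [y/n]");
            } else {
                System.out.println("Do you want to make an order? [y/n]");
            }
            String answer = in.nextLine();          // no ambiguity about which question was answered
            // ... handle the answer for the question that was actually asked ...
        }
    }
}
```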
Related
Logic Flow A, Processing a New Batch order, Involving 3 threads
Logic Flow B, Cancelling a Batch Order, 1 thread
Scenario
We have a BatchOrder of goods that will need to be broken down into individual ItemOrders to be processed
At any point in time, a Cancel request for a BatchOrder may arrive
Implementation
I have implemented some code, but I feel the pseudocode laid out in the tables above is in fact more concise and hopefully clearer
Requirement
1. Cancellation should be accurate and complete for ItemOrders that have not entered the 'Filling' state.
2. Cancellation should be efficient -- it should stop any unnecessary creation and processing of ItemOrders whose BatchOrder is cancelled and which have not entered the 'Filling' state, as early as possible.
3. An ItemOrder that has entered the 'Filling' state cannot be cancelled by the system, only manually.
Questions
1. If Steps c_2, c_3, c_4 happen before pb_5, there will be outstanding 'orphan' ItemOrders that were meant to be cancelled. How can this be prevented, considering Requirements 1 and 2?
2. Since Step c_5 can happen at any time, is the mechanism I have set up in the pseudocode adequate to address Requirements 1 and 2?
3. In general, is there a generic design pattern we can abstract to handle this kind of situation? I think this is a pretty generic multithreading scenario applicable to many systems (almost every online shopping-cart system, for one). We can see it as a Producer/Consumer scenario with the addition of a 'ProducerCancelHandler', no?
[UPDATE]
Answers
For Question 1, one possible solution may be:
- the Cancel thread raises a volatile 'cancelSignalReceived' flag in the BatchOrder;
- the Batch process thread checks 'cancelSignalReceived' at the end of each loop iteration and raises another volatile flag, 'terminatedDueToCancelSignal' (without this second flag, ItemOrder[i] may not be completely set up yet, and hence not yet put into the BatchOrder by the Batch process thread, so the Cancel thread won't be able to find it; the Batch process thread would then finish its loop and set up this ItemOrder, turning it into an 'orphan' ItemOrder that was supposed to be cancelled but was never marked as cancelled);
- the Cancel thread spins and waits until 'terminatedDueToCancelSignal' is true, then proceeds with the cancellation actions.
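A compressed sketch of that handshake might look like this (class and field names are illustrative, not the original code; note the extra raise of the flag on normal completion so the Cancel thread never spins forever):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

class CancelHandshake {

    static class ItemOrder {
        volatile boolean cancelled = false;
        // ... filling state, payload, etc. ...
    }

    static class BatchOrder {
        volatile boolean cancelSignalReceived = false;
        volatile boolean terminatedDueToCancelSignal = false;
        final List<ItemOrder> itemOrders = Collections.synchronizedList(new ArrayList<>());
    }

    // Run by the Batch process thread.
    static void processBatch(BatchOrder batch, int size) {
        for (int i = 0; i < size; i++) {
            if (batch.cancelSignalReceived) {              // checked once per loop iteration
                batch.terminatedDueToCancelSignal = true;  // tell the Cancel thread we stopped
                return;                                    // stop creating ItemOrders early
            }
            ItemOrder order = new ItemOrder();
            // ... fully set up the ItemOrder ...
            batch.itemOrders.add(order);                   // publish only once fully set up
        }
        batch.terminatedDueToCancelSignal = true;          // also raised on normal completion
    }

    // Run by the Cancel thread.
    static void cancel(BatchOrder batch) {
        batch.cancelSignalReceived = true;
        while (!batch.terminatedDueToCancelSignal) {
            Thread.onSpinWait();                           // spin until the producer has stopped
        }
        synchronized (batch.itemOrders) {                  // producer is done; mark what exists
            for (ItemOrder o : batch.itemOrders) {
                o.cancelled = true;                        // skip ones already 'Filling' in real code
            }
        }
    }
}
```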
But OMG, I can't help thinking: why does this typical scenario require such an intricate/hairy/clumsy solution? That is, back to my Question 3: shouldn't there be some pattern we can abstract and follow in this situation, something much more elegant, simple, and robust?
Assign the class a new attribute with an atomic enum type (as you do). When you push an object onto the queue (it should be a strong reference to the object), you also add it to a cache (key: some identifier, value: another reference to the same object). When you pop the queue in the consumer, you check this atomic attribute and proceed only if the value is correct (not cancelled). In the producer (processing the batch order), if you need to remove an item from processing, you change this atomic attribute to cancelled. This way the consumers only need to check the atomic attribute on each ItemOrder rather than looking it up anywhere else.
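A rough sketch of that idea, with hypothetical names: every ItemOrder carries an atomic status, a ConcurrentHashMap cache maps ids to the same object references, the cancel path just flips the status, and the consumer skips anything that was cancelled while it sat in the queue.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.atomic.AtomicReference;

class ItemOrder {
    enum Status { PENDING, CANCELLED, FILLING }
    final String id;
    final AtomicReference<Status> status = new AtomicReference<>(Status.PENDING);
    ItemOrder(String id) { this.id = id; }
}

class OrderPipeline {
    final BlockingQueue<ItemOrder> queue = new LinkedBlockingQueue<>();           // strong references
    final ConcurrentHashMap<String, ItemOrder> cache = new ConcurrentHashMap<>(); // id -> same reference

    // Producer: publish a new ItemOrder.
    void submit(ItemOrder order) throws InterruptedException {
        cache.put(order.id, order);
        queue.put(order);
    }

    // Cancel path: just flip the atomic status; the item can stay in the queue.
    void cancel(String id) {
        ItemOrder order = cache.get(id);
        if (order != null) {
            order.status.compareAndSet(ItemOrder.Status.PENDING, ItemOrder.Status.CANCELLED);
        }
    }

    // Consumer: skip anything that was cancelled while it sat in the queue.
    void consumeOne() throws InterruptedException {
        ItemOrder order = queue.take();
        if (order.status.compareAndSet(ItemOrder.Status.PENDING, ItemOrder.Status.FILLING)) {
            // ... fill the order ...
        }
        cache.remove(order.id);
    }
}
```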
Here's the scenario:
ThreadA is going to read from some socket, and write data to "MyFile.txt"
ThreadB is going to read "MyFile.txt", and when it reaches the end it will loop until new data is available in MyFile (because I don't want to re-open "MyFile.txt" and lose time seeking back to the position I was at).
Is it possible to do such a thing?
If not, is there another way to achieve it?
The problem you mention is the famous Producer-Consumer problem.
A common solution is to use a BlockingQueue.
An example of real-world usage is in AjaxYahooSearchEngineMonitor.
Thread A submits a string to the queue and then returns immediately.
Thread B picks the items up from the queue one by one and processes them.
When there is no item in the queue, Thread B just waits. See line 83 of the source code.
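A minimal sketch of that setup (names are illustrative): the reader plays Thread A and produces into the queue, the processing thread plays Thread B and blocks on take() when the queue is empty, and no intermediate file is needed at all.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class ProducerConsumerDemo {
    public static void main(String[] args) {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();

        Thread threadA = new Thread(() -> {            // would read from the socket in the real app
            for (int i = 0; i < 5; i++) {
                try {
                    queue.put("line " + i);            // hand the data over and return immediately
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
        });

        Thread threadB = new Thread(() -> {
            try {
                for (int i = 0; i < 5; i++) {          // the real app would loop forever
                    String line = queue.take();        // blocks (waits) while the queue is empty
                    System.out.println("processing: " + line);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        threadA.start();
        threadB.start();
    }
}
```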
I came across Java: notify() vs. notifyAll() all over again but still could not satisfy myself.
xagyg explained it very well, but in the end the concept became very hard to memorize.
I am trying my best here with a simple daily-life example so that I and others can come back to this if anyone forgets. My source of understanding is the answer by xagyg in the link above, but I am trying to simplify things here.
Say two guys go to a movie theatre and find it house-full. But then the box-office guy, say Jon, tells them there is a ticket that has been reserved for the president; if he does not come, Jon will sell it off. The guys tell Jon, "OK, we are waiting in a hotel nearby, please notify us when you get any info." They go to the hotel and sleep. Now the president does not turn up, so Jon has two options. The first is to notify one of the guys and let the other sleep. If he does that, one can go to the movie while the other will probably continue to sleep (until he gets notified; I am assuming this guy hasn't slept for a year :)). The other option is to notify (awaken) both of them and choose one of them for the movie (in an actual Java program it is not he who selects, but the VM/thread scheduler). In that case he will keep the other guy in the hotel room, as he could create some kind of issues :(. Once the show ends, this guy can go to the next show if a ticket is available. Consider the ticket as the lock and the theatre as the object. This is exactly what notify and notifyAll do. So it is clear that notifyAll is better than notify when in doubt.
Now consider a producer/consumer example.
Say two consumer threads are waiting for production in a store. The producer produces two items in a single go and exits. Now, if the producer uses notify, only one thread can consume while the other will continue to wait forever.
But if the producer uses notifyAll() here, both threads can go for consumption, one at a time.
Let me know if my understanding is correct.
I don't think what you've written in the last statement is correct. Each time the producer produces an object, it is supposed to notify a consumer, so if two objects are created it should invoke notify twice, not just once. This way, even if you use notify instead of notifyAll, you will still be able to get both consumer threads to consume.
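For illustration, here is a minimal guarded-block sketch of the store described above (names are made up). Because only consumers ever wait on this monitor, one notify() per produced item is enough to wake both consumers eventually; with mixed waiters, notifyAll() remains the safer default.

```java
import java.util.ArrayDeque;
import java.util.Queue;

class Store {
    private final Queue<String> items = new ArrayDeque<>();

    public synchronized void put(String item) {
        items.add(item);
        notify();              // one wake-up per item produced
    }

    public synchronized String take() throws InterruptedException {
        while (items.isEmpty()) {
            wait();            // guarded block: re-check the condition after waking up
        }
        return items.remove();
    }
}

public class NotifyPerItemDemo {
    public static void main(String[] args) {
        Store store = new Store();
        Runnable consumer = () -> {
            try {
                System.out.println(Thread.currentThread().getName() + " got " + store.take());
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        };
        new Thread(consumer, "consumer-1").start();
        new Thread(consumer, "consumer-2").start();
        // Producer: two items, two notify() calls (one inside each put()).
        store.put("item-1");
        store.put("item-2");
    }
}
```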
I have many threads performing different operations on objects, and when roughly 50% of the task is finished I want to serialize everything (maybe because I want to shut down my machine).
When I come back, I want to start from the point where I left off.
How can we achieve this?
This is like saving the state of the objects of a game while playing.
Normally we save the state of an object and retrieve it back, but here we are also storing each process's count/state.
For example:
I have a thread that is creating a salary Excel sheet for 50 thousand employees.
Another thread is creating appraisal letters for the same 50 thousand employees.
Another thread is writing "Happy New Year" e-mails to the 50 thousand employees.
So imagine multiple operations.
Now I want to shut down when about 50% of the task is finished: say the salary Excel sheet has been written for 25-30 thousand employees, the appraisal letters are done for 25-30 thousand, and so on.
When I come back the next day, I want to start the process from where I left off.
This is like a resume feature.
I'm not sure if this might help, but you can achieve this if the threads communicate via in-memory queues.
To serialize the whole application, what you need to do is disable consumption of the queues; when all the threads are idle you'll reach a "safe point" where you can serialize the whole state. You'll need to keep track of all the threads you spawn to know whether they are idle.
You might be able to do this with another technology (maybe a Java agent?) that freezes the JVM and allows you to dump the whole state, but I don't know if such a thing exists.
Well, it's not much different from saving the state of an object.
Just maintain separate queues for the different kinds of input, and on every launch (first launch or relaunch) check those queues; if they are not empty, resume your 'stopped process' by starting a new process with the remaining data.
Say, for example, an app is sending messages and you quit the app with 10 messages remaining. Have a global queue, which the app's senderMethod checks on every launch. In this case the pending queue will contain 10 messages, so it will continue sending the remaining ones.
Edit:
Basically, for all resumable processes, say pr1, pr2, ..., prN, maintain queues of inputs, say q1, q2, ..., qN. Each queue should remove processed elements so that it contains only pending inputs. As soon as you suspend the system, store these queues, and on relaunching restore them. Have a common routine, say resumeOperation, which calls all the resumable processes (pr1, pr2, ..., prN). It will trigger execution of the methods with non-empty queues, which in turn replicates the resuming behaviour.
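As a rough illustration of one such queue (all names, including resumeOperation and the file path, are hypothetical): the queue only ever holds unprocessed inputs, it is written to disk on suspend, and resumeOperation() simply restarts work on whatever is left.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.concurrent.ConcurrentLinkedQueue;

public class ResumableSender {
    private final Path pendingFile = Path.of("pending-messages.txt");
    private final ConcurrentLinkedQueue<String> pending = new ConcurrentLinkedQueue<>();

    // Called on every launch (first launch or relaunch).
    public void resumeOperation() throws IOException {
        if (Files.exists(pendingFile)) {
            pending.addAll(Files.readAllLines(pendingFile));   // restore unsent messages
        }
        String msg;
        while ((msg = pending.poll()) != null) {               // only pending inputs remain
            send(msg);
        }
        Files.deleteIfExists(pendingFile);
    }

    // Called when the system is suspended: store whatever was not processed.
    public void suspend() throws IOException {
        Files.write(pendingFile, List.copyOf(pending));
    }

    public void enqueue(String msg) { pending.add(msg); }

    private void send(String msg) { System.out.println("sending: " + msg); }
}
```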
Java provides the java.io.Serializable interface to indicate serialization support in classes.
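For completeness, a minimal example of that interface (the class and field are made up): mark the state you want to persist as Serializable and write/read it with object streams.

```java
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

class TaskProgress implements Serializable {
    private static final long serialVersionUID = 1L;
    int lastProcessedEmployee;          // e.g. 27_000 of 50_000
}

public class SerializeDemo {
    public static void main(String[] args) throws Exception {
        TaskProgress progress = new TaskProgress();
        progress.lastProcessedEmployee = 27_000;

        try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream("progress.ser"))) {
            out.writeObject(progress);                               // save on shutdown
        }
        try (ObjectInputStream in = new ObjectInputStream(new FileInputStream("progress.ser"))) {
            TaskProgress restored = (TaskProgress) in.readObject();  // restore on restart
            System.out.println("resume from employee " + restored.lastProcessedEmployee);
        }
    }
}
```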
You don't provide much information about the task, so it's difficult to give an answer.
One way to think about a task is in terms of a general algorithm that can be split into several steps. Each of these steps is in turn a task itself, so you should see a pattern here.
By cutting each algorithm into small pieces until you cannot divide it any further, you get a pretty good idea of where your task can be interrupted and recovered later.
The result of a task can be:
a success: the task returns a value of the expected type
a failure: somehow, something didn't turn out right during the computation
an interrupted computation: the work wasn't finished, but it may be resumed later, and the return value is the state of the task
(Note that the latter case could be considered a subcase of a failure; it's up to you to organize your protocol as you see fit.)
Depending on how you generate the interruption event (will it be a message passed from the main thread to the worker threads? Will it be an exception?), that event will have to bubble up within the task tree and trigger each task to evaluate whether its work can be resumed or not, and then provide a serialized version of itself to the larger task containing it.
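One possible shape for those three outcomes, sketched with hypothetical names: a result is either a value, a failure, or an interrupted computation carrying the state needed to resume it later.

```java
// A worker returns Success on completion, Failure on error, and Interrupted
// with whatever it needs to pick up where it left off.
abstract class TaskResult<T, S> {

    static final class Success<T, S> extends TaskResult<T, S> {
        final T value;
        Success(T value) { this.value = value; }
    }

    static final class Failure<T, S> extends TaskResult<T, S> {
        final Exception cause;
        Failure(Exception cause) { this.cause = cause; }
    }

    static final class Interrupted<T, S> extends TaskResult<T, S> {
        final S resumeState;                    // serializable state of the partial work
        Interrupted(S resumeState) { this.resumeState = resumeState; }
    }
}
```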
I don't think serialization is the right approach to this problem. What you want is a persistent queue, from which you remove an item once you've processed it. Every time you start the program you just start processing the queue from the beginning. There are numerous ways of implementing a persistent queue, but a database comes to mind given the scale of your operations.
I'm writing a simple application for an android phone to communicate with a PC over a socket connection.
The phone might write or receive a message at any time, and the computer might as well.
The solution I have used so far works like this:
- Create a thread and call READ in it.
- Run an infinite loop.
- Check if the thread has finished; if so, grab the READ result and process it, then start a new thread that also calls READ.
- Check whether another object working in another thread wants to write; if so, grab the data and write it.
Specifically, I am using AsyncTask from the Android API to run the threads.
It all works fine, but I am wondering whether creating a new thread for each READ is too performance-heavy and/or bad coding, and if so, how I can reuse the same thread to get the same behaviour.
Alternatively, is there a better way to handle this situation overall?
Thanks in advance for any advice!
Yes, creating a new thread for each read is grossly inefficient for your described need.
Instead, consider creating a single thread, a List<your data type> to hold reads, and a semaphore to flag that data is available. Your thread reads each message, places it into the list, and posts the semaphore to whatever is waiting for data. That 'whatever' then receives whatever is in the list until it empties it, then goes back to waiting on the semaphore.
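A sketch of that single-reader design (names are illustrative): the read thread appends to a synchronized list and releases one semaphore permit per message; the consumer blocks on the semaphore, then drains the list. A message that slips in between draining the permits and draining the list just causes one harmless empty round later, so the caller should ignore an empty result and wait again.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.Semaphore;

public class SingleReader {
    private final List<String> inbox = Collections.synchronizedList(new ArrayList<>());
    private final Semaphore available = new Semaphore(0);

    // The one and only read thread calls this for every message it receives.
    void onMessageRead(String message) {
        inbox.add(message);
        available.release();                 // signal: data is available
    }

    // Whatever consumes the data loops on this.
    List<String> awaitMessages() throws InterruptedException {
        available.acquire();                 // block until at least one message has arrived
        available.drainPermits();            // we are about to take everything anyway
        List<String> drained;
        synchronized (inbox) {
            drained = new ArrayList<>(inbox);
            inbox.clear();
        }
        return drained;                      // may occasionally be empty; callers just loop
    }
}
```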
You need one read thread and one write thread. Both should use a BlockingQueue to interface with the rest of the application. Although I don't understand why you would have multiple threads wanting to write.
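A sketch of that two-thread layout (names are illustrative): the reader thread pushes incoming lines onto one BlockingQueue, the writer thread drains another, and the rest of the application only ever touches the two queues.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class SocketConnection {
    final BlockingQueue<String> incoming = new LinkedBlockingQueue<>();
    final BlockingQueue<String> outgoing = new LinkedBlockingQueue<>();

    void start(Socket socket) throws Exception {
        BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()));
        PrintWriter out = new PrintWriter(socket.getOutputStream(), true);

        Thread reader = new Thread(() -> {
            try {
                String line;
                while ((line = in.readLine()) != null) {
                    incoming.put(line);              // hand the message to the app
                }
            } catch (Exception e) {
                // socket closed or interrupted: let the thread end
            }
        });

        Thread writer = new Thread(() -> {
            try {
                while (true) {
                    out.println(outgoing.take());    // blocks until the app queues a message
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        reader.start();
        writer.start();
    }
    // The rest of the application calls incoming.take()/poll() and outgoing.put(...).
}
```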