I want to implement a producer/consumer scenario where I have multiple producers and a single consumer. Producers keep adding items to a queue and the consumer dequeues them. When the consumer has processed enough items, both the producers and the consumer should stop. The consumer can easily terminate itself once it has processed enough items, but the producers also need to know when to exit. The typical producer-side poison pills do not work here.
One way to do it would be to have a shared boolean variable between the consumer and the producers: the consumer sets it to true, and the producers periodically check it and exit once it is set.
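Something like this minimal sketch is what I have in mind (the class and item names are made up; I'm assuming a volatile boolean is enough for visibility):
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class StopFlagSketch {
    // bounded, so the fast producers in this sketch cannot exhaust memory
    private static final BlockingQueue<String> queue = new LinkedBlockingQueue<>(1000);
    private static volatile boolean done = false; // written by the consumer, read by producers

    public static void main(String[] args) {
        Runnable producer = () -> {
            int i = 0;
            while (!done) {              // producers periodically check the flag
                queue.offer("item-" + i++);
            }
        };

        Runnable consumer = () -> {
            try {
                for (int processed = 0; processed < 100; processed++) {
                    queue.take();        // "process" an item
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            } finally {
                done = true;             // processed enough: tell every producer to exit
            }
        };

        new Thread(consumer).start();
        new Thread(producer).start();
        new Thread(producer).start();
    }
}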
Any better ideas on how I can do this?
I suppose you can have a shared counter with a maximum: if an increment takes the counter past the maximum, the thread can no longer add to the queue.
private final AtomicInteger count = new AtomicInteger(0);
private final int MAX = ...; // chosen processing limit
private final BlockingQueue<T> queue = ...;

public boolean add(T t) {
    if (count.incrementAndGet() > MAX)
        return false; // limit reached: the caller should stop producing
    return queue.offer(t);
}
Not sure if this approach would be any use.
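For example, a producer could simply treat a false return from add() as its signal to stop. Here's a rough sketch that wraps the fields above in a made-up BoundedSink class:
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative wrapper around the fields above; the class and method names are made up.
class BoundedSink<T> {
    private final AtomicInteger count = new AtomicInteger(0);
    private final int max;
    private final BlockingQueue<T> queue = new LinkedBlockingQueue<>();

    BoundedSink(int max) { this.max = max; }

    /** Returns false once max items have been enqueued, telling the producer to stop. */
    boolean add(T t) {
        if (count.incrementAndGet() > max) {
            return false;
        }
        return queue.offer(t);
    }

    T take() throws InterruptedException { return queue.take(); }
}

// A producer simply stops when add() reports that the limit has been reached.
class BoundedProducer implements Runnable {
    private final BoundedSink<String> sink;
    BoundedProducer(BoundedSink<String> sink) { this.sink = sink; }
    public void run() {
        int i = 0;
        while (sink.add("item-" + i++)) {
            // keep producing until the shared counter hits the limit
        }
    }
}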
Include a reference to the producer in each message.
Each producer provides a callback method that tells it to stop producing.
The consumer keeps a registry of producers based on the unique set of references that are passed to it.
When the consumer has had enough, it iterates over the registry of producers and tells them to stop by calling the callback method (a rough sketch follows below).
Would only work if producer and consumer are in the same JVM.
Wouldn't stop any new producers from starting up.
And I'm not sure it maintains the separation of producer and consumer.
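A rough sketch of the registry/callback idea, assuming everything runs in one JVM (all the names here are illustrative, not an existing API):
import java.util.Set;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;

interface StoppableProducer {
    void stopProducing();                   // the callback the consumer invokes
}

class Message {
    final String payload;
    final StoppableProducer origin;         // reference to the producer travels with the message
    Message(String payload, StoppableProducer origin) {
        this.payload = payload;
        this.origin = origin;
    }
}

class RegisteringConsumer implements Runnable {
    private final BlockingQueue<Message> queue;
    private final Set<StoppableProducer> registry = ConcurrentHashMap.newKeySet();
    private final int limit;

    RegisteringConsumer(BlockingQueue<Message> queue, int limit) {
        this.queue = queue;
        this.limit = limit;
    }

    public void run() {
        try {
            for (int processed = 0; processed < limit; processed++) {
                Message m = queue.take();
                registry.add(m.origin);     // build the registry from the references in messages
                // ... process m.payload ...
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        } finally {
            registry.forEach(StoppableProducer::stopProducing); // tell every known producer to stop
        }
    }
}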
Alternatively, since the queue is the shared resource between these two objects, could you introduce an "isOpen" state on the queue, checked by the producers before they write to it and set by the consumer when it has done as much work as it is happy to do?
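For the "isOpen" variant, a sketch of a small queue wrapper might look like this (again, the names are made up; this is not part of java.util.concurrent):
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.atomic.AtomicBoolean;

// Illustrative wrapper with an open/closed state.
class ClosableQueue<T> {
    private final BlockingQueue<T> delegate = new LinkedBlockingQueue<>();
    private final AtomicBoolean open = new AtomicBoolean(true);

    /** Producers call this; it returns false once the consumer has closed the queue. */
    boolean offer(T t) {
        return open.get() && delegate.offer(t);
    }

    T take() throws InterruptedException {
        return delegate.take();
    }

    /** The consumer calls this when it has done as much work as it is happy to do. */
    void close() {
        open.set(false);
    }
}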
From what I understand you'll need something like this:
private static final BlockingQueue<String> queue = new LinkedBlockingQueue<String>();
private static volatile boolean needMore = true; // volatile so producers see the consumer's update

static class Consumer implements Runnable
{
    Scanner scanner = new Scanner(System.in);

    @Override
    public void run()
    {
        do
        {
            try
            {
                String s = queue.take();
                System.out.println("Got " + s);
                needMore = scanner.nextBoolean();
            } catch (InterruptedException e)
            {
                e.printStackTrace();
            }
        }
        while (needMore);
    }
}

static class Producer implements Runnable
{
    Random rand = new Random();

    @Override
    public void run()
    {
        System.out.println("Starting new producer...");
        do
        {
            queue.add(String.valueOf(rand.nextInt()));
            try
            {
                Thread.sleep(1000);
            } catch (InterruptedException e)
            {
                e.printStackTrace();
            }
        }
        while (needMore);
        System.out.println("Producer shuts down.");
    }
}

public static void main(String[] args) throws Exception
{
    Thread producer1 = new Thread(new Producer());
    Thread producer2 = new Thread(new Producer());
    Thread producer3 = new Thread(new Producer());
    Thread consumer = new Thread(new Consumer());

    producer1.start();
    producer2.start();
    producer3.start();
    consumer.start();

    producer1.join();
    producer2.join();
    producer3.join();
    consumer.join();
}
The consumer dynamically decides whether it needs more data and stops when it has found what it was searching for, for example; this is simulated by the user typing true/false to continue/stop.
Here is an I/O sample:
Starting new producer...
Starting new producer...
Starting new producer...
Got -1782802247
true
Got 314306979
true
Got -1787470224
true
Got 1035850909
false
Producer shuts down.
Producer shuts down.
Producer shuts down.
This may not look clean at first sight, but I think it's actually cleaner than having an extra variable etc. if you are trying to do this as part of a shutdown process.
Make your consumers an ExecutorService, and from your consumer task call shutdownNow() when the task decides that the consumers have consumed enough. This cancels all pending tasks on the queue and interrupts currently running tasks, and the producers will start to get a RejectedExecutionException upon submission. You can treat this exception as a signal from the consumers.
The only caveat is that with multiple consumers, calling shutdownNow() in a serial manner does not guarantee that no task will be executed after one consumer has decided it was enough. I'm assuming that's fine. If you need this guarantee, then you can indeed share an AtomicBoolean and let all producers and consumers check it.
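A minimal sketch of the idea (the limit, the names, and the single-consumer pool are assumptions for illustration):
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.atomic.AtomicInteger;

public class ExecutorShutdownSketch {
    public static void main(String[] args) {
        ExecutorService consumer = Executors.newSingleThreadExecutor(); // the single consumer
        AtomicInteger processed = new AtomicInteger();
        int limit = 100;                                                // assumed "enough" threshold

        Runnable producer = () -> {
            int i = 0;
            while (true) {
                String item = "item-" + i++;
                try {
                    consumer.submit(() -> {
                        System.out.println("processed " + item);        // the "consume" step
                        if (processed.incrementAndGet() >= limit) {
                            consumer.shutdownNow();                     // cancel queued work, reject new submissions
                        }
                    });
                } catch (RejectedExecutionException e) {
                    return;                                             // treat rejection as the stop signal
                }
            }
        };

        new Thread(producer).start();
        new Thread(producer).start();
    }
}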
Related
I have to manage scheduled file replications in a system. The file replications are scheduled by users and I need to restrict the amount of system resources used during replication. The amount of time that each replication may take is not defined (i.e. a replication may be scheduled to run every 15 minutes and the previous run may still be running when the next run is due) and a replication should not be queued if it's already queued or running.
I have a scheduler that periodically checks for due file replications and, for each one, (1) adds it to a blocking queue if it is neither queued nor running, or (2) drops it otherwise.
private final Object scheduledReplicationsLock = new Object();
private final BlockingQueue<Replication> replicationQueue = new LinkedBlockingQueue<>();
private final Set<Long> queuedReplicationIds = new HashSet<>();
private final Set<Long> runningReplicationIds = new HashSet<>();
public boolean add(Replication replication) {
    synchronized (scheduledReplicationsLock) {
        // If the replication job is either still executing or is already queued, do not add it.
        if (queuedReplicationIds.contains(replication.id) || runningReplicationIds.contains(replication.id)) {
            return false;
        }
        replicationQueue.add(replication);
        queuedReplicationIds.add(replication.id);
        return true;
    }
}
I also have a pool of threads that waits until there is a replication in the queue and executes it. Below is the main method of each thread in the thread pool:
public void run() {
while (true) {
Replication replication = null;
synchronized (scheduledReplicationsLock) {
// This will block until a replication job is ready to be run or the current thread is interrupted.
replication = replicationQueue.take();
// Move the ID value out of the queued set and into the active set
Long replicationId = replication.getId();
queuedReplicationIds.remove(replicationId);
runningReplicationIds.add(replicationId);
}
executeReplication(replication);
}
}
This code gets into a deadlock because the first thread in the thread pool acquires scheduledReplicationsLock and prevents the scheduler from adding replications to the queue. Moving replicationQueue.take() out of the synchronized block would eliminate the deadlock, but then it's possible that an element is removed from the queue without the hash sets being updated atomically with it, which could cause a replication to be incorrectly dropped.
Should I use BlockingQueue.poll(), and release the lock and sleep if the queue is empty, instead of using BlockingQueue.take()?
Fixes to the current solution or other solutions that meet the requirements are welcome.
wait / notify
Keeping your same control flow, instead of blocking on the BlockingQueue instance while holding the mutex, you can have the worker wait for notifications on the shared lock object (the queue itself in the sample below), which forces the worker thread to release the lock and return to the waiting pool.
Here is a reduced sample of your producer:
private final Queue<Replication> replicationQueue = new LinkedList<>();
private final Set<Long> runningReplicationIds = new HashSet<>();

public boolean add(Replication replication) {
    synchronized (replicationQueue) {
        // If the replication job is either still executing or is already queued, do not add it.
        if (replicationQueue.contains(replication) || runningReplicationIds.contains(replication.id)) {
            return false;
        } else {
            replicationQueue.add(replication);
            replicationQueue.notifyAll(); // wake up a waiting worker
            return true;
        }
    }
}
The worker Runnable would then be updated as follows:
public void run() {
    synchronized (replicationQueue) {
        while (true) {
            try {
                // wait() releases the lock until the scheduler notifies us of new work
                while (replicationQueue.isEmpty()) {
                    replicationQueue.wait();
                }
                Replication replication = replicationQueue.poll();
                runningReplicationIds.add(replication.getId());
                executeReplication(replication);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            }
        }
    }
}
BlockingQueue
Generally you are better off using the BlockingQueue to coordinate your producer and replicating worker pool.
The BlockingQueue is, as the name implies, blocking by nature and will cause the calling thread to block only if items cannot be pulled / pushed from / to the queue.
Meanwhile, note that you will have to update your running/enqueued state management, as you would only be synchronizing on the BlockingQueue items and dropping the other constraints. Whether that is acceptable depends on your context.
This way, you would drop all the other mutexes and rely on the BlockingQueue as your synchronization state:
private final BlockingQueue<Replication> replicationQueue = new LinkedBlockingQueue<>();
public boolean add(Replication replication) throws InterruptedException {
    // not sure if this is the proper invariant to check, as at some point the replication
    // would be neither queued nor running while still having been processed
    if (replicationQueue.contains(replication)) {
        return false;
    }
    // use `put` instead of `add` as this will block waiting for free space
    replicationQueue.put(replication);
    return true;
}
The workers will then take indefinitely from the BlockingQueue:
public void run() {
    try {
        while (true) {
            Replication replication = replicationQueue.take();
            executeReplication(replication);
        }
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }
}
You do not need any additional synchronization block if you are using a BlockingQueue.
Quote from docs (https://docs.oracle.com/javase/7/docs/api/java/util/concurrent/BlockingQueue.html)
BlockingQueue implementations are thread-safe. All queuing methods achieve their effects atomically using internal locks or other forms of concurrency control.
Just use something like this:
public void run() {
    try {
        while (true) {
            // take() blocks until the next element is available
            Replication replication = replicationQueue.take();
            Long replicationId = replication.getId();
            queuedReplicationIds.remove(replicationId);
            runningReplicationIds.add(replicationId);
            executeReplication(replication);
        }
    } catch (InterruptedException ex) {
        // interrupted while waiting for the next element
    }
}
Look at the javadoc: https://docs.oracle.com/javase/7/docs/api/java/util/concurrent/LinkedBlockingQueue.html#take()
Or you can use BlockingQueue.poll() with a timeout.
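For example, poll() with a timeout lets the worker wake up periodically and re-check a stop flag instead of blocking forever. A sketch reusing replicationQueue and executeReplication from the question, plus an assumed volatile boolean running flag (TimeUnit is java.util.concurrent.TimeUnit):
public void run() {
    try {
        while (running) {
            Replication replication = replicationQueue.poll(1, TimeUnit.SECONDS);
            if (replication == null) {
                continue;            // nothing arrived within the timeout; re-check the flag
            }
            executeReplication(replication);
        }
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }
}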
UPD: After the discussion, I extended LinkedBlockingQueue with two ConcurrentHashSets and added an afterTake() method to remove processed replicas. You do not need any additional synchronization outside the queue. Just put a replica in one thread, take it in another, and call afterTake() when the replication has finished. You will need to override the other queue methods if you want to use them as well.
package ru.everytag;
import io.vertx.core.impl.ConcurrentHashSet;
import java.util.concurrent.LinkedBlockingQueue;
public class TwoPhaseBlockingQueue<E> extends LinkedBlockingQueue<E> {

    private ConcurrentHashSet<E> items = new ConcurrentHashSet<>();
    private ConcurrentHashSet<E> taken = new ConcurrentHashSet<>();

    @Override
    public void put(E e) throws InterruptedException {
        if (!items.contains(e)) {
            items.add(e);
            super.put(e);
        }
    }

    @Override
    public E take() throws InterruptedException {
        E item = super.take(); // delegate to LinkedBlockingQueue, then track the taken element
        taken.add(item);
        items.remove(item);
        return item;
    }

    public void afterTake(E e) {
        if (taken.contains(e)) {
            taken.remove(e);
        } else if (items.contains(e)) {
            throw new IllegalArgumentException("Element still in the queue");
        }
    }
}
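Usage could look roughly like this (a sketch; Replication and executeReplication are the types from the question):
// Hypothetical usage of the queue above.
class ReplicationPipeline {
    private final TwoPhaseBlockingQueue<Replication> queue = new TwoPhaseBlockingQueue<>();

    // Scheduler thread: put() silently ignores a replication that is already queued.
    void schedule(Replication replication) throws InterruptedException {
        queue.put(replication);
    }

    // Worker thread: take() marks the element as taken, afterTake() marks it finished.
    void workerLoop() throws InterruptedException {
        while (true) {
            Replication r = queue.take();
            executeReplication(r);
            queue.afterTake(r);
        }
    }

    private void executeReplication(Replication r) {
        // run the replication, as in the question
    }
}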
I have been working on the PC problem to understand Java Synchronization and inter thread communication. Using the code at the bottom, the output was
Producer produced-0
Producer produced-1
Producer produced-2
Consumer consumed-0
Consumer consumed-1
Consumer consumed-2
Producer produced-3
Producer produced-4
Producer produced-5
Consumer consumed-3
Consumer consumed-4
But shouldn't the output be something like the following?
Producer produced-0
Consumer consumed-0
Producer produced-1
Consumer consumed-1
Producer produced-2
Consumer consumed-2
Producer produced-3
I expect such an output because my understanding is that the consumer is notified of the produced value as soon as the produce method releases the lock when it returns. The consumer block that was waiting then acquires the lock and consumes the produced value, while the produce method is blocked. That lock is released at the end of the consume method and is acquired by the producer thread that was blocked on the synchronized block, and the cycle continues, each method being blocked in turn while the other holds the lock.
Please let me know what I misunderstood. Thanks.
package MultiThreading;
//Java program to implement solution of producer
//consumer problem.
import java.util.LinkedList;
public class PCExample2
{
public static void main(String[] args)
throws InterruptedException
{
// Object of a class that has both produce()
// and consume() methods
final PC pc = new PC();
// Create producer thread
Thread t1 = new Thread(new Runnable()
{
@Override
public void run()
{
try
{
while (true) {
pc.produce();
}
}
catch(InterruptedException e)
{
e.printStackTrace();
}
}
});
// Create consumer thread
Thread t2 = new Thread(new Runnable()
{
@Override
public void run()
{
try
{
while (true) {
pc.consume();
}
}
catch(InterruptedException e)
{
e.printStackTrace();
}
}
});
// Start both threads
t1.start();
t2.start();
// t1 finishes before t2
t1.join();
t2.join();
}
// This class has a list, producer (adds items to list
// and consumer (removes items).
public static class PC
{
// Create a list shared by producer and consumer
// Size of list is 12.
LinkedList<Integer> list = new LinkedList<>();
int capacity = 12;
int value = 0;
// Function called by producer thread
public void produce() throws InterruptedException
{
synchronized (this)
{
// producer thread waits while list
// is full
while (list.size()==capacity)
wait();
System.out.println("Producer produced-"
+ value);
// to insert the jobs in the list
list.add(value++);
// notifies the consumer thread that
// now it can start consuming
notify();
// makes the working of program easier
// to understand
Thread.sleep(1000);
}
}
// Function called by consumer thread
public void consume() throws InterruptedException
{
synchronized (this)
{
// consumer thread waits while list
// is empty
while (list.size()==0)
wait();
// to retrieve the first job in the list
int val = list.removeFirst();
System.out.println("Consumer consumed-"
+ val);
// Wake up producer thread
notify();
// and sleep
Thread.sleep(1000);
}
}
}
}
It is not necessarily the case that the first thread to request a currently taken lock (let's call it Thread A) will acquire the lock as soon as its current owner relinquishes it, if other threads have also requested the lock since Thread A tried to acquire it. There is no ordered "queue"; see here and here. So, judging by the output of the program, it seems that after the producer releases the lock there may not be enough time for the consumer to acquire it before the producer's while loop repeats and the producer requests the lock again (as the other answers have pointed out, Thread.sleep() does not cause the sleeping thread to relinquish the lock). If the consumer is unlucky, the producer will re-acquire the lock even though the consumer was there first.
However, there seems to be another misunderstanding. The producer thread will never "wait" on the PC until the list contains 12 elements, so the consumer thread is only guaranteed the lock once the producer has produced at least 12 elements (which, incidentally, is what happens when I run the program: the consumer never gets a chance until the producer thread calls wait() on the PC, but then it consumes the entire list). This also means that if it happens to be the consumer's turn while the list contains fewer than 12 elements, the producer thread will not be notified, because it is not waiting to be notified; it is only blocked, already "anticipating" or "expecting" the lock on the PC (see also here on the difference between "waiting" and "blocked"). So even if you put the two Thread.sleep() invocations outside the synchronized blocks, thereby giving the consumer thread enough time (hopefully; you shouldn't rely on this) to acquire the lock, the notify() call from the consumer thread will have no effect, because the producer thread will never be in a waiting state.
To really ensure that both threads modify the PC alternately, you would have to make the producer thread wait whenever the list size is greater than zero, rather than only when the list contains 12 (or however many) elements, as sketched below.
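A sketch of that change, modifying the produce() method from the question:
// Producer waits whenever there is still an unconsumed value, so the two threads alternate.
public void produce() throws InterruptedException
{
    synchronized (this)
    {
        while (list.size() > 0)
            wait();                       // wait until the consumer has taken the last value

        System.out.println("Producer produced-" + value);
        list.add(value++);
        notify();                         // wake up the waiting consumer
    }
}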
From the API: The awakened thread will compete in the usual manner with any other threads that might be actively competing to synchronize on this object; for example, the awakened thread enjoys no reliable privilege or disadvantage in being the next thread to lock this object.
Move sleep() outside the synchronized block to give the other thread a chance to acquire the lock.
Pay attention to two methods: notify and Thread.sleep.
Object.notify():
Wakes up a single thread that is waiting on this object's monitor. If any threads are waiting on this object, one of them is chosen to be awakened. The choice is arbitrary and occurs at the discretion of the implementation. A thread waits on an object's monitor by calling one of the wait methods.
The awakened thread will not be able to proceed until the current thread relinquishes the lock on this object. The awakened thread will compete in the usual manner with any other threads that might be actively competing to synchronize on this object; for example, the awakened thread enjoys no reliable privilege or disadvantage in being the next thread to lock this object.
Thread.sleep():
Causes the currently executing thread to sleep (temporarily cease execution) for the specified number of milliseconds plus the specified number of nanoseconds, subject to the precision and accuracy of system timers and schedulers. The thread does not lose ownership of any monitors.
OK. Now you know that notify will just wake up a thread that is also monitoring this object, but the awakened thread still has to compete to synchronize on it. If your producer notifies the consumer and releases the lock, the producer and the consumer are then standing at the same point, competing. And Thread.sleep does not do the work you want: it does not release the lock while sleeping, as the doc says. So this output can happen.
In conclusion, Thread.sleep does not work well with synchronized, and even if you remove it, the first output can still happen because of how notify works.
@Andrew S's answer will work.
Just adding the appropriate condition will do the work.
import java.util.LinkedList;
import java.util.Queue;
class Producer extends Thread {
public Queue<Integer> producerQueue;
public int size;
public int count = 0;
Producer(Queue<Integer> queue, int size) {
producerQueue = queue;
this.size = size;
}
public void produce() throws InterruptedException {
synchronized (producerQueue) {
while (producerQueue.size() > 0) {
producerQueue.wait();
}
System.out.println("Produced : " + count);
producerQueue.add(count++);
producerQueue.notify();
Thread.sleep(100);
}
}
public void run() {
try {
while (true) produce();
} catch (InterruptedException e) {
e.printStackTrace();
}
}
}
class Consumer extends Thread {
public Queue<Integer> consumerQueue;
public int size;
Consumer(Queue<Integer> queue, int size) {
consumerQueue = queue;
this.size = size;
}
public void consume() throws InterruptedException {
synchronized (consumerQueue) {
while (consumerQueue.size() == 0) {
consumerQueue.wait();
Thread.sleep(100);
}
System.out.println("Consumed : " + consumerQueue.poll());
consumerQueue.notify();
}
}
public void run() {
try {
while (true) consume();
} catch (InterruptedException e) {
e.printStackTrace();
}
}
}
public class Test {
public static void main(String[] args) {
Queue<Integer> commonQueue = new LinkedList<>();
int size = 10;
new Producer(commonQueue, size).start();
new Consumer(commonQueue, size).start();
}
}
I am having an issue debugging my SynchronousQueue. It's in Android Studio, but that should not matter since it's Java code. I am passing true to the SynchronousQueue constructor so it is "fair", meaning it is a FIFO queue. But it is not obeying the rules: it still lets the consumer print first and the producer after. The second issue I have is that I want these threads to never die. Do you think I should use a while loop in the producer and consumer threads and let them keep producing and consuming from each other?
Here is my simple code:
package com.example.android.floatingactionbuttonbasic;
import java.util.concurrent.SynchronousQueue;
import trikita.log.Log;
public class SynchronousQueueDemo {
public SynchronousQueueDemo() {
}
public void startDemo() {
final SynchronousQueue<String> queue = new SynchronousQueue<String>(true);
Thread producer = new Thread("PRODUCER") {
public void run() {
String event = "FOUR";
try {
queue.put(event); // thread will block here
Log.v("myapp","published event:", Thread
.currentThread().getName(), event);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
};
producer.start(); // starting publisher thread
Thread consumer = new Thread("CONSUMER") {
public void run() {
try {
String event = queue.take(); // thread will block here
Log.v("myapp","consumed event:", Thread
.currentThread().getName(), event);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
};
consumer.start(); // starting consumer thread
}
}
To start the threads I simply call new SynchronousQueueDemo().startDemo();
The logs always look like this, no matter what I pass to the SynchronousQueue constructor for "fair":
/SynchronousQueueDemo$2$override(26747): myapp consumed event: CONSUMER FOUR
V/SynchronousQueueDemo$1$override(26747): myapp published event:PRODUCER FOUR
Checking the docs here, it says the following:
public SynchronousQueue(boolean fair)
Creates a SynchronousQueue with the specified fairness policy.
Parameters:
fair - if true, waiting threads contend in FIFO order for access; otherwise the order is unspecified.
The fairness policy relates to the order in which the queue is read. The order of execution for a producer/consumer is for the consumer to take(), releasing the producer (which was blocking on put()). Set fairness=true if the order of consumption is important.
If you want to keep the threads alive, have a loop condition which behaves well when interrupted (see below). Presumably you want to put a Thread.sleep() in the Producer, to limit the rate at which events are produced.
public void run() {
    boolean interrupted = false;
    while (!interrupted) {
        try {
            // consumer: block for the next event
            // (in the producer, sleep first and then call queue.put(event))
            String event = queue.take();
        } catch (InterruptedException e) {
            interrupted = true;
        }
    }
}
A SynchronousQueue works on a simple concept: you can only produce if you have a consumer.
1) If you start doing queue.put() without any queue.take(), the thread will block there. So as soon as there is a queue.take(), the producer thread will be unblocked.
2) Similarly, if you start doing queue.take(), it will block until there is a producer. So once there is a queue.put(), the consumer thread will be unblocked.
So as soon as queue.take() is executed, both the producer and the consumer threads are unblocked. But remember that the producer and the consumer run in separate threads, so the log statements after the blocking calls can execute in either order. In my case the producer happened to print first:
V/SynchronousQueueDemo$1$override(26747): myapp published event:PRODUCER FOUR
/SynchronousQueueDemo$2$override(26747): myapp consumed event: CONSUMER FOUR
I have a javax.jms.Queue and my listener listening to this queue. I get the message (a String) and execute a process, passing the string as an input parameter to that process.
I want only 10 instances of that process running at one time. Only once those have finished should the next messages be processed.
How can this be achieved? As it is, it reads all the messages at once and runs as many instances of the process as there are messages, causing the server to hang.
// using javax.jms.MessageListener
message = consumer.receive(5000);
if (message != null) {
    try {
        handler.onMessage(message); // handler is a MessageListener instance
    } catch (Exception e) {
        e.printStackTrace();
    }
}
Try putting this annotation on your MDB listener:
@ActivationConfigProperty(propertyName = "maxSession", propertyValue = "10")
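For example, on a JMS MDB it sits next to the other activation properties (a sketch; the listener class name is illustrative, and maxSession itself is container-specific, so check your application server's documentation):
import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.Message;
import javax.jms.MessageListener;

// Sketch of an MDB capped at 10 concurrent sessions.
@MessageDriven(activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue"),
    @ActivationConfigProperty(propertyName = "maxSession", propertyValue = "10")
})
public class ProcessLauncherMdb implements MessageListener {
    @Override
    public void onMessage(Message message) {
        // at most 10 of these run concurrently; start your external process here
    }
}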
I am assuming that you have a way of accepting hasTerminated messages from your external processes. This controller communicates with the JMS listener using a Semaphore. The Semaphore is initialized with 10 permits; every time an external process calls TerminationController#terminate (or however the external processes report back to your listener process) a permit is released back to the Semaphore, and the JMSListener must first acquire a permit before it can call messageConsumer.receive(), which ensures that no more than ten processes can be active at a time.
// created in parent class
private final Semaphore semaphore = new Semaphore(10);
#Controller
public class TerminationController {
private final Semaphore semaphore;
public TerminationController(Semaphore semaphore) {
this.semaphore = semaphore;
}
// Called from external processes when they terminate
public void terminate() {
semaphore.release();
}
}
public class JMSListener implements Runnable {
private final MessageConsumer messageConsumer;
private final Semaphore semaphore;
public JMSListener(MessageConsumer messageConsumer, Semaphore semaphore) {
this.messageConsumer = messageConsumer;
this.semaphore = semaphore;
}
public void run() {
    while (true) {
        try {
            semaphore.acquire();                      // blocks until one of the 10 permits is free
            Message message = messageConsumer.receive();
            // create a process from the message
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return;
        } catch (JMSException e) {
            e.printStackTrace();
        }
    }
}
}
I think a simple while check would suffice. Here's some Pseudocode.
While (running processes are less than 10) {
add one to the running processes list
do something with the message
}
and in the code for onMessage:
function declaration of on Message(Parameters) {
do something
subtract 1 from the running processes list
}
Make sure that the variable you're using to count the number of running processes is declared as volatile.
Example as requested:
public static volatile int numOfProcesses = 0;
while (true) {
if (numOfProcesses < 10) {
// read a message and make a new process, etc
// probably put your receive code here
numOfProcesses++;
}
}
Wherever the code for your processes is written:
// do stuff, do stuff, do more stuff
// finished stuff
numOfProcesses--;
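If you are worried about lost updates, an AtomicInteger does the same job without relying on volatile increments; a sketch mirroring the fragments above (note that ++ and -- are not atomic on a volatile int):
import java.util.concurrent.atomic.AtomicInteger;

public static final AtomicInteger numOfProcesses = new AtomicInteger(0);

// receive loop
while (true) {
    if (numOfProcesses.get() < 10) {
        // read a message and start a new process, then:
        numOfProcesses.incrementAndGet();
    }
}

// wherever a process finishes
numOfProcesses.decrementAndGet();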
I'm doing a CPU scheduling simulator project for my OS course. The program should consist of two threads: a producer thread and a consumer thread. The producer thread includes the generator, which generates processes in the system, and the long-term scheduler, which selects a number of processes and puts them in an object called Buffer of type ReadyQueue (shared between consumer and producer). The consumer thread includes the short-term scheduler, which takes processes from the queue and starts the scheduling algorithm. I wrote the whole program without threads and it worked properly, but now I need to add threads, and since I have never used them I would appreciate it if someone could show me how to modify the code below to implement the required threads.
Here's the Producer class implementation:
public class Producer extends Thread{
ReadyQueue Buffer = new ReadyQueue(20); // Shared Buffer of size 20 between consumer and producer
JobScheduler js = new JobScheduler(Buffer);
private boolean systemTerminate = false; // Flag to tell Thread that there are no more processes in the system
public Producer(ReadyQueue buffer) throws FileNotFoundException{
Buffer = buffer;
Generator gen = new Generator(); // Generator generates processes and put them in a vector called memory
gen.writeOnFile();
}
@Override
public void run() {
synchronized(this){
js.select(); // Job Scheduler will select processes to be put in the Buffer
Buffer = (ReadyQueue) js.getSelectedProcesses();
while(!Buffer.isEmpty()){
try {
wait(); // When Buffer is empty wait until getting notification
} catch (InterruptedException e) {
e.printStackTrace();
}
systemTerminate = js.select();
Buffer = (ReadyQueue) js.getSelectedProcesses();
if(systemTerminate) // If the flag's value is true the thread yields
yield();
}
}
}
public ReadyQueue getReadyQueue(){
return Buffer;
}
}
This is the Consumer class implementation:
public class Consumer extends Thread{
ReadyQueue Buffer = new ReadyQueue(20);
Vector<Process> FinishQueue = new Vector<Process>();
MLQF Scheduler ;
public Consumer(ReadyQueue buffer){
Buffer = buffer;
Scheduler = new MLQF(Buffer,FinishQueue); // An instance of the multi-level Queue Scheduler
}
@Override
public void run() {
int count = 0; // A counter to track the number of processes
while(true){
synchronized(this){
Scheduler.fillQueue(Buffer); // Take contents in Buffer and put them in a separate queue in the scheduler
Scheduler.start(); // Start Scheduling algorithm
count++;
}
if(count >= 200) // If counter exceeds the maximum number of processes thread must yeild
yield();
notify(); // Notify Producer thread when buffer is empty
}
}
public void setReadyQueue(ReadyQueue q){
Buffer = q;
}
}
This is the main Thread:
public class test {
public static void main(String[] args) throws FileNotFoundException,InterruptedException {
ReadyQueue BoundedBuffer = new ReadyQueue(20);
Producer p = new Producer(BoundedBuffer);
Consumer c = new Consumer(p.getReadyQueue());
p.start();
System.out.println("Ready Queue: "+p.getReadyQueue());
p.join();
c.start();
c.join();
}
}
Thank you in advance.
One problem with your code is that it suffers from a common bug in multithreaded producer/consumer models: you must use a while loop around the wait() calls. For example:
try {
// we must do this test in a while loop because of consumer race conditions
while(!Buffer.isEmpty()) {
wait(); // When Buffer is empty wait until getting notification
...
}
} catch (InterruptedException e) {
e.printStackTrace();
}
The issue is that if you have multiple threads consuming, you may notify a thread, but then another thread may come through and dequeue the item that was just added. When a thread is moved from the WAIT queue to the RUN queue after being notified, it will usually be put at the end of the queue, possibly behind other threads waiting to synchronize on this.
For more details, see my documentation about this.