BlockingQueue consumer has no response while queue is not empty - java

I have a distributed system whose nodes receive message objects through a socket. The messages are written to a BlockingQueue when received and processed in another thread. I make sure there is just one BlockingQueue instance per machine. The incoming rate is very high, roughly thousands of messages per second. The consumer works well at first, but blocks (no response at all) after a certain period - I have checked that the BlockingQueue is not empty, so it should not be blocked in BlockingQueue.take(). When I manually decrease the rate of incoming message objects, the consumer works absolutely fine. This is quite confusing...
Could you help me identify the problem? Thanks a lot in advance.
Consumer code:
ThreadFactory threadFactory = new ThreadFactoryBuilder()
        .setNameFormat(id + "-machine-worker")
        .setDaemon(false)
        .setPriority(Thread.MAX_PRIORITY)
        .build();
ExecutorService executor = Executors.newSingleThreadExecutor(threadFactory);
executor.submit(new Worker(machine));

public static class Worker implements Runnable {
    Machine machine;

    public Worker(Machine machine) {
        this.machine = machine;
    }

    @Override
    public void run() {
        while (true) {
            try {
                Message message = machine.queue.take();
                // Do my staff here...
            } catch (Exception e) {
                logger.error(e);
            }
        }
    }
}
Producer code:
// Below code submits the SocketListener runnable described below
ExecutorService worker;
Runnable runnable = socketHandlerFactory.getSocketHandlingRunnable(socket, queue);
worker.submit(runnable);

public SocketListener(Socket mySocket, Machine machine, LinkedBlockingQueue<Message> queue) {
    this.id = machine.id;
    this.socket = mySocket;
    this.machine = machine;
    this.queue = queue;
    try {
        BufferedInputStream bis = new BufferedInputStream(socket.getInputStream(), 8192 * 64);
        ois = new ObjectInputStream(bis);
    } catch (Exception e) {
        logger.error("Error in create SocketListener", e);
    }
}

@Override
public void run() {
    Message message;
    try {
        boolean socketConnectionIsAlive = true;
        while (socketConnectionIsAlive) {
            if (ois != null) {
                message = (Message) ois.readObject();
                queue.put(message);
            }
        }
    } catch (Exception e) {
        logger.warn(e);
    }
}

If you are using an unbounded queue, the whole system may be getting bogged down by memory pressure. It also means that the rate of production is not limited by the rate of consumption. So, use a bounded queue.
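For illustration, a minimal sketch of a bounded hand-off on the producer side, reusing the question's Message and logger; the capacity and timeout are arbitrary assumptions, not recommendations:
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// A bounded queue caps memory use and applies back-pressure on the producer.
private final BlockingQueue<Message> queue = new LinkedBlockingQueue<>(10_000);

// Try to enqueue, but give up after a timeout instead of blocking the socket reader forever.
private void enqueue(Message message) throws InterruptedException {
    if (!queue.offer(message, 100, TimeUnit.MILLISECONDS)) {
        logger.warn("Queue full, dropping message"); // or block with queue.put(message) for pure back-pressure
    }
}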
One more piece of advice: take a full thread stack dump when the blocking condition occurs, to find out for certain where the consumer is blocked. You may be in for a surprise there.
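For example, assuming you can add a diagnostic hook to the node, something like this dumps every live thread's stack from inside the JVM (running jstack <pid> from a shell gives equivalent output):
import java.util.Map;

static void dumpAllThreads() {
    // Print the name and full stack of every live thread in this JVM.
    for (Map.Entry<Thread, StackTraceElement[]> entry : Thread.getAllStackTraces().entrySet()) {
        System.out.println(entry.getKey());
        for (StackTraceElement frame : entry.getValue()) {
            System.out.println("\tat " + frame);
        }
    }
}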

You have several candidate problem areas:
What actual BlockingQueue are you using? Did you hit the upper limit of an ArrayBlockingQueue?
How much memory did you allocate for your process? I.e., what is the max heap for this process? If you hit the upper limit of that heap space from your overload of incoming messages, it's entirely possible that you had an OutOfMemoryError.
What actually happens during your message processing ("Do my staff here..." [sic])? Is it possible that you have a deadlock inside that code that you only expose when you send many messages per second? Do you have an Exception eater somewhere down in that call stack that's hiding the real problem you're experiencing? (See the sketch after this list.)
Where are your loggers logging to? Are you throwing away the indicative message because it's not logging to a location that you expect?
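On the last two points, one way to make a hidden failure visible is to widen the catch in the worker loop and log the full stack trace rather than just the exception object. A sketch based on the question's Worker, for diagnosis only, not a drop-in fix:
@Override
public void run() {
    while (true) {
        try {
            Message message = machine.queue.take();
            // Do my staff here...
        } catch (Throwable t) {               // also catches Errors such as OutOfMemoryError
            logger.error("Worker failed", t); // log the full stack trace, not just the message
        }
    }
}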

Related

Returning a value from thread

First of all, yes, I looked this question up on Google and I did not find any answer to it. There are only answers where the thread is FINISHED and then the value is returned. What I want is to return an "infinite" amount of values.
Just to make it more clear for you: My thread is reading messages from a socket and never really finishes. So whenever a new message comes in, I want another class to get this message. How would I do that?
public void run() {
    while (ircMessage != null) {
        ircMessage = in.readLine();
        System.out.println(ircMessage);
        if (ircMessage.contains("PRIVMSG")) {
            String[] ViewerNameRawRaw = ircMessage.split("#");
            String ViewerNameRaw = ViewerNameRawRaw[2];
            String[] ViewerNameR = ViewerNameRaw.split(".tmi.twitch.tv");
            viewerName = ViewerNameR[0];
            String[] ViewerMessageRawRawRaw = ircMessage.split("PRIVMSG");
            String ViewerMessageRawRaw = ViewerMessageRawRawRaw[1];
            String[] ViewerMessageRaw = ViewerMessageRawRaw.split(":", 2);
            viewerMessage = ViewerMessageRaw[1];
        }
    }
}
What you are describing is a typical scenario of asynchronous communication, and the usual solution is a queue. Your thread is a producer: each time it reads a message from the socket, it builds a result and puts it into a queue. Any entity that is interested in the result listens on the queue (i.e. acts as a consumer). Read more about queues: you can send a message so that only one consumer gets it, or publish it so that all registered consumers may get it. The queue implementation can be a full message broker such as RabbitMQ, or as simple as the in-memory queues Java itself provides (see the Queue interface and its various implementations). Another way to go about it is communication over HTTP: your thread reads a message from the socket, builds a result, and sends it, say via a REST API, to a consumer that exposes that API for your thread to call.
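A minimal in-memory sketch of that idea, assuming the messages are plain Strings; the method names publish and consumeLoop are invented for illustration:
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Shared queue: the socket-reading thread produces, any interested class consumes.
private final BlockingQueue<String> messages = new LinkedBlockingQueue<>();

// Producer side: called from the socket-reading thread for every line read.
private void publish(String ircMessage) throws InterruptedException {
    messages.put(ircMessage);
}

// Consumer side: runs on its own thread in whatever class wants the messages.
private void consumeLoop() throws InterruptedException {
    while (true) {
        String message = messages.take(); // blocks until the next message arrives
        System.out.println("Received: " + message); // replace with real handling
    }
}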
Why not have a status variable in your thread class? You can then update this during execution and before exiting. Once the thread has completed, you can still query the status.
public static void main(String[] args) throws InterruptedException {
    threading th = new threading();
    System.out.println("before run Status:" + th.getStatus());
    th.start();
    Thread.sleep(500);
    System.out.println("running Status:" + th.getStatus());
    while (th.isAlive()) {}
    System.out.println("after run Status:" + th.getStatus());
}
Extend thread to be:
public class threading extends Thread {
    private volatile int status = -1; // not started (volatile so other threads see updates)

    private void setStatus(int status) {
        this.status = status;
    }

    public void run() {
        setStatus(1); // running
        try {
            Thread.sleep(1000);
        } catch (InterruptedException e) {
        }
        setStatus(0); // exit clean
    }

    public int getStatus() {
        return this.status;
    }
}
And get an output of:
before run Status:-1
running Status:1
after run Status:0

Solving consumer producer concurrency issue with SynchronousQueue. Fairness property not working

I am having an issue debugging my SynchronousQueue. It's in Android Studio, but that should not matter since it's plain Java code. I am passing true to the SynchronousQueue constructor so it is "fair", meaning it acts as a FIFO queue. But it's not obeying the rules: it still lets the consumer print first and the producer after. The second issue I have is that I want these threads to never die. Do you think I should use a while loop in the producer and consumer threads and let them keep producing and consuming from each other?
Here is my simple code:
package com.example.android.floatingactionbuttonbasic;

import java.util.concurrent.SynchronousQueue;

import trikita.log.Log;

public class SynchronousQueueDemo {

    public SynchronousQueueDemo() {
    }

    public void startDemo() {
        final SynchronousQueue<String> queue = new SynchronousQueue<String>(true);

        Thread producer = new Thread("PRODUCER") {
            public void run() {
                String event = "FOUR";
                try {
                    queue.put(event); // thread will block here
                    Log.v("myapp", "published event:", Thread
                            .currentThread().getName(), event);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        };
        producer.start(); // starting publisher thread

        Thread consumer = new Thread("CONSUMER") {
            public void run() {
                try {
                    String event = queue.take(); // thread will block here
                    Log.v("myapp", "consumed event:", Thread
                            .currentThread().getName(), event);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        };
        consumer.start(); // starting consumer thread
    }
}
To start the threads I simply call new SynchronousQueueDemo().startDemo();
The logs always look like this, no matter what I pass to the SynchronousQueue constructor to make it "fair":
/SynchronousQueueDemo$2$override(26747): myapp consumed event: CONSUMER FOUR
V/SynchronousQueueDemo$1$override(26747): myapp published event:PRODUCER FOUR
Checking the docs here, it says the following:
public SynchronousQueue(boolean fair)
Creates a SynchronousQueue with the specified fairness policy.
Parameters:
fair - if true, waiting threads contend in FIFO order for access; otherwise the order is unspecified.
The fairness policy relates to the order in which the queue is read. The order of execution for a producer/consumer is for the consumer to take(), releasing the producer (which was blocking on put()). Set fairness=true if the order of consumption is important.
If you want to keep the threads alive, have a loop condition which behaves well when interrupted (see below). Presumably you want to put a Thread.sleep() in the Producer, to limit the rate at which events are produced.
public void run() {
    boolean interrupted = false;
    while (!interrupted) {
        try {
            // consumer side shown; in the producer, sleep and then queue.put(event)
            String event = queue.take();
        } catch (InterruptedException e) {
            interrupted = true;
        }
    }
}
SynchronousQueue works on a simple concept: you can only produce if you have a consumer.
1) If you call queue.put() without any matching queue.take(), the producing thread will block there. As soon as some thread calls queue.take(), the producer thread is unblocked.
2) Similarly, if you call queue.take() first, it will block until there is a producer. Once some thread calls queue.put(), the consumer thread is unblocked.
So as soon as queue.take() is executed, both the producer and consumer threads are unblocked. But realize that the producer and consumer run in separate threads, so the statements after the blocking calls can execute in either order. In my case the order of the output was this, with the producer printing first:
V/SynchronousQueueDemo$1$override(26747): myapp published event:PRODUCER FOUR
/SynchronousQueueDemo$2$override(26747): myapp consumed event: CONSUMER FOUR

Synchronize on DataOutputStream

I have gone through so many tutorials on Synchronization now that my head is spinning. I have never truly understood it :(.
I have a Java server (MainServer) that, when a client connects, creates a new thread (ServerThread) with a DataOutputStream.
The client talks to the ServerThread and the ServerThread responds. Every now and then, the MainServer will distribute a message to all clients using each ServerThread's DataOutputStream object.
I am quite certain that, every now and then, my issue is that both the MainServer and a ServerThread are trying to send something to the client at the same time. Therefore I need to lock on the DataOutputStream object. For the life of me, I cannot get this concept to stick. Every example I read is confusing.
What is the correct way to handle this?
ServerThread's send to client method:
public void replyToOne(String reply) {
    try {
        commandOut.writeUTF(reply);
        commandOut.flush();
    } catch (IOException e) {
        logger.fatal("replyToOne", e);
    }
    logger.info(reply);
}
MainServer's distribute to all clients method:
public static void distribute(String broadcastMessage) {
    for (Map.Entry<String, Object[]> entry : AccountInfoList.entrySet()) {
        Object[] tmpObjArray = entry.getValue();
        DataOutputStream temporaryCOut = (DataOutputStream) tmpObjArray[INT_COMMAND_OUT]; // can be grabbed while thread is using it
        try {
            temporaryCOut.writeUTF(broadcastMessage);
            temporaryCOut.flush();
        } catch (IOException e) {
            logger.error("distribute: writeUTF", e);
        }
        logger.info(broadcastMessage);
    }
}
I am thinking I should have something like this in my ServerThread class.
public synchronized DataOutputStream getCommandOut() {
    return commandOut;
}
Is it really that simple? I know this has likely been asked and answered, but I don't seem to be getting it still, without individual help.
If this were me.....
I would have a LinkedBlockingQueue on each client-side thread. Then, each time the client thread has a moment of idleness on the socket, it checks the queue. If there's a message to send from the queue, it sends it.
Then, the server, if it needs to, can just add items to that queue, and, when the connection has some space, it will be sent.
Add the queue, and have a method on the ServerThread something like:
public void addBroadcastMessage(MyData data) {
    broadcastQueue.add(data);
}
and then, on the socket side, have a loop that has a timeout-block on it, so that it breaks out of the socket if it is idle, and then just:
while (!broadcastQueue.isEmpty()) {
    MyData data = broadcastQueue.poll();
    // ... send the data ...
}
and you're done.
The LinkedBlockingQueue will manage the locking and synchronization for you.
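Put together, a rough sketch of the idea might look like the following; the class and field names are invented, the poll timeout is arbitrary, and for brevity it uses a dedicated writer loop rather than checking the queue only while the socket is idle:
import java.io.DataOutputStream;
import java.io.IOException;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class BroadcastWriter implements Runnable {
    private final BlockingQueue<String> broadcastQueue = new LinkedBlockingQueue<>();
    private final DataOutputStream commandOut;

    public BroadcastWriter(DataOutputStream commandOut) {
        this.commandOut = commandOut;
    }

    // Called by the MainServer; only enqueues, never touches the stream directly.
    public void addBroadcastMessage(String message) {
        broadcastQueue.add(message);
    }

    @Override
    public void run() {
        try {
            while (!Thread.currentThread().isInterrupted()) {
                // Wait briefly for something to send; all writes happen on this one thread.
                String message = broadcastQueue.poll(100, TimeUnit.MILLISECONDS);
                if (message != null) {
                    commandOut.writeUTF(message);
                    commandOut.flush();
                }
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        } catch (IOException e) {
            // log and drop the connection in a real server
        }
    }
}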
You are on the right track.
Every statement that modifies the DataOutputStream should be synchronized on that DataOutputStream, so that it is not accessed concurrently (and thus never sees a concurrent modification):
public void replyToOne(String reply) {
    try {
        synchronized (commandOut) { // writing block
            commandOut.writeUTF(reply);
            commandOut.flush();
        }
    } catch (IOException e) {
        logger.fatal("replyToOne", e);
    }
    logger.info(reply);
}
And:
public static void distribute(String broadcastMessage) {
    for (Map.Entry<String, Object[]> entry : AccountInfoList.entrySet()) {
        Object[] tmpObjArray = entry.getValue();
        DataOutputStream temporaryCOut = (DataOutputStream) tmpObjArray[INT_COMMAND_OUT]; // can be grabbed while thread is using it
        try {
            synchronized (temporaryCOut) { // writing block
                temporaryCOut.writeUTF(broadcastMessage);
                temporaryCOut.flush();
            }
        } catch (IOException e) {
            logger.error("distribute: writeUTF", e);
        }
        logger.info(broadcastMessage);
    }
}
Just putting my 2 cents:
The way I implement servers is this:
Each server is a thread with one task only: listening for connections. Once it recognizes a connection it generates a new thread to handle the connection's input/output (I call this sub-class ClientHandler).
The server also keeps a list of all connected clients.
ClientHandlers are responsible for user-server interactions. From here, things are pretty simple:
Disclaimer: there are no try-catch blocks here! Add them yourself. Of course, you can use thread executors to limit the number of concurrent connections.
Server's run() method:
@Override
public void run() {
    isRunning = true;
    while (isRunning) {
        ClientHandler ch = new ClientHandler(serversocket.accept());
        clients.add(ch);
        ch.start();
    }
}
ClientHandler's ctor:
public ClientHandler(Socket client){
out = new ObjectOutputStream(client.getOutputStream());
in = new ObjectInputStream(client.getInputStream());
}
ClientHandler's run() method:
@Override
public void run() {
    isConnected = true;
    while (isConnected) {
        handle(in.readObject());
    }
}
and handle() method:
private void handle(Object o) {
    // Your implementation
}
If you want a unified channel, say for output, then you'll have to synchronize it as instructed to avoid unexpected results.
There are two simple ways to do this:
Wrap every call to the output in a synchronized(this) block
Use a getter for the output (like you did) with the synchronized keyword
Either way, what matters is that every writer locks on the same object; a sketch follows.
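For example, one way to give both the MainServer and the ClientHandler a single safe write path is to funnel every write through one method that locks on the stream. This is a sketch: sendToClient is an invented name, and out is the ObjectOutputStream from the ClientHandler above.
// All writes from any thread go through this one method, so they cannot interleave.
public void sendToClient(Object message) throws IOException {
    synchronized (out) {
        out.writeObject(message);
        out.flush();
    }
}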

Java Producer Consumer Pattern with Consumer completion notification

I want to implement a producer/consumer scenario where I have multiple producers and a single consumer. Producers keep adding items to a queue and the consumer dequeues them. When the consumer has processed enough items, both the producers and the consumer should stop execution. The consumer can easily terminate itself once it has processed enough items, but the producers also need to know when to exit. The typical producer poison pills do not work here.
One way to do it would be to have a shared boolean variable between the consumer and the producers. The consumer sets the boolean to true, and the producers periodically check the variable and exit if it is set to true.
Any better ideas on how I can do this?
I suppose you can have a shared counter with a maximum. If an increment goes past the maximum, the producing thread can no longer add to the queue.
private final AtomicInteger count = new AtomicInteger(0);
private final int MAX = ...;
private final BlockingQueue<T> queue = ...;

public boolean add(T t) {
    if (count.incrementAndGet() > MAX)
        return false;
    return queue.offer(t);
}
Not sure if this approach would be any use.
Include a reference to the producer in the message.
Producer provides a callback method to tell it to stop producing.
Consumer keeps a registry of producers based on the unique set of references that are passed to it.
When the consumer has had enough, it iterates over the registry of producers, and tells them to stop by calling the callback method.
Would only work if producer and consumer are in the same JVM
Wouldn't stop any new producers from starting up
And I'm not sure it maintains the separation of producer and consumer
Alternatively, as the Queue is the shared resource between these two objects, could you introduce an "isOpen" state on the queue, which is checked before the producer writes to it and is set by the consumer when it has done as much work as it is happy to do? A sketch of that idea follows.
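A rough sketch of the "isOpen" idea, assuming a small wrapper around the shared queue (all names invented for illustration); producers simply exit their loop once offer() starts returning false:
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.atomic.AtomicBoolean;

public class ClosableQueue<T> {
    private final BlockingQueue<T> queue = new LinkedBlockingQueue<>();
    private final AtomicBoolean open = new AtomicBoolean(true);

    // Producers call this; once the queue is closed, offers are rejected.
    public boolean offer(T item) {
        return open.get() && queue.offer(item);
    }

    // The consumer calls this when it has processed enough items.
    public void close() {
        open.set(false);
    }

    public boolean isOpen() {
        return open.get();
    }

    public T take() throws InterruptedException {
        return queue.take();
    }
}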
From what I understand you'll need something like this:
private static final BlockingQueue<String> queue = new LinkedBlockingQueue<String>();
private static volatile boolean needMore = true; // volatile so producers see the consumer's update

static class Consumer implements Runnable {
    Scanner scanner = new Scanner(System.in);

    @Override
    public void run() {
        do {
            try {
                String s = queue.take();
                System.out.println("Got " + s);
                needMore = scanner.nextBoolean();
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        } while (needMore);
    }
}

static class Producer implements Runnable {
    Random rand = new Random();

    @Override
    public void run() {
        System.out.println("Starting new producer...");
        do {
            queue.add(String.valueOf(rand.nextInt()));
            try {
                Thread.sleep(1000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        } while (needMore);
        System.out.println("Producer shuts down.");
    }
}

public static void main(String[] args) throws Exception {
    Thread producer1 = new Thread(new Producer());
    Thread producer2 = new Thread(new Producer());
    Thread producer3 = new Thread(new Producer());
    Thread consumer = new Thread(new Consumer());

    producer1.start();
    producer2.start();
    producer3.start();
    consumer.start();

    producer1.join();
    producer2.join();
    producer3.join();
    consumer.join();
}
The consumer dynamically decides whether it needs more data and stops when, for example, it has found what it was searching for; here this is simulated by the user typing true/false to continue/stop.
Here is an I/O sample:
Starting new producer...
Starting new producer...
Starting new producer...
Got -1782802247
true
Got 314306979
true
Got -1787470224
true
Got 1035850909
false
Producer shuts down.
Producer shuts down.
Producer shuts down.
This may not look clean at first sight, but I think it's actually cleaner than having an extra variable etc. if you are trying to do this as part of a shutdown process.
Make your consumers an ExecutorService, and from your consumer task, call shutdownNow() when the task decides that the consumers have consumed enough. This will cancel all pending tasks on the queue, interrupt currently running tasks, and the producers will start to get RejectedExecutionException upon submission. You can treat this exception as a signal from the consumers.
The only caveat is that when you have multiple consumers, calling shutdownNow() in a serial manner will not guarantee that no task is executed after one consumer decides it has had enough. I'm assuming that's fine. If you need this guarantee, then you can indeed share an AtomicBoolean and let all producers and consumers check it.
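A rough sketch of that approach, with invented names (ShutdownDemo, consume) and an arbitrary termination threshold:
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.atomic.AtomicInteger;

class ShutdownDemo {
    private final ExecutorService consumer = Executors.newSingleThreadExecutor();
    private final AtomicInteger processed = new AtomicInteger();
    private final int limit = 1000; // arbitrary illustrative threshold

    // Producer side: submitting an item doubles as the "are we still running?" check.
    boolean produce(String item) {
        try {
            consumer.submit(() -> consume(item));
            return true;
        } catch (RejectedExecutionException e) {
            return false; // the consumer shut down; this producer can stop
        }
    }

    // Consumer side: once enough items are handled, stop everything.
    private void consume(String item) {
        // ... real per-item work would go here ...
        if (processed.incrementAndGet() >= limit) {
            consumer.shutdownNow(); // cancels queued tasks; further submits are rejected
        }
    }
}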

Producer/Consumer threads do not give results

I'm doing a CPU scheduling simulator project for my OS course. The program should consist of two threads: a producer thread and a consumer thread. The producer thread includes the generator, which creates processes in the system, and the long-term scheduler, which selects a number of processes and puts them into an object called Buffer of type ReadyQueue (shared by the consumer and the producer). The consumer thread includes the short-term scheduler, which takes processes from the queue and starts the scheduling algorithm. I wrote the whole program without threads and it worked properly, but now I need to add threads, and I have never used threads before, so I would appreciate it if someone could show me how to modify the code below to implement the required threads.
Here's the Producer class implementation:
public class Producer extends Thread {
    ReadyQueue Buffer = new ReadyQueue(20); // Shared buffer of size 20 between consumer and producer
    JobScheduler js = new JobScheduler(Buffer);
    private boolean systemTerminate = false; // Flag to tell the thread that there are no more processes in the system

    public Producer(ReadyQueue buffer) throws FileNotFoundException {
        Buffer = buffer;
        Generator gen = new Generator(); // Generator generates processes and puts them in a vector called memory
        gen.writeOnFile();
    }

    @Override
    public void run() {
        synchronized (this) {
            js.select(); // Job scheduler will select processes to be put in the Buffer
            Buffer = (ReadyQueue) js.getSelectedProcesses();
            while (!Buffer.isEmpty()) {
                try {
                    wait(); // When Buffer is empty wait until getting notification
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
                systemTerminate = js.select();
                Buffer = (ReadyQueue) js.getSelectedProcesses();
                if (systemTerminate) // If the flag's value is true the thread yields
                    yield();
            }
        }
    }

    public ReadyQueue getReadyQueue() {
        return Buffer;
    }
}
This is the Consumer class implementation:
public class Consumer extends Thread {
    ReadyQueue Buffer = new ReadyQueue(20);
    Vector<Process> FinishQueue = new Vector<Process>();
    MLQF Scheduler;

    public Consumer(ReadyQueue buffer) {
        Buffer = buffer;
        Scheduler = new MLQF(Buffer, FinishQueue); // An instance of the multi-level queue scheduler
    }

    @Override
    public void run() {
        int count = 0; // A counter to track the number of processes
        while (true) {
            synchronized (this) {
                Scheduler.fillQueue(Buffer); // Take contents of Buffer and put them in a separate queue in the scheduler
                Scheduler.start(); // Start the scheduling algorithm
                count++;
            }
            if (count >= 200) // If the counter exceeds the maximum number of processes the thread must yield
                yield();
            notify(); // Notify Producer thread when buffer is empty
        }
    }

    public void setReadyQueue(ReadyQueue q) {
        Buffer = q;
    }
}
This is the main Thread:
public class test {
    public static void main(String[] args) throws FileNotFoundException, InterruptedException {
        ReadyQueue BoundedBuffer = new ReadyQueue(20);
        Producer p = new Producer(BoundedBuffer);
        Consumer c = new Consumer(p.getReadyQueue());
        p.start();
        System.out.println("Ready Queue: " + p.getReadyQueue());
        p.join();
        c.start();
        c.join();
    }
}
Thank you in advance.
One problem with your code is that it suffers from a common bug in multithreaded producer/consumer models: you must use a while loop around the wait() calls. For example:
try {
    // we must do this test in a while loop because of consumer race conditions
    while (!Buffer.isEmpty()) {
        wait(); // When Buffer is empty wait until getting notification
        ...
    }
} catch (InterruptedException e) {
    e.printStackTrace();
}
The issue is that if you have multiple consuming threads, you may notify a thread, but then another thread may come through and dequeue the item that was just added. When a thread is moved from the WAIT queue to the RUN queue after being notified, it is usually put at the end of the queue, possibly behind other threads waiting to synchronize on this.
For more details about that see my documentation about this.
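As a general illustration of the pattern (not a drop-in fix for the scheduler code above), a guarded block on a shared lock usually looks something like this:
import java.util.LinkedList;
import java.util.Queue;

class GuardedBuffer {
    private final Object lock = new Object();
    private final Queue<String> buffer = new LinkedList<>();

    // Consumer: re-check the condition in a while loop every time we wake up.
    String take() throws InterruptedException {
        synchronized (lock) {
            while (buffer.isEmpty()) {
                lock.wait();
            }
            return buffer.poll();
        }
    }

    // Producer: add an item and wake up any waiting consumers.
    void put(String item) {
        synchronized (lock) {
            buffer.add(item);
            lock.notifyAll();
        }
    }
}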
