Returning a value from thread - java

First of all, yes, I looked this question up on Google and I did not find any answer to it. There are only answers where the thread is FINISHED and then the value is returned. What I want is to return an "infinite" amount of values.
Just to make it clearer: my thread is reading messages from a socket and never really finishes. So whenever a new message comes in, I want another class to get this message. How would I do that?
public void run() {
    while (ircMessage != null) {
        ircMessage = in.readLine();
        System.out.println(ircMessage);
        if (ircMessage.contains("PRIVMSG")) {
            String[] ViewerNameRawRaw;
            ViewerNameRawRaw = ircMessage.split("#");
            String ViewerNameRaw = ViewerNameRawRaw[2];
            String[] ViewerNameR = ViewerNameRaw.split(".tmi.twitch.tv");
            viewerName = ViewerNameR[0];
            String[] ViewerMessageRawRawRaw = ircMessage.split("PRIVMSG");
            String ViewerMessageRawRaw = ViewerMessageRawRawRaw[1];
            String[] ViewerMessageRaw = ViewerMessageRawRaw.split(":", 2);
            viewerMessage = ViewerMessageRaw[1];
        }
    }
}

What you are describing is a typical scenario of asynchronous communication, and the usual solution is a queue. Your thread is a producer: each time it reads a message from the socket, it builds a result and puts it on a queue. Any entity that is interested in the result listens to the queue (i.e. acts as a consumer). Read more about queues: you can send a message so that only one consumer gets it, or publish it so that all registered consumers may get it. The queue implementation can be a commercially available product such as RabbitMQ, or as simple as the in-memory queues that Java itself provides (see the Queue interface and its various implementations). Another way to go about it is communication over the web (HTTP): your thread reads a message from the socket, builds a result, and sends it over HTTP, for example via a REST API that the consumer exposes and your thread can call.
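As a rough illustration, an in-memory version of this could be a small sketch built on Java's BlockingQueue. The class and method names below are made up for the example; the socket-reading thread plays the producer role and any interested class consumes from the queue by looping on nextMessage():
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class IrcMessagePump {
    // Shared queue: the socket-reading thread produces into it, other classes consume from it.
    private final BlockingQueue<String> messages = new LinkedBlockingQueue<>();

    // Producer side: call this from the reading thread for every line received.
    public void publish(String ircMessage) throws InterruptedException {
        messages.put(ircMessage);
    }

    // Consumer side: blocks until the next message is available, so the consumer
    // can keep calling this in its own loop and receive an "infinite" stream of values.
    public String nextMessage() throws InterruptedException {
        return messages.take();
    }
}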

Why not have a status variable in your thread class? You can then update this during execution and before exiting. Once the thread has completed, you can still query the status.
public static void main(String[] args) throws InterruptedException {
    threading th = new threading();
    System.out.println("before run Status:" + th.getStatus());
    th.start();
    Thread.sleep(500);
    System.out.println("running Status:" + th.getStatus());
    while (th.isAlive()) {}
    System.out.println("after run Status:" + th.getStatus());
}
Extend Thread to be:
public class threading extends Thread {
    private int status = -1; // not started

    private void setStatus(int status) {
        this.status = status;
    }

    public void run() {
        setStatus(1); // running
        try {
            Thread.sleep(1000);
        } catch (InterruptedException e) {
        }
        setStatus(0); // exit clean
    }

    public int getStatus() {
        return this.status;
    }
}
And get an output of:
before run Status:-1
running Status:1
after run Status:0

Related

How to write Vertx worker verticle - indefinite blocking operation?

The following class is my worker verticle, in which I want to execute blocking code upon receiving a message from the event bus on a channel named events-config.
The objective is to generate and publish JSON messages indefinitely until I receive a stop operation message on the events-config channel.
I am using executeBlocking to achieve the desired functionality. However, since I am running the blocking operation indefinitely, the Vert.x blocked-thread checker dumps warnings.
Question:
- Is there a way to disable the blocked-thread checker only for a specific verticle?
- Does the code below adhere to best practice for running an infinite loop on demand in Vert.x? If not, can you please suggest the best way to do this?
public class WorkerVerticle extends AbstractVerticle {
    Logger logger = LoggerFactory.getLogger(WorkerVerticle.class);
    private MessageConsumer<Object> mConfigConsumer;
    AtomicBoolean shouldPublish = new AtomicBoolean(true);
    private JsonGenerator mJsonGenerator = new JsonGenerator();

    @Override
    public void start() {
        mConfigConsumer = vertx.eventBus().consumer("events-config", message -> {
            String msgBody = (String) message.body();
            if (msgBody.contains(PublishOperation.START_PUBLISH.getName()) && !mJsonGenerator.isPublishOnGoing()) {
                logger.info("Message received to start producing data onto kafka " + msgBody);
                vertx.<Void>executeBlocking(voidFutureHandler -> {
                    Integer numberOfMessagesToBePublished = 100000;
                    if (numberOfMessagesToBePublished <= 0) {
                        logger.info("Skipping message publish :" + numberOfMessagesToBePublished);
                        return; // is it best way to do it ??
                    }
                    publishData(numberOfMessagesToBePublished);
                }, false, voidAsyncResult -> logger.info("Blocking publish operation is terminated"));
            } else if (msgBody.contains(PublishOperation.STOP_PUBLISH.getName()) && mJsonGenerator.isPublishOnGoing()) {
                logger.info("Message received to terminate " + msgBody);
                mJsonGenerator.terminatePublish();
            }
        });
    }

    private void publishData(int numberOfMessagesToBePublished) {
        while (shouldPublish.get()) {
            // code to generate json indefinitely until someone resets the shouldPublish variable
        }
    }
}
You don't want to use busy loops in your asynchronous code.
Use vertx.setPeriodic() or vertx.setTimer() instead:
vertx.setTimer(20, (l) -> {
    // Generate your JSON
    if (shouldPublish.get()) {
        // Set timer again
    }
});
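A slightly fuller sketch of the same idea, assuming the shouldPublish flag from the question and a hypothetical generateAndPublishOne() helper standing in for the JSON/Kafka publishing code, could look like this:
vertx.setPeriodic(20, id -> {
    if (!shouldPublish.get()) {
        vertx.cancelTimer(id); // stop rescheduling once publishing is switched off
        return;
    }
    generateAndPublishOne();   // publish one small batch per tick, then yield the event loop
});
Each tick does a bounded amount of work and returns, so the event loop is never blocked and the blocked-thread checker stays quiet.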

Java wait(), notifyAll() not working as expected

I have one server which accepts multiple client connections and performs the following operations:
Client 1 transmits one line of information to the server and waits for the server-side operation to complete.
Client 2 transmits one line of information to the server and waits for the server-side operation to complete.
When the server has received information from both clients, it performs a certain operation, notifies both clients, and again goes into a wait state for both clients to transmit their next line of information. Somehow, though, the code I have written does not seem to work properly.
Server code snippet:
class ServerPattern extends Thread {
    @Override
    public void run() {
        try {
            while (moreData) {
                if (clientId == 1) {
                    synchronized (BaseStation.sourceAReadMonitor) {
                        BaseStation.sourceAReadComplete = false;
                        SourceARead.complete();
                        BaseStation.sourceAReadMonitor.notifyAll();
                    }
                } else if (clientId == 2) {
                    BaseStation.sourceBReadComplete = false;
                }
                //this.wait();
                synchronized (BaseStation.patternGenerationReadMonitor) {
                    BaseStation.patternGenerationReadMonitor.wait();
                }
            }
            newSock.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
Client code snippet:
class sReadA extends Thread {
    public static void serverJobComplete() {
        System.out.println("Source A Server job complete , Notifying Thread");
        synchronized (BaseStation.sourceAReadMonitor) {
            BaseStation.sourceAReadMonitor.notifyAll();
        }
    }

    //public void readFile(){
    public void run() {
        try {
            while ((line = br.readLine()) != null) {
                synchronized (BaseStation.sourceAReadMonitor) {
                    if (BaseStation.patternGenerationComplete == true && BaseStation.sourceAReadComplete == false) {
                        BaseStation.sourceAReadComplete = true;
                        BaseStation.sourceAReadMonitor.wait();
                    } else if (BaseStation.sourceAReadComplete == true) {
                        synchronized (BaseStation.patternGenerationReadMonitor) {
                            BaseStation.patternGenerationReadMonitor.notifyAll();
                        }
                    }
                }
            }
            //ToDo : Wait for ServerSide Operation to Complete, later iterate till end of file
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
public class SourceARead {
    public static void complete() {
        System.out.println("Complete Called");
        sReadA.serverJobComplete();
    }

    public static void main(String args[]) {
        sReadA sAR = new sReadA(fName);
        sAR.start();
    }
}
Can you describe what problem you are facing with this?
You should use an HTTP server port for this requirement. On the server side, open a server-type port and have the clients connect to this port. The java.net API can be useful for this.
You should not use wait/notify for this requirement, as they are thread-level controls and depend on the JVM implementation. Every object has its own wait set of threads; once the currently running thread exits, the waiting threads are notified.
If this is your requirement, then make sure that you are synchronizing on the correct object.
wait/notify/notifyAll only coordinate multiple threads running in the same JVM.
BaseStation.patternGenerationReadMonitor.notifyAll() notifies the threads waiting on that monitor object, not on BaseStation itself.
Hope this solves your problem.
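To illustrate the point about waiting and notifying on the same monitor object, here is a minimal, self-contained sketch (the names are invented for the example and are unrelated to the poster's classes):
public class SharedMonitorExample {
    private static final Object MONITOR = new Object();
    private static boolean ready = false;

    public static void main(String[] args) throws InterruptedException {
        Thread waiter = new Thread(() -> {
            synchronized (MONITOR) {            // lock the same object we wait on
                while (!ready) {                // guard against spurious wakeups
                    try {
                        MONITOR.wait();
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        return;
                    }
                }
                System.out.println("Notified, condition is now true");
            }
        });
        waiter.start();

        Thread.sleep(500);
        synchronized (MONITOR) {                // notify on the very same object
            ready = true;
            MONITOR.notifyAll();
        }
    }
}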

How to read a message in Netty in another class

I want to read a message at a specific position in a class other than the InboundHandler. I can't find a way to read it except in the channelRead0 method, which is called by the Netty framework.
For example:
context.writeMessage("message");
String msg = context.readMessage;
If this is not possible, how can I map a result, which I get in the channelRead0 method to a specific call I made in another class?
The Netty framework is designed to be asynchronously driven. Using this model, it can handle a large number of connections with minimal thread usage. If you are creating an API that uses the Netty framework to dispatch calls to a remote location, you should follow the same model for your calls.
Instead of making your API return the value directly, make it return a Future<?> or a Promise<?>. There are different ways of implementing this in your application; the simplest is to create a custom handler that maps the incoming responses to the pending Promises in a FIFO queue.
An example of this could be the following:
This is heavily based on this answer that I submitted in the past.
We start with our handler, which maps incoming responses to the pending promises in our pipeline:
public class MyLastHandler extends SimpleChannelInboundHandler<String> {
    private final SynchronousQueue<Promise<String>> queue;

    public MyLastHandler(SynchronousQueue<Promise<String>> queue) {
        super();
        this.queue = queue;
    }

    // The following is called messageReceived(ChannelHandlerContext, String) in 5.0.
    @Override
    public void channelRead0(ChannelHandlerContext ctx, String msg) {
        this.queue.remove().setSuccess(msg);
        // Or setFailure(Throwable)
    }
}
We then need to have a method of sending the commands to a remote server:
Channel channel = ....;
SynchronousQueue<Promise<String>> queue = ....;

public Future<String> sendCommandAsync(String command) {
    return sendCommandAsync(command, new DefaultPromise<>());
}

public Future<String> sendCommandAsync(String command, Promise<String> promise) {
    synchronized (channel) {
        queue.offer(promise);
        channel.write(command);
    }
    channel.flush();
    return promise;
}
Once we have these methods, we need a way to call them:
sendCommandAsync("USER anonymous",
new DefaultPromise<>().addListener(
(Future<String> f) -> {
String response = f.get();
if (response.startWidth("331")) {
// do something
}
// etc
}
)
);
If the caller would like to use our API as a blocking call, he can also do that:
String response = sendCommandAsync("USER anonymous").get();
if (response.startsWith("331")) {
    // do something
}
// etc
Notice that Future.get() can throw an InterruptedException if the thread's interrupt flag is set, unlike a socket read operation, which can only be cancelled by some interaction on the socket. This exception should not be a problem in the FutureListener.

Threads and Interrupts: Continue or exit?

The official documentation and forum posts I could find are very vague on this. They say it's up to the programmer to decide whether to continue after being interrupted or exit, but I can't find any documentation of the conditions that would warrant one or the other.
Here is the code in question:
private final LinkedBlockingQueue<Message> messageQueue = new LinkedBlockingQueue<Message>();

// The sender argument is an enum describing who sent the message: the user, the app, or the person on the other end.
public void sendMessage(String address, String message, Sender sender) {
    messageQueue.offer(Message.create(address, message, sender));
    startSenderThread();
}

private Thread senderThread;

private void startSenderThread() {
    if (senderThread == null || !senderThread.isAlive()) {
        senderThread = new Thread() {
            @Override
            public void run() {
                loopSendMessage();
            }
        };
        senderThread.start();
    }
}

private void loopSendMessage() {
    Message queuedMessage;
    // Should this condition simply be `true` instead?
    while (!Thread.interrupted()) {
        try {
            queuedMessage = messageQueue.poll(10, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            EasyLog.e(this, "SenderThread interrupted while polling.", e);
            continue;
        }
        if (queuedMessage != null)
            sendOrQueueMessage(queuedMessage);
        else
            break;
    }
}

// Queue in this context means storing the message in the database
// so it can be sent later.
private void sendOrQueueMessage(Message message) {
    //Irrelevant code omitted.
}
The sendMessage() method can be called from any thread and at any time. It posts a new message to send to the message queue and starts the sender thread if it isn't running. The sender thread polls the queue with a timeout, and processes the messages. If there are no more messages in the queue, the thread exits.
It's for an Android app that automates SMS message handling. This is in a class that handles the outbound messages, deciding whether to immediately send them or save them to send later, as Android has an internal 100 message/hour limit that can only be changed by rooting and accessing the settings database.
Messages can be sent from different parts of the app simultaneously, by the user or the app itself. Deciding when to queue for later needs to be handled synchronously to avoid needing atomic message counting.
I want to handle interrupts gracefully, but I don't want to stop sending messages if there are more to send. The Java documentation on threading says most methods simply return after being interrupted, but that will leave unsent messages in the queue.
Could anyone please recommend a course of action?
I guess the answer depends on why you are being interrupted. Often threads are interrupted because some other process/thread is trying to cancel or kill them. In those cases, stopping is appropriate.
Perhaps when interrupted, you send out all remaining messages and don't accept new ones?
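A rough sketch of that idea, reusing the Message, messageQueue, and sendOrQueueMessage names from the question: on interrupt, stop taking new work, drain whatever is already queued without blocking, and then exit.
private void loopSendMessage() {
    Message queuedMessage;
    while (!Thread.currentThread().isInterrupted()) {
        try {
            queuedMessage = messageQueue.poll(10, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            break; // interrupted: fall through to the drain below
        }
        if (queuedMessage != null)
            sendOrQueueMessage(queuedMessage);
        else
            return; // queue stayed empty for 10 seconds: nothing left to send
    }
    // Interrupted: flush what is already queued, but accept no new messages.
    while ((queuedMessage = messageQueue.poll()) != null)
        sendOrQueueMessage(queuedMessage);
    Thread.currentThread().interrupt(); // restore the interrupt flag for the caller
}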

BlockingQueue consumer has no response while queue is not empty

I have a distributed system whose nodes receive message objects through a socket. The messages are written to a BlockingQueue when received and processed in another thread. I make sure that there is just one BlockingQueue instance within a machine. The incoming rate is very high, roughly thousands of messages per second. The consumer works well at first, but blocks (has no response at all) after a certain period - I have checked that the BlockingQueue is not empty, so it should not be blocked by BlockingQueue.take(). When I manually decrease the rate of incoming message objects, the consumer works absolutely well. This is quite confusing...
Could you help me identify the problem? Thanks a lot in advance.
Consumer code:
ThreadFactory threadFactory = new ThreadFactoryBuilder()
        .setNameFormat(id + "-machine-worker")
        .setDaemon(false)
        .setPriority(Thread.MAX_PRIORITY)
        .build();
ExecutorService executor = Executors.newSingleThreadExecutor(threadFactory);
executor.submit(new Worker(machine));

public static class Worker implements Runnable {
    Machine machine;

    public Worker(Machine machine) {
        this.machine = machine;
    }

    @Override
    public void run() {
        while (true) {
            try {
                Message message = machine.queue.take();
                // Do my staff here...
            } catch (Exception e) {
                logger.error(e);
            }
        }
    }
}
Producer code:
// Below code submits the SocketListener runnable described below
ExecutorService worker;
Runnable runnable = socketHandlerFactory.getSocketHandlingRunnable(socket, queue);
worker.submit(runnable);
public SocketListener(Socket mySocket, Machine machine, LinkedBlockingQueue<Message> queue) {
this.id = machine.id;
this.socket = mySocket;
this.machine = machine;
this.queue = queue;
try {
BufferedInputStream bis = new BufferedInputStream(socket.getInputStream(), 8192*64);
ois = new ObjectInputStream(bis);
} catch (Exception e) {
logger.error("Error in create SocketListener", e);
}
}
#Override
public void run() {
Message message;
try {
boolean socketConnectionIsAlive = true;
while (socketConnectionIsAlive) {
if (ois != null) {
message = (Message) ois.readObject();
queue.put(message);
}
}
} catch (Exception e) {
logger.warn(e);
}
}
If you are using an unbounded queue, the whole system may be getting bogged down by memory pressure. It also means that the rate of production is not limited by the rate of consumption, so use a bounded queue.
Another piece of advice: take a full thread stack-trace dump when the blocking condition occurs to find out for certain where the consumer is blocking. You may get a surprise there.
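As a sketch, assuming the Message type from the question, a bounded queue only needs a capacity argument (the value below is arbitrary); put() then blocks the socket reader when the consumer falls behind instead of letting the heap fill up:
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class BoundedQueueSketch {
    // Capacity chosen arbitrarily for the example; tune it to your memory budget.
    private final BlockingQueue<Message> queue = new LinkedBlockingQueue<>(10000);

    // Producer side: blocks when the queue is full, applying back-pressure to the socket reader.
    void enqueue(Message message) throws InterruptedException {
        queue.put(message);
    }

    // Alternative: give up after a timeout instead of blocking indefinitely.
    boolean tryEnqueue(Message message) throws InterruptedException {
        return queue.offer(message, 100, TimeUnit.MILLISECONDS);
    }
}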
You have several candidate problem areas:
What actual BlockingQueue are you using? Did you hit the upper limit of an ArrayBlockingQueue?
How much memory did you allocate for your process? I.e., what is the max heap for this process? If you hit the upper limit of that heap space from your overload of incoming messages, it's entirely possible that you had an OutOfMemoryError.
What actually happens during your message processing ("Do my staff here..." [sic])? Is it possible that you have a deadlock inside that code that only shows up when you send many messages per second? Do you have an Exception eater somewhere down in that call stack that is hiding the real problem you are experiencing?
Where are your loggers logging to? Are you throwing away the indicative message because it's not logging to a location that you expect?
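On the last two points, one low-cost diagnostic is to catch Throwable rather than Exception in the worker loop, so that an Error such as OutOfMemoryError is at least logged instead of silently killing the single consumer thread. A rough sketch of the adjusted loop, using the same names as the question's code:
@Override
public void run() {
    while (true) {
        try {
            Message message = machine.queue.take();
            // Do my staff here...
        } catch (Throwable t) {
            // An uncaught Error would otherwise terminate this single worker thread,
            // leaving the queue to fill up with no visible failure.
            logger.error("Worker failed while processing a message", t);
        }
    }
}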
