ConcurrentLinkedQueue doesn't work as expected - java

I'm developing an Android app that is made of a Service running in the background and some Activities connected to that Service. The Service runs in its own process.
My Service mainly has three classes: ServiceMain, ServiceWorker, and Message.
ServiceMain has all the functions that are used by the Activities, like logIn, logOut, send, and so on.
Message represents a message that is sent to our server or received. It is simply a String and a boolean, where the String is the message and the boolean is a flag saying whether a response from the server is needed.
ServiceWorker is a subclass of Thread and does all the sending and receiving of messages using sockets.
ServiceMain contains 2 Queues:
Queue<Message> Sendingqueue = new ConcurrentLinkedQueue<Message>();
Queue<Message> Recievequeue = new ConcurrentLinkedQueue<Message>();
If the logIn method is called, a ServiceWorker is created and started. In its constructor it gets references to both queues and holds them.
private final Queue<Message> Sendingqueue;
private final Queue<Message> Recievequeue;
ServiceMain then creates some messages (M1 and M2, for example) and adds them to the Sendingqueue.
The ServiceWorker builds the connection to our server and then runs in a loop where it looks for messages in the Sendingqueue, sends them, and does other stuff like receiving...
I hope the scenario is clear now.
Within ServiceWorker something strange happens on Sendingqueue:
Let's say ServiceMain added two messages, M1 and M2, to the Sendingqueue while the ServiceWorker was doing something time-consuming or was not connected to our server.
The Sendingqueue now contains two messages.
When the ServiceWorker next checks the length of the queue, it sees two items. OK so far.
Then it calls peek() on the Sendingqueue (a message is removed only once it has been successfully sent) and should get M1, because it was added first.
But it gets M2.
The Sendingqueue seems to be reversed.
What's going wrong here? What can I do to avoid this?
Thanks for any constructive reply.
Detlef

ConcurrentLinkedQueue is FIFO: the order of elements won't change if you are adding at the tail and taking from the head (or vice versa), so this should work. You could run into a problem if you add and remove at the same end, as that would mean you are processing the newest element rather than the oldest each time.
Even if you had a large, powerful server, I would still suggest this approach is overkill. Instead of having a background thread perform the processing, I would use the main thread.
Note: the socket is already an input and output queue on both the client and the server, so adding a third layer of queuing may be redundant in a large system and inefficient on a small device.
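To illustrate the FIFO behaviour, here is a minimal stand-alone sketch; the queue and message names are simplified stand-ins for the question's code, not the actual classes:

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

public class FifoDemo {
    public static void main(String[] args) throws InterruptedException {
        final Queue<String> sendingQueue = new ConcurrentLinkedQueue<>();

        // A producer thread adds M1 then M2, as ServiceMain would.
        Thread producer = new Thread(() -> {
            sendingQueue.add("M1");
            sendingQueue.add("M2");
        });
        producer.start();
        producer.join();

        // peek() looks at the head without removing it: always the oldest element.
        System.out.println(sendingQueue.peek()); // M1
        sendingQueue.poll();                     // remove only after a successful send
        System.out.println(sendingQueue.peek()); // M2
    }
}
```

If peek() really returned M2 first, the likely culprit is code elsewhere adding at the wrong end or using a second queue instance, not the queue's ordering.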

Related

ZeroMQ failing to publish messages

I'm trying to get a basic implementation of a ZMQ publisher and subscriber working, but it's failing silently. I'm using JeroMQ 0.5.2 (the current version) and Java 8.
Consider the following official test file:
https://github.com/zeromq/jeromq/blob/master/src/test/java/org/zeromq/PubSubTest.java
I've copied the first test (testRaceConditionIssue322) in its entirety into a new main class and run it. The publisher binds to the port, and claims to send every message, but the receiver does not receive a single message. Adding logs indicates that the subscriber believes itself to be subscribed before the publisher sends messages.
I've tried this on two computers, as well as with different code, and it's the same net result each time. What gives?
Ok, I figured it out. Two things converged in an unfortunate way.
The test I linked was, possibly on purpose, starting the subscriber before the publisher. For some reason the subscriber reported a successful connection even though the publisher had not yet opened the port; it did not actually connect, and did not receive the messages subsequently sent. When I made sure the publisher was bound and listening for connections, then connected the subscriber, and then published the messages, it worked how I was expecting.
The OTHER code I was using as a subscriber had a line in it that I hadn't noticed: socket.hasReceiveMore(). It was expecting two strings in one message, but I was sending the two strings separately. This meant part of the receiver code never executed: it received the strings I was sending, but discarded them as partial messages. When I sent my first string with the flag publisher.send(msg, ZMQ.SNDMORE); (and the second without it), it worked as I expected.

SimGrid. Asynchronous communications and failing links

The simulation has one master and seven workers. When the workers finish executing their data, they dsend MessageTasks to the master to report completion of execution.
getHost().setProperty("busy", "no");
ReleaseTask releaseTask = new ReleaseTask(getHost().getName());
releaseTask.dsend("Master");
The link connecting worker1 and the master is broken; this is the link1.fail file:
PERIODICITY 2
0 1
1 0
I expected that only one releaseTask (from worker1) would fail to reach the master. But, unfortunately, no releaseTasks (from the other workers) reach the master either. This error/warning appears:
[13.059397] /builds/workspace/SimGrid-Multi/build_mode/Debug/node/simgrid-ubuntu-trusty-64/build/SimGrid-3.13/src/simix/smx_global.cpp:554: [simix_kernel/CRITICAL] Oops ! Deadlock or code not perfectly clean.
[13.059397] [simix_kernel/INFO] 16 processes are still running, waiting for something.
The master receives tasks like this:
Task listenTask = Task.receive("Master");
When the link connecting worker1 and the master isn't broken, the whole simulation works fine.
How can I avoid this problem?
UPDATED
My platform.xml file:
<link id="0_11" state_file="linkfailures/0_11.fail" bandwidth="3.430125Bps" latency="4.669142ms"/>
0_11.fail file:
PERIODICITY 2
0 1
1 0
The worker starts to dsend a MessageTask to the master at 6.94 s. The MessageTask transmission time is 0.07 s, but at 7.00 s the link connecting master and worker breaks. I guess the master keeps "receiving" data indefinitely and the error occurs. But how do I handle it?
If you send your data with dsend, it only means that you don't care whether the receiver gets it or whether an error occurs. It does not make the communication any more robust (nor any less robust, either).
You updated your question, giving two possible outcomes for your simulation. Sometimes you say that no communication makes it to the master and that the simulation ends when SimGrid reports a deadlock (16 processes are still running, waiting for something), and sometimes you report that a TransferFailureException occurs. But actually, that is exactly what is expected in your case, if I'm right.
Here is what happens:
You send a message with dsend.
The message gets lost because the link fails. No, it does not take forever to deliver because the link failed; it just disappears immediately.
At this point there are two possible outcomes, depending on whether the link fails before or after the communication starts (that is, before or after the receiver posts its recv).
If the link fails before the receiver (the master, in your case, it seems) posts its recv request, the failure is not noticed. There is no receiver yet to inform, and by using dsend the sender said that it does not care about the communication's outcome.
If the link fails after the receiver posts its request, the sender does not notice anything (because of the dsend), but the receiver gets a TransferFailureException on its receive action. So the failing communication does kill someone even though it was sent with dsend; it is just that the master is the one who dies. That is why the other workers cannot communicate with the master: it got an uncaught exception while receiving something from the failed host.
If you want the sender to notice that a message did not get through (to resend it, maybe), then you don't want dsend but isend (for an asynchronous communication) or send (for a blocking one), and the sender has to check the status of the communication.
If you want your message to be delayed rather than destroyed, try changing the bandwidth of the link to 0 for a while (using availability_file instead of state_file).
If you want your receiver to survive such communication issues, just catch the exception it gets.

Messages, Handlers and Threading : Lego Mindstorms bluetooth communication

This question refers to writing an application that communicates with the NXT block on a LEGO Mindstorms robot.
What I want to do
NXC (Not eXactly C, a language for writing programs for the NXT) supplies a function until(condition) that waits until condition evaluates to true. I want to implement this using the Bluetooth messaging protocol, talking to the NXT over Bluetooth from an Android application.
What I can do so far:
I'm able to send an input query message (getInputValue(int in)), which sends a message to the NXT asking for the current status of input in. The NXT then sends back a message with this information, which is written to a global variable that holds the most recently requested input value (let's call it myValue).
What the problem is:
I'm using bits and pieces from the LEGO MINDdroid application. In this class I have a separate communication thread that handles direct communication with the NXT. When it receives a message, it forwards it to the main thread via a Handler. The problem occurs when I try to busy-wait for a reply; doing:
while (myValue != valueIWant) {
    sleep(100);
    getInputValue(in);
}
ends up tying up the main thread, so the handler never actually gets to receive any messages. The communication thread receives the messages from the NXT and forwards them to the main thread, but the handler is never called because the main thread is busy doing other stuff.
What's the best way to get around this? I can't make the thread wait in any way, because that would also stop it from receiving messages. :(
Any suggestions would be appreciated! I'll also happily elaborate on any bits of code.
Links that may be useful
http://bricxcc.sourceforge.net/nbc/nxcdoc/nxcapi/main.html
http://github.com/NXT/LEGO-MINDSTORMS-MINDdroid
http://mindstorms.lego.com/en-us/support/files/default.aspx (for the bluetooth docs)
Solved, using callbacks :) Happy to elaborate if needed.
Edit: (sorry for late reply!)
I ended up implementing a callback mechanism, where I attach a 'callback' function to a list. When the handler receives a message, it looks through the list of callbacks to see whether the received message matches any callback present; if so, it executes the method inside the callback.
I then wrote a class around these callbacks with which I could create execution queues (doA; doB; doC;), and it would wrap those up into a callback chain (callBack({doA; callBack({doB; call...})})), which gave the impression of operating in a synchronous environment when in fact everything was asynchronous.
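A minimal sketch of such a callback registry, with all names invented for illustration (the real MINDdroid-based code is more involved):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class CallbackRegistry {
    // Maps an expected message key to the action to run when that message arrives.
    private final Map<String, Runnable> callbacks = new ConcurrentHashMap<>();

    public void register(String messageKey, Runnable action) {
        callbacks.put(messageKey, action);
    }

    // Called from the handler whenever a message arrives; runs the matching
    // callback once and discards it.
    public void onMessage(String messageKey) {
        Runnable action = callbacks.remove(messageKey);
        if (action != null) {
            action.run(); // may register the next callback, forming a chain
        }
    }
}
```

An execution queue "doA; doB" then becomes register("A", () -> { doA(); register("B", () -> doB()); }), which runs each step only once its expected message has arrived, with no busy-waiting on the main thread.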

Java Async Processing

I am currently developing a system that uses a lot of async processing. The transfer of information is done using queues, so one process puts info into a queue (and terminates) and another picks it up and processes it. My implementation leaves me facing a number of challenges, and I am interested in everyone's approach to these problems (in terms of architecture as well as libraries).
Let me paint the picture. Let's say you have three processes:
Process A -----> Process B
                     |
Process C <----------|
So Process A puts a message in a queue and ends, Process B picks up the message, processes it, and puts it in a "return" queue, and Process C picks up the message and processes it.
How does one handle Process B not listening for or not processing messages off the queue? Is there some JMS-type mechanism that prevents a producer from submitting a message when the consumer is not active, so that Process A's submit would throw an exception?
Let's say Process C has to get a reply within X minutes, but Process B has stopped (for whatever reason). Is there some mechanism that enforces a timeout on a queue, guaranteeing a reply within X minutes, which would kick off Process C?
Can all of this be handled with a dead-letter queue of some sort? Or should I be doing it all manually with timers and checks? I have mentioned JMS, but I am open to anything; in fact I am using Hazelcast for the queues.
Please note this is more of an architectural question, in terms of available Java technologies and methods, and I do feel this is a proper question.
Any suggestions will be greatly appreciated.
Thanks
IMHO, the simplest solution is to use an ExecutorService, or a solution based on one. It supports a queue of work and scheduled tasks (useful for timeouts).
It can also work in a single process. (I believe Hazelcast supports a distributed ExecutorService.)
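A sketch of the timeout side of this with a plain ExecutorService (the task body and timeout values are made up for illustration): the wait for B's reply becomes a Future.get with a deadline.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class QueueWithTimeout {
    public static void main(String[] args) throws Exception {
        ExecutorService workers = Executors.newSingleThreadExecutor(); // plays "Process B"

        // "Process A" submits the work; the executor's internal queue holds it.
        Future<String> reply = workers.submit(() -> {
            Thread.sleep(50); // simulated processing time
            return "processed";
        });

        try {
            // "Process C" waits at most 2 seconds for the reply.
            String result = reply.get(2, TimeUnit.SECONDS);
            System.out.println(result);
        } catch (TimeoutException e) {
            reply.cancel(true); // give up on B: the "reply within X minutes" path
            System.out.println("timed out");
        } finally {
            workers.shutdown();
        }
    }
}
```

If B has died, the get() times out and C can react, which covers the second question without any platform-specific queue features.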
It seems to me that the kind of questions you're asking are "smells" suggesting that queues and async processing may not be the best tools for your situation.
1) That defeats the purpose of a queue. It sounds like you need a synchronous request-response flow.
2) Generally speaking, Process C is not getting a reply; it is getting a message from a queue. If there is a message in the queue and Process C is ready, it will get it. Process C could decide that the message is stale once it receives it, for example.
I think your first question has already been answered adequately by the other posters.
On your second question, what you are trying to do may be possible depending on the messaging engine used by your application. I know this works with IBM MQ; I have seen it done using the WebSphere MQ classes for Java, but not JMS. The way it works is that when Process A puts a message on a queue, it specifies how long it will wait for a response message. If Process A fails to receive a response within the specified time, the system throws an appropriate exception.
I do not think there is a standard way in JMS to handle request/response timeouts the way you want, so you may have to use platform-specific classes such as the WebSphere MQ classes for Java.
Well, kind of the point of queues is to keep things pretty isolated.
If you're not stuck on any particular tech, you could use a database for your queues.
But first, a simple mechanism to ensure two processes are coordinated is to use a socket. If practical, simply have Process B open a socket listener on some well-known port, and have Process A connect to that socket and monitor it. If Process B ever goes away, Process A can tell, because its socket gets shut down, and it can use that as an alert of problems with Process B.
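A sketch of that liveness check, using loopback sockets inside one JVM for brevity (the port and names are illustrative; real processes would use a fixed well-known port):

```java
import java.net.ServerSocket;
import java.net.Socket;

public class LivenessDemo {
    public static void main(String[] args) throws Exception {
        // "Process B": listens on a port (0 picks any free port here).
        ServerSocket b = new ServerSocket(0);

        // "Process A": connects to B and monitors the socket.
        Socket a = new Socket("localhost", b.getLocalPort());
        Socket bSide = b.accept();

        // B goes away: its side of the connection closes.
        bSide.close();
        b.close();

        // A notices: read() returns -1 once the peer has closed.
        int r = a.getInputStream().read();
        System.out.println(r == -1 ? "B is gone" : "B is alive");
        a.close();
    }
}
```

In practice Process A would do this read on a monitoring thread and raise an alert when it sees end-of-stream.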
For the B -> C problem, have a db table:
create table queue (
    id integer,
    payload varchar(100), -- or whatever you can use to indicate a payload
    status varchar(1),
    updated timestamp
)
Then, Process A puts its entry on the queue, with the current time and a status of 'B'. B listens on the queue:
select * from queue where status = 'B' order by updated
When B is done, it updates the queue to set the status to "C".
Meanwhile, "C" is polling the DB with:
select * from queue where status = 'C'
or (status = 'B' and updated < (now - threshold)) order by updated
(with the threshold being however long you want things to rot on the queue).
Finally, C updates the queue row to 'D' for done, or deletes it, or whatever you like.
The dark side is that there is a bit of a race condition here, where C might try to grab an entry while B is just starting on it. You can probably get through that with a strict isolation level and some locking. Something as simple as:
select * from queue where status = 'C'
or (status = 'B' and updated < (now - threshold)) order by updated
FOR UPDATE
Also use FOR UPDATE for B's select. This way, whoever wins the select race gets an exclusive lock on the row.
This will get you pretty far down the road in terms of actual functionality.
You are expecting the semantics of synchronous processing from an asynchronous (messaging) setup, which is not possible. I have worked with WebSphere MQ, and normally when the consumer dies the messages are kept in the queue forever (unless you set an expiry). Once the queue reaches its maximum depth, subsequent messages are moved to the dead-letter queue.
I've used a similar approach to create a queuing and processing system for video transcoding jobs. Basically the way it worked was:
Process A posts a "schedule" message to Arbiter Q, which adds the job into its "waiting" queue.
Process B requests the next job from Arbiter Q, which removes the next item in its "waiting" queue (subject to some custom scheduling logic to ensure that a single user couldn't flood transcode requests and prevent other users from being able to transcode videos) and inserts it into its "processing" set before returning the job back to Process B. The job is timestamped when it goes into the "processing" set.
Process B completes the job and posts a "complete" message to Arbiter Q, which removes the job from the "processing" set and then modifies some state so that Process C knows the job completed.
Arbiter Q periodically inspects the jobs in its "processing" set, and times out any that have been running for an unusually long amount of time. Process A is then free to attempt to queue up the same job again, if it wants.
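The arbiter's bookkeeping can be sketched in plain Java (a simplification; the real system described above also had per-user scheduling logic and cluster replication, and all names here are invented):

```java
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;
import java.util.Queue;

public class Arbiter {
    private final Queue<String> waiting = new ArrayDeque<>();
    private final Map<String, Long> processing = new HashMap<>(); // job -> start time
    private final long timeoutMillis;

    public Arbiter(long timeoutMillis) {
        this.timeoutMillis = timeoutMillis;
    }

    // "schedule" message from Process A.
    public synchronized void schedule(String job) {
        waiting.add(job);
    }

    // Process B requests the next job; it is timestamped on entry to "processing".
    public synchronized String nextJob(long now) {
        String job = waiting.poll();
        if (job != null) processing.put(job, now);
        return job; // null if nothing is waiting
    }

    // "complete" message from Process B.
    public synchronized void complete(String job) {
        processing.remove(job);
    }

    // Periodic sweep: drop jobs stuck in "processing" too long, freeing A to reschedule them.
    public synchronized void sweep(long now) {
        Iterator<Map.Entry<String, Long>> it = processing.entrySet().iterator();
        while (it.hasNext()) {
            if (now - it.next().getValue() > timeoutMillis) it.remove();
        }
    }
}
```

The sweep method is what gives the "timeout on the queue" behaviour asked about in the question: nothing can rot in the processing set for longer than the configured limit.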
This was implemented using JMX (JMS would have been much more appropriate, but I digress). Process A was simply the servlet thread which responded to a user-initiated transcode request. Arbiter Q was an MBean singleton (persisted/replicated across all the nodes in a cluster of servers) that received "schedule" and "complete" messages. Its internally managed "queues" were simply List instances, and when a job completed it modified a value in the application's database to refer to the URL of the transcoded video file. Process B was the transcoding thread. Its job was simply to request a job, transcode it, and then report back when it finished. Over and over again until the end of time. Process C was another user/servlet thread. It would see that the URL was available, and present the download link to the user.
In such a case, if Process B were to die then the jobs would sit in the "waiting" queue forever. In practice, however, that never happened. If your Process B is not running/doing what it is supposed to do then I think that suggests a problem in your deployment/configuration/implementation of Process B more than it does a problem in your overall approach.

Handling Java Interrupts

I am making an application that will work much like a real-time chat. A user will be constantly writing in, let's say, a text area, and messages will be sent to other users. In the communications class I have set up a receiver. When a message from someone reaches the client, the receive method is invoked and gets the message. What I can't understand is how the code will be executed. What happens if the receive method is invoked while the user is typing/sending a message? What do I need to do in order for this to work properly?
I hope the question is clear enough.
PS: I'm still in the design phase; that's why I haven't tested it to see what happens.
Also, at the moment I only use a second thread to receive messages, which calls the receive method.
There should not be a problem at all.
When a message from someone reaches the client, the receive method will be invoked and will get the message. What I can't understand is how the code will be executed?
You should have a Receiver class that encapsulates a socket (from which you receive data) and keeps a set of listeners (see the Observer pattern). A GUI can be one of the listeners. When a message is received via the socket, you notify all listeners by forwarding the data received. This gives you a clean and nice way to notify the GUI about new message arrivals.
What happens if, while the user is typing/sending a message, the receive method is invoked?
This depends on the transport protocol you are using, but in general you don't have to worry about this, although I suggest you protect your sockets with a locking mechanism.
What do I need to do in order for this to work properly ?
Here is a nice example that can give you some inspiration :)
EDIT: As for your question regarding execution flow, sending and receiving are two different and uncorrelated operations that can happen at the same time. This can be achieved by implementing send and receive operations in two different threads. Here is an article on socket communications and multithreading.
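A minimal sketch of the Receiver-with-listeners idea described above (the class and interface names are invented for illustration): the receive thread forwards each message to the registered listeners, so the GUI never blocks on the socket.

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class Receiver {
    public interface MessageListener {
        void onMessage(String message);
    }

    // CopyOnWriteArrayList lets the receive thread iterate safely while
    // other threads add or remove listeners concurrently.
    private final List<MessageListener> listeners = new CopyOnWriteArrayList<>();

    public void addListener(MessageListener l) {
        listeners.add(l);
    }

    // In the real class this would be called from the socket-reading loop.
    public void dispatch(String message) {
        for (MessageListener l : listeners) {
            l.onMessage(message); // e.g. the GUI appends it to the text area
        }
    }
}
```

In a Swing or Android GUI, the listener would additionally hand the message to the UI thread (invokeLater / a Handler) before touching any widgets.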
You should either do what traditional Java EE app servers do, which is assign a separate thread to process each incoming message, or try a Java NIO solution along the lines of Netty.
