I am using Mina2 with Camel to connect to a TCP server (over SSL).
Because Mina2 doesn't provide a default byte-array decoder, I had to write one on my own.
So, for some reason, Mina splits the received data into chunks and calls the decode function multiple times. When I dug through the documentation, I found that multiple threads could handle the received data and attempt to decode it any number of times.
Here's the text:
MINA ensures that there will never be more than one thread simultaneously executing the decode() function for the same IoSession, but it does not guarantee that it will always be the same thread. Suppose the first piece of data is handled by thread-1 who decides it cannot yet decode, when the next piece of data arrives, it could be handled by another thread.
Now, because I have a Mina endpoint for I/O, Camel maintains the same IoSession object for all transfers.
I am saving the decoder state using IoSession.setAttribute (e.g. the number of bytes received so far for that particular exchange), and I finish decoding when the whole message has been received.
My fear is that if two different exchanges are in flight simultaneously and their threads compete for the same IoSession attribute, the state could get corrupted. I also can't seem to access the Camel exchange properties inside Mina's decode function, so I can't uniquely identify which exchange a given chunk belongs to.
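What I'm doing looks roughly like this (a minimal stdlib sketch of the accumulate-until-complete pattern; the Session class is a hypothetical stand-in for Mina's IoSession, and expectedLength assumes the full message length is known up front):

```java
import java.io.ByteArrayOutputStream;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical stand-in for Mina's IoSession attribute storage.
class Session {
    private final Map<String, Object> attributes = new ConcurrentHashMap<>();
    public Object getAttribute(String key) { return attributes.get(key); }
    public void setAttribute(String key, Object value) { attributes.put(key, value); }
    public void removeAttribute(String key) { attributes.remove(key); }
}

class AccumulatingDecoder {
    private static final String STATE_KEY = "decoder.buffer";

    // Returns the complete message once all expected bytes have arrived, else null.
    public byte[] decode(Session session, byte[] chunk, int expectedLength) {
        ByteArrayOutputStream buf = (ByteArrayOutputStream) session.getAttribute(STATE_KEY);
        if (buf == null) {
            buf = new ByteArrayOutputStream();
            session.setAttribute(STATE_KEY, buf);
        }
        buf.write(chunk, 0, chunk.length);
        if (buf.size() < expectedLength) {
            return null; // not enough data yet; decode() will be called again
        }
        session.removeAttribute(STATE_KEY); // reset state for the next message
        return buf.toByteArray();
    }
}
```

This works as long as one session carries one message at a time; my worry above is exactly what happens when it doesn't.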
Any help in this regard would be great!!
The CometD 3.1.4 Java implementation uses a buffer to keep track of the incoming message, and this buffer has a default size of 2MB. When restarting after a disconnection, spikes can occur and you can exceed the limit.
What is the library's behaviour in this case? Are the bytes lost, and if subsequent notifications arrive from the server side, can they still be processed?
The client-side buffering limit can be configured as explained here.
However, I recommend that you review your logic: sending MiBs of messages to the client is typically not a good idea, as the client may take a long while to process them all.
Also, if the messages are few but carry large data, you may want to set things up so that the client gets a CometD message with a download URI, and downloads the data on the side rather than receiving it inside the CometD messages.
Having said that, you can write a server-side extension to, for example, discard old messages so that you don't send MiBs of data in case of reconnections.
The message acknowledgment extension guarantees server-to-client delivery, so -- provided you don't exceed the client-side receive buffer -- you can guarantee that queued messages are delivered to clients.
You may need a combination of things that is specific to your application.
You may need a server-side listener to control the size of the message queue, the acknowledgment extension to guarantee delivery, and maybe a larger client-side buffer.
This is not done by default by CometD because everybody wants a different solution: some want to fail the session, some want to discard all messages, some want to keep only the last N, etc.
CometD provides you with the hooks you need to implement your logic.
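As a sketch of just one of those policies, "keep only the last N", in plain Java (this is independent of CometD's actual listener and extension APIs, which is where you would plug such logic in):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrates the "keep only the last N" discard policy: when the queue is
// full, the oldest message is dropped to make room for the newest one.
class BoundedMessageQueue<T> {
    private final Deque<T> queue = new ArrayDeque<>();
    private final int maxSize;

    BoundedMessageQueue(int maxSize) { this.maxSize = maxSize; }

    synchronized void offer(T message) {
        if (queue.size() == maxSize) {
            queue.removeFirst(); // discard the oldest message
        }
        queue.addLast(message);
    }

    synchronized T poll() { return queue.pollFirst(); }

    synchronized int size() { return queue.size(); }
}
```

The same shape works for the other policies: fail the session instead of removing, or clear the whole queue instead of just the head.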
I have a standard client/server setup.
The program I'd like to build acts a lot like a mail office (which is my server). Multiple people (clients with an ObjectOutputStream each) hand the office (the server with a single ObjectInputStream) mail with an attached address, and the office sends the mail where it is supposed to go. If possible, I'd like to have one ObjectInputStream in the server that blocks, waiting for "mail" to come in from any ObjectOutputStream, and then sends the "mail" where it's supposed to go. This way I can have just one thread that is completely dedicated to receiving data and sending it.
I will have a thread for each person's client with their ObjectOutputStream, but would like to not also need a matching thread in the server to communicate with each person. I am interested in this idea because I find it excessive to build tons of threads to separately handle connections, when it's possible that a single thread will only send data once in my case.
Is this feasible? or just silly?
Using a JMS (Java Message Service) queue is the design pattern for this case.
http://en.wikipedia.org/wiki/Java_Message_Service
If the server app has just one instance of ObjectInputStream and you have many clients, then this instance needs to be shared by all threads, so you need to synchronize access to it.
You can read more here. Hope this helps.
OR
You can have a pool of ObjectInputStream instances and, using an assignment algorithm like round-robin, return the same instance for every x-th thread, for example; this will make the flow in the server app more parallel.
Your question doesn't make sense. You need a separate pair of ObjectInputStream and ObjectOutputStream per Socket. You also need a Thread per Socket, unless you are prepared to put up with the manifest limitations of polling via InputStream.available(), which won't prevent your reads from blocking. If you are using Object Serialization you are already committed to blocking I/O and therefore to a thread per Socket.
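A minimal runnable sketch of the thread-per-socket model with one stream pair per connection, using a loopback socket (the Mail class is a hypothetical envelope for this example):

```java
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// A "mail" envelope carrying its destination address.
class Mail implements Serializable {
    final String to, body;
    Mail(String to, String body) { this.to = to; this.body = body; }
}

public class MailOffice {
    static String runDemo() throws Exception {
        BlockingQueue<Mail> delivered = new LinkedBlockingQueue<>();
        ServerSocket office = new ServerSocket(0); // ephemeral port

        // Server side: one reader thread per accepted connection,
        // each owning its own ObjectInputStream.
        Thread reader = new Thread(() -> {
            try (Socket client = office.accept();
                 ObjectInputStream in = new ObjectInputStream(client.getInputStream())) {
                delivered.put((Mail) in.readObject());
            } catch (Exception e) {
                e.printStackTrace();
            }
        });
        reader.start();

        // Client side: its own Socket and matching ObjectOutputStream.
        try (Socket s = new Socket("localhost", office.getLocalPort());
             ObjectOutputStream out = new ObjectOutputStream(s.getOutputStream())) {
            out.writeObject(new Mail("alice", "hello"));
            out.flush();
        }

        Mail m = delivered.take(); // the office would now route m to m.to
        reader.join();
        office.close();
        return m.to + ": " + m.body;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runDemo());
    }
}
```

Note that the stream pair belongs to the connection, not to the server as a whole: each accepted Socket gets its own ObjectInputStream, which is exactly why a single shared ObjectInputStream cannot work.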
I am stuck in a strange issue while reading data from serial port in Java.
I have to read data from the serial port via a polling method in a thread, which works fine, but I also have a requirement to write data to the serial port and read the ACK back. Writing to the serial port succeeds, but I am not able to read the data back. There are two read operations: one in the polling thread and one in the main thread.
When I need to write, I pause the thread that reads from the serial port using a flag and start reading from the serial port again once the write is done, but I am not able to read the data. When I instead disable the serial port read after the write operation and re-enable the reading thread, I do see the ACK data from the serial port.
Can anyone suggest what is going wrong with this serial read operation? It is not a buffered read/write operation.
I strongly recommend using only one dedicated thread for reading the serial port. The most reliable solution used to be an interrupt handler shoveling all received data into a thread-safe state machine. Trying to read the serial port from multiple threads is asking for problems: serial port I/O doesn't care that you "paused your thread"; the data may already have been fetched in and lost due to a context switch.
So simply keep reading what comes in, and if an ACK is expected and obtained, inform the main thread via a semaphore. In dirty, brutally simplified pseudocode:
Main thread loop:
{
    serialReaderThread.isAckExpected = true;
    sendWriteCommand();
    ackReceivedSemaphore.wait();
}

Serial reader thread loop:
{
    readData();
    if( isAckExpected && data == ack ) {
        mainThread.ackReceivedSemaphore.notify();
        isAckExpected = false;
    }
}
You need to set isAckExpected before sending the write command, because if your serial peer is fast enough, you might get the response back before your sendWriteCommand even returns.
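A runnable Java version of that idea, with the serial line simulated by a BlockingQueue (a real port would be wrapped the same way, with this reader thread as the only consumer):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

public class AckDemo {
    // Simulated serial line; a real implementation would read from the port here.
    static final BlockingQueue<String> serialLine = new LinkedBlockingQueue<>();
    static final Semaphore ackReceived = new Semaphore(0);
    static volatile boolean ackExpected = false;

    static boolean writeAndAwaitAck() throws InterruptedException {
        // The single dedicated reader thread: the only consumer of the "port".
        Thread reader = new Thread(() -> {
            try {
                while (true) {
                    String data = serialLine.take();
                    if (ackExpected && "ACK".equals(data)) {
                        ackExpected = false;
                        ackReceived.release(); // wake the waiting main thread
                    }
                }
            } catch (InterruptedException ignored) { }
        });
        reader.setDaemon(true);
        reader.start();

        ackExpected = true;              // arm the flag BEFORE sending the write
        serialLine.put("WRITE-COMMAND"); // stands in for the real serial write
        serialLine.put("ACK");           // simulated device response
        return ackReceived.tryAcquire(5, TimeUnit.SECONDS);
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(writeAndAwaitAck() ? "ACK received" : "timeout");
    }
}
```

Because the flag is set before the write command goes out, even an instant response from the device cannot slip past the reader thread unnoticed.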
You should not have different threads attempting to read from the serial port. The correct architecture is to have the single thread do the reading and distribute the incoming data to interested clients via multiple queues.
You would have a "normal read processing" thread that is given data by the read thread. When you need to do the write/ack sequence, the thread doing the write/ack would temporarily register itself with the read thread and divert the data stream.
You still have to deal with any interleaving of data (i.e. normal data being received after the write request but before the ack is received), but that is up to your application.
In a socket-based application (client/server), I want to make the server act as a proxy (manager) that handles several clients, taking a message from one client and sending it to another client identified by an ID.
How can I find the required client when it is running on a different thread, and how can I get the socket of the client that the ID represents?
Just keep an in-memory hashmap of some sort of client-id to the java.net.Socket object that represents that client's socket. You need to come up with some way of assigning client IDs, either client supplied, or server-supplied through some authorization scheme.
When a message comes in for a client ID, grab the socket from the map and send it a message. This map needs to be stored in a singleton-type object, and needs to be properly synchronized. Use a concurrent hash map. Also, socket reads and writes would need to be synchronized if you're going multi-threaded.
I have posted some example code as a github gist. It's a bit different than I explained above. I don't store sockets in the map, I store client handlers which have the socket. Also, socket reads don't need synchronization: each clients has its own thread which is the only thread reading from the socket. Socket writes do need to be synchronized though, because the thread of the sending client is writing to the socket of the receiving client.
You're probably better off using something like JBoss Netty rather than rolling your own though.
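A minimal sketch of the map-based routing described above (the ClientHandler here is hypothetical and uses a StringBuilder in place of the real socket output stream; its send() is synchronized because any sender's thread may write to the receiver's handler):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Hypothetical handler owning one client's connection.
class ClientHandler {
    // Stands in for the socket's output stream in this sketch.
    private final StringBuilder outbox = new StringBuilder();

    synchronized void send(String message) {
        outbox.append(message).append('\n');
    }

    synchronized String sent() { return outbox.toString(); }
}

class ClientRegistry {
    // Client ID -> handler; ConcurrentHashMap makes register/unregister safe
    // without a global lock.
    private final ConcurrentMap<String, ClientHandler> clients = new ConcurrentHashMap<>();

    void register(String clientId, ClientHandler handler) {
        clients.put(clientId, handler);
    }

    void unregister(String clientId) { clients.remove(clientId); }

    // Route a message to the client identified by ID, if connected.
    boolean route(String toClientId, String message) {
        ClientHandler handler = clients.get(toClientId);
        if (handler == null) return false; // unknown or disconnected client
        handler.send(message);
        return true;
    }
}
```

Each client's reader thread calls registry.route(targetId, message) when a message arrives, and unregister() in its cleanup path when the connection drops.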
You can keep a lot of information about each ID. Each time a client connects you can record, say, its IP and the thread it is running on, and then use something like a hashmap to link the ID to all that info. Then you can easily get the thread it is running on and send the information to the correct client.
Save the messages to be delivered into a database, and make your threads check the database for new messages to be delivered to "their" clients on a regular basis.
If you do not want a dedicated database for the messages, build a flat file with simple ClientID->Socket mappings and use it like a "telephone book" kind of lookup system. Depending on the number of clients you are planning to add, each thread could pre-load and regularly reload such a file into its memory for faster access...
I'm writing a Java application that will instantiate objects of a class to represent clients that have connected and registered with an external system on the other side of my application.
Each client object has two nested classes within it, representing the front end and the back end. The front-end class will continuously receive data from the actual client and send indications and data to the back-end class, which will take that data from the front end and send it to the external system using the proper format and protocol that system requires.
In the design, we're looking to have each instantiation of a client object be a thread. Then, within each thread will naturally be two sockets [EDIT]with their own NIO channels each[/EDIT], one client-side, one system-side residing in the front- and back-end respectively. However, this now introduces the need for nonblocking sockets. I have been reading the tutorial here that explains how to safely use a Selector in your main thread for handling all threads with connections.
But what I need are multiple selectors, each operating in its own thread. From reading the aforementioned tutorial, I've learned that the key sets of a Selector are not thread-safe. Does this mean that separate Selectors instantiated in their own respective threads may create conflicting keys if I give each of them its own pair of sockets and channels? Moving the selector up to the main thread is a slight possibility, but far from ideal given the software requirements I've been given. Thank you for your help.
Using multiple selectors would be fine as long as you do not register the same channel with the same interests (OP_READ / OP_WRITE etc) with both the selector instances. Registering the same channel with multiple selector instances could cause a problem where selector1.select() could consume an event that selector2.select() could be interested in.
The default selectors on most platforms are poll()-based [or epoll()-based].
Selector.select internally calls the int poll( ListPointer, Nfdsmsgs, Timeout) method.
where the ListPointer structure can then be initialized as follows:
list.fds[0].fd = file_descriptorA;
list.fds[0].events = requested_events;
list.msgs[0].msgid = message_id;
list.msgs[0].events = requested_events;
That said, I would recommend using a single selecting thread, as mentioned in the ROX RPC NIO tutorial. NIO implementations are platform-dependent, and it is quite possible that what works on one platform may not work on another. I have seen problems across minor versions too.
For instance, AIX JDK 1.6 SR2 used a poll()-based selector (PollSelectorImpl) with the corresponding selector provider PollSelectorProvider, and our server ran fine. When I moved to AIX JDK 1.6 SR5, which uses a pollset-interface-based optimized selector (PollSetSelectorImpl), we encountered frequent hangs in our server in select() and SocketChannel.close(). One reason I see is that we open multiple selectors in our application (as opposed to the ideal single-selecting-thread model), combined with the implementation of PollSetSelectorImpl as described here.
If you have to use this single socket connection, you have to separate the process of receiving and writing data from and to the channel from the data processing itself. You must not hand the channel itself around. The channel is like a bus: the bus (the single thread that manages the channel) reads the data and writes it into a (thread-safe) input queue, together with the information required for your client thread(s) to pick the correct datagram package from the queue. If a client thread wants to write data, it writes to an output queue, which is then read by the channel's thread and written out to the channel.
So, from a concept of sharing a connection between actors with unpredictable processing times (which is the main reason for blocking), you move to a concept of asynchronous data reading, data processing, and data writing. It is then no longer the processing time that is unpredictable, but the time at which your data is read or written. Non-blocking means that the stream of data is as constant as possible, regardless of the time required to process that data.
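A minimal sketch of this bus concept, with the channel thread simulated by an echo loop between the two thread-safe queues (the real version would read from and write to the socket channel instead):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// A datagram tagged with the client it belongs to, so workers can pick
// the right packages off the shared input queue.
class Datagram {
    final int clientId;
    final String payload;
    Datagram(int clientId, String payload) { this.clientId = clientId; this.payload = payload; }
}

public class ChannelBus {
    // Thread-safe queues decouple the channel thread from the client threads.
    static final BlockingQueue<Datagram> inputQueue = new LinkedBlockingQueue<>();
    static final BlockingQueue<Datagram> outputQueue = new LinkedBlockingQueue<>();

    static String runDemo() throws InterruptedException {
        // The single channel thread: here it just echoes the output queue back
        // onto the input queue, standing in for the real socket channel I/O.
        Thread channelThread = new Thread(() -> {
            try {
                while (true) {
                    Datagram d = outputQueue.take(); // "write to the channel"
                    inputQueue.put(d);               // "read from the channel"
                }
            } catch (InterruptedException ignored) { }
        });
        channelThread.setDaemon(true);
        channelThread.start();

        // A client thread never touches the channel; it only uses the queues.
        outputQueue.put(new Datagram(7, "ping"));
        Datagram reply = inputQueue.take();
        return reply.clientId + " -> " + reply.payload;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runDemo());
    }
}
```

The unpredictable part (each client's processing time) is now absorbed by the queues, while the channel thread only ever blocks on the I/O itself.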