I'm writing a Java app that sends messages to and receives messages from a WebSocket server. When the app receives a message it might take some time to process it, so I'm trying to use multiple threads to receive messages. To my understanding Grizzly has selector threads as well as worker threads. By default there is 1 selector thread and 2 worker threads; in the following example I'm trying to increase those to 5 and 10 respectively.
In the example below I'm pausing the thread that calls the onMessage method for 10 seconds to simulate processing of the incoming information. The information comes in every second, so 10 threads should be able to handle the amount of traffic.
When I profile the run, only 1 selector thread and 2 worker threads are running. Furthermore, messages are only received at 10-second intervals, indicating that only 1 thread is handling the traffic - which I find very odd. During profiling, one worker thread, e.g. Grizzly(1), receives the first message sent. Then 10 seconds later Grizzly(2) receives the second message - after that Grizzly(2) keeps receiving the messages, and Grizzly(1) does not perform any further actions.
Can someone please explain this odd behavior and how to change it to e.g. 10 threads constantly waiting in line for a message?
Main:
public static void main(String[] args) {
    WebsocketTextClient client = new WebsocketTextClient();
    client.connect();
    for (int i = 0; i < 60; i++) {
        client.send("Test message " + i);
        try {
            Thread.sleep(1000);
        } catch (Exception e) {
            System.out.println("Error sleeping!");
        }
    }
}
WebsocketTextClient.java:
import java.net.URI;
import javax.websocket.ClientEndpointConfig;
import javax.websocket.EndpointConfig;
import javax.websocket.Session;
import javax.websocket.Endpoint;
import javax.websocket.MessageHandler;
import org.glassfish.tyrus.client.ClientManager;
import org.glassfish.tyrus.client.ThreadPoolConfig;
import org.glassfish.tyrus.container.grizzly.client.GrizzlyClientProperties;
public class WebsocketTextClient {
    private ClientManager client;
    private ClientEndpointConfig clientConfig;
    WebsocketTextClientEndpoint endpoint;

    public WebsocketTextClient() {
        client = ClientManager.createClient();
        client.getProperties().put(GrizzlyClientProperties.SELECTOR_THREAD_POOL_CONFIG, ThreadPoolConfig.defaultConfig().setMaxPoolSize(5));
        client.getProperties().put(GrizzlyClientProperties.WORKER_THREAD_POOL_CONFIG, ThreadPoolConfig.defaultConfig().setMaxPoolSize(10));
    }

    public boolean connect() {
        try {
            clientConfig = ClientEndpointConfig.Builder.create().build();
            endpoint = new WebsocketTextClientEndpoint();
            client.connectToServer(endpoint, clientConfig, new URI("wss://echo.websocket.org"));
        } catch (Exception e) {
            return false;
        }
        return true;
    }

    public boolean disconnect() {
        return false;
    }

    public boolean send(String message) {
        endpoint.session.getAsyncRemote().sendText(message);
        return true;
    }

    private class WebsocketTextClientEndpoint extends Endpoint {
        Session session;

        @Override
        public void onOpen(Session session, EndpointConfig config) {
            System.out.println("Connection opened");
            this.session = session;
            session.addMessageHandler(new WebsocketTextClientMessageHandler());
        }
    }

    private class WebsocketTextClientMessageHandler implements MessageHandler.Whole<String> {
        @Override
        public void onMessage(String message) {
            System.out.println("Message received from " + Thread.currentThread().getName() + " " + message);
            try {
                Thread.sleep(10000);
            } catch (Exception e) {
                System.out.println("Error sleeping!");
            }
            System.out.println("Resuming");
        }
    }
}
What you appear to be asking is for WebSockets to be able to receive multiple messages sent by the same client connection, to process those messages in separate threads, and to send the responses when they are ready - which means, potentially out of order. This scenario can only happen if the client is multi-threaded.
To deal with multiple threads on the same WebSocket session would generally require the ability for WebSockets to multiplex the data going to and from the client. This is not currently a feature of WebSockets, but could certainly be built on top of it. However, multiplexing those client and server threads on a single channel introduces a fair bit of complexity, because you need to stop all the client and server threads from inadvertently overwriting or starving one another.
The Java spec for MessageHandler is perhaps a little ambiguous about the threading model;
https://docs.oracle.com/javaee/7/api/javax/websocket/MessageHandler.html says:
Each web socket session uses no more than one thread at a time to call its MessageHandlers.
But the important term here is "socket session". If your client is sending multiple messages within the same WebSocket session, the server side handler will execute within a single thread. This doesn't mean you can't do lots of interesting stuff within the thread, particularly if you're using Input/OutputStreams (or Writers) on both ends. It does mean that communication with the client is mediated by just one thread. If you want to multiplex the communication, you'd have to write something on top of the socket to do so; that would include developing your own threading model for dispatching the requests.
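If blocking that single thread is the actual problem (as in the client code above, where onMessage sleeps for 10 seconds), one workaround that stays within the spec is to hand each message off to your own executor inside onMessage, so the WebSocket thread is free to receive the next message. A rough sketch only; the pool size of 10 is an assumption chosen to match the question:
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import javax.websocket.MessageHandler;

public class DispatchingMessageHandler implements MessageHandler.Whole<String> {
    // 10 workers is an assumption; size the pool to your processing time and message rate.
    private final ExecutorService workers = Executors.newFixedThreadPool(10);

    @Override
    public void onMessage(String message) {
        // Return immediately so the single WebSocket thread can pick up the next message.
        workers.submit(() -> process(message));
    }

    private void process(String message) {
        System.out.println("Processing on " + Thread.currentThread().getName() + ": " + message);
        try {
            Thread.sleep(10000); // stands in for the real 10-second processing
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
Note that the messages are still received one at a time by a single thread; only the processing is parallel.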
An easier solution would be to create a new Session for each client request. Each client request starts a session (i.e. a TCP connection), sends the data, and waits for the result. This gives you multiple MessageHandler threads - one per session, per the spec.
This is the most straightforward way to get multi-threading on the server side; any other approach will tend to need a multiplexing mechanism - which, depending on your use case, is perhaps not worth the effort, and certainly carries some complexity and risk.
If you are concerned about the number of sessions (TCP/HTTP connections) between client/s and server/s, you could consider creating a pool of Sessions on the client side, and use each client Session one at a time, returning the session to the pool whenever the client is done with it.
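As a rough sketch of that pooling idea (this class is illustrative, not part of the javax.websocket API, and the capacity is up to you): keep the open Sessions in a BlockingQueue, borrow one to send a request, and return it when the response has been handled.
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import javax.websocket.Session;

public class SessionPool {
    private final BlockingQueue<Session> pool;

    public SessionPool(int size) {
        this.pool = new ArrayBlockingQueue<>(size);
    }

    public void add(Session session) {
        pool.offer(session); // called once per connected session
    }

    public Session borrow() throws InterruptedException {
        return pool.take(); // blocks until a session is free
    }

    public void release(Session session) {
        pool.offer(session); // hand the session back when the request is finished
    }
}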
Finally, perhaps not directly relevant: I found that when I used Payara Micro to serve the WebSocket endpoint, I needed to set this:
<resources>
    ...
    <managed-executor-service maximum-pool-size="200" core-pool-size="10" long-running-tasks="true" keep-alive-seconds="300" hung-after-seconds="300" task-queue-capacity="20000" jndi-name="concurrent/__defaultManagedExecutorService" object-type="system-all"></managed-executor-service>
</resources>
The default ManagedExecutorService only provides a single thread. This appears to be the case in Glassfish as well. This had me running around for hours thinking that I didn't understand the threading model, when it was just the pool size that was confusing me.
Related
We are using hazelcast distributed lock and cache functions in our products. Usage of distributed locking is vitally important for our business logic.
Currently we are using the embedded mode (each application node is also a Hazelcast cluster member). We are going to switch to client-server mode.
The problem we have noticed with client-server mode is that once the cluster is down for a period, after several attempts the client is destroyed and any objects (maps, sets, etc.) that were retrieved from that client are no longer usable.
Also, the client instance does not recover even after the Hazelcast cluster comes back up (we receive a HazelcastInstanceNotActiveException).
I know that this issue has been addressed several times and ended up as a feature request:
issue1
issue2
issue3
My question: what should the strategy be to recover the client? Currently we are planning to enqueue a task in the client process as below. Based on a condition it will try to restart the client instance...
We will check whether the client is running via clientInstance.getLifecycleService().isRunning().
Here is the task code:
private class ClientModeHazelcastInstanceReconnectorTask implements Runnable {
    @Override
    public void run() {
        try {
            HazelCastService hazelcastService = HazelCastService.getInstance();
            HazelcastInstance clientInstance = hazelcastService.getHazelcastInstance();
            boolean running = clientInstance.getLifecycleService().isRunning();
            if (!running) {
                logger.info("Current clientInstance is NOT running. Trying to start hazelcastInstance from ClientModeHazelcastInstanceReconnectorTask...");
                hazelcastService.startHazelcastInstance(HazelcastOperationMode.CLIENT);
            }
        } catch (Exception ex) {
            logger.error("Error occurred in ClientModeHazelcastInstanceReconnectorTask !!!", ex);
        }
    }
}
Is this approach suitable? I also tried listening to lifecycle events but could not make it work that way.
Regards
In Hazelcast 3.9 we changed the way connection and reconnection works in clients. You can read about the new behavior in the docs: http://docs.hazelcast.org/docs/3.9.1/manual/html-single/index.html#configuring-client-connection-strategy
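For example, with the 3.9+ client API you can tell the client to keep reconnecting in the background instead of shutting down. This is a sketch only; combine it with your existing network configuration:
import com.hazelcast.client.HazelcastClient;
import com.hazelcast.client.config.ClientConfig;
import com.hazelcast.client.config.ClientConnectionStrategyConfig;
import com.hazelcast.core.HazelcastInstance;

ClientConfig clientConfig = new ClientConfig();
// ASYNC reconnect keeps the client instance alive and retries in the background.
clientConfig.getConnectionStrategyConfig()
            .setReconnectMode(ClientConnectionStrategyConfig.ReconnectMode.ASYNC);
HazelcastInstance client = HazelcastClient.newHazelcastClient(clientConfig);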
I hope this helps.
In Hazelcast 3.10 you may increase the connection attempt limit from 2 (the default) to the maximum:
ClientConfig clientConfig = new ClientConfig();
clientConfig.getNetworkConfig().setConnectionAttemptLimit(Integer.MAX_VALUE);
I've been searching for an answer to my problem, but none of the solutions so far have helped me solve it. I'm working on an app that communicates with another device that works as a server. The app sends queries to the server and receives appropriate responses to dynamically create fragments.
In the first implementation the app sent the query and then waited to receive the answer in a single thread. But that solution wasn't satisfactory, since the app did not receive any feedback from the server. The server admin said he was receiving the queries; however, he hinted that the device was sending the answer back too fast and that the app probably wasn't listening yet by the time the answer arrived.
So what I am trying to achieve is to create separate threads: one for listening and one for sending the query. The listening one would start before we send anything to the server, to ensure the app does not miss the server response.
Implementing this so far hasn't been successful. I've tried writing and running separate Runnable classes and AsyncTasks, but the listener never received an answer and at some points one of the threads didn't even execute. Here is the code for the AsyncTask listener:
@Override
protected String doInBackground(String... params) {
    int bufferLength = 28;
    String masterIP = "192.168.1.100";
    try {
        Log.i("TCPQuery", "Listening for ReActor answers ...");
        Socket tcpSocket = new Socket();
        SocketAddress socketAddress = new InetSocketAddress(masterIP, 50001);
        try {
            tcpSocket.connect(socketAddress);
            Log.i("TCPQuery", "Is socket connected: " + tcpSocket.isConnected());
        } catch (IOException e) {
            e.printStackTrace();
        }
        while (true) {
            Log.i("TCPQuery", "Listening ...");
            try {
                Log.i("TCPQuery", "Waiting for ReActor response ...");
                byte[] buffer = new byte[bufferLength];
                tcpSocket.getInputStream().read(buffer);
                Log.i("TCPQuery", "Received message " + Arrays.toString(buffer) + " from ReActor.");
            } catch (Exception e) {
                e.printStackTrace();
                Log.e("TCPQuery", "An error occurred receiving the message.");
            }
        }
    } catch (Exception e) {
        Log.e("TCP", "Error", e);
    }
    return "";
}
And this is how the tasks are called:
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.HONEYCOMB) {
    listener.executeOnExecutor(AsyncTask.THREAD_POOL_EXECUTOR, "");
    sender.executeOnExecutor(AsyncTask.THREAD_POOL_EXECUTOR, "");
}
else {
    listener.execute();
    sender.executeOnExecutor(AsyncTask.THREAD_POOL_EXECUTOR);
}
How exactly would you approach this problem? If this code is not sufficient I would be glad to post more.
This is because Android's AsyncTasks all run serially on a single background thread by default, no matter how many you create. So if you really want 2 threads running at the same time, I suggest you use the standard Java concurrent package tools, not AsyncTask. As explained in the documentation:
AsyncTask is designed to be a helper class around Thread and Handler
and does not constitute a generic threading framework. AsyncTasks
should ideally be used for short operations (a few seconds at the
most.) If you need to keep threads running for long periods of time,
it is highly recommended you use the various APIs provided by the
java.util.concurrent package such as Executor, ThreadPoolExecutor and
FutureTask.
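A minimal sketch of that approach, assuming the listening and sending work is factored into two Runnables (ListenerTask and SenderTask are made-up names for illustration): start the listener first, then the sender, on a small fixed pool so they genuinely run in parallel.
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

ExecutorService pool = Executors.newFixedThreadPool(2);
// Start listening before anything is sent, so the response cannot be missed.
pool.submit(new ListenerTask()); // hypothetical Runnable wrapping the read loop from the question
pool.submit(new SenderTask());   // hypothetical Runnable that sends the query
// Call pool.shutdown() once the communication is finished.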
Look, this is a TCP connection, so you don't need to worry about data loss; it is a point-to-point connection and end of stream (-1) is only signalled when the peer closes it. What you do have to care about is the read logic, because you cannot assume the whole message arrives in one read. The TCP read method is a blocking call: it waits until some data is available and may return fewer bytes than your buffer size, and on an Android device the amount of data available at any moment can vary with the network. So you have 2 options:
1) Make your buffer size dynamic. First check the available input stream size using is.available() and size your buffer accordingly. If the available size is zero, sleep for a certain time and then check again whether the stream has data or has been lost.
2) Set a read timeout on your socket. This really works: the read returns whatever data is available, and if no data arrives within the timeout period a timeout exception is thrown.
Try changing your code along the lines of the sketch below.
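A rough sketch of option 2, combined with a loop that keeps reading until the expected 28 bytes have arrived. The 5-second timeout is an assumption; it reuses bufferLength, tcpSocket and the Log calls from your doInBackground, sits inside its existing try block, and needs imports for java.io.InputStream and java.net.SocketTimeoutException:
tcpSocket.setSoTimeout(5000); // assumption: give up on a single read after 5 seconds
InputStream in = tcpSocket.getInputStream();
byte[] buffer = new byte[bufferLength];
int total = 0;
try {
    // read() may return fewer bytes than requested, so keep reading until the message is complete
    while (total < bufferLength) {
        int read = in.read(buffer, total, bufferLength - total);
        if (read == -1) {
            break; // server closed the connection
        }
        total += read;
    }
    Log.i("TCPQuery", "Received " + total + " bytes: " + Arrays.toString(buffer));
} catch (SocketTimeoutException e) {
    Log.e("TCPQuery", "No data received within the timeout.", e);
}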
I saw plenty of similar questions on SO but hardly any of them have Socket in the picture. So please take time to read the question.
I have a server app (using ServerSocket) which listens for requests; when a client attempts to connect, a new thread is created to serve the client (and the server goes back to listening for new requests). Now, I need to respond to one client based on what another client sent to the server.
Example:
ServerSocket listening for incoming connections.
Client A connects, new thread is created to serve A.
Client B connects, new thread is created to serve B.
A sends message "Hello from A" to the Server.
Send this message as a response to Client B.
I'm new to this whole "inter-thread communication" thing. Obviously the above-mentioned situation sounds dead simple, but I'm describing it to get a hint, as I'll be exchanging a huge amount of data among clients with the server as intermediary.
Also, what if I want to keep a shared object limited to, say, 10 particular clients? Such that when the 11th client connects to the server, I create a new shared object, which will be used to exchange data between the 11th, 12th, 13th... up to the 20th client. And so on for every set of 10 clients.
What I tried: (foolish I guess)
I have a public class with the object that is supposed to be shared declared as public static, so that I can use it globally without instantiating it, like MyGlobalClass.SharedMsg.
That doesn't work; I was unable to pass data received in one thread to the other.
I'm aware that there is an obvious locking problem, since while one thread is writing to an object another can't access it until the first thread is done writing.
So what would be an ideal approach to this problem?
Update
Given the way I create threads to serve incoming connection requests, I can't see how I can share the same object among the threads, since using a global object as mentioned above doesn't work.
Following is how I listen for incoming connections and create serving threads dynamically.
// Method of server class
public void startServer()
{
    if (!isRunning)
    {
        try
        {
            isRunning = true;
            while (isRunning)
            {
                try
                {
                    new ClientHandler(mysocketserver.accept()).start();
                }
                catch (SocketTimeoutException ex)
                {
                    //nothing to perform here, go back again to listening.
                }
                catch (SocketException ex)
                {
                    //Not to handle, since I'll stop the server using SocketServer's close() method, and it's going to throw SocketException anyway.
                }
            }
        }
        catch (Exception ex)
        {
            ex.printStackTrace();
        }
    }
    else
        System.out.println("Server Already Started!");
}
And the ClientHandler class.
public class ClientHandler extends Thread
{
    private Socket client = null;
    private ObjectInputStream in = null;
    private ObjectOutputStream out = null;

    public ClientHandler(Socket client)
    {
        super("ClientHandler");
        this.client = client;
    }

    //This run() is common for every Client that connects, and that's where the problem is.
    public void run()
    {
        try
        {
            in = new ObjectInputStream(client.getInputStream());
            out = new ObjectOutputStream(client.getOutputStream());

            //Message received from this thread.
            String msg = in.readObject().toString();
            System.out.println("Client # " + client.getInetAddress().getHostAddress() + " Says : " + msg);

            //Response to this client.
            out.writeObject("Message Received");

            out.close();
            in.close();
            client.close();
        }
        catch (Exception ex)
        {
            ex.printStackTrace();
        }
    }
}
I believe that with the way I'm creating dynamic threads to serve each client that connects, sharing the same data source is not possible using a global object, since the body of run() above is exactly the same for every client, so the same method acts as both consumer and producer. What changes should I make so that I can still create dynamic threads for each connection and share the same object?
You probably want a queue for communication between each pair of clients. Each queue will be the 'pipeline' for data pushed from one client to the other.
You would use it like so (pseudo code):
Thread 1:
Receive request from Client A, with message for Client B
Put message on back of concurrent Queue A2B
Respond to Client A.
Thread 2:
Receive request from Client B.
Pop message from front of Queue A2B
Respond to Client B with message.
You might also want it generic, so you have a AllToB Queue that many clients (and thus many threads) can write to.
Classes of note: ConcurrentLinkedQueue, ArrayBlockingQueue.
If you want to limit the number of messages, then ArrayBlockingQueue with its capacity constructor allows you to do this. If you don't need the blocking functionality, you can use the methods offer and poll rather than put and take.
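For instance (a minimal sketch; the capacity of 100 is arbitrary):
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

BlockingQueue<String> a2b = new ArrayBlockingQueue<>(100); // holds at most 100 messages
boolean accepted = a2b.offer("Hello from A"); // returns false instead of blocking when the queue is full
String next = a2b.poll();                     // returns null instead of blocking when the queue is empty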
I wouldn't worry about sharing the queues, it makes the problem significantly more complicated. Only do this if you know there is a memory usage problem you need to address.
EDIT: Based on your update:
If you need to share a single instance between all dynamically created instances you can either:
Make a static instance.
Pass it into the constructor.
Example of 1:
public class ClientHandler extends Thread
{
    public static final Map<ClientHandler, BlockingQueue<String>> messageQueues
        = new ConcurrentHashMap<>();

    <snip>

    public ClientHandler(Socket client)
    {
        super("ClientHandler");
        this.client = client;
        // Note: Bad practice to reference 'this' in a constructor.
        // This can throw an error based on what the put method does.
        // As such, if you are to do this, put it at the end of the method.
        messageQueues.put(this, new ArrayBlockingQueue<>(100)); // ArrayBlockingQueue needs a capacity; 100 is arbitrary
    }

    // You can now access this in the run() method like so:
    // Get messages for the current client.
    // messageQueues.get(this).poll();
    // Send messages to the thread for another client.
    // messageQueues.get(someClient).offer(message);
A couple of notes:
The messageQueues object should really contain some sort of identifier for the client rather than an object reference that is short lived.
A more testable design would pass the messageQueues object into the constructor to allow mocking.
I would probably recommend using a wrapper class for the map, so you can just call offer with 2 parameters rather than having to worry about the map semantics; a rough sketch follows.
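Something along these lines (identifiers are illustrative only):
import java.util.Map;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;

public class MessageRouter {
    private final Map<String, BlockingQueue<String>> queues = new ConcurrentHashMap<>();

    // Register a client by some stable identifier (e.g. a login name), not the handler object.
    public void register(String clientId) {
        queues.putIfAbsent(clientId, new ArrayBlockingQueue<>(100)); // capacity is arbitrary
    }

    public boolean offer(String clientId, String message) {
        BlockingQueue<String> q = queues.get(clientId);
        return q != null && q.offer(message);
    }

    public String poll(String clientId) {
        BlockingQueue<String> q = queues.get(clientId);
        return q == null ? null : q.poll();
    }
}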
I have a socket connection that puts incoming data into a queue via databaseQueue.add(message). Next, the DatabaseProcessor class is started as a thread at startup, where a single database connection is made. It keeps taking messages via databaseQueue.take() and processes them. The good part of this approach is that only one database connection is made. The problem arises when there is suddenly a surge of data. The alternative would be to open and close a connection for each piece of data received. Based on your experience, which is the better way to go for heavy loads?
Some snippet of my codes.
class ConnectionHandler implements Runnable {

    ConnectionHandler(Socket receivedSocketConn1) {
        this.receivedSocketConn1 = receivedSocketConn1;
    }

    // gets data from an inbound connection and queues it for database update
    public void run() {
        databaseQueue.add(message); // put to db queue
    }
}

class DatabaseProcessor implements Runnable {

    public void run()
    {
        // open database connection
        createConnection();
        while (true)
        {
            // keep taking messages from the queue added by ConnectionHandler;
            // here I will have a number of queries to run in terms of selects, inserts and updates.
            message = databaseQueue.take();
        }
    }

    void createConnection()
    {
        System.out.println("Create Connection");
        connCreated = new Date();
        try
        {
            dbconn = DriverManager.getConnection("jdbc:mysql://localhost:3306/test1?" + "user=user1&password=*******");
            dbconn.setAutoCommit(false);
        }
        catch (Throwable ex)
        {
            ex.printStackTrace(System.out);
        }
    }
}

public void main()
{
    new Thread(new DatabaseProcessor()).start(); //calls the DatabaseProcessor
    //initiate the socket
}
As far as I understand, you are managing a client-server socket connection in which you send and receive messages through a queue. If I also got it right, you are creating a thread for each new message on the queue.
Considering that there will be plenty of messages being sent and read, I recommend declaring the method(s) in your threads as synchronized so that you won't need to open and close a connection each time data is received (referring to your second approach). Synchronized methods are usually the simplest way to handle a surge of shared data that can be modified by several threads at the same time.
You can use connection pooling to get the best of both worlds: you are not limited to a single thread, and you also do not need to open connections for each request. Have a look at Apache DBCP.
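A sketch with Apache Commons DBCP 2 (the URL and credentials are taken from your snippet; the pool sizes are arbitrary):
import java.sql.Connection;
import org.apache.commons.dbcp2.BasicDataSource;

BasicDataSource ds = new BasicDataSource();
ds.setUrl("jdbc:mysql://localhost:3306/test1");
ds.setUsername("user1");
ds.setPassword("*******");
ds.setInitialSize(2);   // connections opened up front
ds.setMaxTotal(10);     // upper bound during a surge of data

// Borrow a connection per unit of work; close() returns it to the pool.
try (Connection conn = ds.getConnection()) {
    // run your selects/inserts/updates here
}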
This approach is fine, except that you can create a DB connection pool using c3p0. Also use a ThreadPoolExecutor for maintaining your thread pool.
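For example, instead of new Thread(...).start() you might submit the processor to a pool (a sketch; the pool size of 4 is arbitrary):
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

ExecutorService workers = Executors.newFixedThreadPool(4); // 4 is an arbitrary choice
workers.submit(new DatabaseProcessor());
// More DatabaseProcessor instances can be submitted if the queue backs up,
// provided each one manages (or borrows) its own database connection.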
I have 1000 dedicated Java threads where each thread polls a corresponding URL every second.
public class Poller {
    public static Node poll(Node node) {
        GetMethod method = null;
        try {
            HttpClient client = new HttpClient(new SimpleHttpConnectionManager(true));
            ......
        } catch (IOException ex) {
            ex.printStackTrace();
        } finally {
            method.releaseConnection();
        }
    }
}
The threads are run every second:
for (int i = 0; i < 1000; i++) {
    MyThread thread = threads.get(i); // threads is a static field
    if (thread.isAlive()) {
        // If the previous thread is still running, let it run.
    } else {
        thread.start();
    }
}
The problem is that if I run the job every second I get random exceptions like these:
java.net.BindException: Address already in use
INFO httpclient.HttpMethodDirector: I/O exception (java.net.BindException) caught when processing request: Address already in use
INFO httpclient.HttpMethodDirector: Retrying request
But if I run the job every 2 seconds or more, everything runs fine.
I even tried shutting down the instance of SimpleHttpConnectionManager() using shutDown() with no effect.
If I do netstat, I see thousands of TCP connections in the TIME_WAIT state, which means they have been closed and are being cleaned up.
So to limit the number of connections, I tried using a single instance of HttpClient like this:
public class MyHttpClientFactory {
    private static MyHttpClientFactory instance = new MyHttpClientFactory();
    private MultiThreadedHttpConnectionManager connectionManager;
    private HttpClient client;

    private MyHttpClientFactory() {
        init();
    }

    public static MyHttpClientFactory getInstance() {
        return instance;
    }

    public void init() {
        connectionManager = new MultiThreadedHttpConnectionManager();
        HttpConnectionManagerParams managerParams = new HttpConnectionManagerParams();
        managerParams.setMaxTotalConnections(1000);
        connectionManager.setParams(managerParams);
        client = new HttpClient(connectionManager);
    }

    public HttpClient getHttpClient() {
        if (client != null) {
            return client;
        } else {
            init();
            return client;
        }
    }
}
However after running for exactly 2 hours, it starts throwing 'too many open files' and eventually cannot do anything at all.
ERROR java.net.SocketException: Too many open files
INFO httpclient.HttpMethodDirector: I/O exception (java.net.SocketException) caught when processing request: Too many open files
INFO httpclient.HttpMethodDirector: Retrying request
I should be able to increase the number of connections allowed and make it work, but I would just be prolonging the problem. Any idea what the best practice is for using HttpClient in a situation like the above?
Btw, I am still on HttpClient3.1.
This happened to us a few months back. First, double check to make sure you really are calling releaseConnection() every time. But even then, the OS doesn't actually reclaim the TCP connections all at once. The solution is to use the Apache HTTP Client's MultiThreadedHttpConnectionManager. This pools and reuses the connections.
See http://hc.apache.org/httpclient-3.x/performance.html for more performance tips.
Update: Whoops, I didn't read the lower code sample. If you're doing releaseConnection() and using MultiThreadedHttpConnectionManager, consider whether your OS limit on open files per process is set high enough. We had that problem too, and needed to extend the limit a bit.
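For reference, this is roughly how a single poll would look against the shared, pooled client from your factory (HttpClient 3.1 API; url here is a placeholder for whatever the poller is responsible for):
import java.io.IOException;
import org.apache.commons.httpclient.HttpClient;
import org.apache.commons.httpclient.methods.GetMethod;

HttpClient client = MyHttpClientFactory.getInstance().getHttpClient();
GetMethod method = new GetMethod(url); // placeholder URL
try {
    int status = client.executeMethod(method);
    String body = method.getResponseBodyAsString();
    // process status/body here
} catch (IOException ex) {
    ex.printStackTrace();
} finally {
    method.releaseConnection(); // always return the connection to the pool
}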
There is nothing wrong with the first error. You have just depleted the available ephemeral ports. Each TCP connection can stay in the TIME_WAIT state for 2 minutes, and you generate 2000 per second. Sooner or later the socket can't find any unused local port and you will get that error. TIME_WAIT is designed exactly for this purpose. Without it, your system might hijack a previous connection.
The second error means you have too many sockets open. On some systems, there is a limit of 1K open files per process. Maybe you just hit that limit due to lingering sockets and other open files. On Linux, you can change this limit using
ulimit -n 2048
But that's limited by a system-wide max value.
As sudo or root, edit the /etc/security/limits.conf file. At the end of the file, just above “# End of File”, enter the following values:
* soft nofile 65535
* hard nofile 65535
This raises the allowed number of open files to 65535, which should be far more than the default.