I have a socket connection that sends data into a queue via databaseQueue.add(message);. Next is the DatabaseProcessor class, which is started as a thread at startup and opens a single database connection. That connection keeps taking messages via databaseQueue.take(); and processes them. The good part about all this is that only one database connection is made. The problem arises when there is a sudden surge of data. The other method would be to open and close a connection for each piece of data received. So, based on your experience, which is the best way to go here for heavy loads?
Here is a snippet of my code:
class ConnectionHandler implements Runnable {

    ConnectionHandler(Socket receivedSocketConn1) {
        this.receivedSocketConn1 = receivedSocketConn1;
    }

    // Gets data from an inbound connection and queues it for database update.
    public void run() {
        databaseQueue.add(message); // put to db queue
    }
}
class DatabaseProcessor implements Runnable {

    public void run() {
        // Open the database connection once, up front.
        createConnection();
        while (true) {
            // Keep taking messages queued by ConnectionHandler; for each one
            // I run a number of queries (selects, inserts and updates).
            message = databaseQueue.take();
        }
    }

    void createConnection() {
        System.out.println("Create Connection");
        connCreated = new Date();
        try {
            dbconn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/test1?" + "user=user1&password=*******");
            dbconn.setAutoCommit(false);
        } catch (Throwable ex) {
            ex.printStackTrace(System.out);
        }
    }
}
public static void main(String[] args) {
    new Thread(new DatabaseProcessor()).start(); // start the DatabaseProcessor
    // initiate the socket
}
As far as I understand, you are managing a client-server socket connection and passing messages through a queue. If I also got it right, you are creating a thread for each new message on the queue.
Considering that plenty of messages will be sent and read, I recommend declaring the method(s) in your threads synchronized, so that you won't need to open and close a connection each time data is received (this refers to your second approach). Synchronized methods are usually the simplest way to handle a surge of shared data that several threads may modify at the same time.
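For illustration, a minimal sketch of what this suggests (the DatabaseWriter class, the messages table, and how the connection is first opened are hypothetical): one shared object holds the single connection, and its synchronized method serializes access from all handler threads:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

class DatabaseWriter {
    private final Connection conn; // opened once and shared by all threads

    DatabaseWriter(Connection conn) {
        this.conn = conn;
    }

    // synchronized: only one handler thread can write at a time,
    // so the shared connection is never used concurrently.
    public synchronized void store(String message) throws SQLException {
        try (PreparedStatement ps =
                 conn.prepareStatement("INSERT INTO messages VALUES (?)")) {
            ps.setString(1, message);
            ps.executeUpdate();
        }
    }
}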
You can use connection pooling to get the best of both worlds: you are not limited to a single thread, and you also do not need to open connections for each request. Have a look at Apache DBCP.
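For example, a minimal sketch assuming Commons DBCP 2 (the pool size, URL, and credentials are illustrative); close() on a pooled connection returns it to the pool rather than tearing it down:

import java.sql.Connection;
import java.sql.SQLException;
import org.apache.commons.dbcp2.BasicDataSource;

BasicDataSource pool = new BasicDataSource();
pool.setUrl("jdbc:mysql://localhost:3306/test1");
pool.setUsername("user1");
pool.setPassword("*******");
pool.setMaxTotal(10); // at most 10 physical connections

// Inside each worker's loop: borrow a connection per message, return it when done.
try (Connection conn = pool.getConnection()) {
    // run the selects/inserts/updates for one message here
} catch (SQLException e) {
    e.printStackTrace();
}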
This approach is fine, except you can create a DB pool using c3p0. Also use a ThreadPoolExecutor for maintaining your thread pool.
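A minimal sketch of that combination, reusing the asker's databaseQueue-draining DatabaseProcessor (the pool size of 4 is illustrative):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Run several DatabaseProcessor loops instead of one, so a surge of
// messages is drained by multiple workers, each with its own connection.
ExecutorService dbWorkers = Executors.newFixedThreadPool(4);
for (int i = 0; i < 4; i++) {
    dbWorkers.submit(new DatabaseProcessor());
}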
I'm creating a game with a server and multiple clients.
I'm using Kryonet for networking, and each connection has its own listener where it receives packets.
These listeners are called on a background Kryonet thread, and I can't block them because that would affect all of the users.
I have created my database and configured a ConnectionPool in a synchronized singleton class:
private static final BasicDataSource dataSource = new BasicDataSource();

static {
    dataSource.setDriverClassName("com.mysql.jdbc.Driver");
    dataSource.setUrl("jdbc:mysql://111.111.111.111/db");
    dataSource.setUsername("server");
    dataSource.setPassword("serverpass");
}

public Connection getConnection() throws SQLException {
    return dataSource.getConnection();
}
and now I need to execute some queries.
Here comes my issue. As we know, a query can take a long time to return a result, so it's totally unacceptable to execute it on a Kryonet thread (when a packet is received).
For example, when a user sends his RegistrationPacket I need to query the database and return the result to him in a packet. When I receive the packet, I need to move the work to the background and, from there, send the result to the client.
Here comes my question:
How to handle making database queries in background using Java?
Should I use Executors? Threads? As far as I know, opening a thread for each connection is a bad idea, because (200+ workers).equals(disaster). If someone could help me I would be grateful! :)
With jasync-sql you can do something like this:
// Connect to DB
Connection connection = new MySQLConnection(
    new Configuration(
        "server",
        "111.111.111.111",
        3306,
        "serverpass",
        "db"
    )
);
CompletableFuture<?> connectFuture = connection.connect();
// Wait for the connection to be ready
// ...
// Execute query
CompletableFuture<QueryResult> future = connection.sendPreparedStatement("select 0");
// Close the connection
connection.disconnect().get();
See the lib itself for more details; it is an async MySQL driver based on Netty.
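To keep the Kryonet thread free, you could chain a callback onto the returned future instead of blocking on get(). A sketch (the query and the send-back step are illustrative, not part of the lib's API):

// The callback runs on the driver's thread pool when the result arrives,
// so the Kryonet listener thread is never blocked.
connection.sendPreparedStatement("SELECT name FROM users WHERE id = ?",
        java.util.Arrays.asList(42))
    .thenAccept(queryResult -> {
        // build the response packet from queryResult and send it to the client here
    });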
I'm trying to develop a Java chat server, and I don't know which of these is the better solution:
Create a socket for each client and keep it open
Set an interval in the client application and query a database to check if there are messages for the client.
Which is the best way to go for this situation?
I suggest you learn Serialization if you want to develop an application with UI support. Moreover, you have to create a socket for each client, especially on the server side. The server should have threads, which you might call client handlers, to deal with clients' requests.
Querying a database to check for received messages is meaningless, but you could save all messages in a database. My advice is, if you are going to use a database (and I suggest you do), use it for handling client registration. Whenever a client sends a login request to the server, a thread will check whether that client already has an account in the database; if not, you can show a simple registration form. Logically, every client will also have a friend list, which you should keep in the database.
EDIT: The Server will look like this.
public class Server {
    public static void main(String[] args) {
        try {
            ServerSocket s = new ServerSocket(8087);
            System.out.println("Server Started");
            while (true) {
                Socket incoming = s.accept();
                System.out.println(incoming.getInetAddress().getHostAddress() + " was connected!");
                new ClientHandler(incoming).start();
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
So the main point is that the server should never stop listening on the specified port.
The ClientHandler is a thread created on the server side:
public class ClientHandler extends Thread {
    private Socket incoming;

    public ClientHandler(Socket incoming) {
        this.incoming = incoming;
    }

    @Override
    public void run() {}
}
The server passes the accepted socket into the ClientHandler's constructor and calls its start() method to run it.
Actually, you do not have to keep a connection open for eternity for each client! All you have to do is store the client's state server-side and then communicate over any connection. You can then reclaim resources and use them more wisely when a client doesn't seem to be active for a while.
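A minimal sketch of that idea (ClientState and clientId are hypothetical): key the state by a stable client id in a shared map, so the socket itself can be closed and reopened at will:

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// State survives disconnects because it is keyed by client id, not by socket.
ConcurrentMap<String, ClientState> states = new ConcurrentHashMap<>();

// On any (re)connection, look the client up and resume where it left off.
ClientState state = states.computeIfAbsent(clientId, id -> new ClientState(id));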
I am passing a ResultSet object to each thread. Each thread connects to the database and inserts data. Up to thread 110 it works fine; once it crosses thread 111 it throws the above exception.
I am using Oracle 11g.
My sample thread code is:
class MyThreadClass implements Runnable {
    public Connection connection;
    public Statement statement2;
    public ResultSet rs2;
    public String cookie;

    public MyThreadClass(ResultSet rs1) {
        rs2 = rs1;
    }

    public void run() {
        try {
            cookie = rs2.getString("COOKIE");
            String driver = "oracle.jdbc.driver.OracleDriver";
            String url = "jdbc:oracle:thin:@127.0.0.1:1521:xx";
            // ... connection and statement2 are created here ...
            statement2.executeUpdate("INSERT INTO visit_header VALUES ('" + cookie + "')");
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
I don't understand how I should handle this exception.
Your multi-threaded application is opening too many Connections/Sessions. Hence, the listener is dropping and blocking new connections for a while.
Check your DB resource usage first:
SELECT * FROM v$resource_limit WHERE resource_name IN ('processes','sessions');
Check to see if your MAX_UTILIZATION for either your Processes or Sessions is getting too close to the LIMIT_VALUE. If yes, you should either:
Use DB Connection pooling to share Connection objects between threads. Or,
Increase the number of processes/sessions that Oracle can service simultaneously.
Actually, Connection Pooling (#1) should always be done. An application cannot scale up otherwise. Check Apache Commons DBCP for details. For #2, open a new SQL*Plus session as SYSTEM and run:
ALTER system SET processes=<n-as-per-number-of-threads> scope=spfile;
to increase backend concurrency. Then RESTART the Database. IMPORTANT!
I guess the database just doesn't accept more connections from your host. If I understand your question correctly, you are creating perhaps 100 threads, each of which connects to the database within a short time. Maybe you don't even close the connections correctly, or the accesses last so long that a huge number of connections stay open. The database has a limit on how many connections it accepts.
You should definitely reduce the number of connections with some clever technique. Maybe reduce the number of concurrent threads and/or use a connection pool.
Try this solution at your end; it worked for me.
Close the connection in a try/catch block, and just after closing the connection, write:
Thread.sleep(1000);
In this case you can write it as:
finally {
    try {
        if (conn != null && !conn.isClosed()) {
            conn.close();
            Thread.sleep(1000);
        }
    } catch (SQLException | InterruptedException e) {
        e.printStackTrace();
    }
}
I saw plenty of similar questions on SO but hardly any of them have Socket in the picture. So please take time to read the question.
I have a server app (using ServerSocket) which listens for requests; when a client attempts to connect, a new thread is created to serve the client (and the server goes back to listening for new requests). Now, I need to respond to one client based on what another client sent to the server.
Example:
ServerSocket listening for incoming connections.
Client A connects; a new thread is created to serve A.
Client B connects; a new thread is created to serve B.
A sends message "Hello from A" to the Server.
Send this message as a response to Client B.
I'm new to this whole "inter-thread communication" thing. Obviously, the situation mentioned above sounds dead simple, but I'm describing it to get a hint, as I'll be exchanging a huge amount of data among clients, keeping the server as the intermediary.
Also, what if I want to keep a shared object limited to, say, 10 particular clients? So that when the 11th client connects to the server, I create a new shared object, which will be used to exchange data among the 11th through 20th clients, and so on for every set of 10 clients.
What I tried (foolish, I guess):
I have a public class in which the object supposed to be shared is declared public static, so that I can use it globally without instantiating it, like MyGlobalClass.SharedMsg.
That doesn't work; I was unable to send data received in one thread to the other.
I'm aware that there is an obvious locking problem, since if one thread is writing to an object, another can't access it until the first thread is done writing.
So what would be an ideal approach to this problem?
Update
Because of the way I create threads to serve incoming connection requests, I can't see how to share the same object among the threads, since using a global object as mentioned above doesn't work.
Following is how I listen for incoming connections and create serving threads dynamically.
// Method of server class
public void startServer() {
    if (!isRunning) {
        try {
            isRunning = true;
            while (isRunning) {
                try {
                    new ClientHandler(mysocketserver.accept()).start();
                } catch (SocketTimeoutException ex) {
                    // Nothing to do here; go back to listening.
                } catch (SocketException ex) {
                    // Not handled: I stop the server via ServerSocket's close()
                    // method, which throws a SocketException anyway.
                }
            }
        } catch (Exception ex) {
            ex.printStackTrace();
        }
    } else {
        System.out.println("Server Already Started!");
    }
}
And the ClientHandler class.
public class ClientHandler extends Thread {
    private Socket client = null;
    private ObjectInputStream in = null;
    private ObjectOutputStream out = null;

    public ClientHandler(Socket client) {
        super("ClientHandler");
        this.client = client;
    }

    // This run() is common to every client that connects, and that's where the problem is.
    public void run() {
        try {
            in = new ObjectInputStream(client.getInputStream());
            out = new ObjectOutputStream(client.getOutputStream());

            // Message received from this thread.
            String msg = in.readObject().toString();
            System.out.println("Client # " + client.getInetAddress().getHostAddress() + " Says : " + msg);

            // Response to this client.
            out.writeObject("Message Received");

            out.close();
            in.close();
            client.close();
        } catch (Exception ex) {
            ex.printStackTrace();
        }
    }
}
I believe that, with the way I'm creating a dynamic thread for each client that connects, sharing the same data source through a global object is not possible: the run() body above is exactly the same for every client that connects, so this one method is both consumer and producer. What changes should I make so that I can still create a thread per connection and share the same object?
You probably want a queue for communication between each pair of clients. Each queue will be the 'pipeline' for data pushed from one client to the other.
You would use it like so (pseudo code):
Thread 1:
Receive request from Client A, with message for Client B
Put message on back of concurrent Queue A2B
Respond to Client A.
Thread 2:
Receive request from Client B.
Pop message from front of Queue A2B
Respond to Client B with message.
You might also want it to be generic, so you have an AllToB queue that many clients (and thus many threads) can write to.
Classes of note: ConcurrentLinkedQueue, ArrayBlockingQueue.
If you want to limit the number of messages, then ArrayBlockingQueue with its capacity constructor allows you to do this. If you don't need the blocking functionality, you can use the methods offer and poll rather than put and take.
I wouldn't worry about sharing the queues; it makes the problem significantly more complicated. Only do this if you know there is a memory usage problem you need to address.
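A minimal Java translation of the pseudo code above, assuming a single A-to-B pipeline (the queue name and capacity are illustrative):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Shared pipeline from client A's handler thread to client B's handler thread.
BlockingQueue<String> queueA2B = new ArrayBlockingQueue<>(100);

// Thread 1 (serving A): queue the message, then respond to A.
queueA2B.offer("Hello from A"); // returns false if the queue is full

// Thread 2 (serving B): pick up the message, if any, and respond to B with it.
String msg = queueA2B.poll();   // returns null if the queue is empty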
EDIT: Based on your update:
If you need to share a single instance between all dynamically created instances you can either:
Make a static instance.
Pass it into the constructor.
Example of 1:
public class ClientHandler extends Thread {
    public static final Map<ClientHandler, BlockingQueue<String>> messageQueues
            = new ConcurrentHashMap<>();

    <snip>

    public ClientHandler(Socket client) {
        super("ClientHandler");
        this.client = client;
        // Note: Bad practice to reference 'this' in a constructor.
        // This can throw an error based on what the put method does.
        // As such, if you are to do this, put it at the end of the method.
        messageQueues.put(this, new ArrayBlockingQueue<>(100)); // a capacity is required
    }

    // You can now access this in the run() method like so:
    // Get messages for the current client:
    //     messageQueues.get(this).poll();
    // Send messages to the thread for another client:
    //     messageQueues.get(someClient).offer(message);
A couple of notes:
The messageQueues object should really contain some sort of identifier for the client rather than an object reference that is short lived.
A more testable design would pass the messageQueues object into the constructor to allow mocking.
I would probably recommend using a wrapper class for the map, so you can just call offer with 2 parameters rather than having to worry about the map semantics.
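For instance, a sketch of such a wrapper (the MessageRouter name and the capacity of 100 are hypothetical), so callers never touch the map directly:

import java.util.Map;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;

public class MessageRouter {
    private final Map<String, BlockingQueue<String>> queues = new ConcurrentHashMap<>();

    // Queue a message for the given client, creating its queue on first use.
    public void offer(String clientId, String message) {
        queues.computeIfAbsent(clientId, id -> new ArrayBlockingQueue<>(100))
              .offer(message);
    }

    // Fetch the next message for the given client, or null if there is none.
    public String poll(String clientId) {
        BlockingQueue<String> q = queues.get(clientId);
        return q == null ? null : q.poll();
    }
}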
I'm writing a Java app that sends and receives messages from a WebSocket server. When the app receives a message it might take some time to process it, so I'm trying to use multiple threads to receive messages. To my understanding Grizzly has selector threads as well as worker threads. By default there is 1 selector thread and 2 worker threads; in the following example I'm trying to increase those to 5 and 10 respectively.
In the example below I'm pausing the thread that calls the onMessage method for 10 seconds to simulate processing of the incoming information. The information comes in every second, so 10 threads should be able to handle the amount of traffic.
When I profile the run, only 1 selector thread is running and 2 worker threads. Furthermore, messages are only received at 10-second intervals, indicating that only 1 thread is handling the traffic, which I find very odd. During profiling, one worker thread, e.g. Grizzly(1), receives the first message sent. Then 10 seconds later Grizzly(2) receives the second message; after that Grizzly(2) keeps receiving the messages and Grizzly(1) does not perform any actions.
Can someone please explain this odd behavior and how to change it to e.g. 10 threads constantly waiting in line for a message?
Main:
public static void main(String[] args) {
    WebsocketTextClient client = new WebsocketTextClient();
    client.connect();
    for (int i = 0; i < 60; i++) {
        client.send("Test message " + i);
        try {
            Thread.sleep(1000);
        } catch (Exception e) {
            System.out.println("Error sleeping!");
        }
    }
}
WebsocketTextClient.java:
import java.net.URI;
import javax.websocket.ClientEndpointConfig;
import javax.websocket.EndpointConfig;
import javax.websocket.Session;
import javax.websocket.Endpoint;
import javax.websocket.MessageHandler;
import org.glassfish.tyrus.client.ClientManager;
import org.glassfish.tyrus.client.ThreadPoolConfig;
import org.glassfish.tyrus.container.grizzly.client.GrizzlyClientProperties;

public class WebsocketTextClient {
    private ClientManager client;
    private ClientEndpointConfig clientConfig;
    WebsocketTextClientEndpoint endpoint;

    public WebsocketTextClient() {
        client = ClientManager.createClient();
        client.getProperties().put(GrizzlyClientProperties.SELECTOR_THREAD_POOL_CONFIG,
                ThreadPoolConfig.defaultConfig().setMaxPoolSize(5));
        client.getProperties().put(GrizzlyClientProperties.WORKER_THREAD_POOL_CONFIG,
                ThreadPoolConfig.defaultConfig().setMaxPoolSize(10));
    }

    public boolean connect() {
        try {
            clientConfig = ClientEndpointConfig.Builder.create().build();
            endpoint = new WebsocketTextClientEndpoint();
            client.connectToServer(endpoint, clientConfig, new URI("wss://echo.websocket.org"));
        } catch (Exception e) {
            return false;
        }
        return true;
    }

    public boolean disconnect() {
        return false;
    }

    public boolean send(String message) {
        endpoint.session.getAsyncRemote().sendText(message);
        return true;
    }

    private class WebsocketTextClientEndpoint extends Endpoint {
        Session session;

        @Override
        public void onOpen(Session session, EndpointConfig config) {
            System.out.println("Connection opened");
            this.session = session;
            session.addMessageHandler(new WebsocketTextClientMessageHandler());
        }
    }

    private class WebsocketTextClientMessageHandler implements MessageHandler.Whole<String> {
        @Override
        public void onMessage(String message) {
            System.out.println("Message received from " + Thread.currentThread().getName() + " " + message);
            try {
                Thread.sleep(10000);
            } catch (Exception e) {
                System.out.println("Error sleeping!");
            }
            System.out.println("Resuming");
        }
    }
}
What you appear to be asking is for WebSockets to be able to receive multiple messages sent by the same client connection, to process those messages in separate threads, and to send the responses when they are ready - which means, potentially out of order. This scenario can only happen if the client is multi-threaded.
To deal with multiple threads on the same WebSocket session would generally require the ability for WebSockets to multiplex the data going to and from the client. This is not currently a feature of WebSockets, but could certainly be built on top of it. However, multiplexing those client and server threads on a single channel introduces a fair bit of complexity, because you need to stop all the client and server threads from inadvertently overwriting or starving one another.
The Java spec for MessageHandler is perhaps a little ambiguous about the threading model;
https://docs.oracle.com/javaee/7/api/javax/websocket/MessageHandler.html says:
Each web socket session uses no more than one thread at a time to call its MessageHandlers.
But the important term here is "socket session". If your client is sending multiple messages within the same WebSocket session, the server side handler will execute within a single thread. This doesn't mean you can't do lots of interesting stuff within the thread, particularly if you're using Input/OutputStreams (or Writers) on both ends. It does mean that communication with the client is mediated by just one thread. If you want to multiplex the communication, you'd have to write something on top of the socket to do so; that would include developing your own threading model for dispatching the requests.
An easier solution would be to create a new Session for each client request. Each client request starts a session (ie, TCP connection), sends the data, and waits for the result. This gives you multiple MessageHandler threads - one per session, per the spec.
This is the most straightforward way to get multi-threading on the server side; any other approach will tend to need a multiplexing mechanism - which, depending on your use case, is perhaps not worth the effort, and certainly carries some complexity and risk.
If you are concerned about the number of sessions (TCP/HTTP connections) between client/s and server/s, you could consider creating a pool of Sessions on the client side, and use each client Session one at a time, returning the session to the pool whenever the client is done with it.
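A minimal sketch of such a client-side pool (the capacity of 10 is illustrative), built on a blocking queue so a caller waits when every session is busy:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import javax.websocket.Session;

// Filled with pre-opened sessions at startup.
BlockingQueue<Session> sessionPool = new ArrayBlockingQueue<>(10);

// Borrow a session, use it, and always return it to the pool.
// Note: take() and put() throw InterruptedException, which the caller must handle.
Session s = sessionPool.take(); // blocks if every session is in use
try {
    s.getAsyncRemote().sendText("request");
} finally {
    sessionPool.put(s);
}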
Finally, perhaps not directly relevant: I found that when I used Payara Micro to serve the WebSocket endpoint, I needed to set this:
<resources>
    ...
    <managed-executor-service maximum-pool-size="200" core-pool-size="10"
        long-running-tasks="true" keep-alive-seconds="300" hung-after-seconds="300"
        task-queue-capacity="20000" jndi-name="concurrent/__defaultManagedExecutorService"
        object-type="system-all"></managed-executor-service>
</resources>
The default ManagedExecutorService only provides a single thread. This appears to be the case in Glassfish as well. This had me running around for hours thinking that I didn't understand the threading model, when it was just the pool size that was confusing me.