I have a very short Java application that just opens a connection to a remote MySQL database, reads some data, prints it, and exits. The most time-consuming part of the application is the database connection.
Currently I have only a single thread, and my only concern is to save the time of opening the connection.
I thought of several ways to make it faster, but it turned out they do not help:
Connection Pooling - doesn't help because the pool lives only during a single run of the application. When the application terminates, the pool is gone, and when I re-run the application, I have to re-open all the connections in the pool.
mysql-proxy - connects only to the local server (see: mysql-proxy for a remote MySQL server)
TCP/IP server - I thought of running a local TCP/IP server that would keep a persistent open connection and hand it to a TCP/IP client on request. However, Connection objects cannot be serialized, so I have no way to pass the Connection object from the server to the client.
Any other option?
Generally, connecting to a DB is one of the most time-consuming operations. If the application has to be started and stopped each time, there is little you can do.
One option might be to use connection pooling in a web server and have your app talk to that web server using JSON.
You said you have a very short application, so your third option might work if you put the database logic into your "option 3" TCP/IP server and just forward the results to the connecting client. This is a typical application-server pattern.
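A rough, illustrative sketch of that pattern follows, assuming a MySQL server at qa-srv:3308 as in the question; the database name, credentials, local port and the one-query-per-line protocol are all made up for the example, and there is no authentication or error handling:

```java
import java.io.*;
import java.net.*;
import java.sql.*;

// Minimal "local application server": holds one persistent connection to the
// remote database and runs queries on behalf of short-lived client processes.
public class LocalQueryServer {
    public static void main(String[] args) throws Exception {
        // Opened once when the server starts, reused for every client request.
        Connection db = DriverManager.getConnection(
                "jdbc:mysql://qa-srv:3308/mydb", "user", "password");

        try (ServerSocket server = new ServerSocket(5050)) {   // arbitrary local port
            while (true) {
                try (Socket client = server.accept();
                     BufferedReader in = new BufferedReader(
                             new InputStreamReader(client.getInputStream()));
                     PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {

                    String sql = in.readLine();                // one SELECT per request
                    try (Statement st = db.createStatement();
                         ResultSet rs = st.executeQuery(sql)) {
                        int cols = rs.getMetaData().getColumnCount();
                        while (rs.next()) {
                            StringBuilder row = new StringBuilder();
                            for (int i = 1; i <= cols; i++) {
                                if (i > 1) row.append('\t');
                                row.append(rs.getString(i));
                            }
                            out.println(row);                  // forward result rows as text
                        }
                    }
                }
            }
        }
    }
}
```

The short-lived application would then only open a cheap local socket instead of a new database connection.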
Another thing to consider is network lookup time (https://stackoverflow.com/q/3641155/1055715), which Marc B mentioned in his comment.
It turns out the best solution is to use mysql-proxy with a script that handles connection pooling (a combination of my first two options). I found one such script here:
http://forge.mysql.com/tools/tool.php?id=151
It was probably written for an older version of mysql-proxy, so I had to fix it (if anyone needs the fixed version, write to me).
It works like a charm. I run the exact same application as before; the only change is in the connection string: instead of connecting to "qa-srv:3308" (the remote server), I connect to "127.0.0.1:4040" (the proxy server).
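For reference, this is roughly all the application does; the database name, credentials, table and columns below are placeholders, the only real change being the host and port in the JDBC URL:

```java
import java.sql.*;

public class QuickReport {
    public static void main(String[] args) throws Exception {
        // 127.0.0.1:4040 is the local mysql-proxy, which holds the pooled
        // connections to qa-srv:3308. Previously the URL pointed at qa-srv:3308.
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:mysql://127.0.0.1:4040/mydb", "user", "password");
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SELECT id, name FROM some_table")) {
            while (rs.next()) {
                System.out.println(rs.getInt(1) + "\t" + rs.getString(2));
            }
        }
    }
}
```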
Related
I am working on an application that connects to a PostgreSQL database and allows access from different computers connected to the same local network.
It's already working, but when it is used on more than one computer at the same time, the server disconnects the current computer in order to serve the new connection.
Is there some way to make the PostgreSQL server serve more than one computer at a time?
I think that maybe I'm doing something wrong in the way I'm using the server.
When the application starts, I set the pgdata and pgport variables and check the server status with pg_isready; if there is no answer, I use pg_ctl start.
I'm using port 3389.
EDIT:
My problem was a logic error in the application: it was trying to start a new server with each connection instead of using the one that was already running.
The configuration file 'C:\Program Files\PostgreSQL\Y.X\data\postgresql.conf' has a parameter called max_connections, which controls the maximum number of connections allowed to your DB. Change it and you can allow more connections after restarting Postgres.
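If you want to check the value the running server is actually using, you can query it over JDBC; the database name and credentials below are placeholders, and port 3389 is taken from the question:

```java
import java.sql.*;

public class MaxConnectionsCheck {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:postgresql://localhost:3389/mydb", "postgres", "password");
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SHOW max_connections")) {
            if (rs.next()) {
                System.out.println("max_connections = " + rs.getString(1));
            }
        }
    }
}
```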
I'm currently developing a Java WebSocket application that is deployed on WildFly 10. I cannot post the code, but here's the logic:
Multiple threads poll a database every 5 seconds (a select query, reusing a PreparedStatement after closing the previous ResultSet) and send the results via WebSocket to all connected clients.
I have configured a datasource that connects to a MySQL server (localhost).
The application runs fine for a while, then crashes, and the log fills up with 'Unable to get managed connection from datasource' errors. The WebSocket also fails with 'ClosedChannelException'.
Services on the same server that open a connection and close it immediately work fine. However, there are 5-6 threads in the code in question that must reuse their connection every 5 seconds, so each thread is given a dedicated connection that is only torn down when the application context is destroyed.
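For clarity, here is a rough sketch of the kind of polling loop described above, not the actual code; the class name, query, and broadcast stub are invented for illustration:

```java
import java.sql.*;
import java.util.concurrent.*;
import javax.sql.DataSource;

public class TablePoller implements Runnable {
    private final Connection conn;          // dedicated connection, held for the app's lifetime
    private final PreparedStatement stmt;   // reused on every poll

    public TablePoller(DataSource ds, String sql) throws SQLException {
        this.conn = ds.getConnection();     // never returned to the pool, by design
        this.stmt = conn.prepareStatement(sql);
    }

    @Override
    public void run() {
        try (ResultSet rs = stmt.executeQuery()) {   // previous ResultSet closed before reuse
            while (rs.next()) {
                broadcast(rs.getString(1));          // push to all connected WebSocket sessions
            }
        } catch (SQLException e) {
            e.printStackTrace();
        }
    }

    private void broadcast(String payload) {
        // session.getAsyncRemote().sendText(payload) for each open session (omitted here)
    }

    // One poller per query, each scheduled every 5 seconds.
    public static void schedule(DataSource ds, String sql) throws SQLException {
        ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);
        scheduler.scheduleAtFixedRate(new TablePoller(ds, sql), 0, 5, TimeUnit.SECONDS);
    }
}
```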
Another thing: after the application fails, disabling and re-enabling it makes it work again, but for an even shorter time. Only a reboot gets it working properly again.
The same project works without error on GlassFish.
Somehow, WildFly seems to periodically reset either the DB connections or all TCP connections altogether.
Is there a setting relevant to WildFly's behaviour towards threads? I have verified that only as many threads as intended are actually created.
Any help would be appreciated.
Edit: This application works well on my local machine. When I deploy it on the remote server, it works for a while (3 hours max) before failing altogether.
I use NetBeans 8 to compile, if that helps.
I am in the process of building a client-server application and I would really like some advice on how to design the server-database connection part.
Let's say the basic idea is the following:
Client authenticates himself on the server.
Client sends a request to server.
Server stores client's request to the local database.
In terms of Java Objects we have
Client Object
Server Object
Database Object
So when a client connects to the server, a session is created between them through which all the data is exchanged. Now what bothers me is whether I should create a database object/connection for each client session or one database object that handles all requests.
Thus the two options are:
Create one database object that handles all client requests
For each client-server session create a database object that is used exclusively for the client.
Going with option 1, I guess all methods would have to be synchronized to prevent one client thread from overwriting another's variables. However, synchronizing will be time-consuming when there are many concurrent requests, since each request is queued until the one currently running completes.
Going with option 2 seems more appropriate, but creating a database object for every client-server session consumes memory, and opening a database connection for each client could again become a problem when the number of concurrently connected users is large.
These are just my thoughts, so please add any comments that it may help on the decision.
Thank you
Option 3: use a connection pool. Every time you want to connect to the database, you get a connection from the pool. When you're done with it, you close the connection to give it back to the pool.
That way, you can
have several clients accessing the database concurrently (your option 1 doesn't allow that)
have a reasonable number of connections opened and avoid bringing the database to its knees or running out of available connections (your option 2 doesn't allow that)
avoid opening new database connections all the time (your option 2 doesn't allow that). Opening a connection is a costly operation.
Basically all server apps use this strategy. All Java EE servers come with a connection pool. You can also use it in Java SE applications, by using a pool as a library (HikariCP, Tomcat connection pool, etc.)
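As a concrete illustration, here is a minimal sketch using HikariCP, one of the pools mentioned above; the JDBC URL, credentials and table are placeholders:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class PooledServer {
    public static void main(String[] args) throws Exception {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:mysql://db-host:3306/app");   // placeholder URL/credentials
        config.setUsername("app");
        config.setPassword("secret");
        config.setMaximumPoolSize(10);                        // upper bound on open connections

        try (HikariDataSource pool = new HikariDataSource(config)) {
            // Each client request borrows a connection and returns it with close().
            try (Connection conn = pool.getConnection();
                 PreparedStatement ps = conn.prepareStatement(
                         "INSERT INTO requests (client_id, payload) VALUES (?, ?)")) {
                ps.setLong(1, 42L);
                ps.setString(2, "example request");
                ps.executeUpdate();
            }
        }
    }
}
```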
I would suggest a third option: database connection pooling. You create a specified number of connections up front and hand out the first free connection as soon as one becomes available. This gives you the best of both worlds: there will almost always be a free connection available quickly, and you keep the number of connections to the database at a reasonable level. There are plenty of out-of-the-box Java connection pooling solutions, so have a look online.
Just use connection pooling and go with option 2. There are quite a few - C3P0, BoneCP, DBCP. I prefer BoneCP.
Neither of these is a good solution.
Problem with Option 1:
You already stated the problems with synchronization when there are multiple threads. Apart from that, there are many other problems, such as transaction management (when are you going to commit your connection?) and security (all clients can see pre-committed values), just to name a few.
Problem with Option 2:
Two of the biggest problems with this are:
It takes a lot of time to create a new connection each and every time. So performance will become an issue.
Database connections are extremely expensive resources and should be used in limited numbers. If you start creating DB connections for every client, you will soon run out of them, even though most of those connections will not be actively used. You will also see your application's performance drop.
The Connection Pooling Option
That is why almost all client-server applications go with the connection pooling solution. You have a set of connections in the pool which are obtained and released appropriately. Almost all Java frameworks have sophisticated connection pooling solutions.
If you are not using any JDBC framework (most use Spring JDBC/Hibernate), read the following article:
http://docs.oracle.com/javase/jndi/tutorial/ldap/connect/pool.html
If you are using any of the popular Java Frameworks like Spring, I would suggest you use Connection Pooling provided by the framework.
First of all, we are running a Java web application on WAS 5.1, with an Oracle database behind it. The problem we're facing is really simple, but after a couple of hours of Google searching, I decided to ask you.
We have an application running on WAS. When we start the server, WAS sets up its DataSource so that it points to the database. Everything works fine, except when the DBAs have to reboot the database server. When they do, the data source is no longer valid and we have to manually restart all the servers, which we are currently trying to correct, if possible. We need to find a way to do it because we have 3 pre-production environments for our application, and each has two servers associated with it: one for the application and the other for a report-generator web service. So, when the DBAs want to reboot the database server (and they usually don't tell us!), we have to reboot six servers. I was wondering if, in Java, there is a way to reset the data source so that we don't need to restart the servers.
For your information, WebSphere is v5.1 and Oracle is 9i, with Java 1.4.2.17.
We also use RAD:
Version: 6.0.1
Build id: 20050725_1800
You should configure your application server to always test the connection before leasing it out to a client. I'm not that familiar with WebSphere, but in WebLogic you can set a JDBC SQL statement such as select 1 from dual, and the container removes stale connections from the connection pool.
Here is a link on how to do it in WebSphere:
http://www-01.ibm.com/support/docview.wss?uid=swg21439688
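For what it's worth, the same "test before use" idea can also be done by hand in plain JDBC while you experiment; this is only an illustrative sketch (kept Java 1.4-compatible, since the question mentions Java 1.4.2):

```java
import java.sql.*;

public final class ConnectionCheck {
    // Returns true if the handle is still live; false typically means the pool
    // handed out a stale connection after the database was restarted.
    public static boolean isUsable(Connection conn) {
        Statement st = null;
        try {
            st = conn.createStatement();
            st.executeQuery("select 1 from dual");
            return true;
        } catch (SQLException e) {
            return false;   // stale handle: discard it and get a fresh connection
        } finally {
            if (st != null) {
                try { st.close(); } catch (SQLException ignore) { }
            }
        }
    }
}
```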
Based on what I read in your note, you should be receiving a stale connection exception, as WAS has stale handles in its pool after the DB has been restarted.
The data source can be configured to purge the entire pool once a stale connection is detected. The default policy is to purge only the individual connection.
Adopting this would save you from having to restart your WAS servers.
There are a number of resources in this space
http://www-01.ibm.com/support/docview.wss?uid=swg21063645
HTH
Manglu
If I connect with Java to MySQL on my localhost server, the connection is made instantaneously.
But if I connect from outside localhost, from a PC on the same network (192.168.1.100), it is very slow (4-5 seconds).
And if I connect from a public IP to my MySQL server, it is also very slow (6 seconds or more).
The "why" is already been answered. It's just the network latency.
You're probably also interested in how to "fix" it. The answer is: use a connection pool. If you're running a Java web application, use the connection pooling facilities provided by the web server; to take Tomcat as an example, check this manual. If you're running a Java desktop application, use a decent connection pool implementation like c3p0 (tutorial here).
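For the desktop case, a minimal c3p0 setup might look like the sketch below; the driver class, URL and credentials are placeholders:

```java
import java.sql.Connection;
import com.mchange.v2.c3p0.ComboPooledDataSource;

public class DesktopPool {
    public static void main(String[] args) throws Exception {
        ComboPooledDataSource ds = new ComboPooledDataSource();
        ds.setDriverClass("com.mysql.jdbc.Driver");      // placeholder driver/URL/credentials
        ds.setJdbcUrl("jdbc:mysql://db-host:3306/app");
        ds.setUser("app");
        ds.setPassword("secret");
        ds.setMaxPoolSize(5);

        // The first getConnection() pays the full connection cost; subsequent
        // calls reuse pooled connections and return almost immediately.
        Connection conn = ds.getConnection();
        try {
            // ... run queries ...
        } finally {
            conn.close();   // returns the connection to the pool, does not close it
        }
    }
}
```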
Because your computer needs time to send packets to external servers and they need time to send packets back. It's called network latency, and is not an issue with Java specifically, but a general network issue.
It will always take longer to make a connection across the network than to make the same connection locally. However, assuming you have a fairly typical local network, 4-5 seconds sounds a bit extreme. My guess (and it is just a guess) would be that the majority of the extra time is being consumed by network name resolution (i.e. DNS and/or netbios).
I would suggest that you try the connection using a numeric IP address, rather than a name.
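For example, if the current JDBC URL uses a hostname, you could time the same connection made with the server's numeric IP instead; the host, database and credentials here are placeholders:

```java
import java.sql.Connection;
import java.sql.DriverManager;

public class IpVsHostname {
    public static void main(String[] args) throws Exception {
        // Name-based URL (may pay for DNS/NetBIOS lookups on every new connection):
        //   jdbc:mysql://my-db-host:3306/app
        // Numeric-IP URL for comparison:
        long start = System.currentTimeMillis();
        Connection conn = DriverManager.getConnection(
                "jdbc:mysql://192.168.1.50:3306/app", "user", "password");
        System.out.println("Connected in " + (System.currentTimeMillis() - start) + " ms");
        conn.close();
    }
}
```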
Network latency plus connection creation time would be my guess. I don't know what else you have between the client machine and the MySQL server.
4 seconds to connect points to a DNS problem; it cannot be just pure network latency.
Try starting MySQL server with "skip-name-resolve" parameter to skip resolving client's IP into hostname. Prior to that, make sure your grant tables are based on IPs and 'localhost' instead of symbolic names.