I am having lots of connection problems when working with Microsoft SQL Server and creating the connection from Java.
I need to communicate with the DB on average every 2 seconds, for a variety of things. Most of my queries complete within 500 milliseconds.
About every 15 minutes or so, the connection drops and one of my queries fails. I have a retry mechanism that always succeeds within 3 tries.
My only problem is that a 500 millisecond query turns into 2 seconds or longer when there is a connection drop.
What would be the best approach for connecting to SQL Server? The way I do it now is:
create Connection
create Statement
execute it
and close both the statement and the connection
Or should I keep the connection open and only create a new statement for each of my queries?
Let us take one issue at a time; first, the Connection.
A connection to a database is resource intensive to create and to hold on to. It would be wiser to create a pool of connections and hold on to them until your application stops. A connection pool manager may be of great help in this context. I have personally used Apache's DBCP and found it to be convenient and efficient. There could be other alternatives too. When you need a connection, borrow one from the pool and return it when you are done with it.
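For illustration, here is a minimal sketch of that borrow-and-return pattern using Apache Commons DBCP 2 (the JDBC URL, credentials, pool settings and the orders table are placeholders, not values from the question):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import org.apache.commons.dbcp2.BasicDataSource;

public class PoolExample {

    // One pool for the life of the application.
    private static final BasicDataSource pool = new BasicDataSource();

    static {
        pool.setUrl("jdbc:sqlserver://localhost;databaseName=mydb"); // placeholder URL
        pool.setUsername("user");
        pool.setPassword("secret");
        pool.setMaxTotal(10);            // upper bound on open connections
        pool.setTestOnBorrow(true);      // validate a connection before handing it out
        pool.setValidationQuery("SELECT 1");
    }

    public int countOrders() throws Exception {
        // Borrow a connection; try-with-resources returns it to the pool on close().
        try (Connection con = pool.getConnection();
             PreparedStatement ps = con.prepareStatement("SELECT COUNT(*) FROM orders");
             ResultSet rs = ps.executeQuery()) {
            rs.next();
            return rs.getInt(1);
        }
    }
}

With testOnBorrow enabled, a connection that went dead while idle is replaced when it is borrowed instead of failing in the middle of your query.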
Related
I have the following scenario:
MethodA() calls MethodB()
MethodB() calls MethodC()
All methods have to execute some query against the DB. To do this, I create a connection object and pass it along the chain of methods so that the connection object is reused.
Assumption here is that connection pooling is not being employed.
Now my question is: should a single connection be opened, reused, and then closed at the starting point (in the above example, the connection would be opened and closed in MethodA), or should I create a separate connection in each method?
Reusing the connection seems better, but then I will have to keep the connection open till the control comes back to MethodA().
I have read that reusing the connection is better, as connections are expensive to create. But then I have also read that it's better to close the connection as soon as possible, i.e., once you are done with the query call.
Which approach is better and why?
It sounds like you are only querying the DB and not updating or inserting. If that is the case then you avoid many of the transactional semantics in such a nested procedure call.
If that is true, then simply connect once, do all your querying, and close the connection. Usage of a connection pool is somewhat orthogonal to your question, but use one if you can. Pools greatly simplify your code because you can have the pool automatically test the connection before it gives one to you, and it will auto-create a new connection if the old one was lost (let's say because the DB was bounced).
Finally, you want to minimize the number of times you create a DB Connection BECAUSE it is expensive. However, this is often non-trivial. Databases themselves only support a maximum number of connections; if there are many clients, you need to take this into consideration. If you have the trivial case (one database, and your program is the only one making connections), then open the connection and leave it open for the duration of the program. This would require you to validate it yourself, so using a DB pool of size 1 avoids that.
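To make the nested-call case concrete, here is a rough sketch of opening the connection once in MethodA and passing it down the chain (method names follow the question; the URL and queries are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class ChainExample {

    public void methodA() throws SQLException {
        // Open once at the entry point and close it at the same point.
        try (Connection con = DriverManager.getConnection(
                "jdbc:postgresql://localhost/mydb", "user", "secret")) { // placeholder URL
            methodB(con);
        } // the connection is closed here, after the whole chain has finished
    }

    private void methodB(Connection con) throws SQLException {
        try (PreparedStatement ps = con.prepareStatement("SELECT 1");   // placeholder query
             ResultSet rs = ps.executeQuery()) {
            // ... use rs ...
        }
        methodC(con);
    }

    private void methodC(Connection con) throws SQLException {
        try (PreparedStatement ps = con.prepareStatement("SELECT 2");   // placeholder query
             ResultSet rs = ps.executeQuery()) {
            // ... use rs ...
        }
    }
}

With a pool in place, the DriverManager call would simply become dataSource.getConnection() and the rest of the structure stays the same.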
I have a Java server and PostgreSQL database.
There is a background process that hits the database (inserting some rows) 2-3 times per second, and there is a servlet that queries the database once per request (also inserting a row).
I am wondering: should I have separate Connection instances for them, or should they share a single Connection instance?
Also, does this even matter? Or does the PostgreSQL JDBC driver internally send all requests through a unified pool anyway?
One more thing: should I make a new Connection instance for every servlet request thread, or share a single Connection instance across all servlet threads and keep it open for the entire uptime?
By separate I mean every thread creates its own Connection instance like this:
Connection connection = DriverManager.getConnection(url, user, pw);
If you use a single connection and share it, only one thread at a time can use it and the others will block, which will severely limit how much your application can get done. Using a connection pool means that the threads can have their own database connections and can make concurrent calls to the database server.
See the postgres documentation, "Chapter 10. Using the Driver in a Multithreaded or a Servlet Environment":
A problem with many JDBC drivers is that only one thread can use a
Connection at any one time --- otherwise a thread could send a query
while another one is receiving results, and this could cause severe
confusion.
The PostgreSQL™ JDBC driver is thread safe. Consequently, if your
application uses multiple threads then you do not have to worry about
complex algorithms to ensure that only one thread uses the database at
a time.
If a thread attempts to use the connection while another one is using
it, it will wait until the other thread has finished its current
operation. If the operation is a regular SQL statement, then the
operation consists of sending the statement and retrieving any
ResultSet (in full). If it is a fast-path call (e.g., reading a block
from a large object) then it consists of sending and retrieving the
respective data.
This is fine for applications and applets but can cause a performance
problem with servlets. If you have several threads performing queries
then each but one will pause. To solve this, you are advised to create
a pool of connections. Whenever a thread needs to use the database,
it asks a manager class for a Connection object. The manager hands a
free connection to the thread and marks it as busy. If a free
connection is not available, it opens one. Once the thread has
finished using the connection, it returns it to the manager which can
then either close it or add it to the pool. The manager would also
check that the connection is still alive and remove it from the pool
if it is dead. The down side of a connection pool is that it increases
the load on the server because a new session is created for each
Connection object. It is up to you and your applications'
requirements.
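To illustrate the difference described above, here is a hedged sketch of worker threads that each borrow their own connection from a pooled DataSource instead of queueing on one shared Connection (the DataSource, table and column names are assumptions):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import javax.sql.DataSource;

public class ConcurrentInserts {

    private final DataSource pool;   // assumed to be backed by a connection pool
    private final ExecutorService workers = Executors.newFixedThreadPool(4);

    public ConcurrentInserts(DataSource pool) {
        this.pool = pool;
    }

    public void logVisit(long userId) {
        // Each task borrows its own connection, so inserts can run in parallel
        // instead of waiting for a single shared Connection to become free.
        workers.submit(() -> {
            try (Connection con = pool.getConnection();
                 PreparedStatement ps = con.prepareStatement(
                         "INSERT INTO visits (user_id) VALUES (?)")) { // placeholder table
                ps.setLong(1, userId);
                ps.executeUpdate();
            } catch (Exception e) {
                e.printStackTrace();  // real code would log this properly
            }
        });
    }
}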
As per my understanding, you should defer this task to the container and let it manage connection pooling for you.
Since you're using Servlets, which run in a Servlet container, this is straightforward: all major Servlet containers that I'm aware of provide connection pool management.
See Also
Best way to manage database connection for a Java servlet
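For completeness, a minimal sketch of what container-managed pooling usually looks like from servlet code; the JNDI name java:comp/env/jdbc/MyDS and the hits table are assumptions that would match whatever you configure in your container:

import java.io.IOException;
import java.sql.Connection;
import java.sql.PreparedStatement;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.sql.DataSource;

public class InsertServlet extends HttpServlet {

    private DataSource dataSource;

    @Override
    public void init() throws ServletException {
        try {
            // Container-managed pool, configured in the server and exposed via JNDI.
            dataSource = (DataSource) new InitialContext().lookup("java:comp/env/jdbc/MyDS");
        } catch (NamingException e) {
            throw new ServletException(e);
        }
    }

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // One connection per request, returned to the pool when the try block ends.
        try (Connection con = dataSource.getConnection();
             PreparedStatement ps = con.prepareStatement(
                     "INSERT INTO hits (path) VALUES (?)")) { // placeholder query
            ps.setString(1, req.getRequestURI());
            ps.executeUpdate();
        } catch (Exception e) {
            throw new ServletException(e);
        }
    }
}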
I'd like to come up with the best approach for re-connecting to MS SQL Server when the connection from JBoss AS 5 to the DB is lost temporarily.
For Oracle, I found this SO question: "Is there any way to have the JBoss connection pool reconnect to Oracle when connections go bad?" which says it uses an Oracle specific ping routine and utilizes the valid-connection-checker-class-name property described in JBoss' Configuring Datasources Wiki.
What I'd like to avoid is having another SQL statement run every time a connection is pulled from the pool, which is basically what the other property, check-valid-connection-sql, does.
So for now, I'm leaning towards an approach which uses exception-sorter-class-name but I'm not sure whether this is the best approach in the case of MS SQL Server.
Hoping to hear your suggestions on the topic. Thanks!
I am not sure it will work the way you describe it (transparently).
The valid connection checker (this can be either a SQL statement in the *ds.xml file or a class that does the heavy lifting) is meant to be called when a connection is taken from the pool, as the DB could have closed it while it sat in the pool. If the connection is no longer valid, it is closed and a new one is requested from the DB. This is completely transparent to the application and only happens (as you say) when the connection is taken out of the pool. You can then use that connection in your application for a long time.
The exception sorter is meant to report to the application whether, e.g., ORA-0815 is a harmless or a bad return code for a SQL statement. If it is a harmless one it is basically swallowed, while a bad one is reported to the application as an Exception.
So if you want to use the exception sorter to find bad connections in the pool, you need to be prepared that basically every statement that you fire could throw a stale-connection Exception and you would need to close the connection and try to obtain a new one. This means appropriate changes in your code, which you can of course do.
I think firing a cheap SQL statement at the DB every now and then to check whether a connection from the pool is still valid is a lot less expensive than doing all this checking 'by hand'.
Btw: while there is the generic connection checker sql that works with all databases, some databases provide another way of testing if the connection is good; Oracle has a special ping command for this, which is used in the special OracleConnectionChecker class you refer to. So it may be that there is something similar for MS-SQL, which is less expensive than a simple SQL statement.
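To show why the exception-sorter route means code changes, here is a rough sketch of handling a stale connection 'by hand' - every statement gets wrapped in a small retry that discards the bad connection and borrows a fresh one (class, table and column names are illustrative only):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import javax.sql.DataSource;

public class RetryOnStaleConnection {

    private final DataSource dataSource; // e.g. the JBoss-managed pool looked up via JNDI

    public RetryOnStaleConnection(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public void touchRow(long id) throws SQLException {
        SQLException last = null;
        // Any statement could surface a dead connection, so each one needs its own retry loop.
        for (int attempt = 0; attempt < 2; attempt++) {
            try (Connection con = dataSource.getConnection();
                 PreparedStatement ps = con.prepareStatement(
                         "UPDATE items SET touched = 1 WHERE id = ?")) { // placeholder SQL
                ps.setLong(1, id);
                ps.executeUpdate();
                return;
            } catch (SQLException e) {
                // try-with-resources has already closed the suspect connection;
                // remember the error and retry once with a fresh one.
                last = e;
            }
        }
        throw last;
    }
}

Spreading this boilerplate over every query is exactly the cost the answer above weighs against an occasional validation statement.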
I successfully used the background validation property background-validation-millis from https://community.jboss.org/wiki/ConfigDataSources
With JBoss 5.1 (I don't know with other versions), you can use
<valid-connection-checker-class-name>org.jboss.resource.adapter.jdbc.vendor.MSSQLValidConnectionChecker</valid-connection-checker-class-name>
Trying to figure out how to manage/use long-living DB connections. I have too little experience of this kind, as I have used a DB only with small systems (up to some 150 concurrent users, each with its own DB user/pass, so there were up to 150 long-living DB connections at any time) or web pages (each page request has its own DB connection that lasts for less than a second, so the number of concurrent DB connections isn't huge).
This time there will be a Java server and a Flash client. Java connects to PostgreSQL. Connections are expected to be long-living, i.e., they're expected to start when the Flash client connects to the Java server and to end when the Flash client disconnects. Would it be better to share a single connection between all users (clients) or to make a private connection for every client? Or would some other solution be better?
*) Single/shared connection:
(+) pros
only one DB connection for whole system
(-) cons:
transactions can't be used (e.g., "user1.startTransaction(); user1.updateBooks(); user2.updateBooks(); user1.rollback();" on a single shared connection would roll back changes that were made by user2)
long queries of one user might affect other users (not sure about this, though)
*) Private connections:
(+) pros
no problems with transactions :)
(-) cons:
a huge number of concurrent connections might be required, i.e., if there are 10000 users online, 10000 DB connections are required, which seems to be too high a number :) I don't know anything about the expected number of users though, as we are still in the process of researching and planning.
One solution would be to introduce timeouts, i.e., if a DB connection is not used for 15/60/900(?) seconds, it gets disconnected. When the user needs the DB again, it gets reconnected. This seems like a good solution to me, but I would like to know what the reasonable limits might be, e.g., what the max number of concurrent DB connections might be, what timeout should be used, etc.
Another solution would be to group queries into two "types" - one type that can safely use a single shared long-living connection (e.g., "update user set last_visit = now() where id = :user_id"), and another type that needs a private short-living connection (e.g., something that can potentially do some heavy work or use transactions). This solution does not seem appealing to me, though if that's the way it should be done, I could try to do it...
So... What do other developers do in such cases? Are there any other reasonable solutions?
I don't use long-lived connections. I use a connection pool to manage connections, and I keep them only for as long as it takes to perform an operation: get the connection, perform my SQL operation, return the connection to the pool. It's much more scalable and doesn't suffer from transaction problems.
Let the container manage the pool for you - that's what it's for.
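A rough sketch of that per-operation scope, assuming a pooled DataSource provided by the container or a library; note that a rollback here only affects the connection this one caller borrowed, which is what makes transactions unproblematic:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import javax.sql.DataSource;

public class BookRepository {

    private final DataSource pool; // container- or library-managed connection pool

    public BookRepository(DataSource pool) {
        this.pool = pool;
    }

    public void updateBooks(long userId, String title) throws SQLException {
        // Borrow, do the work in one transaction, return; nothing is held between calls.
        try (Connection con = pool.getConnection()) {
            con.setAutoCommit(false);
            try (PreparedStatement ps = con.prepareStatement(
                    "UPDATE books SET title = ? WHERE owner_id = ?")) { // placeholder SQL
                ps.setString(1, title);
                ps.setLong(2, userId);
                ps.executeUpdate();
                con.commit();
            } catch (SQLException e) {
                con.rollback();          // only rolls back this borrowed connection
                throw e;
            } finally {
                con.setAutoCommit(true); // leave the connection clean for the next borrower
            }
        }
    }
}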
With a single connection you also get very low performance, because the database server can only work on one of your requests at a time over that connection.
You definitely need a connection pool. If your app runs inside an application server, use the container's pool. Otherwise you can use a connection pool library like c3p0.
What is a Connection object in JDBC? How is this connection maintained (I mean, is it a network connection)? Are they TCP/IP connections? Why is it a costly operation to create a Connection every time? Why do these connections become stale after some time, so that I need to refresh the pool? Why can't I use one connection to execute multiple queries?
These connections are TCP/IP connections. To avoid the overhead of creating a new connection every time, there are connection pools that expand and shrink dynamically. You can use one connection for multiple queries. I think you mean that you release it to the pool; if you do that, you might get back the same connection from the pool, in which case it simply doesn't matter whether you run one query or several.
The cost of a connection is the connection setup itself, which takes some time, and the database prepares things like sessions, etc. for every connection. That would have to be done every time. Connections become stale for multiple reasons; the most prominent is a firewall in between. Connection problems can lead to connection resets, or there can be simple timeouts.
To add to the other answers:
Yes, you can reuse the same connection for multiple queries. This is even advisable, as creating a new connection is quite expensive.
You can even execute multiple queries concurrently. You just have to use a new java.sql.Statement/PreparedStatement instance for every query. Statements are what JDBC uses to keep track of ongoing queries, so each parallel query needs its own Statement. You can and should reuse Statements for consecutive queries, though.
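A small sketch of that reuse: consecutive queries on one connection, each with its own PreparedStatement (the con argument is assumed to come from DriverManager or a pool, and the tables are placeholders):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class StatementReuse {

    public void printUser(Connection con, long userId) throws SQLException {
        // First query: its own statement.
        try (PreparedStatement ps = con.prepareStatement(
                "SELECT name FROM users WHERE id = ?")) {           // placeholder SQL
            ps.setLong(1, userId);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString("name"));
                }
            }
        }

        // Second query on the same connection: a separate statement.
        try (PreparedStatement ps = con.prepareStatement(
                "SELECT COUNT(*) FROM orders WHERE user_id = ?")) { // placeholder SQL
            ps.setLong(1, userId);
            try (ResultSet rs = ps.executeQuery()) {
                if (rs.next()) {
                    System.out.println(rs.getInt(1));
                }
            }
        }
    }
}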
The answer to your questions is that they are implementation defined. A JDBC connection is an interface that exposes methods. What happens behind the scenes can be anything that delivers the interface. For example, consider the Oracle internal JDBC driver, used for supporting Java stored procedures. Simultaneous queries are not only possible on that, they are more or less inevitable, since each request for a new connection returns the one and only connection object. I don't know for sure whether it uses TCP/IP internally, but I doubt it.
So you should not assume implementation details, without being clear about precisely which JDBC implementation you are using.
Since I cannot comment yet, I will post an answer just to comment on Vinegar's answer. The situation of setAutoCommit() returning to its default state when the connection is returned to the pool is not mandatory behaviour and should not be taken for granted. The same goes for closing statements and result sets: you may read that they will be closed automatically when the connection is closed, but don't take that for granted either, since leaving them open will tie up resources with some versions of JDBC drivers.
We had a serious problem with a DB2 database on AS400: guys needing transactional isolation were calling connection.setAutoCommit(false), and after finishing the job they returned the connection to the pool (JNDI) without calling connection.setAutoCommit(old_state). So when another thread got this connection from the pool, its inserts and updates were not committed, and nobody could figure out why for a long time...
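To avoid that trap, a hedged sketch of the safer pattern: remember the previous auto-commit state and restore it in a finally block before the connection goes back to the pool (the accounts table and transfer logic are made up for illustration):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import javax.sql.DataSource;

public class TransferTask {

    public void transfer(DataSource pool, long fromId, long toId, int amount) throws SQLException {
        try (Connection con = pool.getConnection()) {
            boolean oldAutoCommit = con.getAutoCommit();
            con.setAutoCommit(false);
            try (PreparedStatement debit = con.prepareStatement(
                         "UPDATE accounts SET balance = balance - ? WHERE id = ?");  // placeholder SQL
                 PreparedStatement credit = con.prepareStatement(
                         "UPDATE accounts SET balance = balance + ? WHERE id = ?")) {
                debit.setInt(1, amount);
                debit.setLong(2, fromId);
                debit.executeUpdate();
                credit.setInt(1, amount);
                credit.setLong(2, toId);
                credit.executeUpdate();
                con.commit();
            } catch (SQLException e) {
                con.rollback();
                throw e;
            } finally {
                // Restore the old state so the next thread that borrows this connection is not surprised.
                con.setAutoCommit(oldAutoCommit);
            }
        }
    }
}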