I'm currently creating an API server that reads from and writes to MongoDB.
The library I use is Mongoose.
I wonder whether db.close() must be called when reading and writing.
datamodel.js:
var db = mongoose.connect('mongodb://localhost/testdb', {useNewUrlParser: true,useUnifiedTopology:true});
mongoose.Promise = global.Promise;
.....
Boards = mongoose.model("boards", BoardSchema);
exports.Boards = Boards;
routes/getList.js:
let result = await Boards.find().sort({"date": -1});
Should I close the DB declared above with db.close() when reading or writing?
(Very generic answer, but should help you get started with what to research)
Closing the MongoDB connection depends on how the connection is established in the first place.
Are you initialising the connection on server startup? If yes, you should not close the connection. (But relying on a single connection initialised at server startup is risky: if the connection to the server is lost, for example because the database server restarts, you would have to restart the application or configure reconnection options such as reconnectTries.)
Are you using a connection pool? If so, opening and closing connections is taken care of by Mongoose itself. All you have to do is release the connection after use, so that it is available for other requests.
Are you creating a connection per request? If so, you should close the connection before returning the response, or you will quickly run out of available connections on the database server.
You can call mongoose.disconnect() to close the connection.
Related
We are currently dealing with a function that has to work partially with the database and partially with a service whose operations are time-consuming. So, generally speaking, here is a transactional method with code like this:
Connection conn = null;
try {
    conn = getConnection(); // This I get from the connection pool
    Employee emp = queryDatabase(id);
    // Point A - ??
    Response resp = makeLongTimeServiceCall(emp);
    // Point B - ??
    anotherQueryDatabase(resp);
} catch (Exception e) {
    throw e; // And this also rolls back the transaction
} finally {
    if (conn != null) {
        conn.close(); // Return the connection so there are no leaks
    }
}
So the big question is - should I close the connection at point A and then get it again from the connection pool at point B, so that other servlets could use that connection while I interact with the service? My guess is that I should, but will this hurt my transaction?
In many circumstances: yes, closing and reopening the connection sounds good. However, you need to understand the implication of doing the two database operations in two separate transactions (by closing and re-opening the connection you are inherently running them in separate transactions).
If another user invokes an operation that changes the state of your data at Point B will the end result still be correct?
If you don't have a transaction you can close the connection and ask for a new connection when needed.
Remember that if you are using a connection pool closing a connection will not really close the connection, but only flag it as reusable by other threads.
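For illustration, here is a rough sketch of the two-transaction variant. The helper methods (getConnection(), queryDatabase(), makeLongTimeServiceCall(), anotherQueryDatabase()) are the hypothetical ones from the question, with a Connection parameter added so each query runs on the connection currently held; this is a sketch, not a definitive implementation.

// Sketch only: helpers and their signatures are placeholders based on the question.
Employee emp;
try (Connection conn = getConnection()) {      // transaction 1
    emp = queryDatabase(conn, id);
    conn.commit();                             // assuming auto-commit is off
}                                              // connection goes back to the pool here

Response resp = makeLongTimeServiceCall(emp);  // no connection held during the slow call

try (Connection conn = getConnection()) {      // transaction 2
    anotherQueryDatabase(conn, resp);
    conn.commit();
}

The trade-off is exactly the one described above: between the two transactions another user may change the data, so the overall operation is no longer atomic.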
Database connections shouldn't be left open. Open the connection just when you need to execute a query and close it as early as possible. So my answer is: YES.
If your application doesn't close connections properly, it may lead to issues like the connection pool maxing out.
Applications Not Properly Closing Connections:
When writing an application to use the WebSphere Application Server (WAS) datasource, the best practice is to follow the get/use/close pattern.
1. get
- This is when the application makes a request to get a connection from the pool. The application will first look up the datasource name and then do a getConnection() call.
2. use
- This is when the application executes a query and waits for a response.
3. close
- This is the final stage, when the application has received the response from the database and is now done using that connection.
- The application should call close() in a finally block to close out the connection and return it to the free pool.
If your application does not follow this pattern, you may see connections staying open for long periods of time. This is especially common when the application does not close the connection properly. After the response is received from the database, the application should call close in a finally block. If the application does not explicitly close the connection, that connection will stay open in WAS until the server is restarted. This is known as a connection leak. To resolve this type of issue, you will have to modify your application to close every connection.
for further information: https://www.ibm.com/support/pages/common-reasons-why-connections-stay-open-long-period-time-websphere-application-server
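A minimal sketch of that get/use/close pattern follows; the JNDI name "jdbc/myDataSource" and the query are placeholders, and try-with-resources is used so the close step runs even when the query throws:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import javax.naming.InitialContext;
import javax.sql.DataSource;

public class BoardDao {
    public void printBoardIds() throws Exception {
        // 1. get: look up the datasource and request a connection from the pool
        DataSource ds = (DataSource) new InitialContext().lookup("jdbc/myDataSource"); // placeholder JNDI name
        try (Connection conn = ds.getConnection();
             PreparedStatement ps = conn.prepareStatement("SELECT id FROM boards");    // placeholder query
             ResultSet rs = ps.executeQuery()) {                                       // 2. use
            while (rs.next()) {
                System.out.println(rs.getInt("id"));
            }
        } // 3. close: the connection is closed here and returned to the free pool
    }
}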
I'm having problems with the connection pool in Tomcat after a database outage.
There's this heartbeat service servlet that won't come back up after a database outage.
I've already tried the standard answer, and beyond, specifically:
Adding to the Resource section in server.xml
validationQuery="select 1 from dual"
testOnBorrow="true"
removeAbandoned="true"
removeAbandonedTimeout="120"
logAbandoned="true"
Trying to close the connection with a check to validity:
if (connection != null && !connection.isValid(10)) {
connection.close();
}
(Resulted in java.sql.SQLException: Connection is closed)
Trying to abort the connection (not sure whether done right)
if (connection != null && !connection.isValid(10)) {
connection.abort();
}
(Resulted in java.lang.NoSuchMethodError: java.sql.Connection.abort(Ljava/util/concurrent/Executor;)V)
Tries 2) and 3) show that indeed the connection is invalid, and it knows it. The question is - how to destroy it?
Tomcat version: 7.0.29
The pool will destroy the invalid connection for you thanks to "testOnBorrow=true" setting (note that the "Abandoned" settings only apply to connections which are not returned to the pool).
So this is how it should go:
servlet borrows connection, does queries and returns connection to the pool.
servlet borrows connection, database crashes, servlet receives all kinds of horrible SQL errors but still returns connection to the pool.
database has restarted
on next heartbeat, servlet borrows connection. Tomcat's pool will "test on borrow", find that the old connection is broken, removes (and closes) connection, tries another, finds that it is also broken, etc, and THEN the pool decides to create a new connection, test it OK and hands it over to the servlet.
servlet receives new valid connection, does queries and returns connection to the pool.
I'm not sure how Tomcat's pool behaves when a "test on borrow" indicates a connection is broken (1): it might not create new connections right away or throw an error when it can't get a valid connection from the pool. But I expect the pool to effectively flush itself and re-populate with new (valid) connection.
(1) That is, if the "test on borrow" is actually done, which this post indicates is not always the case ...
If the pool does not flush itself, you can try to do this programmatically once you find connections are invalid. I have not tried this before; hopefully you can get it to work. The following method is described here:
Reach into the JNDI context, pull-out the DataSource object, cross your fingers, cast it to org.apache.tomcat.jdbc.pool.DataSource, and call the purge() method.
Alternatively, use JMX and call the purge method via the MBean.
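For example, here is a rough sketch of the programmatic purge. The JNDI name is a placeholder, and this only works if the underlying pool really is Tomcat's org.apache.tomcat.jdbc.pool implementation:

import javax.naming.InitialContext;
import javax.sql.DataSource;

public final class PoolPurger {
    // "java:comp/env/jdbc/myDataSource" is a placeholder JNDI name
    public static void purgePool() throws Exception {
        DataSource ds = (DataSource) new InitialContext().lookup("java:comp/env/jdbc/myDataSource");
        if (ds instanceof org.apache.tomcat.jdbc.pool.DataSource) {
            // purge() discards the pooled connections so fresh ones are created on the next borrow
            ((org.apache.tomcat.jdbc.pool.DataSource) ds).purge();
        }
    }
}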
If you experience hanging threads after a database crash, you might have to resort to a work around described in this answer.
If we want to dispose of an ill java.sql.Connection from the Tomcat JDBC connection pool, we may do this explicitly in the program: unwrap it into an org.apache.tomcat.jdbc.pool.PooledConnection, call setDiscarded(true), and finally close the JDBC connection. The ConnectionPool will remove the underlying connection once it has been returned (ConnectionPool.returnConnection(...)).
PooledConnection pconn = conn.unwrap(PooledConnection.class);
pconn.setDiscarded(true);
conn.close();
I have a Java application with the Hibernate framework (no Spring) connecting to a MySQL DB, with connection pooling managed via c3p0.
I'm trying to configure my application to read from the slave DB and write to the master DB; I have followed this link to some extent: Master/Slave load balance.
Let's say the application already has a session with a connection from the pool and it needs to execute a read-only method, like this:
public someReadOnlyMethod()
{
Session session = (get session from current Thread)
//set read-only so that it read from slave db
session.connection().setReadOnly(true);
(...connect to db to do something...)
//set it back in case of this method is followed by write method so that it go to master db
session.connection().setReadOnly(false);
}
Does the pooling create a new connection to the DB twice, once for the read-only and once for the write operation (if so, this will heavily impact performance), or is it smart enough to route each operation to an already existing read-only or writable connection?
Thanks for your advice.
So this has nothing to do with the pool; it's all in the MySQL driver. c3p0 will pass your call to setReadOnly (whether true or false) to the underlying Connection, and the Connection will route to the master or the slaves accordingly.
If you don't like how your Connections default (probably by default they are not read-only), you can set the read-only property in the onAcquire method of a c3p0 ConnectionCustomizer, and the value you set (true or false) will become the default that c3p0 resets Connections to.
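For example, a rough sketch of such a customizer, assuming c3p0's standard AbstractConnectionCustomizer base class; you would register it via the connectionCustomizerClassName configuration property:

import java.sql.Connection;
import com.mchange.v2.c3p0.AbstractConnectionCustomizer;

// Register via c3p0's connectionCustomizerClassName property
public class ReadOnlyByDefaultCustomizer extends AbstractConnectionCustomizer {
    @Override
    public void onAcquire(Connection c, String parentDataSourceIdentityToken) throws Exception {
        // Newly acquired connections default to read-only (routed to the slaves);
        // callers flip setReadOnly(false) when they need the master.
        c.setReadOnly(true);
    }
}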
good luck!
tl;dr: It will re-use existing connections whenever you switch setReadOnly(true/false).
JDBC will connect to all servers listed in your connection URL when you do ReplicationDriver().connect(url). Those connections will remain open for re-use no matter how many times you switch setReadOnly().
Source: I just tested Connector/J version 5.1.38 with com.mysql.jdbc.ReplicationDriver.
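For reference, a hedged sketch of that kind of test setup; host names, database and credentials are placeholders, and the replication URL format is the one documented for Connector/J 5.1:

import java.sql.Connection;
import java.util.Properties;
import com.mysql.jdbc.ReplicationDriver;

public class ReplicationDemo {
    public static void main(String[] args) throws Exception {
        ReplicationDriver driver = new ReplicationDriver();
        Properties props = new Properties();
        props.put("user", "app");        // placeholder credentials
        props.put("password", "secret");

        // The first host is treated as the master, the remaining hosts as slaves (placeholder hosts)
        Connection conn = driver.connect(
                "jdbc:mysql:replication://master.example.com,slave1.example.com/testdb", props);

        conn.setReadOnly(false); // statements go to the master
        // ... writes ...

        conn.setReadOnly(true);  // statements go to a slave, reusing the already-open connections
        // ... reads ...

        conn.close();
    }
}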
The database connection is obtained like below:
public Connection getDBConnection() throws NamingException, SQLException {
    Context context = new InitialContext();
    DataSource dataSource = (javax.sql.DataSource) context.lookup("java:myDataSource");
    Connection conn = dataSource.getConnection();
    return conn;
}
For a userA, should each database request call getDBConnection() once, with no need to ensure that all requests use the same connection?
That is, if userA makes three database requests, should userA call getDBConnection() three times, and call Connection.close() after use in each request?
If userA calls getDBConnection() three times (that is, calls dataSource.getConnection() three times), are three connections created? Or is it unknown and controlled by WebLogic?
I feel very confused. Is it true that there should be one new connection per database request? Or should I just call DataSource.getConnection() for each database request, with the number of new connections created controlled by the web server, and no need to think about how many connections are actually created?
Every time you call DataSource.getConnection, the data source will retrieve a connection for you. It should be true that the returned connection is not being actively used by anyone else, but it is not necessarily a brand-new connection.
For example, if you use a connection pool, which is a very common practice, then when you call Connection.close, the connection is not actually closed, but instead returns to a pool of available connections. Then, when you call DataSource.getConnection, the connection pool will see if it has any spare connections lying around that it hasn't already handed out. If so, it will typically test that they haven't gone stale (usually by executing a very quick query against a dummy table). If not, it will return the existing connection to the caller. But if the connection is stale, then the connection pool will retrieve a truly new connection from the underlying database driver, and return that instead.
Typically, connection pools have a maximum number of real connections that they will keep at any one time (say, 50). If your application tries to request more than 50 simultaneous connections, DataSource.getConnection will throw an exception. Or in some implementations, it will block for a while until one becomes available, and then throw an exception after that time expires. For a sample implementation, have a look at Apache Commons DBCP.
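For example, a typical per-request pattern looks roughly like the sketch below; the query is a placeholder, and the getDBConnection() helper is the one from the question:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class UserDao {
    // Borrow, use, and close (i.e. return to the pool) once per request.
    public int countUsers() throws Exception {
        try (Connection conn = getDBConnection();   // borrow from the pool
             PreparedStatement ps = conn.prepareStatement("SELECT COUNT(*) FROM users"); // placeholder query
             ResultSet rs = ps.executeQuery()) {
            rs.next();
            return rs.getInt(1);
        } // close() runs here and simply hands the connection back to the pool
    }

    private Connection getDBConnection() throws Exception {
        // As in the question: look up the datasource and ask it for a connection
        javax.naming.Context context = new javax.naming.InitialContext();
        javax.sql.DataSource dataSource = (javax.sql.DataSource) context.lookup("java:myDataSource");
        return dataSource.getConnection();
    }
}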
Hopefully that answers your question!
I am getting the following error:
java.sql.SQLException: Closed Connection
    at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:112)
    at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:146)
    at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:208)
    at oracle.jdbc.driver.PhysicalConnection.getMetaData(PhysicalConnection.java:1508)
    at com.ibatis.sqlmap.engine.execution.SqlExecutor.moveToNextResultsSafely(SqlExecutor.java:348)
    at com.ibatis.sqlmap.engine.execution.SqlExecutor.handleMultipleResults(SqlExecutor.java:320)
    at com.ibatis.sqlmap.engine.execution.SqlExecutor.executeQueryProcedure(SqlExecutor.java:277)
    at com.ibatis.sqlmap.engine.mapping.statement.ProcedureStatement.sqlExecuteQuery(ProcedureStatement.java:34)
    at com.ibatis.sqlmap.engine.mapping.statement.GeneralStatement.executeQueryWithCallback(GeneralStatement.java:173)
    at com.ibatis.sqlmap.engine.mapping.statement.GeneralStatement.executeQueryForList(GeneralStatement.java:123)
    at com.ibatis.sqlmap.engine.impl.SqlMapExecutorDelegate.queryForList(SqlMapExecutorDelegate.java:614)
    at com.ibatis.sqlmap.engine.impl.SqlMapExecutorDelegate.queryForList(SqlMapExecutorDelegate.java:588)
    at com.ibatis.sqlmap.engine.impl.SqlMapSessionImpl.queryForList(SqlMapSessionImpl.java:118)
    at org.springframework.orm.ibatis.SqlMapClientTemplate$3.doInSqlMapClient(SqlMapClientTemplate.java:268)
    at org.springframework.orm.ibatis.SqlMapClientTemplate.execute(SqlMapClientTemplate.java:193)
    at org.springframework.orm.ibatis.SqlMapClientTemplate.executeWithListResult(SqlMapClientTemplate.java:219)
    at org.springframework.orm.ibatis.SqlMapClientTemplate.queryForList(SqlMapClientTemplate.java:266)
    at gov.hud.pih.eiv.web.authentication.AuthenticationUserDAO.isPihUserDAO(AuthenticationUserDAO.java:24)
    at gov.hud.pih.eiv.web.authorization.AuthorizationProxy.isAuthorized(AuthorizationProxy.java:125)
    at gov.hud.pih.eiv.web.authorization.AuthorizationFilter.doFilter(AuthorizationFilter.java:224)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:246)
    at ...
I am really stumped and can't figure out what could be causing this error. I am not able to reproduce the error on my machine, but in production it occurs a lot. I am using iBatis throughout the application, so there should be no chance of my code not closing connections.
We do have stored procedures that run for a long time before they return results (around 15 seconds).
Does anyone have any ideas on what could be causing this? I don't think raising the number of connections on the application server will fix this issue, because if connections were running out we'd see "Error on allocating connections".
Sample code snippet:
this.setSqlMapClientTemplate(getSqlTempl());
getSqlMapClientTemplate().queryForList("authentication.isUserDAO", parmMap);
this.setSqlMapClientTemplate(getSqlTemplDW());
List results = (List) parmMap.get("Result0");
I am using validate in my connection pool.
Based on the stack trace, the likely cause is that you are continuing to use a ResultSet after close() was called on the Connection that generated the ResultSet.
What is your DataSource framework? Apache Commons DBCP?
Do you use the poolPrepareStatement property in your data source configuration?
Check the following:
Make sure testOnBorrow and testOnReturn are true and set a simple validationQuery like select 0 from dual (see the sketch after this list).
Do you use autoCommit? Are you using START TRANSACTION, COMMIT in your stored procedures? After several days of debugging we found out that you can't mix transaction management in both Java and SQL; you have to decide on one place to do it. Where are you doing yours?
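In case it helps, here is a hedged sketch of those settings on a Commons DBCP BasicDataSource; the driver, URL and credentials are placeholders:

import org.apache.commons.dbcp.BasicDataSource;

public final class PoolConfig {
    public static BasicDataSource createDataSource() {
        BasicDataSource ds = new BasicDataSource();
        ds.setDriverClassName("oracle.jdbc.driver.OracleDriver");   // placeholder
        ds.setUrl("jdbc:oracle:thin:@dbhost:1521:ORCL");            // placeholder
        ds.setUsername("app");                                       // placeholder
        ds.setPassword("secret");                                    // placeholder

        // Validate connections when they are borrowed from and returned to the pool
        ds.setTestOnBorrow(true);
        ds.setTestOnReturn(true);
        ds.setValidationQuery("select 0 from dual");

        // Pick one place for transaction management (here: the Java side)
        ds.setDefaultAutoCommit(false);
        return ds;
    }
}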
Edit your question with answers to this, and we'll continue from there.
When a DB server reboots, or there are problems with the network, all the connections in the connection pool are broken, and this usually requires a restart of the application server.
If a broken connection is detected, you should create a new one to replace it in the connection pool. This is a common problem, often called dead connections.