Vert.x outlines the normal way to connect to a database here (https://vertx.io/docs/vertx-jdbc-client/java/):
String databaseFile = "sqlite.db";
JDBCPool pool = JDBCPool.pool(
    this.context.getVertx(),
    new JDBCConnectOptions()
        .setJdbcUrl("jdbc:sqlite:".concat(databaseFile)),
    new PoolOptions()
        .setMaxSize(1)
        .setConnectionTimeout(CONNECTION_TIMEOUT)
);
This application I am writing has interprocess communication, so I want to use WAL mode with synchronous=NORMAL to avoid heavy disk usage. The WAL pragma (PRAGMA journal_mode=WAL) is persisted in the database file itself, so I don't need to worry about it at application startup. However, the synchronous pragma is set per connection, so I need to set it when the application starts. Currently that looks like this:
// await this future
pool
    .preparedQuery("PRAGMA synchronous=NORMAL")
    .execute();
I can confirm that later on the synchronous pragma is indeed set on the database connection:
pool
    .preparedQuery("PRAGMA synchronous")
    .execute()
    .map(rows -> {
        for (Row row : rows) {
            System.out.println("pragma synchronous is " + row.getInteger("synchronous"));
        }
        return rows;
    });
and since I enforce a single connection in the pool, this should be fine. However, I can't help but feel that there is a better way of doing this.
As a side note, I chose a single connection because SQLite is synchronous in nature: there is only ever one write happening at a time to the database. Creating write contention within a single application sounds detrimental rather than helpful, and I have designed my application to have as few concurrent writes within a single process as possible, though inter-process concurrency is real.
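Pulling the snippets above together, this is roughly what my startup wiring looks like (createPool is just an illustrative name of my own, not a Vert.x API):
import io.vertx.core.Future;
import io.vertx.core.Vertx;
import io.vertx.jdbcclient.JDBCConnectOptions;
import io.vertx.jdbcclient.JDBCPool;
import io.vertx.sqlclient.PoolOptions;

// Build the single-connection pool, then resolve the returned future only after the
// per-connection pragma has been applied, so callers never see an untuned connection.
static Future<JDBCPool> createPool(Vertx vertx, String databaseFile) {
    JDBCPool pool = JDBCPool.pool(
        vertx,
        new JDBCConnectOptions().setJdbcUrl("jdbc:sqlite:".concat(databaseFile)),
        new PoolOptions().setMaxSize(1).setConnectionTimeout(CONNECTION_TIMEOUT));
    return pool
        .preparedQuery("PRAGMA synchronous=NORMAL")
        .execute()
        .map(rows -> pool); // hand the pool out only once the pragma is set
}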
So these aren't definitive answers, but I have tried a few other options and want to outline them here.
For instance, Vert.x can instantiate a SQLClient without an explicit pool:
JsonObject config = new JsonObject()
    .put("url", "jdbc:sqlite:" + databaseFile)
    .put("driver_class", "org.sqlite.JDBC")
    .put("max_pool_size", 1);
Vertx vertx = Vertx.vertx();
SQLClient client = JDBCClient.create(vertx, config);
though this still uses a connection pool under the hood, so I have to make the same adjustment of limiting the pool to a single connection so that the pragma sticks.
There is also a SQLiteConfig class in the SQLite JDBC library, but I have no idea how to connect that into the Vert.x JDBC wrappers:
org.sqlite.SQLiteConfig config = new org.sqlite.SQLiteConfig();
config.setSynchronous(org.sqlite.SQLiteConfig.SynchronousMode.NORMAL);
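For reference, outside of Vert.x the usual way to apply a SQLiteConfig is to hand its properties to the driver when the connection is opened. A plain-JDBC sketch (this is not the Vert.x client, just an illustration of what the config does):
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import org.sqlite.SQLiteConfig;

// Open a plain JDBC connection with the pragmas baked into the connection properties.
static Connection openTunedConnection(String databaseFile) throws SQLException {
    SQLiteConfig config = new SQLiteConfig();
    config.setJournalMode(SQLiteConfig.JournalMode.WAL);        // persisted in the db file
    config.setSynchronous(SQLiteConfig.SynchronousMode.NORMAL); // applies to this connection
    return DriverManager.getConnection("jdbc:sqlite:" + databaseFile, config.toProperties());
}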
Is a pool required with Vert.x? I did try running the SQLite JDBC driver directly, without a Vert.x wrapper, but that ran into all kinds of SQLITE_BUSY exceptions.
Related
I'm currently creating an API server that reads and writes, using MongoDB.
It uses the Mongoose library.
I wonder whether db.close() must be called when reading and writing.
datamodel.js:
var db = mongoose.connect('mongodb://localhost/testdb', {useNewUrlParser: true,useUnifiedTopology:true});
mongoose.Promise = global.Promise;
.....
Boards = mongoose.model("boards", BoardSchema);
exports.Boards = Boards;
routes/getList.js:
let result = await Boards.find().sort({"date": -1});
Should I close the DB connection declared above with db.close() when reading or writing?
(Very generic answer, but should help you get started with what to research)
Closing the MongoDB connection depends on how the connection was established in the first place.
Are you initialising the connection on server startup? If yes, you should not close the connection. (Initialising a single connection on server startup is a bad idea anyway, because if the connection to the database server is lost, for example after a database restart, you would have to restart the application or configure reconnectTries.)
Are you using a connection pool? If so, opening and closing connections is taken care of by Mongoose itself. All you have to do is release each connection after use so that it is available for other requests.
Are you creating a connection per request? If yes, you should close the connection before returning the response, or you will quickly run out of available connections on the database server.
You can call mongoose.disconnect() to close the connection.
My server app uses prepared statements in almost all cases to prevent SQL injection. Nevertheless, I need a way to let certain special users execute raw SELECT queries.
How can I more or less securely make sure such a query does not modify the database? Is it possible to execute a query read-only, or is there any other 'secure' way to make sure no one attempts SQL injection?
(Using sqlite3, so I cannot use any privileges)
Thanks a lot!
JDBC supports read-only connections by calling Connection.setReadOnly(true). However the javadoc says:
Puts this connection in read-only mode as a hint to the driver to enable database optimizations.
Some JDBC drivers will enforce the read-only request, others will use it for optimizations only, or simply ignore it. I don't know how sqlite3 implements it. You'll have to test that.
Otherwise, you could do a "simple" parse of the SQL statement, to ensure that it's a single valid SELECT statement.
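A deliberately naive sketch of that "simple parse" idea (this is not a real SQL parser and can be fooled by comments, CTEs and the like, so treat it as a starting point only):
import java.util.Locale;

// Accept only something that looks like a single SELECT statement.
static boolean looksLikeSingleSelect(String sql) {
    String trimmed = sql.trim();
    if (trimmed.endsWith(";")) {
        trimmed = trimmed.substring(0, trimmed.length() - 1).trim();
    }
    if (trimmed.contains(";")) {
        return false; // more than one statement
    }
    return trimmed.toUpperCase(Locale.ROOT).startsWith("SELECT");
}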
I'm not aware of a general JDBC configuration that specifies read-only, but SQLite does have special database open modes, and these can be leveraged in the connection to your SQLite database. E.g.
Properties config = new Properties();
config.setProperty("open_mode", "1"); //1 == readonly
Connection conn = DriverManager.getConnection("jdbc:sqlite:sample.db", config);
Credit: https://stackoverflow.com/a/18092761/62344
FWIW All supported open modes can be seen here.
If you use some sort of factory class to create or return connections to the database, you can individually set connections to be read-only:
public Connection getReadOnlyConnection() throws SQLException {
    // Alternatively this could come from a connection pool:
    final Connection conn = DriverManager.getConnection("jdbc:sqlite:sample.db");
    conn.setReadOnly(true);
    return conn;
}
If you're using a connection pool, then you may also want to provide a method for getting writeable connections too:
public Connection getWriteableConnection() throws SQLException {
    final Connection conn = getPooledConnection(); // I'm assuming this method exists!
    conn.setReadOnly(false);
    return conn;
}
You could also provide just a single getConnection(boolean readOnly) method and simply pass the parameter through to the setReadOnly(boolean) call. I prefer the separate methods personally, as it makes your intent much clearer.
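A sketch of that single-method variant, assuming the same getPooledConnection() helper as above:
public Connection getConnection(boolean readOnly) throws SQLException {
    final Connection conn = getPooledConnection();
    conn.setReadOnly(readOnly);
    return conn;
}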
Alternatively, some databases, like Oracle, provide a read-only mode that can be enabled. SQLite doesn't provide one, but you can emulate it by simply setting the actual database files (including their directories) to read-only on the filesystem itself.
Another way of doing it is as follows (credit goes to deadlock for the below code):
public Connection getReadOnlyConnection() throws SQLException {
    SQLiteConfig config = new SQLiteConfig();
    config.setReadOnly(true);
    return DriverManager.getConnection("jdbc:sqlite:sample.db",
            config.toProperties());
}
I am writing a small program which is going to be launched on an Apache web server (not Tomcat) through CGI in response to a POST request.
The program does the following:
read the XML sent via HTTP in the request
execute a stored procedure in a database with the data extracted from the XML
return the result of the stored procedure as the response to the POST request
The database is Oracle. I use the JDBC OCI driver to access it.
Class.forName("oracle.jdbc.OracleDriver");
String dbConnectionString = "jdbc:oracle:oci:@//ip:port/service";
Connection conn = DriverManager.getConnection(dbConnectionString, dbUserId, dbPwd);
CallableStatement cs = conn.prepareCall("{ ? = call my_pkg.my_sp(?,?,?,?) }");
cs.registerOutParameter(pReturnValue, OracleTypes.NUMBER);
cs.setInt("p1", p1);
cs.setString("p2", p2);
cs.setString("p3", p3);
cs.registerOutParameter("p_out", Types.VARCHAR);
try {
    cs.executeQuery();
    return cs.getString(pReqResponse);
} finally {
    try {
        cs.close();
    } catch (SQLException ex) {
        //....
    }
}
While doing a single request, it worked fine (the whole program finished in about 2 seconds). However, if I tried to send multiple POST requests at once, they all got stuck for some amount of time depending on the number of requests (approximately 10 seconds for 10 requests, 15 seconds for 15 requests).
I tried to pin down which part of the code caused the delay. It appeared to be these two lines:
Connection conn = DriverManager.getConnection(dbConnectionString, dbUserId, dbPwd);
CallableStatement cs = conn.prepareCall("{ ? = call my_pkg.my_sp(?,?,?,?) }");
The execution itself finished almost immediately.
Why is this so?
P.S.: I ran the same experiment on Windows 7. Of course, it wasn't launched from a web server but just as a simple console process, and it also had to read the XML from a file on the hard drive. All concurrently launched instances of the program finished within a second, all together.
What prevents it from working as fast on Linux through Apache?
Based on comments
I tried to set pooling properties for my connection, but all in vain. I tried the following:
While specifying the user ID and password in the URL
jdbc:oracle:oci:login/password@//ip:port/service
I tried to set the connection properties:
Properties p = new Properties();
p.setProperty("Pooling", "true");
p.setProperty("Min Pool Size", "1");
p.setProperty("Max Pool Size", "10");
p.setProperty("Incr Pool Size", "4");
Connection conn = DriverManager.getConnection(dbConnectionString, p);
I tried to use OCI Connection Pooling:
OracleOCIConnectionPool cpool = new OracleOCIConnectionPool();
cpool.setUser("user");
cpool.setPassword("pwd");
cpool.setURL(dbConnectionString);
Properties p = new Properties();
p.put(OracleOCIConnectionPool.CONNPOOL_MIN_LIMIT, "1");
p.put(OracleOCIConnectionPool.CONNPOOL_MAX_LIMIT, "5");
p.put(OracleOCIConnectionPool.CONNPOOL_INCREMENT, "2");
p.put(OracleOCIConnectionPool.CONNPOOL_TIMEOUT, "10");
p.put(OracleOCIConnectionPool.CONNPOOL_NOWAIT, "true");
cpool.setPoolConfig(p);
Connection conn = (OracleOCIConnection) cpool.getConnection();
I tried to use the Apache DBCP component:
basicDataSource = new BasicDataSource();
basicDataSource.setUsername("user");
basicDataSource.setPassword("pwd");
basicDataSource.setDriverClassName("oracle.jdbc.OracleDriver");
basicDataSource.setUrl(dbConnectionString);
Connection conn = basicDataSource.getConnection();
The behaviour remained the same, i.e. a big delay on getConnection in all concurrent requests.
All these attempts seem to be solving a different problem, as in my case all connections are established from separate processes, and it seems non-obvious to share connections from one pool among different processes (am I mistaken here?).
What options do I have? Or probably did I do anything wrong?
Also I should say I am quite new to Java in general, so I may be missing some basic things.
Could this be an OS or web server issue? Perhaps something should be set up there, not in code?
Also, I tried to use the thin client instead of OCI. However, it behaved even more strangely: the first request finished in a second, while the second was delayed for a minute.
Poor concurrency with Oracle JDBC drivers states a problem similar to mine.
In the end we found out that the processes launched by Apache through CGI occupied 100% of the CPU (and the lion's share of memory), so they simply did not have enough resources. Unfortunately I do not know why a very simple, basic program (reading an XML file and establishing one connection to the DB to execute a stored procedure), launched simultaneously only 20 times, eats all resources.
However, the solution turned out to be very simple indeed. I refactored it into a Java web application using servlets, we deployed it on Apache Tomcat, and MAGIC... it started working as expected, without any visible strain on resources.
I think the problem is with CGI. When you make a CGI request, a new process is started to handle the request. Each new request also gets a new JVM, so connection pooling is not an option.
Even so, it should be quicker than that to get a connection. Maybe Oracle itself has configuration options governing the number of concurrent connections you can have, but I'm no Oracle expert.
In a Java SE database application, I process a lot of short-lived objects (say, accounting documents like bills etc.). Processing each object consists of opening a connection to a database and looking up some data. Not all objects are looked up in the same database; I select a specific database according to some object property, so I'll end up having several connections open.
What I actually need is no more than one connection for each database.
So I've done something like this:
public class MyPool {
    Map<String, Connection> activeConnections = new TreeMap<String, Connection>();

    public Connection getConnection(String database_name) throws SQLException {
        if (activeConnections.containsKey(database_name)) {
            return activeConnections.get(database_name);
        }
        // Retrieve the configuration data from a configuration object
        ConnectionConfig c = Configuration.getConnectionConfig(database_name);
        Connection connection = DriverManager.getConnection(c.url, c.user, c.password);
        activeConnections.put(database_name, connection);
        return connection;
    }
}
The questions are:
1) Since I see a lot of pooling libraries around (DBCP, C3P0 and others): what is the point of all those libraries? What do they add over a "basic" approach like this?
Tutorials like this don't help much in answering that question, since the basic solution shown here fits their definition of connection pooling perfectly.
2) This is a piece of code that will be "exposed" to other developers, who in turn may develop procedures to retrieve data from databases with different structures, probably getting connections from this "pool object".
Is it correct, in the docs and in the code, to refer to it as a "pool", or is it something different, so that calling it a "pool" would be misleading?
Your code isn't a connection pool implementation in the colloquial sense of the term, since each data source only ever has a single physical connection. The concept behind object pooling (in this case the object is a connection) is that some objects require overhead to set up; in the case of a connection pool, as you know, a database connection must be opened before you can talk to the database.
The difference here is that, unlike the popular connection pool implementations you've mentioned, your code isn't thread-safe for a concurrent environment. Applications running in high-concurrency circumstances like the web shouldn't need to absorb the overhead of establishing a connection on each request. Instead, a pool of open connections is maintained, and when a request has finished working with a connection, it is returned to the pool for subsequent requests to make use of.
This is required because connections are stateful. You can't have multiple requests sharing the same connection at the same time and guarantee any sort of reasonable transaction semantics.
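For illustration, here is a minimal sketch of the question's map made safe for concurrent callers (it reuses the ConnectionConfig/Configuration helpers from the question; note it is still not a pool, since each database gets exactly one shared connection):
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class MyPool {
    private final ConcurrentMap<String, Connection> activeConnections = new ConcurrentHashMap<>();

    public Connection getConnection(String databaseName) {
        // computeIfAbsent opens the connection at most once per database, even under concurrency
        return activeConnections.computeIfAbsent(databaseName, name -> {
            ConnectionConfig c = Configuration.getConnectionConfig(name);
            try {
                return DriverManager.getConnection(c.url, c.user, c.password);
            } catch (SQLException e) {
                throw new IllegalStateException("Could not connect to " + name, e);
            }
        });
    }
}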
Do not try to create your own connection pool; that is what BoneCP or any number of other very good and well-tested pools are for. Use BoneCP and wrap the connection pool like this:
object ConnectionPool {

  Class.forName("[ENTER DRIVER]")

  private val connstring = [ENTER YOUR STRING]
  private var cp: BoneCP = _

  createConnectionPool() // upon init create the cp

  /**
   * Create a new connection pool
   */
  def createConnectionPool() = {
    if (cp == null) {
      try {
        val config = new BoneCPConfig()
        config.setJdbcUrl(connstring)
        config.setMaxConnectionsPerPartition(3)
        config.setMinConnectionsPerPartition(1)
        config.setPartitionCount(1)
        cp = new BoneCP(config)
      }
      catch {
        case e: SQLException => e.printStackTrace()
      }
    }
  }

  def getConnection() = { cp.getConnection }
}
I'm pretty new to Servlets and JSP and to using databases.
At the moment I have a class in the 'model' part of my web app which has lots of methods that I've written to perform database queries and updates. At the moment, in each of those methods I am creating a database connection, doing the SQL stuff, then closing the connection.
This works fine while I'm just making small apps for myself, but I'm starting to realise that if lots of people were using my app concurrently, it would become apparent that creating and closing a database connection for each method call is a costly process. So I need to change the way I do things.
In Head First Servlets & JSP by Basham, Sierra & Bates, they describe how it's possible to use a ServletContextListener implementation to create an object on deployment of the web app and add it as an attribute of the ServletContext. The authors don't go into it, but they imply that people often add a database connection as an attribute of the ServletContext. I thought I would implement this myself, but after reading this Stack Overflow article on database connection management I'm not so sure.
However as I'm just starting with servlets and JSP, let alone the rest of J2EE, a lot of that article is beyond me.
The points that stand out for me from that article are:
Something could happen to break that database connection, and if we rely only on that one connection we would need to redeploy our app in order to re-establish it. Is this correct?
We should rely on the container to manage the database connections for us. Great, but how is this achieved? How can I communicate with the container? (Please bear in mind that I've just started with servlets and JSP.)
In terms of servlet design in general, I have one servlet class per request type, and each normally makes only one type of call to the database, i.e. a specific update or query. Instead of having one class with all the methods for querying the database, would it be a better design to put the methods within their respective servlets, or would that contravene the Model-View-Controller pattern?
I can't imagine that I'll be having too many problems with too many users slowing down the user experience just yet :) but I'd like to start doing things right if possible.
Many thanks in advance for your comments
Joe
The following page on Tomcat's website describes in detail how to connect Tomcat and MySQL. You do not want to roll your own; there are too many DataSource pools already available that have been debugged and proven in production environments.
The main thing about using a pool is that a connection is not terminated when you call close(); instead it is just returned to the pool. That is why it is important to make sure you close your resources in a try/finally block. Look here for a sample.
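For example, a sketch of that try/finally pattern (ds is a container-managed DataSource; the boards table is just a placeholder):
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;

static int countBoards(DataSource ds) throws SQLException {
    Connection conn = ds.getConnection();
    try {
        PreparedStatement ps = conn.prepareStatement("SELECT COUNT(*) FROM boards");
        try {
            ResultSet rs = ps.executeQuery();
            rs.next();
            return rs.getInt(1);
        } finally {
            ps.close();
        }
    } finally {
        conn.close(); // returned to the pool, not terminated
    }
}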
I would check out connection pooling and specifically frameworks like C3P0 or Apache Commons DBCP.
Both these packages will look after maintaining and managing a collection of database connections for you.
Typically, connections are established in advance and handed out to requesting threads as they require them. Connections can be validated prior to being handed out (and re-established before the client uses them if a connection turns out to be broken).
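As a rough sketch of that idea with Commons DBCP (the DBCP 2 package is shown; the driver, URL, credentials and validation query are placeholders):
import org.apache.commons.dbcp2.BasicDataSource;

static BasicDataSource createPooledDataSource() {
    BasicDataSource ds = new BasicDataSource();
    ds.setDriverClassName("com.mysql.cj.jdbc.Driver");
    ds.setUrl("jdbc:mysql://localhost:3306/mydb");
    ds.setUsername("user");
    ds.setPassword("pwd");
    ds.setInitialSize(5);              // connections established in advance
    ds.setTestOnBorrow(true);          // validate a connection before handing it out
    ds.setValidationQuery("SELECT 1"); // cheap query used for that validation
    return ds;
}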
The way to go in a web application is to have a connection pool manage your connections. This will allow your threads of execution to share the database connections, which is an important point as connecting to a database is usually a costly operation. Using a connection pool is usually just a configuration task as most containers support managing a connection pool.
From the viewpoint of your code, there are very few changes. Basically:
Connection pools are accessed through the DataSource interface, while non-pooled connections may be accessed through the old DriverManager.
To get the DataSource you will usually have to use JNDI, as this is the standard method for publishing connection pools in a J2EE application.
You want to close() Connection objects as soon as possible. This returns the connection to the pool without disconnecting from the DB, so that other threads can use it.
As always, you should call close() on every JDBC resource (connections, statements, result sets) to avoid leaks. This is especially important in a server application: servers are infrequently restarted, so leaks accumulate over time and will eventually make your application malfunction.
This is some example code from http://download.oracle.com/javase/1.4.2/docs/guide/jdbc/getstart/datasource.html (note: not exception-safe). As you can see, once you get the Connection reference there is nothing really special about it.
Context ctx = new InitialContext();
DataSource ds = (DataSource) ctx.lookup("jdbc/AcmeDB");
Connection con = ds.getConnection("genius", "abracadabra");
con.setAutoCommit(false);
PreparedStatement pstmt = con.prepareStatement(
        "SELECT NAME, TITLE FROM PERSONNEL WHERE DEPT = ?");

pstmt.setString(1, "SALES");
ResultSet rs = pstmt.executeQuery();
System.out.println("Sales Department:");
while (rs.next()) {
    String name = rs.getString("NAME");
    String title = rs.getString("TITLE");
    System.out.println(name + " ; ;" + title);
}

pstmt.setString(1, "CUST_SERVICE");
rs = pstmt.executeQuery();
System.out.println("Customer Service Department:");
while (rs.next()) {
    String name = rs.getString("NAME");
    String title = rs.getString("TITLE");
    System.out.println(name + " ; ;" + title);
}

rs.close();
pstmt.close();
con.close();
The authors don't go into it, but imply that people often add a database connection as an attribute of the ServletContext.
That's not the standard way to handle this. The traditional approach is to use a connection pool, i.e. a pool of ready-to-use connections. Applications then borrow connections from the pool and return them when done.
There are several standalone connection pool implementations available (C3P0, Commons DBCP, BoneCP) that you can bundle with your application. But when using a Servlet or Java EE container, I would use the connection pool provided by the container. Then obtain a DataSource (a handle on a connection pool) via JNDI from the application, get a JDBC connection from it, and close the connection to return it to the pool.
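For illustration, the application-side code with a container-managed pool is typically just a JNDI lookup plus plain JDBC. A sketch (the resource name jdbc/MyDB is a placeholder you would define in the container's configuration):
import java.sql.Connection;
import java.sql.SQLException;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

static Connection borrowConnection() throws NamingException, SQLException {
    Context ctx = new InitialContext();
    DataSource ds = (DataSource) ctx.lookup("java:comp/env/jdbc/MyDB");
    return ds.getConnection(); // close() later returns it to the pool
}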
The way to configure a pool is obviously container-specific, so you need to refer to the documentation of your container. The good news is that Tomcat provides several examples showing how to do this, how to obtain a DataSource via JNDI, and how to write proper JDBC code (read to the bottom of the page).
References
Apache Tomcat User Guide
JDBC Data Sources
JNDI Datasource HOW-TO