We have a Java J2EE application that was using individual web service calls for each database row insert/update. That turned out to be WAY too slow. They have brought me in to "quickly" fix it. I plan to convert all the web service calls to plain JDBC. To do that, I need to get a JDBC connection from the pool and then use it in multiple different methods. I need to use the same JDBC connection in multiple DAOs to string it all together into a single database transaction. I can explicitly pass around the JDBC connection to each DAO that needs it, but that would require me to change a LOT of method signatures, plus a LOT of unit tests (which goes against the "quickly" part).
I am trying to come up with a good way to put the JDBC connection somewhere and then just grab it in the methods that need it, without having to explicitly pass it around everywhere. We can't use Spring, JPA, or Hibernate on this project because the support team won't support those technologies. I can put the JDBC connection into an EJB, but I am not sure how reliable that would be. I could create a custom Singleton to manage database connections for each user (session?), but I would have to be careful about thread safety. If anyone has tried to do something like this before, I would appreciate some advice.
You could use a ThreadLocal. Have the entry point set it up, and then have the DAOs grab it:
import java.sql.Connection;

class ConnectionUtil {
    // One connection per thread: the entry point sets it, the DAOs read it
    public static final ThreadLocal<Connection> connection = new ThreadLocal<Connection>();
}

// Return, Args and newConnection() are placeholders for your own types/factory
public Return method(Args arg) {
    ConnectionUtil.connection.set(newConnection());
    try {
        // ... call the DAOs; they fetch the connection via ConnectionUtil.connection.get()
    } finally {
        // Always clear the ThreadLocal so pooled threads don't leak stale connections
        ConnectionUtil.connection.remove();
    }
}
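On the DAO side you then just grab it (a minimal sketch; the DAO name and SQL are illustrative):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

class MyDao {
    public void insertRow(String value) throws SQLException {
        // Use the connection the entry point stashed for this thread
        Connection con = ConnectionUtil.connection.get();
        try (PreparedStatement ps =
                con.prepareStatement("INSERT INTO my_table (col) VALUES (?)")) {
            ps.setString(1, value);
            ps.executeUpdate();
        }
    }
}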
Pretty ugly, but that seems to be what your boss wants.
Use Apache Commons DBCP. It's the Connection Pool project from Apache, and what's used internally in many engines.
We have done that before (5 years ago or so on an IBM WebSphere).
We wrote our own pool and stored the JDBC connections in a hashtable keyed by session ID. The only pitfall was closing the connection at session end and returning it to the pool (we did that with a session listener). If each user session uses only one JDBC connection, thread safety is inherited. So the singleton approach definitely works.
Our performance gain was enormous.
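A minimal sketch of such a session listener (the ConnectionPool class and its methods are illustrative, not part of any standard API):

import java.sql.Connection;
import javax.servlet.http.HttpSessionEvent;
import javax.servlet.http.HttpSessionListener;

public class ConnectionCleanupListener implements HttpSessionListener {
    public void sessionCreated(HttpSessionEvent se) {
        // nothing to do; the connection is bound lazily on first use
    }

    public void sessionDestroyed(HttpSessionEvent se) {
        // return this session's connection to the pool when the session ends
        Connection con = ConnectionPool.removeForSession(se.getSession().getId());
        if (con != null) {
            ConnectionPool.release(con);
        }
    }
}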
Additionally, what I have done for DAOs here is not to pass the connection in each method signature but in the constructor instead. The object that originally creates the connection then stays responsible for closing it.
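For example (a sketch; the DAO name and SQL are illustrative):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class OrderDao {
    private final Connection connection;

    // The caller that opened the connection passes it in and remains
    // responsible for committing and closing it
    public OrderDao(Connection connection) {
        this.connection = connection;
    }

    public void insertOrder(long id, String item) throws SQLException {
        try (PreparedStatement ps = connection.prepareStatement(
                "INSERT INTO orders (id, item) VALUES (?, ?)")) {
            ps.setLong(1, id);
            ps.setString(2, item);
            ps.executeUpdate();
        }
    }
}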
I have tried to figure this out by myself, but I can't.
I need to save objects in multiple related tables of my database, and it must happen in one single transaction.
I am using Servlets, JSP, JDBC.
(already have the dao layer and service layer)
As we know, transactions should always live in the service layer.
When I was using Spring MVC, I always used this annotation in services:
@Transactional
and had options for a TransactionManager in my spring.xml.
Now I need to do the same with servlets.
Can anyone help me with a small example of transactions with servlets, or does somebody have suggestions for this?
You have different ways to manage transactions at JDBC level.
The simplest way is a filter: you open a transaction at the beginning of request processing and commit (or roll back) at the end. It is as little invasive in the other layers as possible, but you cannot have transaction demarcation at the service layer. A sketch is shown below.
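A minimal sketch of such a filter, assuming a ThreadLocal holder like the ConnectionUtil shown further up this page (here called ConnectionHolder) and an illustrative getDBConnection() helper:

import java.io.IOException;
import java.sql.Connection;
import java.sql.SQLException;
import javax.servlet.*;

public class TransactionFilter implements Filter {
    public void init(FilterConfig config) {}
    public void destroy() {}

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        Connection con = null;
        try {
            con = getDBConnection();       // illustrative: look up your DataSource
            con.setAutoCommit(false);      // one transaction per request
            ConnectionHolder.set(con);     // make it visible to DAOs downstream
            chain.doFilter(req, res);      // servlet + service + DAO layers run here
            con.commit();                  // the whole request succeeded
        } catch (Exception e) {
            try { if (con != null) con.rollback(); } catch (SQLException ignored) {}
            throw new ServletException(e);
        } finally {
            ConnectionHolder.clear();
            try { if (con != null) con.close(); } catch (SQLException ignored) {}
        }
    }

    private Connection getDBConnection() {
        throw new UnsupportedOperationException("fetch from your pool/DataSource here");
    }
}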
At the other extreme, you can add code to explicitly create and commit transactions in all (relevant) service methods. You can factor the real work into common methods to limit code duplication, but you will have to modify your whole service layer consistently.
An alternative way, since you have an existing service layer, would be to mimic Spring and use proxies around your service classes. The proxy would create the transaction, call the real method, and commit the transaction. IMHO, it is still only a slightly invasive method, with little code duplication.
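A sketch of that proxy idea using a JDK dynamic proxy (ConnectionHolder is the same illustrative ThreadLocal holder as in the filter sketch above):

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.sql.Connection;
import javax.sql.DataSource;

public class TransactionalProxy implements InvocationHandler {
    private final Object target;
    private final DataSource dataSource;

    private TransactionalProxy(Object target, DataSource dataSource) {
        this.target = target;
        this.dataSource = dataSource;
    }

    @SuppressWarnings("unchecked")
    public static <T> T wrap(T service, Class<T> iface, DataSource ds) {
        return (T) Proxy.newProxyInstance(iface.getClassLoader(),
                new Class<?>[] { iface }, new TransactionalProxy(service, ds));
    }

    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
        try (Connection con = dataSource.getConnection()) {
            con.setAutoCommit(false);
            ConnectionHolder.set(con);      // DAOs pick the connection up from here
            try {
                Object result = method.invoke(target, args);
                con.commit();               // the service method succeeded
                return result;
            } catch (Exception e) {
                con.rollback();             // any failure rolls everything back
                throw e;                    // real code would unwrap InvocationTargetException
            } finally {
                ConnectionHolder.clear();
            }
        }
    }
}

Wiring is then a one-liner per service, e.g. OrderService svc = TransactionalProxy.wrap(new OrderServiceImpl(), OrderService.class, dataSource); (names illustrative).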
My choice would be to use method 1 for very simple use cases or prototyping and method 3 for more serious ones - but this is just my opinion.
I think first of all you need to understand which specification you would like to work with and then figure out how to do the integration.
There are so many techniques and technologies in Java to access the database.
In general if you want to access the DB at the lowest layer (JDBC) you'll have to manage transactions by yourself.
This link can be useful, because it provides a lot of examples. In general you have setAutoCommit(false) and then rollback/commit on the Connection JDBC interface.
If you wish to use stuff like Hibernate (note, you still don't need Spring for this), the Transaction interface can be handy.
Here is an example.
Spring, as an integration framework, allows transaction management by means of defining the relevant beans, so you can choose for yourself which transaction-management technology should be used.
This is a broad topic; you might be interested in reading this to understand more about the Spring way of managing transactions.
In general, JDBC is the lowest-level way of accessing the database in Java; all other APIs are built on top of it.
Hope this helps
In your service method you should handle the transaction yourself; you'll find an example below:
Connection dbConnection = null;
try {
    dbConnection = getDBConnection();  // e.g. from your DataSource/pool
    dbConnection.setAutoCommit(false); // begin the transaction

    // do your database work: PreparedStatement inserts, updates
    // OR, if you do the work in DAOs, pass the connection to them:
    // XDao xDao = new XDao(dbConnection);
    // YDao yDao = new YDao(dbConnection);
    // xDao.doWork();
    // yDao.doWork();

    dbConnection.commit();             // all statements succeed or none do
    System.out.println("Done!");
} catch (SQLException e) {
    System.out.println(e.getMessage());
    if (dbConnection != null) {
        try {
            dbConnection.rollback();   // undo the partial work
        } catch (SQLException rollbackEx) {
            // log it; the original exception is the one that matters
        }
    }
} finally {
    // close prepared statements first, then the connection
    if (dbConnection != null) {
        try {
            dbConnection.close();
        } catch (SQLException closeEx) {
            // nothing useful to do here
        }
    }
}
For a more advanced pattern and deeper understanding, I recommend the blog post here.
My application sometimes loses its connection to the MySQL database.
I think a good solution would be to schedule a timer that tries to reconnect after some delay.
What is the best way to do this? Maybe a separate thread that tries to connect to the DB? Or are there standard practices?
Thanks.
JDBC is a great way to start building a Java database application, but managing object mappings and connections/transactions can very rapidly lead to a lot of boilerplate and rewriting of logic that has already been written many times by many programmers.
Losing or closing a connection is normal and expected, unless you have a high-throughput application, in which case you might keep several connections alive (this is known as connection pooling).
There are essentially three "high-level" approaches to maintaining efficient connections and transactions:
1) The simplest solution is to check, whenever you reuse a connection, that it is still valid, and reopen it if not (see the sketch below).
2) A more sophisticated solution is to use a connection-pooling mechanism, such as the Apache DBCP library (http://commons.apache.org/dbcp/).
3) Finally, in my opinion, the most maintainable solution is to use a JDBC framework like iBATIS/Hibernate, which provides a simple, declarative interface for managing object-relational mapping, transactions, and database state, while also transparently maintaining the connection logic for you.
ALSO: If object-relational mapping is not your thing, then you can use a framework such as Apache DbUtils to manage querying and connections, without the heavyweight data-mapping stuff getting in the way.
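A minimal sketch of approach 1, using the JDBC 4 Connection.isValid check (the URL and holder class are illustrative):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class ReconnectingHolder {
    private final String url; // e.g. "jdbc:mysql://localhost:3306/mydb?user=app&password=secret"
    private Connection connection;

    public ReconnectingHolder(String url) {
        this.url = url;
    }

    // Validate before reuse; reopen transparently if the connection was lost
    public synchronized Connection get() throws SQLException {
        if (connection == null || !connection.isValid(2 /* seconds */)) {
            connection = DriverManager.getConnection(url);
        }
        return connection;
    }
}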
JDBC is a simple API that abstracts the operations of different database systems. It makes things uniform, such as mapping different native types to Java types.
However, a lost connection is another big issue. Using a connection-pool library is better than writing a new one yourself; there are too many details to get right when implementing a connection pool from scratch without bugs.
Consider using a mature library:
Commons DBCP
BoneCP
etc.
Commons DBCP is based on Commons Pool; you should understand the configurable options of both. BoneCP is another, newer connection pool; being lock-free is its advantage.
A validation SQL string is important for checking whether a connection is dead or alive. Lost-connection checking is enabled by setting the validation query.
Here is the dbcp configuration page:
http://commons.apache.org/dbcp/configuration.html
It says:
NOTE - for a true value to have any effect, the validationQuery
parameter must be set to a non-null string.
For example:
dataSource.setValidationQuery(isDBOracle() ? "select 1 from dual" : "select 1");
dataSource.setTestWhileIdle(true);
dataSource.setTestOnReturn(true);
dataSource.setRemoveAbandoned(true);
dataSource.setRemoveAbandonedTimeout(60 * 3 /* 3 mins */);
dataSource.setMaxIdle(30);
dataSource.setMaxWait(1000 * 20 /* 20 secs*/);
Note: if you use Commons DBCP in WebLogic, don't forget that the server ships an older version of the Commons libraries, which can force your application to load the wrong version. The prefer-web-inf-classes setting will help you.
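If I remember correctly, that setting goes in WEB-INF/weblogic.xml, roughly like this (a hedged sketch; check your WebLogic version's docs):

<weblogic-web-app>
  <container-descriptor>
    <!-- load classes bundled in WEB-INF before the server's own copies -->
    <prefer-web-inf-classes>true</prefer-web-inf-classes>
  </container-descriptor>
</weblogic-web-app>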
I am developing a program in Java that I will use for registering data, stored in a MySQL database. We have not been making "big" programs in my class that use databases for storage, so what we have done is make an adapter (we usually name it DBAdapter or something) that creates the connection and returns it. Then we have made a database handler class where all the statements are executed. Finally, the controller + view classes have a reference to this handler and call whatever methods are available.
However, my question is: when dealing with multiple tables and different model data, wouldn't it be good to separate the code in the handler into smaller chunks, such as private classes or other public classes that a "main" handler holds references to? Example: if you make a system for a company that ships goods, you would probably have a database that stores data about the goods, and a handler with many select statements for various things. You would also have many employees, and you would probably want to separate the select statements for the goods from the select statements for the employees.
I also wonder if handler/adapter etc. is the correct terminology?
(This is not homework btw. I am making a program that will be a used for registering data for my iPhone app)
Thank you for your time.
You may want to look into Object-relational mapping libraries for Java, such as OpenJPA or Hibernate. If you'd rather stick to SQL - or like the fine-grained control - you may find Ibatis interesting.
In any case, I wouldn't manage the connections to the DB myself but rely on a connection pool, usually accessed through the DataSource interface.
While ORM may well be where you end up, I'd start with:
a connection pool
implementing a DAO pattern strategy. If you are using Spring, look at JdbcTemplate - it will be pretty easy to convert this to HibernateTemplate, etc.
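As a sketch of the per-entity split the question asks about (names and SQL are illustrative):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

// One DAO per aggregate: goods queries live here...
class GoodsDao {
    private final Connection connection;

    GoodsDao(Connection connection) {
        this.connection = connection;
    }

    List<String> findGoodsNames() throws SQLException {
        List<String> names = new ArrayList<String>();
        try (PreparedStatement ps = connection.prepareStatement("SELECT name FROM goods");
             ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                names.add(rs.getString("name"));
            }
        }
        return names;
    }
}

// ...and employee queries live in their own class, instead of one giant handler
class EmployeeDao {
    private final Connection connection;

    EmployeeDao(Connection connection) {
        this.connection = connection;
    }
    // SELECTs for employees go here
}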
If I have a method which establishes a database connection, how could this method be tested? Returning a bool in the event of a successful connection is one way, but is that the best way?
From a testability standpoint, is it best to have the connection method as one method and the method that gets data back as a separate method?
Also, how would I test methods which get data back from a database? I could assert against expected data, but the actual data can change and still be the right result set.
EDIT: For the last point, to check the data: if it's supposed to be a list of cars, then I can check that they are real car models. Or if they are a bunch of web servers, I can keep a list of the web servers that exist on the system, compare it with what the code under test returns, and derive the test result. If the results differ, the data is the issue, not the query?
Thanks.
First, if you have involved a database, you are no longer unit testing. You have entered integration (for connection configuration) or functional testing land. And those are very different beasts.
The connection method should definitely be separate from data fetch. In fact, your connection should come from a factory so that you can pool it. As far as testing the connection, really all you can test is that your configuration is correct by making a connection to the DB. You shouldn't be trying to test your connection pool, as that should probably be a library someone else wrote (dbcp or c3p0). Furthermore, you probably can't test this, as your unit/integration/function tests should NEVER connect to a production level database.
As for testing that your data-access code works: that's functional testing and involves a lot of framework and support. You need a separate testing DB, the ability to create the schema on the fly during testing, to insert any static data into tables, and to return the database to a known clean state after each test. Furthermore, this DB should be instantiated and run in such a way that two people can run the tests at once, especially if you have more than one developer plus an automated testing box.
Asserts should be made against data that is either static (a list of states, for example, which doesn't change often) or inserted during the test and removed afterwards so it doesn't interfere with other tests.
EDIT: As noted, there are frameworks to assist with this. DBUnit is fairly common.
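A minimal sketch of that insert/assert/clean-up cycle with plain JDBC and JUnit 4, assuming an in-memory H2 database for the test (the table and data are illustrative; DBUnit automates exactly this kind of fixture handling):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class CarDaoFunctionalTest {
    private Connection con;

    @Before
    public void setUp() throws Exception {
        con = DriverManager.getConnection("jdbc:h2:mem:testdb"); // never the production DB
        con.createStatement().execute("CREATE TABLE cars (model VARCHAR(50))");
        try (PreparedStatement ps = con.prepareStatement("INSERT INTO cars VALUES (?)")) {
            ps.setString(1, "Model T");
            ps.executeUpdate();
        }
    }

    @Test
    public void returnsInsertedCar() throws Exception {
        try (ResultSet rs = con.createStatement().executeQuery("SELECT model FROM cars")) {
            rs.next();
            assertEquals("Model T", rs.getString("model")); // known fixture, known answer
        }
    }

    @After
    public void tearDown() throws Exception {
        con.createStatement().execute("DROP TABLE cars"); // leave the DB clean
        con.close();
    }
}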
You can grab ideas from here. I would go for mock objects when unit testing DB code.
Otherwise, if the application is huge and you are running long and complex unit tests, you can also virtualize your DB server and easily revert it to a saved snapshot to run your tests again in a known environment.
Using my Acolyte framework ( https://github.com/cchantep/acolyte ) you can mimic any JDBC-supported DB, describing cases (how to handle each query/update executed) and which result set/update count to return in each case (fixtures are described as row lists for queries and counts for updates).
Such a connection can be used directly, by passing the instance wherever JDBC is required, or registered with a unique ID in the JDBC URL namespace jdbc:acolyte: so that it is available to code obtaining connections through JDBC URL resolution.
Whichever way the connection is created, Acolyte keeps each one isolated, which is right for unit tests (no extra cleanup to do on a test DB).
As persistence cases can be dispatched to different isolated connections, you no longer need a big, all-in-one, hard-to-manage DB (or fixtures file): it can easily be split across several connections, e.g. one per persistence method/module.
My Acolyte framework is usable either in pure Java or in Scala.
If the goal is to test the method's functionality, not the database SP or SQL statement, then you may want to consider dependency injection in the sense of a data-provider interface. In other words, your class uses an interface with methods that return data. The default implementation uses the database. The unit-test implementation has several options:
mocking (NMock, Moq, etc.) - a great way; I love mocking.
in-memory database
static database with static data
I don't like anything but the first. As a general rule, programming to interfaces is always much more flexible.
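For example (a sketch of the interface idea; all names are illustrative):

import java.util.Arrays;
import java.util.List;

// The class under test depends only on this interface
interface CarProvider {
    List<String> findCarModels();
}

// The production implementation is backed by JDBC (elided here), while the
// unit test swaps in a canned implementation that needs no database at all
class StubCarProvider implements CarProvider {
    public List<String> findCarModels() {
        return Arrays.asList("Model T", "Beetle");
    }
}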
For testing that a database connection can be established: you could let the connection execute a very simple SQL statement as the test. Some application servers have such a configuration option; the following snippet is from a JBoss DB configuration:
<!-- sql to call on an existing pooled connection when it is obtained from pool
<check-valid-connection-sql>some arbitrary sql</check-valid-connection-sql>
For each client I have a separate database, but the business logic and tables are the same for every client. I want a common service and DAO layer for all clients. In the DAO, I select the datasource based on the logged-in user's client. With @Transactional, I have to pass the bean ID of a transaction manager. How can I make a common service layer with the @Transactional annotation?
The same question has been asked here:
Multiple transaction managers - Selecting a one at runtime - Spring
Choose between muliple transaction managers at runtime
but nobody replied.
If you want to create a database connection dynamically, then have a look at this SO post.
From the linked post: Basically, in JDBC most of these properties are not configurable in the API like that; rather, they depend on the implementation. The way JDBC handles this is by allowing the connection URL to be different per vendor.
So what you do is register the driver so that the JDBC system knows what to do with the URL:
DriverManager.registerDriver((Driver) Class.forName("com.mysql.jdbc.Driver").newInstance());
Then you form the URL:
String url = "jdbc:mysql://[host][,failoverhost...][:port]/[database][?propertyName1][=propertyValue1][&propertyName2][=propertyValue2]";
And finally, use it to get a connection:
Connection c = DriverManager.getConnection(url);
In more sophisticated JDBC, you get involved with connection pools and the like, and application servers often have their own way of registering drivers in JNDI; you look up a DataSource from there and call getConnection on it.
In terms of what properties MySQL supports, see here (the link is dead).
EDIT: One more thought: technically, just having a line of code that does Class.forName("com.mysql.jdbc.Driver") should be enough, as the class should have its own static initializer that registers a version. But sometimes a JDBC driver doesn't, so if you aren't sure, there is little harm in registering a second one; it just creates a duplicate object in memory.
I don't know if this will work, since I have not tested it, but you could try.
Now what you could do is use the @Transactional annotation on top of the DAOs without specifying any values (that works). Then, in your DAO classes, instead of injecting a DataSource bean, create your own DataSource dynamically as described in the link above, and either inject that dependency at runtime, use getter/setter methods, or just use the new keyword. I hope that does the trick.
NOTE: I have not tested it myself yet, so if this works, do let me know.
You do not need to configure and switch between multiple transaction managers to accomplish your end goal. Instead use the Spring provided org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource mechanism.
Detailed examples can be found here :
https://spring.io/blog/2007/01/23/dynamic-datasource-routing/
http://howtodoinjava.com/spring/spring-orm/spring-3-2-5-abstractroutingdatasource-example/
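A minimal sketch of the routing datasource (ClientContext is an illustrative ThreadLocal holder that you would populate per request, e.g. in a filter or interceptor):

import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;

// Thread-local holder for the current client's lookup key (illustrative)
class ClientContext {
    private static final ThreadLocal<String> CLIENT = new ThreadLocal<String>();
    static void set(String clientId) { CLIENT.set(clientId); }
    static String get() { return CLIENT.get(); }
    static void clear() { CLIENT.remove(); }
}

public class ClientRoutingDataSource extends AbstractRoutingDataSource {
    @Override
    protected Object determineCurrentLookupKey() {
        // Spring calls this on every getConnection(); the returned key picks
        // one of the target DataSources registered via setTargetDataSources(...)
        return ClientContext.get();
    }
}

You register the per-client DataSources with setTargetDataSources(...) and wire a single DataSourceTransactionManager to this routing datasource, so a plain @Transactional then works unchanged across all clients.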