I am facing a use case that I have not run into before with respect to using the JdbcTemplate. I have looked all over the web, and here as well, but can't find an answer for it.
In the current framework that I am using, which was developed at the company I currently work at, JDBC connections are readily available and obtained in the following form:
ConnectionManager.getInstance().getConnection(some_database_name);
The database name is provided as a parameter and can differ from call to call.
So, in order to use the JdbcTemplate, I created a subclass of the Commons DBCP data source and added a constructor which takes in the connection above:
public CommonsDBCPExtension(Connection conn) {
    this.conn = conn;
}
Then I just instantiate a JdbcTemplate and set the data source:
this.jdbcTemplate = new JdbcTemplate();
jdbcTemplate.setDataSource(new CommonsDBCPExtension(conn));
So far, all my tests are working. The reason I want to stick with the JdbcTemplate is to avoid all the nightmare JDBC code I would otherwise have to write, which in my opinion is worse than what I have done so far.
So I am wondering whether there is a more elegant way of doing this in Spring, whether this is just flat out wrong, or whether it's the right approach. This is the first time I have run into this use case in my career, so a bit of help from anyone who has experience with something like this would be greatly appreciated.
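For what it's worth, Spring itself ships a DataSource wrapper for exactly this situation, org.springframework.jdbc.datasource.SingleConnectionDataSource, which may be more elegant than subclassing Commons DBCP. A minimal sketch of the wiring, assuming the ConnectionManager from the question (the DAO class name here is purely illustrative):

import java.sql.Connection;
import javax.sql.DataSource;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.datasource.SingleConnectionDataSource;

public class ReportDao {

    private final JdbcTemplate jdbcTemplate;

    public ReportDao(String databaseName) {
        // Obtain the connection from the in-house framework, as described above.
        Connection conn = ConnectionManager.getInstance().getConnection(databaseName);
        // suppressClose = true hands JdbcTemplate a close-suppressing proxy, so
        // the template's own cleanup does not close the shared connection.
        DataSource dataSource = new SingleConnectionDataSource(conn, true);
        this.jdbcTemplate = new JdbcTemplate(dataSource);
    }
}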
Related
I have an old Java project (no frameworks or build tools used) that has a class full of SQL methods and corresponding bean classes. The SQL methods mostly use SELECT, INSERT and UPDATE queries, like this:
public static void sqlUpdateAge(Connection dbConnection, int age, int id) {
    PreparedStatement s = null;
    String sql = "UPDATE person SET age = ? WHERE id = ?";
    try {
        s = dbConnection.prepareStatement(sql);
        s.setInt(1, age);
        s.setInt(2, id);
        s.executeUpdate();
    } catch (Exception e) {
        e.printStackTrace();
    } finally {
        try {
            if (s != null)
                s.close();
        } catch (SQLException e) {
            e.printStackTrace();
        }
    }
}
What is the best practice in unit testing when it comes to SQL queries?
The easiest way I can think of would be to use my development database: just call sqlUpdateAge() in the test class, query the database for a result set, and assertTrue that the set age is in the result set. However, this would fill up the development database with unnecessary data, and I would like to avoid that.
Is the solution to create a so-called in-memory database, or to somehow roll back the changes I made?
If I need an in-memory database:
Where and how would I create it? Directly in the test class, or perhaps in a config file?
How do I pass it to the sqlUpdateAge() method?
I would suggest seeing if you can start with automating the build, either by introducing a build tool such as Maven or Gradle or, if that's not possible, by scripting the build. In any case, your goal should be to get to a point where it's easy for you to trigger a build, together with the tests, whenever code changes.
If you are not able to produce a consistent build on every change, with the guarantee that all unit tests have been run, then there's really no value in writing unit tests in the first place. Otherwise, your tests are eventually going to fail due to code modifications, and you won't notice unless all of them are run automatically.
Once you have that, you will have a better idea of how you would like to run unit or integration tests.
As you can't benefit from the testing support that many application frameworks provide, you're basically on your own for setting up database testing. In that case, I don't think an in-memory database is really the best option, because:
It's a different database technology than what you normally use and, as the code indicates, you are not using an ORM that would take care of the different SQL dialects for you. That being the case, you might find yourself unable to accurately test your code because of SQL dialect differences.
You will need to do all the setup of the in-memory DB yourself, which is of course possible, but it's still a piece of code you need to maintain and that can also fail.
The two alternatives I can think of are:
Use Docker to start your actual database technology every time you run the tests. (That's also something you have to script yourself, but it will most likely be a very simple, short command.)
Have a test database running in your test environment. Every time before you run the tests, ensure the database is reset to its original state; the easiest way is to drop the existing schema and restore the original one (see the sketch after this answer). In this case, you need to ensure you don't run multiple builds in parallel against the same test database.
These suggestions apply only if you have experience with the shell and/or support from someone in ops. If not, setting up H2 might be easier and more straightforward.
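For the second alternative, a minimal JUnit sketch of such a per-test reset; the JDBC URL, the credentials and the schema are assumptions, chosen to match the person table from the question:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;
import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach;

class PersonSqlTest {

    private Connection connection;

    @BeforeEach
    void resetDatabase() throws Exception {
        // Dedicated test database; never point this at development or production.
        connection = DriverManager.getConnection(
                "jdbc:mysql://testhost/persontest", "test_user", "test_password");
        try (Statement s = connection.createStatement()) {
            // Drop and recreate the schema so every test starts from a known state.
            s.execute("DROP TABLE IF EXISTS person");
            s.execute("CREATE TABLE person (id INT PRIMARY KEY, age INT)");
            s.execute("INSERT INTO person (id, age) VALUES (1, 30)");
        }
    }

    @AfterEach
    void tearDown() throws Exception {
        connection.close();
    }
}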
Things would have been easy with a Spring Boot project. In your case, you have several strategies:
Configure an H2 database. You can initialize the database by creating the schema and inserting data in a setUp method annotated with @BeforeEach (a sketch follows below).
You can use a dedicated framework like DbUnit.
You will also have to initialize your dbConnection in a setUp method in your unit test.
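A minimal sketch of that H2 approach with JUnit 5, assuming the person table from the question and a SqlMethods class holding sqlUpdateAge (both names illustrative):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class SqlUpdateAgeTest {

    private Connection connection;

    @BeforeEach
    void setUp() throws Exception {
        // Fresh in-memory H2 database per test; MODE=MySQL reduces (but does
        // not eliminate) dialect differences if production runs on MySQL.
        connection = DriverManager.getConnection("jdbc:h2:mem:test;MODE=MySQL");
        try (Statement s = connection.createStatement()) {
            s.execute("CREATE TABLE person (id INT PRIMARY KEY, age INT)");
            s.execute("INSERT INTO person (id, age) VALUES (1, 30)");
        }
    }

    @AfterEach
    void tearDown() throws Exception {
        connection.close(); // closing the last connection discards the in-memory DB
    }

    @Test
    void sqlUpdateAgeSetsNewAge() throws Exception {
        SqlMethods.sqlUpdateAge(connection, 42, 1);
        try (Statement s = connection.createStatement();
                ResultSet r = s.executeQuery("SELECT age FROM person WHERE id = 1")) {
            r.next();
            assertEquals(42, r.getInt("age"));
        }
    }
}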
In order to run some functional tests, a fake application needs to run. To make all tests independent, the fake application needs a clean database every time a test is set up. According to the documentation, the ideal way to do this is to use an in-memory database, like this:
@Before
public void before() {
    app = fakeApplication(inMemoryDatabase());
    start(app);
    initializeData();
}

@After
public void after() {
    stop(app);
}
The problem with this is that it does not set the MODE to MYSQL, while the normal database is a MySQL database. In order to set this mode, one must add options, which is done using the code below. Note that "test" is the name of the database (different from "default", to avoid confusion).
app = fakeApplication(inMemoryDatabase("test", ImmutableMap.of("MODE", "MYSQL")));
However, this code does something weird: it uses the default database configured in application.conf (which is not even an in-memory database). So I changed the code to the following:
app = fakeApplication(inMemoryDatabase("default", ImmutableMap.of("MODE", "MYSQL")));
But this does nothing: it still says that MODE MYSQL is not set.
Can someone help me?
The framework used is Java Play 2.5.
You can get some help from the documentation https://www.playframework.com/documentation/2.5.x/JavaTestingWithDatabases.
In my desktop application, new databases get opened quite often. I use Hibernate/JPA as an ORM.
The problem is that creating the EntityManagerFactory is quite slow, taking about 5-6 seconds on a fast machine. I know the EntityManagerFactory is supposed to be heavyweight, but this is just too slow for a desktop application, where the user expects the new database to open quickly.
Can I turn off some EntityManagerFactory features to get an instance faster? Or is it possible to create parts of the EntityManagerFactory lazily to speed up creation?
Can I somehow create the EntityManagerFactory object before knowing the database URL? I would be happy to turn off all validation for this to be possible.
By doing so, can I pool EntityManagerFactorys for later use?
Any other idea how to create the EntityManagerFactory faster?
Update with more information and JProfiler profiling
The desktop application can open saved files. Our application's document file format consists of one SQLite database plus some binary data in a ZIP file. When opening a document, the ZIP gets extracted and the database is opened with Hibernate. The databases all have the same schema but, obviously, different data.
It seems that the first time I open a file it takes significantly longer than the following times.
I profiled the first and second run with JProfiler and compared the results.
1st run:

create EMF: 4385 ms
build EMF: 3090 ms
EJB3Configuration configure: 900 ms
EJB3Configuration <clinit>: 380 ms

2nd run:

create EMF: 1275 ms
build EMF: 970 ms
EJB3Configuration configure: 305 ms
EJB3Configuration <clinit>: not visible, probably 0 ms
In the call tree comparison you can see that some methods are significantly faster (DatabaseManager. as the starting point):

create EMF: -3120 ms
Hibernate create EMF: -3110 ms
EJB3Configuration configure: -595 ms
EJB3Configuration <clinit>: -380 ms
build EMF: -2120 ms
buildSessionFactory: -1945 ms
secondPassCompile: -425 ms
buildSettings: -346 ms
SessionFactoryImpl.<init>: -1040 ms
The hot spot comparison has the more interesting results:

ClassLoader.loadClass: -1686 ms
XMLSchemaFactory.newSchema: -184 ms
ClassFile.<init>: -109 ms
I am not sure if it is the loading of Hibernate classes or my Entity classes.
A first improvement would be to create an EMF as soon as the application starts, just to initialize all the necessary classes (I already ship an empty db file as a prototype with my application). @sharakan, thank you for your answer; maybe a DeferredConnectionProvider would already be a solution for this problem.
I will try the DeferredConnectionProvider next! But we might be able to speed it up even further. Do you have any more suggestions?
You should be able to do this by implementing your own ConnectionProvider as a decorator around a real ConnectionProvider.
The key observation here is that the ConnectionProvider isn't used until an EntityManager is created (see the comment in supportsAggressiveRelease() for a caveat to that). So you can create a DeferredConnectionProvider class and use it to construct the EntityManagerFactory, but then wait for user input and do the deferred initialization before actually creating any EntityManager instances. I've written this as a wrapper around ConnectionProviderImpl, but you should be able to use any other implementation of ConnectionProvider as the base.
import java.sql.Connection;
import java.sql.SQLException;
import java.util.Properties;

import org.hibernate.HibernateException;
import org.hibernate.cfg.Environment;
// ConnectionProvider and ConnectionProviderImpl live in version-dependent
// packages; import the ones that match your Hibernate version.

public class DeferredConnectionProvider implements ConnectionProvider {

    private Properties configuredProps;
    private ConnectionProviderImpl realConnectionProvider;

    @Override
    public void configure(Properties props) throws HibernateException {
        // Only remember the properties; the real provider is created later.
        configuredProps = props;
    }

    public void finalConfiguration(String jdbcUrl, String userName, String password) {
        configuredProps.setProperty(Environment.URL, jdbcUrl);
        configuredProps.setProperty(Environment.USER, userName);
        configuredProps.setProperty(Environment.PASS, password);
        realConnectionProvider = new ConnectionProviderImpl();
        realConnectionProvider.configure(configuredProps);
    }

    private void assertConfigured() {
        if (realConnectionProvider == null) {
            throw new IllegalStateException("Not configured yet!");
        }
    }

    @Override
    public Connection getConnection() throws SQLException {
        assertConfigured();
        return realConnectionProvider.getConnection();
    }

    @Override
    public void closeConnection(Connection conn) throws SQLException {
        assertConfigured();
        realConnectionProvider.closeConnection(conn);
    }

    @Override
    public void close() throws HibernateException {
        assertConfigured();
        realConnectionProvider.close();
    }

    @Override
    public boolean supportsAggressiveRelease() {
        // This gets called during EntityManagerFactory construction, but it's
        // just a flag, so you should be able to either do this, or return
        // true/false depending on the actual provider.
        return new ConnectionProviderImpl().supportsAggressiveRelease();
    }
}
A rough example of how to use it:
// Get an EntityManagerFactory with the following property set:
// properties.put(Environment.CONNECTION_PROVIDER, DeferredConnectionProvider.class.getName());
HibernateEntityManagerFactory factory = (HibernateEntityManagerFactory) entityManagerFactory;
// ...do user input of connection info...
SessionFactoryImpl sessionFactory = (SessionFactoryImpl) factory.getSessionFactory();
DeferredConnectionProvider connectionProvider = (DeferredConnectionProvider) sessionFactory.getSettings()
.getConnectionProvider();
connectionProvider.finalConfiguration(jdbcUrl, userName, password);
You could put the initial setup of the EntityManagerFactory on a separate thread or something, so that the user never has to wait for it. Then the only thing they'll wait for, after specifying the connection info, is the setting up of the connection pool, which should be fairly quick compared to parsing the object model.
Can I turn off some EntityManagerFactory features to get an instance faster?
I don't believe so. EMFs don't really have many features to turn off, other than initializing a JDBC connection/pool.
Or is it possible to create some of the EntityManagerFactory lazily to speed up creation?
Rather than creating the EMF lazily, when the user will notice the performance hit, I suggest you should head in the opposite direction - create the EMF proactively before the user actually needs it. Create it once, up-front, possibly in a separate thread during application initialisation (or at least as soon as you know about your database). Reuse it throughout the existence of your application/database.
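A minimal sketch of that idea, assuming a persistence unit named "app" defined in persistence.xml (the unit name and holder class are hypothetical); the factory is built on a background thread during startup, and callers block only if it isn't ready yet:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

public final class EmfHolder {

    private static final ExecutorService STARTUP = Executors.newSingleThreadExecutor();

    // Kick off the expensive EntityManagerFactory creation as soon as this
    // class is loaded, in parallel with the rest of application startup.
    private static final Future<EntityManagerFactory> FACTORY =
            STARTUP.submit(() -> Persistence.createEntityManagerFactory("app"));

    private EmfHolder() {
    }

    public static EntityManagerFactory get() throws Exception {
        return FACTORY.get(); // blocks only if creation hasn't finished yet
    }
}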
Can I somehow create the EntityManagerFactory object before knowing the database url?
No - it creates a JDBC connection.
I think a better question is: why does your application dynamically discover database connection URLs? Are you saying your databases are created or made available on the fly, with no way to anticipate the connection parameters in advance? That really is to be avoided.
By doing so, can I pool EntityManagerFactorys for later use?
No, you can't pool EMFs. It's the connections that you can pool.
Any other idea how to create the EntityManagerFactory faster?
I agree - 6 seconds is too slow for initialisation of an EMF.
I suspect it has more to do with your selected database technology than with JPA/JDBC/JVM. My guess is that your database is initialising itself as you connect. Are you using Access? What DB are you using?
Are you connecting to a remotely located database? Over a WAN? Is the network speed/latency good?
Are the client PCs limited in performance?
EDIT: Added after comments
Implementing your own ConnectionProvider as a decorator around a real ConnectionProvider will not speed up the user's experience at all. The database instance still needs to be initialised, the EMF and EM still need to be created, and the JDBC connection still needs to be established afterwards.
Options:

1. Share a common preloaded DB instance: this seems not possible for your business scenario (although JSE technology supports it and also supports client-server designs).
2. Change to a DB with a faster startup: Derby (a.k.a. Java DB) is included in modern JVMs and has a startup time of about 1.5 seconds (cold) and 0.7 seconds (warm, data preloaded).
3. In many (most?) scenarios, the fastest solution would be to load the data directly into in-memory Java objects using JAXB with StAX, and then work with the in-memory cached data (particularly using smart structures like maps, hashing and array lists). Just as JPA can map POJO classes to database tables and columns, JAXB can map POJO classes to an XML schema and work with XML document instances. This is less desirable if you have very complex queries using SQL set-based logic, with multiple joins and strong use of DB indexes. (A sketch of this option follows below.)

Option 2 would probably give the best improvement for limited effort.
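A minimal sketch of option 3, assuming a JAXB-annotated PersonList class and an XML data file (both hypothetical):

import java.io.FileInputStream;
import javax.xml.bind.JAXBContext;
import javax.xml.bind.Unmarshaller;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamReader;

public class XmlLoader {

    public static PersonList load(String path) throws Exception {
        // Pull-parse the document with StAX and hand the stream to JAXB, which
        // maps the XML onto annotated POJOs, much like JPA maps tables onto
        // entities. Afterwards all "queries" run against in-memory collections.
        XMLInputFactory staxFactory = XMLInputFactory.newInstance();
        try (FileInputStream in = new FileInputStream(path)) {
            XMLStreamReader reader = staxFactory.createXMLStreamReader(in);
            Unmarshaller unmarshaller =
                    JAXBContext.newInstance(PersonList.class).createUnmarshaller();
            return (PersonList) unmarshaller.unmarshal(reader);
        }
    }
}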
Additionally:
- try to unzip the data files during deployment rather than during app usage.
- initialize the EMF in a startup thread that runs in parallel to the UI startup; try to start initializing the DB as one of the very first steps of the app (that means connecting to the actual instance using JDBC).
I am developing a MongoDB app with Java, but I think this question relates to datastore connections for web apps in general.
I like to structure all web apps with four top-level packages, which I think will be self-explanatory:
Controller
Model
Dao
Util
Ideally I would like to have a class in the Dao package that handles all the connections details.
So far I have created a class that looks like this:
public class Dao {
    public static Mongo mongo;
    public static DB database;

    public static DB getDB() throws UnknownHostException, MongoException {
        mongo = new Mongo("localhost");
        database = mongo.getDB("mydb");
        return database;
    }

    public static void closeMongo() {
        mongo.close();
    }
}
I use it in my code with something like this
public static void someMethod(String someData) {
    try {
        DB db = Dao.getDB();
        DBCollection rColl = db.getCollection("mycollection");
        // perform some database operations
        Dao.closeMongo();
    } catch (UnknownHostException e) {
        e.printStackTrace();
    } catch (MongoException e) {
        e.printStackTrace();
    }
}
This seems to work fine, but I'd be curious to know what people think is the "best" way to handle this issue, if there is such a thing.
The rule of thumb when connecting to a relational database server is to have a pool. For example, connecting to an Oracle database through a pool gives you performance benefits, both in connection setup time and in SQL parsing time (if you are using bind variables). Other relational databases may vary, but in my opinion a pool is a good pattern for other reasons as well (e.g. you may want to limit the maximum number of connections for your DB user).

You are using MongoDB, so the first thing to check is how MongoDB handles connections and how expensive creating a connection is. I suggest using or building a class that implements pooling logic, because it gives you the flexibility you may need in the future.

Looking at your code, it seems that your API
DB db = Dao.getDB();
should be paired with:
Dao.closeDB(DB db);
so you have a chance to really close the connection, or to reuse it, without affecting the Dao code. With these two methods you can switch the way you manage connections without recoding the Dao objects.
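A minimal sketch of that pairing, based on the Dao class from the question (the legacy Mongo driver API is kept as-is); the point is that connection handling can later be swapped for a pool inside these two methods without touching any caller:

import java.net.UnknownHostException;
import com.mongodb.DB;
import com.mongodb.Mongo;
import com.mongodb.MongoException;

public class Dao {

    public static DB getDB() throws UnknownHostException, MongoException {
        // Today: one Mongo per call, as in the question. A pool (or a single
        // shared instance) could be introduced here later.
        Mongo mongo = new Mongo("localhost");
        return mongo.getDB("mydb");
    }

    public static void closeDB(DB db) {
        // Today: close the underlying Mongo. With a pool, this would instead
        // return the connection, and no caller would need to change.
        db.getMongo().close();
    }
}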
I would suggest writing a Java class that establishes the connection with the database.
The arguments to its constructor should be the database name, password, host, port and other necessary credentials.
You can then call the parameterized constructor wherever there is a need to establish database connectivity. This can be your model.
I got a nice solution from this article: http://www.lennartkoopmann.net/post/722935345
Edit: since that link is dead, here's a copy from the Wayback Machine:
http://web.archive.org/web/20120810083748/http://www.lennartkoopmann.net/post/722935345
Main Idea
What I found interesting was the use of a static synchronised method that returns the single instance of the class and its variables. Most professional devs probably find this obvious; I found it a useful pattern for managing the DB connections.
Pooling
Mongo does automatic connection pooling, so the key is to use just one Mongo instance for the datastore and let it handle its own pooling.
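A minimal sketch of that pattern (class and database names are illustrative): one shared Mongo instance behind a static synchronized accessor, with the driver's built-in pooling doing the rest:

import java.net.UnknownHostException;
import com.mongodb.DB;
import com.mongodb.Mongo;

public final class MongoHolder {

    private static Mongo instance;

    private MongoHolder() {
    }

    // Lazily create a single Mongo for the whole JVM; synchronized so that
    // concurrent first callers cannot create two instances.
    public static synchronized DB getDB() throws UnknownHostException {
        if (instance == null) {
            instance = new Mongo("localhost");
        }
        return instance.getDB("mydb");
    }
}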
I think it is better to call a method inside the DAO to get data from the database as well. Suppose your database changes: if callers issue queries directly, you will have to edit many classes. If instead the query methods live inside the DAO class itself, and callers use those methods to get data, the change stays in one place.
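For example, a hypothetical query method inside the DAO might look like this (Person and the field names are assumptions); callers never see the underlying collection, so a schema or datastore change stays inside this one class:

import com.mongodb.BasicDBObject;
import com.mongodb.DB;
import com.mongodb.DBCollection;
import com.mongodb.DBObject;

public class PersonDao {

    public Person findById(int id) throws Exception {
        DB db = Dao.getDB();
        DBCollection coll = db.getCollection("mycollection");
        DBObject doc = coll.findOne(new BasicDBObject("_id", id));
        if (doc == null) {
            return null;
        }
        // Map the raw document to a domain object here, so query details
        // never leak out of the DAO.
        return new Person((Integer) doc.get("_id"), (String) doc.get("name"));
    }
}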
Since Connection and Statement are interfaces, their methods are abstract by default. How can we get Connection and Statement objects in our Java program when connecting to the database?
Actually, the implementation is provided by the JDBC drivers, and the API gives you a way to use it; look at the example below.
The implementations are provided by the JDBC driver for your database. For MySQL, for example, you download the JDBC driver (Connector/J) and place the jar file on the classpath of your application.
Then you make a call to DriverManager; this gives you an instance of a class which implements the Connection interface:
Connection conn = DriverManager.getConnection("jdbc:mysql://[yourhost]/[nameofyourdb]?user=[username]&password=[password]");
Now you can get an instance of a class which implements the Statement interface from the Connection instance.
Statement stmt = conn.createStatement();
ResultSet rs = stmt.executeQuery("SELECT * FROM [yourtablename]");
....
For more examples, just google around a bit (JDBC, Java, [nameofyourdb]).
For Connector/J there is nice documentation here:
MySQL Connector/J
Connector/J Examples
Actually, the interfaces are specified by Sun and, as everyone knows, interfaces are a set of rules that gives programming a consistent structure. The database vendors implement those interfaces in their drivers without breaking the contract: every vendor's classes expose the same method names the interface declares, irrespective of the internal logic.
So developers need not worry about the implementation and only have to know the interface names.
I have worked on an abstraction for databases at work. Here is some example code:
Database database = new Database(new SQL("host", "port", "database", "username", "password"));
List<tStd_M_lt> bases = database.select(tStd_M_lt.class,
        new Where(tStd_M_lt.FIELDS.ITEM_ID.getDatabaseFieldName(), OPERATOR.LIKE, "?"), "%Base%");
for (tStd_M_lt base : bases) {
    List<tShower_Base_Kit> kits = database.selectByForiegnKey(tShower_Base_Kit.class, base);
}
The project is called azirt.