Hibernate connection parameters - java

Does Hibernate support connection 'parameters'? I'm new to Hibernate, but I notice that ConnectionProvider.getConnection() does not support any arguments.
Namely, I'm trying to implement a connection strategy (this is for testing purposes) where the connection to be provided is selected based on certain arguments passed in at runtime (these can differ between calls). Does Hibernate even support such a thing? (i.e., something like getConnection(<certain connection selection criteria/arguments>)?)

I found it: http://docs.jboss.org/hibernate/orm/4.1/javadocs/org/hibernate/service/jdbc/connections/spi/MultiTenantConnectionProvider.html#getConnection(java.lang.String)
Essentially, it's a connection provider SPI that returns a specific connection for a given key...
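For reference, a rough sketch of what an implementation of that SPI might look like. The class name, the per-profile dataSources map, and the constructor wiring are made up for illustration; the method signatures follow the Hibernate 4.1 javadoc linked above.

import java.sql.Connection;
import java.sql.SQLException;
import java.util.Map;
import javax.sql.DataSource;
import org.hibernate.service.jdbc.connections.spi.MultiTenantConnectionProvider;

// Illustrative only: selects a connection based on the key passed in at runtime.
public class ProfileBasedConnectionProvider implements MultiTenantConnectionProvider {

    private final Map<String, DataSource> dataSources; // one DataSource per key/tenant
    private final DataSource defaultDataSource;

    public ProfileBasedConnectionProvider(Map<String, DataSource> dataSources,
                                          DataSource defaultDataSource) {
        this.dataSources = dataSources;
        this.defaultDataSource = defaultDataSource;
    }

    public Connection getAnyConnection() throws SQLException {
        return defaultDataSource.getConnection();
    }

    public void releaseAnyConnection(Connection connection) throws SQLException {
        connection.close();
    }

    public Connection getConnection(String tenantIdentifier) throws SQLException {
        // The "selection criteria" is the key passed in at runtime.
        return dataSources.get(tenantIdentifier).getConnection();
    }

    public void releaseConnection(String tenantIdentifier, Connection connection) throws SQLException {
        connection.close();
    }

    public boolean supportsAggressiveRelease() {
        return false;
    }

    // Required by the Wrapped interface that MultiTenantConnectionProvider extends.
    public boolean isUnwrappableAs(Class unwrapType) {
        return false;
    }

    public <T> T unwrap(Class<T> unwrapType) {
        throw new UnsupportedOperationException("Can't unwrap to " + unwrapType);
    }
}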

Related

What is the standard way in JDBC to manage a lost connection?

My application can sometimes lose its connection to the MySQL database.
I think a good solution would be to schedule a timer to try to reconnect after some time.
Is there a better way to do it? Maybe a separate thread that tries to connect to the DB? Or is there a standard practice?
Thanks.
JDBC is a great way to start building a Java database application, but managing object mappings and connections/transactions can very rapidly lead to a lot of boilerplate and rewriting of logic that has already been written many times by many programmers.
Normally it is expected that you lose/close a connection, unless you have a high-throughput application, in which case you might keep several connections alive (this is known as connection pooling).
There are essentially 3 "high-level" approaches to maintaining efficient connections and transactions:
1) The simplest solution is to check, when you are reusing a connection, that it is still valid, or to reopen it every time (see the sketch after this list).
2) A more sophisticated solution is to use a connection pooling mechanism, such as the Apache Commons DBCP library (http://commons.apache.org/dbcp/).
3) Finally, in my opinion, the most maintainable solution is to use a JDBC framework like iBatis/Hibernate, which provides a simple, declarative interface for managing object-relational mapping / transactions / database state, while also transparently maintaining connection logic for you.
ALSO: If object-relational mapping is not your thing, then you can use a framework such as Apache DbUtils to manage querying and connections, without the heavyweight data-mapping machinery getting in the way.
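As a minimal sketch of option 1, assuming a JDBC 4 driver (where Connection.isValid is available); the JDBC URL and credentials are placeholders:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

// Reuse a cached connection only if it is still alive, otherwise reopen it.
public class SimpleConnectionHolder {

    private Connection connection;

    public synchronized Connection getConnection() throws SQLException {
        // isValid() performs a driver-specific liveness check (JDBC 4+).
        if (connection == null || connection.isClosed() || !connection.isValid(2 /* seconds */)) {
            connection = DriverManager.getConnection(
                    "jdbc:mysql://localhost:3306/mydb", "user", "pass"); // placeholders
        }
        return connection;
    }
}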
JDBC is a simple API that abstracts over the operations of different database systems. It makes things uniform, such as mapping the different native types to Java types.
However, a lost connection is another big issue. Using a connection pool library is better than writing a new one yourself; there are too many details to implement a connection pool from scratch without bugs.
Consider using a mature library:
Commons DBCP
BoneCP
etc.
Commons DBCP is based on Commons Pool.
You should understand the configurable options of both of them.
BoneCP is another, newer connection pool; its advantage is a lock-free design.
A validation SQL string is important for checking whether a connection is dead or alive.
Lost-connection checking is enabled by setting the validation string.
Here is the dbcp configuration page:
http://commons.apache.org/dbcp/configuration.html
It says:
NOTE - for a true value to have any effect, the validationQuery
parameter must be set to a non-null string.
For example:
dataSource.setValidationQuery(isDBOracle() ? "select 1 from dual" : "select 1");
dataSource.setTestWhileIdle(true);
dataSource.setTestOnReturn(true);
dataSource.setRemoveAbandoned(true);
dataSource.setRemoveAbandonedTimeout(60 * 3 /* 3 mins */);
dataSource.setMaxIdle(30);
dataSource.setMaxWait(1000 * 20 /* 20 secs*/);
Reminder: if you use Commons DBCP in WebLogic, don't forget the older Commons library shipped with the server; it can make your application use a different version. The prefer-web-inf-classes setting will help you.

Unit testing a database connection and general questions on database-dependent code and unit testing

If I have a method which establishes a database connection, how could this method be tested? Returning a bool in the event of a successful connection is one way, but is that the best way?
From a testability standpoint, is it best to have the connection method as one method and the method that gets data back as a separate method?
Also, how would I test methods which get back data from a database? I may do an assert against expected data but the actual data can change and still be the right resultset.
EDIT: For the last point, to check the data: if it's supposed to be a list of cars, then I can check that they are real car models. Or if they are a bunch of web servers, I can have a list of existing web servers on the system, return that from the code under test, and compare. If the results are different, then the data is the issue, not the query?
Thanks
First, if you have involved a database, you are no longer unit testing. You have entered integration (for connection configuration) or functional testing land. And those are very different beasts.
The connection method should definitely be separate from data fetch. In fact, your connection should come from a factory so that you can pool it. As far as testing the connection, really all you can test is that your configuration is correct by making a connection to the DB. You shouldn't be trying to test your connection pool, as that should probably be a library someone else wrote (dbcp or c3p0). Furthermore, you probably can't test this, as your unit/integration/function tests should NEVER connect to a production level database.
As for testing that your data access code works: that's functional testing and involves a lot of framework and support. You need a separate testing DB, the ability to create the schema on the fly during testing, to insert any static data into tables, and to return the database to a known clean state after each test. Furthermore, this DB should be instantiated and run in such a way that two people can run the tests at once, especially if you have more than one developer plus an automated testing box.
Asserts should be against data that is either static (a list of states, for example, that doesn't change often) or data that is inserted during the test and removed afterwards so it doesn't interfere with other tests.
EDIT: As noted, there are frameworks to assist with this. DBUnit is fairly common.
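For illustration, a rough sketch of the typical DBUnit setup/teardown pattern; the fixture file name, the test class name, and the getTestConnection helper are placeholders, and DBUnit 2.4+ with JUnit 4 is assumed:

import java.sql.Connection;
import org.dbunit.database.DatabaseConnection;
import org.dbunit.database.IDatabaseConnection;
import org.dbunit.dataset.IDataSet;
import org.dbunit.dataset.xml.FlatXmlDataSetBuilder;
import org.dbunit.operation.DatabaseOperation;
import org.junit.After;
import org.junit.Before;

public class CarDaoFunctionalTest {

    private IDatabaseConnection dbUnitConnection;

    @Before
    public void setUp() throws Exception {
        Connection jdbcConnection = getTestConnection(); // placeholder: connect to the *test* DB
        dbUnitConnection = new DatabaseConnection(jdbcConnection);
        IDataSet dataSet = new FlatXmlDataSetBuilder().build(
                getClass().getResourceAsStream("/fixtures/cars.xml")); // placeholder fixture file
        // Put the test tables into a known state before each test.
        DatabaseOperation.CLEAN_INSERT.execute(dbUnitConnection, dataSet);
    }

    @After
    public void tearDown() throws Exception {
        dbUnitConnection.close();
    }

    private Connection getTestConnection() {
        throw new UnsupportedOperationException("obtain a connection to the test database here");
    }
}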
You can grab ideas from here. I would go for mock objects when unit testing DB.
Otherwise, if the application is huge and you are running long and complex unit tests, you can also virtualize your DB server and easily revert it to a saved snapshot to run your tests again on a known environment.
Using my Acolyte framework (https://github.com/cchantep/acolyte) you can mimic any JDBC-supported DB by describing cases (how to handle each query/update executed) and which resultset/update count to return in each case (fixtures are described as row lists for queries and counts for updates).
Such a connection can be used directly by passing the instance wherever JDBC is required, or registered with a unique id in the JDBC URL namespace jdbc:acolyte: so that it is available to code obtaining connections through JDBC URL resolution.
Whichever way the connection is created, Acolyte keeps each one isolated, which is right for unit tests (no extra cleanup to do on a test DB).
As persistence cases can be dispatched to different isolated connections, you no longer need a big, all-in-one, hard-to-manage DB (or fixtures file): it can easily be split across various connections, e.g. one per persistence method/module.
My Acolyte framework is usable either in pure Java, or Scala.
If the goal is to test method functionality, not the database SP or SQL statement, then you may want to consider dependency injection in the sense of a data provider interface. In other words, your class uses an interface with methods returning data. The default implementation uses the database. The unit test implementation has several options:
mocking (NMock, Moq, etc.), a great way, I love mocking.
in-memory database
static database with static data
I don't like anything but the first. As a general rule, programming to interfaces is always much more flexible. A sketch of the first option follows below.
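A minimal sketch of that first option in Java. The interface and class names (CarProvider, CarService) are illustrative, and Mockito is used as the Java counterpart of the .NET mocking libraries mentioned above:

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import java.util.Arrays;
import java.util.List;
import org.junit.Test;

public class CarServiceTest {

    // The data provider interface; the real implementation would hit the database.
    interface CarProvider {
        List<String> findModelsByMake(String make);
    }

    // Class under test depends only on the interface, not on JDBC.
    static class CarService {
        private final CarProvider provider;
        CarService(CarProvider provider) { this.provider = provider; }
        int countModels(String make) { return provider.findModelsByMake(make).size(); }
    }

    @Test
    public void countsModelsWithoutTouchingTheDatabase() {
        CarProvider mockProvider = mock(CarProvider.class);
        when(mockProvider.findModelsByMake("Toyota"))
                .thenReturn(Arrays.asList("Corolla", "Camry"));

        assertEquals(2, new CarService(mockProvider).countModels("Toyota"));
    }
}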
For testing that a database connection can be established: you could let the connection execute a very simple SQL statement as the test. Some application servers have such a configuration; the following snippet is from a JBoss datasource configuration:
<!-- sql to call on an existing pooled connection when it is obtained from pool -->
<check-valid-connection-sql>some arbitrary sql</check-valid-connection-sql>

Is JPA an appropriate ORM for this scenario?

We have a central database for accounts. It contains login information, and a field called database profile. The database profile indicates what database connection should be used for the account. For example, we would have Profile1, Profile2, Profile3... ProfileN
If a user is indicated to have Profile1, they would be using a different database than a user who is indicated to be a part of Profile2.
My understanding of JPA is that you would need a new EntityManagerFactory for each Profile (Persistence Unit), even though the databases all have the same schema, just different connection information. So if we ended up having 100 profiles we would have 100 Entity Manager Factories, which doesn't seem ideal.
I've looked into the EntityManagerFactory, and it doesn't seem to let you change the database connection options.
Is my only option to have N EntityManagerFactory's, and if so, would there be any major consequences to this (such as bad performance)?
Thanks for any advice.
The kinds of things you're talking about are getting out of the scope of the JPA abstraction. Things like specific connection management are generally more provider specific. For example, with a Hibernate SessionFactory you can certainly create new sessions from an arbitrary JDBC connection. There would be pitfalls to consider, such as ID generation schemes (you'll probably have to use sequences generated in the DB), and you're basically out of luck for L2 caching, but with careful programming it could be made to work.
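A minimal sketch of that idea, assuming the Hibernate 3.x-era API where SessionFactory exposes openSession(Connection); the lookupConnectionForProfile helper is a placeholder:

// Illustrative only: open a Hibernate Session on a connection chosen per profile.
java.sql.Connection jdbcConnection = lookupConnectionForProfile("Profile1"); // placeholder
org.hibernate.Session session = sessionFactory.openSession(jdbcConnection);
try {
    // use the session as usual; all work goes to that profile's database
} finally {
    session.close();
    jdbcConnection.close(); // a user-supplied connection is left for the caller to close
}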
Just use javax.persistence.Persistence#createEntityManagerFactory(String, Map), and provide the connection parameters in the Map. Cache the EMF, use the connections judiciously, and don't mix and match objects from the different EMFs.
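A minimal sketch of that approach, using the standard JPA 2.0 property names; the persistence unit name "accounts-pu", the registry class, and the per-profile lookup helpers are placeholders:

import java.util.HashMap;
import java.util.Map;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

// Illustrative only: one EntityManagerFactory per profile, created lazily and cached.
public class ProfileEmfRegistry {

    private final Map<String, EntityManagerFactory> cache = new HashMap<String, EntityManagerFactory>();

    public synchronized EntityManagerFactory emfFor(String profile) {
        EntityManagerFactory emf = cache.get(profile);
        if (emf == null) {
            Map<String, String> props = new HashMap<String, String>();
            // Standard JPA 2.0 property names; the values would come from the central accounts DB.
            props.put("javax.persistence.jdbc.url", jdbcUrlFor(profile));
            props.put("javax.persistence.jdbc.user", userFor(profile));
            props.put("javax.persistence.jdbc.password", passwordFor(profile));
            // All profiles share the same schema, so the same persistence unit mappings are reused.
            emf = Persistence.createEntityManagerFactory("accounts-pu", props);
            cache.put(profile, emf);
        }
        return emf;
    }

    // Placeholders: look these up from the central account database.
    private String jdbcUrlFor(String profile) { throw new UnsupportedOperationException(); }
    private String userFor(String profile) { throw new UnsupportedOperationException(); }
    private String passwordFor(String profile) { throw new UnsupportedOperationException(); }
}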
If you are using spring then I know there is a way to dynamically switch the DataSource. Find more information here

how to pass user created connection to hibernate

Is there any way to restrict Hibernate from creating a connection of its own (from what we define in hibernate.properties or hibernate.cfg.xml), so that I can instead create the connection myself and pass it to Hibernate for further processing?
The problem is that I need to set the ApplicationContext on the connection, given that I am using an Oracle connection. I know how to create a connection and set the application context on it, but the problem is that I don't know how to force Hibernate to use the connection that I have created. Please help.
The right way to do this would be to use a custom implementation of o.h.c.ConnectionProvider. In the getConnection() method, you will have the opportunity to cast the regular Connection into an OracleConnection and do dark voodoo with it before returning it.
This interface has several implementations that you can extend to ease the work, depending on how you get the initial connection (e.g. from a Datasource).
This post in the Hibernate forums shows an implementation that could be used as a kickoff example (the poster is also doing black magic with an OracleConnection so it's a good example).
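A rough sketch of the idea, assuming the Hibernate 3.x org.hibernate.connection.ConnectionProvider SPI; the DataSource lookup and the Oracle-specific application context setup are placeholders. The provider is then registered via the hibernate.connection.provider_class property.

import java.sql.Connection;
import java.sql.SQLException;
import java.util.Properties;
import javax.sql.DataSource;
import org.hibernate.HibernateException;
import org.hibernate.connection.ConnectionProvider;

// Illustrative only: hand Hibernate connections that we prepare ourselves.
public class OracleContextConnectionProvider implements ConnectionProvider {

    private DataSource dataSource;

    public void configure(Properties props) throws HibernateException {
        // Placeholder: obtain the underlying DataSource (e.g. via JNDI or your own factory).
        this.dataSource = lookupDataSource(props);
    }

    public Connection getConnection() throws SQLException {
        Connection connection = dataSource.getConnection();
        // Placeholder for the Oracle-specific part: cast/unwrap to the vendor class and
        // set the application context before handing the connection to Hibernate, e.g.:
        // oracle.jdbc.OracleConnection oc = connection.unwrap(oracle.jdbc.OracleConnection.class);
        // ... set application context on oc ...
        return connection;
    }

    public void closeConnection(Connection connection) throws SQLException {
        connection.close();
    }

    public void close() throws HibernateException {
        // release pooled resources here if the DataSource requires it
    }

    public boolean supportsAggressiveRelease() {
        return false;
    }

    private DataSource lookupDataSource(Properties props) {
        throw new UnsupportedOperationException("wire in your DataSource here");
    }
}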
Have a look at this section of the Hibernate manual.

Multiple transaction managers in spring and select one at runtime

For each client, I have a separate database, but the business logic and tables are the same for each client. I want a common service and DAO layer for every client. In the DAO, I select the datasource based on the logged-in user's client. In @Transactional, I have to pass the bean id of a transaction manager. How can I make a common service layer with the @Transactional annotation?
The same question is asked here:
Multiple transaction managers - Selecting a one at runtime - Spring
Choose between muliple transaction managers at runtime
but nobody replied.
If you want to create a database connection dynamically, then have a look at this SO post.
From the linked post: Basically in JDBC most of these properties are not configurable in the API like that; rather, they depend on the implementation. The way JDBC handles this is by allowing the connection URL to be different per vendor.
So what you do is register the driver so that the JDBC system knows what to do with the URL:
DriverManager.registerDriver((Driver) Class.forName("com.mysql.jdbc.Driver").newInstance());
Then you form the URL:
String url = "jdbc:mysql://[host][,failoverhost...][:port]/[database][?propertyName1][=propertyValue1][&propertyName2][=propertyValue2]"
And finally, use it to get a connection:
Connection c = DriverManager.getConnection(url);
In more sophisticated JDBC, you get involved with connection pools and the like, and application servers often have their own way of registering drivers in JNDI; you look up a DataSource from there and call getConnection on it.
In terms of what properties MySQL supports, see here (the link is dead).
EDIT: One more thought: technically, just having a line of code which does Class.forName("com.mysql.jdbc.Driver") should be enough, as the class should have its own static initializer which registers a version, but sometimes a JDBC driver doesn't, so if you aren't sure, there is little harm in registering a second one; it just creates a duplicate object in memory.
I don't know if this will work, since I have not tested it, but you could try.
Now what you could do is use the @Transactional annotation on top of the DAOs without specifying any value (that works). Then in your DAO classes, instead of injecting any DataSource bean, create your own DataSource dynamically as described in the link above, and either inject that dependency at runtime, use getter/setter methods, or just use the new keyword. I hope that does the trick.
NOTE: I have not tested this myself yet, so if it works, do let me know.
You do not need to configure and switch between multiple transaction managers to accomplish your end goal. Instead use the Spring provided org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource mechanism.
Detailed examples can be found here :
https://spring.io/blog/2007/01/23/dynamic-datasource-routing/
http://howtodoinjava.com/spring/spring-orm/spring-3-2-5-abstractroutingdatasource-example/
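A minimal sketch of that approach; the CURRENT_CLIENT thread-local, the key names "clientA"/"clientB", and the build method are illustrative assumptions, while determineCurrentLookupKey() is the single method Spring requires you to implement:

import java.util.HashMap;
import java.util.Map;
import javax.sql.DataSource;
import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;

// Routes every getConnection() call to the DataSource of the currently logged-in client.
public class ClientRoutingDataSource extends AbstractRoutingDataSource {

    // Placeholder: a thread-local holding the current client's key, set e.g. in a web filter.
    public static final ThreadLocal<String> CURRENT_CLIENT = new ThreadLocal<String>();

    @Override
    protected Object determineCurrentLookupKey() {
        return CURRENT_CLIENT.get();
    }

    // Example wiring (normally done in Spring XML or Java config instead).
    public static ClientRoutingDataSource build(DataSource clientA, DataSource clientB, DataSource fallback) {
        ClientRoutingDataSource routing = new ClientRoutingDataSource();
        Map<Object, Object> targets = new HashMap<Object, Object>();
        targets.put("clientA", clientA);
        targets.put("clientB", clientB);
        routing.setTargetDataSources(targets);
        routing.setDefaultTargetDataSource(fallback);
        routing.afterPropertiesSet(); // resolves the target map
        return routing;
    }
}

With this in place, a single transaction manager (and therefore a plain @Transactional on the common service layer) can sit on top of the one routing DataSource.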
