I'm trying to get a pooled JDBC DataSource in my application. I'm writing an open-source library, and want to make configuration as, well, configurable as possible. I'd rather not use JNDI or force the use of any particular type of configuration file.
Right now, I do this:
MysqlConnectionPoolDataSource ds = new MysqlConnectionPoolDataSource();
ds.setDatabaseName("blah");
ds.setUrl("jdbc:mysql://blaggablah");
ds.setUser("jimbob");
ds.setPassword("dweezil");
Obviously not ideal if you want to use something other than MySQL in the future.
Is there some kind of generic PooledDataSourceFactory that will take a few parameters, like database type and connection info, and give me a pooled DataSource?
I'm not sure what kind of application (library?) you are creating, but unless it's a web app stack or something, you could ask the client programmer to pass in the DataSource, e.g. via the constructor. This is really the gist of dependency injection.
If your app is a web app stack like Django, you can still internally pass a DataSource to your middleware (a.k.a. DI). Such a design facilitates modularity and makes it easy to reuse that part of the code in other projects. However, you'll have to get from configuration to DataSource somehow, so at some point you need to decide what the mechanism for that will be (e.g. JNDI). I think that's perfectly acceptable.
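For example, the library's entry point might accept the DataSource directly (a minimal sketch; the class name is hypothetical):

import javax.sql.DataSource;

public class MyLibraryFacade {
    private final DataSource dataSource;

    // The caller decides on the vendor, the pooling library, and how it is configured.
    public MyLibraryFacade(DataSource dataSource) {
        this.dataSource = dataSource;
    }
}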
You can use a standard pooling/caching library and let the user configure it, like Apache Commons Pool (formerly Jakarta Pool) or Ehcache.
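For example, a thin factory over Commons DBCP stays vendor-neutral because the driver class and URL are just parameters (a sketch, assuming DBCP 1.x; the factory name is hypothetical):

import javax.sql.DataSource;
import org.apache.commons.dbcp.BasicDataSource;

public final class PooledDataSourceFactory {
    // Works for MySQL, Oracle, etc.: the caller supplies any JDBC driver and URL.
    public static DataSource create(String driverClass, String url,
                                    String user, String password) {
        BasicDataSource ds = new BasicDataSource();
        ds.setDriverClassName(driverClass);
        ds.setUrl(url);
        ds.setUsername(user);
        ds.setPassword(password);
        return ds;
    }
}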
I am trying to implement row level security so our application can enforce more stringent access control.
One of the technologies we are looking into is Oracle's Virtual Private Database, which allows row level security by basically augmenting all queries against specific tables with a where clause predicate. Since we are in a web environment, we need to set up a special context within Oracle, inside a single request's thread. We use connection pooling with a service account.
I started to look into EclipseLink and Hibernate. EclipseLink seems to have events that fit this model perfectly.
This would involve us migrating from Hibernate, which is not a problem, but we would then be bound to EclipseLink for these events.
Oracle seems to imply that they implement this at the data source level in their WebLogic product:
The context is set and cleared by the WebLogic data source code.
Question: Is it more appropriate to do this at the DataSource level with some series of events? What are the events or methods that I should pay the most attention to?
Added question: How would I extend a connection pool to safely initialize an Oracle context with some custom data? I am digging around in the Apache code, and it seems like extending BasicDataSource doesn't give me access to anything that would allow me to clean up the connection when Spring is done with it.
I need to set up a connection, and clean up a connection, as they exit and enter the connection pool. I am hoping for an implementation that is so simple no one can mess it up by breaking some delicate balance of products.
- Specifically, we are currently using the Apache Commons DBCP BasicDataSource.
This would allow us to use various ways to connect to the database and still have our security enforced. But I don't see a great example or set of events to work with, and rolling my own security lifecycle is never a good idea.
I eventually solved my problem by extending some of the Apache components.
First I extended org.apache.commons.pool.impl.GenericObjectPool and overrode both borrowObject() and returnObject(). I knew the type of the objects in the pool (java.sql.Connection) so I could safely cast and work with them.
Since in my case I was using Oracle VPD, I was able to set information in the application context. I recommend you read about that in more depth; it is a little complicated, and there are a lot of different options for hiding or sharing data at various context levels, and across RAC nodes.
In essence, what I did was generate a nonce, use it to instantiate a session within Oracle, and then set the user's access level in a variable in that session, which the Oracle VPD policy would then read and use to do the row-level filtering.
I instantiated and destroyed that information in my overridden borrowObject() and returnObject(). The SQL I ran was something like this:
CallableStatement callStat = conn.prepareCall(
        "{call namespace.cust_ctx_pkg.set_session_id(" + Math.random() + ")}");
callStat.execute();
Note that Math.random() isn't a good nonce.
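A sketch of the pool subclass, assuming Commons Pool 1.x (the cleanup procedure and the parameter type of set_session_id are assumptions here):

import java.security.SecureRandom;
import java.sql.CallableStatement;
import java.sql.Connection;
import org.apache.commons.pool.impl.GenericObjectPool;

public class VpdObjectPool extends GenericObjectPool {
    private final SecureRandom random = new SecureRandom();

    @Override
    public Object borrowObject() throws Exception {
        Object obj = super.borrowObject();
        Connection conn = (Connection) obj; // this pool only ever holds Connections
        try (CallableStatement cs = conn.prepareCall(
                "{call namespace.cust_ctx_pkg.set_session_id(?)}")) {
            cs.setLong(1, random.nextLong()); // a better nonce than Math.random()
            cs.execute();
        }
        return obj;
    }

    @Override
    public void returnObject(Object obj) throws Exception {
        Connection conn = (Connection) obj;
        try (CallableStatement cs = conn.prepareCall(
                "{call namespace.cust_ctx_pkg.clear_session_id()}")) { // hypothetical cleanup proc
            cs.execute();
        }
        super.returnObject(obj);
    }
}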
Next was to simply extend org.apache.commons.dbcp.BasicDataSource and set my object pool by overriding createConnectionPool(). Note that the way I did this disabled some functionality I did not need, so you may need to rewrite more or less than I did.
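Wiring the custom pool in might look like this (a sketch, assuming Commons DBCP 1.x, where BasicDataSource exposes a protected createConnectionPool() and the protected connectionPool field it populates):

import org.apache.commons.dbcp.BasicDataSource;

public class VpdBasicDataSource extends BasicDataSource {

    @Override
    protected void createConnectionPool() {
        // Replace the default GenericObjectPool with the VPD-aware pool above.
        // Skipping the default setup loses settings like maxActive/maxIdle
        // unless you copy them onto the new pool yourself.
        this.connectionPool = new VpdObjectPool();
    }
}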
You can try any object level security mechanism for simplicity, like Spring Security ACL.
You will want to do this at the application layer, with a pre-commit hook and a post-read hook.
The pre-commit hook is used to ensure that data from the client is being presented by a user authorized to modify that data. This prevents an unauthorized user from overwriting data that they shouldn't be able to access.
It's not intuitive, but the post-read hook is used to keep the client from accessing data the user shouldn't be allowed to view. The check happens after the read because this is enforced at the application layer, not the data layer: the application has no way to know whether the caller is allowed to access the data until it has been retrieved. In the post-read hook you evaluate the credential on each returned row against the credential of the logged-in user to determine whether access is allowed. If access is denied on any row, an exception is raised and the data is not returned to the client.
Application level security done in this way requires that you have a way to connect each row in a table to a permission/role required to access it and a way to evaluate a user's permissions on the server at runtime.
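For illustration, a post-read hook could look like this (a sketch; Securable and User are hypothetical stand-ins for your row and credential models):

import java.util.List;

interface Securable { long getId(); String requiredPermission(); }
interface User { String getName(); boolean hasPermission(String permission); }

public class PostReadHook {
    // Reject the whole result set if any returned row is not visible to the caller.
    public static <T extends Securable> List<T> checkReadAccess(List<T> rows, User user) {
        for (T row : rows) {
            if (!user.hasPermission(row.requiredPermission())) {
                throw new SecurityException("User " + user.getName()
                        + " may not read row " + row.getId());
            }
        }
        return rows;
    }
}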
Hope that helps.
You will get better control by using one of the other Commons DBCP DataSources.
The Basic one is just that: basic :)
The ones in the org.apache.commons.dbcp.datasources package give you more fine-grained control.
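For example, SharedPoolDataSource pools on top of any vendor's ConnectionPoolDataSource and exposes the tuning knobs directly (a sketch, assuming DBCP 1.x; the connection values are placeholders):

import org.apache.commons.dbcp.cpdsadapter.DriverAdapterCPDS;
import org.apache.commons.dbcp.datasources.SharedPoolDataSource;

DriverAdapterCPDS cpds = new DriverAdapterCPDS();
cpds.setDriver("com.mysql.jdbc.Driver");
cpds.setUrl("jdbc:mysql://localhost/test");
cpds.setUser("jimbob");
cpds.setPassword("dweezil");

SharedPoolDataSource ds = new SharedPoolDataSource();
ds.setConnectionPoolDataSource(cpds);
ds.setMaxActive(10); // fine-grained pool control the Basic one hides
ds.setMaxWait(5000); // milliseconds to wait for a free connection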
I am unable to configure/change the Map properties (declared as part of the Hazelcast config in Spring) after Hazelcast instance start-up. I am using Hazelcast integrated with Spring as a Hibernate second-level cache. I am trying to configure the properties of the map (like TTL) in an init method (annotated with @PostConstruct) that is called during Spring bean initialization.
There is not enough documentation; if there is, please point me to it.
Meanwhile, I went through this post and found this: Hazelcast MapStoreConfig ignored
But how does the Management Center change the config? Will it recreate a new instance again?
Is a Hazelcast instance lightweight, unlike a session factory? I assume not.
Please share your thoughts.
This is not yet supported. JCache is the only data structure with on-the-fly configuration at the moment.
However, you'll most probably be able to destroy a proxy (a DistributedObject like IMap, IQueue, ...), reconfigure it, and recreate it. At the time of recreation, though, you must make sure that every node sees the same configuration, for example by storing the configuration itself inside an IMap or something like that. You'll have to do some wrapping on your own.
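Something along these lines (a sketch against the Hazelcast 3.x API; note that destroy() removes the map's data cluster-wide, and whether the locally added MapConfig is honored on recreation is exactly the unsupported detail mentioned here):

import com.hazelcast.config.MapConfig;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;

HazelcastInstance hz = Hazelcast.newHazelcastInstance();
IMap<String, String> map = hz.getMap("entities");
map.destroy(); // drops the proxy and its data cluster-wide

// Re-register the new configuration on EVERY node before recreating.
hz.getConfig().addMapConfig(new MapConfig("entities").setTimeToLiveSeconds(300));

IMap<String, String> recreated = hz.getMap("entities");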
PS: This is not officially supported and is an implementation detail that might change in later versions!
PPS: This feature has been on the roadmap for quite some time but hasn't made it into a release version yet; it is still expected to be fully supported at some point in the future.
I have a web application that accesses a database whose connection info (connection string, username, password) is entered by the user at runtime.
Thus, I cannot know any of this information at deploy time.
The system is supposed to support multiple types of databases, each with a different JDBC driver.
How can I manage this situation using Spring/Hibernate? (I doubt that Hibernate can handle this, because the data structure is only known at runtime.)
You can use an approach similar to the one described here.
Basically, just subclass AbstractRoutingDataSource and override determineTargetDataSource() (if you need to create the datasources from within your application) or determineCurrentLookupKey() (if your datasources will already have been created in the app server).
In determineTargetDataSource() you can return whatever datasource you need, or create a new one if need be.
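A minimal sketch of the lookup-key route (the ThreadLocal holder is a hypothetical way to carry the user's choice per request):

import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;

public class UserRoutingDataSource extends AbstractRoutingDataSource {
    private static final ThreadLocal<String> CURRENT_DB = new ThreadLocal<>();

    public static void setCurrentDb(String key) {
        CURRENT_DB.set(key);
    }

    @Override
    protected Object determineCurrentLookupKey() {
        // Must match a key in the targetDataSources map configured on this bean.
        return CURRENT_DB.get();
    }
}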
I am planning an application that must provide services that are very much like those of a Java EE container to third party extension code. Basically, what this app does is find a set of work items (currently, the plan is to use Hibernate) and dispatch them to work item consumers.
The work item consumers load the item details, invoke third party extension code, and then if the third party code did not fail, update some state on the work item and commit all work done.
I am explicitly not writing this as a Java EE application; essentially, my application must itself provide many of the services of a container: transaction management, connection pooling and management, and a certain amount of deployment support. How do I either A) provide these directly, or B) choose a third-party library to provide them? Due to a requirement of the larger project, the extension writers will be using Hibernate, if that makes any difference.
It's worth noting that, of all of the features I've mentioned, the one I know least about is transaction management. How can I provide this service to extension code running in my container?
I recommend using the Spring Framework. It provides a nice way to bring together a lot of the various services you are talking about.
For instance to address your specific needs:
Transaction Management/Connection pooling
I built a Spring-based stand-alone application that used Apache Commons connection pooling. Spring also has transaction management built in.
Deployment support
I use Ant to deploy and run things as a front-loader. It works pretty well. I just fork a separate process using Ant to run my Spring stand-alone app.
Threading
Spring has support for Quartz which deals well with threads and thread pools
DAO
Spring integrates nicely with Hibernate and other similar projects
Configuration
Using its XML property definitions, Spring is pretty good for multiple-environment configuration.
Spring does have transaction management. You can define a DataSource in your application context using Apache DBCP (an org.apache.commons.dbcp.BasicDataSource), and then an org.springframework.jdbc.datasource.DataSourceTransactionManager for that DataSource. After that, any object in your application can define its own transactions programmatically if you pass it the TransactionManager, or you can use AOP interceptors on the object's definition in your application context to define which methods need to run inside a transaction.
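For illustration, the programmatic route might look like this (a sketch, assuming Commons DBCP 1.x and Spring's TransactionTemplate; the connection values are placeholders):

import org.apache.commons.dbcp.BasicDataSource;
import org.springframework.jdbc.datasource.DataSourceTransactionManager;
import org.springframework.transaction.support.TransactionTemplate;

BasicDataSource ds = new BasicDataSource();
ds.setDriverClassName("com.mysql.jdbc.Driver");
ds.setUrl("jdbc:mysql://localhost/mydb");
ds.setUsername("user");
ds.setPassword("secret");

DataSourceTransactionManager txManager = new DataSourceTransactionManager(ds);
TransactionTemplate tx = new TransactionTemplate(txManager);
tx.execute(status -> {
    // work done here runs in one transaction; an unchecked exception rolls it back
    return null;
});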
Or, the easier approach nowadays with Spring is to use the @Transactional annotation on any method that needs to run inside a transaction, and to add something like this to your application context (assuming your transaction manager is named txManager):
<tx:annotation-driven transaction-manager="txManager"/>
This way your application will easily accept new components later on, which can get transaction management simply by using the @Transactional annotation, or by directly creating transactions through a PlatformTransactionManager that they receive through a setter (so you can pass it in when you define the object in your app context).
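A minimal sketch of such a component (the class and method are hypothetical):

import org.springframework.transaction.annotation.Transactional;

public class WorkItemService {

    // Each call runs in its own transaction: committed on normal return,
    // rolled back if a RuntimeException escapes.
    @Transactional
    public void process(long workItemId) {
        // load the item, invoke the extension code, update its state...
    }
}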
You could try Atomikos TransactionsEssentials for Java transaction management and connection pooling (JDBC+JMS) in a J2SE environment. No need for any app servers, and it is much more fun to work with ;-)
I'm writing an application which has to be configurable to connect to Oracle, SQL Server and MySQL depending on client whim.
Up till now I'd been planning on using the JDBC-ODBC bridge and just connecting to the databases using different connection strings.
I'm told this is not very efficient.
Is there a pattern or best practice for connecting to multiple database systems? Or for selecting which driver to use?
Should I make it configurable and include all three drivers, or build three separate clients?
I'm not doing anything complex just pumping (inserting) data into the database from an event stream.
I would suggest that you make it configurable and include the three drivers. You can use a pattern like this: create a superclass (let's call it DAO) that provides the functionality of connecting to the database. This could be abstract.
Create a concrete subclass for each type of database that you wish to connect to. So you may end up with MySQLDAO, MSSQLDAO, and OracleDAO. Each one will load its respective driver and use its respective connection string.
Create another class (let's call it DAOFactory) with a method getDAO(db) that will create an instance of a DAO depending on the value of db.
So, for instance (in pseudocode):
if(DB.equals("MySQL")){
DAO = new MySQLDAO();
}
return DAO;
So any code that needs to connect to the database will call the DAOFactory and ask for a DAO instance. You may store the DB value in an external file (like a properties file) so that you do not have to modify code to change the type of database.
This way your code does not need to know which type of database it is connecting to, and if you decide to support a fourth type of database later, you only have to add one more class and modify the DAOFactory, not the rest of your code.
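A sketch of the superclass and one concrete DAO (the driver and URL values are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;

public abstract class DAO {
    protected abstract String driverClass();
    protected abstract String url();

    public Connection connect(String user, String password) throws Exception {
        Class.forName(driverClass()); // load this vendor's JDBC driver
        return DriverManager.getConnection(url(), user, password);
    }
}

class MySQLDAO extends DAO {
    protected String driverClass() { return "com.mysql.jdbc.Driver"; }
    protected String url() { return "jdbc:mysql://localhost/mydb"; }
}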
If you need anything complex, Hibernate is a good choice.
Otherwise, what I would do is store your connection details in a properties file (or some other form of configuration), namely: driver class name, JDBC URL, username, and password.
Then all you need to do is load the connection details from your properties file, include the correct JAR file on your classpath, and you're done.
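For instance (a sketch; db.properties and its key names are hypothetical):

import java.io.FileInputStream;
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

Properties props = new Properties();
try (FileInputStream in = new FileInputStream("db.properties")) {
    props.load(in);
}
Class.forName(props.getProperty("jdbc.driver")); // load whichever driver is configured
Connection conn = DriverManager.getConnection(
        props.getProperty("jdbc.url"),
        props.getProperty("jdbc.user"),
        props.getProperty("jdbc.password"));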
You could use a library such as Commons-DBCP if you wanted it to be a little easier to configure but other than that it's all you need to do (provided your SQL statements work on every database, of course).
If you're careful (and you test), you can do this with straight JDBC and just vary the driver class and connection information. You definitely want to stay away from the JDBC-ODBC bridge, as it's generally slow and unreliable, and it is more likely than plain JDBC to behave differently across databases.
I think the DAO path is overkill if your requirements are as simple as listed.
If you're doing a lot of inserts, you might want to investigate prepared statements and batched updates as they are far more efficient. This might end up being less portable - hard to say without testing.
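For example (a sketch; the events table and Event type are hypothetical):

import java.sql.PreparedStatement;

try (PreparedStatement ps = conn.prepareStatement(
        "INSERT INTO events (id, payload) VALUES (?, ?)")) {
    for (Event e : events) {
        ps.setLong(1, e.getId());
        ps.setString(2, e.getPayload());
        ps.addBatch();
    }
    ps.executeBatch(); // one round trip for the whole batch
}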
Take a look at DataSource. This is the preferred mechanism for obtaining a database connection.
IMO this provides an administrator the greatest flexibility for choosing database, connection pooling, and transaction strategies.
If you're using Tomcat, then see here for how to register a DataSource with Tomcat's JNDI.
If you're using Spring, then you can obtain a DataSource using jee:jndi-lookup.
If you're using Spring but don't want to use JNDI, take a look at DriverManagerDataSource for a discussion of how to obtain a pooled DataSource (DBCP or C3P0).