Database caching with Spring and being able to query it - java

So, I have a Java EE application using the Spring Framework and JdbcTemplate. The application makes several JDBC read requests (with little or no writing) against the same database (a Postgres DB that is not normalized, for a number of reasons), but with different SQL statements (different WHERE clauses). Given this situation, I would like to cache the database and run queries against the cache, saving me expensive JDBC calls. Please suggest appropriate tools, frameworks, or any other solutions.

You can start with simple maps keyed by the query parameters you are using. A more viable solution is Ehcache.
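The "simple maps" idea can be sketched in plain Java: cache query results keyed by the WHERE-clause parameter, so repeated lookups skip the JDBC call. The loader function below stands in for the real JdbcTemplate query, and all the names are hypothetical.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Minimal cache-aside sketch: results keyed by the query parameter.
// The loader stands in for a real jdbcTemplate.queryForList(...) call.
public class QueryCache {
    private final Map<String, List<String>> cache = new ConcurrentHashMap<>();
    private final Function<String, List<String>> loader;
    private int misses = 0; // counts how often the "database" is actually hit

    public QueryCache(Function<String, List<String>> loader) {
        this.loader = loader;
    }

    public List<String> findByCustomer(String customerId) {
        return cache.computeIfAbsent(customerId, id -> {
            misses++;
            return loader.apply(id); // only reached on a cache miss
        });
    }

    public void evict(String customerId) { // call after a write to keep the cache fresh
        cache.remove(customerId);
    }

    public int misses() {
        return misses;
    }
}
```

The obvious caveat is staleness: anything that writes to the table has to evict the affected keys, which is exactly what the annotation-based approaches below automate.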

If you use Spring 3.1 or later, you can put @Cacheable on methods. You need to include <cache:annotation-driven /> in your application context configuration. For simple cases you can use Spring's ConcurrentMapCacheManager as the cache manager; for more complex cases you can use Ehcache via Spring's Ehcache adapter. Use @CacheEvict to reset the cache.
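A minimal sketch of those annotations, assuming Spring 3.1+ on the classpath, `<cache:annotation-driven/>` plus a cache manager defining a cache named "orders" in the application context; the repository class and its query are hypothetical names for illustration.

```java
import java.util.List;
import org.springframework.cache.annotation.CacheEvict;
import org.springframework.cache.annotation.Cacheable;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Repository;

// Sketch only: relies on <cache:annotation-driven/> and a configured
// cache manager with a cache named "orders".
@Repository
public class OrderRepository {

    private final JdbcTemplate jdbcTemplate;

    public OrderRepository(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    @Cacheable("orders") // same customerId -> served from the cache, no JDBC call
    public List<String> findOrderIds(String customerId) {
        return jdbcTemplate.queryForList(
                "SELECT id FROM orders WHERE customer_id = ?", String.class, customerId);
    }

    @CacheEvict(value = "orders", allEntries = true) // reset the cache after writes
    public void evictOrders() {
    }
}
```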

Related

Cache MySQL DB with Apache Ignite

I have an application written in Java.
We are using a MySQL DB.
Is it possible to integrate that MySQL DB with Apache Ignite as an in-memory cache and use that configuration without any updates to the Java application (of course, some DB connection details would have to change)?
So my application would do the same stuff, with the only difference being that it connects to Apache Ignite instead of MySQL?
Is this kind of configuration possible?
I suppose you are looking for the write-through feature. I'm not sure what your use case is, but you should be aware of some limitations, such as your data having to be preloaded into Ignite before running SELECT queries. From a very abstract perspective, you need to define POJOs and implement a custom CacheStore interface, though GridGain Control Center can do the latter for you automatically; check this demo as a reference.

Mock Database, MockMvc

I have a simple REST app with a MySQL database. Everything works fine, but for testing, do we need to create a dummy object and test on it, or test via a mock database?
The dummy object has a quite large constructor and nested classes, which is a lot of work.
IMO, there's little point in using a mock database unless you're testing connectivity handling; for example, how does my application behave if the database connection is dropped?
For testing SQL, you will do no better than testing against the actual database you're going to use in production. If you use another database as a substitute, e.g. H2, make sure you understand that you are testing a DB driver and database that differ from your production deployment, which means tests using this setup may not catch potential errors.
For testing data handling, you could also use a mock of some kind, but again, you're always going to be better off using the actual database you will be using in production, whenever you can.
If you're using Hibernate as an ORM provider, as part of setting up your integration tests, you can have it execute DML scripts to load your data for testing purposes.
If you're using Spring Boot, H2 is one of the popular in-memory databases, and Spring Boot has very good integration for it.
For integration tests, you should consider using a database in-memory, such as H2.
H2 supports compatibility modes for IBM DB2, Apache Derby, HSQLDB, Microsoft SQL Server, MySQL, Oracle and PostgreSQL. To use the MySQL mode, use the database URL as shown below (and refer to the documentation for further details):
jdbc:h2:~/test;MODE=MySQL;DATABASE_TO_LOWER=TRUE
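In a Spring Boot test setup, that URL can go straight into a test profile. A minimal sketch (file location and property values are assumptions; an in-memory URL is used so each test run starts clean):

```properties
# src/test/resources/application-test.properties (hypothetical test profile)
spring.datasource.url=jdbc:h2:mem:test;MODE=MySQL;DATABASE_TO_LOWER=TRUE
spring.datasource.driver-class-name=org.h2.Driver
spring.datasource.username=sa
spring.datasource.password=
```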

How to use a JTA transaction with two databases?

App1 interacts with App2 (an EJB application) through a client API exposed by App2, which uses a CMT-managed JTA transaction in JBoss. We get the UserTransaction from App2 (JBoss) via a JNDI lookup.
App1 calls App2 to insert data into DS2 using the UserTransaction's begin() and commit().
App1 also calls DS1 directly via Hibernate JPA, using a JpaTransactionManager, to insert data into DS1.
Is it possible to wrap both DB operations in a single (distributed) transaction?
Please find below the image which describes the requirement.
To do this it's necessary to implement your own transactional resource, capable of joining an ongoing JTA transaction. See this answer as well for some guidelines; one way to see how this is done is to look at the XA driver code for a database or JMS resource and base yourself on that.
This is not trivial to do and a very rare use case, usually solved in practice by adopting an alternative design. One way would be to extract the necessary code from App2 into a jar library, and use it in Tomcat with a JTA transaction manager like Atomikos connected to two XA JTA datasources.
Another way is to flush the SQL statements to the database in Tomcat and see if that works before sending a synchronous call to JBoss, which returns whether the transaction in JBoss went through.
Then commit or roll back in Tomcat depending on that result. This does not guarantee it will work 100% of the time (network failure etc.), but it might be acceptable depending on what the system does and the business consequences of a failed transaction.
Yet another way is to make the operation revertable in JBoss side and expose a compensate service used by tomcat in case errors are detected. For that and making the two servers JBoss you could take advantage of the JBoss Narayana engine, see also this answer.
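The compensation idea above can be sketched in plain Java. All names here are hypothetical; in a real setup the compensating action would call the compensate service exposed by the JBoss side.

```java
import java.util.concurrent.Callable;

// Sketch of a compensating call: commit the local (Tomcat-side) work first,
// then invoke the remote (JBoss-side) operation; if the remote call fails,
// run a compensating action to undo the local work.
public class CompensatingCall {
    public static <T> T run(Runnable localCommit,
                            Callable<T> remoteCall,
                            Runnable compensate) throws Exception {
        localCommit.run();            // local transaction commits here
        try {
            return remoteCall.call(); // remote operation on the other server
        } catch (Exception e) {
            compensate.run();         // best-effort undo; may itself need retries/alerting
            throw e;
        }
    }
}
```

Note that the compensating action is only best-effort: if it fails too, you need detection and manual repair, which is exactly why engines like Narayana exist.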
Which way is better depends on the use case, but implementing your own XA transactional resources is a big undertaking; it would be simpler to change the design. The reason very few projects do it is that it's complex and there are simpler alternatives.
Tomcat is a web server, so it does not support global transactions.
JBoss is an application server, so it does support global transactions.
If you have to combine both, you have to use JOTM or Atomikos, which act as transaction managers and handle the commits and rollbacks.

Handling multiple databases using Spring and Hibernate

We want to scale horizontally by pushing data to different databases based on user groups. This is required since the data will be huge. Right now we are looking at RDBMSs only. Spring with Hibernate or EclipseLink are the options we have. I have a few questions around this, and I see similar questions have been asked multiple times; I am asking again because I want to understand a few more specifics.
What are the best practices one should follow when using multiple databases?(Detailed questions below)
Multiple session factories or a single session factory? What is the recommended approach? I see a lot of posts talking about creating multiple session factories, while the dynamic data source implementation uses a single session factory and provides a different data source based on the user group. Are there any scalability issues in using a single session factory with many user groups?
Are sessions tied to the session factory or to the underlying data source? I am assuming that multiple connection pools would be created, one per database; am I right?
Should we use Spring's dynamic data source implementation to handle multiple databases, or Hibernate multi-tenancy?
Are there any issues with transaction management when using a dynamic data source? I didn't see any posts about this except for the 2nd-level cache.
If C3P0 is used for connection pooling, how is it handled in the dynamic data source approach?
Any dos and don'ts for the above approaches?
There is a project developed by the Hibernate team and Google called Hibernate Shards. It exists for just what you need: sharding data across multiple databases.
It provides interfaces named:
org.hibernate.shards.session.ShardedSession
org.hibernate.shards.ShardedSessionFactory
org.hibernate.shards.criteria.ShardedCriteria
org.hibernate.shards.query.ShardedQuery
Each one extends Hibernate's classic interface of a similar name.
By using Hibernate Shards, all the sharding logic stays behind Hibernate, and most of the time you don't have to deal with it.
Transaction management remains, though: in your case you would need a distributed JTA transaction manager, and it would be much easier if you used an application server such as JBoss.

Best approach for Spring+MyBatis with Multiple Databases to support failovers

I need to develop some services and expose an API to some third parties.
In those services I may need to fetch/insert/update/delete data with some complex calculations involved(not just simple CRUD). I am planning to use Spring and MyBatis.
But the real challenge is that there will be multiple DB nodes with the same data (some external setup takes care of keeping them in sync). When I get a request for some data, I need to pick one DB node at random, query it, and return the results. If the selected DB is unreachable, has network issues, or hits some unknown problem, I need to try to connect to another DB node.
I am aware of Spring's AbstractRoutingDataSource. But where to inject the DB Connection Retry logic? Will Spring handle transactions properly if I switch the dataSource dynamically?
Or should I avoid Spring and MyBatis' out-of-the-box integration and do transaction management myself using MyBatis?
What do you guys suggest?
I propose using a NoSQL database like MongoDB. It is easy to cluster: you can configure, for example, 10 servers and replicate the data 3 times.
That means that if 2 of your 10 servers fail, your data is still safe.
NoSQL databases are different from RDBMSs, but they can give high performance in a cluster.
Also, many NoSQL databases have no transaction support; you have to handle that manually, for example in the case of financial operations.
You actually have to think in a different way when developing with NoSQL.
Yes, it will work. Take AbstractRoutingDataSource and code your own implementation. The only thing you cannot do is change the target database while a transaction is running.
So what you have to do is put the DB retry code in getConnection(). If that connection becomes invalid during the transaction, you should let it fail.
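The retry-on-connect idea can be sketched without Spring: pick a random node first, then fall back to the others if obtaining a connection fails. In the real routing data source each entry would be a `dataSource::getConnection` reference for one DB node; the class and names here are hypothetical.

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ThreadLocalRandom;

// Sketch of random-node selection with failover: try each node once,
// starting from a random offset, and return the first connection obtained.
public class FailoverConnector {
    public static Connection connect(List<Callable<Connection>> nodes) throws SQLException {
        int start = ThreadLocalRandom.current().nextInt(nodes.size());
        SQLException last = null;
        for (int i = 0; i < nodes.size(); i++) {
            Callable<Connection> node = nodes.get((start + i) % nodes.size());
            try {
                return node.call(); // first reachable node wins
            } catch (Exception e) {
                last = new SQLException("node unreachable, trying next", e);
            }
        }
        throw last; // every node failed
    }
}
```

As the answer says, this belongs in getConnection(), before the transaction starts; once a transaction is bound to a connection, a mid-transaction failure should simply propagate.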
