Empty Hibernate cache on demand - java

I'm writing a SOAP web service: JBoss + Hibernate + Java, with the database on PostgreSQL. After publishing the web service, it runs perfectly.
For testing purposes I change data in the database by opening pgAdmin and changing values in the rows by hand. The problem is, Hibernate is not aware of those changes until I re-publish the web service.
Is there any way to tell Hibernate to empty the cache or reload the data from the database so it will take the last values available?
Thanks!

I'm assuming you're talking about the second level cache...
The Cache API exposes several methods that let you evict various regions. Check all the evictXxx() methods. Use SessionFactory#getCache() to obtain the Cache.
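A minimal sketch of what that looks like, assuming a Hibernate 4.x-era `org.hibernate.Cache` (the helper class name here is hypothetical):

```java
import org.hibernate.Cache;
import org.hibernate.SessionFactory;

public class HibernateCacheEvictor {
    /** Drop all second-level cache regions so the next queries hit the DB. */
    public static void evictEverything(SessionFactory sessionFactory) {
        Cache cache = sessionFactory.getCache();
        cache.evictEntityRegions();     // all cached entities
        cache.evictCollectionRegions(); // all cached collections
        cache.evictQueryRegions();      // all cached query results
        // or, in a single call:
        // cache.evictAllRegions();
    }
}
```

Calling this after editing rows in pgAdmin forces the next queries to reload the latest values from PostgreSQL.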

Related

ORM query? Or should updated DB table information be reflected in the Hibernate web application?

The problem statement :
Example: I have a table called "STUDENT" which has 10 rows, and consider that one of the rows has the name "Jack". When my server starts up, I load the database table into cache memory, so my application has the value "Jack" and I use it all over my application.
Now an external source changes my "STUDENT" table, renaming "Jack" to "Prabhu Jack". I want the updated information in my application as soon as possible, without reloading/refreshing my application. I don't want to run some constant thread to monitor and update my application. All I want is some part of Hibernate, or any feasible solution, to achieve this.
What you describe is the classic case of whether to pull or push updates.
Pull
This approach relies on the application using some background thread or task system that periodically polls a resource and requests the desired information. It's the responsibility of the application to perform this task.
In order to use a pull mechanism in conjunction with a cache implementation with Hibernate, this would mean that you'd want your Hibernate query results to be stored in a L2 cache implementation, such as ehcache.
Your ehcache configuration would specify the storage capacity and expiration details, and you simply query for the student data at each point you require it. The L2 cache, which lives on the application server side, would be consulted first, and the database would only be consulted if the L2 cache entry had expired.
The downside is that you need to specify a reasonable time-to-live for the L2 cache so that the cache is refreshed by a query reasonably soon after the rows are updated. Depending on the frequency of change and usage, maybe a 5-minute window is sufficient.
Using the L2 cache avoids the need for a dedicated background polling thread and lets you specify a reasonable refresh interval, all within the Hibernate framework backed by a cache implementation.
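As a sketch, assuming Ehcache 2.x as the L2 provider, the TTL described above is just a region entry in ehcache.xml (the entity name is hypothetical):

```xml
<!-- ehcache.xml (Ehcache 2.x): region for a hypothetical Student entity -->
<cache name="com.example.Student"
       maxEntriesLocalHeap="1000"
       timeToLiveSeconds="300"/> <!-- 5-minute window: cache lags the DB by at most this long -->
```

The entity itself would be marked cacheable (e.g. with Hibernate's cache annotations) so its reads flow through this region.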
Push
This approach relies on the point where a change occurs to be capable of notifying interested parties that something changed and allowing the interested party to perform some action.
In order to use a push mechanism, your application would need to expose a way to be told a change occurred and preferably what the change actually was. Then when your external source modifies the table in question, that operation would need to raise an event and notify interested parties.
One way to architect this would be to use a JMS broker: have the external source submit a JMS message to a queue, and have your application subscribe to that queue to read the message when it's sent.
Another solution would be to couple the place where the external source manipulates the data tightly with your application such that the external source doesn't just manipulate the data in question, but also sends a JSON request to your application, allowing it to update its internal cache immediately.
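A sketch of the JMS subscriber side, assuming the external source posts the changed row's id as a text message (the `StudentCache` interface is hypothetical, standing in for whatever cache the application manages):

```java
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

// Hypothetical application-side cache; evict(id) drops a stale entry.
interface StudentCache {
    void evict(long studentId);
}

// Subscriber sketch: the external source posts the changed row's id to a
// queue, and this listener evicts it so the next read reloads fresh data.
public class StudentChangeListener implements MessageListener {
    private final StudentCache cache;

    public StudentChangeListener(StudentCache cache) {
        this.cache = cache;
    }

    @Override
    public void onMessage(Message message) {
        try {
            if (message instanceof TextMessage) {
                long studentId = Long.parseLong(((TextMessage) message).getText());
                cache.evict(studentId); // next lookup goes back to the database
            }
        } catch (Exception e) {
            // log and continue; a lost notification only delays the refresh
        }
    }
}
```

The listener would be registered on the queue via your JMS provider's connection setup.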
Conclusion
Using a push model could require introducing additional middleware components, should you want to efficiently decouple the external source from your application. But it comes with the added benefit that eventual consistency between the database and your application's cache happens in near real-time, and this solution has no need to re-query the database for those rows after startup.
Using a pull model doesn't require anything more than what you're likely already using in your application, other than perhaps a supported L2 cache provider rather than some homegrown solution. However, eventual consistency between the database and your application's cache is completely dependent on your TTL configuration for that entity's cache, and be aware that this solution will continue to query the database to refresh the cache each time the TTL expires.

How can I configure simple web cache for Java client application?

My application sends HTTP requests to a web service, but because the Terms of Service limit it to one query per second, it is very important for me not to send more queries than I need. I put the results of some queries into a database that I check before trying the query again, but some query results are not well suited to being put in a database. So I would like some sort of dumb cache that would intercept my web service calls and, if the call was a duplicate, just return the results of the previous call. I would expect to be able to configure the size of the cache and have it automatically remove the oldest entry if it fills up. It would be great if the cache could be backed by a file rather than use heap memory, because my application is already quite memory intensive.
For a simple caching solution try the Google Guava libraries. CacheBuilder/CacheLoader can be configured to your requirements. Guava provides a simple caching solution that is more sophisticated than Java's own HashMap but lightweight compared to Ehcache and others. This cache could be used in a web service request interceptor that decides whether to initiate a web service call.
A good tutorial with an example of the Guava cache can be found here.
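A sketch of the Guava setup, assuming query and response are both strings (`callWebService` is a hypothetical stand-in for the real rate-limited HTTP call):

```java
import java.util.concurrent.TimeUnit;

import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;

public class WebServiceCache {
    private final LoadingCache<String, String> cache;

    public WebServiceCache() {
        cache = CacheBuilder.newBuilder()
                .maximumSize(500)                       // evicts old entries once full
                .expireAfterWrite(10, TimeUnit.MINUTES) // results go stale eventually
                .build(new CacheLoader<String, String>() {
                    @Override
                    public String load(String query) throws Exception {
                        return callWebService(query);   // only runs on a cache miss
                    }
                });
    }

    public String lookup(String query) throws Exception {
        return cache.get(query); // duplicate queries are answered from the cache
    }

    private String callWebService(String query) {
        // hypothetical: the rate-limited web service request goes here
        return "...";
    }
}
```

One caveat for this question: Guava's cache is heap-only, so the file-backed option the asker mentions would need a provider with a disk store (e.g. Ehcache) instead.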

Is there any way to clear entity cache in a weblogic server without restarting? [for debugging]

I am investigating a bug in my application, which runs on a WebLogic 10.3.4 server. For this investigation, I sometimes need to clear some tables in the database directly (using SQL Navigator). But these changes aren't reflected in the WebLogic server unless I restart it, and restarting every time I modify data in the database is a time-consuming task.
I was wondering whether there is an easy and quick way to clear the database cache in the WebLogic server and force it to reload the modified data. I think if I add an EJB that calls the flush method for every entity, calling that method would do the task.
But do you have any suggestion or any other way to do this task, may be by changing a weblogic server setting?
Is there any one method call we can do for forcing flushing of all the entities in current container?
JPA 2.0 has a Cache API that allows you to clear the cache (evictAll).
EclipseLink also has its own API predating JPA 2.0.
See,
http://wiki.eclipse.org/EclipseLink/UserGuide/JPA/Basic_JPA_Development/Caching/Cache_API
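A minimal sketch of the standard JPA 2.0 route (the utility class name is hypothetical; `Student` stands in for any entity):

```java
import javax.persistence.Cache;
import javax.persistence.EntityManagerFactory;

public class JpaCacheUtil {
    /** Evict everything from the JPA 2.0 shared (second-level) cache. */
    public static void clear(EntityManagerFactory emf) {
        Cache cache = emf.getCache();
        cache.evictAll();
        // or evict just one entity type:
        // cache.evict(Student.class); // hypothetical entity
    }
}
```

Exposing a call like this through a small EJB or servlet, as the asker suggests, avoids restarting WebLogic after manual table edits.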

Best approach for Spring+MyBatis with Multiple Databases to support failovers

I need to develop some services and expose an API to some third parties.
In those services I may need to fetch/insert/update/delete data with some complex calculations involved(not just simple CRUD). I am planning to use Spring and MyBatis.
But the real challenge is that there will be multiple DB nodes with the same data (some external setup takes care of keeping them in sync). When I get a request for some data, I need to pick one DB node at random, query it, and return the results. If the selected DB is unreachable, has network issues, or fails for some unknown reason, I need to try connecting to another DB node.
I am aware of Spring's AbstractRoutingDataSource. But where do I inject the DB connection retry logic? Will Spring handle transactions properly if I switch the dataSource dynamically?
Or should I avoid Spring & MyBatis out-of-the-box integration and do Transaction management by myself using MyBatis?
What do you guys suggest?
I propose using a NoSQL database like MongoDB. It is easy to cluster: you can configure, for example, 10 servers and replicate the data 3 times.
That means that if 2 of your 10 servers fail, your data is still safe.
NoSQL databases are different from relational databases, but they can give high performance for clustering.
Also, there is no transaction support in NoSQL - you have to handle that manually in the case of financial operations.
You actually have to think differently when developing with NoSQL.
Yes, it will work. Take AbstractRoutingDataSource and code your own implementation. The only thing you cannot do is change the target database while a transaction is running.
So what you have to do is put the DB retry code in getConnection(). If the connection becomes invalid during a transaction, you should let it fail.
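The retry loop itself can be sketched in plain Java. The `ConnectionSupplier` interface below is a simplification standing in for `DataSource#getConnection`; in a custom AbstractRoutingDataSource each supplier would be one target's `dataSource::getConnection`:

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.util.List;

// Simplified stand-in for DataSource#getConnection, so the failover logic
// can be shown without a real database or Spring on hand.
@FunctionalInterface
interface ConnectionSupplier {
    Connection get() throws SQLException;
}

public class FailoverConnector {
    /** Try each node in order; return the first connection that succeeds. */
    public static Connection getConnection(List<ConnectionSupplier> nodes) throws SQLException {
        SQLException last = null;
        for (ConnectionSupplier node : nodes) {
            try {
                return node.get();
            } catch (SQLException e) {
                last = e; // node down or unreachable: fall through to the next one
            }
        }
        throw last != null ? last : new SQLException("no database nodes configured");
    }
}
```

If every node fails, the last SQLException propagates, which is exactly the "let it fail" behavior you want inside a running transaction.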

Are there any design patterns that could work in this scenario?

We have a system (Java web application) that's been in active development / maintenance for a long time now (something like ten years).
What we're looking at doing is implementing a RESTful API for the web app. This API, built with Jersey, will be a separate project, with the intent that it should be able to run alongside the main application or be deployed in the cloud.
Because of the nature and age of our application, we've had to implement a (somewhat) comprehensive caching layer on top of the database (Postgres) to help keep load down. For the RESTful API, the idea is that GET requests will go to the cache first instead of the database, to keep load off the database.
The cache will be populated in a way to help ensure that most things registered API users will need should be in there.
If there is a cache miss, the needed data should be retrieved from the database (also being entered into the cache in the process).
Obviously, this should remain transparent to the RESTful endpoint methods in my code. We've come up with the idea of creating a 'Broker' to handle communications with the DB and the cache. The REST layer will simply pass across IDs (when retrieving) or populated Java objects (when inserting/updating), and the broker will take care of retrieving, updating, invalidating, etc.
There is also the issue of extensibility. To begin with, the API will live alongside the rest of our servers, so access to the database won't be an issue. However, if we deploy to the cloud, we're going to need a different Broker implementation that communicates with the system (namely the database) in a different manner (potentially through an internal API).
I already have a rough idea on how I can implement this but it struck me that is probably a problem for which a suitable pattern could exist. If I could follow an established pattern as opposed to coming up with my own solution, that'll probably be a better choice. Any ideas?
Ehcache has an implementation of just such a cache that it calls a SelfPopulatingCache.
Requests are made to the cache, not to the database. Then if there is a cache miss Ehcache will call the database (or whatever external data source you have) on your behalf.
You just need to implement a CacheEntryFactory which has a single method:
Object createEntry(Object key) throws Exception;
So as the name suggests, Ehcache implements this concept with a pretty standard factory pattern...
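A sketch of wiring that up, assuming the Ehcache 2.x API (`net.sf.ehcache.constructs.blocking`); the `StudentBroker` interface is hypothetical, standing in for the question's DB-facing broker:

```java
import net.sf.ehcache.CacheManager;
import net.sf.ehcache.Ehcache;
import net.sf.ehcache.constructs.blocking.CacheEntryFactory;
import net.sf.ehcache.constructs.blocking.SelfPopulatingCache;

// Hypothetical broker from the question: knows how to hit the database.
interface StudentBroker {
    Object loadStudent(long id);
}

public class StudentCacheSetup {
    /** Wrap a configured Ehcache region so misses go through the broker. */
    public static SelfPopulatingCache build(CacheManager manager, final StudentBroker broker) {
        Ehcache underlying = manager.getEhcache("students"); // name from ehcache.xml
        CacheEntryFactory factory = new CacheEntryFactory() {
            @Override
            public Object createEntry(Object key) throws Exception {
                // Invoked only on a cache miss; result is stored automatically.
                return broker.loadStudent((Long) key);
            }
        };
        return new SelfPopulatingCache(underlying, factory);
    }
}
```

After this, `cache.get(42L)` returns the cached Element, transparently calling `createEntry` on first access.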
There's no pattern. Just hide the initial DB services behind interfaces, build tests around their intended behavior, then switch in an implementation that uses the caching layer. I guess dependency injection would be the best thing to help you do that?
Sounds like decorator pattern will suit your need: http://en.wikipedia.org/wiki/Decorator_pattern.
You can create a DAO interface for data access, something like:
Value get(long id);
First create a direct DB implementation, then create a Cache implementation that wraps an underlying DAO instance - in this case, the DB implementation.
The Cache implementation will try to get the value from its own managed cache, and fall back to the underlying DAO if that fails.
So both your old application and the REST layer will only see the DAO interface, without knowing any implementation details, and in the future you can change the implementation freely.
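A minimal sketch of that decorator, with a HashMap standing in for both the real database and the real cache (all class names here are hypothetical):

```java
import java.util.HashMap;
import java.util.Map;

// The DAO interface both the web app and the REST layer code against.
interface StudentDao {
    String get(long id);
}

// Direct DB implementation (stubbed here with a map standing in for the database).
class DbStudentDao implements StudentDao {
    private final Map<Long, String> table = new HashMap<>();
    DbStudentDao() { table.put(1L, "Jack"); }
    public String get(long id) { return table.get(id); }
}

// Decorator: consults its own cache first, falls back to the wrapped DAO on a miss.
class CachingStudentDao implements StudentDao {
    private final StudentDao delegate;
    private final Map<Long, String> cache = new HashMap<>();
    CachingStudentDao(StudentDao delegate) { this.delegate = delegate; }
    public String get(long id) {
        return cache.computeIfAbsent(id, delegate::get); // miss -> underlying DAO
    }
}
```

Callers only ever hold a `StudentDao`, so swapping the direct implementation for the caching one (or a future cloud-backed one) requires no changes at the call sites.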
The best design pattern for transparently caching HTTP requests is to use an HTTP cache.
