I'm facing an issue when trying to store a HikariDataSource in an Ignite cache: it can't be (de)serialized by Ignite. I like Ignite's cache features, so I want to reuse it for local needs as well.
Caused by: org.apache.ignite.binary.BinaryInvalidTypeException: com.zaxxer.hikari.util.ConcurrentBag$$Lambda$2327/0x00000008010b9840
at org.apache.ignite.internal.binary.BinaryContext.descriptorForTypeId(BinaryContext.java:697)
at org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1765)
at org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1724)
at org.apache.ignite.internal.binary.BinaryReaderExImpl.readField(BinaryReaderExImpl.java:1987)
at org.apache.ignite.internal.binary.BinaryFieldAccessor$DefaultFinalClassAccessor.read0(BinaryFieldAccessor.java:702)
at org.apache.ignite.internal.binary.BinaryFieldAccessor.read(BinaryFieldAccessor.java:187)
... 70 common frames omitted
How can I skip (de)serialization for CacheMode.LOCAL caches in Ignite?
Use a HashMap if you need to keep a reference to the data source locally. A map doesn't serialize its values, whereas Ignite's local cache always serializes records.
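For example, a minimal sketch of keeping the pool in a plain map instead of an Ignite cache (the registry class and pool names are illustrative, not part of Ignite or Hikari):

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class LocalDataSourceRegistry {

    // Values are held by reference and never (de)serialized, unlike entries in an Ignite cache.
    private static final Map<String, HikariDataSource> POOLS = new ConcurrentHashMap<>();

    public static HikariDataSource getOrCreate(String name, HikariConfig config) {
        // Builds the pool once per name and hands back the same instance afterwards.
        return POOLS.computeIfAbsent(name, key -> new HikariDataSource(config));
    }
}

Ignite can still hold the data you actually want cached and replicated; only the non-serializable pool object stays out of it.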
Has anyone tried invalidating an entire memcached namespace?
For example, I have two read methods with different keys:
@Override
@ReadThroughSingleCache(namespace = UrlClientExclusion.TABLE_NAME, expiration = 24 * HOUR)
public List<UrlClientExclusion> list(@ParameterValueKeyProvider String idClient) {

@Override
@ReadThroughSingleCache(namespace = UrlClientExclusion.TABLE_NAME, expiration = 24 * HOUR)
public UrlClientExclusion find(@ParameterValueKeyProvider int idUrlClientExclusion) {
and I want to delete the entire UrlClientExclusion.TABLE_NAME namespace on update/delete operations.
I can't use a key-list approach because there are many instances of the app:
@InvalidateMultiCache(namespace = UrlClientExclusion.TABLE_NAME)
public int update(UrlClientExclusion urlClientExclusion /*, @ParameterValueKeyProvider List<Object> keylist -- it is impossible for me to put all keys in this list */) {
So the solution is to delete the entire namespace.
What is the annotation to do this?
Is it possible to build a custom annotation that deletes an entire namespace? How?
Many thanks
Memcached doesn't support namespaces; SSM provides namespaces only as a logical abstraction. It is not possible to flush all keys from a given namespace because memcached doesn't group keys into namespaces. Memcached only supports removing a single key or flushing all keys.
You can either flush all data from the memcached instance or provide the exact keys that should be removed.
I don't know how this can be handled with the simple-spring-memcached lib, but I would suggest using Spring's Cache Abstraction instead. That way you can switch the cache storage to one of your preference, e.g. ConcurrentHashMap, Ehcache, Redis, etc.; it is just a configuration change for your application. To evict the whole namespace, you could do something like:
@CacheEvict(cacheNames = "url_client_exclusion", allEntries = true)
public int update(...)
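A rough sketch of what that could look like, assuming the UrlClientExclusion entity from the question and an in-memory ConcurrentMapCacheManager (for multiple app instances you would plug in a shared store such as Redis instead; all other names here are illustrative):

import java.util.Collections;
import java.util.List;
import org.springframework.cache.CacheManager;
import org.springframework.cache.annotation.CacheEvict;
import org.springframework.cache.annotation.Cacheable;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.cache.concurrent.ConcurrentMapCacheManager;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.stereotype.Service;

@Configuration
@EnableCaching
class CacheConfig {

    // Swapping this bean (e.g. for a Redis-backed CacheManager) is the only change needed
    // to move the "namespace" to a different store.
    @Bean
    CacheManager cacheManager() {
        return new ConcurrentMapCacheManager("url_client_exclusion");
    }
}

@Service
class UrlClientExclusionService {

    @Cacheable(cacheNames = "url_client_exclusion")
    public List<UrlClientExclusion> list(String idClient) {
        return Collections.emptyList(); // placeholder for the real DAO call
    }

    // allEntries = true clears the whole cache, i.e. the whole "namespace".
    @CacheEvict(cacheNames = "url_client_exclusion", allEntries = true)
    public int update(UrlClientExclusion urlClientExclusion) {
        return 1; // placeholder for the real update
    }
}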
Unfortunately there is no official Memcached support offered by Pivotal, but if you really need to use Memcached, you could check out the Memcached Spring Boot library, which is compliant with the Spring Cache Abstraction.
There is a sample Java app where you can see how this lib is used. There you can also find an example of @CacheEvict usage (link).
In short, I have an entity mapped to a view in the DB (Oracle) with the 2nd level cache enabled (read-only strategy) -- ehcache.
If I manually update some column in the DB, the cache will not be updated.
I did not find any way to make this happen; it only works if the updates are done through the Hibernate entity.
Can I somehow implement this feature?
Maybe a job to monitor the table (or view)? Or maybe there is some way to notify Hibernate about a change in a specific table in the DB.
Thanks for future answers!
According to the Hibernate JavaDoc, you can use org.hibernate.Cache.evictAllRegions():
evictAllRegions() Evict all data from the cache.
Using Session and SessionFactory:
Session session = sessionFactory.getCurrentSession();
if (session != null) {
session.clear(); // internal cache clear
}
Cache cache = sessionFactory.getCache();
if (cache != null) {
cache.evictAllRegions(); // Evict data from all query regions.
}
1) If you need to evict only one entity (when only certain entities are updated directly in the DB) rather than the whole session cache, you can use
evictEntityRegion(Class entityClass) Evicts all entity data from the given region (i.e.
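For instance (a sketch; Company stands in for whatever entity is mapped to the view):

Cache cache = sessionFactory.getCache();
if (cache != null) {
    // Evict only the cached data for this one entity type.
    cache.evictEntityRegion(Company.class);
}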
2) If you have a lot of entities that can be updated directly in the DB, you can use this method, which evicts all entities from the 2nd level cache (it can be exposed to admins through JMX or other admin tools):
/**
* Evicts all second level cache hibernate entities. This is generally only
* needed when an external application modifies the database directly.
*/
public void evict2ndLevelCache() {
try {
Map<String, ClassMetadata> classesMetadata = sessionFactory.getAllClassMetadata();
Cache cache = sessionFactory.getCache();
for (String entityName : classesMetadata.keySet()) {
logger.info("Evicting Entity from 2nd level cache: " + entityName);
cache.evictEntityRegion(entityName);
}
} catch (Exception e) {
logger.logp(Level.SEVERE, "SessionController", "evict2ndLevelCache", "Error evicting 2nd level hibernate cache entities: ", e);
}
}
3) Another approach is described here for PostgreSQL + Hibernate; I think you can do something similar for Oracle.
Use Debezium for asynchronous cache updates driven by your database. You can learn more at https://debezium.io/
This article is also very helpful, as it walks through a concrete implementation:
https://debezium.io/blog/2018/12/05/automating-cache-invalidation-with-change-data-capture/
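As a very rough sketch of the idea (not taken from the linked article), using Debezium's embedded engine; the connector properties are incomplete placeholders and evictAffectedRegion() is a stand-in for whatever cache eviction call you choose:

import io.debezium.engine.ChangeEvent;
import io.debezium.engine.DebeziumEngine;
import io.debezium.engine.format.Json;
import java.util.Properties;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class CacheInvalidationListener {

    public static void main(String[] args) {
        Properties props = new Properties();
        // Placeholder connector configuration -- a real setup needs connection details,
        // offset storage, etc. for your Oracle database and Debezium version.
        props.setProperty("name", "cache-invalidation-engine");
        props.setProperty("connector.class", "io.debezium.connector.oracle.OracleConnector");
        props.setProperty("table.include.list", "MYSCHEMA.MY_BASE_TABLE");

        DebeziumEngine<ChangeEvent<String, String>> engine = DebeziumEngine.create(Json.class)
                .using(props)
                .notifying(event -> evictAffectedRegion(event.destination()))
                .build();

        // The engine runs until shut down, pushing each captured change to the callback.
        ExecutorService executor = Executors.newSingleThreadExecutor();
        executor.execute(engine);
    }

    private static void evictAffectedRegion(String topic) {
        // Stand-in: here you would call e.g. sessionFactory.getCache().evictEntityRegion(...)
        System.out.println("Change captured on " + topic + " -> evict matching cache region");
    }
}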
As mentioned, when you update the DB manually from the back end (not through the application / Hibernate session), the cache is not updated and your application remains unaware of it.
To tell the app about the change, you need to refresh the entire cache, or the part of the cache related to that entity, depending on the case. This can be done in two ways:
1- Restart the application. The cache will be rebuilt with the updated DB data.
2- Trigger the update without restarting the app. You tell your application that the current cache is invalid and should be refreshed.
You can give this external push to your app in many ways. A few are listed below.
Through JMX.
Through a servlet with a published URL that refreshes the cache; hit the URL after you change tables in the DB.
Through a database trigger that calls a listener in the application.
While implementing the external push/admin task, call a suitable cache-related method to invalidate or refresh the cache based on your requirement, e.g. Session.refresh(), Cache.evictAllRegions(), Cache.evictEntityRegion(entityName), as described in the other answers; a sketch of the JMX variant follows below.
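A minimal sketch of the JMX variant; CacheAdmin/CacheAdminMBean are illustrative names, and the MBean simply delegates to the Hibernate Cache API shown above:

import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;
import org.hibernate.SessionFactory;

interface CacheAdminMBean {
    void evictAllRegions();
}

public class CacheAdmin implements CacheAdminMBean {

    private final SessionFactory sessionFactory;

    public CacheAdmin(SessionFactory sessionFactory) {
        this.sessionFactory = sessionFactory;
    }

    @Override
    public void evictAllRegions() {
        // Drops every 2nd level cache region, so the next reads go to the database.
        sessionFactory.getCache().evictAllRegions();
    }

    // Register once at startup; afterwards the operation can be triggered from any JMX console.
    public static void register(SessionFactory sessionFactory) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        server.registerMBean(new CacheAdmin(sessionFactory),
                new ObjectName("com.example:type=CacheAdmin"));
    }
}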
You will find the way to control the second level cache here:
http://docs.jboss.org/hibernate/core/3.3/reference/en/html/performance.html#performance-sessioncache
You can use the session.refresh() method to reload the objects that are currently held in the session.
Read object loading for more detail.
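For example (a sketch; Company and companyId are placeholders for your own entity and identifier):

// Re-reads the entity's current state from the database, replacing any stale values in the session.
Company company = session.get(Company.class, companyId);
session.refresh(company);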
As of Java EE 7:
myStatelessDaoBean.getSession().evict(MyEntity.class);
I am following Spring Batch Admin.
I want to use a database for saving the metadata values.
My metadata tables are created, but data does not go into those tables; it still uses in-memory storage for the metadata.
I know Spring uses MapJobRepositoryFactoryBean as the implementation class for the jobRepository bean to store data in memory, and we have to change it to JobRepositoryFactoryBean if we want to store metadata in the database.
However, even after changing it, I see no effect (I have cleaned and recompiled, no issues there).
I have spent some time searching, but with no success. Can anyone help?
My batch-oracle.properties file is:
batch.jdbc.driver=oracle.jdbc.driver.OracleDriver
batch.jdbc.url=jdbc:oracle:thin:@192.168.2.45:1521:devdb
batch.jdbc.user=hsdndad
batch.jdbc.password=isjdsn
batch.jdbc.testWhileIdle=false
batch.jdbc.validationQuery=
batch.drop.script=classpath:/org/springframework/batch/core/schema-drop-oracle10g.sql
batch.schema.script=classpath:/org/springframework/batch/core/schema-oracle10g.sql
batch.business.schema.script=classpath:oracle/initial-query.sql
batch.database.incrementer.class=org.springframework.jdbc.support.incrementer.OracleSequenceMaxValueIncrementer
batch.database.incrementer.parent=sequenceIncrementerParent
batch.lob.handler.class=org.springframework.jdbc.support.lob.OracleLobHandler
batch.grid.size=2
batch.jdbc.pool.size=6
batch.verify.cursor.position=true
batch.isolationlevel=ISOLATION_SERIALIZABLE
batch.table.prefix=BATCH_
After some digging I came to know about the particular naming convention for the properties file (earlier I was naming it batch-default.properties), so now I think it is trying to insert but throwing some SERIALIZABLE exceptions. – Nirbhay Mishra
Try changing the isolationLevelForCreate of the JobRepository to ISOLATION_READ_COMMITTED.
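Spring Batch Admin wires this through XML and properties, but a Java-config sketch shows the relevant settings; dataSource and transactionManager are assumed to be defined elsewhere:

import javax.sql.DataSource;
import org.springframework.batch.core.repository.JobRepository;
import org.springframework.batch.core.repository.support.JobRepositoryFactoryBean;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.transaction.PlatformTransactionManager;

@Configuration
public class BatchRepositoryConfig {

    // Persists the BATCH_* metadata to the database instead of the in-memory map.
    @Bean
    public JobRepository jobRepository(DataSource dataSource,
                                       PlatformTransactionManager transactionManager) throws Exception {
        JobRepositoryFactoryBean factory = new JobRepositoryFactoryBean();
        factory.setDataSource(dataSource);
        factory.setTransactionManager(transactionManager);
        factory.setTablePrefix("BATCH_");
        // Oracle can choke on ISOLATION_SERIALIZABLE here; READ_COMMITTED is the usual workaround.
        factory.setIsolationLevelForCreate("ISOLATION_READ_COMMITTED");
        factory.afterPropertiesSet();
        return (JobRepository) factory.getObject();
    }
}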
I would like to fetch multiple Hibernate mapped objects from a database in a batch. As far as I know this is not currently supported by Hibernate (or any Java ORM I know of). So I wrote a driver using RMI that implements this API:
import java.io.Serializable;
import java.rmi.Remote;
import java.rmi.RemoteException;

interface HibernateBatchDriver extends Remote {
    Serializable[] execute(String[] hqlQueries) throws RemoteException;
}
The implementation of this API opens a Hibernate session against the local database, issues the queries one by one, batches up the results, and returns them to the caller. The problem with this is that the fetched objects no longer have any Session attached to them after being sent back, so accessing lazily-fetched fields on such objects later ends up with a "no session" error. Is there a solution to this problem? I don't think Session objects are serializable; otherwise I would have sent them over the wire as well.
As @dcernahoschi mentioned, the Session object is Serializable, but the JDBC connection is not. Serializable means that you can save something to a file, read it back later, and get the same object. You can't save a JDBC connection to a file and restore it later from that file; you would have to open a new JDBC connection.
So even though you could send the session via RMI, you would need a JDBC connection on the remote computer as well. But if it were possible to set up a session on the remote computer, why not execute the queries on that computer in the first place?
If you want to send the query results via RMI, then what you need to do is fetch the whole objects without lazy fetching. To do that you must define all relationships as eagerly fetched in your mappings.
If you can't change the mappings to eager, an alternative is to build a "deep" copy of each object and send that copy through RMI. Creating a deep copy of your objects will take some effort, but if you can't change the mappings to eager fetching it is the only solution.
This approach means that your interface method must change to something like:
List[] execute (String [] hqlQueries) throws RemoteException;
Each list in the result array holds the results fetched by one query.
Hibernate Session objects are Serializable. The underlying JDBC connection is not. So you can disconnect() the session from the JDBC connection before serialization and reconnect() it after deserialization.
Unfortunately this won't help you very much if you need to send the session to a host where you can't obtain a new JDBC connection. So the only option is to fully load the objects, serialize them, and send them to the remote host.
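One way to do that full loading on the driver side, sketched below, is to initialize the lazy state while the session is still open. Note that Hibernate.initialize touches only the proxy or collection it is given, so nested lazy associations would still need to be walked explicitly or mapped eagerly; the class is illustrative, not the asker's actual implementation:

import java.io.Serializable;
import java.util.List;
import org.hibernate.Hibernate;
import org.hibernate.Session;
import org.hibernate.SessionFactory;

public class HibernateBatchDriverImpl {

    private final SessionFactory sessionFactory;

    public HibernateBatchDriverImpl(SessionFactory sessionFactory) {
        this.sessionFactory = sessionFactory;
    }

    public Serializable[] execute(String[] hqlQueries) {
        Session session = sessionFactory.openSession();
        try {
            Serializable[] results = new Serializable[hqlQueries.length];
            for (int i = 0; i < hqlQueries.length; i++) {
                List<?> rows = session.createQuery(hqlQueries[i]).list();
                for (Object row : rows) {
                    // Load each result fully while the session is open; call this on any
                    // lazy association you need on the client side as well.
                    Hibernate.initialize(row);
                }
                results[i] = (Serializable) rows; // the List returned by Hibernate is an ArrayList
            }
            return results;
        } finally {
            session.close();
        }
    }
}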
Here is my code:
Company cc = em.find(Company.class, clientUser.getCompany().getId());
System.out.println(cc.getCompany_code());
HashMap<String, Object> findProperties = new HashMap<>();
findProperties.put(QueryHints.CACHE_RETRIEVE_MODE, CacheRetrieveMode.BYPASS);
Company oo = em.find(Company.class, clientUser.getCompany().getId(), findProperties);
System.out.println(oo.getCompany_code());
Just like the "Used as EntityManager properties" example here.
But there is no difference between the two outputs.
What are you expecting to be different and why?
Note that CACHE_RETRIEVE_MODE only affects the shared (2nd level) cache, not the persistence context (1st level/transactional cache); object identity must always be maintained in the persistence context for objects that have already been read.
If you have changed your database and expect the new data, then try the BYPASS with a new EntityManager, or try using refresh().
EclipseLink also provides the query hint "eclipselink.maintain-cache"="false" to bypass the persistence context as well.
What version of EclipseLink are you using? I believe there was a bug in BYPASS in the 2.0 release that was fixed in 2.1. Try the latest release.
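A short sketch of the two suggestions above (a new EntityManager with BYPASS, and refresh()); emf, em and companyId are placeholders from the surrounding context, not a required API:

// Option 1: a fresh EntityManager plus BYPASS, so neither the old persistence context
// nor the shared cache can return the stale instance.
EntityManager freshEm = emf.createEntityManager();
Map<String, Object> props = new HashMap<>();
props.put("javax.persistence.cache.retrieveMode", CacheRetrieveMode.BYPASS);
Company fromDb = freshEm.find(Company.class, companyId, props);

// Option 2: keep the same EntityManager but force a reload of the already-managed instance.
Company managed = em.find(Company.class, companyId);
em.refresh(managed);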