SQL equivalent of Javax Cache 'put' (INSERT or UPDATE) - java

I am using javax.cache along with a database. I use the cache's APIs to get/put/delete entities, and the database sits behind this cache. For this, I am using a CacheLoader and a CacheWriter.
So, the SQL constructs map to the cache API as follows:
SELECT -> get
INSERT -> put
DELETE -> delete
If an entry is already present in the cache and I update it, I still only receive the value in the 'write' method. But since the value already exists in the database, I need to use an UPDATE query.
How do I identify which database operation to perform in the cache's 'put' operation?
Note: UPSERT is not a good option from a performance point of view.

If you put a value in the cache, you can first check whether the key is already there; in that case you need an UPDATE. If the key was not present, you need an INSERT. It sounds like you could benefit from an ORM with an L2 cache, such as Hibernate, which handles all of these scenarios (and many more) for you.
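As a rough sketch of that check, assuming the application issues the SQL itself through JDBC rather than through a write-through CacheWriter (the entity table, its columns, and the CacheBackedDao class are made-up names):
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import javax.cache.Cache;

class CacheBackedDao {

    private final Cache<Long, String> cache;
    private final Connection connection;

    CacheBackedDao(Cache<Long, String> cache, Connection connection) {
        this.cache = cache;
        this.connection = connection;
    }

    void save(Long id, String value) throws SQLException {
        // Key already cached -> assume the row exists and UPDATE it, otherwise INSERT.
        String sql = cache.containsKey(id)
                ? "UPDATE entity SET content = ? WHERE id = ?"
                : "INSERT INTO entity (content, id) VALUES (?, ?)";
        try (PreparedStatement ps = connection.prepareStatement(sql)) {
            ps.setString(1, value);
            ps.setLong(2, id);
            ps.executeUpdate();
        }
        cache.put(id, value); // keep the cache in sync with the database
    }
}
Note that this only works if an absent key reliably means the row is absent from the database; if entries can be evicted, the approaches in the next answer handle that case.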

There are several ways I can think of. Basically these are variations of:
Metadata in the database
Within an entity I typically have additional fields, such as insert and update timestamps and a modification counter, which are handled by the object-relational mapper (ORM). That is very useful for debugging. The CacheWriter can check whether the insert timestamp is set: if yes, it is an update; if not, it is an insert.
It does not matter whether the value gets evicted in the meantime, as long as your application reads the latest contents through the cache and writes a modified version of it.
If your application does not read the data before modifying it, or if this happens very often, I suggest caching a flag like insertedAlready. That leads to three-way logic: inserted, not inserted, not in the cache = don't know yet. In the latter case you need to do a read before the update or insert in the cache writer.
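A minimal sketch of the timestamp check in a CacheWriter; MyEntity (carrying an insert timestamp) and MyEntityDao are assumed application classes, not part of any specific API:
import java.util.Collection;
import javax.cache.Cache;
import javax.cache.integration.CacheWriter;

class TimestampAwareWriter implements CacheWriter<Long, MyEntity> {

    private final MyEntityDao dao; // assumed JDBC/ORM access layer

    TimestampAwareWriter(MyEntityDao dao) {
        this.dao = dao;
    }

    @Override
    public void write(Cache.Entry<? extends Long, ? extends MyEntity> entry) {
        MyEntity e = entry.getValue();
        if (e.getInsertTimestamp() == null) {
            dao.insert(e); // never persisted yet -> SQL INSERT (the ORM/DAO fills the insert timestamp)
        } else {
            dao.update(e); // insert timestamp present -> SQL UPDATE
        }
    }

    @Override
    public void writeAll(Collection<Cache.Entry<? extends Long, ? extends MyEntity>> entries) {
        entries.forEach(this::write);
    }

    @Override
    public void delete(Object key) {
        dao.deleteById((Long) key);
    }

    @Override
    public void deleteAll(Collection<?> keys) {
        keys.forEach(this::delete);
    }
}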
Metadata in the cache only
The cached object stores additional data indicating whether the object was read from the database before, like:
class CachedDbValue<V> {
    boolean insertedAlready;
    V databaseContent;
}
The code facing your application needs to wrap the database data into the cached value.
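For illustration, a minimal sketch of such a wrapping layer, assuming a read-through CacheLoader that returns wrappers with insertedAlready = true for rows it finds in the database (the facade class and its methods are made-up names):
import javax.cache.Cache;

class DbValueCacheFacade<V> {

    private final Cache<Long, CachedDbValue<V>> cache;

    DbValueCacheFacade(Cache<Long, CachedDbValue<V>> cache) {
        this.cache = cache;
    }

    V get(Long key) {
        CachedDbValue<V> wrapper = cache.get(key); // read-through loader consults the database if needed
        return wrapper != null ? wrapper.databaseContent : null;
    }

    void put(Long key, V newContent) {
        // A cached entry means the row was loaded from, or already written through to, the database.
        CachedDbValue<V> previous = cache.get(key);
        CachedDbValue<V> wrapper = new CachedDbValue<>();
        wrapper.insertedAlready = previous != null;
        wrapper.databaseContent = newContent;
        cache.put(key, wrapper); // the CacheWriter branches on insertedAlready: UPDATE vs. INSERT
    }
}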
Side note 1: Don't read the object from the cache and modify the instance directly, always make a copy. Modifying the object directly may have different unwanted effects with different JCache implementations. Also check my explanation here: javax.cache store by reference vs. store by value
Side note 2: You are building a caching ORM layer by yourself. Maybe use an existing one.

Related

How to optimize one big insert with hibernate

For my website, I'm creating a book database. I have a catalog with a root node; each node has subnodes, each subnode has documents, each document has versions, and each version is made of several paragraphs.
In order to create this database as fast as possible, I first create the entire tree model in memory, and then I call session.save(rootNode).
This single save populates my entire database (when I do a mysqldump on the database at the end, it weighs 1 GB).
The save costs a lot (more than an hour), and since the database grows with new books and new versions of existing books, it costs more and more. I would like to optimize this save.
I've tried increasing the batch_size, but it changes nothing since it's a single save. When I mysqldump a script and insert it back into MySQL, the operation takes 2 minutes or less.
And when I run htop on the Ubuntu machine, I can see that MySQL is only using 2 or 3% CPU, which means that it's Hibernate that's slow.
If someone could give me possible techniques that I could try, or possible leads, it would be great... I already know some of the reasons why it takes time. If someone wants to discuss it with me, thanks for their help.
Here are some of my problems (I think): For example, I have self-assigned ids for most of my entities. Because of that, Hibernate checks each time whether the row already exists before it saves it. I don't need this because the batch I'm executing runs only once, when I create the database from scratch. The best would be to tell Hibernate to ignore the primary key checks (like mysqldump does) and re-enable the key checking once the database has been created. It's just a one-shot batch to initialize my database.
A second problem would again be about the foreign keys. Hibernate inserts rows with null values, then makes an update in order to make the foreign keys work.
About using another technology: I would like to make this batch work with Hibernate because, afterwards, my whole website works very well with Hibernate, and if it's Hibernate that creates the database, I'm sure the naming rules and every foreign key will be created correctly.
Finally, it's a read-only database. (I have a user database, which uses InnoDB, where I do updates and inserts while my website is running, but the document database is read-only and MyISAM.)
Here is an example of what I'm doing:
TreeNode rootNode = new TreeNode();
recursiveLoadSubNodes(rootNode); // This method creates my big tree, in memory only.
hibernateSession.beginTransaction();
hibernateSession.save(rootNode); // takes more than an hour to save 1 GB of data: hundreds of sub tree nodes, thousands of documents, tens of thousands of paragraphs
hibernateSession.getTransaction().commit();
It's a little hard to guess what could be the problem here but I could think of 3 things:
Increasing batch_size only might not help because - depending on your model - inserts might be interleaved (i.e. A B A B ...). You can allow Hibernate to reorder inserts and updates so that they can be batched (i.e. A A ... B B ...). Depending on your model this might not work because the inserts might not be batchable. The necessary properties would be hibernate.order_inserts and hibernate.order_updates, and a blog post that describes the situation can be found here: https://vladmihalcea.com/how-to-batch-insert-and-update-statements-with-hibernate/
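For reference, a rough sketch of enabling those settings when bootstrapping Hibernate programmatically (the batch size is just an illustrative value; the same keys can go into hibernate.cfg.xml or persistence.xml instead):
import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

Configuration cfg = new Configuration().configure(); // loads your existing hibernate.cfg.xml
cfg.setProperty("hibernate.jdbc.batch_size", "50");
cfg.setProperty("hibernate.order_inserts", "true");
cfg.setProperty("hibernate.order_updates", "true");
SessionFactory sessionFactory = cfg.buildSessionFactory();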
If the entities don't already exist (which seems to be the case) then the problem might be the first level cache. This cache will cause Hibernate to get slower and slower because each time it wants to flush changes it will check all entries in the cache by iterating over them and calling equals() (or something similar). As you can see that will take longer with each new entity that's created. To fix that you could either try to disable the first level cache (I'd have to look up whether that's possible for write operations and how this is done - or you do that :) ) or try to keep the cache small, e.g. by inserting the books yourself and evicting each book from the first level cache after the insert (you could also go deeper and do that on the document or paragraph level).
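A sketch of the "keep the cache small" option, assuming the books can be saved one by one instead of through the single save(rootNode) (Book/books are placeholders for your entities):
Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();
int count = 0;
for (Book book : books) {
    session.save(book);
    if (++count % 50 == 0) { // every 50 entities, ideally matching the JDBC batch size
        session.flush();     // push the pending inserts to the database
        session.clear();     // detach them so the first-level cache stays small
    }
}
tx.commit();
session.close();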
It might not actually be Hibernate (or at least not alone) but your DB as well. Note that restoring dumps often removes/disables constraint checks and indices along with other optimizations so comparing that with Hibernate isn't that useful. What you'd need to do is create a bunch of insert statements and then just execute those - ideally via a JDBC batch - on an empty database but with all constraints and indices enabled. That would provide a more accurate benchmark.
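A sketch of such a plain-JDBC benchmark, assuming connection is an open java.sql.Connection and that the table, the columns, and the Paragraph class are made up for the example:
try (PreparedStatement ps = connection.prepareStatement(
        "INSERT INTO paragraph (id, document_id, content) VALUES (?, ?, ?)")) {
    int count = 0;
    for (Paragraph p : paragraphs) {
        ps.setLong(1, p.getId());
        ps.setLong(2, p.getDocumentId());
        ps.setString(3, p.getContent());
        ps.addBatch();
        if (++count % 1000 == 0) {
            ps.executeBatch(); // send the accumulated inserts every 1000 rows
        }
    }
    ps.executeBatch(); // flush the remainder
}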
Assuming that comparison shows that the plain SQL insert isn't that much faster then you could decide to either keep what you have so far or refactor your batch insert to temporarily disable (or remove and re-create) constraints and indices.
Alternatively you could try not to use Hibernate at all or change your model - if that's possible given your requirements which I don't know. That means you could try to generate and execute the SQL queries yourself, use a NoSQL database or NoSQL storage in a SQL database that supports it - like Postgres.
We're doing something similar, i.e. we have Hibernate entities that contain some complex data which is stored in a JSONB column. Hibernate can read and write that column via a custom usertype but it can't filter (Postgres would support that but we didn't manage to enable the necessary syntax in Hibernate).

Consequences of using StepExecutionContext/JobExecutionContext to share a HashMap with large values

I have a requirement in which I retrieve values in one reader of a step using SQL statements and then make the same request in the next reader.
I do not want to make another request if the data has already been fetched in the first reader; instead, I want to pass that collection (possibly a HashMap) to the next step.
For this I have gone through the following link on SO:
How can we share data between the different steps of a Job in Spring Batch?
In many of the comments it is mentioned that 'data must be short'.
Also it is mentioned in one response that: these contexts are good to share strings or simple values, but not for sharing collections or huge amounts of data.
By passing that HashMap, I assume that only a reference to the HashMap will be passed.
It would be good to know the possible consequences of passing it beforehand, and any better alternative approach.
Passing data between steps is indeed done via the execution context. However, you should be careful about the size of the data you put in the execution context, as it is persisted between steps.
I do not want to make another request if the data is already fetched in the First reader and pass that collection (possibly a HashMap) to next step
You can read the data from the database only once and put it in a cache. The second reader can then get the data from the cache. This would be faster than reading the data from the database a second time.
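For example, a minimal sketch of such a cache as a Spring-managed singleton (SharedDataCache is an invented name, not Spring Batch API): the first step fills it after its query, and the second reader is given the bean instead of re-querying the database.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.springframework.stereotype.Component;

@Component
public class SharedDataCache {

    private final Map<String, Object> data = new ConcurrentHashMap<>();

    public void put(String key, Object value) {
        data.put(key, value);
    }

    public Object get(String key) {
        return data.get(key);
    }
}
Since the bean lives outside the execution context, nothing gets serialized into the batch metadata tables, which avoids the size concern mentioned above.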
Hope this helps.

Not able to clear hibernate cache

I am using the Broadleaf demo application, which has Hibernate configured with EhCache. I also have an external application which interacts with the same db directly.
When I update the db using the external application, my Broadleaf application, unaware of those changes, throws a duplicate primary key error while creating new entities. I am trying to resolve this issue by clearing the Hibernate cache periodically, which lets Hibernate rebuild the cache from scratch so that everything syncs up.
I am using following code to clear out the second level cache.
Cache cache = sessionFactory.getCache();
String entityName = "someName";
cache.evictEntityRegion(entityName);
But, this doesn't seem to work.
I even tried to clear the cache manually using JMX tools such as VisualVM, but this also doesn't work. I am still getting old primary key values in my APIs. Is this because only the second-level cache is being cleared, leaving the first-level cache? I am stuck here. Can anyone please help with this issue?
UPDATED :
Let's say I have applications A and B. A uses Broadleaf and B uses raw SQL queries to insert into the db. I create a few orders using application A, and then I insert a few orders directly into the db using application B, along with updating the SEQUENCE_GENERATOR table with max(order_id) + 1. Afterwards, when I try to create an order using application A, it throws a duplicate primary key exception. I tried to debug the issue and found that IdOverrideTableGenerator is still giving me my old primary key. This made me curious about the second-level cache. Doesn't Broadleaf use SEQUENCE_GENERATOR as the starting reference for primary key generation and maintain the current state in a cache? In my case, even updating SEQUENCE_GENERATOR doesn't ensure a fresh and unique primary key.
You're correct in that you need L2 cache invalidation for your external imports if you want your implementation to recognize your new entities at runtime. Otherwise, you would have to wait for the configured TTL on your cache region to expire for your application to see the new records.
However, L2 cache doesn't have any direct correlation to how Hibernate determines primary keys in the case of Broadleaf. Broadleaf utilizes a table generator strategy for grabbing a batch of ids in a performant and cluster-safe way. You probably notice a table entitled SEQUENCE_GENERATOR in your schema. This table contains various id ranges that have been acquired for different domain classes. Whenever Hibernate needs to grab a new batch of ids for insertions, it will interact with this table to register a new range of ids to check out. This should guarantee that no node in the cluster will try to insert an entity with a colliding id.
In your case, you need to guarantee that an external process can perform insertions in a non-colliding manner. To do so, I believe you need to create an API for the external process to call that will perform this same "id checkout" operation on behalf of that calling process. Then, your import code (presumably housed elsewhere) will have a range of ids it can safely use. The code backing the API you create should perform the same operation that Hibernate would normally perform to acquire a batch of ids for entity insertions. You can review org.hibernate.id.enhanced.TableGenerator for an example of what this looks like and create something similar for your own purposes.
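As a rough sketch of what the code backing such an API could look like: the SEQUENCE_GENERATOR table and the ID_NAME/ID_VAL column names are assumptions about the generated schema, so check them against what Broadleaf actually created before using anything like this.
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class IdRangeService {

    // Checks out an inclusive id range [start, end] for the given generator name
    // and advances the stored value, mirroring what a table-based id generator does.
    public long[] checkOutIdRange(Connection con, String idName, int allocationSize)
            throws SQLException {
        con.setAutoCommit(false);
        long current;
        try (PreparedStatement select = con.prepareStatement(
                "SELECT ID_VAL FROM SEQUENCE_GENERATOR WHERE ID_NAME = ? FOR UPDATE")) {
            select.setString(1, idName);
            try (ResultSet rs = select.executeQuery()) {
                rs.next();
                current = rs.getLong(1);
            }
        }
        try (PreparedStatement update = con.prepareStatement(
                "UPDATE SEQUENCE_GENERATOR SET ID_VAL = ? WHERE ID_NAME = ?")) {
            update.setLong(1, current + allocationSize);
            update.setString(2, idName);
            update.executeUpdate();
        }
        con.commit();
        return new long[] { current, current + allocationSize - 1 };
    }
}
The external import process would call this once, then assign ids from the returned range without colliding with ids handed out to the Broadleaf application.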

how strong consistency and eventual consistency work in datastore

I'm no expert in databases, so what I know about queries is that they are the way to read from or write to a database.
With eventual consistency, a read may return stale data.
With a write, the first data node is updated, but the other nodes need some time to be updated.
With strong consistency, a read is blocked until the data has been brought up to its latest version (I'm really not sure about what I said here, so help me out if I got it wrong).
With a write, all read operations are blocked until the data node has been brought up to its latest version.
So if I write data as eventual and try an ancestor query to get that data, will I get the latest version?
If I use an ancestor query to update, will all eventually consistent read operations get the latest version?
Update
I think transactions are there so that if there are multiple modification requests to the same data, one will succeed and the others will fail. After that, the modified data will take some time to be replicated to all datacenters, so a transaction succeeding does not mean all read queries will return the latest version (correct me if I'm wrong).
If you use what you call an "ancestor query", you're working in a transaction: either the transaction terminates successfully, in which case all subsequent reads will get the values as updated by the transaction, or else the transaction fails, in which case none of the changes made by the transaction will be seen (this all-or-nothing property is often referred to as a transaction being "atomic"). In particular, you do get strong consistency this way, not just eventual consistency.
The cost can be large, in terms of performance and scalability. In particular, an application should not update an entity group (any and all entities descending from a common ancestor) more than once a second, which can be a very constraining limit for a highly scalable application.
The online docs include a large variety of tips, tricks and advice on how to deal with this -- you could start at https://cloud.google.com/datastore/docs/articles/balancing-strong-and-eventual-consistency-with-google-cloud-datastore/ and continue with the "additional resources" this article lists at the end.
One simple idea that often suffices is that (differently from queries) getting a specific entity from its key is strongly consistent without needing transactions, and memcache is also strongly consistent; writing a modified entity gives you its new key, so you can stash that key into memcache and have other parts of your code fetch the modified entity from that key, rather than relying on queries. This has limits, of course, because memcache doesn't give you unbounded space -- but it's a useful idea to keep in mind, nevertheless, in many practical cases.
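As an illustration, a minimal sketch of that pattern with the GAE Java APIs; the "Order" kind and the memcache key are arbitrary names chosen for the example:
import com.google.appengine.api.datastore.*;
import com.google.appengine.api.memcache.MemcacheService;
import com.google.appengine.api.memcache.MemcacheServiceFactory;

public class LatestOrderLookup {

    private final DatastoreService datastore = DatastoreServiceFactory.getDatastoreService();
    private final MemcacheService memcache = MemcacheServiceFactory.getMemcacheService();

    public Key saveOrder() {
        Entity order = new Entity("Order");
        order.setProperty("status", "NEW");
        Key key = datastore.put(order); // the write returns the entity's key
        memcache.put("latest-order-key", KeyFactory.keyToString(key));
        return key;
    }

    public Entity readLatestOrder() throws EntityNotFoundException {
        String encoded = (String) memcache.get("latest-order-key");
        return datastore.get(KeyFactory.stringToKey(encoded)); // lookup by key, strongly consistent
    }
}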
With GAE the only way to be consistent is to use a transaction; inside a transaction you can update and then query the last update, but it's slower.
For me, using ancestors just composes the primary key, and that's all.

is google appengine datastore.get(key) consistent?

I've read the consistency page on
https://cloud.google.com/appengine/docs/java/datastore/structuring_for_strong_consistency
Now I know that for queries to be consistent you need to use ancestor queries.
What about a single-key lookup? For example:
Entity e = datastore.get(Key)
Is it eventually consistent or strongly consistent?
Please cite references or links.
Yes, a get with a specific key is always consistent.
The documentation isn't as clear about this as it could be, but a get is not a query: it's a simple lookup in what is basically a key-value store. That will always return the correct data. It is only queries that can be inconsistent, because they must be done against indexes and the index update can lag.
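To make the distinction concrete, a short sketch using the low-level Java API (the "Person" kind and the lastName property are made up):
import com.google.appengine.api.datastore.*;

public class GetVsQuery {

    private final DatastoreService datastore = DatastoreServiceFactory.getDatastoreService();

    Entity lookupByKey(long id) throws EntityNotFoundException {
        // Direct lookup by key: not an index-backed query, so it returns the latest data.
        return datastore.get(KeyFactory.createKey("Person", id));
    }

    Entity queryByLastName(String lastName) {
        // Non-ancestor query: served from indexes, which may lag behind recent writes.
        Query q = new Query("Person")
                .setFilter(new Query.FilterPredicate("lastName", Query.FilterOperator.EQUAL, lastName));
        return datastore.prepare(q).asSingleEntity();
    }
}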
The only reference I can give you is to point out that get is discussed on the Entities, Properties and Keys page whereas data consistency is discussed on the Datastore Queries page.
