We have a stateless EJB which persists some data in an object-oriented database. Unfortunately, our persistence object currently does not have a unique key, for reasons unknown, and altering the PO is also not possible at the moment.
So we decided to synchronize the code: we check whether an object with the same name (which we consider should be unique) is already persisted, and then decide whether to persist or not.
Later we realized that the code is deployed on a cluster of three JBoss instances.
Can anyone suggest an approach that prevents persisting objects with the same name?
If you have a single database behind the JBoss cluster, you can just apply a unique constraint to the column, for example (I am assuming it's an SQL database):
ALTER TABLE your_table ADD CONSTRAINT unique_name UNIQUE (column_name);
Then in the application code you may want to catch the SQL exception and let the user know they need to try again or whatever.
Update:
If you cannot alter the DB schema, you can achieve a similar result by performing a SELECT query before the insert to check for duplicate entries. If you are worried about two inserts happening at the same time, you can look at applying a write lock to the row in question, as sketched below.
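A minimal sketch of that check-then-insert approach with a pessimistic lock, assuming a JPA EntityManager and a hypothetical entity MyEntity with a name attribute (javax.persistence APIs, as typically used with JBoss):

import java.util.List;
import javax.persistence.EntityManager;
import javax.persistence.LockModeType;

public class UniqueNameService {

    private final EntityManager em;

    public UniqueNameService(EntityManager em) {
        this.em = em;
    }

    // Must run inside a transaction so the lock is held until commit.
    public void persistIfNameIsFree(MyEntity candidate) {
        // Lock any existing row(s) with this name so concurrent transactions
        // on other nodes wait here instead of racing past the check.
        List<MyEntity> existing = em
            .createQuery("SELECT e FROM MyEntity e WHERE e.name = :name", MyEntity.class)
            .setParameter("name", candidate.getName())
            .setLockMode(LockModeType.PESSIMISTIC_WRITE)
            .getResultList();

        if (existing.isEmpty()) {
            // Caveat: if no row exists yet there is nothing to lock, so two
            // simultaneous first inserts can still race; only a database-side
            // unique constraint closes that gap completely.
            em.persist(candidate);
        }
    }
}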
Related
I am using a Cassandra database integrated into a Spring Boot application.
My question is around the schema actions. If I need to make structural changes to the DB, say add a column to a table, the database needs to be recreated; however, this means all the existing data gets deleted:
schema-action: CREATE_IF_NOT_EXISTS
The only way I have managed to solve this is by using the RECREATE schema action, but as mentioned earlier, this results in data loss.
What would be the best approach to handle this? How can I make structural changes, such as adding a column, without having to recreate the database and lose all existing data?
Thanks
Cassandra does allow you to modify the schema of an existing table without recreating it from scratch, using the ALTER TABLE statement via cqlsh. However, as explained in that link, there are some important limitations on the kinds of changes you can make: you cannot modify the primary key of the table at all, you can add or drop regular columns, but you can't change the type of a column to an incompatible one.
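Since you are in a Java application rather than cqlsh, the same statement can simply be executed from code. A minimal sketch, assuming the DataStax Java driver 4.x and a hypothetical users table in keyspace my_ks:

import com.datastax.oss.driver.api.core.CqlSession;

public class AddColumnExample {
    public static void main(String[] args) {
        // Connects using the driver's default configuration (contact points, etc.)
        try (CqlSession session = CqlSession.builder().withKeyspace("my_ks").build()) {
            // Adds a regular column; existing rows simply have no value for it.
            session.execute("ALTER TABLE users ADD email text");
        }
    }
}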
The reason for most of these limitations is how Cassandra needs to deal with the old data that already exists in the table. For example, it doesn't make sense to say that a column A that until now contained strings will now contain integers: how are we supposed to handle all the old values in column A which weren't integers?
As Aaron rightly said in a comment, it is unlikely you'll want to do these schema changes as part of your application. These are usually rare operations which are done manually, or via some management application - not your usual application.
I am using Hibernate Envers to keep a history of my data, and it's working fine. The problem is that it creates duplicate data in the history table, i.e. it writes rows to the history table whether or not there is any change in the audited table. I want only changed fields stored in my history table. I am new to Hibernate Envers. What can I do?
If I understand your question correctly, Envers doesn't work that way, at least not out of the box.
Envers is a commit-snapshot auditing solution: just before commit, it examines the audited entity state, determines whether any attributes have been modified, and records a snapshot of all audited fields of that entity at that point in time. This means that the only time an audit entry isn't created is when no attributes have been modified.
But it also uses the snapshot approach because it fits really well with the Query API.
Consider the inefficiency that would occur if a query to find an entity at a given revision had to read all rows from that revision back to the beginning of time, iterating each row and merging the captured column state just to instantiate a single-row result set.
With the snapshot approach, it boils down to the following query, with no loops or iterative work:
SELECT e FROM AuditedEntity e WHERE e.revisionNumber = :revisionNumber
This is far more efficient from an I/O perspective, both for the database reading the data pages and for the network, which streams a single-row result set rather than a multi-row result set to the client.
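As an illustration of that Query API, a minimal sketch assuming an injected EntityManager and the hypothetical AuditedEntity from the query above:

import javax.persistence.EntityManager;
import org.hibernate.envers.AuditReader;
import org.hibernate.envers.AuditReaderFactory;

public class RevisionLookup {

    // Loads the snapshot of a single entity as it looked at the given revision.
    public AuditedEntity findAtRevision(EntityManager em, Long id, Number revision) {
        AuditReader reader = AuditReaderFactory.get(em);
        // Internally this resolves to a single-row lookup against the audit table,
        // rather than replaying every change since the beginning of time.
        return reader.find(AuditedEntity.class, id, revision);
    }
}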
I'd say in this case, the saying "space is cheap" really holds true when you compare that against the cost and inefficiencies your application would face doing it any other way.
If this is something you'd like Envers to support, perhaps via some user-configured strategy, then you're welcome to log a new feature request in JIRA for hibernate-envers and I can take a look at its feasibility.
I had a similar problem.
In my case the error was that the audited field had a higher precision than the database field. Please see my reply to another thread: https://stackoverflow.com/a/65844949/13381019
I am using the Broadleaf demo application, which has Hibernate configured with EhCache. I also have an external application which interacts with the same DB directly.
When I update the DB using the external application, my Broadleaf application, unaware of those changes, throws a duplicate primary key error while creating new entities. I am trying to resolve this issue by clearing out the Hibernate cache periodically, which lets Hibernate rebuild the cache from scratch so everything syncs up.
I am using following code to clear out the second level cache.
Cache cache = sessionFactory.getCache();
String entityName = "someName";
cache.evictEntityRegion(entityName);
But, this doesn't seem to work.
I even tried to clear the cache manually using JMX tools like VisualVM, but this also doesn't work. I am still getting old primary key values in my APIs. Is this because only the second-level cache is being cleared, leaving the first-level cache? I am stuck here. Can anyone please help with this issue?
UPDATED :
Let's say I have applications A and B. A uses Broadleaf and B uses raw SQL queries to insert into the DB. I create a few orders using application A, then insert a few orders directly into the DB using application B, and I also update the SEQUENCE_GENERATOR table with max(order_id) + 1. Afterwards, when I try to create an order using application A, it throws a duplicate primary key exception. I tried to debug the issue and found that IdOverrideTableGenerator is still giving me the old primary key. This made me curious about the second-level cache. Doesn't Broadleaf use SEQUENCE_GENERATOR as the starting reference for primary key generation and maintain the current state in a cache? In my case even updating SEQUENCE_GENERATOR doesn't ensure a fresh, unique primary key.
You're correct in that you need L2 cache invalidation for your external imports if you want your implementation to recognize your new entities at runtime. Otherwise, you would have to wait for the configured TTL on your cache region to expire for your application to see the new records.
However, L2 cache doesn't have any direct correlation to how Hibernate determines primary keys in the case of Broadleaf. Broadleaf utilizes a table generator strategy for grabbing a batch of ids in a performant and cluster-safe way. You probably notice a table entitled SEQUENCE_GENERATOR in your schema. This table contains various id ranges that have been acquired for different domain classes. Whenever Hibernate needs to grab a new batch of ids for insertions, it will interact with this table to register a new range of ids to check out. This should guarantee that no node in the cluster will try to insert an entity with a colliding id.
In your case, you need to guarantee that an external process can perform insertions in a non-colliding manner. To do so, I believe you need to create an API for the external process to call that will perform this same "id checkout" operation on behalf of that calling process. Then, your import code (presumably housed elsewhere) will have a range of ids it can safely use. The code backing the API you create should perform the same operation that Hibernate would normally perform to acquire a batch of ids for entity insertions. You can review org.hibernate.id.enhanced.TableGenerator for an example of what this looks like and create something similar for your own purposes.
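A rough sketch of what such an "id checkout" endpoint could do behind the scenes, assuming plain JDBC and a SEQUENCE_GENERATOR table with ID_NAME/ID_VAL columns and SELECT ... FOR UPDATE support (verify the exact table and column names against your Broadleaf schema):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class IdRangeService {

    // Reserves a contiguous block of ids for an external process, mirroring what a
    // table generator does: read the current value under a row lock, bump it by the
    // block size, and hand the reserved range back to the caller.
    public long[] checkOutRange(Connection conn, String idName, int blockSize) throws SQLException {
        conn.setAutoCommit(false);
        try (PreparedStatement select = conn.prepareStatement(
                 "SELECT ID_VAL FROM SEQUENCE_GENERATOR WHERE ID_NAME = ? FOR UPDATE");
             PreparedStatement update = conn.prepareStatement(
                 "UPDATE SEQUENCE_GENERATOR SET ID_VAL = ? WHERE ID_NAME = ?")) {

            select.setString(1, idName);
            long current;
            try (ResultSet rs = select.executeQuery()) {
                if (!rs.next()) {
                    throw new SQLException("No generator row for " + idName);
                }
                current = rs.getLong(1);
            }

            long next = current + blockSize;
            update.setLong(1, next);
            update.setString(2, idName);
            update.executeUpdate();
            conn.commit();

            // Ids (current, next] are now reserved exclusively for the caller.
            return new long[] { current + 1, next };
        } catch (SQLException e) {
            conn.rollback();
            throw e;
        }
    }
}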
I am building a web application which inserts records into a database. I validate records before inserting them. If, between the time of my validation check and the insertion, another application puts the DB into a state such that a unique key constraint violation occurs when I attempt to insert the records I have just validated, how can I avoid this kind of problem? I am using an Oracle database and my development language is Java.
Basically, you can't, unless you change your constraints. You have several options:
You keep the unique constraint and deal with the database exception in your Java code (see the sketch after this list). Race conditions can happen and you have to deal with them.
You lock the entire table as soon as someone enters "insertion mode" in your app, effectively limiting inserts to one at a time. This would mean blocking other users in your application from entering edit mode until the first one is done. Probably not a good idea, but can work when you have very few users.
Remove the constraint. Now this might seem difficult, but think about it: do you really need globally unique entries in some fields? Or can you work around that by including an additional column in your key? This could be an artificial counter, effectively making each row unique again, or maybe just the UserID, so that the unique constraint is only checked within each user.
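For the first option, a minimal sketch of handling the violation with plain JDBC; the table and column names are just placeholders, and depending on the JDBC driver you may need to inspect the vendor error code (ORA-00001 on Oracle) instead of relying on the exception subclass:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.SQLIntegrityConstraintViolationException;

public class InsertWithRetryHint {

    // Attempts the insert and reports whether it hit the unique constraint,
    // so the caller can tell the user to pick a different value and retry.
    public boolean insertRecord(Connection conn, String name) throws SQLException {
        String sql = "INSERT INTO your_table (column_name) VALUES (?)";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, name);
            ps.executeUpdate();
            return true;
        } catch (SQLIntegrityConstraintViolationException e) {
            // Another session inserted the same value between our validation and this insert.
            return false;
        }
    }
}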
I need a sample program in Java for keeping the history of a table when a user inserts, updates, or deletes rows in that table. Can anybody help with this?
Thanks in advance.
If you are working with Hibernate you can use Envers to solve this problem.
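A minimal sketch of wiring that up, assuming the hibernate-envers dependency is on the classpath and a hypothetical User entity:

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import org.hibernate.envers.Audited;

// Marking the entity @Audited is enough for Envers to maintain a corresponding
// _AUD history table plus a revision entry for every insert, update and delete.
@Entity
@Audited
public class User {

    @Id
    @GeneratedValue
    private Long id;

    private String name;

    private String email;

    // getters and setters omitted for brevity
}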
You have two options for this:
Let the database handle this automatically using triggers. I don't know what database you're using but all of them support triggers that you can use for this.
Write code in your program that does something similar when inserting, updating and deleting a user.
Personally, I prefer the first option. It probably requires less maintenance: there may be multiple places where you update a user, and all of those places would need the code to update the other table. Besides, in the database you have more options for specifying required values and integrity constraints.
Well, we normally have our own history tables which (mostly) look like the original table. Since most of our tables already have the creation date, modification date and the respective users, all we need to do is copy the dataset from the live table to the history table with a creation date of now().
We're using Hibernate, so this could be done in an interceptor, but there may be other options as well, e.g. some database trigger executing a script, etc. A rough sketch of such an interceptor follows.
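This sketch assumes Hibernate 5.x's Interceptor API; the copyToHistory helper is a hypothetical placeholder for the insert into your history table:

import java.io.Serializable;
import org.hibernate.EmptyInterceptor;
import org.hibernate.type.Type;

public class HistoryInterceptor extends EmptyInterceptor {

    @Override
    public boolean onSave(Object entity, Serializable id, Object[] state,
                          String[] propertyNames, Type[] types) {
        copyToHistory("INSERT", entity, id);
        return false; // the entity state itself is not modified
    }

    @Override
    public boolean onFlushDirty(Object entity, Serializable id, Object[] currentState,
                                Object[] previousState, String[] propertyNames, Type[] types) {
        copyToHistory("UPDATE", entity, id);
        return false;
    }

    @Override
    public void onDelete(Object entity, Serializable id, Object[] state,
                         String[] propertyNames, Type[] types) {
        copyToHistory("DELETE", entity, id);
    }

    // Hypothetical helper: copy the current entity state into the history table
    // together with the action and a now() timestamp.
    private void copyToHistory(String action, Object entity, Serializable id) {
        // e.g. issue an INSERT into the matching history table here
    }
}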
How is this a Java question?
This should be moved to the Database section.
You need to create a history table. Then create database triggers on the original table, along the lines of "create or replace trigger ... before insert or update or delete on <table> for each row ...".
I think this can be achieved by creating a trigger in the SQL server.
You can create the trigger as follows:
Syntax:
CREATE TRIGGER trigger_name
{BEFORE | AFTER} {INSERT | UPDATE | DELETE} ON table_name
FOR EACH ROW
  triggered_statement
You'll have to create two triggers, one for before the operation is performed and another for after the operation is performed.
Otherwise it can be achieved through code as well, but it would be a bit tedious for the code to handle in the case of batch processes.
You should try using triggers. You can have a separate table (an exact replica of the table whose history you need to maintain).
This table will then be updated by trigger after every insert/update/delete on your main table.
Then you can write your Java code to read these changes from the second, history table; a small JDBC sketch is below.
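A minimal JDBC sketch, assuming a hypothetical history table named MAIN_TABLE_HISTORY with ACTION, ID and CHANGED_AT columns populated by the trigger:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Timestamp;

public class HistoryReader {

    // Prints every change recorded since the given point in time.
    public void printChangesSince(Connection conn, Timestamp since) throws SQLException {
        String sql = "SELECT ACTION, ID, CHANGED_AT FROM MAIN_TABLE_HISTORY "
                   + "WHERE CHANGED_AT >= ? ORDER BY CHANGED_AT";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setTimestamp(1, since);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString("ACTION") + " on id "
                        + rs.getLong("ID") + " at " + rs.getTimestamp("CHANGED_AT"));
                }
            }
        }
    }
}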
I think you can use the redo log of your underlying database to keep track of the operations performed. Is there any particular reason to do this in the program?
You could try creating, say, a List of the objects from the table (assuming you have objects for the data), which will allow you to loop through the list and compare it to the current data in the table. You will then be able to see whether any changes occurred.
You can even create another list of objects, each containing an enum that gives you the action (DELETE, UPDATE, CREATE) along with the new data.
Haven't done this before, just an idea.
Like #Ashish mentioned, triggers can be used to insert into a separate table; this is commonly referred to as an audit trail or audit log table.
The columns generally defined in such an audit trail table are: Action (insert, update, delete), table name (the table into which the row was inserted/deleted/updated), key (the primary key of that table, where needed), and timestamp (the time at which this action was done).
It is better to write the audit log after the entire transaction is through. If not, then when an exception is passed back to the code, a separate call to update the audit tables will be needed. Hope this helps.
If you are talking about DB tables, you may either use triggers in the DB or add some extra code within your application, probably using aspects. If you are using JPA, you may use entity listeners, or perform some extra logic by adding an aspect to your DAO objects and applying that aspect to all DAOs which perform CRUD on entities that need to retain historical data. If your DAO object is a stateless bean, you may use an Interceptor to achieve that; otherwise use Java proxy functionality, cglib, or another library that provides aspect support. If you are using Spring instead of EJB, you may advise your DAOs within the application context config file. A minimal entity listener sketch follows.
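This sketch uses standard JPA lifecycle callbacks; the record method is just a placeholder for whatever writes to your history table. Attach it to the entities you care about with @EntityListeners(HistoryListener.class):

import javax.persistence.PostPersist;
import javax.persistence.PostRemove;
import javax.persistence.PostUpdate;

public class HistoryListener {

    @PostPersist
    public void afterInsert(Object entity) {
        record("INSERT", entity);
    }

    @PostUpdate
    public void afterUpdate(Object entity) {
        record("UPDATE", entity);
    }

    @PostRemove
    public void afterDelete(Object entity) {
        record("DELETE", entity);
    }

    // Placeholder: replace with an insert into your history table
    // (e.g. via a DAO obtained from the container).
    private void record(String action, Object entity) {
        System.out.println(action + " " + entity);
    }
}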
I would not suggest triggers; in my case I stored my audit data in a file and didn't use the database for it. My suggestion is to create an "AUDIT" table and write Java code (with the help of servlets) to store the data in a file, in the DB, or even in another DB.