I am trying to insert a list of rows (questions) into a table (let's say 'Question_Table').
The whole process is performed in a single transaction (i.e. either all questions are inserted or none). I am using Spring's declarative transaction management.
I have customized the ID generation for Question_Table (ref: Custom id generation).
It works for the first question, but it won't work for the second one, because the first row is uncommitted and the count query still sees an empty table. I am not able to inject the DAO class into the id generator, since the generator is not a Spring-managed bean (otherwise I could add a method in the DAO class that reads uncommitted records).
What is the best approach to use in this situation?
Generator class
import java.io.Serializable;
import java.util.Properties;

import org.hibernate.HibernateException;
import org.hibernate.MappingException;
import org.hibernate.engine.spi.SessionImplementor;
import org.hibernate.id.Configurable;
import org.hibernate.id.IdentifierGenerator;
import org.hibernate.service.ServiceRegistry;
import org.hibernate.type.Type;

public class IdGenerator implements IdentifierGenerator, Configurable {

    private String prefix = "";
    private String queryKey = "";

    @Override
    public Serializable generate(SessionImplementor sessionImpl, Object arg1) throws HibernateException {
        // counts the rows currently visible to this session; rows inserted
        // earlier in the same (uncommitted) transaction are not reflected
        long count = (long) sessionImpl.getNamedQuery(queryKey).list().get(0);
        System.out.println("COUNT >>> " + count);
        long id = count + 1;
        if (id == 4) throw new NullPointerException();
        return prefix + id;
    }

    @Override
    public void configure(Type arg0, Properties arg1, ServiceRegistry arg2) throws MappingException {
        prefix = arg1.getProperty("PREFIX");
        queryKey = arg1.getProperty("QUERY_KEY");
    }
}
Query : select count(*) from Question_Table
As I stated in the comment, you may be able to use this approach if you don't mind combining a string with a sequence. The downside is that the sequence value keeps increasing even after you delete all records in that table.
If you insist on using a count, then the solution is to set the entity id manually on save, like .save(question, "QSTN_" + (row_count + i)); you will need to pass that row_count along, which I think is not a problem, since it all happens within one request.
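To illustrate that manual approach, here is a minimal sketch; the `buildIds` helper and its parameters are hypothetical, only the `QSTN_` prefix and the `row_count + i` arithmetic come from the discussion above:

```java
import java.util.ArrayList;
import java.util.List;

public class ManualIdDemo {

    // Build the ids for a whole batch from one known row count, so every id
    // is decided up front and no id generator has to re-read the table.
    static List<String> buildIds(String prefix, long rowCount, int batchSize) {
        List<String> ids = new ArrayList<>();
        for (int i = 1; i <= batchSize; i++) {
            ids.add(prefix + (rowCount + i));
        }
        return ids;
    }

    public static void main(String[] args) {
        // e.g. the table already holds 3 rows and we insert 2 questions
        System.out.println(buildIds("QSTN_", 3, 2)); // [QSTN_4, QSTN_5]
    }
}
```

Each id in the batch is then passed to the corresponding save call, so the count query is executed only once per request.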
I have no answer to your specific question, but I'd like to share some considerations.
If your id generation depends on the database state, then it must be done at the database level (the implementation is up to you: autoincrement, a custom function, sequences, etc.).
Otherwise, if you do it at the application level, you will necessarily run into concurrent-access problems and have to mitigate them with locks or a dedicated transaction, which has a significant impact on application performance and may become inconsistent later (when adding horizontal scalability or sharding, for example).
However, if you want to generate your ids in an application layer (which can be a very good idea), then you need a unique, distributed system dedicated to this task, one that is not part of your current unit of work.
#Transactional(isolation = Isolation.READ_COMMITTED)
public AccountDto saveAccount(AccountDto accountDto) {
Long accountTypeId = accountDto.getAccountTypeId();
AccountTypes accountTypes = accountTypesDao.getById( accountTypeId ).orElseThrow( NotFoundAppException::new );
account.setAccountName( newAccountName );
account.setAccountType( accountTypes );
...
accountDao.save( account );
accountDao.flush();
// new inserted account id is in the transaction now
return createAccountDtoFrom( account );
}
I have data for candidate "likes", which I'd like to push to the client every time the "like" count changes. I think this is achievable using Spring WebFlux? But I can't find any example of this. Most Flux examples are based on a fixed interval (e.g. every second). That might be wasteful, because there aren't that many transactions, and a candidate might not get any likes for many minutes.
I just want to create a dashboard that subscribes to "likes" changes and gets updated whenever a certain candidate's "likes" count changes.
What is the way to do this?
This is what I did, and it works, but it is based on an interval (5 seconds), not on data changes.
public Flux<Candidate> subscribeItemChange(String id) {
return Flux.interval(Duration.ofSeconds(5)).map(t -> candidateService.getCandidateDetail(id));
}
candidateService.getCandidateDetail basically queries the database for a certain id, so this is polling rather than "update on change".
I think I must add something to candidateService.updateLikes() below, but what?
public class CandidateService {
public Candidate getCandidateDetail(String id) {
// query candidate from database
// select * from candidates where id = :id
// and return it
}
public void updateLikes(String id, int likesCount) {
// update candidates set likes_count = :likesCount where id = :id
// ...
// I think I need to write something here, but what?
}
}
You could make use of a dynamic sink, adding a field similar to:
private Sinks.Many<Candidate> likesSink = Sinks.many().multicast().onBackpressureBuffer();
...then you can:
Use likesSink.tryEmitNext in your updateLikes() method to publish to the sink whenever likes are updated for a candidate;
Implement your subscribeItemChange() method using likesSink.asFlux(), which can then be filtered, if necessary, to return only the stream of "like updates" for a particular candidate.
Based on @Michael Berry's guide.
public void updateLikes(String id, int likesCount) {
Candidate c = getCandidateDetail(id);
c.setLikesCount(likesCount);
CandidateDummyDatasource.likesSink.tryEmitNext(c);
}
On subscriber
public Flux<Candidate> subscribeItemChange(String id) {
return CandidateDummyDatasource.likesSink.asFlux()
.filter(c -> c.getId().equals(id))
.map(data -> candidateService.getCandidateDetail(id));
}
The situation:
I have a clearing table with many thousands of records. They are split into packages of e.g. 500 records. Each package is then sent to the AS via message-driven beans. The AS calculates a key based on the contents (e.g. currency, validStart, validEnd) of each record and needs to store this key in the database (together with the combination of contents).
The request:
To avoid duplicates, I want a centralized "tool" that calculates and stores the keys, reducing communication with the database by caching those keys together with the records.
I tried using a local Infinispan cache, accessed through a utility-class implementation, in each package-processing thread. The result was that multiple packages calculated the same key, so duplicates were inserted into the database. Sometimes I also got deadlocks.
I tried to implement a "lock" via a static variable to block access to the cache during a database insert, but without success.
The next attempt was to use a replicated or distributed Infinispan cache. This did not change the AS behavior.
My last idea would be to implement a bean-managed singleton session bean, to acquire a transaction lock while inserting into the database.
The AS currently runs in standalone mode, but will be moved to a cluster in the near future, so a high-availability solution is preferred.
Summing up:
What's the correct way to lock Infinispan cache access during creation of (Key, Value) pairs to avoid duplicates?
Update:
@cruftex: My request is: I have a set of (key, value) pairs that shall be cached. When a new record is to be inserted, an algorithm is applied to it and the key is calculated. Then the cache shall be checked for whether the key already exists; if so, the value is appended to the new record. But if the value does not exist, it shall be created and stored in the database.
The cache needs to be realized using Infinispan, because the AS shall run in a cluster. The algorithm for creating the keys exists, and so does the code for inserting the value into the database (via JDBC or entities). But I have the problem that, with message-driven beans (and thus multithreading in the AS), the same (key, value) pair is calculated in different threads, and each thread then tries to insert the values into the database (which I want to avoid!).
@Dave:
public class Cache {
private static final Logger log = Logger.getLogger(Cache.class);
private final Cache<Key, FullValueViewer> fullCache;
private HomeCache homes; // wraps EntityManager
private final Session session;
public Cache(Session session, EmbeddedCacheManager cacheContainer, HomeCache homes) {
this.session = session;
this.homes = homes;
fullCache = cacheContainer.getCache(Const.CACHE_CONDCOMBI);
}
public Long getId(FullValueViewer viewerWithoutId) {
Long result = null;
final Key key = new Key(viewerWithoutId);
FullValueViewer view = fullCache.get(key);
if(view == null) {
view = checkDatabase(viewerWithoutId);
if(view != null) {
fullCache.put(key, view);
}
}
if(view == null) {
view = createValue(viewerWithoutId);
// 1. Try
fullCache.put(key, view);
// 2. Try
// if(!fullCache.containsKey(key)) {
// fullCache.put(key, view);
// } else {
// try {
// homes.condCombi().remove(view.idnr);
// } catch (Exception e) {
// log.error("remove", e);
// }
// }
// 3. Try
// synchronized(fullCache) {
// view = createValue(viewerWithoutId);
// fullCache.put(key, view);
// }
}
result = view.idnr;
return result;
}
private FullValueViewer checkDatabase(FullValueViewer newView) {
FullValueViewer result = null;
try {
CondCombiBean bean = homes.condCombi().findByTypeAndKeys(_parameters_);
result = bean.getAsView();
} catch (FinderException e) {
// not found in the database: fall through and return null
}
return result;
}
private FullValueViewer createValue(FullValueViewer newView) {
FullValueViewer result = null;
try {
CondCombiBean bean = homes.condCombi().create(session.subpk);
bean.setFromView(newView);
result = bean.getAsView();
} catch (Exception e) {
log.error("createValue", e);
}
return result;
}
private class Key {
private final FullValueViewer view;
public Key(FullValueViewer v) {
this.view = v;
}
@Override
public int hashCode() {
_omitted_
}
@Override
public boolean equals(Object obj) {
_omitted_
}
}
}
The cache configurations I tried with WildFly:
<cache-container name="server" default-cache="default" module="org.wildfly.clustering.server">
<local-cache name="default">
<transaction mode="BATCH"/>
</local-cache>
</cache-container>
<cache-container name="server" default-cache="default" module="org.wildfly.clustering.server">
<transport lock-timeout="60000"/>
<distributed-cache name="default" mode="ASYNC"/>
</cache-container>
I'll answer only the summary question:
You can't lock the whole cache; that wouldn't scale. The best way would be to use the cache.putIfAbsent(key, value) operation and generate a different key if the entry is already there (or use a list as the value and replace it using the conditional cache.replace(key, oldValue, newValue)).
If you really want to prohibit writes to some key, you can use a transactional cache with the pessimistic locking strategy and issue cache.getAdvancedCache().lock(key). Note that there is no unlock: all locks are released when the transaction is committed or rolled back through the transaction manager.
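The putIfAbsent idea can be sketched with a plain ConcurrentMap (Infinispan's Cache interface extends ConcurrentMap, so the same calls apply to it; the key format and value type here are stand-ins, not your real schema):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class PutIfAbsentDemo {
    public static void main(String[] args) {
        ConcurrentMap<String, Long> cache = new ConcurrentHashMap<>();

        // First writer wins: putIfAbsent returns null only for the caller
        // that actually stored the value, so only that caller goes on to
        // insert the record into the database.
        Long prev1 = cache.putIfAbsent("EUR|2021-01|2021-12", 1L);
        Long prev2 = cache.putIfAbsent("EUR|2021-01|2021-12", 2L);

        System.out.println(prev1); // null -> this caller created the entry
        System.out.println(prev2); // 1    -> entry existed, skip the insert
        System.out.println(cache.get("EUR|2021-01|2021-12")); // 1
    }
}
```

The database insert is then guarded by the return value: a null result means "I created the entry, I do the insert"; anything else means another thread already did.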
You cannot generate your own key and use it to detect duplicates at the same time.
Either each data row is guaranteed to arrive only once, or it needs to carry a unique identifier from the external system that generates it.
If there is a unique identifier in the data (which, if all else fails and no id is present, is simply all properties concatenated), then you need to use it to check for duplicates.
Now you can either use that unique identifier directly, or generate your own internal identifier. If you do the latter, you need a translation from the external id to the internal id.
If duplicates arrive, you need to lock on the external id while you generate the internal id, and then record which internal id you assigned.
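A minimal sketch of that external-to-internal translation, using ConcurrentMap.computeIfAbsent (which serializes callers per key) as a stand-in for the cache; the AtomicLong id source and all names are illustrative assumptions:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicLong;

public class IdTranslationDemo {
    private final ConcurrentMap<String, Long> externalToInternal = new ConcurrentHashMap<>();
    private final AtomicLong sequence = new AtomicLong();

    // computeIfAbsent runs the mapping function at most once per key, so
    // duplicates of the same external id always map to one internal id.
    long internalIdFor(String externalId) {
        return externalToInternal.computeIfAbsent(externalId, k -> sequence.incrementAndGet());
    }

    public static void main(String[] args) {
        IdTranslationDemo demo = new IdTranslationDemo();
        long a = demo.internalIdFor("row-A");
        long b = demo.internalIdFor("row-B");
        long aAgain = demo.internalIdFor("row-A"); // the duplicate arrives again
        System.out.println(a == aAgain); // true: same internal id assigned
        System.out.println(a != b);      // true: distinct rows get distinct ids
    }
}
```

In a cluster the map would be the distributed cache and the sequence would come from a CAS-maintained counter like the one in the next snippet, but the locking-per-external-id shape stays the same.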
To generate a unique sequence of long values in a cluster, you can use the CAS operations of the cache. For example, something like this:
@NotThreadSafe
class KeyGeneratorForOneThread {
final String KEY = "keySequenceForXyRecords";
final int INTERVAL = 100;
Cache<String,Long> cache = ...;
long nextKey = 0;
long upperBound = -1;
void requestNewInterval() {
do {
nextKey = cache.get(KEY);
upperBound = nextKey + INTERVAL;
} while (!cache.replace(KEY, nextKey, upperBound));
}
long generateKey() {
if (nextKey >= upperBound) {
requestNewInterval();
}
return nextKey++;
}
}
Every thread has its own key generator and can hand out 100 keys without needing any coordination.
You may need separate caches for:
locking by external id
lookup from external to internal id
the sequence number (note that this is actually not a cache, since it must still know the last number after a restart)
internal id to data
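The CAS-based interval allocation shown above can be simulated with a plain ConcurrentMap in place of the Infinispan cache (same replace loop; the key name and interval size are taken from the snippet, the seeding in main is an added assumption):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class IntervalDemo {
    static final String KEY = "keySequenceForXyRecords";
    static final int INTERVAL = 100;

    final ConcurrentMap<String, Long> cache;
    long nextKey = 0;
    long upperBound = -1; // forces an interval request on first use

    IntervalDemo(ConcurrentMap<String, Long> cache) {
        this.cache = cache;
    }

    // CAS loop: retry until our replace wins against concurrent claimers.
    void requestNewInterval() {
        do {
            nextKey = cache.get(KEY);
            upperBound = nextKey + INTERVAL;
        } while (!cache.replace(KEY, nextKey, upperBound));
    }

    long generateKey() {
        if (nextKey >= upperBound) {
            requestNewInterval();
        }
        return nextKey++;
    }

    public static void main(String[] args) {
        ConcurrentMap<String, Long> shared = new ConcurrentHashMap<>();
        shared.put(KEY, 0L); // the sequence entry must be seeded once

        IntervalDemo g1 = new IntervalDemo(shared);
        IntervalDemo g2 = new IntervalDemo(shared);

        // Each generator claims its own block of 100 keys, so the
        // ranges never overlap even without further coordination.
        System.out.println(g1.generateKey()); // 0
        System.out.println(g2.generateKey()); // 100
        System.out.println(g1.generateKey()); // 1
    }
}
```

Because each generator only touches the shared entry once per 100 keys, contention on the cache stays low even with many worker threads.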
We found a solution that works in our case and might be helpful for somebody else out there:
We have two main components, a cache-class and a singleton bean.
The cache contains a copy of all records currently present in the database and a lot of logic.
The singleton bean has access to the infinispan-cache and is used for creating new records.
Initially the cache fetches a copy of the Infinispan cache from the singleton bean. Then, when we look up a record in the cache, we first apply a kind of hash method that calculates a unique key for the record. Using this key we can tell whether the record needs to be added to the database.
If so, the cache calls the singleton bean's create method, which carries a @Lock(WRITE) annotation. The create method first checks whether the value is already contained in the Infinispan cache and, if not, creates a new record.
With this approach we can guarantee that, even if the cache is used by multiple threads and each thread requests creation of the same record, the create process is serialized, and subsequent requests do not proceed because the value was already created by a previous request.
A simple question here:
If I've got an object with both initialized and uninitialized values in it, is there an easy way to find all the entities in my db that match it with Hibernate (without listing and checking every field of the object)?
Example :
I got this class :
public class User {
private int id;
private String name;
private String email;
private boolean activ;
}
I would like to be able to do this:
User user1 = new User();
user1.setActive(true);
User user2 = new User();
user2.setActive(true);
user2.setName("petter");
listUser1 = findAllUser(user1);
listUser2 = findAllUser(user2);
Here listUser1 will contain all the active users and listUser2 will contain all the active users that are named petter.
Thx guys !
Edit/Solution
So here is my code (I used a class which is similar to the one in my example).
It works just fine, but the problem is that, according to Eclipse, "The method createCriteria(Class) from the type SharedSessionContract is deprecated"...
public static List<Personne> findAllPersonne(Personne personne) {
List<Personne> listPersonne;
EntityManagerFactory entityManagerFactory = Persistence.createEntityManagerFactory("testhibernate0");
EntityManager entityManager = entityManagerFactory.createEntityManager();
Session session = entityManager.unwrap(Session.class);
Example personneExample = Example.create(personne);
Criteria criteria = session.createCriteria(Personne.class).add(personneExample);
listPersonne = criteria.list();
entityManager.close();
return listPersonne;
}
So, how could I do that in a better way? I've looked into CriteriaQuery, but I can't find how to use it with an example.
Yes, it exists: the keyword to google is "query by example" or "QBE".
https://dzone.com/articles/hibernate-query-example-qbe
In general, if an entity instance is already in your Persistence context, you can find it by primary key with EntityManager.find. Otherwise, you can pick up a result from your database by way of JPQL or native querying.
For your particular use case, it sounds like a querying solution would be the best fit; use one of the linked query creation methods from your entity, then use the Query.getResultList() method to pick up a list of objects that match the query criteria.
Query by Example is also a good and valid solution, as Mr_Thorynque indicates, but as the article he linked mentions, that functionality is specific to certain JPA providers (Hibernate among them) and is not JPA-provider agnostic.
I have a legacy database and I'm developing a Spring MVC application with JPA/Hibernate. My problem comes with the generation of the composite primary keys. An example of primary key is composed like this:
Serial, Year, OrderID, LineId
LineId will be generated based on max(LineId) for each tuple of Serial, Year and OrderId.
I've thought about the following ways:
PrePersist listener: the listener would have to access repositories, and maybe even hold references to other entities, in order to get the next id. EDIT: the Hibernate docs say: "A callback method must not invoke EntityManager or Query methods!" https://docs.jboss.org/hibernate/orm/4.0/hem/en-US/html/listeners.html#d0e3013
Custom generator: I haven't found a single example that shows how to access the entity instance to retrieve the properties I need for a proper select.
Service layer: would be just too verbose.
Overriding the Spring Data JPA repository's save() method implementation: in this case we can access the entity instance's properties.
What is the correct way to achieve this? Thanks
What I have often done to support this is to use a domain-driven design technique where I control this at the time I associate an OrderLine with the Order.
public class Order {
private List<OrderLine> lines;
// don't allow the external world to modify the lines collection.
// forces them to use domain driven API exposed below.
public List<OrderLine> getLines() {
return Collections.unmodifiableList( lines );
}
// avoid allowing external sources to set the lines collection
// Hibernate can set this despite the method being private.
private void setLines(List<OrderLine> lines) {
this.lines = lines;
}
public OrderLine addLine(String serial, Integer year) {
final OrderLine line = new OrderLine( this, serial, year );
lines.add( line );
return line;
}
public void removeLine(Integer lineId) {
lines.removeIf( l -> l.getId().getLineId().equals( lineId ) );
}
}
public class OrderLine {
public OrderLine() {
}
OrderLine(Order order, String serial, Integer year) {
this.id = new OrderLineId( order.getLines().size() + 1, serial, year, order.getId() );
}
}
The only code that ever calls the special OrderLine constructor is in Order, and you make sure that you always delegate the addition and removal of OrderLine entities through the aggregate root, Order.
This also implies that you only ever need to expose an Order repository, and that you manipulate the lines associated with an Order only through the Order, never directly.
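A stripped-down, runnable version of the idea (entity fields reduced to the bare minimum, the line id simplified to a plain Integer instead of the composite OrderLineId; class and method names are illustrative):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

class Order {
    private final List<OrderLine> lines = new ArrayList<>();

    // The outside world gets a read-only view and must go
    // through addLine/removeLine to change the collection.
    public List<OrderLine> getLines() {
        return Collections.unmodifiableList(lines);
    }

    // The aggregate root assigns the next line id itself, so the
    // numbering logic lives in exactly one place.
    public OrderLine addLine(String serial, Integer year) {
        OrderLine line = new OrderLine(lines.size() + 1, serial, year);
        lines.add(line);
        return line;
    }

    public void removeLine(Integer lineId) {
        lines.removeIf(l -> l.getLineId().equals(lineId));
    }
}

class OrderLine {
    private final Integer lineId;
    private final String serial;
    private final Integer year;

    OrderLine(Integer lineId, String serial, Integer year) {
        this.lineId = lineId;
        this.serial = serial;
        this.year = year;
    }

    public Integer getLineId() { return lineId; }
}

public class OrderDemo {
    public static void main(String[] args) {
        Order order = new Order();
        System.out.println(order.addLine("S1", 2020).getLineId()); // 1
        System.out.println(order.addLine("S2", 2021).getLineId()); // 2
    }
}
```

Note that `lines.size() + 1` is only safe when lines are never removed from the middle of a persisted order (ids would be reused); the full design would take max(lineId) + 1 instead if removals must keep old ids retired.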
I'm developing an application with a 'facebook like' feature. Every time content published by a user is 'liked', his punctuation (score) is increased. This app will be used by a large number of users around the company, so we are expecting a lot of concurrent updates to the same row.
simplified code
User punctuation table
Punctuation(
userId NVARCHAR2(32),
value NUMBER(10,0)
)/
Java code
public class Punctuation {
    private String userId;
    private int value;

    public Punctuation(final String userId, final int value) {
        this.userId = userId;
        this.value = value;
    }

    public String getUserId() { return userId; }
    public int getValue() { return value; }
}
//simplified code
public final class PunctuationController {

    private PunctuationController() {}

    public static void addPunctuation(final Punctuation punctuation) {
        final Transaction transaction = TransactionFactory.createTransaction();
        Connection conn = null;
        PreparedStatement statement = null;
        try {
            synchronized (punctuation) {
                transaction.begin();
                conn = transaction.getConnection();
                statement = conn.prepareStatement("UPDATE Punctuation SET value = value + ? WHERE userId = ?");
                statement.setInt(1, punctuation.getValue());
                statement.setString(2, punctuation.getUserId());
                statement.executeUpdate();
                transaction.commit();
            }
        } catch (Exception e) {
            transaction.rollback();
        } finally {
            transaction.dispose();
            if (statement != null) {
                try { statement.close(); } catch (SQLException ignore) {}
            }
        }
    }
}
We are afraid of deadlocks during updates. Oracle lets me do the sum in a single query, so I don't have to read the value and issue a second query to write the new one, which is good. Also, reading some other posts here, people suggested creating a synchronized block to lock an object and letting Java handle the synchronization between threads. I chose the punctuation instance the method receives; this way, I imagine, different combinations of user and value allow concurrent access to the method, but an instance with the same values is blocked (do I have to implement equals() on Punctuation?).
Our database is Oracle 10g, Server Weblogic 11g, Java 6 and Linux (I dont know which flavor).
Thank you in advance!
Your synchronization strategy is wrong. synchronized uses the intrinsic lock of the object between parentheses. If you have two Punctuation instances that you might consider equal because they refer to the same userId, Java doesn't care: two objects means two locks, so there is no mutual exclusion.
I really don't see why the code above, without the synchronized block, could generate deadlocks: you're updating a single row in the table. You could have a deadlock if you had two concurrent transactions, one updating user1 then user2, and the other updating user2 then user1. But even then, the database would detect the deadlock and throw an exception for one of the transactions.
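The crossing-order scenario described above, and the standard fix of acquiring locks in one agreed order, can be sketched in plain Java with ReentrantLock standing in for the two row locks (names and the empty work body are illustrative):

```java
import java.util.concurrent.locks.ReentrantLock;

public class LockOrderDemo {
    static final ReentrantLock user1 = new ReentrantLock();
    static final ReentrantLock user2 = new ReentrantLock();

    // Both "transactions" acquire the locks in the same (sorted) order,
    // so a wait-for cycle, and therefore a deadlock, cannot form.
    static void updateBoth(Runnable work) {
        user1.lock();
        try {
            user2.lock();
            try {
                work.run();
            } finally {
                user2.unlock();
            }
        } finally {
            user1.unlock();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread a = new Thread(() -> updateBoth(() -> {}));
        Thread b = new Thread(() -> updateBoth(() -> {}));
        a.start(); b.start();
        a.join(); b.join();
        System.out.println("no deadlock"); // both threads finish
    }
}
```

In the database the same principle applies: if a transaction must update several Punctuation rows, updating them in a fixed order (e.g. sorted by userId) removes the cycle that the answer describes.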
You need to use the optimistic locking pattern. Take a look here for more details: http://docs.jboss.org/jbossas/docs/Server_Configuration_Guide/4/html/The_CMP_Engine-Optimistic_Locking.html
And probably also this, which has more low-level details: http://docs.jboss.org/jbossas/docs/Server_Configuration_Guide/4/html/The_CMP_Engine-Optimistic_Locking.html
After identifying a concurrency issue using optimistic locking, you may prefer to retry; you have full control over what to do.