Hibernate with Multithreading in Swing application

I am facing a problem with Hibernate and multithreading.
I am developing a Swing-based application with a number of POJO classes. The relations among them: a Category has a set of Protocols, a Protocol has a set of Steps, and a Step has a set of Modes. All collections are loaded lazily (fetch = FetchType.LAZY). I maintain a single session for the whole application. After getting the list of all Categories, I need to start some threads that operate on the category list, and this is where I get a LazyInitializationException. The test code is as follows:
final List<Category> cats = protocolDao.getCategoryList();
for (int i = 0; i < 10; i++) {
    new Thread("THREAD_" + i) {
        public void run() {
            try {
                for (Category category : cats) {
                    Set<Protocol> protocols = category.getProtocols();
                    for (Protocol protocol : protocols) {
                        Set<Step> steps = protocol.getStep();
                        for (Step step : steps) {
                            step.getModes();
                        }
                    }
                }
                System.out.println(Thread.currentThread().getName() + " SUCCESS");
            } catch (Exception e) {
                System.out.println("EXCEPTION ON " + Thread.currentThread().getName());
            }
        }
    }.start();
}
The DAO method is as follows:
public List<Category> getCategoryList() throws ProtocolException {
    try {
        Transaction transaction = session.beginTransaction();
        List list = session.createCriteria(Category.class)
                .setResultTransformer(Criteria.DISTINCT_ROOT_ENTITY)
                .addOrder(Order.asc("categoryposition"))
                .list();
        transaction.commit();
        return list;
    } catch (Exception e) {
        throw new ProtocolException(e);
    }
}
When I try to run the above code I get the following exception for some of the threads:
SEVERE: illegal access to loading collection
org.hibernate.LazyInitializationException: illegal access to loading collection
at org.hibernate.collection.AbstractPersistentCollection.initialize(AbstractPersistentCollection.java:363)
at org.hibernate.collection.AbstractPersistentCollection.read(AbstractPersistentCollection.java:108)
at org.hibernate.collection.PersistentSet.toString(PersistentSet.java:332)
at java.lang.String.valueOf(String.java:2826)
at java.lang.StringBuilder.append(StringBuilder.java:115)
at com.mycomp.core.protocol.dao.test.TestLazyLoading$1.run(TestLazyLoading.java:76)
So some of the tasks do not complete. I cannot avoid having multiple threads work on the same category list (it works fine with a single thread); every thread needs to do its own task. The database is too big to avoid lazy loading. Can anyone tell me how I can make multiple threads work with the same category list?

You need to ensure that entities are loaded and used by the same thread. If you have a get-IDs method and a get-by-ID method, or similar:
final int[] catIds = protocolDao.getCategoryIds();
for (int i : catIds) {
    new Thread("THREAD_" + i) {
        public void run() {
            Category category = protocolDao.getCategory(i);
            Set<Protocol> protocols = category.getProtocols();
            for (Protocol protocol : protocols) {
                Set<Step> steps = protocol.getStep();
                for (Step step : steps) {
                    step.getModes();
                }
            }
        }
    }.start();
}
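The question doesn't show protocolDao, so here is a minimal sketch of what those two DAO methods could look like, assuming a shared SessionFactory field, integer IDs, and the same classic Criteria API as the question (sessionFactory, getCategoryIds, and getCategory are illustrative names). The key point: the SessionFactory is thread-safe but a Session is not, so each call opens its own short-lived Session and walks the lazy graph before closing it.
// Sketch only: each call uses its own Session, so entities are fully
// initialized on the thread that will consume them.
public int[] getCategoryIds() throws ProtocolException {
    Session s = sessionFactory.openSession();
    try {
        List<Integer> ids = s.createCriteria(Category.class)
                .setProjection(Projections.id())
                .list();
        int[] result = new int[ids.size()];
        for (int j = 0; j < result.length; j++) {
            result[j] = ids.get(j);
        }
        return result;
    } catch (Exception e) {
        throw new ProtocolException(e);
    } finally {
        s.close();
    }
}

public Category getCategory(int id) throws ProtocolException {
    Session s = sessionFactory.openSession();
    try {
        Category category = (Category) s.get(Category.class, id);
        // Touch the lazy collections while the Session is still open.
        for (Protocol protocol : category.getProtocols()) {
            for (Step step : protocol.getStep()) {
                Hibernate.initialize(step.getModes());
            }
        }
        return category;
    } catch (Exception e) {
        throw new ProtocolException(e);
    } finally {
        s.close();
    }
}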

Modify HashMap when CompletableFuture is finished

I have a multi-module system, where one module handles my database storage. This is the method which saves a document:
public CompletableFuture<?> runTransaction() {
    return CompletableFuture.runAsync(() -> {
        TransactionBody<String> txnBody = () -> {
            MongoCollection<Document> collection = transaction.getDatabase()
                    .getCollection(transaction.getCollection().toString());
            collection.insertOne(session, Document.parse(json));
            return "Completed";
        };
        try {
            session.withTransaction(txnBody);
        } catch (MongoException ex) {
            throw new UncheckedMongoException(ex);
        }
    });
}
The json instance is passed in through the object's constructor. However, since this will be used by several modules, each with its own caching system, I'm trying to figure out how the caller can modify its data structure once this method has completed without errors.
For example
public void createClan(Transaction transaction, int id, int maxPlayers) {
MongoTransaction mongoTransaction = (MongoTransaction) transaction;
Clan clan = new Clan(id, maxPlayers);
String json = gson.toJson(clan);
TransactionExecutor executor = new MongoTransactionExecutor(mongoTransaction, json);
executor.runTransaction(); //Returns the completableFuture instance generated by the method. Modify hashmap here.
}
I've tried reading the docs, but they were a bit confusing; any help is appreciated!
As noted in the comments, two options can be considered.
The first option is to convert the asynchronous call into a synchronous one using CompletableFuture#get, so the code executes in a blocking context:
public void createClan(Transaction transaction, int id, int maxPlayers) {
    MongoTransaction mongoTransaction = (MongoTransaction) transaction;
    Clan clan = new Clan(id, maxPlayers);
    String json = gson.toJson(clan);
    TransactionExecutor executor = new MongoTransactionExecutor(mongoTransaction, json);
    try {
        Object obj = executor.runTransaction().get(); // blocks until the transaction completes
        // HashMap update here
    } catch (Exception e) {
        // handle exceptions
    }
}
The second option is to keep the asynchronous nature as is and chain the follow-up work using thenRun (there are many then* variants available). This way is non-blocking:
public void createClan(Transaction transaction, int id, int maxPlayers) {
    MongoTransaction mongoTransaction = (MongoTransaction) transaction;
    final Clan clan = new Clan(id, maxPlayers);
    String json = gson.toJson(clan);
    TransactionExecutor executor = new MongoTransactionExecutor(mongoTransaction, json);
    try {
        executor.runTransaction().thenRun(() -> updateHashMap(clan));
    } catch (Exception e) {
        // handle exceptions
    }
}
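One caveat: thenRun executes the callback on whatever thread completed the future (often a ForkJoinPool worker), not on the calling thread, so the map being updated should be thread-safe. A minimal sketch, assuming updateHashMap maintains a module-level cache keyed by clan id (clanCache and Clan#getId are illustrative, not from the original code):
// Illustrative cache; ConcurrentHashMap avoids data races when the
// callback runs on a pool thread rather than on the caller's thread.
private final Map<Integer, Clan> clanCache = new ConcurrentHashMap<>();

private void updateHashMap(Clan clan) {
    clanCache.put(clan.getId(), clan); // assumes Clan exposes getId()
}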

Why does JPA not save data immediately?

I want to save data and then check it right after calling the save method, but the value is not present within the same request.
I have two methods that depend on each other and communicate via Kafka. The first method saves the data with JPA and then triggers the second method, which looks the record up in the database using JPA and checks the result with isPresent().
In the second method I cannot find the saved data, although after the request finishes the data is there, so I get a NoSuchElementException.
I have tried several approaches:
1. flush() and saveAndFlush()
2. sleeping for 10000 ms
3. the EntityManager with @Transactional
but none of them worked.
Let me show you my two methods. I have a producer and a consumer, and this is the saveOrder method (the first method). Note that it still contains all the approaches I tried:
@PersistenceContext
private EntityManager entityManager;

@Transactional
public void saveOrder(Long branchId, AscOrderDTO ascOrderDTO) throws Exception {
    ascOrderDTO.validation();
    if (ascOrderDTO.getId() == null) {
        ascOrderDTO.setCreationDate(Instant.now());
        ascOrderDTO.setCreatedBy(SecurityUtils.getCurrentUserLogin().get());
        // add user
        ascOrderDTO.setStoreId(null);
        String currentUser = SecurityUtils.getCurrentUserLogin().get();
        AppUser appUser = appUserRepository.findByLogin(currentUser);
        ascOrderDTO.setAppUserId(appUser.getId());
    }
    log.debug("Request to save AscOrder : {}", ascOrderDTO);
    AscOrder ascOrder = ascOrderMapper.toEntity(ascOrderDTO);
    // send notification to branch
    if (!branchService.orderOk()) {
        throw new BadRequestAlertException("branch not accept order", "check order with branch", "branch");
    }
    ascOrder = ascOrderRepository.save(ascOrder);
    /*
    log.debug("start sleep");
    Thread.sleep(10000);
    log.debug("end sleep");
    */
    entityManager.setFlushMode(FlushModeType.AUTO);
    entityManager.flush();
    entityManager.clear();
    //ascOrderRepository.flush();
    try {
        producerOrder.addOrder(branchId, ascOrder.getId(), true);
        stateMachineHandler.stateMachine(OrderEvent.EMPTY, ascOrder.getId());
        stateMachineHandler.handling(ascOrder.getId());
        //return ascOrderMapper.toDto(ascOrder);
    } catch (Exception e) {
        // TODO: handle exception
        ascOrderRepository.delete(ascOrder);
        throw new BadRequestAlertException("cannot deliver order to Branch", "try agine", "Try!");
    }
}
This call goes to the producer:
producerOrder.addOrder(branchId,ascOrder.getId(),true);
And this is my producer:
public void addOrder(Long branchId, Long orderId, Boolean isAccept) throws Exception {
    ObjectMapper obj = new ObjectMapper();
    try {
        Map<String, String> map = new HashMap<>();
        map.put("branchId", branchId.toString());
        map.put("orderId", orderId.toString());
        map.put("isAccept", isAccept.toString());
        kafkaTemplate.send("orderone", obj.writeValueAsString(map));
    } catch (Exception e) {
        throw new Exception(e.getMessage());
    }
}
This line sends the message to the consumer:
kafkaTemplate.send("orderone", obj.writeValueAsString(map));
This is my consumer:
@KafkaListener(topics = "orderone", groupId = "groupId")
public void processAddOrder(String mapping) throws Exception {
    try {
        log.debug("i am in consumer add Order");
        ObjectMapper mapper = new ObjectMapper();
        Map<String, String> result = mapper.readValue(mapping, HashMap.class);
        branchService.acceptOrder(Long.parseLong(result.get("branchId")),
                Long.parseLong(result.get("orderId")),
                Boolean.parseBoolean(result.get("isAccept")));
        log.debug(result.toString());
    } catch (Exception e) {
        throw new Exception(e.getMessage());
    }
}
And this call goes to acceptOrder (the second method):
branchService.acceptOrder(Long.parseLong(result.get("branchId")),Long.parseLong(result.get("orderId")),
Boolean.parseBoolean(result.get("isAccept")));
This is my second method:
public AscOrderDTO acceptOrder(Long branchId, Long orderId, boolean acceptable) throws Exception {
    ascOrderRepository.flush();
    try {
        if (branchId == null || orderId == null || !acceptable) {
            throw new BadRequestAlertException("URl invalid query", "URL", "Check your Input");
        }
        if (!branchRepository.findById(branchId).isPresent() || !ascOrderRepository.findById(orderId).isPresent()) {
            throw new BadRequestAlertException("cannot find branch or Order", "URL", "Check your Input");
        }
        /*
        if (acceptable) {
            ascOrder.setStatus(OrderStatus.PREPARING);
        } else {
            ascOrder.setStatus(OrderStatus.PENDING);
        }
        */
        Branch branch = branchRepository.findById(branchId).get();
        AscOrder ascOrder = ascOrderRepository.findById(orderId).get();
        ascOrder.setDiscount(50.0);
        branch.addOrders(ascOrder);
        branchRepository.save(branch);
        log.debug("///////////////////////////////Add order sucess////////////////////////////////////////////////");
        return ascOrderMapper.toDto(ascOrder);
    } catch (Exception e) {
        // TODO: handle exception
        throw new Exception(e.getMessage());
    }
}
Adding Thread.sleep() inside saveOrder makes no sense.
processAddOrder executes on a completely different thread, with a completely different persistence context. All the while, your transaction from saveOrder may still be ongoing, with none of its changes visible to other transactions.
Try splitting saveOrder into a transactional method and a separate notification-sending step, making sure the transaction has ended before the event handling has a chance to take place.
(Note that this approach introduces at-most-once semantics. You have been warned.)
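A minimal sketch of that split, with illustrative names (OrderFlow, OrderService, saveOrderTx are not from the question): the JPA work lives in a @Transactional method on one bean, and the caller sends the Kafka message only after that method has returned, i.e. after its transaction has committed. Note that saveOrderTx must live on a different bean than its caller; Spring's @Transactional is proxy-based and is not applied on self-invocation.
@Service
public class OrderFlow {

    private final OrderService orderService;   // bean owning the @Transactional save
    private final ProducerOrder producerOrder; // the existing Kafka producer

    public OrderFlow(OrderService orderService, ProducerOrder producerOrder) {
        this.orderService = orderService;
        this.producerOrder = producerOrder;
    }

    // Deliberately NOT @Transactional: by the time addOrder() runs,
    // saveOrderTx's transaction has committed, so the consumer can see the row.
    public void placeOrder(Long branchId, AscOrderDTO dto) throws Exception {
        Long orderId = orderService.saveOrderTx(branchId, dto); // @Transactional save, returns the new id
        producerOrder.addOrder(branchId, orderId, true);
    }
}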

Oracle database change notification not working when the number of inserts exceeds 20

public class Register {

    @Autowired
    private DataSource dataSource;

    @Autowired
    private DCNListener listener;

    private OracleConnection oracleConnection = null;
    private DatabaseChangeRegistration dcr = null;
    private Statement statement = null;
    private ResultSet rs = null;

    @PostConstruct
    public void init() {
        this.register();
    }

    private void register() {
        Properties props = new Properties();
        props.put(OracleConnection.DCN_NOTIFY_ROWIDS, "true");
        props.setProperty(OracleConnection.DCN_IGNORE_DELETEOP, "true");
        props.setProperty(OracleConnection.DCN_IGNORE_UPDATEOP, "true");
        try {
            oracleConnection = (OracleConnection) dataSource.getConnection();
            dcr = oracleConnection.registerDatabaseChangeNotification(props);
            statement = oracleConnection.createStatement();
            ((OracleStatement) statement).setDatabaseChangeRegistration(dcr);
            rs = statement.executeQuery(listenerQuery);
            while (rs.next()) {
                // drain the result set so the query is associated with the registration
            }
            dcr.addListener(listener);
            String[] tableNames = dcr.getTables();
            Arrays.stream(tableNames)
                    .forEach(i -> log.debug("Table {}" + " registered.", i));
        } catch (SQLException e) {
            e.printStackTrace();
            close();
        }
    }
}
My Listener:
public class DCNListener implements DatabaseChangeListener {

    @Override
    public void onDatabaseChangeNotification(DatabaseChangeEvent databaseChangeEvent) {
        TableChangeDescription[] tableChanges = databaseChangeEvent.getTableChangeDescription();
        for (TableChangeDescription tableChange : tableChanges) {
            RowChangeDescription[] rcds = tableChange.getRowChangeDescription();
            for (RowChangeDescription rcd : rcds) {
                RowOperation op = rcd.getRowOperation();
                String rowId = rcd.getRowid().stringValue();
                switch (op) {
                    case INSERT:
                        // process
                        break;
                    case UPDATE:
                        // do nothing
                        break;
                    case DELETE:
                        // do nothing
                        break;
                    default:
                        // do nothing
                }
            }
        }
    }
}
In my Spring Boot application, I have an Oracle DCN register class that listens for INSERTs into an event table in my database.
In this event table there are different types of events that my application supports, say EventA and EventB.
The application GUI allows these events to be uploaded in bulk, which translates into INSERTs into the Oracle table I am listening to.
For one of the event types, my application fails to capture the INSERTs only when 20 or more events are uploaded in bulk; for the other event type I do not experience this problem.
So, let's say a user inserts fewer than 20 EventA rows: my application captures the inserts. But if the number of EventA inserts reaches 20 or more, it does not capture them.
This is not the case for EventB, which works smoothly. I'd like to understand whether I'm missing anything in the registration, what I can look out for in the database, and what the issue could be here.
You should also look for the ALL_ROWS event from:
EnumSet<TableChangeDescription.TableOperation> tableOps = tableChange.getTableOperations();
if (tableOps.contains(TableChangeDescription.TableOperation.ALL_ROWS)) {
    // Invalidate the cache
}
Quote from the JavaDoc:
The ALL_ROWS event is sent when the table is completely invalidated and row level information isn't available. If the DCN_NOTIFY_ROWIDS option hasn't been turned on during registration, then all events will have this OPERATION_ALL_ROWS flag on. It can also happen in situations where too many rows have changed and it would be too expensive for the server to send the list of them.
https://docs.oracle.com/en/database/oracle/oracle-database/12.2/jajdb/oracle/jdbc/dcn/TableChangeDescription.TableOperation.html#ALL_ROWS
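Wired into the listener from the question, that check could look like the following sketch; reloadEventsFromTable() is a hypothetical recovery step, needed because an ALL_ROWS event carries no per-row ROWID information:
@Override
public void onDatabaseChangeNotification(DatabaseChangeEvent databaseChangeEvent) {
    for (TableChangeDescription tableChange : databaseChangeEvent.getTableChangeDescription()) {
        // Oracle may collapse a bulk change (e.g. 20+ inserts) into a single
        // ALL_ROWS event instead of sending row-level descriptions.
        if (tableChange.getTableOperations().contains(TableChangeDescription.TableOperation.ALL_ROWS)) {
            reloadEventsFromTable(); // hypothetical: re-query the table to pick up what was missed
            continue;
        }
        for (RowChangeDescription rcd : tableChange.getRowChangeDescription()) {
            if (rcd.getRowOperation() == RowChangeDescription.RowOperation.INSERT) {
                // process rcd.getRowid().stringValue() as before
            }
        }
    }
}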

Threads and Hibernate with Spring MVC

I am currently working on a web application which is basically a portfolio site for different vendors.
I was working on a thread which copies the details of a vendor and puts it against a new vendor, pretty straightforward.
The thread works at first, but when selecting a particular Catalog object (this catalog object contains a Velocity template), execution stops and goes nowhere. Invoking the thread once again hangs the whole application.
Here is my code.
public class CopySiteThread extends Thread {

    public CopySiteThread(ComponentDTO componentDTO, long vendorid, int admin_id) {
        /** Application specific business logic not exposed **/
    }

    public void run() {
        /** Application based business logic not exposed **/
        // Copy catalog first
        List<Catalog> catalog = catalogDAO.getCatalog(vendorid);
        System.out.println(catalog);
        List<Catalog> newCat = new ArrayList<Catalog>();
        HashMap<String, Integer> catIdMapList = new HashMap<String, Integer>();
        Iterator<Catalog> catIterator = catalog.iterator();
        while (catIterator.hasNext()) {
            Catalog cat = catIterator.next();
            System.out.println(cat);
            int catId = catalogDAO.addTemplate(admin_id, cat.getHtml(), cat.getName(),
                    cat.getNickname(), cat.getTemplategroup(), vendor.getVendorid());
            catIdMapList.put(cat.getName(), catId);
            cat = null;
        }
    }
}
And the thread is invoked like this.
CopySiteThread thread = new CopySiteThread(componentDTO, baseVendor, admin_id);
thread.start();
After a certain number of iterations, it gets stuck on line Catalog cat = catIterator.next();
This issue is rather strange because I've developed many applications like this without any problem.
Any help appreciated.
The actual problem was in the addCatalog method in CatalogDAO:
Session session = sf.openSession();
Transaction tx = null;
Integer templateID = null;
Date date = new Date();
try {
    tx = session.beginTransaction();
    Catalog catalog = new Catalog();
    // Business logic
    templateID = (Integer) session.save(catalog);
    tx.commit();
} catch (HibernateException ex) {
    if (tx != null) tx.rollback();
} finally {
    session.close();
}
return templateID;
Fixed by adding a finally clause and closing all sessions.
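That diagnosis fits the symptoms: every openSession() that is never closed pins a JDBC connection, and once the pool is exhausted the next lazy load or DAO call blocks forever, which looks exactly like getting stuck on catIterator.next(). On Hibernate 5.2+, where Session implements AutoCloseable, try-with-resources makes the close impossible to forget; a sketch of the same method in that style (parameters elided as in the question):
// Sketch (Hibernate 5.2+): the session is closed even if save() throws.
public Integer addCatalog(/* params as in the question */) {
    try (Session session = sf.openSession()) {
        Transaction tx = session.beginTransaction();
        try {
            Catalog catalog = new Catalog();
            // Business logic
            Integer templateID = (Integer) session.save(catalog);
            tx.commit();
            return templateID;
        } catch (HibernateException ex) {
            tx.rollback();
            throw ex;
        }
    } // session closed here, returning its connection to the pool
}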

google app engine: problem getting objects from datastore in a task

My application constructs a Parent object in a static factory, along with its predetermined Children, and then starts up tasks to run some computation on the Children, like so:
public static Parent make(User owner, List<Integer> data, int size) {
    Parent result = new Parent(owner, data, size);
    PersistenceManager pm = PersistenceSource.get();
    Transaction tx = pm.currentTransaction();
    try {
        tx.begin();
        result = pm.makePersistent(result);
        for (int i = 0; i < size; i++) {
            pm.makePersistent(new Child(result, i));
        }
        tx.commit();
    } finally {
        if (tx.isActive()) {
            tx.rollback();
            result = null;
        }
    }
    if (result != null) {
        Queue q = QueueFactory.getDefaultQueue();
        for (Child c : result.getChild()) {
            q.add(url("/task/child").param("key", KeyFactory.keyToString(c.getKey())).method(Method.PUT));
        }
    }
    pm.close();
    return result;
}
However, in the actual task:
public void doPut(HttpServletRequest request, HttpServletResponse response) throws IOException, ServletException {
PersistenceManager pm = PersistenceSource.get();
Child c = pm.getObjectById(Child.class, KeyFactory.stringToKey(request.getParameter("key"))); //...
It dies trying to find the object:
Could not retrieve entity of kind Child with key Child(24)
org.datanucleus.exceptions.NucleusObjectNotFoundException: Could not retrieve entity of kind Child with key Child(24)
Any insights? Also, if it matters, the Parent-child relationship is defined by the parent as a field in the child (hence construction with the parent as an arg).
After some digging around, the following works to properly retrieve the desired Child:
Key k = new KeyFactory
.Builder(Parent.class.getSimpleName(), Long.valueOf(request.getParameter("parent")))
.addChild(Child.class.getSimpleName(), Long.valueOf(request.getParameter("child")))
.getKey();
Child c = pm.getObjectById(Child.class, k);
I'm still a bit mystified, coming from the non-Datastore world, as to why the type + ID is insufficient to fetch what I want. That seems equivalent to knowing table + primary key in SQL land, and the documentation seems to indicate that the key contains all the parent info, such that having it should be sufficient to do a direct pm.getObjectById(Child.class, KeyFactory./* etc */).
public Object retrieveChildByKey(Key parent, Class childClass, int idChild) {
    PersistenceManager pm = PMF.get().getPersistenceManager();
    String message;
    try {
        Key objectKey = KeyFactory.createKey(parent, childClass.getSimpleName(), idChild);
        Object result = pm.getObjectById(childClass, objectKey);
        return result;
    } catch (DatastoreTimeoutException ex) {
        // Display a timeout-specific error page
        message = "Timeout: an error has occurred: it was not possible to retrieve the object";
        System.out.println(message);
        ex.printStackTrace();
    } catch (Exception ex) {
        message = "An error has occurred: it was not possible to retrieve the object";
        System.out.println(message);
        ex.printStackTrace();
    } finally {
        pm.close();
    }
    return null;
}
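The reason kind + ID alone is not enough: in the Datastore, an entity's key is its full ancestor path, so a Child created with a Parent is named Parent(p)/Child(24), not just Child(24), and looking it up requires rebuilding that path. A hypothetical call site for the helper above (the parent ID 7 and child ID 24 are illustrative values):
// Hypothetical usage: rebuild the parent key, then fetch the child through it.
Key parentKey = KeyFactory.createKey(Parent.class.getSimpleName(), 7L);
Child child = (Child) retrieveChildByKey(parentKey, Child.class, 24);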
