I am wondering whether Hibernate can end up delaying its writes to the DB. I have Hibernate configured for MySQL. The scenario I need to support is roughly 80% reads and 20% writes, so I do not want to optimize my writes; I would rather have the client wait until the DB has actually been written to than return a bit earlier. My tests currently run 100 clients in parallel, and the CPU does sometimes max out. I need this flush method to write to the DB immediately and return only when the data has been written.
On the client side, I send a write request and then a read request, but the read request sometimes returns null. I suspect Hibernate is not writing to the DB immediately.
public final ThreadLocal<Session> session = new ThreadLocal<>();

public Session currentSession() {
    Session s = session.get();
    // Open a new Session if this thread has none yet
    if (s == null || !s.isOpen()) {
        s = sessionFactory.openSession();
        // Store it in the ThreadLocal variable
        session.set(s);
    }
    return s;
}
public synchronized void flush(Object dataStore) throws DidNotSaveRequestSomeRandomError {
    Transaction txD;
    Session session;

    session = currentSession();
    txD = session.beginTransaction();
    session.save(dataStore);
    try {
        txD.commit();
    } catch (ConstraintViolationException e) {
        e.printStackTrace();
        throw new DidNotSaveRequestSomeRandomError(dataStore, feedbackManager);
    } catch (TransactionException e) {
        log.debug("txD state isActive " + txD.isActive() + " txD isParticipating " + txD.isParticipating());
        log.debug(e);
    } finally {
        // session.flush();
        txD = null;
        session.close();
    }
    // mySession.clear();
}
@Siddharth Hibernate does not really delay its writes, and your code does not suggest that it should, either. I have faced a similar issue before and suspect you might be hitting the same thing: when there are numerous write requests, many threads end up sharing the same database session, and even though Hibernate issues consecutive commits you do not actually see the changes.
You may also catch this by simply looking at your MySQL logs during the transaction and seeing what exactly went wrong.
Thanks for your hint. It took me some time to debug; the MySQL logs are amazing.
This is what I run to check the timestamp of my inserts against the MySQL writes. MySQL logs all DB operations in a binlog; to read it we need the tool called mysqlbinlog, and my.cnf also needs to be configured to enable the logs: http://publib.boulder.ibm.com/infocenter/tpfhelp/current/index.jsp?topic=%2Fcom.ibm.ztpf-ztpfdf.doc_put.cur%2Fgtpm7%2Fm7enablelogs.html
I check which is the latest MySQL binlog file and run the command below, grepping with one line of context to get the timestamp. Then in Java I call Calendar.getInstance().getTimeInMillis() to compare against that timestamp.
sudo mysqlbinlog mysql/mysql-bin.000004 | grep "mystring" -1
So I debugged my problem. It was a delayed-write problem, so I implemented a synchronous write as well, instead of doing everything asynchronously. In other words, the server call won't return until the DB has been flushed for this object.
We're trying to use the dynamic Java connector for JDE 9.0 and are facing an issue of an ever-increasing handle count for the process.
Scenario:
Calling the dynamic JDE connector in parallel, with multiple simultaneous calls.
The implementation for executing a BSFN is as follows:
1) The login method takes all the credentials and returns a sessionID:
int sessionID =
Connector.getInstance().login(username.trim(), password.trim(), env.trim(), role.trim());
…
2) ExecuteBSFN takes the parameters module, bsfnName and inputFile (the input data for the BSFN):
…..
ExecutableMethod execMethod = bsfnMethod.createExecutable();
execMethod.resetValues();
Map<String, String> input = inputParams(moduleName, bsfnName, inputFile);
if (input != null) {
    execMethod.setValues(input);
}
CallObjectErrorList errorList = execMethod.executeBSFN(sessionID);
Map output = execMethod.getValues();
….
3) Logout:
Connector.getInstance().logoff(sessionID);
In this case we observed that the handle count for the process keeps increasing even though we call the logoff() method, and this eventually leads to an OutOfMemory error.
In order to resolve this issue, in the logout implementation, after doing logoff we called:
Connector.getInstance().shutDown();
In this case we observed that it throws a NullPointerException for subsequent calls.
Does anyone know how to overcome this situation?
You should check whether a BSFN called from the user session prevented the logout, by reviewing the enterprise server CallObject kernel jde log files: the BSFN may still be running asynchronously in the enterprise server CallObject kernel.
Connector.getInstance().shutDown() will iterate through all the active user sessions and call Connector.getInstance().logoff(sessionID) for each of them.
So if other active sessions are still running business functions, shutDown will log them out in the middle of BSFN execution and will cause a NullPointerException for the logged-out session.
If you have multiple sessions for JDE, then Connector.getInstance().shutDown() is not going to work, because shutDown() will close all the active sessions; that is why you are getting a NullPointerException for the other active users.
For your handle count and OutOfMemory issue, you can close just the particular session:
Step 1: Log off the user by session ID:
Connector.getInstance().logoff(sessionID);
Step 2: Notify JDE to shut down the current session; this resolves the handle count and OutOfMemory issue:
NotificationManager.notifyEvent(new JdeEvent() {
    @Override
    public Object getSource() {
        return Connector.getInstance();
    }

    @Override
    public String getName() {
        return "SHUTDOWN";
    }
});
We are working on an e-commerce site built with the Hybris framework, and we currently have an issue with database connections (I suppose) and no idea how to solve it. It happens only in the production environment and only on the servers that are used by the ESB (2 servers out of a total of 40).
Basically, sometimes (1-3 times a day), we find sessions blocked waiting on some idle session (one showing the "SQL*Net message from client" wait). We can only manually kill the holder in order to free these sessions.
All the servers share the same application code; the main differences between the ESB and frontend servers are the controllers that are called and the request volume:
ESB server: 10 requests per minute
Frontend server: 300 requests per minute
In the application log I found a lot of Closed Connection errors on these 2 servers, and I think this is related to our problem, but I don't actually know why.
In access.log I have this request:
[26/Mar/2019:09:04:39 +0100] "GET /blockorder?orderCode=XXXX&access_token=XXXX HTTP/1.1" 400 122 "-" "AHC/1.0"
and in the console.log I have this:
hybrisHTTP8 2019-03-26 09:04:39,184 ERROR [[10.125.31.2] ] () [de.hybris.platform.jdbcwrapper.ConnectionImpl] error resetting AutoCommit
java.sql.SQLRecoverableException: Closed Connection
at oracle.jdbc.driver.PhysicalConnection.setAutoCommit(PhysicalConnection.java:3763)
at de.hybris.platform.jdbcwrapper.ConnectionImpl.doSetAutoCommit(ConnectionImpl.java:431)
at de.hybris.platform.jdbcwrapper.ConnectionImpl.restoreAutoCommit(ConnectionImpl.java:185)
at de.hybris.platform.jdbcwrapper.ConnectionImpl.unsetTxBound(ConnectionImpl.java:175)
at de.hybris.platform.tx.Transaction.unsetTxBoundConnection(Transaction.java:920)
at de.hybris.platform.tx.Transaction.clearTxBoundConnectionAndNotify(Transaction.java:897)
at de.hybris.platform.tx.Transaction.clearTxBoundConnectionAndNotifyRollback(Transaction.java:887)
at de.hybris.platform.tx.Transaction.rollbackOuter(Transaction.java:1084)
at de.hybris.platform.tx.Transaction.rollback(Transaction.java:1028)
at de.hybris.platform.tx.Transaction.commit(Transaction.java:690)
at de.hybris.platform.tx.Transaction.finishExecute(Transaction.java:1218)
at de.hybris.platform.tx.Transaction.execute(Transaction.java:1205)
at de.hybris.platform.tx.Transaction.execute(Transaction.java:1160)
at de.hybris.platform.jalo.Item.setAllAttributes(Item.java:2082)
at de.hybris.platform.jalo.Item.setAllAttributes(Item.java:2057)
at de.hybris.platform.servicelayer.internal.converter.impl.ItemModelConverter.storeAttributes(ItemModelConverter.java:1503)
at de.hybris.platform.servicelayer.internal.converter.impl.ItemModelConverter.save(ItemModelConverter.java:730)
at de.hybris.platform.servicelayer.internal.model.impl.wrapper.ModelWrapper.save(ModelWrapper.java:336)
at de.hybris.platform.servicelayer.internal.model.impl.ResolvingModelPersister.saveOthers(ResolvingModelPersister.java:64)
at de.hybris.platform.servicelayer.internal.model.impl.ResolvingModelPersister.persist(ResolvingModelPersister.java:49)
at de.hybris.platform.servicelayer.internal.model.impl.DefaultModelService.saveViaJalo(DefaultModelService.java:1059)
at de.hybris.platform.servicelayer.internal.model.impl.DefaultModelService.doJaloPersistence(DefaultModelService.java:648)
at de.hybris.platform.servicelayer.internal.model.impl.DefaultModelService.persistWrappers(DefaultModelService.java:1002)
at de.hybris.platform.servicelayer.internal.model.impl.DefaultModelService.performPersistenceOperations(DefaultModelService.java:626)
at de.hybris.platform.servicelayer.internal.model.impl.DefaultModelService.saveAllInternal(DefaultModelService.java:620)
at de.hybris.platform.servicelayer.internal.model.impl.DefaultModelService.saveAll(DefaultModelService.java:600)
at de.hybris.platform.servicelayer.internal.model.impl.DefaultModelService.save(DefaultModelService.java:548)
at com.test.fulfilment.process.impl.DefaultOrderProcessService.requestForcedOrderCancellation(DefaultOrderProcessService.java:131)
at com.test.application.order.facades.impl.DefaultOrderFacade.forcedOrderCancel(DefaultOrderFacade.java:62)
at com.test.application.controllers.OrderController.blockOrder(OrderController.java:520)
Our pool config is the following:
{
"maxIdle": 90,
"minIdle": 2,
"maxActive": 90,
"maxWait": 10000,
"whenExhaustedAction": 1,
"testOnBorrow": true,
"testOnReturn": true,
"testWhileIdle": true,
"timeBetweenEvictionRunsMillis": 10000,
"numTestsPerEvictionRun": 100,
"minEvictableIdleTimeMillis": 300000,
"softMinEvictableIdleTimeMillis": -1,
"lifo": true
}
Our tomcat config is:
tomcat.generaloptions.JDBC=-Doracle.jdbc.ReadTimeout=60000
tomcat.generaloptions.TIMEOUT=-Dsun.net.client.defaultConnectTimeout\=60000 -Dsun.net.client.defaultReadTimeout\=60000
tomcat.ajp.acceptCount=100
tomcat.ajp.maxThreads=400
tomcat.maxthreads=400
tomcat.minsparethreads=50
tomcat.maxidletime=10000
tomcat.connectiontimeout=120000
tomcat.acceptcount=100
We tried to remove the oracle.jdbc.ReadTimeout but the result was that we started to see Closed Connections on the other servers.
The code that triggers this error is pretty simple (and it works 95% of the time):
@Override
public boolean requestForcedOrderCancellation(final OrderModel order) {
    Transaction.current().begin();
    try {
        modelService.lock(order.getPk());
        modelService.refresh(order);
        order.setForcedCancelled(true);
        modelService.save(order);
        Transaction.current().commit();
        return true;
    } catch (Exception e) {
        LOG.error(e.getMessage(), e);
        Transaction.current().rollback();
        return false;
    }
}
We also tried without the explicit locking and the problem is exactly the same.
It seems like the connection is already closed, so we cannot roll back (or commit) the transactions that are still waiting in the DB.
I would like to avoid these stuck sessions and these Closed Connection errors.
Your connection pool is probably already fixing this for you. Try increasing the logging to see whether it does.
Background: databases hate long-lived connections because they can starve them, so they tend to close connections after some time. Another culprit is firewalls, which tend to drop idle connections from their state tables. Connection pools know how to handle this by testing the connections (all those test* options in your config above).
Sometimes, you need to tell your pool how to test a connection. Check the documentation. For Oracle, a good test is select 1 from dual.
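For illustration only (Hybris configures its pool through properties rather than code), here is a minimal sketch of what "telling the pool how to test a connection" looks like with a plain Apache Commons DBCP BasicDataSource; the connection data is made up:

import javax.sql.DataSource;
import org.apache.commons.dbcp2.BasicDataSource;

public class PoolConfigSketch {
    static DataSource oraclePool() {
        BasicDataSource ds = new BasicDataSource();
        ds.setUrl("jdbc:oracle:thin:@//dbhost:1521/SERVICE"); // hypothetical connection data
        ds.setUsername("app");
        ds.setPassword("secret");
        ds.setValidationQuery("select 1 from dual"); // Oracle-friendly test query
        ds.setTestOnBorrow(true);                    // validate before handing a connection to the app
        ds.setTestWhileIdle(true);                   // validate idle connections during evictor runs
        ds.setTimeBetweenEvictionRunsMillis(10_000); // matches the 10s evictor interval in the config above
        return ds;
    }
}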
I think your real problem is those stuck sessions. Find out what they are waiting for by looking at a Java thread dump, which you can create with the jstack tool that ships with the JDK.
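If attaching jstack to the production JVM is awkward, a rough thread dump can also be captured from inside the application with plain JDK calls. This is only a sketch, not something provided by the Hybris platform:

import java.util.Map;

public class ThreadDumper {
    public static String dump() {
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<Thread, StackTraceElement[]> entry : Thread.getAllStackTraces().entrySet()) {
            Thread t = entry.getKey();
            sb.append(t.getName()).append(" state=").append(t.getState()).append('\n');
            for (StackTraceElement frame : entry.getValue()) {
                sb.append("    at ").append(frame).append('\n');
            }
            sb.append('\n');
        }
        return sb.toString();
    }
}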
We found that the issue was due to an uncaught exception/error in transactional code.
The server answered with an error, and Hybris did not roll back the transaction, which therefore stayed open.
The same thread is reused some time later (maybe days later) while the old transaction is still open.
When this corrupted thread is then used to lock some rows in the database, even if we commit the transaction in our code, it is not committed to the database, because internally Hybris keeps a transaction counter to handle nested transactions (e.g. ones opened in called methods). The transaction is really committed or rolled back in the DB only when commit/rollback is called while that counter is 1.
Request 1:

Transaction.begin();        // Hybris counter = 1
doSomething();              // throws an exception, the request exits here, counter stays at 1
try {
    Transaction.commit();   // never reached
} catch (Exception e) {
    Transaction.rollback(); // never reached
}

Request 2 on the same thread:

Transaction.begin();        // Hybris counter is now 2
doSomething();              // works fine, counter still 2
try {
    Transaction.commit();   // counter -= 1
    // Not committed to the DB: the counter is now 1, so Hybris
    // treats this as an inner (nested) commit.
} catch (Exception e) {
    Transaction.rollback();
}

Request 3 on the same thread:

Transaction.begin();        // Hybris counter is now 2 again
lockRow();
// The row is locked for the whole transaction (the one opened in Request 1).
// Everything seems OK.
try {
    Transaction.commit();   // counter -= 1
    // Again not committed to the DB because the counter is now 1,
    // so the row stays locked and later requests for the same row wait forever.
} catch (Exception e) {
    Transaction.rollback();
}
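One way to guard against this, sketched here with the same Hybris Transaction API used in the snippets above (an illustration, not necessarily the exact fix we deployed), is to make sure no code path can leave the thread's transaction counter incremented:

public boolean requestForcedOrderCancellation(final OrderModel order) {
    final Transaction tx = Transaction.current();
    tx.begin();                       // counter += 1
    boolean committed = false;
    try {
        modelService.lock(order.getPk());
        modelService.refresh(order);
        order.setForcedCancelled(true);
        modelService.save(order);
        tx.commit();                  // counter -= 1
        committed = true;
        return true;
    } catch (final Exception e) {
        LOG.error(e.getMessage(), e);
        return false;
    } finally {
        if (!committed) {
            // Runs for *any* failure between begin() and commit(), including
            // unexpected RuntimeExceptions, so the counter never stays incremented.
            tx.rollback();
        }
    }
}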
I have a requirement where I read a bunch of rows (thousands) from a SQL DB using Spring Batch and call a REST service to enrich the content before writing it to a Kafka topic.
When using the Spring reactive WebClient, how do I limit the number of active non-blocking service calls? Should I somehow introduce a Flux into the loop after I read the data with Spring Batch?
(I understand the usage of delayElements and that it serves a different purpose, e.g. when a single GET call brings in a lot of data and you want the server to slow down. My use case is a bit different: I have many WebClient calls to make and would like to limit the number of concurrent calls to avoid out-of-memory issues, while still getting the advantages of non-blocking invocations.)
Very interesting question. I pondered it and came up with a couple of ideas on how this could be done. I will share my thoughts, and hopefully something here helps you with your investigation.
Unfortunately, I'm not familiar with Spring Batch. However, this sounds like a problem of rate limiting, or the classical producer-consumer problem.
So, we have a producer that produces so many messages that our consumer cannot keep up, and the buffering in the middle becomes unbearable.
The problem I see is that your Spring Batch process, as you describe it, is not working as a stream or pipeline, but your reactive Web client is.
So, if we were able to read the data as a stream, then as records start getting into the pipeline those would get processed by the reactive web client and, using back-pressure, we could control the flow of the stream from producer/database side.
The Producer Side
So, the first thing I would change is how records get extracted from the database. We need to control how many records are read from the database at a time, either by paging our data retrieval or by controlling the fetch size, and then, with back pressure, control how many of those are sent downstream through the reactive pipeline.
So, consider the following (rudimentary) database data retrieval, wrapped in a Flux.
Flux<String> getData(DataSource ds) {
    return Flux.create(sink -> {
        try {
            Connection con = ds.getConnection();
            con.setAutoCommit(false);
            PreparedStatement stm = con.prepareStatement(
                    "SELECT order_number FROM orders WHERE order_date >= '2018-08-12'",
                    ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);
            stm.setFetchSize(1000);
            ResultSet rs = stm.executeQuery();

            sink.onRequest(batchSize -> {
                try {
                    for (int i = 0; i < batchSize; i++) {
                        if (!rs.next()) {
                            // no more data, close resources!
                            rs.close();
                            stm.close();
                            con.close();
                            sink.complete();
                            break;
                        }
                        sink.next(rs.getString(1));
                    }
                } catch (SQLException e) {
                    // TODO: close resources here
                    sink.error(e);
                }
            });
        } catch (SQLException e) {
            // TODO: close resources here
            sink.error(e);
        }
    });
}
In the example above:
I control the number of records read per batch (1000) by setting the fetch size.
The sink emits the number of records requested by the subscriber (i.e. batchSize) and then waits, via back pressure, for it to request more.
When there are no more records in the result set, then we complete the sink and close resources.
If an error occurs at any point, we send back the error and close resources.
Alternatively, I could have used paging to read the data, which would probably simplify the handling of resources by reissuing a query at every request cycle.
You may also want to handle the subscription being cancelled or disposed (sink.onCancel, sink.onDispose), since closing the connection and the other resources is fundamental here.
The Consumer Side
On the consumer side, you register a subscriber that requests messages at a rate of 1000 at a time and only requests more once it has processed that batch.
getData(source).subscribe(new BaseSubscriber<String>() {
    private int messages = 0;

    @Override
    protected void hookOnSubscribe(Subscription subscription) {
        subscription.request(1000);
    }

    @Override
    protected void hookOnNext(String value) {
        // make the HTTP request here
        System.out.println(value);
        messages++;
        if (messages % 1000 == 0) {
            // once we are done with a batch,
            // we are ready to request more
            upstream().request(1000);
        }
    }
});
In the example above, when the subscription starts it requests a first batch of 1000 messages. In onNext we process that batch, making HTTP requests using the WebClient.
Once the batch is complete, we request another batch of 1000 from the publisher, and so on.
And there you have it! Using back pressure you control how many open HTTP requests you have at a time.
My example is very rudimentary and will require some extra work to make it production ready, but I believe it offers some ideas that can be adapted to your Spring Batch scenario.
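As a side note, separate from the back-pressure approach above and purely a sketch with made-up names: Reactor's flatMap also accepts a concurrency argument, which caps how many inner publishers (here, WebClient calls) are subscribed to at the same time, and that alone can bound the number of in-flight HTTP requests:

WebClient webClient = WebClient.create("http://enrichment-service"); // hypothetical service
Flux<String> enriched = getData(dataSource)                          // the producer Flux from above
        .flatMap(orderNumber -> webClient.get()
                        .uri("/orders/{id}/enrich", orderNumber)     // hypothetical endpoint
                        .retrieve()
                        .bodyToMono(String.class),
                50);                                                 // at most 50 calls in flight at once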
I am currently working on a Java Swing application in NetBeans with Hibernate, guided by this wonderful repo from GitHub.
The example code found there basically urges new programmers to open and close the SessionFactory every time a certain batch of queries has been executed:
try {
    HibernateSessionFactory.Builder.configureFromDefaultHibernateCfgXml()
            .createSessionFactory();
    new MySqlExample().doSomeDatabaseStuff();
} catch (Throwable th) {
    th.printStackTrace();
} finally {
    HibernateSessionFactory.closeSessionFactory();
}

private void doSomeDatabaseStuff() {
    deleteAllUsers();
    insertUsers();
    countUsers();
    User user = findUser(USER_LOGIN_A);
    LOG.info("User A: " + user);
}
Is this good programming practice? Isn't it more efficient to open the SessionFactory on app startup and close it on the windowClosing event? What are the drawbacks of each approach?
Thanks.
Using a persistent connection means you are going to have as many open connections on your database as open clients, plus you'll have to make sure each connection stays open (very often it will be closed if it stays idle for a long time).
On the other hand, executing a query will be significantly faster if the connection is already open.
So it really depends on how often your clients will use the database. If they use it very rarely, a persistent connection is useless.
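Note that the SessionFactory (as opposed to individual Sessions and their connections) is expensive to build and is normally created once per application. A rough sketch of the "open at startup, close on windowClosing" approach you describe could look like this; the class and frame names are made up, only the Hibernate and Swing calls are standard:

import java.awt.event.WindowAdapter;
import java.awt.event.WindowEvent;
import javax.swing.JFrame;
import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

public class AppBootstrap {
    public static void main(String[] args) {
        // Build the expensive SessionFactory exactly once, at startup.
        final SessionFactory sessionFactory =
                new Configuration().configure().buildSessionFactory();

        JFrame frame = new JFrame("My Hibernate App");
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.addWindowListener(new WindowAdapter() {
            @Override
            public void windowClosing(WindowEvent e) {
                // Release the connection pool and caches when the app exits.
                sessionFactory.close();
            }
        });
        frame.setVisible(true);

        // Individual Sessions are still opened and closed per unit of work:
        // Session s = sessionFactory.openSession(); ... s.close();
    }
}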
I have a Swing desktop application that uses Hibernate. More than one computer can use the application and access the same MySQL database concurrently.
I thought that using a Hibernate Transaction would prevent two different computers from accessing the same data at the same time. Apparently it does not.
The problem I'm facing is similar to this:
// Suppose I'm persisting a product sale
Session session = getSession();
Transaction transaction = session.getTransaction();
try {
    transaction.begin();
    MyProduct product = getMyProduct();
    product.setActualInventory(product.getActualInventory() - amountSold);
    session.merge(product);
    transaction.commit();
} catch (Exception ex) {
    ex.printStackTrace();
    transaction.rollback();
    logger.error(ex);
    throw new DefaultException("Erro ao salvar a venda.");
}
The problem is that sometimes the actualInventory ends up incorrect, because two machines update the same product at the same time.
I thought that with this code, when one computer called transaction.begin(), it would have to wait for the other one to finish with transaction.commit().
But now I realize that this does not happen, because the two computers each run their own Hibernate instance and just talk to the same database.
How could I solve this problem? Is there maybe a way to perform this "transaction control" via MySQL directly?
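For what it's worth, the usual way to make two separate JVMs serialize on the same row is to lock the row in the database itself, for example with a pessimistic lock, which on MySQL/InnoDB translates to SELECT ... FOR UPDATE. Below is only a hedged sketch reusing the names from the snippet above (getSession(), MyProduct, amountSold and a productId identifier are assumed to exist; LockMode/LockOptions are from org.hibernate). Optimistic locking with a @Version column would be the other common option:

Session session = getSession();
Transaction transaction = session.getTransaction();
try {
    transaction.begin();
    // Re-read the product inside the transaction with a row-level lock;
    // a second client doing the same will block here until we commit.
    MyProduct product = (MyProduct) session.get(
            MyProduct.class, productId, new LockOptions(LockMode.PESSIMISTIC_WRITE));
    product.setActualInventory(product.getActualInventory() - amountSold);
    // product is managed, so the change is flushed on commit; merge() is not needed.
    transaction.commit();
} catch (Exception ex) {
    transaction.rollback();
    logger.error(ex);
    throw new DefaultException("Erro ao salvar a venda.");
}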