I am attempting to use a synchronized method in a Spring controller. Our payment gateway hits the method mapped with [@RequestMapping(value="/pay", method=RequestMethod.POST)] for different transactions [txn id: txn01 & txn02] at the same time, but these two different transactions are processed one by one rather than in parallel because of the synchronized block.
Problem -> The reason I am using a synchronized block in the controller is that a transaction [txn01] sometimes hits [@RequestMapping(value="/pay", method=RequestMethod.POST)] twice, i.e. a duplicate call from the payment gateway. Before the first call finishes its backend processing, I get a second call from the payment gateway for the same transaction id.
Is there any way to process two different transactions in parallel, while still serializing duplicate calls (same transaction id) by using the transaction id in the synchronized block? Please advise.
Please let me know if my question is unclear.
@RequestMapping(value="/pay", method=RequestMethod.POST)
public String payAck(HttpServletRequest httpRequest, HttpServletResponse httpResponse, HttpSession session) {
    synchronized (this) {
        return this.processPayAck(httpRequest, httpResponse, session);
    }
}

public synchronized String processPayAck(HttpServletRequest httpRequest, HttpServletResponse httpResponse, HttpSession session) {
    // Payment acknowledgment process here
    if (sametranIDNotExists) {
        // first call here
        callWS(); // processing business logic
        return someURL;
    } else {
        // gets second call here before the first call has completed
        return someURL;
    }
}
Modified code:
Is it a correct way to use intern() inside the synchronized block?
@RequestMapping(value="/pay", method=RequestMethod.POST)
public String payAck(HttpServletRequest httpRequest, HttpServletResponse httpResponse, HttpSession session) {
    String tranID = httpRequest.getParameter("tranID");
    synchronized (String.valueOf(tranID).intern()) {
        return processPayAck(httpRequest, httpResponse, session);
    }
}
I'm not sure whether you are working in a distributed environment.
If there is only one machine, you can remove the synchronized keyword and create name-based locks keyed by your transaction id instead.
If this program runs in a cluster with multiple machines, which means a request might be routed to a different machine, I think you need to acquire a distributed lock with Redis or another framework.
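A minimal sketch of that name-based lock idea for the single-machine case, assuming a ConcurrentHashMap of per-transaction lock objects (the controller class and field names are illustrative, not from the question):

import java.util.concurrent.ConcurrentHashMap;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;

public class PayController {

    // One lock object per transaction id, shared across all requests to this controller.
    private final ConcurrentHashMap<String, Object> txnLocks = new ConcurrentHashMap<>();

    public String payAck(HttpServletRequest httpRequest, HttpServletResponse httpResponse, HttpSession session) {
        String tranID = httpRequest.getParameter("tranID");
        // computeIfAbsent atomically returns the same lock object for the same id,
        // so duplicate calls for one transaction serialize while different ids run in parallel.
        Object lock = txnLocks.computeIfAbsent(tranID, id -> new Object());
        synchronized (lock) {
            return processPayAck(httpRequest, httpResponse, session);
        }
    }
}

In a real system you would also remove the entry once the transaction is fully processed, otherwise the map grows without bound.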
A synchronized block is used to provide thread safety. When multiple threads try to access the same object, only the thread holding the object-level lock can enter the synchronized(this) block. While one thread in the group holds the object-level lock, the rest wait (threads pass through the synchronized block one by one, not in parallel).
Appropriate use: use a synchronized block when threads are modifying the same resource (to avoid data inconsistency). In this case the threads are modifying the same database resource, but as mentioned the modifications are done on 2 different transactions (rows).
If modifying one row doesn't harm the other, then it is not required to keep the line
return this.processPayAck(httpRequest, httpResponse, session);
within a synchronized block. Instead it could be written as:
@RequestMapping(value="/pay", method=RequestMethod.POST)
public String payAck(HttpServletRequest httpRequest, HttpServletResponse httpResponse, HttpSession session) {
    return this.processPayAck(httpRequest, httpResponse, session);
}
Suggestion: use a CopyOnWriteArrayList (as an instance variable, not a local variable) to store the transaction id in the payAck method, and use its contains(...) method to check whether the given transaction id is already going through payAck again.
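A rough sketch of that suggestion, assuming the duplicate check happens at the start of payAck; addIfAbsent is used instead of a separate contains()/add() pair so the check-and-record step stays atomic (the class and field names are illustrative):

import java.util.concurrent.CopyOnWriteArrayList;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;

public class PayController {

    // Instance field, shared by all requests handled by this controller.
    private final CopyOnWriteArrayList<String> seenTranIds = new CopyOnWriteArrayList<>();

    public String payAck(HttpServletRequest httpRequest, HttpServletResponse httpResponse, HttpSession session) {
        String tranID = httpRequest.getParameter("tranID");
        // addIfAbsent returns false if the id was already recorded, i.e. this is a duplicate call.
        boolean firstCall = seenTranIds.addIfAbsent(tranID);
        if (!firstCall) {
            return someURL; // duplicate: skip the business logic
        }
        return processPayAck(httpRequest, httpResponse, session);
    }
}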
Related
Let's say that we have the following entities: Project and Release, which have a one-to-many relationship.
Upon an event consumption from an SQS queue where a release id is sent as part of the event, there might be scenarios where we might have to create thousands of releases in our DB, where for each release we have to make a rest call to a 3rd party service in order to get some information for each release.
That means that we might have to make thousands of calls, in some cases more than 20k calls just to retrieve the information for the different releases and store it in the DB.
Obviously this is not scalable, so I'm not really sure what the way to go is in this scenario.
I know I might use a CompletableFuture, but I'm not sure how to use that with Spring.
The HTTP client that I am using is WebClient.
Any ideas?
You can make the save queries in a method transactional by adding the annotation @Transactional above the method signature. The method should also be public, or else this annotation is ignored.
As for using CompletableFuture in Spring: you could make an HTTP call method asynchronous by adding the @Async annotation above its signature and by letting it return a CompletableFuture as a return type. You should return a completed future holding the response value from the HTTP call. You can easily make a completed future with the method CompletableFuture.completedFuture(yourValue). Spring will only return the completed future once the asynchronous method is done executing everything in its code block. For @Async to work you must also add the @EnableAsync annotation to one of your configuration classes. On top of that, the @Async annotated method must be public and cannot be called by a method from within the same class. If the method is private or is called from within the same class, then the @Async annotation will be ignored and instead the method will be executed in the same thread as the calling method.
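A minimal sketch of what that could look like, assuming a blocking WebClient call and a hypothetical ReleaseInfo DTO and endpoint (the class names and URL are illustrative, not from the question):

import java.util.concurrent.CompletableFuture;
import org.springframework.scheduling.annotation.Async;
import org.springframework.stereotype.Service;
import org.springframework.web.reactive.function.client.WebClient;

@Service
public class ReleaseInfoClient {

    private final WebClient webClient = WebClient.create("https://third-party.example.com");

    // Runs on Spring's async executor; the caller gets a CompletableFuture immediately.
    // Must be public and called from another bean, and @EnableAsync must be on a config class.
    @Async
    public CompletableFuture<ReleaseInfo> getReleaseInfo(Long releaseId) {
        ReleaseInfo info = webClient.get()
                .uri("/releases/{id}", releaseId)
                .retrieve()
                .bodyToMono(ReleaseInfo.class)
                .block(); // blocking here keeps the example simple
        return CompletableFuture.completedFuture(info);
    }
}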
Next to an @Async annotated method you could also use a parallelStream to execute all 20K HTTP calls in parallel. For example:
List<Long> releaseIds = new ArrayList<>();
Map<Long, ReleaseInfo> releaseInfo = releaseIds.parallelStream()
        .map(releaseId -> new AbstractMap.SimpleEntry<>(releaseId, webClient.getReleaseInfo(releaseId)))
        .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));
Lastly you could also use a ThreadPoolExecutor to execute the HTTP calls in parallel. An example:
List<Long> releaseIds = new ArrayList<>();
// The pool size is set to the number of available CPU processors on the machine.
ThreadPoolExecutor executor = (ThreadPoolExecutor) Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());

// Submit tasks to the executor
List<Future<ReleaseInfo>> releaseInfoFutures = releaseIds.stream()
        .map(releaseId -> executor.submit(() -> webClient.getReleaseInfo(releaseId)))
        .collect(Collectors.toList());

// Wait for all futures to complete and map all non-null values to a ReleaseInfo list.
List<ReleaseInfo> releaseInfo = releaseInfoFutures.stream()
        .map(this::getValueAfterFutureCompletion)
        .filter(info -> info != null)
        .collect(Collectors.toList());
private ReleaseInfo getValueAfterFutureCompletion(Future<ReleaseInfo> future) {
    ReleaseInfo releaseInfo = null;
    try {
        releaseInfo = future.get();
    } catch (InterruptedException | ExecutionException e) {
        e.printStackTrace();
    }
    return releaseInfo;
}
Make sure to call shutdownNow() on ThreadPoolExecutor after you're done with it to avoid memory leaks.
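For example, a trivial way to guarantee that (a sketch; whether shutdown() or shutdownNow() is appropriate depends on whether queued tasks should still finish):

ThreadPoolExecutor executor = (ThreadPoolExecutor) Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());
try {
    // ... submit tasks and collect the futures as shown above ...
} finally {
    executor.shutdownNow(); // release the pool's threads so they don't linger after the work is done
}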
Summarize
Goal
I have an application that is written in Java using the Spring framework. There is a service that is being used as the handler for grabbing and releasing locks in the database (InnoDB). My goal is to be able to log the grabbing and releasing of the locks to create a lock history. For each lock interaction, I would like to know not only the name of the lock involved, but also where this request is coming from in the code (if possible, class name, method name, and line number).
My expected database entry will look something like this:
id | lock_name | clazz       | method       | line | lock_date           | unlock_date         | unlock_type
---|-----------|-------------|--------------|------|---------------------|---------------------|------------
0  | tb_member | MemberTools | createMember | 123  | 2021-12-23 10:16:00 | 2021-12-23 10:16:01 | COMMIT
1  | tb_member | MemberTools | editMember   | 234  | 2021-12-23 10:16:01 | 2021-12-23 10:16:02 | COMMIT
I would like to know if there is an easy way to obtain this given that I am using the Spring framework.
Describe
So far, I have tried two things:
Forcing the caller to pass a reference to itself or its current StackTraceElement (using Thread.currentThread().getStackTrace()[1]). This is not only extremely repetitive, but it is also prone to human error, as a developer might not realize that they need to pass in some reference to themselves.
Inside the lock service, using the getStackTrace method and walking through the elements to find the "correct" one. This is made very hard by Spring and the fact that, before a call actually reaches the inside of a class with the @Service annotation, the call stack is muddled by a number of calls between proxies, generated classes, and such. Unless there is a deterministic way to find the number of calls between the service and the caller, this doesn't seem like a good way either.
I have referenced this Stack Overflow question while working, but it does not take into account the usage of the Spring framework.
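For what it's worth, here is a hedged sketch of the second approach using the Java 9+ StackWalker API, skipping frames whose class names look like framework or proxy classes. The package prefixes, the "$$" proxy check, and the CallerFinder class are assumptions for illustration, not something Spring guarantees:

import java.util.List;
import java.util.Optional;

public final class CallerFinder {

    // Frames whose class names start with these prefixes are treated as framework noise.
    private static final List<String> IGNORED_PREFIXES = List.of(
            "org.springframework.", "jdk.internal.", "java.lang.reflect.", "com.sun.proxy.");

    /** Returns the first stack frame that belongs neither to the lock service nor to the framework. */
    public static Optional<StackWalker.StackFrame> findCaller(Class<?> lockServiceClass) {
        return StackWalker.getInstance().walk(frames ->
                frames.filter(f -> !f.getClassName().equals(lockServiceClass.getName()))
                      .filter(f -> !f.getClassName().contains("$$")) // CGLIB/proxy-generated classes
                      .filter(f -> IGNORED_PREFIXES.stream().noneMatch(p -> f.getClassName().startsWith(p)))
                      .findFirst());
    }
}

The returned frame's getClassName(), getMethodName(), and getLineNumber() would feed the clazz, method, and line columns above.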
Show
A reproducible example will look something like this. First, the structure:
root\
  LockService.java
    getLock()
  MemberTools.java
    createMember()
LockService.java:
@Service
public class LockService {

    @Transactional
    public Lock getLock(String key) {
        Lock searchLock = new Lock();
        searchLock.setKey(key);
        lockMapper.getLock(searchLock);

        LockHistory lockHistory = new LockHistory();
        // Fill out lockHistory object...
        lockMapper.markAsLocked(lockHistory);

        attachTransactionCompletedListener(lockHistory);
        return searchLock;
    }

    private void attachTransactionCompletedListener(LockHistory lockHistory) {
        /* Attach a listener onto the current Spring transaction so that we
         * can update the database entry when the transaction finishes and
         * the lock is released.
         */
    }
}
MemberTools.java:
public class MemberTools {

    @Autowired
    LockService lockService;

    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public void createMember() {
        lockService.getLock("tb_member");
        /* Do create member stuff...
         * When this returns, the lock will be released
         * (either from COMMIT, ROLLBACK, or UNKNOWN Spring error)
         */
    }
}
By the time the getLock() method is reached, the stack trace is muddled with many calls that Spring inserts (proxies, reflections, etc.). Putting a breakpoint in this function and examining Thread.currentThread().getStackTrace() will show this.
Background
I am using Realm within my app. When data is loaded it undergoes intense processing, so the processing happens on a background thread.
The coding pattern in use is the Unit of Work pattern and Realm only exists within a repository under a DataManager. The idea here is that each repository can have a different database/file storage solution.
What I have tried
Below is an example of some code similar to what I have in my FooRepository class.
The idea here is that an instance of Realm is obtained, used to query the realm for objects of interest, return them and close the realm instance. Note that this is synchronous and at the end copies the objects from Realm to an unmanaged state.
public Observable<List<Foo>> getFoosById(List<String> fooIds) {
    Realm realm = Realm.getInstance(fooRealmConfiguration);
    RealmQuery<Foo> findFoosByIdQuery = realm.where(Foo.class);
    for (String id : fooIds) {
        findFoosByIdQuery.equalTo(Foo.FOO_ID_FIELD_NAME, id);
        findFoosByIdQuery.or();
    }
    return findFoosByIdQuery
            .findAll()
            .asObservable()
            .doOnUnsubscribe(realm::close)
            .filter(RealmResults::isLoaded)
            .flatMap(foos -> Observable.just(new ArrayList<>(realm.copyFromRealm(foos))));
}
This code is later used in conjunction with the heavy processing code via RxJava:
dataManager.getFoosById(foo)
    .flatMap(this::processtheFoosInALongRunningProcess)
    .subscribeOn(Schedulers.io()) //could be Schedulers.computation() etc
    .subscribe(tileChannelSubscriber);
After reading the docs, my belief is that the above should work, as it is NOT asynchronous and therefore does not need a Looper thread. I obtain the Realm instance within the same thread, so it is not being passed between threads, and neither are the objects.
The problem
When the above is executed I get
Realm access from incorrect thread. Realm objects can only be accessed
on the thread they were created.
This doesn't seem right. The only thing I can think of is that the pool of Realm instances is getting me an existing instance created from another process using the main thread.
Kay so
return findFoosByIdQuery
.findAll()
.asObservable()
This happens on the UI thread, because that's where you're calling it from initially.
.subscribeOn(Schedulers.io())
Aaaaand then you're tinkering with them on Schedulers.io().
Nope, that's not the same thread!
As much as I dislike the approach of copying from a zero-copy database, your current approach is riddled with issues due to misuse of realmResults.asObservable(), so here's a spoiler for what your code should be:
public Observable<List<Foo>> getFoosById(List<String> fooIds) {
    return Observable.defer(() -> {
        try (Realm realm = Realm.getInstance(fooRealmConfiguration)) { // try-finally also works
            RealmQuery<Foo> findFoosByIdQuery = realm.where(Foo.class);
            for (String id : fooIds) {
                findFoosByIdQuery.equalTo(FooFields.ID, id);
                findFoosByIdQuery.or(); // please guarantee this works?
            }
            RealmResults<Foo> results = findFoosByIdQuery.findAll();
            return Observable.just(realm.copyFromRealm(results));
        }
    }).subscribeOn(Schedulers.io());
}
Note that you are creating the Realm instance outside of your RxJava processing pipeline, thus on the main thread (or whichever thread you are on when calling getFoosById()).
Just because the method returns an Observable doesn't mean that it runs on another thread. Only the processing pipeline of the Observable created by the last statement of your getFoosById() method runs on the correct thread (the filter(), the flatMap(), and all the processing done by the caller).
You therefore have to ensure that the call to getFoosById() is already made on the thread used by Schedulers.io().
One way to achieve this is by using Observable.defer():
Observable.defer(() -> dataManager.getFoosById(foo))
    .flatMap(this::processtheFoosInALongRunningProcess)
    .subscribeOn(Schedulers.io()) //could be Schedulers.computation() etc
    .subscribe(tileChannelSubscriber);
In my service code, I am trying to create or update a Person domain object:
@Transactional
def someServiceMethod(some params....) {
    try {
        def person = Person.findByEmail(nperson.email.toLowerCase())
        if (!person) {
            person = new Person()
            person.properties = nperson.properties
        } else {
            // update the person parameters (first/last name)
            person.firstName = nperson.firstName
            person.lastName = nperson.lastName
            person.phone = nperson.phone
        }
        if (person.validate()) {
            person.save(flush: true)
            //... rest of code
        }
        // rest of other code....
    } catch (e) {
        log.error("Unknown error: ${e.getMessage()}", e)
        e.printStackTrace()
        return null
    }
}
Now the above code OCCASIONALLY throws the following exception when trying to save a Person object with an already existing email:
Hibernate operation: could not execute statement; SQL [n/a]; Duplicate entry 'someemail@gmail.com' for key 'email_UNIQUE'; nested exception is com.mysql.jdbc.exceptions.jdbc4.MySQLIntegrityConstraintViolationException: Duplicate entry 'someemail@gmail.com' for key 'email_UNIQUE'
This is very strange, because I am already finding the person by email, so save() should update the existing record instead of creating a new one.
I was wondering why is this happening!
EDIT:
I am on grails 2.4.5 and Hibernate plugin in BuildConfig is:
runtime ':hibernate4:4.3.8.1'
EDIT2:
My application runs on multiple servers, hence a synchronized block won't work.
If this is a concurrency issue, here is what we do in such cases. We have a lot of concurrent background processes which work on the same tables. Such an operation indeed goes inside a synchronized block, so the code may look like:
class SomeService {
    static transactional = false // service cannot be transactional

    private Object someLock = new Object() // a synchronized block on some object must be used

    def someConcurrentSafeMethod() {
        synchronized (someLock) {
            def person = Person.findByEmail(nperson.email.toLowerCase())
            ...
            person.save(flush: true) // flush is very important, must be done in the synchronized block
        }
    }
}
There are a few important points to make this work (from our experience, not official):
Service cannot be transactional - if the service is transactional, the transaction is committed after the method returns a value, and synchronization inside the method will not be enough. Programmatic transactions may be another way.
A synchronized method is not enough - synchronized def someConcurrentSafeMethod() will not work, probably because the service is wrapped in a proxy.
The session MUST be flushed inside the synchronized block.
Every object which will be saved should be read inside the synchronized block; if you pass it in from an external method, you may run into an optimistic locking failed exception.
UPDATED
Because the application is deployed on a distributed system, the above will not solve the issue here (it still may help others). After the discussion we had on Slack, I'll just summarize potential ways to do that:
pessimistic locking of updated objects and locking of the whole table for inserts (if possible)
moving 'dangerous' database-related methods to a single server behind some API like REST and calling it from the other deployments (using the synchronized approach from above)
using a catch-and-retry approach - if the operation fails, catch the exception and try again (see the sketch at the end of this answer). This is supported by integration libraries like Spring Integration or Apache Camel and is one of the enterprise integration patterns. See request-handler-advice-chain for Spring Integration as an example
using something to queue the operations, for example a JMS server
If anyone has more ideas please share them.
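For illustration only, here is a bare-bones sketch of the catch-and-retry option in plain Java/Spring terms; PersonRepository, the Person constructor and setters are hypothetical stand-ins, not the Grails code from the question:

import org.springframework.dao.DataIntegrityViolationException;

public class PersonSaver {

    private static final int MAX_ATTEMPTS = 3;

    private final PersonRepository personRepository; // hypothetical repository

    public PersonSaver(PersonRepository personRepository) {
        this.personRepository = personRepository;
    }

    // Try the create-or-update a few times; on a duplicate-key failure, re-read and retry,
    // because by then the row exists and the next attempt becomes an update.
    // In a real setup each attempt should run in its own (new) transaction.
    public Person createOrUpdate(String email, String firstName, String lastName) {
        DataIntegrityViolationException last = null;
        for (int attempt = 0; attempt < MAX_ATTEMPTS; attempt++) {
            try {
                Person person = personRepository.findByEmail(email);
                if (person == null) {
                    person = new Person(email);
                }
                person.setFirstName(firstName);
                person.setLastName(lastName);
                return personRepository.save(person);
            } catch (DataIntegrityViolationException e) {
                last = e; // another node inserted the same email first; retry as an update
            }
        }
        throw last;
    }
}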
If two threads are accessing this method on server, will it be thread safe? The threads are coming from GWT timer.
public UserDTO getUserFromSession()
{
    UserDTO user = null;
    HttpServletRequest httpServletRequest = this.getThreadLocalRequest();
    HttpSession session = httpServletRequest.getSession();
    Object userObj = session.getAttribute("user");
    if (userObj != null && userObj instanceof UserDTO)
    {
        user = (UserDTO) userObj;
    }
    return user;
}
A method is thread safe if it doesn't access shared variables external to the method.
The problem in your code could be on this line:
HttpServletRequest httpServletRequest = this.getThreadLocalRequest();
because this.getThreadLocalRequest() seems to access a shared variable.
To be sure, post the whole class, but from what I can see it is not thread safe.
Even after the comment explaining that getThreadLocalRequest returns an HttpServletRequest safely, the code remains not thread safe.
In fact, HttpSession is not thread safe according to this article: basically the session can change during code execution.
For example, you can return the user even after the session has been invalidated.
Imagine these steps:
thread 1                                             thread 2
----------------------------------------------       --------------
Object userObj = session.getAttribute("user");
                                                      session.invalidate();
if (userObj != null && userObj instanceof UserDTO) {
    user = (UserDTO) userObj;
}
return user;
At the end you return a user even if the session was invalidated by another thread.
This method in and of itself is harmless. It would be harmless even if you did not have a thread-local request. The only problem I see is the off chance that you retrieve the "user" attribute while it is set, and another thread wipes the "user" attribute clean before the first thread can exit the method. You'd be dealing with a user instance in one thread, while in the other you might be performing logic differently because the "user" attribute is no longer defined.
That said, I sincerely doubt that any problems would arise, since these are all methods that read and don't write, with no side effects. Just be mindful of the fact that several threads could be (and probably will be) handling the same instance of user, so you'll want to keep thread-sensitive operations on user under a synchronized block in that case.
Yes, it is threadsafe as far as only your given method is concerned.
getThreadLocalRequest() is local to your current thread and getSession() is threadsafe as well.
Even getting the userObj from the session should not cause issues.
But after all multiple calls could access the same UserDTO object.
Therefore you need to make sure that either possible changes in this object are done in a threadsafe way or that the object is immutable.
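As an illustration of the immutable option (the fields here are made up, since the real UserDTO is not shown):

public final class UserDTO implements java.io.Serializable {
    private final String username;
    private final String email;

    public UserDTO(String username, String email) {
        this.username = username;
        this.email = email;
    }

    public String getUsername() { return username; }
    public String getEmail() { return email; }
    // No setters: once placed in the session, the object cannot be changed by another thread.
}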
The method looks threadsafe, but it isn't - in a more subtle way:
While getSession() itself is safe, the session and its contents are not.
The session you were looking for can go away at any time. It is not enough to examine only this method; all other session-dependent code must be examined as well.
In a high-load situation, you need to take care that your get-user method does not recreate the session on the fly.
getSession(false) takes care of this. You then need a null check on the returned session and must abort your call in that case.
The user object, as stated by others before, is another responsibility.
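A small sketch of that getSession(false) variant, assuming the caller can handle a null result when there is no session:

public UserDTO getUserFromSession() {
    HttpServletRequest httpServletRequest = this.getThreadLocalRequest();
    // Pass false so that no session is created on the fly; null means there is no session.
    HttpSession session = httpServletRequest.getSession(false);
    if (session == null) {
        return null; // abort: the session was never created or has already been invalidated
    }
    Object userObj = session.getAttribute("user");
    return (userObj instanceof UserDTO) ? (UserDTO) userObj : null;
}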