Alternative to Thread.sleep() for better performance - java

I have developed a REST service. The service has one API endpoint: v1/customer. This API
does two things:
It executes the business logic in the main thread.
It spawns a child thread to perform the non-critical DB writes. The main thread returns the response to the client immediately, while the child thread writes to the DB asynchronously.
As steps 1 and 2 are not synchronized with each other, it is becoming increasingly challenging to test both of these scenarios.
When I test the API, I am testing two things (the API response and the DB writes).
As the DB writes happen asynchronously, I have to use a Thread.sleep(2000). This approach is not scalable and doesn't yield reliable results. Soon I might have 1000 test cases to run, and the total time to run them all will increase enormously.
What design technique should I use to test the DB writes, keeping performance and execution time in mind?

I would suggest changing your API design if possible. One possible solution could be to have your first API call respond with HTTP 202 Accepted and return some kind of job ID to the client. With this job ID, the client could check the progress via a GET on another endpoint. This would allow you to poll in your test without hardcoding sleep values.
Here is an example that shows the process in a bit more detail.
https://restfulapi.net/http-status-202-accepted/
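Even before redesigning the API, the fixed sleep in the tests can be replaced with a bounded poll. Below is a minimal sketch using the Awaitility test library; the endpoint helper and repository names are assumptions, not part of the original question.

import static org.awaitility.Awaitility.await;

import java.time.Duration;
import org.junit.jupiter.api.Test;

class CustomerApiTest {

    @Test
    void asyncDbWriteBecomesVisible() {
        // Call the endpoint; the response returns before the async DB write completes.
        String customerId = createCustomerViaApi(); // hypothetical helper that POSTs to v1/customer

        // Poll for the row instead of Thread.sleep(2000): the test finishes as soon as
        // the write lands (often milliseconds); the 5 seconds is only an upper bound.
        await().atMost(Duration.ofSeconds(5))
               .pollInterval(Duration.ofMillis(50))
               .until(() -> customerRepository.existsById(customerId)); // hypothetical Spring Data repository
    }
}

Across 1000 test cases this waits only as long as each write actually takes, instead of a fixed 2 seconds per test.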

Related

How to make multiple calls to a @Transactional method run in a single transaction

I have a method
@Transactional
public void updateSharedStateByCommunity(List[] idList)
This method is called from the following REST API:
@RequestMapping(method = RequestMethod.POST)
public ret_type updateUser(param) {
    // call updateSharedStateByCommunity
}
Now the ID lists are very large, like 200000. When I try to process such a list, it takes a lot of time and a timeout error occurs on the client side.
So, I want to split it into two calls with a list size of 100000 each.
But the problem is that the two calls are then treated as 2 independent transactions.
NB: The 2 calls are just an example; the list can be split into many more calls if the number of IDs is larger.
I need the two separate calls to run in a single transaction. If either of the 2 calls fails, all operations should be rolled back.
Also, on the client side we need to show a progress dialog, so I can't just increase the timeout.
The most obvious direct answer to your question IMO is to slightly change the code:
@RequestMapping(method = RequestMethod.POST)
public ret_type updateUser(param) {
    updateSharedStateByCommunityBlocks(resolveIds);
}
...
And in the service, introduce a new method (if you can't change the code of the service, provide an intermediate class that you call from the controller) with the following functionality:
@Transactional
public void updateSharedStateByCommunityBlocks(List<String> resolveIds) {
    List<List<String>> blocks = split(resolveIds, 100000); // 100000 - bulk size
    for (List<String> block : blocks) {
        updateSharedStateByCommunity(block);
    }
}
If this method is in the same service, the @Transactional on the original updateSharedStateByCommunity won't do anything, because self-invocation bypasses the Spring proxy, so it will work. If you put this code into some other class, it will also work, since the default propagation level of Spring transactions is REQUIRED, so the inner calls join the transaction opened here.
So it addresses the hard requirement: you wanted a single transaction, and you've got it. All the code now runs in one transaction, each inner call processes 100000 IDs rather than all of them, and everything is synchronous :)
However, this design is problematic for many different reasons.
It doesn't allow you to track progress (and show it to the user), as you stated yourself in the last sentence of the question. REST is synchronous here.
It assumes that the network is reliable and that waiting for 30 minutes is technically not a problem (leaving aside the UX and the 'nervous' user who has to wait :) )
In addition, network equipment can forcibly close the connection (like load balancers with a pre-configured request timeout).
That's why people suggest some kind of asynchronous flow.
You can still use the async flow: spawn the task and, after each bulk, update some shared state (in-memory in the case of a single instance, persistent like a database in the case of a cluster).
So the interaction with the client changes:
Client calls "updateUser" with 200000 IDs.
Service responds "immediately" with something like "I've got your request, here is a request ID, ping me once in a while to see what happens."
Service starts an async task and processes the data chunk by chunk in a single transaction.
Client calls the "get" method with that ID, and the server reads the progress from the shared state.
Once ready, the "get" method responds with "done".
If something fails during the transaction execution, the rollback is done, and the process updates the database status to "failure".
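A rough sketch of that flow, assuming Spring's @Async support (names are illustrative; the in-memory map only works for a single instance, so use a database row in a cluster):

import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.scheduling.annotation.Async;
import org.springframework.stereotype.Service;

@Service
public class BulkUpdateService {

    private final Map<String, String> progress = new ConcurrentHashMap<>();

    @Autowired
    private CommunityService communityService; // the bean owning the @Transactional chunked method above

    @Async // runs on Spring's task executor; requires @EnableAsync on a configuration class
    public void process(String requestId, List<String> ids) {
        progress.put(requestId, "IN_PROGRESS");
        try {
            communityService.updateSharedStateByCommunityBlocks(ids); // one transaction for all chunks
            progress.put(requestId, "DONE");
        } catch (Exception e) {
            progress.put(requestId, "FAILED"); // the transaction has been rolled back
        }
    }

    public String status(String requestId) {
        return progress.getOrDefault(requestId, "UNKNOWN"); // backs the client's GET endpoint
    }
}

The controller generates a request ID, calls process(requestId, ids), and returns the ID immediately; per-chunk progress reporting would go inside the loop over blocks.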
You can also use more modern technologies to notify the server (web sockets for example), but it's kind of out of scope for this question.
Another thing to consider here: from what I know, processing 200000 objects should take much less than 30 minutes; it's not that much for modern RDBMSs.
Of course, without knowing your use case it's hard to tell what happens there, but maybe you can optimize the flow itself (using bulk operations, reducing the number of requests to the DB, caching, and so forth).
My preferred approach in these scenarios is to make the call asynchronous (Spring Boot allows this using the @Async annotation), so the client doesn't wait for the operation to complete within the HTTP response. Notification could be done via a WebSocket that pushes a message to the client with the progress every X items processed.
Surely it will add more complexity to your application, but if you design the mechanism properly, you'll be able to reuse it for any other similar operation you may face in the future.
The @Transactional annotation accepts a timeout (although not all underlying implementations will support it). I would argue against trying to split the IDs into two calls, and instead try to fix the timeout (after all, what you really want is a single, all-or-nothing transaction). You can set timeouts for the whole application instead of on a per-method basis.
From a technical point of view, this can be done with org.springframework.transaction.annotation.Propagation#NESTED propagation. The NESTED behavior makes nested Spring transactions use the same physical transaction, but sets savepoints between nested invocations, so inner transactions may roll back independently of the outer transaction (or let the failure propagate). The limitation is that this only works with the org.springframework.jdbc.datasource.DataSourceTransactionManager.
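A sketch of what that looks like (the two methods must live in different beans for the inner annotation to take effect, since self-invocation bypasses the Spring proxy):

import java.util.List;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

@Transactional
public void updateSharedStateByCommunityBlocks(List<String> resolveIds) {
    for (List<String> block : split(resolveIds, 100000)) {
        communityService.updateSharedStateByCommunity(block); // each block runs behind its own savepoint
    }
}

// In the other bean:
@Transactional(propagation = Propagation.NESTED)
public void updateSharedStateByCommunity(List<String> idList) {
    // Same physical transaction as the caller, but guarded by a JDBC savepoint:
    // a failure here can roll back to the savepoint without aborting the outer transaction.
}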
But for a really large dataset it would still need a long time to process, keeping the client waiting, so from a solution point of view an async approach may be better; it depends on your requirements.

Serving single HTTP request with multiple threads

An Angular 4 application sends a list of records to a Java Spring MVC application deployed in a WebSphere 8 servlet container. The list is then inserted into a temp table. After the batch insert, a procedure call is made in order to do some calculations and return results. Depending on the size of the list inserted into the temp table, it may take anywhere from 3,000 ms (N ~ 500) to 6,000 ms (N ~ 1000) to 50,000+ ms (N > 2000).
My approach would be to create chunks of data and send them to the database for processing simultaneously. After the threads (Futures) return their results, I would aggregate them and return them to the client. To sum up, I would split a synchronous call into multiple asynchronous, simultaneously executed processes, and return the result to the client over the same thread that received the HTTP call, i.e. the one that landed in my controller.
Everything would be fine, and I would not be asking this question, if a more experienced colleague of mine did not strongly disagree with this approach. His reasoning is that this approach is prone to exceptions due to thread interrupts/timeouts/semaphores and so on. He goes as far as saying that multithreading should be avoided within a web container because it can crash the servlet container if it runs out of threads.
He proposes that we should have the browser send multiple AJAX requests and aggregate/present the data in chunks.
Can you please help me understand which approach is better and why?
I would say that your approach is much better.
Threads created by application logic aren't application container threads; they are limited only by the operating system, while each AJAX request uses a thread from the application container. So the second approach reduces throughput and increases the chance of hitting the application container's thread limit, while the first one does not. Performance should also be considered, because it's much cheaper to create a thread than to send a request over the network. Plus, each network request uses additional resources for authentication/authorization/encryption etc.
It's definitely harder to write correct multithreaded code, and it can easily be prone to errors. However, that shouldn't stop you from doing it, because concurrency can significantly increase your performance. It's pretty straightforward to handle interrupts and timeouts using Future, and you certainly don't need semaphores here.
Exposing this logic to the client looks like a breach of encapsulation. Imagine having to use a REST API which forces you to send multiple requests by splitting your data into chunks. What chunk size should I use? How do I deal with timeouts/interrupts? How many requests should I send? etc. You will face almost the same challenges in both approaches, but it's much easier to deal with them using libraries specially designed for this, like ExecutorService and Future.
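For illustration, here is a sketch of the chunk-and-aggregate approach using ExecutorService and Future; the chunk size, partition helper, DAO, and Record/Result types are assumptions:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public List<Result> processInChunks(List<Record> records) throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(4); // size to what the database can absorb
    try {
        List<Callable<List<Result>>> tasks = new ArrayList<>();
        for (final List<Record> chunk : partition(records, 500)) { // hypothetical partition helper
            tasks.add(new Callable<List<Result>>() {
                @Override
                public List<Result> call() {
                    return dao.insertAndCalculate(chunk); // temp-table insert + procedure call per chunk
                }
            });
        }
        List<Result> aggregated = new ArrayList<>();
        // invokeAll blocks the controller thread until every chunk completes or the timeout expires
        for (Future<List<Result>> f : pool.invokeAll(tasks, 60, TimeUnit.SECONDS)) {
            aggregated.addAll(f.get()); // rethrows any chunk's failure; timed-out tasks throw here too
        }
        return aggregated;
    } finally {
        pool.shutdown();
    }
}

In practice the pool would be a shared, container-managed resource rather than created per request.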

How to process multiple API calls from the same client one by one in a scalable, highly concurrent and fault-tolerant system

We have web service APIs to support clients running on tens of millions of devices. Normally, clients call the server once a day; that is about 116 clients seen per second. Each client (each with a unique ID) may make several API calls concurrently. However, the server can only process API calls from the same client one by one, because those calls update the same document for that client in the backend MongoDB database. For example, they need to update the last-seen time and other embedded documents within this client's document.
One solution I have is to put a synchronized block on an "interned" object representing the client's unique ID. That allows only one request from the same client to obtain the lock and be processed at a time, while requests from other clients can still be processed in parallel. But this solution requires turning on the load balancer's "stickiness", meaning the load balancer routes all requests from the same IP address to a specific server within a preset time interval (e.g. 15 minutes). I am not sure whether this has any impact on the robustness of the whole system design. One thing I can think of is that some clients may make more requests than others and leave the load unbalanced (creating hotspots).
Solution #1:
Interner<Key> myIdInterner = Interners.newWeakInterner();

public ResponseType1 processApi1(String clientUniqueId, RequestType1 request) {
    synchronized (myIdInterner.intern(new Key(clientUniqueId))) {
        // code to process request
    }
}

public ResponseType2 processApi2(String clientUniqueId, RequestType2 request) {
    synchronized (myIdInterner.intern(new Key(clientUniqueId))) {
        // code to process request
    }
}
You can see my other question for this solution in detail: Should I use Java String Pool for synchronization based on unique customer id?
The second solution I am thinking of is to somehow lock the document (MongoDB) of that client (I have not found a good example of how to do that yet). Then I wouldn't need to touch the load balancer settings. But I have concerns about this approach, as I think the performance (round trips to the MongoDB server and busy waiting?) would be much worse compared to solution #1.
Solution #2:
public ResponseType1 processApi1(String clientUniqueId, RequestType1 request) {
    obtainDocumentLock(new Key(clientUniqueId)); // acquire before try, so finally never releases a lock we don't hold
    try {
        // code to process request
    } finally {
        releaseDocumentLock(new Key(clientUniqueId));
    }
}

public ResponseType2 processApi2(String clientUniqueId, RequestType2 request) {
    obtainDocumentLock(new Key(clientUniqueId));
    try {
        // code to process request
    } finally {
        releaseDocumentLock(new Key(clientUniqueId));
    }
}
I believe this is a very common issue in scalable, highly concurrent systems. How do you solve it? Are there other options? What I want to achieve is to process only one request at a time among the requests coming from the same client. Note that just controlling read/write access to the database does not work; the solution needs to guarantee exclusive processing of the whole request.
For example, there are two requests: request #1 and request #2. Request #1 reads the client's document, updates one field of sub-document #5, and saves the whole document back. Request #2 reads the same document, updates one field of sub-document #8, and saves the whole document back. At that moment we get an OptimisticLockingFailureException, because we use the @Version annotation from spring-data-mongodb to detect version conflicts. So it is imperative to process only one request from the same client at any time.
P.S. Any suggestions on choosing between solution #1 (locking within a single process/instance, with load balancer stickiness turned on) and solution #2 (a distributed lock) for a scalable, highly concurrent system design? The goal is to support tens of millions of clients, with hundreds of clients accessing the system concurrently each second.
In your solution you are splitting the lock by customer ID, so two different customers can be served at the same time. The only problem is the sticky session. One solution is to use a distributed lock, so you can dispatch any request to any server, and the server that gets the lock processes it. The one consideration is that it involves remote calls. We are using Hazelcast/Ignite and it is working very well for an average number of nodes.
Hazelcast
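For example, with the Hazelcast 3.x API the distributed variant could look roughly like this (in Hazelcast 4+ the distributed ILock was replaced by FencedLock in the CP subsystem):

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.ILock;

HazelcastInstance hz = Hazelcast.newHazelcastInstance(); // joins (or starts) the cluster
ILock lock = hz.getLock("client:" + clientUniqueId);     // one cluster-wide lock per client ID
lock.lock();
try {
    // code to process request; only one node in the cluster enters this block per client
} finally {
    lock.unlock();
}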
Why not just create a processing queue in MongoDB to which you submit client request documents, with another server process that consumes them and produces a resulting document that the client waits for? Synchronize the data by clientId and keep that activity out of the API submission step. The second part of the client submission activity (when finished) just polls MongoDB for consumed records, looking for their API/clientId and some job tag. That way you can scale out the API submission and, separately, the API consumption activities on separate servers etc.
One obvious approach is simply to implement the full optimistic locking algorithm on your end.
That is, you sometimes get an OptimisticLockingFailureException when there are concurrent modifications, but that's fine: just re-read the document and start the failed modification over again. You get the same effect as if you had used locking. Essentially you are leveraging the concurrency control already built into MongoDB. This also has the advantage of letting several transactions from the same client go through if they don't conflict (e.g., one is a read, or they write to different documents), potentially increasing the concurrency of your system. On the other hand, you have to implement the retry logic, as sketched below.
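A minimal sketch of that retry loop (the entity, repository, and method names are made up; spring-data-mongodb performs the @Version check inside save):

import java.util.function.Consumer;
import org.springframework.dao.OptimisticLockingFailureException;

private static final int MAX_RETRIES = 5;

public void updateClientDocument(String clientUniqueId, Consumer<ClientDoc> change) {
    for (int attempt = 0; attempt < MAX_RETRIES; attempt++) {
        try {
            ClientDoc doc = repository.findByClientUniqueId(clientUniqueId); // fresh read each attempt
            change.accept(doc);   // apply only this request's modification
            repository.save(doc); // throws if another request saved a newer @Version in the meantime
            return;
        } catch (OptimisticLockingFailureException e) {
            // lost the race: loop around, re-read the latest document, and re-apply
        }
    }
    throw new IllegalStateException("Update failed after " + MAX_RETRIES + " retries");
}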
If you do want to lock on a per-client basis (or per-document, or whatever else) and your server is a single process (which is implied by your suggested approach), you just need a lock manager that works on arbitrary String keys; this has several reasonable solutions, including the Interner one you mentioned.

Designing Orchestrator In Java using disruptor and Executors

We are designing an orchestrator system in Java that will host a web service. A request to it invokes a flow written in XML, which is nothing but steps that are executed one after another; the XML tells the user what the flow is, and he can change the flow by changing the XML, e.g. add a new service and insert it into the XML. But while designing this, I am now torn between options like the following.
Should I make each service a Runnable with a blocking queue and keep it alive all the time by scheduling it on the executor, so that when a new request arrives I push it into the blocking queue and the service processes it? This also means creating a mailbox to carry the message passing between the different services.
Instead of making each service a Runnable, should I use Spring IoC, which makes each class a singleton, so only one instance exists, and send requests directly to the service methods? Then there is none of the hassle of managing threads, and no mailbox to implement.
I read about how event-driven systems like Node.js and nginx are faster, so I was thinking of using the Disruptor framework: push all incoming requests onto the ring buffer, then write a handler that emits an event to the orchestrator, which starts processing the request, and pass a callback along with the request so that when the orchestrator is done it sends the response back to the caller via the callback. But as the requests are not all of the same type, I would not be able to take advantage of Disruptor's object preallocation.
The system needs to provide maximum throughput with low latency. New services/flows will be added to the XML in the future, and the system should adopt them without hurting performance. I am only allowed to use Java 7, no Scala, so I am constrained.
Answer #1 is a terrible idea. You will effectively tie up a thread per service. If the number of services exceeds the number of threads backing the executor service, you have an instant, automatic DoS. Plus, if the services are inter-dependent on each other, consider all the ways in which you can deadlock. Plus the inefficiency of using N threads when only M (< N) are actually required.
Answer #2: if the proposed flow is Request Parsing -> Dispatch -> Service Processing -> Callbacks, you rely on the actual 'services' not to foul up, because that would prevent callbacks from being called and/or DoS the application. Essentially: what happens if an exception occurs in a service? Will that also impact future requests to the same service and/or other services?
Also, the opportunities for parallelism are limited to the framework's way of handling incoming requests. Meaning, if you have X requests and the framework inherently processes them serially, you get a backlog of X requests. Your latency requirements may be hard to meet in such a scenario.
Answer #3: an event-driven system is indeed the better approach: have a scheduler farm out jobs to an executor service, allowing the system to distribute the total load of all services dynamically, and have a mechanism to generate and react to events to handle the control flow. This scales better as the number of services grows large, provided each 'job' is reasonably substantial (so the overhead of scheduling/dispatch is low compared to the actual work performed).
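A Java 7-compatible sketch of answer #3 (the Step interface and the flow representation are assumptions): a single shared executor pools threads across all services, instead of dedicating a live thread to each one.

import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class Orchestrator {

    public interface Step {
        String execute(String input) throws Exception;
    }

    private final ExecutorService pool =
            Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());

    // One job per request: the steps parsed from the XML flow run one after another,
    // but many requests are in flight concurrently on the shared pool.
    public Future<String> submit(final List<Step> flow, final String request) {
        return pool.submit(new Callable<String>() {
            @Override
            public String call() throws Exception {
                String payload = request;
                for (Step step : flow) {
                    payload = step.execute(payload); // each step transforms the payload
                }
                return payload;
            }
        });
    }
}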

Why .now()? (Objectify)

Why would I ever want to load an Objectify entity asynchronously? And what does asynchronous loading actually mean?
According to the Objectify documentation about loading, the following way of loading an entity is asynchronous:
// Simple key fetch, always asynchronous
Result<Thing> th = ofy().load().key(thingKey);
And if I want the load to be performed synchronously, then I should do this:
Thing th = ofy().load().key(thingKey).now(); // added .now()
To me, asynchronous means that the action will take place later at some unspecified time. For saving, asynchronous makes sense because the datastore operation may need some time to finish on its own without blocking the application code.
But with loading, does asynchronous mean the load will take place at another time? How is that even possible in Java? I thought the variable Result<Thing> th had to be updated when the line of code Result<Thing> th = ofy().load().key(thingKey); finishes executing.
As a novice it's taken me a long time to figure this out (see for instance Objectify error "You cannot create a Key for an object with a null @Id" in JUnit).
So I have a few questions:
1] Why would I ever want to load an Objectify entity asynchronously?
2] What does asynchronous loading actually mean?
3] What is the conceptual link between now() for loading and now() for saving?
Synchronous Load (source)
Thing th = ofy().load().key(thingKey).now();
Synchronous Save (source)
ofy().save().entity(thing1).now();
4] Why isn't synchronous the default behavior for saving and loading?
Response from Google Cloud Support to support case 05483551:
“Asynchronous” in the context of Java means the use of “Futures” or Future-like constructs. A Future in java[1] is an object that represents an operation that doesn’t necessarily need to be performed and completed by the time the next line begins executing in the current thread.
A call to an asynchronous function in Java will return a Future immediately, representing the promise that a background “thread” will work on the computation/network call while the next line of the code continues to execute, not needing that result yet. When the method .get() is called on the Future object, either the result is returned, having been obtained in time, or the thread will wait until the result is obtained, passing execution to the next line after the .get() call only once this happens.
In Objectify, Futures were avoided, and instead the Result interface was defined[2], for reasons related to exceptions being thrown that made it painful to develop on the basis of Futures. They work in almost identical fashion, however. Where a regular Future has the method .get(), the Result interface (implemented by several different concrete classes depending on what kind of Objectify call you're doing) has .now(), which retrieves the result or blocks the thread until it's available.
The reason why you might want to load an entity asynchronously is when you have a request handler or API method that needs an Entity later in the function, but has some other computation to do as well, unrelated to the Entity. You can kick off the load for the entity in the first line, obtaining a Result, and then only call .now() on the Result once your other unrelated code has finished its execution. If you waited for the point when you call .now() to actually initiate the load, you might have your response handler/API method just waiting around for the result, instead of doing useful computations.
Finally, the conceptual link between .now() for loading and .now() for saving is that both operations happen in the background and are only finally forced, blocking the execution thread, when .now() is called on the Result-interface-implementing object returned by the call to save() or load().
I hope this has helped explain the asynchronous constructs in Java Objectify for you. If you have any further questions or issues, feel free to include these in your reply, and I'll be happy to help.
Sincerely,
Nick
Technical Solutions Representative
Cloud Platform Support
[1] http://docs.oracle.com/javase/7/docs/api/java/util/concurrent/Future.html
[2] http://objectify-appengine.googlecode.com/svn/trunk/javadoc/com/googlecode/objectify/Result.html
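As a short sketch of the pattern Nick describes (doUnrelatedComputation is a placeholder for work that doesn't need the entity):

Result<Thing> pending = ofy().load().key(thingKey); // the fetch starts now; the call returns immediately
doUnrelatedComputation();                           // runs while the fetch is in flight
Thing thing = pending.now();                        // blocks only if the result isn't ready yet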
Asynchronous operations start a network fetch to the backend and then let your code continue executing. The advantage of async operations is that you can run several of them in parallel:
Result<Thing> th1 = ofy().load().key(thingKey1);
Result<Thing> th2 = ofy().load().key(thingKey2);
Result<Thing> th3 = ofy().load().key(thingKey3);
th1.now();
th2.now();
th3.now();
This executes significantly faster than calling now() immediately each time. Note this is a bad example because you would probably use a batch get (which also parallelizes the operation), but you can have several queries, saves, deletes, etc. running simultaneously.
now() always forces synchronous completion, blocking until done.
The Google Cloud Datastore was designed to give the user a positive relational and non-relational experience, essentially the best of both worlds. Google Datastore is a NoSQL database which offers eventual consistency for improved scalability, but also gives you the option to choose strong consistency.
This article by Google, Balancing Strong and Eventual Consistency with Google Cloud Datastore, will go a long way toward answering some of your questions. It explains the eventual consistency model, which is key to understanding how the Datastore works under the hood in relation to your question.
