Hold thread in spring rest request for long-polling - java

As the title says, in our project we need one thread to notify, or trigger a method of, another thread. This is part of a long-polling implementation. Below I describe and show my implementation.
The requirements are:
UserX sends a request from the client to the server (poll action) immediately after he gets the response to the previous one. The service runs a Spring async method in which a thread immediately checks the cache for new data in the database. I know that a cache is usually used for methods where a specific input is expected to produce a specific output. That is not the case here: I use the cache to reduce database calls, and the output of my method is always different. The cache stores a notification telling me whether I should check the database or not. This check runs in a while loop that ends when the thread finds a notification in the cache to read the database, or when the time expires.
Assume that UserX's thread (poll action) is currently in the while loop, checking the cache.
At that moment UserY (push action) sends some data to the server; the data is stored in the database in a separate thread, and the recipient's userId is also stored in the cache.
So when UserX checks the cache he finds the recipient's id (which in this case equals his own id), breaks the loop, and fetches the data.
In my implementation I use a Google Guava cache, which allows manual writes.
private static Cache<Long, Long> cache = CacheBuilder.newBuilder()
        .maximumSize(100)
        .expireAfterWrite(5, TimeUnit.MINUTES)
        .build();
In the create method I store the id of the user who should read the data.
public void create(Data data) {
    dataRepository.save(data);
    // Guava's Cache has no save(); use put(). Guava also rejects null values,
    // so store the recipient id as the value as well.
    cache.put(data.getRecipient(), data.getRecipient());
    System.out.println("SAVED " + data.getRecipient() + " in " + Thread.currentThread().getName());
}
and here is method of polling data:
@Async
public CompletableFuture<List<Data>> pollData(Long previousMessageId, Long userId) throws InterruptedException {
    // check the db first; if there is new data, no need to go to the loop and wait
    List<Data> data = findRecent(previousMessageId, userId);
    // data not found, so enter the loop for some time
    if (data.size() == 0) {
        short c = 0;
        while (c < 100) {
            // check whether new data was added; if yes, break the loop
            if (cache.getIfPresent(userId) != null) {
                break;
            }
            c++;
            Thread.sleep(1000);
            System.out.println("SEQUENCE: " + c + " in " + Thread.currentThread().getName());
        }
        // check the database at the end of the loop or after breaking out of it
        data = findRecent(previousMessageId, userId);
    }
    // clear the cache entry for that recipient and return the result
    // (Guava's Cache has no clear(key); use invalidate())
    cache.invalidate(userId);
    return CompletableFuture.completedFuture(data);
}
After UserX gets the response he sends a poll request again, and the whole process repeats.
Can you tell me whether this application design for long polling in Java (Spring) is correct, or whether a better way exists? The key point is that when a user makes a poll request, the request should be held open waiting for new data for some time rather than answered immediately. The solution shown above works, but the question is whether it will also work for many users (1000+). I worry about it because pausing threads could slow down other requests once no threads are left in the pool. Thanks in advance for your effort.

Check WebSockets. Spring supports them from version 4 onwards. They don't require the client to initiate polling; instead the server pushes data to the client in real time (a minimal configuration sketch follows the links below).
Check the below:
https://spring.io/guides/gs/messaging-stomp-websocket/
http://www.baeldung.com/websockets-spring
Note: WebSockets open a persistent connection between client and server and thus may use more resources with a large number of users. So, if you are not looking for real-time updates and are fine with some delay, then polling might be the better approach. Also, not all browsers support WebSockets.
Web Sockets vs Interval Polling
Longpolling vs Websockets
In what situations would AJAX long/short polling be preferred over HTML5 WebSockets?
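For reference, a minimal STOMP-over-WebSocket configuration in Spring (Spring 5 style, based on the guides above; the endpoint and destination names are illustrative) looks roughly like this:

@Configuration
@EnableWebSocketMessageBroker
public class WebSocketConfig implements WebSocketMessageBrokerConfigurer {

    @Override
    public void configureMessageBroker(MessageBrokerRegistry config) {
        config.enableSimpleBroker("/topic");              // server -> client destinations
        config.setApplicationDestinationPrefixes("/app"); // client -> server destinations
    }

    @Override
    public void registerStompEndpoints(StompEndpointRegistry registry) {
        registry.addEndpoint("/ws").withSockJS(); // handshake endpoint; SockJS fallback for older browsers
    }
}

The server can then push to connected clients as soon as data arrives, e.g. via SimpMessagingTemplate.convertAndSend, instead of a client holding a poll request open.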
In your current approach, if you are concerned about the large number of threads running on the server for multiple users, you can instead trigger the polling from the front end each time. That way only short-lived request threads are started from the UI, checking for any update in the cache. If there is an update, another call can be made to retrieve the data. However, don't hit the server every second as you are doing now, or you will have high CPU utilization and user request threads may also suffer. You should optimize your timing.
Instead of hitting the cache after a delay of 1 second, 100 times, you can apply a more intelligent algorithm by analyzing the pattern of cache/DB updates over a period of time.
Knowing the pattern, you can trigger the polling with an exponential back-off so that you hit the cache when an update is most likely. That way you hit the cache less frequently and more accurately; a rough sketch follows.
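For illustration, here is one way the fixed one-second sleep in the loop above could be replaced with an exponential back-off; a rough sketch in which the initial delay, the cap, and the deadline are arbitrary assumptions:

// Instead of polling the cache every second, back off exponentially:
// 250 ms, 500 ms, 1 s, 2 s, ... capped at 8 s, until data arrives or time runs out.
long delayMs = 250;
final long maxDelayMs = 8_000;
final long deadline = System.currentTimeMillis() + TimeUnit.SECONDS.toMillis(100);

while (System.currentTimeMillis() < deadline) {
    if (cache.getIfPresent(userId) != null) {
        break; // new data was announced for this user
    }
    Thread.sleep(delayMs);
    delayMs = Math.min(delayMs * 2, maxDelayMs);
}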

Related

MongoDB: How to stop and resume change stream

How to stop a MongoDB change stream temporarily and resume it again?
public Flux<Example> watch() {
    final ChangeStreamOptions changeStreamOptions = ChangeStreamOptions.builder().returnFullDocumentOnUpdate().build();
    return reactiveMongoTemplate.changeStream("collection", changeStreamOptions, Example.class)
            .filter(e -> e.getOperationType() != null)
            .mapNotNull(ChangeStreamEvent::getBody);
}
I'm trying to create a REST endpoint that can stop the change stream for some time while we do database maintenance, and then be invoked again to resume the stream from where it left off, using the resume token.
I found the solution to unsubscribe from/stop the change stream:
Disposable subscription = service.watch()
        .subscribe(exampleService::doSomething);
// cancel the subscription
subscription.dispose();
I am not a MongoDB expert, but this is what I understood from one; I hope I got it right. I am using the plain Java driver API for easier readability:
// 1. Open, consume, close, save token
MongoCursor<ChangeStreamDocument<Document>> cursor = inventoryCollection.watch().iterator();
ChangeStreamDocument<Document> next = cursor.next();
BsonDocument resumeToken = next.getResumeToken();
cursor.close();
// 2. Save the resume token in the database, in case your process goes down for any reason during your pause. Otherwise, you will not know where to start resuming.
...
// 3. When you want to reopen, resume from the resumeToken saved in the DB
cursor = inventoryCollection.watch().resumeAfter(resumeToken).iterator();
The time window from the moment you receive the event until you save it should be very small, but the process may still crash before you save the continuation _id. If you have operations that are sensitive to that time window, they should be idempotent so that replaying an already received event does not affect your data.
It would have been nice for the Mongo server to keep track of the current offset for all change streams and uniquely identify clients. This is not possible at the moment, which is why Mongo provides and asks for the resume token.
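To make step 2 above concrete, here is a hedged sketch of persisting and reloading the token with the plain Java driver; the tokens collection, its _id key, and the availability of ReplaceOptions (driver 3.7+) are all assumptions:

// Persist the resume token in a small side collection so a restarted
// process knows where to resume. Collection name and _id are illustrative.
MongoCollection<Document> tokens = db.getCollection("tokens");
tokens.replaceOne(
        Filters.eq("_id", "inventory-stream"),
        new Document("_id", "inventory-stream")
                .append("token", Document.parse(resumeToken.toJson())),
        new ReplaceOptions().upsert(true));

// Later: load the token back and reopen the stream from that point.
Document saved = tokens.find(Filters.eq("_id", "inventory-stream")).first();
if (saved != null) {
    BsonDocument token = BsonDocument.parse(saved.get("token", Document.class).toJson());
    cursor = inventoryCollection.watch().resumeAfter(token).iterator();
}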

How to scale more than 1 instance and deal with scheduled task in spring?

I have push notifications being sent to Android and iOS applications through Spring Boot every day at 8am Europe/Paris.
If I run multiple instances, the notifications will be sent multiple times. I am thinking of saving the sent notifications to the database and checking it before sending, but I am worried it would still run multiple times. This is what I am doing:
@Component
public class ScheduledTasks {
    private static final Logger log = LoggerFactory.getLogger(ScheduledTasks.class);
    private static final SimpleDateFormat dateFormat = new SimpleDateFormat("HH:mm:ss");
    @Autowired
    private ExpoPushTokenRepository expoPushTokenRepository;
    @Autowired
    private ExpoPushNotificationService expoPushNotificationService;
    @Autowired
    private MessageSource messageSource;

    // TODO: if instances > 1, this will run multiple times; save the sent
    // notifications to the database and prevent multiple sending.
    @Scheduled(cron = "${cron.promotions.notification}", zone = "Europe/Paris")
    public void sendNewPromotionsNotification() {
        List<ExpoPushToken> expoPushTokenList = expoPushTokenRepository.findAll();
        ArrayList<NotifyRequest> notifyRequestList = new ArrayList<>();
        for (ExpoPushToken expoPushToken : expoPushTokenList) {
            NotifyRequest notifyRequest = new NotifyRequest(
                    expoPushToken.getToken(),
                    "This is a test title",
                    "This is a test subtitle",
                    "This is a test body"
            );
            notifyRequestList.add(notifyRequest);
        }
        expoPushNotificationService.sendPushNotificationToList(notifyRequestList);
        log.info("{} Sent push notification to {} users", dateFormat.format(new Date()), expoPushTokenList.size());
    }
}
Does anybody have an idea on how I can prevent that safely?
Quartz would be my mostly database-agnostic solution for the task at hand, but was ruled out, so we are not going to discuss it.
The solution we are going to explore instead makes the following assumptions:
Postgres >= 9.5 is used (because we are going to use SKIP LOCKED, which was introduced in PostgreSQL 9.5).
It is okay to run a native query.
Under these conditions, we can retrieve batches of notifications from multiple running instances of the application through the following query:
SELECT * FROM expo_push_token FOR UPDATE SKIP LOCKED LIMIT 100;
This will retrieve and lock up to 100 entries from the table expo_push_token. If two instances of the application execute this query simultaneously, the received results will be disjoint. 100 is just some sample value. We may want to fine-tune this value for our use case. The locks stay active until the current transaction ends.
After an instance has fetched a batch of notifications, it also has to delete the entries it locked from the table, or otherwise mark them as processed (if we go down this route, we have to modify the query above to filter out already processed entries), and close the current transaction to release the locks. Each instance of the application would then repeat this query until the query returns zero entries.
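For illustration, a minimal sketch of this first variant with Spring Data JPA might look as follows; the entity, repository method, and deleteAll-based cleanup are assumptions, not the only way to do it:

public interface ExpoPushTokenRepository extends JpaRepository<ExpoPushToken, Long> {

    // Each instance grabs a disjoint batch; rows locked by another
    // instance are skipped rather than waited on.
    @Query(value = "SELECT * FROM expo_push_token FOR UPDATE SKIP LOCKED LIMIT 100",
           nativeQuery = true)
    List<ExpoPushToken> lockNextBatch();
}

@Service
public class NotificationClaimService {

    @Autowired
    private ExpoPushTokenRepository repository;

    // Keeps the transaction short: lock a batch, delete it, commit.
    // The caller sends the notifications after this method returns,
    // accepting that a crash between commit and send loses that batch.
    @Transactional
    public List<ExpoPushToken> claimNextBatch() {
        List<ExpoPushToken> batch = repository.lockNextBatch();
        repository.deleteAll(batch);
        return batch;
    }
}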
There is also an alternative approach: an instance first fetches a batch of notifications to send, keeps the database transaction open (thus continuing to hold the locks), sends out its notifications, and only then deletes/updates the entries and closes the transaction.
The two solutions have different strengths/weaknesses:
the first solution keeps the transaction short. But if the application crashes in the middle of sending out notifications, the part of its batch that was not sent out is lost in this run.
the second solution keeps the transaction open, possibly for a long time. If it crashes in the middle of sending out notifications, all entries will be unlocked and its batch will be re-processed, possibly resulting in some notifications being sent out twice.
For this solution to work, we also need some kind of job that fills table expo_push_token with the data we need. This job should run beforehand, i.e. its execution should not overlap with the notification sending process.

How to limit the number of active Spring WebClient calls

I have a requirement where I read a bunch of rows (thousands) from a SQL DB using Spring Batch and call a REST service to enrich the content before writing them to a Kafka topic.
When using the Spring Reactive webClient, how do I limit the number of active non-blocking service calls? Should I somehow introduce a Flux in the loop after I read data using Spring Batch?
(I understand the usage of delayElements and that it serves a different purpose, e.g. when a single GET call brings in a lot of data and you want the server to slow down. Here my use case is a bit different: I have many WebClient calls to make and would like to limit their number to avoid out-of-memory issues, while still gaining the advantages of non-blocking invocations.)
Very interesting question. I pondered about it and I thought of a couple of ideas on how this could be done. I will share my thoughts on it and hopefully there are some ideas here that perhaps help you with your investigation.
Unfortunately, I'm not familiar with Spring Batch. However, this sounds like a problem of rate limiting, or the classical producer-consumer problem.
So, we have a producer that produces so many messages that our consumer cannot keep up, and the buffering in the middle becomes unbearable.
The problem I see is that your Spring Batch process, as you describe it, is not working as a stream or pipeline, but your reactive Web client is.
So, if we were able to read the data as a stream, then as records start getting into the pipeline those would get processed by the reactive web client and, using back-pressure, we could control the flow of the stream from producer/database side.
The Producer Side
So, the first thing I would change is how records get extracted from the database. We need to control how many records get read from the database at the time, either by paging our data retrieval or by controlling the fetch size and then, with back pressure, control how many of those are sent downstream through the reactive pipeline.
So, consider the following (rudimentary) database data retrieval, wrapped in a Flux.
Flux<String> getData(DataSource ds) {
    return Flux.create(sink -> {
        try {
            Connection con = ds.getConnection();
            con.setAutoCommit(false);
            // the three-argument overload is needed here; the two-argument
            // prepareStatement(String, int) would treat TYPE_FORWARD_ONLY as auto-generated keys
            PreparedStatement stm = con.prepareStatement(
                    "SELECT order_number FROM orders WHERE order_date >= '2018-08-12'",
                    ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);
            stm.setFetchSize(1000);
            ResultSet rs = stm.executeQuery();
            sink.onRequest(batchSize -> {
                try {
                    for (int i = 0; i < batchSize; i++) {
                        if (!rs.next()) {
                            // no more data, close resources!
                            rs.close();
                            stm.close();
                            con.close();
                            sink.complete();
                            break;
                        }
                        sink.next(rs.getString(1));
                    }
                } catch (SQLException e) {
                    // TODO: close resources here
                    sink.error(e);
                }
            });
        } catch (SQLException e) {
            // TODO: close resources here
            sink.error(e);
        }
    });
}
In the example above:
I control the amount of records we read per batch to be 1000 by setting a fetch size.
The sink will send the amount of records requested by the subscriber (i.e. batchSize) and then wait for it to request more using back pressure.
When there are no more records in the result set, then we complete the sink and close resources.
If an error occurs at any point, we send back the error and close resources.
Alternatively, I could have used paging to read the data, which would probably simplify the handling of resources since a fresh query is issued at every request cycle.
You may consider also doing something if subscription is cancelled or disposed (sink.onCancel, sink.onDispose) since closing the connection and other resources is fundamental here.
The Consumer Side
On the consumer side you register a subscriber that requests messages 1000 at a time, and it will only request more once it has processed that batch.
getData(source).subscribe(new BaseSubscriber<String>() {
    private int messages = 0;

    @Override
    protected void hookOnSubscribe(Subscription subscription) {
        subscription.request(1000);
    }

    @Override
    protected void hookOnNext(String value) {
        // make http request
        System.out.println(value);
        messages++;
        if (messages % 1000 == 0) {
            // when we're done with a batch
            // then we're ready to request more
            upstream().request(1000);
        }
    }
});
In the example above, when the subscription starts, it requests a first batch of 1000 messages. In hookOnNext we process that first batch, making HTTP requests using the WebClient.
Once the batch is complete, we request another batch of 1000 from the publisher, and so on.
And there you have it! Using back pressure you control how many open HTTP requests you have at a time.
My example is very rudimentary and it will require some extra work to make it production ready, but I believe this hopefully offers some ideas that can be adapted to your Spring Batch scenario.
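As a side note, if the records are already flowing through a Flux as above, Reactor's flatMap with a concurrency argument is another way to cap the number of in-flight calls. A minimal sketch, in which the base URL, the /enrich/{id} endpoint, and the limit of 100 are assumptions:

WebClient client = WebClient.create("http://localhost:8080"); // illustrative base URL

// flatMap subscribes to at most 100 inner publishers at a time and only
// requests more order numbers upstream as in-flight calls complete.
Flux<String> enriched = getData(source)
        .flatMap(orderNumber -> client.get()
                .uri("/enrich/{id}", orderNumber)   // illustrative endpoint
                .retrieve()
                .bodyToMono(String.class), 100);

enriched.subscribe(System.out::println);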

How to send emails from a Java EE Batch Job

I have a requirement to process a large list of users daily and send them email and SMS notifications based on some scenario. I am using the Java EE batch processing model for this. My job XML is as follows:
<step id="sendNotification">
    <chunk item-count="10" retry-limit="3">
        <reader ref="myItemReader"></reader>
        <processor ref="myItemProcessor"></processor>
        <writer ref="myItemWriter"></writer>
        <retryable-exception-classes>
            <include class="java.lang.IllegalArgumentException"/>
        </retryable-exception-classes>
    </chunk>
</step>
MyItemReader's open() method reads all users from the database, and readItem() reads one user at a time using the list iterator. In MyItemProcessor the actual email notification is sent to the user, and then the users are persisted in the database in the MyItemWriter class for that chunk.
@Named
public class MyItemReader extends AbstractItemReader {
    private Iterator<User> iterator = null;
    private User lastUser;

    @Inject
    private MyService service;

    @Override
    public void open(Serializable checkpoint) throws Exception {
        super.open(checkpoint);
        List<User> users = service.getUsers();
        iterator = users.iterator();
        if (checkpoint != null) {
            User checkpointUser = (User) checkpoint;
            System.out.println("Checkpoint Found: " + checkpointUser.getUserId());
            while (iterator.hasNext() && !iterator.next().getUserId().equals(checkpointUser.getUserId())) {
                System.out.println("skipping already read users ... ");
            }
        }
    }

    @Override
    public Object readItem() throws Exception {
        User user = null;
        if (iterator.hasNext()) {
            user = iterator.next();
            lastUser = user;
        }
        return user;
    }

    @Override
    public Serializable checkpointInfo() throws Exception {
        return lastUser;
    }
}
My problem is that the checkpoint stores the last record that was executed in the previous chunk. If I have a chunk with the next 10 users and an exception is thrown in MyItemProcessor for the 5th user, then on retry the whole chunk will be executed and all 10 users will be processed again. I don't want a notification to be sent again to the already processed users.
Is there a way to handle this? How should this be done efficiently?
Any help would be highly appreciated.
Thanks.
I'm going to build on the comments from @cheng. My credit to him here; hopefully my answer provides additional value in organizing and presenting the options usefully.
Answer: Queue up messages for another MDB to get dispatched to send emails
Background:
As @cheng pointed out, a failure means the entire transaction is rolled back, and the checkpoint doesn't advance.
So how to deal with the fact that your chunk has sent emails to some users but not all? (You might say it rolled back but with "side effects".)
So we could restate your question then as: How to send email from a batch chunk step?
Well, assuming you had a way to send emails through a transactional API (implementing XAResource, etc.), you could use that API.
Assuming you don't, I would do a transactional write to a JMS queue, and then send the emails with a separate MDB (as @cheng suggested in one of his comments).
Suggested Alternative: Use ItemWriter to send messages to JMS queue, then use separate MDB to actually send the emails
With this approach you still gain efficiency by batching the processing and the updates to your DB (you were only sending the emails one at a time anyway), and you can benefit from simple checkpointing and restart without having to write complicated error handling.
This is also likely to be reusable as a pattern across batch jobs and outside of batch even.
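A rough sketch of such a writer, assuming CDI injection of a container-managed JMSContext (so that sends join the chunk transaction) and an illustrative queue JNDI name:

@Named
public class EmailRequestWriter extends AbstractItemWriter {

    @Inject
    private JMSContext jmsContext; // container-managed: sends join the chunk transaction

    @Resource(lookup = "java:/jms/queue/EmailQueue") // illustrative JNDI name
    private Queue emailQueue;

    @Override
    public void writeItems(List<Object> items) throws Exception {
        JMSProducer producer = jmsContext.createProducer();
        for (Object item : items) {
            // If the chunk rolls back, none of these messages are delivered,
            // so the MDB never emails users from a failed chunk.
            producer.send(emailQueue, ((User) item).getUserId().toString());
        }
    }
}

The MDB on the receiving end then performs the actual email send; redelivery on failure becomes the messaging provider's job rather than the batch job's.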
Other alternatives
Some other ideas that I don't think are as good, listed for the sake of discussion:
Add batch application logic tracking users emailed (with ItemProcessListener)
You could build your own list of either/both successful/failed emails using the ItemProcessListener methods: afterProcess and onProcessError.
On restart, then, you could know which users had been emailed in the current chunk, which we are re-positioned to since the entire chunk rolled back, even though some emails have already been sent.
This certainly complicates your batch logic, and you also have to persist this success or failure list somehow. Plus this approach is probably highly specific to this job (as opposed to queuing up for an MDB to process).
But it's simpler in that you have a single batch job without the need for a messaging provider and a separate app component.
If you go this route you might want to use a combination of both a skippable and a "no-rollback" retryable exception.
single-item chunk
If you define your chunk with item-count="1", then you avoid complicated checkpointing and error handling code. You sacrifice efficiency though, so this would only make sense if the other aspects of batch were very compelling: e.g. scheduling and management of jobs through a common interface, the ability to restart at the failing step within a job
If you were to go this route, you might want to consider defining socket and timeout exceptions as "no-rollback" exceptions (using <no-rollback-exception-classes>), since there's nothing to be gained from rolling back, and you might want to retry on a network timeout issue.
Since you specifically mentioned efficiency, I'm guessing this is a bad fit for you.
use a Transaction Synchronization
This could work perhaps, but the batch API doesn't especially make this easy, and you still could have a case where the chunk completes but one or more email sends fail.
Your current item processor is doing something outside the chunk transaction scope, which has caused the application state to be out of sync. If your requirement is to send out emails only after all items in a chunk have successfully completed, then you can move the emailing part to an ItemWriteListener's afterWrite(items); a sketch follows.
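A hedged sketch of that listener (EmailService is an assumed helper; note that afterWrite still runs inside the chunk transaction, before the commit, so the window is narrowed rather than eliminated):

@Named
public class EmailAfterWriteListener implements ItemWriteListener {

    @Inject
    private EmailService emailService; // assumed helper that performs the send

    @Override
    public void beforeWrite(List<Object> items) { }

    @Override
    public void afterWrite(List<Object> items) throws Exception {
        // Called only after the writer has written the chunk's items, so a
        // processor failure earlier in the chunk means no emails go out.
        for (Object item : items) {
            emailService.sendNotification((User) item);
        }
    }

    @Override
    public void onWriteError(List<Object> items, Exception ex) { }
}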

How to call Elastic Search for current queue load?

While querying ES extensively, I get
Failed to execute [org.elasticsearch.action.search.SearchRequest@59e634e2] lastShard [true]
org.elasticsearch.common.util.concurrent.EsRejectedExecutionException: rejected execution (queue capacity 1000) on org.elasticsearch.search.action.SearchServiceTransportAction$23@75bd024b
at org.elasticsearch.common.util.concurrent.EsAbortPolicy.rejectedExecution(EsAbortPolicy.java:62)
at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:823)
at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1369)
at org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor.execute(EsThreadPoolExecutor.java:79)
at org.elasticsearch.search.action.SearchServiceTransportAction.execute(SearchServiceTransportAction.java:551)
at org.elasticsearch.search.action.SearchServiceTransportAction.sendExecuteQuery(SearchServiceTransportAction.java:228)
at org.elasticsearch.action.search.type.TransportSearchQueryThenFetchAction$AsyncAction.sendExecuteFirstPhase(TransportSearchQueryThenFetchAction.java:83)
on a quite regular basis.
My plan now is to pause the query requests until the queue load is lower than x. You can query the client for its stats:
client.admin().cluster().threadPool().stats().iterator();
But since my client is not a data node (I presume that's why), I get queue=0 returned, while the server node throws the above error.
I know why this gets thrown, and I know how to update the setting, but that just postpones this error, and creates others...
How do I ask the cluster nodes what their queue load is?
PS: I'm using the Java API.
What I've tried, without the desired result (a blank line indicates another attempt, unless otherwise specified):
// Nodes stats
final NodesStatsResponse nodesStatsResponse = client.admin().cluster().prepareNodesStats().execute().actionGet();
final NodeStats nodeStats = nodesStatsResponse.getNodes()[0];
final String nodeId = nodeStats.getNode().getId(); // need this later on

// same as before, but with explicit NodesStatsRequest (with id)
final NodesStatsResponse response = client.admin().cluster().nodesStats(new NodesStatsRequest(nodeId)).actionGet();
final NodeStats[] nodeStats2 = response.getNodes();
for (NodeStats nodeStats3 : nodeStats2) {
    Stats stats = nodeStats3.getThreadPool().iterator().next();
}

// Cluster?
final ClusterStatsRequest clusterStatsRequest = new ClusterStatsRequestBuilder(client.admin().cluster()).request();
final ClusterStatsResponse clusterStatsResponse = client.admin().cluster().clusterStats(clusterStatsRequest).actionGet();
final ClusterStatsNodes clusterStatsNodes = clusterStatsResponse.getNodesStats();

// Nodes info?
final NodesInfoResponse infoResponse = client.admin().cluster().nodesInfo(new NodesInfoRequest(nodeId)).actionGet(); // here
final NodeInfo[] nodeInfos = infoResponse.getNodes();
for (final NodeInfo nodeInfo : nodeInfos) {
    final ThreadPoolInfo info = nodeInfo.getThreadPool();
    final Iterator<Info> infoIterator = info.iterator();
    while (infoIterator.hasNext()) {
        final Info realInfo = infoIterator.next();
        SizeValue sizeValue = realInfo.getQueueSize();
        // if the queue size is null, skip (unexpected; I was waiting for a NullPointerException, but the thread just disappeared)
        if (sizeValue == null)
            continue;
        // this is the configured queue size, not the load (found the expected 1000, but oddly 200 on one node?)
        final long queueSize = sizeValue.getSingles();
    }
}
The issue is that some of the processes need to be served instantly (e.g. user requests), whereas others may wait if the database is too busy (background processes). Preferably, I'd assign a certain share of the queue to processes serving immediate requests and the rest to background processes (but I haven't seen such an option).
Update
It appears, which I didn't expect, that you can overload the queue with a single bulk query when the total number of separate searches exceeds 1000 (with x shards or x indices, divide 1000 by x for the number of searches you can make). So bulking is not an option, unless you can issue a single query. So when you target 700 search results at once (taking the above statement into account), you need to know whether more than 300 items already reside in the queue, because then it will throw.
To sum up:
Assume the load per call is the maximum bulk request, so I cannot combine requests. How, then, can I start pausing requests before Elasticsearch starts throwing the exception stated above, so that I can pause one part of my application but not the other? If I know the queue is, say, half full, the background process must sleep for some time. How do I know the (approximate) queue load?
The way you are trying to look at the queue usage is wrong, as you are not looking at the correct statistics.
Have a look at this piece of code:
final NodesStatsResponse response = client.admin().cluster().prepareNodesStats().setThreadPool(true).execute().actionGet();
final NodeStats[] nodeStats2 = response.getNodes();
for (NodeStats nodeStats3 : nodeStats2) {
    ThreadPoolStats stats = nodeStats3.getThreadPool();
    if (stats != null)
        for (ThreadPoolStats.Stats threadPoolStat : stats) {
            System.out.println("node `" + nodeStats3.getNode().getName() + "`" + " has pool `" + threadPoolStat.getName() + "` with current queue size " + threadPoolStat.getQueue());
        }
}
First of all, you need setThreadPool(true) to get the thread pool statistics back; otherwise they will be null.
Secondly, you need ThreadPoolStats, not ThreadPoolInfo, which is for the thread pool settings.
So, it's your second attempt, but incomplete. The 1000 you were seeing was the setting itself (the max queue size), not the actual load.
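Building on that, a sketch of the throttling asked about in the question might look like the following (the "search" pool name matches the rejected executions above; the threshold and sleep values are arbitrary assumptions):

// Sum the current "search" pool queue across nodes; background work can
// back off once the cluster's queues are, say, half full.
long searchQueueLoad(Client client) {
    long queued = 0;
    final NodesStatsResponse response = client.admin().cluster()
            .prepareNodesStats().setThreadPool(true).execute().actionGet();
    for (NodeStats nodeStats : response.getNodes()) {
        ThreadPoolStats stats = nodeStats.getThreadPool();
        if (stats == null) continue;
        for (ThreadPoolStats.Stats poolStat : stats) {
            if ("search".equals(poolStat.getName())) {
                queued += poolStat.getQueue();
            }
        }
    }
    return queued;
}

// Background processes sleep while the queue is more than half full (500 of 1000):
while (searchQueueLoad(client) > 500) {
    Thread.sleep(2000); // back off until the queue drains
}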
I'm hoping this is not the answer; source: https://www.elastic.co/guide/en/elasticsearch/guide/current/_monitoring_individual_nodes.html#_threadpool_section:
Bulk Rejections
If you are going to encounter queue rejections, it will most likely be
caused by bulk indexing requests. It is easy to send many bulk
requests to Elasticsearch by using concurrent import processes. More
is better, right?
In reality, each cluster has a certain limit at which it can not keep
up with ingestion. Once this threshold is crossed, the queue will
quickly fill up, and new bulks will be rejected.
This is a good thing. Queue rejections are a useful form of back
pressure. They let you know that your cluster is at maximum capacity,
which is much better than sticking data into an in-memory queue.
Increasing the queue size doesn’t increase performance; it just hides
the problem. If your cluster can process only 10,000 docs per second,
it doesn’t matter whether the queue is 100 or 10,000,000—your cluster
can still process only 10,000 docs per second.
The queue simply hides the performance problem and carries a real risk
of data-loss. Anything sitting in a queue is by definition not
processed yet. If the node goes down, all those requests are lost
forever. Furthermore, the queue eats up a lot of memory, which is not
ideal.
It is much better to handle queuing in your application by gracefully
handling the back pressure from a full queue. When you receive bulk
rejections, you should take these steps:
1. Pause the import thread for 3–5 seconds.
2. Extract the rejected actions from the bulk response, since it is probable that many of the actions were successful. The bulk response will tell you which succeeded and which were rejected.
3. Send a new bulk request with just the rejected actions.
4. Repeat from step 1 if rejections are encountered again.
Using this procedure, your code naturally adapts to the load of your cluster and naturally backs off.
Rejections are not errors: they just mean you should try again later.
I particularly don't like the part "When you receive bulk rejections, you should take these steps". We should be able to handle oncoming problems beforehand.
