Running java server cluster with local up to date cache - java

I need to create a clustered microservice (i.e. a service that runs replicated on different nodes). This service needs to be "super fast" for the "GET" operation, e.g.: when it receives a username, it responds with that user's token.
This service can be slow for other token operations like save/update/delete.
Problems and assumptions:
Logged-in users (those who have tokens) number something like 10,000 - 20,000.
Speed: the service needs to be super fast for the "getToken" operation, so I need some way to store the tokens in local memory, e.g. a HashMap.
Staleness: since there are replicas, every microservice instance is supposed to hold the same cache somehow.
Slow writes: a slow operation is acceptable when executing a token update/delete/save, as long as all caches are up to date afterwards.
I started looking at Redis, the usual buzzword for caching and for storing user sessions, BUT I found that people are confused and don't have the right answer for the above problem.
Redis is an in-memory database, but it can't be used directly in my case: I am a clustered service whose replicas each keep a local in-memory cache, and Redis does not synchronize that local cache (or does it?). The Redis driver is not going to synchronize my HashMap. I did think about using Redis pub/sub to solve this, but still, there is stale data!
I think the "super fast" approach is to use a local Java HashMap: on startup every microservice loads the token data from some DB (who cares which DB); when a READ is made, it serves the data straight from local memory; and when an update/delete/save is made, it sends a message to a message queue, which fans the message out to all the subscribers (the other replicas) so their caches get updated.
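Here is a minimal sketch of that idea, assuming Redis pub/sub as the fan-out channel (any broker with fan-out semantics would do); the host, channel name and the elided DB-load step are illustrative placeholders, not a definitive implementation:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPubSub;

// Local read-through cache: GETs are served from memory, writes are persisted
// and then broadcast so every replica updates its own map.
public class TokenCache {

    private final Map<String, String> tokensByUser = new ConcurrentHashMap<>();
    private final Jedis publisher = new Jedis("redis-host", 6379);   // hypothetical host

    // On startup, load all tokens from whichever DB holds them (elided here).

    // Super-fast read path: local memory only, no network hop.
    public String getToken(String username) {
        return tokensByUser.get(username);
    }

    // Slow write path: persist to the backing DB, then fan out to the replicas.
    public void saveToken(String username, String token) {
        // ... write to the backing database here ...
        tokensByUser.put(username, token);
        publisher.publish("token-updates", username + "=" + token);
    }

    // Each replica runs this subscriber on its own thread and applies updates.
    public void startSubscriber() {
        new Thread(() -> new Jedis("redis-host", 6379).subscribe(new JedisPubSub() {
            @Override
            public void onMessage(String channel, String message) {
                String[] parts = message.split("=", 2);
                tokensByUser.put(parts[0], parts[1]);
            }
        }, "token-updates")).start();
    }
}

The pub/sub hop still leaves a small window of staleness between the write and the fan-out reaching every replica, which matches the trade-off described above.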
I'm not sure whether I'm reinventing the wheel here; this is a common distributed-cache problem, so is there another well-known solution for it? Remember, only the GET operation needs to be fast.

Related

How can I discard pending DocumentReference writes when offline?

I'm using Firebase Firestore documents to publish the location of my users on a map, so they can see each other. This works fine when all of them have good connectivity, but sometimes their phones can't connect to the Firebase servers and the writes are cached: whenever they recover connectivity, all the pending location writes are sent in bulk.
The effect for other users is that they see a person's position stop, and after a while it starts moving really quickly until the map position catches up with the real value. This is annoying and a waste of bandwidth.
I have tried disabling the persistence cache, but this doesn't help (it would only help if the transmitting app died; as long as it lives, the positions are cached in memory).
Maybe the problem is that I shouldn't be using documents for this purpose, and there is another Firebase mechanism that allows discarding stale write data for the purposes of real-time communication?
All write operations that are performed while the device is offline are queued until the connection with the Firebase servers is reestablished. Unfortunately, there is no API that lets you control which write operations are queued and which are not.
The simplest solution I can think of is to use Firestore transactions, which are currently not persisted to disk and thus will be lost while the application is offline.
In other words, transactions are not supported for offline use; they can't be cached or saved for later. This is because a transaction absolutely requires round-trip communication with the server in order to ensure that the code inside the transaction completes successfully. So you can use transactions only while online, because transactions are network dependent.
You can work around this by only making requests when you can connect to Firestore. Here's a helper function that determines whether you're connected or not. It's similar to using a transaction, since both methods involve making a read request.
If you don't plan on invoking the function that much, the read costs will probably be negligible. However, to save the cost of reads, you could also consider pinging some server or a Cloud Function instead of Firestore itself, though that might be a less accurate way of testing the connection to Firestore.
import { doc, getDocFromServer } from "firebase/firestore"

async function canConnectToFirestore() {
  // navigator.onLine can only say for certain if you're disconnected
  // For more info on navigator.onLine: https://developer.mozilla.org/en-US/docs/Web/API/Navigator/onLine
  if (!navigator.onLine)
    return false
  // db is initialized from getFirestore()
  try {
    await getDocFromServer(doc(db, "IrrelevantText", "IrrelevantText"))
    return true
  } catch (e) {
    return false
  }
}

async function example() {
  if (await canConnectToFirestore()) console.log("Do something with firestore")
}

How to process multiple API calls from the same client one by one in a scalable, highly concurrent and fault-tolerant system

We have web service APIs supporting clients running on ten million devices. Normally a client calls the server once a day, which works out to about 116 clients seen per second. Each client (each with a unique ID) may make several API calls concurrently. However, the server can only process API calls from the same client one by one, because those calls update the same document for that client in the backend MongoDB database: for example, they need to update the last-seen time and other embedded documents within that client's document.
One solution I have is to put a synchronized block on an "interned" object representing the client's unique ID. That allows only one request from a given client to obtain the lock and be processed at a time, while requests from other clients can still be processed in parallel. But this solution requires turning on the load balancer's "stickiness", meaning the load balancer routes all requests from the same IP address to a specific server within a preset time interval (e.g. 15 minutes). I am not sure whether this has any impact on the robustness of the whole system design. One thing I can think of is that some clients may make more requests than others and unbalance the load (creating hotspots).
Solution #1:
import com.google.common.collect.Interner;
import com.google.common.collect.Interners;

// Key must implement equals()/hashCode() so equal client ids intern to the same object.
Interner<Key> myIdInterner = Interners.newWeakInterner();

public ResponseType1 processApi1(String clientUniqueId, RequestType1 request) {
    synchronized (myIdInterner.intern(new Key(clientUniqueId))) {
        // code to process request
    }
}

public ResponseType2 processApi2(String clientUniqueId, RequestType2 request) {
    synchronized (myIdInterner.intern(new Key(clientUniqueId))) {
        // code to process request
    }
}
You can see my other question for this solution in detail: Should I use Java String Pool for synchronization based on unique customer id?
The second solution I am considering is to somehow lock that client's document in MongoDB (I have not found a good example of how to do that yet). Then I wouldn't need to touch the load balancer settings. But I have concerns about this approach, as I think the performance (round trips to the MongoDB server and busy waiting?) would be much worse than solution #1.
Solution #2:
public ResponseType1 processApi1(String clientUniqueId, RequestType1 request) {
    Key key = new Key(clientUniqueId);
    obtainDocumentLock(key);   // acquire before the try, so finally never releases an unheld lock
    try {
        // code to process request
    } finally {
        releaseDocumentLock(key);
    }
}

public ResponseType2 processApi2(String clientUniqueId, RequestType2 request) {
    Key key = new Key(clientUniqueId);
    obtainDocumentLock(key);
    try {
        // code to process request
    } finally {
        releaseDocumentLock(key);
    }
}
I believe this is a very common issue in a scalable, highly concurrent system. How do you solve it? Is there any other option? What I want to achieve is to process only one request at a time among the requests from the same client. Please note that just controlling read/write access to the database does not work; the solution needs to control exclusive processing of the whole request.
For example, there are two requests: request #1 and request #2. Request #1 reads the client's document, updates one field of sub-document #5, and saves the whole document back. Request #2 reads the same document, updates one field of sub-document #8, and saves the whole document back. At that moment we get an OptimisticLockingFailureException, because we use the @Version annotation from spring-data-mongodb to detect version conflicts. So it is imperative to process only one request from the same client at any time.
P.S. Any suggestions on choosing between solution #1 (locking within a single process/instance with load balancer stickiness turned on) and solution #2 (a distributed lock) for a scalable, highly concurrent system design? The goal is to support tens of millions of clients with hundreds of clients accessing the system concurrently each second.
In your solution you are already splitting the lock by customer ID, so two customers can be served at the same time; the only problem is the sticky session. One alternative is to use a distributed lock, so you can dispatch any request to any server and the server that obtains the lock does the processing. The one consideration is that it involves remote calls. We are using Hazelcast/Ignite and it works very well for an average number of nodes.
Hazelcast
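A minimal sketch of that distributed-lock idea using Hazelcast's CP subsystem; the lock-name prefix and the doProcess method are illustrative placeholders, and ResponseType1/RequestType1 are the types from the question above:

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.cp.lock.FencedLock;

public class ClientRequestProcessor {

    // Every application instance joins the same Hazelcast cluster.
    private final HazelcastInstance hz = Hazelcast.newHazelcastInstance();

    public ResponseType1 processApi1(String clientUniqueId, RequestType1 request) {
        // One distributed lock per client id: only one server at a time processes
        // requests for this client, so no load-balancer stickiness is needed.
        FencedLock lock = hz.getCPSubsystem().getLock("client-" + clientUniqueId);
        lock.lock();
        try {
            return doProcess(request);   // placeholder for the real business logic
        } finally {
            lock.unlock();
        }
    }

    private ResponseType1 doProcess(RequestType1 request) {
        // code to process request
        return null;
    }
}

Every lock()/unlock() is a remote call to the cluster, which is the cost mentioned above.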
Why not just create a processing queue in MongoDB, whereby you submit client request documents and another server process consumes them and produces a resulting document that the client waits for? Synchronize on the clientId there, and avoid that work in the API submission step. The second part of the client submission activity (once processing is finished) just polls MongoDB for consumed records, looking for their API / clientId and some job tag. That way you can scale out the API submission and, separately, the API consumption activities on separate servers, etc.
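A rough sketch of that queue-in-MongoDB idea with the synchronous Java driver; the database, collection and field names, the polling interval, and the jobTag scheme are assumptions for illustration:

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Filters;
import org.bson.Document;

public class RequestQueue {

    private final MongoClient client = MongoClients.create("mongodb://localhost:27017");
    private final MongoCollection<Document> requests =
            client.getDatabase("app").getCollection("requests");
    private final MongoCollection<Document> results =
            client.getDatabase("app").getCollection("results");

    // API submission step: enqueue the request document and return immediately.
    public String submit(String clientId, String apiName, Document payload) {
        String jobTag = java.util.UUID.randomUUID().toString();
        requests.insertOne(new Document("clientId", clientId)
                .append("api", apiName)
                .append("jobTag", jobTag)
                .append("payload", payload));
        return jobTag;
    }

    // Second part of the submission activity: poll for the consumed record.
    public Document awaitResult(String clientId, String jobTag) throws InterruptedException {
        while (true) {
            Document result = results.find(Filters.and(
                    Filters.eq("clientId", clientId),
                    Filters.eq("jobTag", jobTag))).first();
            if (result != null) {
                return result;
            }
            Thread.sleep(500);   // the polling interval is an arbitrary choice
        }
    }
}

A separate consumer process would read from the requests collection one client at a time and write its output to the results collection, which is what serializes the work per client.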
One obvious approach is simply to implement the full optimistic locking algorithm on your end.
That is, you will sometimes get an OptimisticLockingFailureException when there are concurrent modifications, but that's fine: just re-read the document and retry the modification that failed. You get the same effect as if you had used locking. Essentially you are leveraging the concurrency control already built into MongoDB. This also has the advantage of letting several transactions from the same client go through if they don't conflict (e.g., one is a read, or they write to different documents), potentially increasing the concurrency of your system. On the other hand, you have to implement the retry logic.
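A small sketch of that retry loop on top of spring-data-mongodb; ClientDocument, ClientDocumentRepository and the maximum attempt count are hypothetical, and the entity is assumed to carry a @Version field as in the question:

import java.util.function.Consumer;
import org.springframework.dao.OptimisticLockingFailureException;

public class ClientDocumentService {

    private static final int MAX_ATTEMPTS = 5;

    private final ClientDocumentRepository repository;   // hypothetical Spring Data repository

    public ClientDocumentService(ClientDocumentRepository repository) {
        this.repository = repository;
    }

    // Re-read and retry on version conflicts instead of locking up front.
    public void update(String clientId, Consumer<ClientDocument> mutation) {
        for (int attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
            ClientDocument doc = repository.findById(clientId)
                    .orElseThrow(() -> new IllegalStateException("unknown client " + clientId));
            mutation.accept(doc);
            try {
                repository.save(doc);   // the @Version field detects concurrent writes
                return;
            } catch (OptimisticLockingFailureException e) {
                // someone else saved in between: loop, re-read the fresh document and retry
            }
        }
        throw new IllegalStateException("Gave up after " + MAX_ATTEMPTS + " conflicting attempts");
    }
}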
If you do want to lock on a per-client basis (or per-document, or whatever else) and your server is a single process (which your suggested approach implies), you just need a lock manager that works on arbitrary String keys. There are several reasonable solutions, including the Interner one you mentioned.
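Guava's Striped is one such off-the-shelf lock manager keyed by arbitrary objects; a small sketch (the stripe count and the wrapper method are arbitrary choices, not a recommendation over the Interner approach):

import java.util.concurrent.locks.Lock;
import com.google.common.util.concurrent.Striped;

public class PerClientLockManager {

    // 1024 stripes: distinct client ids may occasionally share a lock,
    // but memory use stays bounded no matter how many clients exist.
    private final Striped<Lock> locks = Striped.lock(1024);

    public void withClientLock(String clientUniqueId, Runnable work) {
        Lock lock = locks.get(clientUniqueId);
        lock.lock();
        try {
            work.run();
        } finally {
            lock.unlock();
        }
    }
}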

Large number of single threaded task queues

At our company we have a server which is distributed across a few instances. The server handles user requests. Requests from different users can be processed in parallel, but requests from the same user must be executed strictly sequentially, and they can arrive at different instances due to load balancing. Currently we use Redis-based distributed locks, but this is error-prone and requires more work around concurrency than around business logic.
What I want is something like this (more like a concept):
Distinct queue for each user
Queue is named after user id
Each request is identified by a request id
Imagine two requests from the same user arriving at two different instances concurrently:
Each instance puts its request id into this user's queue.
Additionally, they both store their request ids locally.
Then some broker takes a request id from the top of "some_user_queue" and moves it into "some_user_queue_processing".
Both instances listen on "some_user_queue_processing". They peek into it and check whether it is the request id they stored locally. If yes, they do the processing. If not, they ignore it and wait.
When the work is done, the server deletes this id from "some_user_queue_processing".
Then step 3 happens again.
And all of this happens concurrently for a lot (thousands) of different users (and their queues).
Now, I know this sounds a lot like actors, but:
We need a solution requiring as few changes as possible, to make a fast transition away from the locks. Akka would force us to rewrite almost everything from scratch.
We need a production-ready solution. Quasar sounds good, but it is not production ready yet (more precisely, its Galaxy cluster).
The higher-ups at my company are very conservative; they simply don't want another dependency that we'll need to support. But we already use Redis (for the distributed locks), so I thought maybe it could help with this too.
Thanks
The best solution that matches the description of your problem is Redis Cluster.
Basically, the cluster solves your concurrency problem in the following way:
Two (or more) requests from the same user will always go to the same instance, assuming you use the user id as the key and the request as the value. The value must actually be a list of requests; when you receive one, you append it to that list. In other words, that list is your queue of requests (a single one for every user).
That mapping is made possible by the design of the cluster implementation, which is based on a range of hash slots spread over all the instances.
When a set command is executed, the cluster performs a hashing operation, which yields the hash slot we are going to write to, located on a specific instance. The cluster finds the instance that owns the right range and then performs the write.
Likewise, when a get is performed, the cluster does the same: it finds the instance that contains the key, and then it gets the value.
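A minimal sketch of that per-user list using the Jedis client against a Redis Cluster; the seed node address and the key prefix are assumptions:

import java.util.Set;
import redis.clients.jedis.HostAndPort;
import redis.clients.jedis.JedisCluster;

public class UserRequestQueue {

    // One seed node is enough; the client discovers the rest of the cluster from it.
    private final JedisCluster cluster =
            new JedisCluster(Set.of(new HostAndPort("redis-node-1", 6379)));

    // Enqueue: all requests for a user hash to the same slot, hence the same node,
    // so the per-user list behaves as a single ordered queue.
    public void enqueue(String userId, String requestId) {
        cluster.rpush("queue:" + userId, requestId);
    }

    // Dequeue the next request id for this user (null if the queue is empty).
    public String dequeue(String userId) {
        return cluster.lpop("queue:" + userId);
    }
}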
The transition from locks is very easy to perform, because you only need to have the instances ready (with the cluster-enabled directive set to "yes") and then run the cluster create command from the redis-trib.rb script.
I worked with the cluster in a production environment last summer and it behaved very well.

How to effectively process a lot of objects in a list on the server side

I have a List which contains a lot of objects.
The problem is that I have to process these objects (the processing includes cloning, deep copying, making DB calls, running business logic, etc.).
Doing this in a normal first-come, first-served fashion is really time consuming, and in a web application this generally results in transaction timeouts on the server side (the processing is async from the client's perspective).
How do I process those objects so as to take minimal time and not overload the DB?
I'm using Java 7 in the server environment.
I'm already using a messaging solution, RabbitMQ, which gets me the item and its quantity. The problem occurs when I try to deep copy items to mimic real items (by the business logic, every item should be processed uniquely) and save them to the DB.
After some discussions, the viable solution is using an ABQ (ArrayBlockingQueue) which is processed by a pool of threads; a sketch follows the list below.
Following are the expected benefits:
1) We won't have to manage 3rd-party queues such as RabbitMQ.
2) At any point in time the blocking queue won't hold all the items to be processed, because the consumer threads will be processing them simultaneously, so it leaves a smaller memory footprint.
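A minimal, Java 7-compatible sketch of the ArrayBlockingQueue plus worker-pool idea; the queue capacity, pool size and the Item type are placeholders:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ItemProcessor {

    public static class Item { /* domain fields elided */ }

    // Bounded queue: producers block once 1000 items are waiting,
    // which keeps the memory footprint under control.
    private final BlockingQueue<Item> queue = new ArrayBlockingQueue<Item>(1000);
    private final ExecutorService workers = Executors.newFixedThreadPool(8);

    public void start() {
        for (int i = 0; i < 8; i++) {
            workers.submit(new Runnable() {
                @Override
                public void run() {
                    try {
                        while (!Thread.currentThread().isInterrupted()) {
                            Item item = queue.take();   // blocks until an item is available
                            process(item);              // clone, run business logic, save to DB
                        }
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }
            });
        }
    }

    // Called by the RabbitMQ consumer (or any producer); blocks if the queue is full.
    public void submit(Item item) throws InterruptedException {
        queue.put(item);
    }

    private void process(Item item) {
        // placeholder for the deep copy + DB work described above
    }
}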
@cody123 I'm using Spring Batch for the retry mechanism in this case.
After another round of profiling I found that the bottleneck was the DB connection pool having a low maximum number of connections.
I deduced this by running the same transaction without the DB thread pool, and it went perfectly well and completed without any exception.
However, combining the previous approach, i.e. managing an ABQ with light commits, with an HA DB will be the best solution.

Design for scalable periodic queue message batching

We currently have a distributed setup where we publish events to SQS, and an application with multiple hosts that drains messages from the queue, applies some transformation, and transmits them to interested parties. I have a use case where the receiving endpoint has scalability concerns with the message volume, so we would like to batch these messages periodically (say every 15 minutes) in the application before sending them.
The incoming message rate is around 200 messages per second and each message is no more than 10 KB. This system need not be real time (although that would definitely be good to have), and the order is not important (it's okay if a batch containing older messages gets sent first).
One approach I can think of is maintaining an embedded database within the application (on each host) that batches the events, with another thread that runs periodically and clears out the data.
Another approach could be to create timestamped buckets in a distributed key-value store (S3, Dynamo, etc.), where we write each message to the correct bucket based on the message's timestamp, and we periodically clear the buckets.
We can run into several issues here: since the messages would be out of order, a bucket might already have been cleared (this could be solved by having a default bucket, though), we would need to decide accurately when to clear a bucket, and so on.
The way I see it, at least two components would be required: one that does the batching into a temporary store, and another that clears it.
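A minimal in-application sketch of those two components, a buffer and a periodic flusher, using a ScheduledExecutorService; it buffers in memory rather than in an embedded database, and the 15-minute period and the sendBatch call are placeholders:

import java.util.ArrayList;
import java.util.List;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class MessageBatcher {

    // Component 1: temporary storage. Messages drained from SQS are buffered here.
    private final Queue<String> buffer = new ConcurrentLinkedQueue<>();

    // Component 2: the periodic flusher that clears the buffer every 15 minutes.
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

    public void start() {
        scheduler.scheduleAtFixedRate(this::flush, 15, 15, TimeUnit.MINUTES);
    }

    // Called by the SQS consumer for every incoming message.
    public void add(String message) {
        buffer.offer(message);
    }

    private void flush() {
        List<String> batch = new ArrayList<>();
        String message;
        while ((message = buffer.poll()) != null) {
            batch.add(message);
        }
        if (!batch.isEmpty()) {
            sendBatch(batch);   // placeholder for the call to the receiving endpoint
        }
    }

    private void sendBatch(List<String> batch) {
        // placeholder: transmit the batch to the interested party
    }
}

The obvious trade-off is durability: anything still in the buffer is lost if the host dies, which is what the embedded-database and key-value-store variants above are meant to avoid.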
Any feedback on the above approaches would help. Also, this looks like a common problem; are there any existing solutions I can leverage?
Thanks
