I've built a server application in Java that clients can connect to. I've implemented a heartbeat system in which each client sends a small message every x seconds.
On the server side I store in a HashMap the time at which each client last sent a message, and I use a TimerTask per client to check every x seconds whether I have received any message from that client.
Everything works fine for a small number of clients, but once the number of clients grows (2k+) the memory usage becomes very large; in addition, the Timer has to deal with a lot of TimerTasks and the program starts to eat a lot of CPU.
Is there a better way to implement this? I thought about using a database and selecting the clients that haven't sent any update within a certain amount of time.
Do you think that would work better, or is there a better way of doing this?
A few random suggestions:
Instead of one timer per client, have a single global timer that examines the map of received heartbeats fairly often (say 10 times per second). Iterate over that map and find dead clients. Remember the thread-safety of the shared data structure!
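For example, a minimal sketch of that single-global-timer idea might look like this (class and method names are mine, just for illustration):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class HeartbeatMonitor {
    // clientId -> time of the last heartbeat in millis
    private final Map<String, Long> lastSeen = new ConcurrentHashMap<>();
    private final long timeoutMillis;

    public HeartbeatMonitor(long timeoutMillis) {
        this.timeoutMillis = timeoutMillis;
    }

    // call this whenever a heartbeat arrives
    public void onHeartbeat(String clientId) {
        lastSeen.put(clientId, System.currentTimeMillis());
    }

    // one global timer instead of one TimerTask per client
    public void start() {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(() -> {
            long now = System.currentTimeMillis();
            lastSeen.forEach((clientId, timestamp) -> {
                if (now - timestamp > timeoutMillis) {
                    lastSeen.remove(clientId);
                    System.out.println("Client considered dead: " + clientId);
                }
            });
        }, 100, 100, TimeUnit.MILLISECONDS); // scan roughly 10 times per second
    }
}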
If you want to use a database, use a lightweight in-memory DB like H2. But it still sounds like overkill.
Use a cache or some other expiring map and get notified every time something is evicted. This way you basically put an entry in the map when a client sends a heartbeat, and if nothing happens with that entry within a given amount of time, the map implementation removes it and calls some sort of listener.
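As one concrete (assumed) option, Guava's Cache can play the role of that expiring map; note that Guava evicts lazily, so you still need to call cleanUp() now and then (e.g. from the single global timer above):

import java.util.concurrent.TimeUnit;
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.RemovalListener;

public class ExpiringHeartbeats {
    private final RemovalListener<String, Long> onTimeout = notification -> {
        if (notification.wasEvicted()) {
            System.out.println("Client timed out: " + notification.getKey());
        }
    };

    // an entry expires if no heartbeat arrives for 30 seconds
    private final Cache<String, Long> heartbeats = CacheBuilder.newBuilder()
            .expireAfterWrite(30, TimeUnit.SECONDS)
            .removalListener(onTimeout)
            .build();

    public void onHeartbeat(String clientId) {
        heartbeats.put(clientId, System.currentTimeMillis());
    }

    // Guava only evicts lazily, so trigger eviction periodically
    public void sweep() {
        heartbeats.cleanUp();
    }
}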
Use an actor-based system like Akka (it has a Java API). You can have one actor on the server side handling each client. It's much more efficient than one thread/timer per client.
Use a different data structure, e.g. a queue. Every time you receive a heartbeat, remove the client from the queue and put it back at the end. Then periodically check only the head of the queue, which always contains the client with the oldest heartbeat.
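A LinkedHashMap can stand in for that queue, since it keeps insertion order but still allows removing a client by key in O(1); this is just a sketch of the idea:

import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

public class HeartbeatQueue {
    // insertion-ordered: the oldest heartbeat is always at the head
    private final LinkedHashMap<String, Long> queue = new LinkedHashMap<>();

    public synchronized void onHeartbeat(String clientId) {
        queue.remove(clientId);                          // take it out of the middle
        queue.put(clientId, System.currentTimeMillis()); // re-append at the tail
    }

    // check only the head(s); stop at the first client that is still fresh
    public synchronized void expire(long timeoutMillis) {
        long now = System.currentTimeMillis();
        Iterator<Map.Entry<String, Long>> it = queue.entrySet().iterator();
        while (it.hasNext()) {
            Map.Entry<String, Long> oldest = it.next();
            if (now - oldest.getValue() <= timeoutMillis) {
                break;                                   // everything behind it is newer
            }
            it.remove();
            System.out.println("Client considered dead: " + oldest.getKey());
        }
    }
}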
I have a class which is responsible for sending data to a client, and all other classes use it when they need to send data. Let's call it 'DataSender.class'.
Now the client is asking us to limit the throughput to a maximum of 50 calls per second.
I need to create an algorithm in this class (if possible) that keeps track of the number of calls in the current second and, if it reaches the maximum of 50, holds the process with a sleep or something similar and continues without losing data.
Maybe I have to implement a queue or something better than a simple sleep. I need suggestions or a direction to follow.
For the sake of simplicity, just imagine that everyone is using something like this and I cannot change how they call me now. post() is synchronous, but that is something I could possibly change (not sure yet):
DataSender ds = new DataSender();
ds.setdata(mydata);
if (ds.post()) {
    // data sent successfully
}
If I am not mistaken, what you are looking for is throttling or rate limiting.
As Andrew S pointed out, you will need a queue to hold the extra requests, plus a sender algorithm.
The main point is that because you are not sending the data right away, the callers need to be aware that the data is not necessarily sent by the time their call returns. Senders will usually not be happy if their call returns, they assume the data was sent, and then the data is lost; there are many reasons why data can be lost in this scenario. As Andrew S pointed out, making senders aware that it is an asynchronous send queue, perhaps with confirmations upon successful send, is the safer and more proper approach.
You will need to decide on the size of the queue: you have to limit it, or you can run out of memory. You also need to decide what happens to a request when the queue is full, and what happens when the endpoint is not reachable (server down, network issues, solar flares): keep accepting data into the queue, or reject it / throw an exception.
Hint: if you have a limit of 50 requests per second, don't blast 50 requests and then sleep for a second. Figure out the interval between sends, send one request, then sleep for that short interval.
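As a rough sketch of that hint (the queue element type and doPost() are placeholders, since I don't know what DataSender actually sends), a single sender thread could drain a bounded queue and space the sends about 20 ms apart:

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class ThrottledSender implements Runnable {
    private static final int MAX_PER_SECOND = 50;
    private static final long INTERVAL_MS = 1000 / MAX_PER_SECOND; // ~20 ms between sends

    // bounded, so memory cannot grow without limit
    private final BlockingQueue<Object> queue = new LinkedBlockingQueue<>(10_000);

    // callers enqueue instead of sending directly; false means "queue full, data rejected"
    public boolean submit(Object data) {
        return queue.offer(data);
    }

    @Override
    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            try {
                Object data = queue.take();   // wait for the next item
                doPost(data);                 // the real, synchronous send
                Thread.sleep(INTERVAL_MS);    // spread sends evenly over the second
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }

    private void doPost(Object data) {
        // delegate to the existing DataSender.post() here
    }
}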
Pro hint: if newly submitted data invalidates data that was previously queued but not yet sent, you can optimize what gets sent by removing the invalidated data from the queue. This is called conflation. The usual example is stock market prices. Say you got an ACME price of 100 ten seconds ago and for whatever reason it was not sent. If you get a new ACME price of 101 now, it is usually not useful to send the 100 price record; just send the 101 price.
I have one server and multiple clients. At some period, each client sends an alive packet to the server. (At the moment, the server doesn't respond to alive packets.) The period may change from device to device and is configurable at runtime, for both the server and the clients. I want to generate an alert when one or more clients don't send the alive packet (one packet, or two in a row, etc.). This aliveness information is used by other parts of the application, so the quicker the notice the better. I came up with some ideas but I couldn't pick one.
Create a task that checks every client's last alive-packet timestamp against the current time and generates the alert(s). Call this method at some period, which should be smaller than the minimum client period.
Actually that seems better to me; however, this way I unnecessarily check some clients. (For example, if client periods range from 1 to 5 minutes, the task has to run at least every minute, so checking all the clients with periods above 2 minutes is redundant.) Also, if the minimum of the client periods decreases, I have to decrease the task's period as well.
Create a task for each client that checks that client's last alive-packet timestamp against the current time, then sleeps for that client's period.
In this way, if the number of clients gets very high, there will be dozens of tasks. Since they will sleep most of the time, I still doubt this is more elegant.
Is there any idiom or pattern for this kind of situation? I think a watchdog-style implementation would suit this well, but I haven't seen something like that in Java.
Approach 2 is not very useful, as writing 100 tasks for 100 clients is a poor idea.
Approach 1 can be optimized if you use the average client period instead of the minimum.
It depends on your needs.
Is it critical if the alert is generated a few seconds later (or earlier) than it should be?
If not, then maybe it's worth grouping clients with similar heartbeat intervals and running the check against a group of clients rather than a single client. This lets you decrease the number of tasks (100 -> 10) and increase the number of clients handled by a single task (1 -> 10).
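A sketch of that grouping idea (the names and the two-missed-packets rule are my own assumptions, just for illustration):

import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class GroupedAlivenessChecker {
    private final Map<String, Long> lastSeen = new ConcurrentHashMap<>();
    private final ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(2);

    // call this whenever an alive packet arrives
    public void onAlivePacket(String clientId) {
        lastSeen.put(clientId, System.currentTimeMillis());
    }

    // one task per group of clients that share (roughly) the same period
    public void addGroup(Set<String> clientIds, long periodMillis) {
        scheduler.scheduleAtFixedRate(() -> {
            long now = System.currentTimeMillis();
            for (String id : clientIds) {
                Long ts = lastSeen.get(id);
                if (ts == null || now - ts > 2 * periodMillis) { // two missed packets
                    System.out.println("ALERT: no alive packet from " + id);
                }
            }
        }, periodMillis, periodMillis, TimeUnit.MILLISECONDS);
    }
}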
The first approach is fine.
The only thing I would suggest is to create an independent service to do this monitoring. If you run this task as a thread inside your server, it won't be very manageable: imagine your control thread breaks or gets killed, how would you notice? So build an independent OS service, another Java program, that checks the last alive timestamps periodically.
This way you can easily modify and restart the service and view its logs separately. Depending on its importance, you may even build a "watchdog of the watchdog" service as well.
I have a web service for which I need to limit the number of transactions a client can perform. A transaction is hitting the URL with correct parameters. Every client will have a different number of transactions it can perform per second. Clients will be identified by IP address or by a parameter in the URL.
The maximum TPS a client may perform will be kept in a database or in some other configurable form. I understand that it would be possible to write a servlet filter to do this: the filter would calculate requests per second, make a database connection to get the client's maximum TPS, and reject the request when that TPS is reached, since processing it would further slow down the application's response. But that would not help during a DoS attack. Is there a better way?
I had to do the same thing. This is how I did it.
1) I had a data model for tracking an IP's requests. It mainly tracked the rate of requests, using some math that let me add a new request and quickly recalculate the new request rate for that IP. Let's call this class IpRequestRate.
2) For each unique IP that made a request, an instance of IpRequestRate was instantiated. Only one instance was required per IP. They were put into a HashMap for fast retrieval. If a new IP came in, a new instance of IpRequestRate was created for it.
3) When a request came in, if there was already an instance of IpRequestRate in the HashMap for that IP, I would add the new request to that instance and get the new rate. If the rate was above a certain threshold, the request would not be processed.
4) If the requester accidentally went above that threshold, the rate would quickly dip back below it. But if it was a real DoS, or in my case too many attempts to access an account (due to hackers), it would take much longer for their rate to dip below the threshold, which is what I wanted.
5) I do not recall whether I had a cleanup thread to remove old IPs, but that's something that would be needed at some point. You can use EhCache as your HashMap so it does this for you automatically.
6) It worked really well and I thought about open-sourcing it. But it was really simple and easily reproducible. You just have to get that math right. The math for getting the rate is easy to get accurate, but a little tricky if you want it to be fast, so that not a lot of CPU is spent calculating the new rate when a new request is added to the IpRequestRate.
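I don't know the exact math the author used, but an exponentially decayed counter is one common way to get a cheap, quickly updatable per-IP rate; a rough sketch:

// Rough sketch of an IpRequestRate-style class; the exponential-decay math is
// an assumption on my part, not necessarily what the author actually used.
public class IpRequestRate {
    private final double halfLifeMillis;  // how quickly old requests fade out
    private double weightedCount = 0.0;   // decayed count of recent requests
    private long lastUpdate = System.currentTimeMillis();

    public IpRequestRate(double halfLifeMillis) {
        this.halfLifeMillis = halfLifeMillis;
    }

    // record one request and return a rough requests-per-second estimate
    public synchronized double addRequestAndGetRate() {
        long now = System.currentTimeMillis();
        double decay = Math.pow(0.5, (now - lastUpdate) / halfLifeMillis);
        weightedCount = weightedCount * decay + 1.0;
        lastUpdate = now;
        // weightedCount roughly counts the requests seen within the last half-life
        return weightedCount / (halfLifeMillis / 1000.0);
    }
}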
Does that answer your question or would you need more info on how to setup the Filter in your server?
Edit: WRT DoS, during a DoS attack we want to waste as few resources as possible. If at all possible, DoS detection should be done in a load balancer, reverse proxy, gateway, or firewall.
If we want a per-IP max transmission rate that is stored in a database, then I would just cache the max transmission rates. This can be done without doing a DB lookup per request. Instead, I would load the table into a HashMap.
1) At the start of the application, say in the init() method, I would load the table into a HashMap that maps IP to maxTransmissionRate.
2) When a request comes in, try to get the maxTransmissionRate from the HashMap. If it's not there, use a default maxTransmissionRate.
3) During init(), kick off a ScheduledExecutorService to update the HashMap at some desired interval, to keep the HashMap fresh. Here is the link to ScheduledExecutorService; it's not that hard. http://docs.oracle.com/javase/7/docs/api/java/util/concurrent/ScheduledExecutorService.html
4) Which HashMap implementation should we use? If we use a regular HashMap then we will have problems when it gets updated by the ScheduledExecutorService. We could use a synchronized HashMap, but this locks the whole map and hurts performance under concurrent requests. So I would go with ConcurrentHashMap, which was designed for speed in multithreaded environments. You can safely update a ConcurrentHashMap from a separate thread without worry.
If you apply this technique, it is still a viable solution for DoS prevention while supporting a per-client maxTransmissionRate.
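Putting points 1) to 4) together, a minimal sketch might look like this (the reload interval, default rate and loadFromDatabase() are placeholders):

import java.util.Collections;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class MaxRateCache {
    private static final int DEFAULT_MAX_TPS = 10;  // used for IPs not present in the table
    private final Map<String, Integer> maxTpsByIp = new ConcurrentHashMap<>();
    private final ScheduledExecutorService refresher = Executors.newSingleThreadScheduledExecutor();

    public void init() {
        reload();  // initial load at application startup
        refresher.scheduleAtFixedRate(this::reload, 5, 5, TimeUnit.MINUTES);
    }

    public int maxTpsFor(String ip) {
        return maxTpsByIp.getOrDefault(ip, DEFAULT_MAX_TPS);
    }

    private void reload() {
        Map<String, Integer> fresh = loadFromDatabase();
        maxTpsByIp.putAll(fresh);
        maxTpsByIp.keySet().retainAll(fresh.keySet()); // drop rows deleted from the DB
    }

    private Map<String, Integer> loadFromDatabase() {
        // replace with the real query, e.g. SELECT ip, max_tps FROM client_limits
        return Collections.emptyMap();
    }
}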
We currently have a distributed setup where we publish events to SQS, and we have an application with multiple hosts that drains messages from the queue, does some transformation on them, and transmits them to interested parties. I have a use case where the receiving endpoint has scalability concerns with the message volume, so we would like to batch these messages periodically (say every 15 minutes) in the application before sending them.
The incoming message rate is around 200 messages per second, and each message is no more than 10 KB. The system need not be real-time (though that would definitely be nice to have), and order is not important (it's okay if a batch containing older messages gets sent first).
One approach I can think of is maintaining an embedded database within the application (on each host) that batches the events, with another thread that runs periodically and clears the data.
Another approach could be to create timestamped buckets in a distributed key-value store (S3, Dynamo, etc.), where we write each message to the correct bucket based on the message's timestamp, and we periodically clear the buckets.
We can run into several issues here: since the messages may be out of order, a bucket might already have been cleared (this can be worked around by having a default bucket), we would need to decide accurately when to clear a bucket, etc.
The way I see it, at least two components are required: one that does the batching into temporary storage, and another that clears it.
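Roughly, the simplest version of those two components would be something like the following sketch (in-memory only, names are placeholders); note that at ~200 messages/sec of up to 10 KB each the buffer can approach 2 GB per 15-minute window on a single host, so a disk-backed store may be needed in practice:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class MessageBatcher {
    private final List<String> buffer = new ArrayList<>();
    private final ScheduledExecutorService flusher = Executors.newSingleThreadScheduledExecutor();

    public void start() {
        flusher.scheduleAtFixedRate(this::flush, 15, 15, TimeUnit.MINUTES);
    }

    // called by the SQS-draining thread(s) for every received message
    public synchronized void add(String message) {
        buffer.add(message);
    }

    private void flush() {
        List<String> batch;
        synchronized (this) {
            if (buffer.isEmpty()) {
                return;
            }
            batch = new ArrayList<>(buffer);
            buffer.clear();
        }
        sendBatch(batch); // transmit the whole batch to the interested party
    }

    private void sendBatch(List<String> batch) {
        // placeholder for the actual downstream call
    }
}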
Any feedback on the above approaches would help. Also, this looks like a common problem; are there any existing solutions that I can leverage?
Thanks
I'm currently developing a simple P2P network as an exercise. Each node in the network sends heartbeats to a subset of the other nodes so it can detect nodes that have left the network. Besides the heartbeat packets, I send packets when new nodes join/leave the network, when they want to locate a resource (small text files), etc. All packets are UDP packets.
Whenever I receive a packet I start a new thread that handles that specific packet. I am, however, concerned with the number of threads I start during one application's lifetime, which adds up to quite a lot (especially because of the heartbeats). There is also the risk of deadlocks and the like, which I would like to avoid.
I thought about having a queue or something where I put all incoming packets and having a single thread handle all packets one at a time from that queue (something like the producer-consumer pattern). I would like the packets to be handled rapidly so the sender doesn't think a packet is lost.
What is the best way to handle a lot of different incoming packets without having to start a new thread for each of them? Should I go with what I have, the producer-consumer approach, or something different?
How long does it take your application to process one packet?
For the ping packets it is probably faster to just process them as they are received. You can put the others in a shared data structure such as a blocking queue, so that when the queue is empty the worker threads wait for new jobs, and when a new job is added, a thread is woken up and does the work.
Starting one thread per packet probably makes you spend more time starting and stopping threads than actually doing the work.
If the work done in response to a packet isn't very time-consuming for all packet types, it might be that the extra time spent on the queue's locks and on scheduling threads makes your program slower rather than faster.
In any case, use a thread pool and start the workers at the beginning. If you want, you can increase or reduce the number of worker threads dynamically depending on the load of the past minutes.
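A minimal sketch of that setup (the Packet type and the handler methods are placeholders for whatever the application already has):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PacketDispatcher {
    // a small, fixed pool instead of one thread per packet;
    // the pool's internal queue plays the producer-consumer role
    private final ExecutorService workers = Executors.newFixedThreadPool(4);

    public void onPacket(Packet packet) {
        if (packet.isHeartbeat()) {
            handleHeartbeat(packet);              // cheap: handle on the receiving thread
        } else {
            workers.submit(() -> handle(packet)); // queue the expensive ones
        }
    }

    private void handleHeartbeat(Packet packet) { /* update the last-seen timestamp */ }

    private void handle(Packet packet) { /* join/leave/lookup handling */ }

    // Packet is a stand-in for however the application represents a received datagram
    interface Packet {
        boolean isHeartbeat();
    }
}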
I would use an event-driven architecture. Creating a new thread for every packet is not scalable; it will work up to a certain workload, but at some point it won't work anymore. You could compare it to, e.g., a chat program like Facebook chat, where the messages are the packets.
An event-driven architecture would be scalable and, IMHO, exactly what you're looking for. Just do some googling; there are libraries for many programming languages, so pick the right one for you (I like to do that in Erlang, Scala, C or Python).
Edit: ok, I didn't see the Java tag. But the language doesn't matter.
Take a look at this link, for example:
http://www.nightmare.com/medusa/async_sockets.html
I find it quite a good introduction to the idea of event-driven programming.
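Since the question is about Java and UDP, the closest built-in equivalent is java.nio: a single-threaded event loop with a Selector and a non-blocking DatagramChannel. A minimal sketch (the port number and buffer size are arbitrary):

import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.DatagramChannel;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.util.Iterator;

public class UdpEventLoop {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();
        DatagramChannel channel = DatagramChannel.open();
        channel.bind(new InetSocketAddress(9999));   // port chosen only for the example
        channel.configureBlocking(false);
        channel.register(selector, SelectionKey.OP_READ);

        ByteBuffer buffer = ByteBuffer.allocate(2048);
        while (true) {
            selector.select();                       // block until at least one packet is readable
            Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
            while (keys.hasNext()) {
                SelectionKey key = keys.next();
                keys.remove();
                if (key.isReadable()) {
                    buffer.clear();
                    DatagramChannel ch = (DatagramChannel) key.channel();
                    ch.receive(buffer);
                    buffer.flip();
                    handlePacket(buffer);            // must stay quick, or hand off to a pool
                }
            }
        }
    }

    private static void handlePacket(ByteBuffer packet) {
        // parse and react to the packet here
    }
}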