I am new to threads. I want to communicate with multiple sensors at the same time, once every minute, continuously 24/7.
Scenario:
I have a method to talk to a sensor, which takes 3 arguments:
public String perform(String command, String ip, String port)
{
    // talk to the sensor, then
    return reply;
}
I have a database that contains the details of the sensors.
What I'm doing right now:
while (true)
{
    // get the sensors from the database
    // run the perform method for all sensors
    for (int i = 0; i < sensors.length; i++)
    {
        // call the perform method and save the reply
    }
    Thread.sleep(60 * 1000); // one minute
}
Problem:
The problem is that if I have 100 sensors and each sensor takes 1 second to reply, the loop alone takes 100 seconds, and after that I still wait another minute; in that case I may lose some information. And to be honest, sometimes a sensor takes more than a second to respond.
What I want to do is get the information for all the sensors from the database,
then create one thread per sensor. Then run all the threads at the same time, each returning its information. After that, wait for one minute and do it all again.
Any help is appreciated.
Thanks
Have you looked at the ScheduledThreadPoolExecutor?
A simple usage would be to create a Callable for each of your sensors, and configure the thread pool to contain as many threads as you have sensors. Then submit each Callable, specifying an appropriate schedule.
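A minimal sketch of that idea (Sensor here is a hypothetical holder for the command/ip/port loaded from the database, and perform stands in for the method from the question; note that scheduleAtFixedRate takes a Runnable, so the lambda wraps the call):

import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class SensorPoller {
    // Illustrative stand-in for one database row
    record Sensor(String command, String ip, String port) {}

    public void start(List<Sensor> sensors) {
        // One thread per sensor, so a slow sensor never delays the others
        ScheduledExecutorService pool = Executors.newScheduledThreadPool(sensors.size());
        for (Sensor s : sensors) {
            pool.scheduleAtFixedRate(() -> {
                String reply = perform(s.command(), s.ip(), s.port());
                // save the reply
            }, 0, 1, TimeUnit.MINUTES);
        }
    }

    private String perform(String command, String ip, String port) {
        // talk to the sensor, as in the question
        return "reply";
    }
}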
Note that this approach doesn't guarantee particularly accurate timings (Java is not by any means a real-time platform). The other issue is that creating a lot of threads can be relatively memory-hungry (IIRC the default stack allocation per thread is 512k, but it's configurable), and this approach wouldn't scale if you had 1000s of sensors.
Personally I would take a different approach. I would have the server always listening via a RESTful API and have the sensors POST their state every minute (or whatever interval you decide). This way the server and the sensors don't need to be within the same JVM, which IMHO is more scalable. Also, this way any sensor can query the state of any other sensor via another RESTful endpoint on the server.
Additionally the server can start a thread to handle each POST and if one sensor is taking very long, the others are not blocked.
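For illustration only, a minimal sketch of such an endpoint, assuming Spring Boot (the paths and the in-memory map are purely hypothetical):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.springframework.web.bind.annotation.*;

@RestController
public class SensorStateController {
    private final Map<String, String> latestState = new ConcurrentHashMap<>();

    // Each sensor POSTs its state every minute; the container handles each
    // POST on its own thread, so a slow sensor doesn't block the others
    @PostMapping("/sensors/{id}/state")
    public void report(@PathVariable String id, @RequestBody String state) {
        latestState.put(id, state);
    }

    // Any sensor (or user) can query another sensor's last reported state
    @GetMapping("/sensors/{id}/state")
    public String query(@PathVariable String id) {
        return latestState.getOrDefault(id, "unknown");
    }
}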
I have this scenario: data from my sensors is sent to my server every 5 minutes, and the server stores the received data in the database. When the server receives data, I want to start a 6-minute timer for that specific sensor. If I receive data before those 6 minutes elapse, the timer must be canceled and started again. If the timer finishes, onFinish() must be called, at which point I will send a notification to the user about a possible connection issue with the sensor. I have this list of sensors and a method that gets called when the server receives data:
// map to save sensors and their timers
private val sensors: MutableMap<Long, CountDownTimer> = HashMap()

// this method gets called after the server receives data from the sensor
// (keep in mind that CountDownTimer is undefined outside Android)
fun initiateTimer(sensorId: Long) {
    sensors[sensorId]?.cancel() // restart: cancel any timer already running for this sensor
    sensors[sensorId] = object : CountDownTimer(360000, 1000) {
        override fun onTick(millisUntilFinished: Long) {
            // Blank
        }

        override fun onFinish() {
            // Send notification to user about possible connection issue
        }
    }.start()
}
I've been looking into scheduleWithFixedDelay, but it requires an ExecutorService with a specific thread count. Now what if I have thousands of sensors and need those thousands of timers running at once?
In Android I could simply use CountDownTimer, but since this isn't Android, I am seeking advice on the best possible approach to solve this problem using Kotlin in a Spring Boot environment (or in simple terms, not in an Android environment). Thank you.
The ScheduledExecutorService that you found is the standard way of scheduling tasks in the future. You can schedule thousands of timers and execute them all using a single thread - no problem with that.
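For example, a sketch of that single-thread variant (written in Java here, but it's the same API from Kotlin; the 6-minute timeout and the notification callback mirror the question):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

class SensorWatchdog {
    // One scheduler thread is enough for thousands of timers
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
    private final Map<Long, ScheduledFuture<?>> timers = new ConcurrentHashMap<>();

    // Call this each time data arrives from a sensor
    void initiateTimer(long sensorId) {
        ScheduledFuture<?> old = timers.put(sensorId,
                scheduler.schedule(() -> notifyUser(sensorId), 6, TimeUnit.MINUTES));
        if (old != null) {
            old.cancel(false); // data arrived in time: drop the previous timer
        }
    }

    private void notifyUser(long sensorId) {
        // send the "possible connection issue" notification
    }
}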
Alternatively, if you use coroutines, you can use utils like delay() or withTimeout().
Also, if you really plan to have a big number of such timers, it could be easier to implement and maintain an alternative solution with only a single "ticking" thread/coroutine that checks all sensors once per, e.g., 10 seconds. It iterates over all sensors and checks when data was last received. When new data arrives, we only update the timestamp; we never need to cancel and restart any timers.
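A sketch of that alternative (again Java; the 10-second tick and 6-minute limit just mirror the numbers above):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

class TickingChecker {
    private final Map<Long, Long> lastSeen = new ConcurrentHashMap<>();
    private final ScheduledExecutorService ticker = Executors.newSingleThreadScheduledExecutor();

    TickingChecker() {
        ticker.scheduleAtFixedRate(this::scan, 10, 10, TimeUnit.SECONDS);
    }

    // When new data arrives, we only update the timestamp
    void onData(long sensorId) {
        lastSeen.put(sensorId, System.currentTimeMillis());
    }

    // One "ticking" pass over all sensors
    private void scan() {
        long cutoff = System.currentTimeMillis() - TimeUnit.MINUTES.toMillis(6);
        lastSeen.forEach((id, ts) -> {
            if (ts < cutoff) {
                // possible connection issue with sensor `id`
            }
        });
    }
}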
I have one server and multiple clients. Each client periodically sends an alive packet to the server. (At the moment, the server doesn't respond to alive packets.) The period may vary from device to device and is configurable at runtime, for both the server and the clients. I want to generate an alert when one or more clients don't send their alive packet (one missed packet, or two in a row, etc.). This aliveness is used by other parts of the application, so the quicker the notice, the better. I came up with some ideas but couldn't pick one.
Create a task that compares every client's last alive-packet timestamp with the current time and generates the alerts. Run this task at some period, which should be smaller than the minimum client period.
That actually seems better to me; however, this way I unnecessarily check some clients' aliveness. (E.g., if client periods range from 1 to 5 minutes, the task has to run at least every minute, so checking the clients whose period is above 2 minutes that often is redundant.) Also, if the minimum client period decreases, I have to decrease the task's period as well.
Create a task for each client that compares that client's last alive-packet timestamp with the current time, then sleeps for that client's period.
This way, if the number of clients grows very high, there will be dozens of tasks. Since they will sleep most of the time, I doubt this is more elegant.
Is there an idiom or pattern for this kind of situation? I think a watchdog-style implementation would suit it well, but I haven't seen anything like that in Java.
Approach 2 is not very useful, as writing 100 tasks for 100 clients is a poor idea.
Approach 1 can be optimized if you use the average client period instead of the minimum.
It depends on your needs.
Is it critical if an alert is generated a few seconds later (or earlier) than it should be?
If not, then maybe it's worth grouping clients with nearby heartbeat intervals and running the check against a group of clients rather than a single client. This will decrease the number of tasks (100 -> 10) and increase the number of clients handled by a single task (1 -> 10).
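A rough sketch of the grouping idea (Java; the two-missed-packets rule and the pool size here are illustrative assumptions):

import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

class GroupedAliveChecker {
    private final ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(2);
    private final Map<String, Long> lastPacket = new ConcurrentHashMap<>();

    // Record an alive packet
    void onAlivePacket(String clientId) {
        lastPacket.put(clientId, System.currentTimeMillis());
    }

    // One task per group of clients with nearby periods
    void scheduleGroup(List<String> clientIds, long periodSeconds) {
        scheduler.scheduleAtFixedRate(() -> {
            long cutoff = System.currentTimeMillis() - periodSeconds * 2000; // two missed packets
            for (String id : clientIds) {
                Long ts = lastPacket.get(id);
                if (ts == null || ts < cutoff) {
                    // generate an alert for client `id`
                }
            }
        }, periodSeconds, periodSeconds, TimeUnit.SECONDS);
    }
}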
The first approach is fine.
The only thing I can suggest is to create an independent service to do this monitoring. If you run this task as a thread in your server, it won't be very manageable: imagine your control thread breaks or gets killed; how would you notice? So build an independent OS service, another Java program, to check the last alive timestamps periodically.
That way you can easily modify and restart your service and see its logs separately. Depending on its importance, you may even build a "watchdog of the watchdog" service too.
I am trying to roll out my own SMS verification system for my app. I don't want to start paying for a service and then have them jack up the price on me (Urban Airship did that to me for push notifications: lesson learned). During development and beta testing I have been using Twilio with a very basic setup: one phone number. It worked well for over a year, but right now, for whatever reason, the messages aren't always delivered. In any case, I need to create a better system for production. So I have the following specs in mind:
600 delivered SMS per minute
zero misses
save money
Right now my Twilio phone number can send one SMS per second, which means the best I can handle is 60 happy users per minute. So how do I get 600 happy users per minute?
So the obvious solution is to use 10 phone numbers. But how would I implement the system? My server is App Engine, DataStore, Java. So say I purchase 10 phone numbers from Twilio (fewer would of course be better). How do I implement the array so that it can handle concurrent calls from users? Will the following be sufficient?
public static final String[] phoneBank = {"1234567890", "2345678901", "3456789012", "4567890123", …};
public static volatile int nextIndex;

public void sendSMSUsingTwilio(String message, String userPhone) {
    nextIndex = (nextIndex + 1) % phoneBank.length;
    String toPhone = phoneBank[nextIndex];
    // boilerplate for sending sms with twilio goes here
    // …
}
Now imagine 1000 users calling this function at the very same time. Would nextIndex run from 0,1,2…9,0,1…9,0,… successively until all requests are sent?
So really this is a concurrency problem. How will this concurrency issue play out on Java App Engine? Will there be interleaving? Bottlenecking? I want this to be fast on a low budget: at least 600 per minute. So I definitely don't want synchronization in the code itself wasting precious time. How do I best synchronize the increments of nextIndex so that the phone numbers are each used equally and in round-robin fashion? Again, this is for Google App Engine.
You need to use the Task API. Every message becomes a new task, and you can assign phone numbers using round-robin or random assignment. As a task is completed, App Engine will automatically pull and execute the next one. You can configure the desired throughput rate (for example, 10 per second), and App Engine will manage the required capacity for you.
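A sketch with the Task Queue Java API (the worker URL is hypothetical; the actual Twilio call would live in the servlet it points to, and the queue's rate, e.g. 10/s, is configured in queue.xml):

import com.google.appengine.api.taskqueue.Queue;
import com.google.appengine.api.taskqueue.QueueFactory;
import com.google.appengine.api.taskqueue.TaskOptions;

public class SmsEnqueuer {
    public void enqueueSms(String message, String userPhone) {
        // Each SMS becomes a task; App Engine POSTs tasks to the worker
        // at the queue's configured rate
        Queue queue = QueueFactory.getDefaultQueue();
        queue.add(TaskOptions.Builder
                .withUrl("/send-sms-worker") // hypothetical worker servlet
                .param("message", message)
                .param("to", userPhone));
    }
}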
You can try to implement something similar on your own, but it's much more difficult than you think - you have to handle concurrency, retries, instance shutdowns, memory limits, etc. Task API does all of that for you.
I've built a server application in Java that clients can connect to. I've implemented a heartbeat system where each client sends a small message every x seconds.
On the server side I save, in a HashMap, the time each client sent its last message, and I use a TimerTask per client to check every x seconds whether I received any message from that client.
Everything works fine for a small number of clients, but once the number of clients increases (2k+) the memory usage becomes very big; on top of that, the Timer has to deal with a lot of TimerTasks and the program starts to eat a lot of CPU.
Is there a better way to implement this? I thought about using a database and selecting the clients that haven't sent any update within a certain amount of time.
Do you think that would work better, or is there a better way of doing this?
A few suggestions:
Instead of one timer per client, have a single global timer that examines the map of received heartbeats quite often (say, 10 times per second). Iterate over that map and find dead clients. Remember about thread-safety of the shared data structure!
If you want to use a database, use a lightweight in-memory DB like H2. But that still sounds like overkill.
Use a cache or some other expiring map and get notified every time something is evicted. You basically put an entry in the map when a client sends a heartbeat, and if nothing happens with that entry within a given amount of time, the map implementation removes it, calling some sort of listener.
Use an actor-based system like Akka (it has a Java API). You can have one actor on the server side handling each client. It's much more efficient than one thread/timer per client.
Use a different data structure, e.g. a queue. Every time you receive a heartbeat, remove the client from the queue and put it back at the end. Now periodically check only the head of the queue, which will always contain the client with the oldest heartbeat (see the sketch below).
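To make the last suggestion concrete, a sketch using a LinkedHashMap as the queue (insertion order means the head is always the stalest client; all names are illustrative):

import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

class HeartbeatQueue {
    // Insertion-ordered: re-inserting a client moves it to the tail
    private final Map<String, Long> lastBeat = new LinkedHashMap<>();

    synchronized void onHeartbeat(String clientId) {
        lastBeat.remove(clientId);
        lastBeat.put(clientId, System.currentTimeMillis());
    }

    // Called periodically; only the head of the queue needs checking
    synchronized void checkHead(long timeoutMillis) {
        long cutoff = System.currentTimeMillis() - timeoutMillis;
        Iterator<Map.Entry<String, Long>> it = lastBeat.entrySet().iterator();
        while (it.hasNext()) {
            Map.Entry<String, Long> e = it.next();
            if (e.getValue() >= cutoff) {
                break; // head is fresh, so everything behind it is too
            }
            it.remove();
            // e.getKey() looks dead: raise an alert
        }
    }
}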
I have a Spring-MVC, Hibernate (Postgres 9 db) web app. An admin user can send a request to process nearly 200,000 records (each record collected from various tables via joins). Such an operation is requested on a weekly or monthly basis (or whenever the data reaches a limit of around 200,000/100,000 records). On the database end, I am correctly implementing batching.
PROBLEM: Such a long-running request holds up the server thread, and that causes the normal users to suffer.
REQUIREMENT: The high response time of this request is not an issue. What's required is to not make other users suffer because of this time-consuming process.
MY SOLUTION:
Implement a threadpool using Spring's taskExecutor abstraction. I can initialize my threadpool with, say, 5 or 6 threads and break the 200,000 records into smaller chunks, say of size 1,000 each, and queue up these chunks. To further allow the normal users faster db access, maybe I can make every runnable thread sleep for 2 or 3 seconds between chunks.
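A rough sketch of this design, assuming a ThreadPoolTaskExecutor bean with 5-6 threads (processChunk and the record type are placeholders for the actual batched DB work):

import java.util.List;
import org.springframework.core.task.TaskExecutor;

public class BulkProcessor {
    private final TaskExecutor taskExecutor; // e.g. a 5-6 thread ThreadPoolTaskExecutor bean

    public BulkProcessor(TaskExecutor taskExecutor) {
        this.taskExecutor = taskExecutor;
    }

    // chunks: the 200,000 records pre-split into lists of ~1,000
    public void processAsync(List<List<Object>> chunks) {
        for (List<Object> chunk : chunks) {
            taskExecutor.execute(() -> {
                processChunk(chunk); // batched DB work for one chunk
                try {
                    Thread.sleep(2000); // yield some DB capacity to normal users
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }
    }

    private void processChunk(List<Object> chunk) {
        // inserts/updates for this chunk, using the existing batching
    }
}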
The advantage I see in this approach: instead of executing a huge db-heavy request in one go, we have an asynchronous design spread over a longer time, thus behaving like multiple normal user requests.
Can some experienced people please give their opinion on this?
I have also read about implementing the same behaviour with message-oriented middleware like JMS/AMQP, or with Quartz scheduling. But frankly speaking, I think internally they are also going to do the same thing, i.e. make a thread pool and queue up the jobs. So why not go with Spring's taskExecutors instead of adding completely new infrastructure to my web app just for this feature?
Please share your views on this and let me know if there are better ways to do it.
Once again: the time to completely process all the records is not a concern; what's required is that normal users accessing the web app during that time should not suffer in any way.
You can parallelize the tasks and wait for all of them to finish before returning the call. For this, you want to use ExecutorCompletionService, which has been available in the Java standard library since 5.0.
In short, you use your container's service locator to create an instance of ExecutorCompletionService
ExecutorCompletionService<List<MyResult>> queue = new ExecutorCompletionService<List<MyResult>>(executor);
// submit each callable in a loop
queue.submit(aCallable);
// after looping, call take().get() once per submitted task;
// each take() blocks until the next task finishes
queue.take().get();
If you do not want to wait, you can process the jobs in the background without blocking the current thread, but then you will need some mechanism to inform the client when the job has finished. That can be through JMS or, if you have an Ajax client, it can poll for updates.
Quartz also has a job-scheduling mechanism, but Java provides a standard way.
EDIT:
I might have misunderstood the question. If you do not want a faster response but rather want to throttle the CPU, use this approach.
You can write an inner class like the PollingThread below, where the incoming batches (a set of java.util.UUID values, one per job) and the number of PollingThreads are defined in the outer class. This will keep running forever and can be tuned to keep your CPUs free to handle other requests:
class PollingThread implements Runnable {
    @SuppressWarnings("unchecked")
    public void run() {
        Thread.currentThread().setName("MyPollingThread");
        while (!Thread.interrupted()) {
            try {
                LinkedHashSet<UUID> list = null;
                // incomingList is the batch of job UUIDs, defined in the outer class
                synchronized (incomingList) {
                    if (incomingList.isEmpty()) {
                        // incoming is empty, wait for some time
                    } else {
                        // copy the batch and clear the original
                        list = (LinkedHashSet<UUID>) incomingList.clone();
                        incomingList.clear();
                    }
                }
                if (list != null && !list.isEmpty()) {
                    processJobs(list);
                }
                // Sleep for some time to keep the CPUs free for other requests
                try {
                    Thread.sleep(seconds * 1000); // 'seconds' is configured in the outer class
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt(); // restore the flag so the loop exits
                }
            } catch (Throwable e) {
                // ignore and keep polling
            }
        }
    }
}
Huge db operations are usually triggered in the wee hours, when user traffic is pretty low (say, somewhere between 1 AM and 2 AM). Once you find that window, you can simply schedule a job to run at that time. Quartz can come in handy here, with its time-based triggers. (Note: manually triggering a job is also possible.)
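A minimal Quartz sketch of such a trigger (HeavyEtlJob is a hypothetical Job wrapping the 200,000-record processing; the cron expression fires at 1 AM daily):

import org.quartz.CronScheduleBuilder;
import org.quartz.Job;
import org.quartz.JobBuilder;
import org.quartz.JobDetail;
import org.quartz.JobExecutionContext;
import org.quartz.Scheduler;
import org.quartz.Trigger;
import org.quartz.TriggerBuilder;
import org.quartz.impl.StdSchedulerFactory;

public class NightlySchedule {
    public static class HeavyEtlJob implements Job {
        public void execute(JobExecutionContext ctx) {
            // run the heavy record processing here, writing to the result tables
        }
    }

    public static void main(String[] args) throws Exception {
        JobDetail job = JobBuilder.newJob(HeavyEtlJob.class)
                .withIdentity("heavyEtl")
                .build();
        Trigger trigger = TriggerBuilder.newTrigger()
                .withSchedule(CronScheduleBuilder.cronSchedule("0 0 1 * * ?")) // 01:00 every day
                .build();
        Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();
        scheduler.scheduleJob(job, trigger);
        scheduler.start();
    }
}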
The processed results could then be stored in different table(s) (I'll refer to them as result tables). Later, when a user wants the results, the db operations run against these result tables, which have minimal records and hardly any joins.
instead of adding a completely new infrastructure in my web app just for this feature?
Quartz.jar is ~350 KB and adding this dependency shouldn't be a problem. Also note that there's no reason this needs to live in the web app: the few classes that do the ETL could be placed in a standalone module. The request from the web app then only needs to fetch from the result tables.
Apart from all this, if you already have a master-slave db model (discuss that with your DBA), you could run the huge db operations against the slave db rather than the master, which the normal users are pointed to.