In JavaFX, a reusable task is usually implemented as a javafx.concurrent.Service<>.
The question is: how do I manage multiple UI interactions that trigger the Service multiple times?
Approach 1 - restart():
I could use service.restart(), but it cancels the running task and starts a new one. This is not the desired result, as I do not wish to cancel the first one.
Approach 2 - start():
To be able to use start() more than once, I would have to do this:
if (!isRunning()) {
    reset();
    start();
}
But if isRunning() is true, the second run is ignored.
I want to defer the second run until the first one finishes, so that no UI interaction is lost. In other words, I want to block or enqueue the tasks.
How can this be accomplished?
If you want to stop users from trying to run the Service two or more times at once, simply disable all the UI nodes that launch the Service while it's running. One way of doing this is to bind the disable property of the Node to the running property of the Service.
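For example, assuming a Button field named startButton launches the Service (the field name is illustrative), a single binding is enough:

startButton.disableProperty().bind(service.runningProperty());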
If you want to queue up executions then it depends on what the Service requires. For instance, does it need any input? If not, just have some requests variable and increment/decrement it as needed.
public class Controller {

    private int requests;

    private Service<Object> service = new Service<>() {
        @Override
        protected Task<Object> createTask() {
            // create and return Task...
        }

        @Override
        protected void succeeded() {
            if (requests > 0) {
                requests--;
                restart();
            }
        }
    };

    @FXML
    private void startService() {
        if (service.isRunning()) {
            requests++;
        } else {
            service.start();
        }
    }
}
If the Service (or more specifically, the Task) does need input you'd still do something similar. Instead of using an int tracking the number of requests, however, you'd use a Queue (or some other similar object) that contains the needed information for each Task. When the Service completes and the Queue is not empty, restart the Service and grab the next element.
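A minimal sketch of that Queue-based variant, assuming each request carries a String input; the class, field and helper names are mine, not from the original answer, and everything here runs on the FX Application Thread, so a plain ArrayDeque is sufficient:

import java.util.ArrayDeque;
import java.util.Queue;

import javafx.concurrent.Service;
import javafx.concurrent.Task;
import javafx.fxml.FXML;

public class QueuedController {

    // One entry per pending run; each element is the input for that run.
    private final Queue<String> pending = new ArrayDeque<>();

    private final Service<Object> service = new Service<>() {
        @Override
        protected Task<Object> createTask() {
            // Take the input for this particular run off the queue.
            final String input = pending.poll();
            return new Task<>() {
                @Override
                protected Object call() {
                    // ... do the work with 'input' ...
                    return null;
                }
            };
        }

        @Override
        protected void succeeded() {
            // More requests arrived while we were running? Go again.
            if (!pending.isEmpty()) {
                restart();
            }
        }
    };

    @FXML
    private void startService() {
        pending.add(currentInput()); // queue this request's input
        if (!service.isRunning()) {
            // restart() resets and starts from any non-running state,
            // so it works for the first run and for every later one.
            service.restart();
        }
    }

    private String currentInput() {
        return "..."; // however the UI provides the input (illustrative)
    }
}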
For instance, consider the scenario below.
App1: I have a multi-threaded Java app which inserts a lot of files into a DB.
App2: when I access the DB using some other app, it is slow to fetch results.
So when both apps work simultaneously, it takes a long time for the DB to return results to the front-end App2.
Here, I want to pause all transactions (threads) on App1 for some 'x minutes'. Assume a trigger has already been installed that fires when App2 is being used. When App2 is idle again, App1 should resume as if nothing happened. Please list one or more good approaches to achieve this.
Map<Thread, StackTraceElement[]> threads = Thread.getAllStackTraces();
for (Map.Entry<Thread, StackTraceElement[]> entry : threads.entrySet()) {
    // sleep() is a static method that needs a duration and only ever pauses
    // the calling thread, never entry.getKey(), so this cannot work as intended
    entry.getKey().sleep();
}
This didn't work well.
Just to try:
private List<PausableThread> threads = new ArrayList<PausableThread>();

private void pauseAllThreads()
{
    for (PausableThread thread : this.threads)
    {
        thread.pause();
    }
}
And your Thread class will be something like this:
public class MyThread extends Thread implements PausableThread
{
    // volatile so the write made by pause() from another thread is visible here
    private volatile boolean isPaused = false;

    @Override
    public void pause()
    {
        this.isPaused = true;
    }

    @Override
    public void run()
    {
        while (!Thread.currentThread().isInterrupted())
        {
            // Do your work...

            // Check if paused
            if (this.isPaused)
            {
                try
                {
                    Thread.sleep(10 * 1000);
                }
                catch (InterruptedException e)
                {
                    e.printStackTrace();
                }
            }
        }
    }
}
And the PausableThread interface:
public interface PausableThread
{
    void pause();
}
Posting a solution answer for my scenario.
I created a global flag and used it as a switch.
So now, before any DB interaction, I just added a condition (in the various functions where the threads were performing a variety of jobs; this also solved the instancing issue I was worried about):
if (isFlagChecked) { Thread.sleep(someDefinedTime); } // wait here if the flag is true
// ... then continue with the business logic [the DB transacts here]
So my issue was solved with just this. It does not pause a thread that is already mid-transaction, which is actually a good thing: one less problem to worry about.
In parallel, in my trigger function, I check the elapsed time and change the flag back to false after the desired time has passed. See the code skeleton below.
@Async
void pause() // triggered by the parallel-running app when required
{
    isFlagChecked = true;
    resumeTime = new Date(timeInMillis + someDefinedTime); // time at which to switch the flag back
    // busy-wait until the resume time is reached, then clear the flag
    while (true) {
        if (new Date().compareTo(resumeTime) > 0) {
            isFlagChecked = false;
            break;
        }
    }
}
Tried and tested, everything runs well and performance improved significantly (at least in my scenario).
I'm trying to solve a problem similar to a mail client downloading new mail from mail servers. I have a task which is performed regularly (for example, the next iteration starts 10 minutes after the last one ends), but there is also the possibility to run the task manually.
When I try to run the job manually and the job is not running at that moment (it is scheduled for later), I cancel the appointment and schedule the job for now. When the job is already running, I do not cancel it, but wait until it finishes and then run it again. Only one task can wait this way.
My problem is that I do not know how to synchronize the jobs to make this thread safe and make sure the job never runs twice at the same time.
To make it clearer: the main problem is that simply asking whether the job is running and deciding based on the answer is not enough, because the situation can change between the question and the action. The window is short, but the probability is not zero. The same problem arises when deciding whether to run the job again at the end of a run. If my decision is based on the value of some variable or some other if clause, another thread can change it between the test and the action, and I can end up with two scheduled jobs, or none at all.
Have you considered using a DelayQueue?
To make your job run now, you just need to persuade it to return 0 from getDelay(TimeUnit unit).
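A rough sketch of that idea; DelayedJob and its members are illustrative names, not an existing API:

import java.util.concurrent.DelayQueue;
import java.util.concurrent.Delayed;
import java.util.concurrent.TimeUnit;

// A job that becomes eligible to run at 'startAt' (epoch millis).
class DelayedJob implements Delayed {

    private volatile long startAt;
    private final Runnable work;

    DelayedJob(Runnable work, long delayMillis) {
        this.work = work;
        this.startAt = System.currentTimeMillis() + delayMillis;
    }

    void runNow() {
        startAt = System.currentTimeMillis(); // force the delay to zero
    }

    Runnable work() {
        return work;
    }

    @Override
    public long getDelay(TimeUnit unit) {
        return unit.convert(startAt - System.currentTimeMillis(), TimeUnit.MILLISECONDS);
    }

    @Override
    public int compareTo(Delayed other) {
        return Long.compare(getDelay(TimeUnit.MILLISECONDS),
                            other.getDelay(TimeUnit.MILLISECONDS));
    }
}

A single consumer thread take()s from the DelayQueue, which by itself guarantees the job never runs twice at the same time. To run a queued job immediately, remove it, zero its delay and re-insert it, because DelayQueue does not re-sort elements whose delay changes in place:

DelayQueue<DelayedJob> queue = new DelayQueue<>();

new Thread(() -> {
    while (!Thread.currentThread().isInterrupted()) {
        try {
            queue.take().work().run(); // blocks until a job's delay has expired
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}).start();

// queue.remove(job); job.runNow(); queue.offer(job);   // "run it now"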
The usual way to do the check you are describing is check, lock, then repeat the same check (double-checked locking):
public class OneQueuedThread extends Thread {
    static int numberRunning = 0;

    public void run() {
        if (numberRunning < 2) {                 // first, unsynchronized check
            synchronized (OneQueuedThread.class) {
                if (numberRunning < 2) {         // re-check once the lock is held
                    numberRunning++;
                    // ---------------your process runs here
                    numberRunning--;
                }
            }
        }
    }
}
So an attempt to run the thread while it is already running will wait until the running thread ends and then start right after it.
As for scheduling, let's use the TimerTask features:
public class ScheduledTask extends TimerTask {
    static ScheduledTask instance;

    /**
     * This constructor is to be used for manual launching.
     */
    public ScheduledTask() {
        if (instance == null) {
            instance = this;
        } else {
            instance.cancel();
        }
        instance.run();
    }

    /**
     * This constructor is to be used for scheduled launching.
     * @param deltaTime
     */
    public ScheduledTask(long deltaTime) {
        instance = this;
        Timer timer = new Timer();
        timer.schedule(instance, deltaTime);
    }

    public void run() {
        OneQueuedThread currentThread = new OneQueuedThread();
        currentThread.start();
    }
}
I have been making a game and was using threads in my program to carry out tasks. So let me explain the scenario a bit. I have a BattleManager class which implements Runnable and keeps looping over the battle queue, processing battles if there are any.
@Override
public void run() {
    while (serverRunning) {
        synchronized (battleQueue) {
            for (Battle battle : battleQueue) {
                if (battle != null) {
                    if (battle instanceof WildBattle) {
                        if (!((WildBattle) battle).isBattleOver()) {
                            ((WildBattle) battle).tryExecuteBattleTurn();
                        } else {
                            battleQueue.remove(battle);
                            battle = null;
                        }
                    }
                }
            }
        }
        try {
            Thread.sleep(3);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
    currentThread = null;
}
Then I check whether the battle is over, and if not I try to execute the battle turn. Since there can be more than 100 battles running at the same time, each with complex calculations inside, I spawn a child thread inside the WildBattle class to execute the task, so that the battles run in parallel.
Here is the method, invoked inside the WildBattle class, which spawns the new thread.
public void tryExecuteBattleTurn() {
    if (!isBattleTurnRunning && battleThread == null) {
        battleThread = new Thread(new Runnable() {
            @Override
            public void run() {
                //long startTime = System.currentTimeMillis();
                executeBattle();
                battleLog.setBattleLog("");
                battleThread = null;
                //System.err.println("Total execution time : " + (System.currentTimeMillis() - startTime));
            }
        }, "Battle thread");
        battleThread.start();
    }
}
Now the main question: I want to learn about ExecutorService, and I have read in a few places that it is better to use an ExecutorService than to spawn child threads yourself. How can I change this code to use an ExecutorService?
I am not sure how, though. I am not a Java expert and still learning the language, so bear with me if something is wrong, and please let me know if I can change anything to make it more efficient.
Let me know if you are not clear about anything.
I'll show you a basic example and you can work out how to integrate it with your code.
First you create ExecutorService somewhere in your application.
ExecutorService executorService = Executors.newFixedThreadPool(NUMBER_OF_THREADS);
You should choose NUMBER_OF_THREADS based on your application's needs. Threads are not created immediately, but only when you submit a task to the service and no thread is available for it. If all NUMBER_OF_THREADS threads are busy, the task waits in a queue until one of the threads becomes available. The ExecutorService reuses its threads, which saves the cost of thread instantiation and is generally a good way to work with threads.
Then work out how to access the executor service from your battles. Whenever you need to perform asynchronous work, submit a task to the service:
executorService.submit(new Runnable() {
    @Override
    public void run() {
        // your code here
    }
});
If your application has a lifecycle and can be shut down, you will want to shut down the ExecutorService as well. There are two options: shutdown() stops accepting new tasks but lets already submitted tasks finish, while shutdownNow() attempts to stop running tasks immediately and returns the list of tasks that were still waiting to execute.
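A typical shutdown sequence could look like this (a sketch; the 30-second timeout is arbitrary):

executorService.shutdown();            // stop accepting new tasks; already submitted tasks keep running
try {
    // give in-flight tasks a bounded amount of time to finish
    if (!executorService.awaitTermination(30, TimeUnit.SECONDS)) {
        executorService.shutdownNow(); // interrupt what is still running; returns the never-started tasks
    }
} catch (InterruptedException e) {
    executorService.shutdownNow();
    Thread.currentThread().interrupt();
}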
As was mentioned in comments, you should figure out how to preserve model state and organize thread synchronization based on your real situation.
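Applied to the tryExecuteBattleTurn() method from the question, a sketch could look like the following. It assumes the executorService is reachable from the battle (for example, passed in by the BattleManager), and the flag should be volatile since it is read and written from different threads:

private volatile boolean isBattleTurnRunning; // written by the pool thread, read by the manager thread

public void tryExecuteBattleTurn() {
    if (!isBattleTurnRunning) {
        isBattleTurnRunning = true;
        executorService.submit(new Runnable() {
            @Override
            public void run() {
                executeBattle();
                battleLog.setBattleLog("");
                isBattleTurnRunning = false; // allow the next turn to be scheduled
            }
        });
    }
}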
I have a use-case coming from a GUI problem I would like to submit to your sagacity.
Use case
I have a GUI that displays a computation result depending on some parameters the user sets in the GUI. For instance, when the user moves a slider, several events are fired, and each of them triggers a new computation. When the user adjusts the slider value from A to B, dozens of events are fired.
But the computation can take up to several seconds, whereas the slider adjustment can fire an event every few hundred milliseconds.
How do I write a proper thread that listens to these events and filters them so that the repainting of the results stays lively? Ideally I would like something like:
start a new computation as soon as the first change event is received;
cancel the first computation if a new event is received, and start a new one with the new parameters;
but ensure that the last event is not lost, because the last completed computation needs to be the one with the last updated parameters.
What I have tried
A friend of mine (A. Cardona) proposed this low-level approach: an Updater thread that prevents too many events from triggering a computation. I copy-paste it here (GPL):
He puts this in a class that extends Thread:
public void doUpdate() {
    if (isInterrupted())
        return;
    synchronized (this) {
        request++;
        notify();
    }
}

public void quit() {
    interrupt();
    synchronized (this) {
        notify();
    }
}

public void run() {
    while (!isInterrupted()) {
        try {
            final long r;
            synchronized (this) {
                r = request;
            }
            // Call refreshable update from this thread
            if (r > 0)
                refresh(); // Will trigger re-computation
            synchronized (this) {
                if (r == request) {
                    request = 0; // reset
                    wait();
                }
                // else loop through to update again
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

public void refresh() {
    // Execute computation and paint it
    ...
}
Every time an event is sent by the GUI stating that parameters have changed, we call updater.doUpdate(). This causes the refresh() method to be called much less often.
But I have no control over this.
Another way?
I was wondering if there is another way to do this using the java.util.concurrent classes, but I could not work out which part of the Executors framework I should start with.
Does any of you have some experience with a similar use case?
Thanks
If you're using Swing, the SwingWorker provides capabilities for this, and you don't have to deal with the thread pool yourself.
Fire off a SwingWorker for each request. If a new request comes in and the worker is not done, you can cancel() it and just start a new SwingWorker. Regarding what the other poster said, I don't think publish() and process() are what you are looking for (although they are also very useful), since they are meant for the case where the worker fires off events faster than the GUI can process them.
ThingyWorker worker;

public void actionPerformed(ActionEvent e) {
    if (worker != null) worker.cancel(true); // cancel(boolean): true allows interrupting doInBackground()
    worker = new ThingyWorker();
    worker.execute();
}

class ThingyWorker extends SwingWorker<YOURCLASS, Object> {
    @Override
    protected YOURCLASS doInBackground() throws Exception {
        return doSomeComputation(); // Should be interruptible
    }

    @Override
    protected void done() {
        worker = null; // Reset the reference to the worker
        YOURCLASS data;
        try {
            data = get();
        } catch (Exception e) {
            // May be InterruptedException, CancellationException or ExecutionException
            e.printStackTrace();
            return;
        }
        // Do something with data
    }
}
Both the action and the done() method are executed on the same thread (the Event Dispatch Thread), so they can safely check the reference to see whether there is an existing worker.
Note that this effectively does the same thing as letting the GUI cancel an existing operation, except that the cancel happens automatically when a new request is fired.
I would provide a further degree of disconnect between the GUI and the controls by using a queue.
If you use a BlockingQueue between the two processes. Whenever the controls change you can post the new settings to the queue.
Your graphics component can read the queue whenever it likes and act on the arriving events or discard them as necessary.
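A minimal sketch of that decoupling; RenderWorker, Settings and recomputeAndRepaint are illustrative names:

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

class RenderWorker implements Runnable {

    // Placeholder for whatever parameter object the controls produce.
    static class Settings {
        final double value;
        Settings(double value) { this.value = value; }
    }

    private final BlockingQueue<Settings> queue = new LinkedBlockingQueue<>();

    // Called from the GUI/event thread; never blocks.
    void post(Settings s) {
        queue.offer(s);
    }

    @Override
    public void run() {
        try {
            while (!Thread.currentThread().isInterrupted()) {
                Settings latest = queue.take();      // wait for at least one update
                Settings next;
                while ((next = queue.poll()) != null) {
                    latest = next;                   // keep only the most recent, discard the rest
                }
                recomputeAndRepaint(latest);         // the long-running computation + repaint
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    private void recomputeAndRepaint(Settings s) {
        // compute with 's' and push the result back to the GUI thread
    }
}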
I would look into SwingWorker.publish() (http://docs.oracle.com/javase/6/docs/api/javax/swing/SwingWorker.html)
publish() allows the background thread of a SwingWorker to trigger calls to the process() method, but not every publish() call results in a call to process(). If several publish() calls are made before process() has run and can be called again, SwingWorker concatenates the parameters from those publish() calls into a single call to process().
I had a progress dialog that displayed the files being processed; the files were processed faster than the UI could keep up with, and I didn't want the processing to slow down just to display the file names. I used this and had process() display only the last filename it was handed; all I wanted was to indicate to the user roughly where the processing was, since they weren't going to read all the filenames anyway. My UI worked very smoothly with this.
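A small sketch of that pattern, with illustrative names (FileWorker, statusLabel, filesToProcess and processFile are placeholders, not from the original post):

import java.util.List;
import javax.swing.JLabel;
import javax.swing.SwingWorker;

class FileWorker extends SwingWorker<Void, String> {

    private final JLabel statusLabel; // illustrative progress label, assumed to exist

    FileWorker(JLabel statusLabel) {
        this.statusLabel = statusLabel;
    }

    @Override
    protected Void doInBackground() {
        for (String file : filesToProcess()) { // illustrative source of work
            processFile(file);                 // illustrative processing step
            publish(file);                     // may be coalesced with later publish() calls
        }
        return null;
    }

    @Override
    protected void process(List<String> chunks) {
        // Runs on the EDT; 'chunks' may contain several coalesced filenames,
        // so show only the most recent one.
        statusLabel.setText(chunks.get(chunks.size() - 1));
    }

    private List<String> filesToProcess() { return List.of(); } // placeholder
    private void processFile(String file) { }                   // placeholder
}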
Take a look at the implementation of javax.swing.SwingWorker (source code in the Java JDK),
with a focus on the handshaking between two methods: publish and process.
These won't be directly applicable, as-is, to your problem - however they demonstrate how you might queue (publish) updates to a worker thread and then service them in your worker thread (process).
Since you only need the last work request, you don't even need a queue in your situation: keep only the last work request. Sample that "last request" over some small period (say one second), to avoid stopping and restarting many times per second, and if it has changed, then stop the work and restart it with the new parameters.
The reason you don't want to use publish/process as-is is that process() always runs on the Swing Event Dispatch Thread, which is not at all suitable for long-running calculations.
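One way to sketch that "keep only the last request and sample it" idea; every name here (LatestRequestSampler, Params, the cancel/start helpers) is illustrative:

import java.util.concurrent.atomic.AtomicReference;

class LatestRequestSampler implements Runnable {

    static class Params { } // placeholder for the real parameter object

    // Holds only the most recent request; older ones are simply overwritten.
    private final AtomicReference<Params> latest = new AtomicReference<>();

    // Called from the event thread for every change event.
    void submit(Params p) {
        latest.set(p);
    }

    @Override
    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            Params p = latest.getAndSet(null);   // sample; null means "nothing new"
            if (p != null) {
                cancelCurrentComputation();      // abort the stale run, if any
                startComputation(p);             // start again with the newest parameters
            }
            try {
                Thread.sleep(1000);              // sample roughly once per second
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }

    private void cancelCurrentComputation() { /* ... */ }
    private void startComputation(Params p) { /* ... */ }
}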
The key here is that you want to be able to cancel an ongoing computation. The computation must frequently check a condition to see if it needs to abort.
volatile Param newParam;

Result compute(Param param)
{
    loop
        compute a small sub-problem
        if (newParam != null) // abort
            return null;
    return result
}
To hand over the param from the event thread to the compute thread:
synchronized void put(Param param) // invoked by event thread
    newParam = param;
    notify();

synchronized Param take()
    while (newParam == null)
        wait();
    Param param = newParam;
    newParam = null;
    return param;
And the compute thread does
public void run()
    while (true)
        Param param = take();
        Result result = compute(param);
        if (result != null)
            paint result in event thread
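A compact, runnable version of that sketch, assuming a Swing GUI; all the class and method names are illustrative:

import javax.swing.SwingUtilities;

class ComputeWorker implements Runnable {

    static class Param { }  // placeholders for the real types
    static class Result { }

    private Param newParam; // guarded by 'this'

    synchronized void put(Param p) {            // invoked by the event thread
        newParam = p;
        notify();
    }

    private synchronized Param take() throws InterruptedException {
        while (newParam == null) {
            wait();
        }
        Param p = newParam;
        newParam = null;
        return p;
    }

    private synchronized boolean hasNewParam() {
        return newParam != null;
    }

    @Override
    public void run() {
        try {
            while (true) {
                Param param = take();
                Result result = compute(param);
                if (result != null) {
                    SwingUtilities.invokeLater(() -> paint(result)); // paint on the event thread
                }
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    private Result compute(Param param) {
        for (int step = 0; step < totalSteps(param); step++) {
            // compute a small sub-problem ...
            if (hasNewParam()) {
                return null;                     // abort: a newer request is already waiting
            }
        }
        return new Result();
    }

    private int totalSteps(Param p) { return 10; } // illustrative
    private void paint(Result r) { /* update the GUI */ }
}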
In most cases when you create your thread you can prepare the data beforehand and pass it into the constructor or method.
However in cases like an open socket connection you will typically already have a thread created but wish to tell it to perform some action.
Basic idea:
C#
private Thread _MyThread = new Thread(MyMethod);
this._MyThread.Start(param);
Java
private Thread _MyThread = new Thread(new MyRunnableClass(param));
this._MyThread.start();
Now what?
So what is the correct way to pass data to a running thread in C# and Java?
One way to pass data to a running thread is by implementing message queues. The thread that wants to tell the listening thread to do something adds an item to the listening thread's queue. The listening thread reads from this queue in a blocking fashion, so it waits when there are no actions to perform. Whenever another thread puts a message in the queue, the listener fetches the message and, depending on the item and its content, can act on it.
This is some Java / pseudo code:
class Listener
{
    private Queue queue;

    public void SendMessage(Message m)
    {
        // This will be executed in the calling thread.
        // The locking will be done either in this function or in the put below,
        // depending on your Queue implementation.
        synchronized (this.queue)
        {
            this.queue.put(m);
        }
    }

    public void Loop()
    {
        // This function should be called from the Listener thread.
        while (true)
        {
            Message m = this.queue.take();
            doAction(m);
        }
    }

    public void doAction(Message m)
    {
        if (m instanceof StopMessage)
        {
            ...
        }
    }
}
And the caller:
class Caller
{
    private Listener listener;

    void LetItStop()
    {
        listener.SendMessage(new StopMessage());
    }
}
Of course, there are a lot of best practices when programming parallel/concurrent code. For example, instead of while(true) you should at least add a boolean field such as run that you can set to false when you receive a StopMessage. Depending on the language in which you implement this, you will have other primitives and behaviour to deal with.
In Java, for example, you might want to use the java.util.concurrent package to keep things simple.
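For instance, with java.util.concurrent the hand-rolled locking above collapses into a BlockingQueue. A sketch, with the class and message types renamed so it doesn't clash with the Listener above:

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

class QueueListener implements Runnable {

    interface Message { }
    static class StopMessage implements Message { }

    private final BlockingQueue<Message> queue = new LinkedBlockingQueue<>();

    // Called from any thread; the queue does the locking for you.
    void send(Message m) {
        queue.offer(m);
    }

    @Override
    public void run() {
        try {
            while (true) {
                Message m = queue.take();        // blocks until a message arrives
                if (m instanceof StopMessage) {
                    return;                      // leave the loop instead of spinning forever
                }
                doAction(m);
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    private void doAction(Message m) { /* ... */ }
}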
Java
You could basically have a LinkedList (used here as a FIFO queue) and proceed (with something) like this (untested):
import java.util.LinkedList;

class MyRunnable<T> implements Runnable {

    private final LinkedList<T> queue;
    private volatile boolean stopped;

    public MyRunnable(LinkedList<T> queue) {
        this.queue = queue;
        this.stopped = false;
    }

    public void stopRunning() {
        stopped = true;
        synchronized (queue) {
            queue.notifyAll();
        }
    }

    public void run() {
        T current;
        while (!stopped) {
            synchronized (queue) {
                while (queue.isEmpty() && !stopped) {
                    try {
                        queue.wait(); // releases the lock until addLast()/notifyAll() is called
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        return;
                    }
                }
                if (stopped) {
                    return;
                }
                current = queue.removeFirst(); // still holding the lock, so this is safe
            }
            // do something with the data from the queue (outside the lock)
        }
    }
}
As long as you keep a reference to the LinkedList instance passed in as an argument somewhere else, all you have to do is:
synchronized (queue) {
    queue.addLast(element); // add your T element here. You could even handle some
                            // sort of priority queue by adding at a given index
    queue.notifyAll();
}
Edit: I misread the question.
C#
What I normally do is create a global static class and then set the values there. That way you can access them from both threads. I am not sure if this is the preferred method, and there could be cases where locking issues occur (correct me if I'm wrong) that would need to be handled.
I haven't tried it, but it should work for the thread pool/BackgroundWorker as well.
One way I can think of is through property files.
Well, it depends a lot on the work that the thread is supposed to do.
For example, you can have a thread waiting on an event (e.g. a ManualResetEvent) and a shared queue where you put work items (these can be data structures to be processed, or more clever commands following a Command pattern). Somebody adds new work to the queue and signals the event, so the thread wakes up, takes work from the queue and starts performing its task.
You can encapsulate this code inside a custom queue, where any thread that calls the Dequeue method blocks until somebody calls Add(item).
On the other hand, maybe you want to rely on the .NET ThreadPool class to issue tasks for the threads in the pool to execute.
Does this example help a bit?
You can use the delegate pattern, where child threads subscribe to an event and the main thread raises the event, passing the parameters.
You could run your worker thread within a loop (if that makes sense for your requirement) and check a flag on each iteration of the loop. The flag would be set by the other thread to signal the worker thread that some state had changed; it could also set a field at the same time to pass the new state.
Additionally, you could use Monitor.Wait and Monitor.Pulse to signal the state changes between the threads.
Obviously, the above would need synchronization.