I use org.eclipse.core.runtime.jobs.Job to execute a stored procedure that deletes data, and then to update the user interface according to the new data. It is therefore important that this job completes even if the user closes the Eclipse application.
final Job execStoredProcJob = new Job(taskName) {
    @Override
    protected IStatus run(IProgressMonitor monitor) {
        monitor.beginTask(taskName, IProgressMonitor.UNKNOWN);
        // execute stored procedure
        // update user interface
        monitor.done();
        return Status.OK_STATUS;
    }
};
execStoredProcJob.schedule();
When I close the Eclipse app while the Job is still running, it seems to kill the Job. How can the job be completed after the user has closed the Eclipse app? Is it possible?
I think you might want to take a look at scheduling rules:
http://www.eclipse.org/articles/Article-Concurrency/jobs-api.html
execStoredProcJob.setRule([Workspace root]);
execStoredProcJob.schedule();
[Workspace root] can be obtained via project.getWorkspace().getRoot() if you have a reference to your project.
That will block all jobs that require the same rule. The shutdown job is one of them.
It's also possible to obtain the workspace root without a project reference:
IWorkspace myWorkspace = org.eclipse.core.resources.ResourcesPlugin.getWorkspace();
myWorkspace.getRoot();
An alternative to scheduling rules is to add code in WorkbenchAdvisor.preShutdown() that joins any outstanding jobs you have, as sketched below.
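A minimal sketch of that approach, assuming your Job overrides belongsTo(Object) to claim membership in a hypothetical MY_JOB_FAMILY token (the family token and the advisor class are illustrative, not part of the Jobs API):
public class MyWorkbenchAdvisor extends WorkbenchAdvisor {

    // hypothetical family token; your Job's belongsTo(Object family)
    // must return true for it
    public static final Object MY_JOB_FAMILY = new Object();

    @Override
    public boolean preShutdown() {
        try {
            // blocks until every job in the family has finished
            Job.getJobManager().join(MY_JOB_FAMILY, null);
        } catch (InterruptedException | OperationCanceledException e) {
            // the wait was interrupted; let the shutdown proceed anyway
        }
        return true; // allow the workbench to close
    }

    @Override
    public String getInitialWindowPerspectiveId() {
        return null; // whatever your application already returns
    }
}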
Suppose you have Spark with the Standalone cluster manager. You opened a Spark session with some configs and want to launch SomeSparkJob 40 times in parallel with different arguments.
Questions
How to set the retry count on job failures?
How to restart jobs programmatically on failure? This could be useful if jobs fail due to lack of resources. Then I could launch, one by one, all the jobs that require extra resources.
How to restart the Spark application on job failure? This could be useful if a job lacks resources even when it's launched on its own. Then, to change cores, CPU, etc. configs, I would need to relaunch the application in the Standalone cluster manager.
My workarounds
1) I'm pretty sure the 1st point is possible, since it's possible in Spark local mode. I just don't know how to do that in standalone mode.
2-3) It's possible to hang a listener on the Spark context, like spark.sparkContext().addSparkListener(new SparkListener() {. But it seems SparkListener lacks failure callbacks.
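For what it's worth, a sketch of that listener route: as far as I can tell, onJobEnd does receive a JobResult, and that result is a JobFailed instance when the job fails (the retry wiring is left as a placeholder):
import org.apache.spark.scheduler.JobFailed;
import org.apache.spark.scheduler.SparkListener;
import org.apache.spark.scheduler.SparkListenerJobEnd;

spark.sparkContext().addSparkListener(new SparkListener() {
    @Override
    public void onJobEnd(SparkListenerJobEnd jobEnd) {
        if (jobEnd.jobResult() instanceof JobFailed) {
            // the job failed; trigger retry logic from here
        }
    }
});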
Also, there is a bunch of methods with very poor documentation. I've never used them, but perhaps they could help solve my problem.
spark.sparkContext().dagScheduler().runJob();
spark.sparkContext().runJob()
spark.sparkContext().submitJob()
spark.sparkContext().taskScheduler().submitTasks();
spark.sparkContext().dagScheduler().handleJobCancellation();
spark.sparkContext().statusTracker()
You can use SparkLauncher and control the flow.
import org.apache.spark.launcher.SparkLauncher;

public class MyLauncher {
    public static void main(String[] args) throws Exception {
        Process spark = new SparkLauncher()
                .setAppResource("/my/app.jar")
                .setMainClass("my.spark.app.Main")
                .setMaster("local")
                .setConf(SparkLauncher.DRIVER_MEMORY, "2g")
                .launch();
        spark.waitFor();
    }
}
See API for more details.
Since it creates a Process, you can check the Process status and retry; e.g. try the following:
public boolean isAlive()
If the Process is not alive, start it again; see the API for more details.
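A hedged sketch of such a retry loop (maxRetries is illustrative, not a Spark setting):
int maxRetries = 3;
for (int attempt = 0; attempt < maxRetries; attempt++) {
    Process spark = new SparkLauncher()
            .setAppResource("/my/app.jar")
            .setMainClass("my.spark.app.Main")
            .setMaster("local")
            .launch();
    int exitCode = spark.waitFor(); // blocks until the driver process exits
    if (exitCode == 0) {
        break; // clean exit, stop retrying
    }
    // non-zero exit code: loop around and launch again
}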
Hoping this gives a high-level idea of how to achieve what you mentioned in your question. There could be more ways to do the same thing, but I thought I'd share this approach.
Cheers!
Check your spark.sql.broadcastTimeout and spark.broadcast.blockSize properties and try increasing them.
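For example, a sketch of raising both at session build time (the values are illustrative; if I remember right, the defaults are 300 seconds and 4m respectively):
import org.apache.spark.sql.SparkSession;

SparkSession spark = SparkSession.builder()
        .config("spark.sql.broadcastTimeout", "600") // seconds
        .config("spark.broadcast.blockSize", "8m")
        .getOrCreate();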
I'm quite new to ZK and the concept of event queues. What I'm trying to do is run a long operation in the server and update the UI of the progress in real-time, instead of blocking the UI while the long operation runs. So for example, if there are 3 tasks (this number is not fixed) to do in the long operation, it should update the UI by updating a "log trace" textbox and a progress bar that same number of times.
My code structure looks like:
if (EventQueues.exists("longop")) {
    print("It is busy. Please wait.");
    return; // busy
}
final EventQueue<Event> eq = EventQueues.lookup("longop"); // create a queue
final String[] result = new String[1]; // holder so the inner classes can write to it
// subscribe an async listener to handle the long operation
eq.subscribe(new EventListener<Event>() {
    public void onEvent(Event evt) {
        if ("doLongOp".equals(evt.getName())) {
            // simulate a long operation
            doTask1();
            eq.publish(new Event("printStatus", null, "Task1 completed."));
            doTask2();
            eq.publish(new Event("printStatus", null, "Task2 completed."));
            doTask3();
            eq.publish(new Event("printStatus", null, "Task3 completed."));
            result[0] = "success";
            eq.publish(new Event("endLongOp")); // notify it is done
        }
    }
}, true); // asynchronous
// subscribe a normal listener to show the result in the browser
eq.subscribe(new EventListener<Event>() {
    public void onEvent(Event evt) {
        if ("printStatus".equals(evt.getName())) {
            printToTextbox((String) evt.getData()); // appends value to the log textbox
        }
        if ("endLongOp".equals(evt.getName())) {
            print(result[0]); // show the result in the browser
            EventQueues.remove("longop");
        }
    }
}); // synchronous
eq.publish(new Event("doLongOp")); // kick off the long operation
This didn't work. All the printStatus events happened AFTER the long operation finished. The only thing this fixed is that the UI is no longer blocked while the long operation runs. I was assuming that since the long-operation thread is async, it would still send the events to the queue, and the sync UI thread would be able to handle them as soon as they happened. After several hours of trial and error, and after noticing that server push is NOT used in a desktop-scoped queue, I changed the scope to application and explicitly enabled server push:
EventQueue<Event> eq = EventQueues.lookup("longop", EventQueues.APPLICATION, true);
desktop.enableServerPush(true);
It just worked. I know that ZK CE only has client polling, which is fine for my use case. But why is it that in desktop scope, server push is not used? How can we accomplish such a task if we don't want the queue to be shared application-wide? I want each desktop to have its own event queue.
It might also be worth mentioning that I have the event thread enabled. I tried disabling it, but the result was the same, so it doesn't seem to affect my problem.
Any help is greatly appreciated.
PS: I am using ZK CE 7.0.3
There are many possible solutions for your situation.
Please take a look at this section of the ZK documentation.
You can use piggyback, but when the user doesn't do anything, you also get no updates on the screen.
So I suggest going for echoEvents.
So you have to do task 1, update the screen, and echo onTask2.
In onTask2, do your stuff, update the screen, and echo onTask3.
And for onTask3, do task 3 and update the screen, as sketched below.
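A rough sketch of that echo chain, assuming a win component plus the question's printToTextbox and doTaskN helpers (all of those names are made up, not ZK API). The echo round-trip lets the browser render each update before the next task starts:
win.addEventListener("onTask1", evt -> {
    doTask1();
    printToTextbox("Task1 completed.");     // update the screen first...
    Events.echoEvent("onTask2", win, null); // ...then echo the next step
});
win.addEventListener("onTask2", evt -> {
    doTask2();
    printToTextbox("Task2 completed.");
    Events.echoEvent("onTask3", win, null);
});
win.addEventListener("onTask3", evt -> {
    doTask3();
    printToTextbox("Task3 completed.");
});
Events.echoEvent("onTask1", win, null);     // kick off the chain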
Edit:
The scope doesn't have to be application scope. The application-scoped event queue already has server push built in (and I believe the session scope does too). For the desktop scope you have to do it manually (or take another approach). (Your desktop.enableServerPush isn't needed for application scope.)
If you want a simple way to work with the event queue, look here.
Use EventQueue.subscribe(EventListener, EventListener), which takes the async and the sync EventListener.
The only thing is, in the sync listener you need to call your task 2 with again a sync listener for refreshing the GUI, and start task 3 the same way; a sketch of the overload follows below.
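A minimal sketch of that overload (the two-listener subscribe is real ZK API; the task and textbox helpers are the question's own or made up):
// the first listener runs asynchronously: do the long work here, no UI access;
// the second runs synchronously afterwards, so it is safe to update the UI
eq.subscribe(
        evt -> doTask1(),                          // async worker
        evt -> printToTextbox("Task1 completed.")  // sync callback, UI-safe
);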
The other way is passing the desktop to the async listener so you can enable (and disable) server push there (an async listener never has a reference to the desktop; it runs in a completely new thread), as in the sketch below.
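A hedged sketch of that second route, reusing the queue and printToTextbox helper from the question; Executions.activate/deactivate is the usual way to touch the UI from a foreign thread:
final Desktop desktop = Executions.getCurrent().getDesktop();
eq.subscribe(new EventListener<Event>() {
    public void onEvent(Event evt) throws Exception {
        desktop.enableServerPush(true);
        try {
            doTask1();                    // long work, off the UI thread
            Executions.activate(desktop); // grab the desktop before UI access
            try {
                printToTextbox("Task1 completed.");
            } finally {
                Executions.deactivate(desktop);
            }
        } finally {
            desktop.enableServerPush(false);
        }
    }
}, true); // asynchronous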
I'm trying to execute a delayed DeferredTask in Google App Engine (Java).
So far, here is what I've got.
The task class itself:
import com.google.appengine.api.taskqueue.DeferredTask;

public class TestTask implements DeferredTask {
    @Override
    public void run() {
        System.out.print("test");
    }
}
And the execution:
QueueFactory.getDefaultQueue().add(TaskOptions.Builder.withEtaMillis(10000).payload(new TestTask()));
When I run it on the dev server, the console output shows up right away when the task is added to the queue, and not after 10 seconds as I wanted :(
The dev server typically handles the execution differently. This is detailed in the following section: https://developers.google.com/appengine/docs/java/taskqueue/overview-push#Java_Push_queues_and_the_development_server
So it is likely that some of the parameters you are trying to specify are ignored by the dev server and the task is executed immediately. If you do not want the task to be executed automatically and prefer to invoke it manually in the dev server, there is a setting to be provided to the app server, as detailed in the note above.
I'm not quite sure whether this is more of an Openbravo issue or more of a Quartz issue, but we have some manual processes that run on schedules via Openbravo ProcessRequest objects (OB v2.50MP24), and it seems that the processes are running twice, at exactly the same time. Openbravo extends the Quartz platform for its scheduling. I've tried to resolve this issue on my own by ensuring that my process classes extend this class:
import java.util.List;

import org.openbravo.dal.service.OBDal;
import org.openbravo.model.ad.ui.ProcessRequest;
import org.openbravo.scheduling.ProcessBundle;
import org.openbravo.service.db.DalBaseProcess;

public abstract class RBDDalProcess extends DalBaseProcess {

    @Override
    protected void doExecute(ProcessBundle bundle) throws Exception {
        org.quartz.Scheduler sched = org.openbravo.scheduling.OBScheduler
                .getInstance().getScheduler();
        int runCount = 0;
        synchronized (sched) {
            List<org.quartz.JobExecutionContext> currentlyExecutingJobs =
                    (List<org.quartz.JobExecutionContext>) sched.getCurrentlyExecutingJobs();
            for (org.quartz.JobExecutionContext jec : currentlyExecutingJobs) {
                ProcessRequest processRequest = OBDal.getInstance().get(
                        ProcessRequest.class, jec.getJobDetail().getName());
                if (processRequest == null)
                    continue;
                String processClass = processRequest.getProcess().getJavaClassName();
                if (bundle.getProcessClass().getCanonicalName().equals(processClass)) {
                    runCount++;
                }
            }
        }
        if (runCount > 1) {
            System.out.println("Process " + bundle.getProcessClass().getSimpleName()
                    + " is already running. Cancelling.");
            return;
        }
        doRun(bundle);
    }

    protected abstract void doRun(ProcessBundle bundle);
}
This worked fine when I tested it by requesting the process to run immediately, twice at the same time. One of them cancelled. However, it's not working for the scheduled processes. I have S.o.p.'s set up to log when the processes start, and the logs show each line of the output twice, one right after the other.
I have a sneaking suspicion that it's because the processes are running in two completely different threads that don't know about each other's processes; however, I'm not sure how to verify my suspicion or, if I am correct, what to do about it. I've already verified that there is only one instance of each ProcessRequest object stored in the database.
Has anyone else experienced this, know why they might be running twice, or know what I can do to prevent them from simultaneously running?
The most common reasons for a double Job execution are the following:
EDITED:
Your application is deployed in a clustered environment and you have not configured Quartz to run in a cluster environment.
Your application is deployed more than once. There are many cases where the application is deployed twice, especially on a Tomcat server. As a consequence, the QuartzInitializerListener is invoked twice and the jobs are executed twice. If you use Tomcat and you define contexts explicitly in server.xml, you should turn off automatic application deployment or specify deployIgnore. Having both autoDeploy set to true and a context element in server.xml causes the application to be deployed twice. Set autoDeploy to false or remove the context element from server.xml.
Your application has been redeployed without unscheduling the current processes.
I hope this helps you.
Quartz uses a thread pool for job execution. So, as you suspect, the RBDDalProcess will probably have separate instances in separate threads, and the counter check will fail.
One thing you can do is list the jobs registered in the Scheduler (you can get the Scheduler using the OB API as: OBScheduler.getScheduler()):
// requires: import static org.quartz.impl.matchers.GroupMatcher.groupEquals;
// enumerate each job group
for (String group : sched.getJobGroupNames()) {
    // enumerate each job in the group
    for (JobKey jobKey : sched.getJobKeys(groupEquals(group))) {
        System.out.println("Found job identified by: " + jobKey);
    }
}
If you see the same job added twice, check out org.quartz.spi.JobFactory and the org.quartz.Scheduler.setJobFactory method for controlling job instantiation, along the lines of the sketch below.
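A hedged sketch of a JobFactory that logs each instantiation, using the Quartz 2.x signature (Quartz 1.x omits the Scheduler parameter):
import org.quartz.Job;
import org.quartz.Scheduler;
import org.quartz.SchedulerException;
import org.quartz.spi.JobFactory;
import org.quartz.spi.TriggerFiredBundle;

public class LoggingJobFactory implements JobFactory {
    @Override
    public Job newJob(TriggerFiredBundle bundle, Scheduler scheduler) throws SchedulerException {
        // log which job class is being instantiated, and when
        System.out.println("Instantiating job: " + bundle.getJobDetail().getKey());
        try {
            return bundle.getJobDetail().getJobClass().newInstance();
        } catch (Exception e) {
            throw new SchedulerException("Failed to instantiate job", e);
        }
    }
}
// installed with: sched.setJobFactory(new LoggingJobFactory());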
Also make sure you have only one entry for this process in the 'Report and Process' table in Openbravo.
I have used DalBaseProcess in Openbravo 3.0 and I cannot confirm the behavior you're describing. With this in mind, it would probably be a good idea to check the reported bugs for Openbravo v2.50MP24 and Quartz, or post a thread in the Openbravo Forge forums about your problem.
I have a web application, and I need to run a background process that hits a web service, waits a few seconds (say 30) after getting the response, and then hits the service again. The response data can vary from very small to very large, so I don't want to call the process again until I'm finished processing the data. So it's a recursive call with a time delay. How I intend to do it is:
Add a ContextListener to the web app.
On the contextInitialized() method, call invokeWebService(), i.e. an arbitrary method that hits the web service.
invokeWebService will look like:
invokeWebService()
{
    // make request
    // hit service
    // get response
    // process response
    timeDelayInSeconds(30);
    // recursive call
    invokeWebService();
}
Please suggest whether I am doing it right, or whether I should go with threads or schedulers. Please answer with sample code.
You could use a ScheduledExecutorService, which is part of the standard JDK since 1.5:
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);
Runnable r = new Runnable() {
    @Override
    public void run() {
        invokeWebService();
    }
};
// note: scheduleWithFixedDelay would wait 30s after each run completes,
// which matches the "wait after processing" requirement more closely
scheduler.scheduleAtFixedRate(r, 0, 30, TimeUnit.SECONDS);
It is not recursive but repeated. You have two choices here:
Use a Timer and a TimerTask with scheduleAtFixedRate
Use Quartz with a repeated schedule.
In Quartz, you can create a repeated schedule with this code:
TriggerBuilder.newTrigger()
        .withSchedule(SimpleScheduleBuilder.repeatSecondlyForever(30))
        .build();
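A fuller sketch, assuming a WebServiceJob class you would write yourself that calls invokeWebService() (the class and identity names are illustrative):
import org.quartz.JobBuilder;
import org.quartz.JobDetail;
import org.quartz.Scheduler;
import org.quartz.SimpleScheduleBuilder;
import org.quartz.Trigger;
import org.quartz.TriggerBuilder;
import org.quartz.impl.StdSchedulerFactory;

Scheduler scheduler = new StdSchedulerFactory().getScheduler();

JobDetail job = JobBuilder.newJob(WebServiceJob.class) // hypothetical Job implementation
        .withIdentity("invokeWebService", "polling")
        .build();

Trigger trigger = TriggerBuilder.newTrigger()
        .withSchedule(SimpleScheduleBuilder.repeatSecondlyForever(30))
        .build();

scheduler.scheduleJob(job, trigger);
scheduler.start();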
From what I gather, waiting sort of implies hanging, which I do not think is a good idea. I would recommend using something such as Quartz to run your method at whatever interval you wish.
Quartz is a full-featured, open source job scheduling service that can be integrated with, or used alongside, virtually any Java EE or Java SE application.
Tutorials can be accessed here.
As stated here, you can do something like so:
JobDetail existingJobDetail = sched.getJobDetail(jobName, jobGroup);
if (existingJobDetail != null) {
    List<JobExecutionContext> currentlyExecutingJobs =
            (List<JobExecutionContext>) sched.getCurrentlyExecutingJobs();
    for (JobExecutionContext jec : currentlyExecutingJobs) {
        if (existingJobDetail.equals(jec.getJobDetail())) {
            //String message = jobName + " is already running.";
            //log.info(message);
            //throw new JobExecutionException(message, false);
        }
    }
    //sched.deleteJob(jobName, jobGroup); // if you want to delete the scheduled but not-currently-running job
}