Let's say we have a business process A. Process A might take more or less time (it's not known in advance).
Normally multiple A processes can run in parallel, but during some operations we need to make sure that one process execution starts only after the previous one has completed.
How can we achieve this in Camunda? I tried to find something like a process dependency (so a process starts only after the previous one completes), but couldn't find anything.
I thought about adding a variable to the process (like depending_process) and checking whether the specified process is done, but maybe there is a better solution.
OK, after some research I found a solution.
At the beginning of every process I check for processes started by the current user:
final DateTime selfOrderDate = (DateTime) execution.getVariable(PROCESS_ORDER_DATE);
List<ProcessInstance> processInstanceList = execution
        .getProcessEngineServices()
        .getRuntimeService()
        .createProcessInstanceQuery()
        .processDefinitionId(execution.getProcessDefinitionId())
        .variableValueEquals(CUSTOMER_ID, execution.getVariable(CUSTOMER_ID))
        .active()
        .list();
int processesOrderedBeforeCurrentCount = 0;
for (ProcessInstance processInstance : processInstanceList) {
    if (processInstance.getId().equals(execution.getId())) {
        continue;
    }
    ExecutionEntity entity = (ExecutionEntity) processInstance;
    DateTime orderDate = (DateTime) entity.getVariable(PROCESS_ORDER_DATE);
    if (selfOrderDate.isAfter(orderDate)) {
        processesOrderedBeforeCurrentCount += 1;
    }
}
Then I save the number of previously started processes as a process variable, and in the next task I check whether it equals 0. If it does, I proceed; if not, I wait 1 s (using Camunda's timer) and check again.
I have a workflow like: startEvent -> task1 (assignee: Tom) -> sequence flow "agree" -> task2 (assignee: Jerry) -> sequence flow "disagree" -> task1
When the flow arrives at task1 again, I want to set the assignee to "Tom" again.
My current idea:
When the flow arrives at task1, I use the complete method; after the complete method I set a local variable pre_task_id (task1's task id) on task2, so that I can use task1's task id to search the act_hi_taskinst table for the assignee (Tom). But taskService.setVariableLocal(taskId, variableName, value) needs task2's task id. How can I get task2's task id after the complete method?
@Test
public void testCompleteTask() {
    Task task = taskService.createTaskQuery().taskAssignee("Tom").singleResult();
    if (task == null) {
        System.out.println("no task!!!");
        return;
    }
    String preTaskId = task.getId();
    HashMap<String, Object> variables = new HashMap<>();
    variables.put("userId", "Jerry");
    variables.put("oper", "saolu");
    taskService.complete(task.getId(), variables);
    // don't know how to get the taskId
    // taskService.setVariableLocal(taskId, "pre_task_id", preTaskId);
}
I am using Activiti 6.
Please let me know if there are any better solutions.
If I understand properly, you need to keep track of which task was executed most recently.
In that case you can use a stack. Whenever you execute a task, push its task id.
Whenever you call pop, it gives you the task id of the most recently executed task.
Just be careful about how you clear the stack as well.
I would recommend always calling pop before calling push.
In the past I have always used a process variable for this purpose. I have never attempted the "push" operation suggested above, but I would be concerned about getting the correct value back if multiple parallel processes are all pushing to and popping from the stack at once. The process variable is simple, and I know it works.
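For illustration, the stack idea above can be sketched in plain Java (the class and method names are illustrative; inside a process engine you would persist the value as a process variable rather than keep it in memory):

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class RecentTaskStack {
    private final Deque<String> stack = new ArrayDeque<>();

    // push the id of the task that just executed
    public void push(String taskId) {
        stack.push(taskId);
    }

    // pop the id of the most recently executed task, or null if none was recorded
    public String pop() {
        return stack.isEmpty() ? null : stack.pop();
    }

    // "pop before push": keep only the latest task id, as recommended above
    public void recordLatest(String taskId) {
        pop();
        push(taskId);
    }

    public static void main(String[] args) {
        RecentTaskStack recent = new RecentTaskStack();
        recent.recordLatest("task1");
        recent.recordLatest("task2");
        System.out.println(recent.pop()); // task2
    }
}
```

Calling pop before push means the stack never holds more than one id, which makes it behave exactly like the single process variable described above.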
I create a job running a Spring bean class with this code:
MethodInvokingJobDetailFactoryBean jobDetail = new MethodInvokingJobDetailFactoryBean();
Class<?> businessClass = Class.forName(task.getBusinessClassType());
jobDetail.setTargetObject(applicationContext.getBean(businessClass));
jobDetail.setTargetMethod(task.getBusinessMethod());
jobDetail.setName(task.getCode());
jobDetail.setGroup(task.getGroup().getCode());
jobDetail.setConcurrent(false);
Object[] argumentArray = builArgumentArray(task.getBusinessMethodParams());
jobDetail.setArguments(argumentArray);
jobDetail.afterPropertiesSet();

CronTrigger trigger = TriggerBuilder.newTrigger()
        .withIdentity(task.getCode() + "_TRIGGER", task.getGroup().getCode() + "_TRIGGER_GROUP")
        .withSchedule(CronScheduleBuilder.cronSchedule(task.getCronExpression()))
        .build();

dataSchedulazione = scheduler.scheduleJob((JobDetail) jobDetail.getObject(), trigger);
scheduler.start();
Sometimes the task stops responding. If I remove the trigger and the task from the scheduler, the job remains in
List ob = scheduler.getCurrentlyExecutingJobs();
The state of the trigger is NONE, but the job is still returned by scheduler.getCurrentlyExecutingJobs().
I have tried to implement InterruptableJob in a class that extends MethodInvokingJobDetailFactoryBean.
But when I use
scheduler.interrupt(jobKey);
it says that InterruptableJob is not implemented. I think this is because the instance that actually gets scheduled comes from
`scheduler.scheduleJob((JobDetail) jobDetail.getObject(), trigger);`
This is the code inside the Quartz scheduler:
job = jec.getJobInstance();
if (job instanceof InterruptableJob) {
    ((InterruptableJob) job).interrupt();
    interrupted = true;
} else {
    throw new UnableToInterruptJobException(
            "Job " + jobDetail.getKey() +
            " can not be interrupted, since it does not implement " +
            InterruptableJob.class.getName());
}
Is there another way to kill a single task?
I am using Quartz 2.1.7 with Java 1.6 and Java 1.8.
TIA
Andrea
There is no magic way to force the JVM to stop executing some piece of code.
You can implement different ways to interrupt a job, but the most appropriate is to implement InterruptableJob.
Implementing this interface is not sufficient, though. You should implement the job in such a way that it actually reacts to interruption requests.
Example
Suppose your job processes 1,000,000 records in a database or a file, and this takes a relatively long time, say 1 hour. One possible implementation: in the interrupt() method you set a flag (a member variable) to true, let's name it isInterruptionRequested. In the main logic that processes the 1,000,000 records you regularly, e.g. every 5 seconds or after every 100 records, check whether isInterruptionRequested is set to true. If it is set, you exit the method that implements the main logic.
It is important not to check the condition too often. Otherwise, depending on the logic, checking whether interruption was requested may consume 80-90% of the CPU, much more than the actual work.
Thus, even when you implement the InterruptableJob interface properly, the job will not necessarily stop immediately. The interrupt is just a hint: "I would like this job to stop when possible." When it stops (if at all) depends on how you implement it.
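Here is a plain-Java sketch of that flag pattern, with no Quartz dependency (in a real job, interrupt() would be the method required by InterruptableJob, and the per-record increment would be your actual processing):

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class InterruptibleBatch {
    // set to true by interrupt(); checked periodically by the main loop
    private final AtomicBoolean isInterruptionRequested = new AtomicBoolean(false);

    // in a real Quartz job this would be InterruptableJob#interrupt()
    public void interrupt() {
        isInterruptionRequested.set(true);
    }

    // Processes up to totalRecords, checking the flag every checkInterval records.
    // Returns how many records were actually processed.
    public int process(int totalRecords, int checkInterval) {
        int processed = 0;
        for (int i = 0; i < totalRecords; i++) {
            if (i % checkInterval == 0 && isInterruptionRequested.get()) {
                break; // stop early: the job reacts to the interruption request
            }
            processed++; // stand-in for processing one record
        }
        return processed;
    }

    public static void main(String[] args) {
        InterruptibleBatch job = new InterruptibleBatch();
        System.out.println(job.process(1000, 100)); // 1000: no interruption requested
        job.interrupt();
        System.out.println(job.process(1000, 100)); // 0: interruption seen at the first check
    }
}
```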
This is basic functionality and I see repeated questions, but unfortunately no clear answer yet.
How do I print/list all the tasks in a given process (finished and unfinished) in the order of execution?
The two solutions I found on the forum are not working as expected:
repositoryService.getBpmnModel().getFlowElements() - does not print in the order of execution; elements are printed in the order of definition.
historyService.createHistoricActivityQuery - does not print all service tasks.
How do I just list all the tasks under a given process?
If by tasks you mean all the elements in the process, then you can use the HistoricActivityInstanceQuery to get information about them.
The code would look something like:
List<HistoricActivityInstance> activityInstances = historyService
        .createHistoricActivityInstanceQuery()
        .processInstanceId(processInstanceId)
        .orderByHistoricActivityInstanceStartTime().asc()
        .list();
In order to see whether a HistoricActivityInstance is finished, check HistoricActivityInstance#getEndTime(): when it is null, the activity is not yet finished; when it is not null, the activity is finished.
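The end-time check can be illustrated with a minimal stand-in class (Activity below is hypothetical, standing in for HistoricActivityInstance just to show the null-check logic):

```java
import java.util.ArrayList;
import java.util.Date;
import java.util.List;

public class ActivityFilter {
    // minimal stand-in for HistoricActivityInstance: only the fields needed here
    public static class Activity {
        final String id;
        final Date endTime; // null while the activity is still running

        public Activity(String id, Date endTime) {
            this.id = id;
            this.endTime = endTime;
        }
    }

    // an activity is finished exactly when its end time has been set
    public static boolean isFinished(Activity a) {
        return a.endTime != null;
    }

    public static List<String> finishedIds(List<Activity> activities) {
        List<String> out = new ArrayList<>();
        for (Activity a : activities) {
            if (isFinished(a)) {
                out.add(a.id);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        List<Activity> history = new ArrayList<>();
        history.add(new Activity("startEvent", new Date()));
        history.add(new Activity("userTask1", null));
        System.out.println(finishedIds(history)); // [startEvent]
    }
}
```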
You can create a TaskQuery:
import org.camunda.bpm.engine.ProcessEngine;
...

@Autowired
private ProcessEngine processEngine;

private List<Task> getAllTaskByProcessId(String processInstanceId) {
    return processEngine.getTaskService()
            .createTaskQuery()
            .processInstanceId(processInstanceId)
            .list();
}
I'm using LuaJ to run user-created Lua scripts in Java. However, running a Lua script that never returns causes the Java thread to freeze. This also renders the thread uninterruptible. I run the Lua script with:
JsePlatform.standardGlobals().loadFile("badscript.lua").call();
badscript.lua contains while true do end.
I'd like to be able to automatically terminate scripts which are stuck in unyielding loops and also allow users to manually terminate their Lua scripts while they are running. I've read about debug.sethook and pcall, though I'm not sure how I'd properly use them for my purposes. I've also heard that sandboxing is a better alternative, though that's a bit out of my reach.
This question might also be extended to Java threads alone. I've not found any definitive information on interrupting Java threads stuck in a while (true);.
The online Lua demo was very promising, but it seems the detection and termination of "bad" scripts is done in the CGI script and not Lua. Would I be able to use Java to call a CGI script which in turn calls the Lua script? I'm not sure that would allow users to manually terminate their scripts, though. I lost the link for the Lua demo source code but I have it on hand. This is the magic line:
tee -a $LOG | (ulimit -t 1 ; $LUA demo.lua 2>&1 | head -c 8k)
Can someone point me in the right direction?
Some sources:
Embedded Lua - timing out rogue scripts (e.g. infinite loop) - an example anyone?
Prevent Lua infinite loop
How to interrupt the Thread when it is inside some loop doing long task?
Killing thread after some specified time limit in Java
I struggled with the same issue and after some digging through the debug library's implementation, I created a solution similar to the one proposed by David Lewis, but did so by providing my own DebugLibrary:
package org.luaj.vm2.lib;

import org.luaj.vm2.Varargs;

public class CustomDebugLib extends DebugLib {
    // volatile so a change made from another thread is seen by the script thread
    public volatile boolean interrupted = false;

    @Override
    public void onInstruction(int pc, Varargs v, int top) {
        if (interrupted) {
            throw new ScriptInterruptException();
        }
        super.onInstruction(pc, v, top);
    }

    public static class ScriptInterruptException extends RuntimeException {}
}
Just execute your script from inside a new thread and set interrupted to true to stop the execution. The exception will be encapsulated as the cause of a LuaError when thrown.
There are problems, but this goes a long way towards answering your question.
The following proof-of-concept demonstrates a basic level of sandboxing and throttling of arbitrary user code. It runs ~250 instructions of poorly crafted 'user input' and then discards the coroutine. You could use a mechanism like the one in this answer to query Java and conditionally yield inside a hook function, instead of yielding every time.
SandboxTest.java:
public static void main(String[] args) {
    Globals globals = JsePlatform.debugGlobals();
    LuaValue chunk = globals.loadfile("res/test.lua");
    chunk.call();
}
res/test.lua:
function sandbox(fn)
    -- read script and set the environment
    f = loadfile(fn, "t")
    debug.setupvalue(f, 1, {print = print})
    -- create a coroutine and have it yield every 50 instructions
    local co = coroutine.create(f)
    debug.sethook(co, coroutine.yield, "", 50)
    -- demonstrate stepped execution, 5 'ticks'
    for i = 1, 5 do
        print("tick")
        coroutine.resume(co)
    end
end

sandbox("res/badfile.lua")
res/badfile.lua:
while 1 do
    print("", "badfile")
end
Unfortunately, while the control flow works as intended, something in the way the 'abandoned' coroutine should get garbage collected is not working correctly. The corresponding LuaThread in Java hangs around forever in a wait loop, keeping the process alive. Details here:
How can I abandon a LuaJ coroutine LuaThread?
I've never used LuaJ before, but could you not put your one line
JsePlatform.standardGlobals().loadFile("badscript.lua").call();
into a new thread of its own, which you can then terminate from the main thread?
This would require some sort of supervisor thread (class): you pass any started scripts to it, and it supervises them and eventually terminates those that do not terminate on their own.
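A minimal sketch of such a supervisor in plain Java (no LuaJ involved; note it can only stop tasks that react to thread interruption, which an unyielding script loop may not):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class ScriptSupervisor {
    // Runs task on a worker thread, cancelling it if it exceeds timeoutMillis.
    // Returns true if the task finished on its own, false if it was cancelled or failed.
    public static boolean runWithTimeout(Runnable task, long timeoutMillis) {
        ExecutorService exec = Executors.newSingleThreadExecutor();
        try {
            Future<?> future = exec.submit(task);
            try {
                future.get(timeoutMillis, TimeUnit.MILLISECONDS);
                return true;
            } catch (TimeoutException e) {
                // interrupts the worker thread; only effective if the task checks its interrupt flag
                future.cancel(true);
                return false;
            } catch (Exception e) {
                return false;
            }
        } finally {
            exec.shutdownNow();
        }
    }

    public static void main(String[] args) {
        // a cooperative "script": spins until its thread is interrupted
        Runnable spinner = () -> {
            while (!Thread.currentThread().isInterrupted()) { /* busy loop */ }
        };
        System.out.println(runWithTimeout(spinner, 200)); // false: had to be cancelled
        System.out.println(runWithTimeout(() -> {}, 1000)); // true: finished on its own
    }
}
```

As the edit note below explains, this approach alone is not enough for LuaJ, because a raw `while true do end` never checks the interrupt flag.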
EDIT: I've not found any way to safely terminate LuaJ's threads without modifying LuaJ itself. The following was what I came up with, though it doesn't work with LuaJ. However, it can be easily modified to do its job in pure Lua. I may be switching to a Python binding for Java since LuaJ threading is so problematic.
--- I came up with the following, but it doesn't work with LuaJ ---
Here is a possible solution. I register a hook with debug.sethook that gets triggered on "count" events (these events occur even in a while true do end). I also pass a custom "ScriptState" Java object I created which contains a boolean flag indicating whether the script should terminate or not. The Java object is queried in the Lua hook which will throw an error to close the script if the flag is set (edit: throwing an error doesn't actually terminate the script). The terminate flag may also be set from inside the Lua script.
If you wish to automatically terminate unyielding infinite loops, it's straightforward enough to implement a timer system which records the last time a call was made to the ScriptState, then automatically terminate the script if sufficient time passes without an API call (edit: this only works if the thread can be interrupted). If you want to kill infinite loops but not interrupt certain blocking operations, you can adjust the ScriptState object to include other state information that allows you to temporarily pause auto-termination, etc.
Here is my interpreter.lua which can be used to call another script and interrupt it if/when necessary. It makes calls to Java methods so it will not run without LuaJ (or some other Lua-Java library) unless it's modified (edit: again, it can be easily modified to work in pure Lua).
function hook_line(e)
    if jthread:getDone() then
        -- I saw someone else use error(), but an infinite loop still seems to evade it.
        -- os.exit() seems to take care of it well.
        os.exit()
    end
end

function inithook()
    -- the hook will run every 100 million instructions.
    -- the time it takes for 100 million instructions to occur
    -- depends on computer speed and the calling environment
    debug.sethook(hook_line, "", 1e8)
    local ret = dofile(jLuaScript)
    debug.sethook()
    return ret
end

args = { ... }
if jthread == nil then
    error("jthread object is nil. Please set it in the Java environment.", 2)
elseif jLuaScript == nil then
    error("jLuaScript not set. Please set it in the Java environment.", 2)
else
    local x, y = xpcall(inithook, debug.traceback)
end
Here's the ScriptState class that stores the flag and a main() to demonstrate:
import java.util.concurrent.atomic.AtomicBoolean;

import org.luaj.vm2.Globals;
import org.luaj.vm2.lib.jse.CoerceJavaToLua;
import org.luaj.vm2.lib.jse.JsePlatform;

public class ScriptState {
    // terminate flag; stays false until termination is requested
    private final AtomicBoolean isDone = new AtomicBoolean(false);

    public boolean getDone() { return isDone.get(); }

    public void setDone(boolean v) { isDone.set(v); }

    public static void main(String[] args) {
        Thread t = new Thread() {
            public void run() {
                System.out.println("J: Lua script started.");
                ScriptState s = new ScriptState();
                Globals g = JsePlatform.debugGlobals();
                g.set("jLuaScript", "res/main.lua");
                g.set("jthread", CoerceJavaToLua.coerce(s));
                try {
                    g.loadFile("res/_interpreter.lua").call();
                } catch (Exception e) {
                    System.err.println("There was a Lua error!");
                    e.printStackTrace();
                }
            }
        };
        t.start();
        try {
            t.join();
        } catch (Exception e) {
            System.err.println("Error waiting for thread");
        }
        System.out.println("J: End main");
    }
}
res/main.lua contains the target Lua code to be run. Use environment variables or parameters to pass additional information to the script as usual. Remember to use JsePlatform.debugGlobals() instead of JsePlatform.standardGlobals() if you want to use the debug library in Lua.
EDIT: I just noticed that os.exit() not only terminates the Lua script but also the calling process. It seems to be the equivalent of System.exit(). error() will throw an error but will not cause the Lua script to terminate. I'm trying to find a solution for this now.
Thanks to @Seldon for suggesting the custom DebugLib. I implemented a simplified version of it by just checking, before every instruction, whether a predefined amount of time has elapsed. This is of course not super accurate, because some time passes between class creation and script execution, but it requires no separate threads.
class DebugLibWithTimeout(
timeout: Duration,
) : DebugLib() {
private val timeoutOn = Instant.now() + timeout
override fun onInstruction(pc: Int, v: Varargs, top: Int) {
val timeoutElapsed = Instant.now() > timeoutOn
if (timeoutElapsed)
throw Exception("Timeout")
super.onInstruction(pc, v, top)
}
}
Important note: if you sandbox an untrusted script by calling the load function on Lua code and passing a separate environment to it, this will not work. onInstruction() seems to be called only if the function's environment is a reference to _G. I dealt with that by stripping everything from _G and then adding whitelisted items back.
-- whitelisted items
local sandbox_globals = {
    print = print
}

local original_globals = {}
for key, value in pairs(_G) do
    original_globals[key] = value
end

local sandbox_env = _G

-- Remove everything from _G
for key, _ in pairs(sandbox_env) do
    sandbox_env[key] = nil
end

-- Add whitelisted items back.
-- The global pairs function cannot be used now.
for key, value in original_globals.pairs(sandbox_globals) do
    sandbox_env[key] = value
end

local function run_user_script(script)
    local script_function, message = original_globals.load(script, nil, 't', sandbox_env)
    if not script_function then
        return false, message
    end
    return pcall(script_function)
end
I have multiple EBS-backed EC2 instances running and I want to be able to take a snapshot of the EBS volume behind one of them, create a new EBS volume from that snapshot, and then mount that new EBS volume onto another as an additional drive. I know how to do this via the AWS web console, but I would like to automate the process by using the AWS Java API.
If I simply call the following commands one after another:
CreateSnapshotResult snapRes
        = ec2.createSnapshot(new CreateSnapshotRequest(oldVolumeID, "Test snapshot"));
Snapshot snap = snapRes.getSnapshot();

CreateVolumeResult volRes
        = ec2.createVolume(new CreateVolumeRequest(snap.getSnapshotId(), aZone));
String newVolumeID = volRes.getVolume().getVolumeId();

AttachVolumeResult attachRes
        = ec2.attachVolume(new AttachVolumeRequest(newVolumeID, instanceID, "xvdg"));
I get the following error:
Caught Exception: Snapshot 'snap-8e822cfd' is not 'completed'.
Reponse Status Code: 400
Error Code: IncorrectState
Request ID: 40bc6bad-43e0-49e6-a89a-0489744d24e6
To get around this, I obviously need to wait until the snapshot is completed before I create the new EBS volume from the snapshot. According to the Amazon docs, the possible values of Snapshot.getState() are "pending, completed, or error," so I decided to check in with AWS to see if the snapshot is still in the "pending" state. I wrote the following code, but it has not worked:
CreateSnapshotResult snapRes
        = ec2.createSnapshot(new CreateSnapshotRequest(oldVolumeID, "Test snapshot"));
Snapshot snap = snapRes.getSnapshot();
System.out.println("Snapshot request sent.");
System.out.println("Waiting for snapshot to be created");
String snapState = snap.getState();
System.out.println("snapState is " + snapState);

// Wait for the snapshot to be created
while (snapState.equals("pending")) {
    Thread.sleep(1000);
    System.out.print(".");
    snapState = snapRes.getSnapshot().getState();
}
System.out.println("Done.");
When I run this, I get the following output:
Snapshot request sent.
Waiting for snapshot to be created
snapState is pending
.............................................
Where the dots continue to be printed until I kill the program. In the AWS Web Console, I can see that the snapshot has been created (it now has a green circle marking it as "completed"), but somehow my program has not gotten the message.
When I replace the while loop with a simple wait for a second (insert the line Thread.sleep(1000) after Snapshot snap = snapRes.getSnapshot(); in the first code snippet), the program will often create a new EBS volume without complaint (it then dies when I try to attach the volume to the new instance). Sometimes, however, I will get the IncorrectState error even after waiting for a second. I assume this means that there is some variance in the amount of time it takes to create a snapshot (even of the same EBS volume), and that one second is enough to account for some but not all of the possible delay times.
I could just increase the hard-coded delay to something sure to be longer than the expected time, but that approach has many faults (it waits unnecessarily for most of the times I will use it, it is still not guaranteed to be long enough, and it won't translate well into a solution for the second step, mounting the EBS volume onto the instance).
I would really like to be able to check in with AWS at regular intervals, check to see if the state of the snapshot has changed, and then proceed once it has. What am I doing wrong and how should I fix my code to allow my program to dynamically determine when the snapshot has been fully created?
EDIT: I've tried using getProgress() rather than getState() as per the suggestion. My changed code looks like this:
String snapProgress = snap.getProgress();
System.out.println("snapProgress is " + snapProgress);

// Wait for the snapshot to be created
while (!snapProgress.equals("100%")) {
    Thread.sleep(1000);
    System.out.print(".");
    snapProgress = snapRes.getSnapshot().getProgress();
}
System.out.println("Done.");
I get the same output as I did when using getState(). I think my problem is that the snapshot object that my code references is not being updated correctly. Is there a better way to refresh/update that object than simply calling its methods repeatedly? My suspicion is that I'm running up against some sort of issue with the way that the API handles requests.
Solved it. I think the problem was that the Snapshot.getState() call doesn't actually make a new call to AWS, but keeps returning the state of the object at the time it was created (which would always be pending).
I fixed the problem by using the describeSnapshots() method:
String snapState = snap.getState();
System.out.println("snapState is " + snapState);
System.out.print("Waiting for snapshot to be created");

// Wait for the snapshot to be created
while (snapState.equals("pending")) {
    Thread.sleep(500);
    System.out.print(".");
    DescribeSnapshotsResult describeSnapRes
            = ec2.describeSnapshots(new DescribeSnapshotsRequest().withSnapshotIds(snap.getSnapshotId()));
    snapState = describeSnapRes.getSnapshots().get(0).getState();
}
System.out.println("\nDone.");
This makes a proper call to AWS every time, and it works.
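To guard against a snapshot that never leaves "pending" (for example, one that ends up in the "error" state), the same loop can be bounded. This is a generic sketch; in real code stateSource would wrap the describeSnapshots call:

```java
import java.util.function.Supplier;

public class StatePoller {
    // Polls stateSource until it returns targetState, at most maxAttempts times,
    // sleeping sleepMillis between polls. Returns true if the target state was seen.
    public static boolean waitForState(Supplier<String> stateSource, String targetState,
                                       int maxAttempts, long sleepMillis) {
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            if (targetState.equals(stateSource.get())) {
                return true;
            }
            try {
                Thread.sleep(sleepMillis);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false; // treat interruption as giving up
            }
        }
        return false;
    }

    public static void main(String[] args) {
        // simulated snapshot that completes on the third poll
        int[] calls = {0};
        Supplier<String> fake = () -> ++calls[0] < 3 ? "pending" : "completed";
        System.out.println(waitForState(fake, "completed", 10, 10)); // true
    }
}
```

Bounding the attempts means a snapshot stuck in "error" fails fast instead of spinning forever, and the caller can then inspect the state and report a useful error.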
Instead of getState(), try using the getProgress() method. If it returns blank, your EBS snapshot is not ready yet; it returns a string percentage ("100%" when your snapshot is ready). Hopefully that does the trick. Let me know if it works.