Camunda BPM execution and variable scope misunderstanding - java

I work with the Camunda BPM process engine and think it is important to understand some of its concepts. At the moment I struggle a bit with the concept of process executions and variable scopes.
To understand what happens during a process execution I designed the following demo process and marked the activities inside the same execution with the same color. I could do this because I debugged the execution id inside each activity.
I understand most of it. What surprised me is that an input parameter opens a new execution (Task 1.3). Thanks to meyerdan for clarifying this.
What I do not understand is that "Task 2.2" is inside the same execution as "Task 2.1". A quote from the Camunda documentation about executions:
Internally, the process engine creates two concurrent executions
inside the process instance, one for each concurrent path of
execution.
So I would have expected that Task 2.1 / Task 2.2 and Task 3.1 would each live inside their own execution.
Is anyone able to explain this?
My main motivation to understand this is the impact it has on process variable scopes. I did not figure out so far what the Java API methods
VariableScope#getVariable / VariableScope#setVariable
VariableScope#getVariableLocal / VariableScope#setVariableLocal
really do. I first thought that the "Local" variants refer only to the current execution and the other ones only to the process instance execution - but that seems to be only half the truth. These are getters and setters whose JavaDoc I miss painfully ;-) Bonus points for also explaining this!
Thanks!
You will find the process in a Maven project with an executable JUnit test on GitHub.

Have a look at Variable Scopes and Variable Visibility
A quote from the documentation (Java Object API) about the setVariable method:
Note that this code sets a variable at the highest possible point in
the hierarchy of variable scopes. This means, if the variable is
already present (whether in this execution or any of its parent
scopes), it is updated. If the variable is not yet present, it is
created in the highest scope, i.e. the process instance. If a variable
is supposed to be set exactly on the provided execution, the local
methods can be used.
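The quoted semantics can be sketched inside a JavaDelegate. This is a hypothetical example, not the code from the question's demo process; the class and variable names ("ScopeDemoDelegate", "counter") are illustrative:

```java
import org.camunda.bpm.engine.delegate.DelegateExecution;
import org.camunda.bpm.engine.delegate.JavaDelegate;

// Hypothetical delegate illustrating the scope hierarchy described above.
public class ScopeDemoDelegate implements JavaDelegate {

  @Override
  public void execute(DelegateExecution execution) {
    // Walks up the scope hierarchy: updates "counter" wherever it already
    // exists; if absent everywhere, creates it on the process instance
    // (the highest scope).
    execution.setVariable("counter", 1);

    // Creates or updates "counter" on exactly this execution, shadowing a
    // variable of the same name in any parent scope.
    execution.setVariableLocal("counter", 2);

    // getVariable resolves bottom-up, so the local value wins here.
    Integer seen = (Integer) execution.getVariable("counter");

    // getVariableLocal looks only at this execution's own variables.
    Integer local = (Integer) execution.getVariableLocal("counter");
  }
}
```

This only runs inside a process engine, so treat it as an API sketch rather than a standalone program.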

Related

is there any best approach in java multi thread environment to avoid a class execution in second time?

I have a class which has to be executed from a JBPM process (multi-threaded environment). During the first execution, if any condition is violated, I add a variable to a Map, which means this transaction should be stopped temporarily.
But once the user has resolved the problem from the UI, the same class will be executed again by the same JBPM process. Whatever I updated in the Map during the first execution is not available in this second execution. But this time, I have to process the transaction even though conditional failures are present.
I hope sample code is not required to demonstrate this.
How can I achieve this by skipping the check the second time? Is there a design pattern that supports this?
Any help is highly appreciated.
Thank you.

How to Guarantee that Construction Phase Initializes All Entities' Planning Variables?

Occasionally, if I have my Construction Phase's "Seconds Spent" and "Unimproved Seconds Spent" termination settings set for too short an amount of time, I end up with a few Planning Entities that do not have all of their Planning Variables initialized. This results in my Search Phase throwing exceptions regarding uninitialized Planning Variables (Local Search phase (1) needs to start from an initialized solution...).
This seems to (partially) defeat the purpose of the Construction Phase. I feel like I am missing a caveat somewhere. Maybe I am over-configuring my Construction Phase?
Here is my Construction Phase's configuration code. I am using Java to configure my Solver rather than XML.
TerminationConfig terminationConfig = new TerminationConfig();
terminationConfig.setSecondsSpentLimit(60L);
terminationConfig.setUnimprovedSecondsSpentLimit(30L);
terminationConfig.setBestScoreLimit("0hard/0medium/0soft");

ConstructionHeuristicPhaseConfig phaseConfig = new ConstructionHeuristicPhaseConfig();
phaseConfig.setConstructionHeuristicType(ConstructionHeuristicType.FIRST_FIT);
phaseConfig.setTerminationConfig(terminationConfig);
phaseConfigs.add(phaseConfig); // phaseConfigs: the solver's list of phase configs
Could anyone point me in the right direction? Is there a "correct" way to guarantee that all Planning Variables of all Planning Entities will be initialized by the end of the Construction Phase?
There's no point in terminating the CH before it has finished if you want to run the LS.
Let it finish, and put a termination on the <localSearch> phase instead of the <solver> (the API supports this too, of course) to avoid the LS finishing too early.
There are many ways to make the CH go faster though, see docs.
Alternatively, combining every termination with an AND of a <bestScoreFeasible>true</> termination (= it can only terminate if a feasible solution is found) can also do what you want I believe, even as a global <solver> termination.
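In the question's Java-config style, moving the termination from the Construction Heuristic to the Local Search phase might look like the sketch below. It assumes the standard OptaPlanner config classes and an existing solverConfig; only the limits are taken from the question:

```java
import java.util.Arrays;

import org.optaplanner.core.config.constructionheuristic.ConstructionHeuristicPhaseConfig;
import org.optaplanner.core.config.constructionheuristic.ConstructionHeuristicType;
import org.optaplanner.core.config.localsearch.LocalSearchPhaseConfig;
import org.optaplanner.core.config.solver.termination.TerminationConfig;

// Let the Construction Heuristic run to completion: no termination on it at all.
ConstructionHeuristicPhaseConfig chConfig = new ConstructionHeuristicPhaseConfig();
chConfig.setConstructionHeuristicType(ConstructionHeuristicType.FIRST_FIT);

// Bound only the Local Search phase, so every planning variable is
// initialized before any time limit can kick in.
TerminationConfig lsTermination = new TerminationConfig();
lsTermination.setSecondsSpentLimit(60L);
lsTermination.setUnimprovedSecondsSpentLimit(30L);

LocalSearchPhaseConfig lsConfig = new LocalSearchPhaseConfig();
lsConfig.setTerminationConfig(lsTermination);

// solverConfig is assumed to exist in the surrounding setup code.
solverConfig.setPhaseConfigList(Arrays.asList(chConfig, lsConfig));
```

This is a configuration fragment, not a runnable program; it plugs into whatever SolverConfig the question's setup already builds.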

OptimisticLockingException with Camunda Service Task

We're seeing OptimisticLockingExceptions in a Camunda process with the following Scenario:
The process consists of one UserTask followed by one Gateway and one ServiceTask. The UserTask executes
runtimeService.setVariable(execId, "object", out);
taskService.complete(taskId);
The following ServiceTask uses "object" as an input variable (it does not modify it) and, upon completion, throws said OptimisticLockingException. My problem seems to originate from the fact that taskService.complete() immediately executes the ServiceTask, prior to flushing the variables set in the UserTask.
I've had another, related issue, which occurred when in one UserTask I executed runtimeService.setVariable(Map<String, Boolean>) and tried to access the members of the Map as transition guards in a gateway following that UserTask.
I've found the following article: http://forums.activiti.org/content/urgenterror-updated-another-transaction-concurrently which seems somehow related to my issue. However, I'm not clear whether this is (un)wanted behaviour, nor how I can access a DelegateExecution object from a UserTask.
After a long and cumbersome search, we think we have nailed down two issues with Camunda which (taken together) lead to the Exception from the original question.
Camunda uses equals on serialized objects (represented by byte arrays) to determine whether process variables have to be written back to the database. This even happens when variables are only read and not set. As equals is defined by pointer identity on arrays, a serialized object is never considered "equal" if it has been serialized more than once. We have found that a single runtimeService.setVariable() leads to four DB updates at the time of completeTask() (one for setVariable itself, the other three for various Camunda-internal validation actions). We think this is a bug and will file a bug report with Camunda.
Obviously there are two ways to set variables. One way is to use runtimeService.setVariable(), the other is to use delegateTask/delegateExecution.setVariable(). There is some flaw when using both ways at the same time. While we cannot simplify our setup to a simple unit-test, we have identified several components which have to be involved for the Exception to occur:
2.1 We are using a TaskListener to set up some context variables at the start of tasks; this task listener used runtimeService.setVariable() instead of delegateTask.setVariable(). After we changed that, the Exception vanished.
2.2 We used (and still use) runtimeService.setVariable() during Task-Execution. After we switched to completeTask(Variables) and omitted the runtimeService.setVariable() calls, the Exception vanished as well. However, this isn't a permanent solution as we have to store process variables during task execution.
2.3 The Exception occurred only when process variables were read or written via delegate<X>.getVariable() (either by our code or implicitly in Camunda's JUEL parsing with gateways and service tasks, or in completeTask(HashMap)).
Thanks a lot for all your input.
You could consider using an asynchronous continuation on the service task. This will make sure that the service task is executed inside a new transaction / command context.
Consider reading the camunda documentation on transactions and asynchronous continuations.
The DelegateExecution object is meant for providing service task (JavaDelegate) implementations access to process instance variables. It is not meant to be used from a User Task.
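If the process is built with Camunda's fluent BPMN model API, the suggested asynchronous continuation can be sketched as below. The process and task ids and the MyDelegate class are illustrative, not from the question; in BPMN XML the equivalent is camunda:asyncBefore="true" on the service task:

```java
import org.camunda.bpm.model.bpmn.Bpmn;
import org.camunda.bpm.model.bpmn.BpmnModelInstance;

// Marking the service task asyncBefore makes the engine commit the user
// task's transaction (flushing its variables) first, and then run the
// service task in a fresh transaction / command context.
BpmnModelInstance model = Bpmn.createExecutableProcess("demoProcess")
    .startEvent()
    .userTask("userTask")
    .serviceTask("serviceTask")
      .camundaAsyncBefore()
      .camundaClass(MyDelegate.class)  // MyDelegate: a hypothetical JavaDelegate
    .endEvent()
    .done();
```

The job executor must be enabled for the asynchronous continuation to actually be picked up and executed.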

Java logging across multiple threads

We have a system that uses threading so that it can concurrently handle different bits of functionality in parallel. We would like to find a way to tie all log entries for a particular "transaction" together. Normally, one might use 'threadName' to gather these together, but clearly that fails in a multithreaded situation.
Short of passing a 'transaction key' down through every method call, I can't see a way to tie these together. And passing a key into every single method is just ugly.
Also, we're kind of tied to Java logging, as our system is built on a modified version of it. So, I would be interested in other platforms for examples of what we might try, but switching platforms is highly unlikely.
Does anyone have any suggestions?
Thanks,
Peter
EDIT: Unfortunately, I don't have control over the creation of the threads as that's all handled by a workflow package. Otherwise, the idea of caching the ID once for each thread (on ThreadLocal maybe?) then setting that on the new threads as they are created is a good idea. I may try that anyway.
You could consider creating a globally-accessible Map that maps a Thread's name to its current transaction ID. Upon beginning a new task, generate a GUID for that transaction and have the Thread register itself in the Map. Do the same for any Threads it spawns to perform the same task. Then, when you need to log something, you can simply lookup the transaction ID from the global Map, based on the current Thread's name. (A bit kludgy, but should work)
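A minimal sketch of that globally-accessible registry, assuming a ConcurrentHashMap keyed by thread name (all names here are illustrative):

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Global registry mapping a thread's name to its current transaction ID.
public class TransactionRegistry {
    private static final Map<String, String> TXN_BY_THREAD = new ConcurrentHashMap<>();

    // Called when a thread begins (or joins) a transaction; pass null to
    // generate a fresh GUID.
    public static String register(String txnId) {
        String id = (txnId != null) ? txnId : UUID.randomUUID().toString();
        TXN_BY_THREAD.put(Thread.currentThread().getName(), id);
        return id;
    }

    // Called from the logging code to tag each entry.
    public static String currentTxn() {
        return TXN_BY_THREAD.get(Thread.currentThread().getName());
    }

    // Avoid leaking entries once the work is done (important with thread pools).
    public static void unregister() {
        TXN_BY_THREAD.remove(Thread.currentThread().getName());
    }
}
```

Each spawned worker thread has to call register(parentTxnId) itself, which is the kludgy part the answer concedes.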
This is a perfect example for AspectJ crosscuts. If you know the methods that are being called you can put interceptors on them and bind dynamically.
This article will give you several options http://www.ibm.com/developerworks/java/library/j-logging/
However you mentioned that your transaction spans more than one thread, take a look at how log4j cope with binding additional information to current thread with MDC and NDC classes. It uses ThreadLocal as you were advised before, but interesting thing is how log4j injects data into log messages.
//In the code:
MDC.put("RemoteAddress", req.getRemoteAddr());
//In the configuration file, add the following:
%X{RemoteAddress}
Details:
http://onjava.com/pub/a/onjava/2002/08/07/log4j.html?page=3
http://wiki.apache.org/logging-log4j/NDCvsMDC
How about naming your threads to include the transaction ID? Quick and Dirty, admittedly, but it should work (until you need the thread name for something else or you start reusing threads in a thread pool).
If you are logging, then you must have some kind of logger object. You should have a separate instance in each thread.
Add a method to it called setID(String id).
When it is initialized in your thread, set a unique ID using that method.
Prepend the set ID to each log entry.
A couple people have suggested answers that have the newly spawned thread somehow knowing what the transaction ID is. Unless I'm missing something, in order to get this ID into the newly spawned thread, I would have to pass it all the way down the line into the method that spawns the thread, which I'd rather not do.
I don't think you need to pass it down, but rather the code responsible for handing work to these threads needs to have the transactionID to pass. Wouldn't the work-assigner have this already?
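One way for the work-assigner to hand the ID over without threading it through every method signature is an InheritableThreadLocal: threads spawned while the ID is set inherit it automatically. A sketch (names illustrative; note the value does not propagate to reused pool threads, only to threads constructed after set()):

```java
// Per-thread transaction context, inherited by child threads at creation time.
public class TxnContext {
    private static final InheritableThreadLocal<String> TXN =
            new InheritableThreadLocal<>();

    // Called by the work-assigner before it spawns worker threads.
    public static void set(String txnId) { TXN.set(txnId); }

    // Called anywhere down the call chain, including in spawned threads.
    public static String get() { return TXN.get(); }

    // Clear when the transaction is finished.
    public static void clear() { TXN.remove(); }
}
```

Since the workflow package creates the threads, this only helps if those threads are created after the assigning thread has called set(); pooled threads created earlier will not see the value.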

Detect Who Created a Thread (w. Eclipse)

How can I find out who created a Thread in Java?
Imagine the following: You use ~30 third party JARs in a complex plugin environment. You start it up, run lots of code, do some calculations and finally call shutdown().
This life-cycle usually works fine, except that on every run some (non-daemonic) threads remain dangling. This would be no problem if every shutdown was the last shutdown, I could simply run System.exit() in that case. However, this cycle may run several times and it's producing more garbage every pass.
So, what should I do? I see the threads in Eclipse's Debug View. I see their stack traces, but they don't contain any hint about their origin. No creator's stack trace, no distinguishable class name, nothing.
Does anyone have an idea how to address this problem?
Okay, I was able to solve (sort of) the problem on my own: I put a breakpoint into
Thread.start()
and manually stepped through each invocation. This way I found out pretty quickly that Class.forName() initialized a lot of static code which in turn created these mysterious threads.
While I was able to solve my problem I still think the more general task still remains unaddressed.
I religiously name my threads (using Thread(Runnable, String), say), otherwise they end up with a generic and somewhat useless name. Dumping the threads will highlight what's running and (thus) what created them. This doesn't solve 3rd party thread creation, I appreciate.
EDIT: The JavaSpecialist newsletter addressed this issue recently (Feb 2015) by using a security manager. See here for more details
MORE: A couple of details on using the JavaSpecialist technique: the SecurityManager API includes checkAccess(newThreadBeingCreated), which is called on the thread creator's thread. The new thread already has its name initialized, so in that method you have access to both the creator's thread and the new one, and can log / print etc. When I tried this, the code being monitored started throwing access-protection exceptions; I fixed that by running it under AccessController.doPrivileged(new PrivilegedAction() { ... }), where the run() method called the code being monitored.
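A log-only sketch of that technique follows. The class name is illustrative, and note that SecurityManager is deprecated for removal since Java 17 (and installing one at runtime is disabled by default from Java 18 on), so this is a debugging aid for older runtimes:

```java
import java.security.Permission;

// SecurityManager whose checkAccess runs on the *creating* thread, so both
// the creator's stack trace and the new thread's name are visible.
public class ThreadCreationLogger extends SecurityManager {

    @Override
    public void checkAccess(Thread newThread) {
        System.err.println("Thread '" + newThread.getName()
                + "' created by '" + Thread.currentThread().getName() + "' at:");
        for (StackTraceElement e : Thread.currentThread().getStackTrace()) {
            System.err.println("    at " + e);
        }
    }

    // No-op permission checks: we only want the creation log, not the
    // access-protection exceptions mentioned above.
    @Override
    public void checkPermission(Permission perm) { }

    @Override
    public void checkPermission(Permission perm, Object context) { }
}

// Install as early as possible in startup:
//   System.setSecurityManager(new ThreadCreationLogger());
```

Making checkPermission a no-op sidesteps the doPrivileged workaround, at the cost of disabling all other security checks, which is usually acceptable for a throwaway debugging session.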
When debugging your Eclipse application, you can stop all threads by clicking the org.eclipse.equinox.launcher.Main field in the Debug view.
Then from there, for each thread you can see the stack trace and go up to the thread's run method.
Sometimes this helps and sometimes it doesn't.
As Brian said, it's good practice to name threads because it's the only way to easily identify who created them.
Unfortunately it doesn't. Within Eclipse I see all the blocking threads, but their stack traces only reflect their internal state and (apparently) disclose no information about the location of their creation. Also from a look inside the object (using the Variables view) I was unable to elicit any further hints.
For local debugging purposes, one can attach a debugger to a Java application as early as possible.
Set a non-suspending breakpoint at the end of java.lang.Thread#init(java.lang.ThreadGroup, java.lang.Runnable, java.lang.String, long, java.security.AccessControlContext, boolean) that evaluates and logs the following:
"**" + getName() + "**\n" + Arrays.toString(Thread.currentThread().getStackTrace())
This will output the thread name and how the thread was created (stack trace), which one can then scan through.
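The same expression can be wrapped as a plain helper (class and method names are mine, for illustration), so it can be called from code, e.g. a logging hook, as well as from the debugger:

```java
import java.util.Arrays;

// Wraps the breakpoint expression above as an ordinary method.
public class ThreadOrigin {
    // Returns the current thread's name plus its full stack trace,
    // in the same "**name**\n[frames...]" shape as the debugger expression.
    public static String describeCurrentThread() {
        return "**" + Thread.currentThread().getName() + "**\n"
                + Arrays.toString(Thread.currentThread().getStackTrace());
    }
}
```

Calling it from inside a suspicious code path (or a library callback) pins down who is executing there, which is the same question the breakpoint answers interactively.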
