I'm looking for a solution to query completed tasks in Activiti by filtering on the completion date. Because finished task entries are moved into the act_hi_taskinst table by the BPMN engine, I would expect the required filters to be in the HistoricTaskInstanceQuery class. However, there are no startedAfter/startedBefore or finishedAfter/finishedBefore methods like in the HistoricProcessInstanceQuery. The table has the start_time_ and end_time_ columns, so there is no reason why this kind of query should not be possible.
Is there another way to filter by these properties, or is the only way currently to query the act_hi_taskinst table directly, bypassing the Activiti engine?
Activiti provides a native query API, so there is no need to query act_hi_taskinst directly.
Your query may look like this one:
NativeHistoricTaskInstanceQuery taskQuery = historyService.createNativeHistoricTaskInstanceQuery();
taskQuery.sql("SELECT * FROM " + managementService.getTableName(HistoricTaskInstance.class)
        + " WHERE start_time_ = #{startTime} AND end_time_ = #{endTime}");
taskQuery.parameter("startTime", startTime).parameter("endTime", endTime);
List<HistoricTaskInstance> tasks = taskQuery.list();
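If you need a range rather than an exact match (the finishedAfter/finishedBefore semantics the question asks for), the same native query works with comparison operators. A minimal sketch, assuming two java.util.Date bounds named finishedAfter and finishedBefore:
// Hypothetical range filter: tasks whose completion date falls between the two bounds.
taskQuery.sql("SELECT * FROM " + managementService.getTableName(HistoricTaskInstance.class)
        + " WHERE end_time_ >= #{finishedAfter} AND end_time_ <= #{finishedBefore}");
taskQuery.parameter("finishedAfter", finishedAfter).parameter("finishedBefore", finishedBefore);
List<HistoricTaskInstance> finishedTasks = taskQuery.list();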
I was trying to check if there is any running instance of the job.
Set<JobExecution> jobExecutions = jobExplorer.findRunningJobExecutions(job.getName());
But the above code is not working when we have older executions that didn't finish correctly.
In that case the size of jobExecutions is more than 1.
I looked into the Spring Batch code to see which executions this method fetches, and below is the query I found in the source code.
private static final String GET_RUNNING_EXECUTIONS = "SELECT E.JOB_EXECUTION_ID, E.START_TIME, E.END_TIME, E.STATUS, E.EXIT_CODE, E.EXIT_MESSAGE, E.CREATE_TIME, E.LAST_UPDATED, E.VERSION, "
+ "E.JOB_INSTANCE_ID, E.JOB_CONFIGURATION_LOCATION from %PREFIX%JOB_EXECUTION E, %PREFIX%JOB_INSTANCE I where E.JOB_INSTANCE_ID=I.JOB_INSTANCE_ID and I.JOB_NAME=? and E.END_TIME is NULL order by E.JOB_EXECUTION_ID desc";
As you can see in the query above, there is only one criterion: END_TIME IS NULL on the job execution table. How an execution actually finished doesn't matter; the method returns all executions where END_TIME was never populated.
You can simply filter the returned executions in Java according to your business need, for example:
Set<JobExecution> completedJobExecutions = jobExecutions.stream()
        .filter(jobExecution -> ExitStatus.COMPLETED.equals(jobExecution.getExitStatus()))
        .collect(Collectors.toSet());
You can modify the filter predicate as per your need.
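If the goal is to keep only executions that are genuinely still in flight (and drop the stale ones left behind by a crashed JVM), one option is to filter on BatchStatus instead of ExitStatus. A minimal sketch, reusing the jobExecutions set from above:
// Keep executions whose batch status indicates they are actually running.
// Stale executions from a killed JVM typically stay in STARTED forever, so you
// may additionally want to compare getLastUpdated() against a staleness cutoff.
Set<JobExecution> activeExecutions = jobExecutions.stream()
        .filter(e -> e.getStatus() == BatchStatus.STARTING || e.getStatus() == BatchStatus.STARTED)
        .collect(Collectors.toSet());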
I am currently implementing a blacklist feature for my application. For that I want to use an event-based data table, so I can also track when an item was blocked and by whom.
To give you a bit of context, this is what the table looks like:
id|object_id |object_type|change_time |change_type|
--|----------|-----------|-------------------|-----------|
0|1234567890|ITEM |2019-04-29 15:12:42|BLACKLISTED|
1|654321 |MATERIAL |2019-04-29 15:14:19|BLACKLISTED|
2|654321 |MATERIAL |2019-04-29 15:14:58|CLEARED |
As I am using Spring and Spring Data JPA, it is quite easy to get the current state of a single item by querying for the first result ordered by time.
@Repository
public interface ItemFilterRepository extends JpaRepository<ItemFilterDpo, Integer> {
    ItemFilterDpo findFirstByObjectIdAndObjectTypeOrderByChangeTimeDesc(String objectId, ItemFilterObjectTypeDpo type);
}
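For a quick state check of a single item, that derived query can be used like this (a hypothetical snippet; getChangeType() and the ItemFilterChangeTypeDpo enum are assumptions about the entity):
// Latest event wins: the item is blocked iff its most recent event is BLACKLISTED.
ItemFilterDpo latest = itemFilterRepository
        .findFirstByObjectIdAndObjectTypeOrderByChangeTimeDesc("654321", ItemFilterObjectTypeDpo.MATERIAL);
boolean blocked = latest != null && latest.getChangeType() == ItemFilterChangeTypeDpo.BLACKLISTED;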
However, I can't find a nice solution for listing all items that are currently blocked.
So I had a look here on Stack Overflow and found an answer using a subquery in the SQL (SQL Query for current state of all entities in event table).
select
object_id, object_type, change_time, change_type as last_modified
from
item_filter ife
where
ife.change_time = (
select max(ife2.change_time)
from item_filter ife2
where ife.object_id=ife2.object_id
)
That gives me the following result, which I can filter for BLACKLISTED afterwards:
object_id |object_type|change_time |last_modified|
----------|-----------|-------------------|-------------|
1234567890|ITEM |2019-04-29 15:12:42|BLACKLISTED |
654321 |MATERIAL |2019-04-29 15:14:58|CLEARED |
To use that with Spring Data, my first approach would be to create a view and query that.
I'd really like to know whether there is a better approach using Spring Data to query the current state of all objects in an event data table.
If another framework suits my problem better, I am happy to hear about it.
Edit:
Using DISTINCT ON feels a bit better; however, this doesn't solve my problem with Spring Data.
select distinct on (object_id, object_type)
object_id, object_type, change_time, change_type as last_modified
from
item_filter bl
order by
object_id, object_type, change_time DESC;
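For what it's worth, one way to keep the whole thing inside Spring Data without creating a view is a native query on the repository. A minimal sketch, assuming the ItemFilterDpo entity and item_filter table from above, with the BLACKLISTED filter folded in:
import java.util.List;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Query;
import org.springframework.stereotype.Repository;

@Repository
public interface ItemFilterRepository extends JpaRepository<ItemFilterDpo, Integer> {

    // Latest event per (object_id, object_type), kept only when it is BLACKLISTED.
    @Query(value = "select * from item_filter ife"
            + " where ife.change_time = ("
            + "   select max(ife2.change_time) from item_filter ife2"
            + "   where ife.object_id = ife2.object_id and ife.object_type = ife2.object_type)"
            + " and ife.change_type = 'BLACKLISTED'", nativeQuery = true)
    List<ItemFilterDpo> findAllCurrentlyBlacklisted();
}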
I have a job in Pentaho. The job has many sub-jobs and many transformations. Most of the transformations write to a table. I would like to get stat information like the following:
Table1 Finished processing (I=0, O=0, R=86400, W=86400, U=0, E=0)
Table2 Finished processing (I=0, O=0, R=86400, W=86400, U=0, E=0)
Table3 Finished processing (I=0, O=0, R=86400, W=86400, U=0, E=0)
My code is below. With this code, I only get the result of the last transformation. For example, if I run 40 transformations, my result is just the 40th transformation's result, but I would like to see all 40 transformation results.
KettleEnvironment.init();
JobMeta jobMeta = new JobMeta("Job.kjb", null);
Job job = new Job(null, jobMeta);
job.start();
job.waitUntilFinished();
Result result = job.getResult();
System.out.println("Job result: " + result.getLogText());
Use the logging system. On each transformation of interest, right-click anywhere, select Settings, then Logging, and set up the data you want to collect stats on (for example, in front of the Output entry, select the step that writes the data to the table you want to monitor). I suggest you start with the defaults.
After that, press the SQL button, and Pentaho Data Integration will create a table in a database with the relevant columns. Each time you (or anyone using the same repository) run the transformation, it will put a row in that table. After that, just SELECT * FROM TRANSFORMATION_LOG.
In the last Pentaho Meetup, I explained why you should do that at the transformation level and at the job level (although you can automate this if you know how to navigate a repository). You'll also find a pointer to a GitHub repo with a JSP you can copy/paste into your Pentaho BA server's WEB-INF so that you get exactly what you are after in the web server.
Do not hesitate to ask for more info or provide feedback.
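If you would rather collect the per-transformation results in Java than in a logging table: the job's overall Result only reflects the last entry, but the job tracker keeps one result per executed job entry. A minimal sketch, assuming Kettle's JobTracker API and the job object from the question (top-level entries only; trackers can be nested for sub-jobs):
// Each child tracker corresponds to one executed job entry (e.g. a transformation).
JobTracker tracker = job.getJobTracker();
for (int i = 0; i < tracker.nrJobTrackers(); i++) {
    JobEntryResult entryResult = tracker.getJobTracker(i).getJobEntryResult();
    if (entryResult != null && entryResult.getResult() != null) {
        Result r = entryResult.getResult();
        System.out.println(entryResult.getJobEntryName()
                + " (R=" + r.getNrLinesRead() + ", W=" + r.getNrLinesWritten()
                + ", E=" + r.getNrErrors() + ")");
    }
}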
The TL;DR is that I am not able to delete a row previously created with an upsert using Java.
Basically I have a table like this:
CREATE TABLE transactions (
    key text PRIMARY KEY,
    created_at timestamp
);
Then I execute:
String sql = "update transactions set created_at = toTimestamp(now()) where key = 'test' if created_at = null";
session.execute(sql);
As expected the row is created:
cqlsh:thingleme> SELECT * FROM transactions ;
key | created_at
------+---------------------------------
test | 2018-01-30 16:35:16.663000+0000
But (this is what is making me crazy) if I execute:
sql = "delete from transactions where key = 'test'";
ResultSet resultSet = session.execute(sql);
Nothing happens. I mean: no exception is thrown and the row is still there!
Some other weird stuff:
if I replace the upsert with a plain insert, then the delete works
if I run the CQL statements (update and delete) directly in cqlsh, it works
If I run this code against an EmbeddedCassandraService, it works (this is very bad, because my integration tests are just green!)
My environment:
cassandra: 3.11.1
datastax java driver: 3.4.0
docker image: cassandra:3.11.1
Any idea/suggestion on how to tackle this problem is really appreciated ;-)
I think the issue you are encountering might be explained by the mixing of lightweight transactions (LWTs) (update transactions set created_at = toTimestamp(now()) where key = 'test' if created_at = null) and non-LWTs (delete from transactions where key = 'test').
Cassandra uses timestamps to determine which mutations (deletes, updates) were applied most recently. When using LWTs, timestamps are assigned differently than when not using LWTs:
Lightweight transactions will block other lightweight transactions from occurring, but will not stop normal read and write operations from occurring. Lightweight transactions use a timestamping mechanism different than for normal operations and mixing LWTs and normal operations can result in errors. If lightweight transactions are used to write to a row within a partition, only lightweight transactions for both read and write operations should be used.
Source: How do I accomplish lightweight transactions with linearizable consistency?
Further complicating things is that by default the Java driver uses client timestamps, meaning the write timestamp is determined by the client rather than the coordinating Cassandra node. However, when you use LWTs, the client timestamp is bypassed. In your case, unless you disable client timestamps, your non-LWT queries use client timestamps, whereas your LWT queries use a timestamp assigned by the Paxos logic in Cassandra. In any case, even if the driver weren't assigning client timestamps this could still be a problem, because the timestamp assignment logic also differs on the C* side for LWT and non-LWT operations.
To fix this, you could alter your delete statement to include IF EXISTS, i.e.:
delete from transactions where key = 'test' if exists
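Executed through the Java driver, it is the same one-line change; wasApplied() on the returned ResultSet tells you whether the conditional delete actually removed a row:
// LWT delete: goes through the same Paxos timestamping as the LWT update.
ResultSet resultSet = session.execute("delete from transactions where key = 'test' if exists");
System.out.println(resultSet.wasApplied());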
A similar issue from the Java driver mailing list
Is it possible to get all process or task variables using TaskService:
processEngine.getTaskService.createTaskQuery().list();
I know it is possible to get variables via
processEngine.getTaskService().getVariable()
or
processEngine.getRuntimeService().getVariable()
but each of the operations above goes to the database. If I have a list of 100 tasks, I'll make 100 queries to the DB. I don't want to use this approach.
Is there any other way to get task or process related variables?
Unfortunately, there is no way to do that via the "official" query API! However, what you could do is write a custom MyBatis query as described here:
https://app.camunda.com/confluence/display/foxUserGuide/Performance+Tuning+with+custom+Queries
(Note: everything described in the article also works for bare Activiti; you do not need the fox engine for that!)
This way you could write a query which selects tasks along with the variables in one step. At my company we used this solution as we had the exact same performance problem.
A drawback of this solution is that custom queries need to be maintained. For instance, if you upgrade your Activiti version, you will need to ensure that your custom query still fits the database schema (e.g., via integration tests).
If it is not possible to use the API, as elsvene says, you can query the database yourself. Activiti has several tables in the database.
You have act_ru_variable, where the currently running processes store their variables. For already finished processes you have act_hi_varinst. You can probably find a detailed explanation of what each table contains in the Activiti user guide.
So you just need to make queries like
SELECT *
FROM act_ru_variable
WHERE *Something*
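For example, reading all variables of one process instance straight from act_ru_variable over JDBC. A sketch only: dataSource and processInstanceId are assumed to exist, and TEXT_ only covers string-like variable types (others live in LONG_, DOUBLE_ or BYTEARRAY_ID_):
// Hypothetical helper: print variable names and text values for one process instance.
try (Connection con = dataSource.getConnection();
     PreparedStatement ps = con.prepareStatement(
             "SELECT NAME_, TEXT_ FROM ACT_RU_VARIABLE WHERE PROC_INST_ID_ = ?")) {
    ps.setString(1, processInstanceId);
    try (ResultSet rs = ps.executeQuery()) {
        while (rs.next()) {
            System.out.println(rs.getString("NAME_") + " = " + rs.getString("TEXT_"));
        }
    }
}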
The following test sends a value object (Person) to a process, which just adds a few tracking infos for demonstration.
I had the same problem: I needed to get the value object back after executing the service, to do some validation in my test.
The following piece of code shows the execution and the gathering of the task variables after the execution has finished.
@Test
public void justATest() {
    Map<String, Object> inVariables = new HashMap<String, Object>();
    Person person = new Person();
    person.setName("Jens");
    inVariables.put("person", person);
    ProcessInstance processInstance = runtimeService.startProcessInstanceByKey("event01", inVariables);
    String processDefinitionId = processInstance.getProcessDefinitionId();
    String id = processInstance.getId();
    System.out.println("id " + id + " " + processDefinitionId);

    List<HistoricVariableInstance> outVariables =
            historyService.createHistoricVariableInstanceQuery().processInstanceId(id).list();
    for (HistoricVariableInstance historicVariableInstance : outVariables) {
        String variableName = historicVariableInstance.getVariableName();
        System.out.println(variableName);
        Person person1 = (Person) historicVariableInstance.getValue();
        System.out.println(person1.toString());
    }
}