I am new to Spring Batch. I have an issue where my write skip count is recorded as the entire chunk count, not just the invalid records in the chunk.
For example, I am reading 500 records with a chunk size of 100 records per chunk.
If the first chunk has 2 invalid records, then all records after the first invalid record are also marked invalid with the same 'invalid Exception', even though they are not invalid.
So the write_skip_count in batch_step_execution for that batch becomes 100 rather than 2.
On the other hand, the chunk with invalid records gets re-processed, and except for the two invalid records, all records properly reach the destination.
The functionality is achieved, but the write_skip_count is wrong, which prevents us from showing a proper log. Please suggest what I am missing here.
I can see the logs below:
Checking for rethrow: count=1
Rethrow in retry for policy: count=1
Initiating transaction rollback on application exception
Below is the code snippet we have tried so far:
<batch:step id="SomeStep">
<batch:tasklet>
<batch:chunk reader="SomeStepReader"
writer="SomeWriter" commit-interval="1000"
skip-limit="1000" retry-limit="1">
<batch:skippable-exception-classes>
<batch:include class="org.springframework.dao.someException" />
</batch:skippable-exception-classes>
<batch:retryable-exception-classes>
<batch:exclude class="org.springframework.dao.someException"/>
</batch:retryable-exception-classes>
</batch:chunk>
</batch:tasklet>
</batch:step>
After trying for some time, I figured out that this happens when writing to the database occurs in chunks and there is no transaction manager for that database, especially when your batch job reads from one database datasource and writes to another.
In such a case, batch fails the entire chunk and the skip count becomes the chunk size. It then re-processes the chunk with commit interval = 1, skips only the faulty records, and processes the correct ones. But the write skip count is now incorrect, as it should have counted only the incorrect records.
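That scan-down behaviour can be sketched as a toy model in plain Java (an illustration only, not the actual Spring Batch classes; `writeChunk`, the item type, and the failure rule are all made up for the example):

```java
import java.util.List;
import java.util.function.Consumer;

// Toy model of chunk processing: try to write the whole chunk in one
// transaction; if that fails, roll back and re-write item by item
// (commit-interval = 1), skipping only the items that actually fail.
class ChunkScanDemo {
    static int writeChunk(List<Integer> chunk, Consumer<Integer> writer) {
        try {
            for (int item : chunk) writer.accept(item); // one big transaction
            return 0;                                   // nothing skipped
        } catch (RuntimeException chunkFailure) {
            int skips = 0;                              // scan-down after rollback
            for (int item : chunk) {
                try { writer.accept(item); }
                catch (RuntimeException itemFailure) { skips++; }
            }
            return skips;                               // only the bad items
        }
    }
}
```

With a writer that rejects two of the items, the scan pass reports a skip count of 2; without a proper transaction manager on the target datasource, it is the failed first pass that gets recorded, inflating the count to the chunk size.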
To avoid this, create a transaction manager for the datasource you are writing to.
<bean id="SometransactionManager"
class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
<property name="dataSource" ref="SomedataSource" />
</bean>
Then use that transaction manager in the step where the transactions happen:
<batch:step id="someSlaveStep">
<batch:tasklet transaction-manager="SometransactionManager">
<batch:chunk reader="SomeJDBCReader"
writer="SomeWriterBean" commit-interval="1000"
skip-limit="1000">
<batch:skippable-exception-classes>
<batch:include class="java.lang.Exception" />
</batch:skippable-exception-classes>
<batch:listeners>
<batch:listener ref="SomeSkipHandler" />
</batch:listeners>
</batch:chunk>
</batch:tasklet>
</batch:step>
Now a failure during write happens under one transaction, the faulty record is handled gracefully, and only the faulty record is logged in the batch tables under the write skip count.
I have a class like this:
class EmployeeTask implements Runnable {
    public void run() {
        PayDAO payDAO = new PayDAO(session);
        String newSalary = payDAO.getSalary(empid);
        String newTitle = payDAO.getTitle(empid);
        EmployeeDAO employeeDAO = new EmployeeDAO(session);
        List<Employee> employees = employeeDAO.getEmployees();
        employees.parallelStream().forEach(employee -> employeeDAO.add(newSalary, newTitle, employee));
    }
}
When I run the above code with one thread, it completes the DB operation in 1 second and returns.
When I run it using parallelStream(), it submits 8 requests; all 8 threads wait for eight seconds and then return. This means the server is executing the requests sequentially instead of in parallel.
I checked the Java Jetty logs:
Thread-1 Insert …
Thread-2 Insert …
…
Thread-8 Insert …
After eight seconds, that is 1 second per request:
Thread-1 Insert … <= update 1
Thread-2 Insert … <= update 1
…
Thread-8 Insert … <= update 1
and the same thing continues.
This clearly tells me that all the threads are blocked on one single resource: either the datasource on my Java client has only one connection, so all eight threads block on it, or the server is executing the requests one after the other.
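That single-resource suspicion can be modelled with a toy semaphore standing in for the connection pool (plain Java; no Vertica, MyBatis, or real connections involved, and the names here are made up). With one permit, eight parallel tasks still run the "query" one at a time:

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.stream.IntStream;

// Toy model: each task must borrow a "connection" (semaphore permit)
// before doing its work. With one permit, the observed peak
// concurrency stays at 1 even under parallelStream().
class SingleConnectionDemo {
    static int peakConcurrency(int connections, int tasks) {
        Semaphore pool = new Semaphore(connections);
        AtomicInteger active = new AtomicInteger();
        AtomicInteger peak = new AtomicInteger();
        IntStream.range(0, tasks).parallel().forEach(i -> {
            try {
                pool.acquire();                          // borrow a connection
                peak.accumulateAndGet(active.incrementAndGet(), Math::max);
                Thread.sleep(10);                        // simulated query
                active.decrementAndGet();
                pool.release();                          // return it
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        return peak.get();
    }
}
```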
I checked MaxPooledConnections (gives 20) and MaxPooledConnectionsPerNode (gives 5), the default values.
I think the Vertica DB server is fine; maybe the client does not have datasource connection pooling. How do I enable connection pooling on the Java side? Any code examples?
Currently I am using the MyBatis ORM with the following datasource:
<environments default="XX">
<environment id="XX">
<transactionManager type="JDBC"/>
<dataSource type="POOLED">
<property name="driver" value="com.vertica.jdbc.Driver"/>
<property name="url" value="jdbc:vertica://host:port/schema"/>
<property name="username" value="userName"/>
<property name="password" value="password"/>
<property name="poolMaximumActiveConnections" value="50"/>
<property name="poolMaximumIdleConnections" value="10"/>
</dataSource>
</environment>
</environments>
What is the barrier condition all the threads are waiting on? My guess is the datasource connection. Any help is appreciated.
Thanks.
I have code that gets values from Redis using the Jedis client. At one point Redis was at its maximum connection count, and these exceptions were being thrown:
org.springframework.data.redis.RedisConnectionFailureException
Cannot get Jedis connection; nested exception is redis.clients.jedis.exceptions.JedisConnectionException: Could not get a resource from the pool
at org.springframework.data.redis.connection.jedis.JedisConnectionFactory.fetchJedisConnector(JedisConnectionFactory.java:140)
at org.springframework.data.redis.connection.jedis.JedisConnectionFactory.getConnection(JedisConnectionFactory.java:229)
...
org.springframework.data.redis.RedisConnectionFailureException
java.net.SocketTimeoutException: Read timed out; nested exception is redis.clients.jedis.exceptions.JedisConnectionException: java.net.SocketTimeoutException: Read timed out
at org.springframework.data.redis.connection.jedis.JedisExceptionConverter.convert(JedisExceptionConverter.java:47)
at org.springframework.data.redis.connection.jedis.JedisExceptionConverter.convert(JedisExceptionConverter.java:36)
...
org.springframework.data.redis.RedisConnectionFailureException
Cannot get Jedis connection; nested exception is redis.clients.jedis.exceptions.JedisConnectionException: Could not get a resource from the pool
at org.springframework.data.redis.connection.jedis.JedisConnectionFactory.fetchJedisConnector(JedisConnectionFactory.java:140)
at org.springframework.data.redis.connection.jedis.JedisConnectionFactory.getConnection(JedisConnectionFactory.java:229)
When I checked an AppDynamics analysis of this scenario, I saw some calls being iterated over a long period of time (1772 seconds). The calls are shown in the screenshots.
Can anyone explain what's happening here? Why didn't Jedis stop after the timeout setting (500 ms)? Can I prevent this from happening?
This is what my bean definitions for Jedis look like:
<bean id="jedisConnectionFactory" class="org.springframework.data.redis.connection.jedis.JedisConnectionFactory" p:host-name="100.90.80.70" p:port="6382" p:timeout="500" p:use-pool="true" p:poolConfig-ref="jedisPoolConfig" p:database="3" />
<bean id="jedisPoolConfig" class="redis.clients.jedis.JedisPoolConfig">
<property name="maxTotal" value="1000" />
<property name="maxIdle" value="10" />
<property name="maxWaitMillis" value="500" />
<property name="testOnBorrow" value="true" />
<property name="testOnReturn" value="true" />
<property name="testWhileIdle" value="true" />
<property name="numTestsPerEvictionRun" value="10" />
</bean>
I'm not familiar with the AppDynamics output. I assume it's a cumulative view of threads and their sleep times: threads get reused, so the sleep times add up. In some cases a thread gets a connection directly, without any waiting; in another call the thread has to wait until a connection can be provided. The wait duration depends on when a connection becomes available, or on the wait limit being hit.
Let's look at a practical example:
Your screenshot shows a thread which waited 172 ms. Assuming sleep is only called within the Jedis/Redis invocation path, the thread waited 172 ms in total to get a connection.
Another thread waited 530 ms, which looks to me as if its first attempt to get a connection was unsuccessful (which explains the first 500 ms) and on a second attempt it had to wait another 30 ms. It could also be that it waited twice for 265 ms.
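The pool semantics behind those numbers can be reduced to a plain-Java sketch (a semaphore standing in for the Jedis pool; `maxWaitMillis` becomes the `tryAcquire` timeout, and the names are illustrative):

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

// Toy model of pool borrowing: maxTotal permits, and a borrower
// waits at most maxWaitMillis before giving up, which is when the
// "Could not get a resource from the pool" exception surfaces.
class PoolWaitDemo {
    static boolean tryBorrow(Semaphore pool, long maxWaitMillis) throws InterruptedException {
        return pool.tryAcquire(maxWaitMillis, TimeUnit.MILLISECONDS);
    }
}
```

With the single permit already taken, a second borrower waits the full timeout and then fails; a retry on top of that is how a thread ends up sleeping 530 ms in total even though the configured wait is 500 ms.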
Sidenote:
1000+ connections could severely limit scalability. Spring Data Redis also supports other drivers which don't require pooling but work with fewer connections (see the Spring Data Redis documentation on connectors).
I have deployed an Oracle SOA composite, with a BPEL polling DB adapter, from JDeveloper 11g to WebLogic 11g. I am trying to tell whether it is working. Looking in soa_server1-diagnostic.log, I see the following message:
[2014-10-08T14:53:02.753-05:00] [soa_server1] [NOTIFICATION] [] [oracle.soa.adapter] [tid: [ACTIVE].ExecuteThread: '3' for queue: 'weblogic.kernel.Default (self-tuning)'] [userId: weblogic] [ecid: b4de9447a6405836:356834d:148f023a253:-8000-00000000000002ad,1:21897] [APP: soa-infra] JCABinding=> [NotificationService.SugarCRM_Poll/2.0] :init Successfully initialized SugarCRM_Poll_db.jca
First, am I looking in the right log? And is this what I should see every time it runs?
The .jca file for the polling DB adapter looks like this:
<adapter-config name="SugarCRM_Poll" adapter="Database Adapter" wsdlLocation="SugarCRM_Poll.wsdl" xmlns="http://platform.integration.oracle/blocks/adapter/fw/metadata">
<connection-factory location="eis/DB/SugarDbProd" UIConnectionName="SugarDbProd" adapterRef=""/>
<endpoint-activation portType="SugarCRM_Poll_ptt" operation="receive">
<activation-spec className="oracle.tip.adapter.db.DBActivationSpec">
<property name="DescriptorName" value="SugarCRM_Poll.OpportunityStagingTable"/>
<property name="QueryName" value="SugarCRM_PollSelect"/>
<property name="MappingsMetaDataURL" value="SugarCRM_Poll-or-mappings.xml"/>
<property name="PollingStrategy" value="LogicalDeletePollingStrategy"/>
<property name="MarkReadColumn" value="account_name_new"/>
<property name="MarkReadValue" value="X"/>
<property name="MarkUnreadValue" value="R"/>
<property name="PollingInterval" value="5"/>
<property name="MaxRaiseSize" value="1"/>
<property name="MaxTransactionSize" value="10"/>
<property name="NumberOfThreads" value="1"/>
<property name="ReturnSingleResultSet" value="false"/>
</activation-spec>
</endpoint-activation>
</adapter-config>
I am also seeing this notification in soa_server1-diagnostic.log:
[2014-10-10T07:31:05.328-05:00] [soa_server1] [NOTIFICATION] [] [oracle.soa.adapter] [tid: Workmanager: , Version: 0, Scheduled=false, Started=false, Wait time: 0 ms\n] [userId: weblogic] [ecid: b4de9447a6405836:356834d:148f023a253:-8000-0000000000000708,1:19750] [APP: soa-infra] Database Adapter NotificationService <oracle.tip.adapter.db.InboundWork handleException> BINDING.JCA-11624[[
DBActivationSpec Polling Exception.
Query name: [SugarCRM_PollSelect], Descriptor name: [SugarCRM_Poll.OpportunityStagingTable]. Polling the database for events failed on this iteration.
Caused by com.mysql.jdbc.exceptions.jdbc4.MySQLNonTransientConnectionException: No operations allowed after connection closed..
This exception is considered retriable, likely due to a communication failure. To classify it as non-retriable instead add property nonRetriableErrorCodes with value "0" to your deployment descriptor (i.e. weblogic-ra.xml). Polling will be attempted again next polling interval.
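For reference, the log's suggestion would land in the adapter's deployment descriptor (weblogic-ra.xml). A sketch of the property, with the exact surrounding elements depending on your descriptor version (this is an assumption drawn from the log message, not a verified descriptor):

```xml
<property>
  <name>nonRetriableErrorCodes</name>
  <value>0</value>
</property>
```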
I am able to test the connection in the WebLogic 11g Admin Console and it works fine: I see the message "Test of TestSugarDataSource on server soa_server1 was successful." I was also able to use the netcat command to test connectivity successfully: "nc -vz xx.xx.xx.xx 3306" returns "Connection to xx.xx.xx.xx 3306 port [tcp/mysql] succeeded!" So it appears connectivity is not the issue.
Thanks,
Tom Henricksen
I was able to find the issue with the polling by changing the log configuration for oracle.soa to TRACE:32 (FINEST) logging. This allowed me to see the underlying query the polling DB adapter was running and to make corrections. The diagnostic log file gave me everything I needed once I made this change and corrected the problem.
Thanks,
Tom
Please share any links on configuring Activiti with Camel. All the examples I could find show SERVICETASK -> CAMELROUTE -> FILE and then FILE -> RECEIVETASK (Activiti).
These involve some BUSINESS_KEY, which I couldn't figure out.
I need an example showing SERVICETASK -> CAMELROUTE -> RECEIVETASK (signalling Activiti). I don't know why, but this example gives me an error.
file: activiti-flow.bpmn20.xml:
<process id="camelprocess" name="My process" isExecutable="true">
<startEvent id="startevent1" name="Start"></startEvent>
<serviceTask id="servicetask1" name="Service Task" activiti:async="true" activiti:delegateExpression="${camel}"></serviceTask>
<receiveTask id="receivetask1" name="Receive Task"></receiveTask>
<endEvent id="endevent1" name="End"></endEvent>
<sequenceFlow id="flow1" sourceRef="startevent1" targetRef="servicetask1"></sequenceFlow>
<sequenceFlow id="flow2" sourceRef="servicetask1" targetRef="receivetask1"></sequenceFlow>
<sequenceFlow id="flow3" sourceRef="receivetask1" targetRef="endevent1"></sequenceFlow>
activiti-camel-spring.xml
<bean id="camel" class="org.activiti.camel.CamelBehaviour">
<constructor-arg index="0">
<list>
<bean class="org.activiti.camel.SimpleContextProvider">
<constructor-arg index="0" value="camelprocess" />
<constructor-arg index="1" ref="camelContext" />
</bean>
</list>
</constructor-arg>
</bean>
<camel:camelContext id="camelContext">
<camel:route>
<camel:from uri="activiti:camelprocess:servicetask1"/>
<camel:to uri="bean:serviceActivator?method=doSomething(${body})"/>
<camel:to uri="activiti:camelprocess:receivetask1"/>
</camel:route>
</camel:camelContext>
The error is:
1|ERROR|org.slf4j.helpers.MarkerIgnoringBase:161||||>> Failed delivery for (MessageId: ID-viscx73-PC-49557-1376961951564-0-1 on ExchangeId: ID-viscx73-PC-49557-1376961951564-0-2). Exhausted after delivery attempt: 1 caught: org.activiti.engine.ActivitiIllegalArgumentException: Business key is null
at org.activiti.engine.impl.ProcessInstanceQueryImpl.processInstanceBusinessKey(ProcessInstanceQueryImpl.java:87)
at org.activiti.camel.ActivitiProducer.findProcessInstanceId(ActivitiProducer.java:78)
at org.activiti.camel.ActivitiProducer.signal(ActivitiProducer.java:58)
at org.activiti.camel.ActivitiProducer.process(ActivitiProducer.java:49)
at org.apache.camel.util.AsyncProcessorConverterHelper$ProcessorToAsyncProcessorBridge.process(AsyncProcessorConverterHelper.java:61)
at org.apache.camel.util.AsyncProcessorHelper.process(AsyncProcessorHelper.java:73)
All the forums/links show ACTIVITI -> CAMELROUTE (FILE) and then, in another route, CAMEL_FILE -> RECEIVETASK.
They suggest adding a key like PROCESS_KEY_PROPERTY or PROCESS_ID_PROPERTY, but I don't get where these properties fit in.
I am trying to work from the example at this link:
http://bpmn20inaction.blogspot.in/2013/03/using-camel-routes-in-activiti-made.html
I am not sure whether the process, after handing the service task to Camel, is not moving to the receive task at all and is waiting there, or whether Camel is unable to find the receive task.
Please share some suggestions on this.
Thanks
It worked after adding the built-in Camel queues as shown in the example. I had thought they were only shown as an example of various routes, but passing through the queue actually made the ServiceTask asynchronous in Camel; the messages were later read from the queue and the receive task was invoked in Activiti.
<camel:to uri="seda:tempQueue"/>
<camel:from uri="seda:tempQueue"/>
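Putting those two fragments together with the original route, the working configuration might look like this (same ids as above; `seda:tempQueue` is the in-memory queue name from the snippet):

```xml
<camel:camelContext id="camelContext">
  <camel:route>
    <camel:from uri="activiti:camelprocess:servicetask1"/>
    <camel:to uri="bean:serviceActivator?method=doSomething(${body})"/>
    <camel:to uri="seda:tempQueue"/>
  </camel:route>
  <camel:route>
    <camel:from uri="seda:tempQueue"/>
    <camel:to uri="activiti:camelprocess:receivetask1"/>
  </camel:route>
</camel:camelContext>
```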
Thanks
I don't know whether you've solved the problem or not, but I faced the same problem.
I finally found a solution.
It is correct that the PROCESS_ID_PROPERTY property must be provided; otherwise the Activiti engine doesn't know which process instance to execute. So I just set the PROCESS_ID_PROPERTY value in the header when sending the JMS message to ActiveMQ, and when the message comes back, I set the property from the header again. Something like:
from("activiti:process:simpleCall").setHeader("PROCESS_ID_PROPERTY", simple("${property.PROCESS_ID_PROPERTY}")).to("activemq:queue:request");
from("activemq:queue:reply").setProperty("PROCESS_ID_PROPERTY", simple("${header.PROCESS_ID_PROPERTY}")).to("activiti:process:simpleReceive");
Hope it will help you.
I'm using Quartz with Spring to run a specific task at midnight on the first day of the month. I have tested the job by setting my server date and time to 11:59 on the last day of the month, starting the server, and observing the task run when the clock turns to 12:00. However, I'm concerned about cases where the server (for whatever reason) may not be running at midnight on the first of the month.
I'd assumed that misfire handling in Quartz would care for this, but maybe I'm mistaken on that?
Can anyone advise me on how I might handle this? I'd really prefer not to create a job that runs every 'x' seconds/minutes/hours and checks whether the real job needs to run, if I can avoid it.
I'm also curious as to why I'm not seeing any Quartz related logging info, but that's a secondary issue.
Here is my spring configuration for the task:
<bean id="schedulerService" class="com.bah.pams.service.scheduler.SchedulerService">
<property name="surveyResponseDao" ref="surveyResponseDao"/>
<property name="organizationDao" ref="organizationDao"/>
</bean>
<bean name="createSurveyResponsesJob" class="org.springframework.scheduling.quartz.JobDetailBean">
<property name="jobClass" value="com.bah.pams.service.scheduler.jobs.CreateSurveyResponsesJob"/>
<property name="jobDataAsMap">
<map>
<entry key="schedulerService" value-ref="schedulerService"/>
</map>
</property>
</bean>
<!-- Cron Trigger -->
<bean id="cronTrigger" class="org.springframework.scheduling.quartz.CronTriggerBean">
<property name="jobDetail" ref="createSurveyResponsesJob"/>
<property name="cronExpression" value="0 0 0 1 * ? *"/>
<!--if the server is down at midnight on 1st of month, run this job as soon as it starts up next -->
<property name="misfireInstructionName" value="MISFIRE_INSTRUCTION_FIRE_ONCE_NOW"/>
</bean>
<bean class="org.springframework.scheduling.quartz.SchedulerFactoryBean">
<property name="autoStartup" value="true"/>
<property name="quartzProperties">
<props>
<prop key="org.quartz.jobStore.class">org.quartz.simpl.RAMJobStore</prop>
<prop key="org.quartz.jobStore.misfireThreshold">60000</prop>
</props>
</property>
<property name="jobDetails">
<list>
<ref bean="createSurveyResponsesJob"/>
</list>
</property>
<property name="triggers">
<list>
<ref bean="cronTrigger"/>
</list>
</property>
</bean>
MISFIRE_INSTRUCTION_FIRE_ONCE_NOW serves the purpose you mentioned, and if you suspect a shutdown of the server(s) you should definitely persist your jobs outside the JVM memory (for example, by using a JDBCJobStore instead of a RAMJobStore).
RAMJobStore is fast and lightweight, but all scheduling information is lost when the process terminates.
http://quartz-scheduler.org/documentation/quartz-2.x/configuration/ConfigRAMJobStore
JDBCJobStore is used to store scheduling information (job, triggers and calendars) within a relational database.
http://quartz-scheduler.org/documentation/quartz-2.x/configuration/ConfigJobStoreTX
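As a sketch, swapping the RAMJobStore configuration above for a JDBC-backed one would look roughly like this (the property keys are the standard Quartz ones; the datasource wiring on SchedulerFactoryBean is assumed and not shown):

```xml
<property name="quartzProperties">
  <props>
    <prop key="org.quartz.jobStore.class">org.quartz.impl.jdbcjobstore.JobStoreTX</prop>
    <prop key="org.quartz.jobStore.driverDelegateClass">org.quartz.impl.jdbcjobstore.StdJDBCDelegate</prop>
    <prop key="org.quartz.jobStore.tablePrefix">QRTZ_</prop>
    <prop key="org.quartz.jobStore.misfireThreshold">60000</prop>
  </props>
</property>
```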
Hope it helps.
I too have faced the problem of wanting to run the last scheduled job on a server restart.
The solution I found is to go back one time interval for the trigger and calculate what would have been the next firing time. By iterating through all the triggers, the most recent time that a trigger should have fired in the past can be determined.
Calculate the interval between each firing:
Date nextFireTime = trigger.getNextFireTime();
Date subsequentFireTime = trigger.getFireTimeAfter(nextFireTime);
long interval = subsequentFireTime.getTime() - nextFireTime.getTime();
Find the firing time that should have occurred one interval in the past:
Date previousPeriodTime = new Date(System.currentTimeMillis() - interval);
Date previousFireTime = trigger.getFireTimeAfter(previousPeriodTime);
I have found that if you are using a CronTrigger, it prevents you from asking for a fire time in the past. To work around this I modify the start time, so the above snippet becomes:
Date originalStartTime = trigger.getStartTime(); // save the start time
Date previousPeriodTime = new Date(originalStartTime.getTime() - interval);
trigger.setStartTime(previousPeriodTime);
Date previousFireTime = trigger.getFireTimeAfter(previousPeriodTime);
trigger.setStartTime(originalStartTime); // reset the start time to be nice
Iterate through all of the triggers and find the one whose missed fire time is most recent:
for (String groupName : scheduler.getTriggerGroupNames()) {
for (String triggerName : scheduler.getTriggerNames(groupName)) {
Trigger trigger = scheduler.getTrigger(triggerName, groupName);
// code as detailed above...
interval = ...
previousFireTime = ...
}
}
I'll leave it as an exercise to the reader to refactor this into helper methods or classes. I actually use the above algorithm in a subclassed delegating trigger that I then place in a set sorted by previous firing times.
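The interval arithmetic from the snippets above can be isolated into plain Java (no Quartz types; a trigger is reduced here to its next two known fire times, and the names are illustrative):

```java
import java.util.Date;

// Given a trigger's next fire time and the one after it, the period
// between firings is their difference; stepping back one period from
// the next fire time yields the firing that should already have happened.
class MissedFireDemo {
    static Date previousFireTime(Date nextFireTime, Date subsequentFireTime) {
        long interval = subsequentFireTime.getTime() - nextFireTime.getTime();
        return new Date(nextFireTime.getTime() - interval);
    }
}
```

Comparing that derived time against the last recorded run then tells you whether a firing was missed while the server was down.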