Spring Quartz remote scheduler - java

I started a Quartz server instance using Spring (as in the article), and it works fine.
But when I try to start another app with a Quartz client and connect it to the Quartz server, I get an error. Please help me figure out what I'm doing wrong.
Server config:
@Configuration
public class ServerConfig {
@Bean
public SchedulerFactoryBean schedulerFactoryBean(ApplicationContext applicationContext) {
SchedulerFactoryBean schedulerFactory = new SchedulerFactoryBean();
schedulerFactory.setConfigLocation(new ClassPathResource("quartz.properties"));
schedulerFactory.setSchedulerListeners(...);
schedulerFactory.setJobFactory(...);
schedulerFactory.setTriggers(...);
return schedulerFactory;
}
...
}
Client config:
@Configuration
public class ClientConfig {
@Bean
public SchedulerFactoryBean schedulerFactoryBean() {
SchedulerFactoryBean schedulerFactory = new SchedulerFactoryBean();
schedulerFactory.setConfigLocation(new ClassPathResource("quartz.properties"));
return schedulerFactory;
}
...
}
Server quartz.properties:
org.quartz.scheduler.instanceName=myScheduler
org.quartz.scheduler.rmi.export=true
org.quartz.scheduler.rmi.createRegistry=true
org.quartz.scheduler.rmi.registryHost=localhost
org.quartz.scheduler.rmi.registryPort=1099
org.quartz.scheduler.rmi.serverPort=1100
Client quartz.properties:
org.quartz.scheduler.instanceName=myScheduler
org.quartz.scheduler.rmi.proxy=true
org.quartz.scheduler.rmi.registryHost=localhost
org.quartz.scheduler.rmi.registryPort=1099
Server logs:
[2019-09-03] [10:07:26.171] INFO SchedulerFactoryBean:538 - Loading Quartz config from [class path resource [quartz.properties]]
[2019-09-03] [10:07:26.229] INFO StdSchedulerFactory:1208 - Using default implementation for ThreadExecutor
[2019-09-03] [10:07:26.272] INFO SchedulerSignalerImpl:61 - Initialized Scheduler Signaller of type: class org.quartz.core.SchedulerSignalerImpl
[2019-09-03] [10:07:26.273] INFO QuartzScheduler:229 - Quartz Scheduler v.2.3.0 created.
[2019-09-03] [10:07:26.277] INFO RAMJobStore:155 - RAMJobStore initialized.
[2019-09-03] [10:07:26.338] INFO QuartzScheduler:421 - Scheduler bound to RMI registry under name 'schedulerFactoryBean_$_NON_CLUSTERED'
[2019-09-03] [10:07:26.343] INFO QuartzScheduler:294 - Scheduler meta-data: Quartz Scheduler (v2.3.0) 'schedulerFactoryBean' with instanceId 'NON_CLUSTERED'
Scheduler class: 'org.quartz.core.QuartzScheduler' - access via RMI.
...
[2019-09-03] [10:07:26.344] INFO StdSchedulerFactory:1362 - Quartz scheduler 'schedulerFactoryBean' initialized from an externally provided properties instance.
[2019-09-03] [10:07:26.345] INFO StdSchedulerFactory:1366 - Quartz scheduler version: 2.3.0
...
[2019-09-03] [10:07:29.12] INFO SchedulerFactoryBean:684 - Starting Quartz Scheduler now
[2019-09-03] [10:07:29.13] INFO QuartzScheduler:547 - Scheduler schedulerFactoryBean_$_NON_CLUSTERED started.
Client logs:
[2019-09-03] [10:11:53.255] INFO SchedulerFactoryBean:538 - Loading Quartz config from [class path resource [quartz.properties]]
Exception in thread "main" org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'schedulerFactoryBean' defined in class path resource [com/mypackage/ClientConfig.class]: Invocation of init method failed; nested exception is org.quartz.SchedulerException: Operation not supported for remote schedulers.
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1631)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:553)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:481)
at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:312)
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:230)
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:308)
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:197)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:742)
at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:932)
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:479)
at org.springframework.context.annotation.AnnotationConfigApplicationContext.<init>(AnnotationConfigApplicationContext.java:73)
... 3 more
Caused by: org.quartz.SchedulerException: Operation not supported for remote schedulers.
at org.quartz.impl.RemoteScheduler.getListenerManager(RemoteScheduler.java:913)
at org.springframework.scheduling.quartz.SchedulerAccessor.registerListeners(SchedulerAccessor.java:340)
at org.springframework.scheduling.quartz.SchedulerFactoryBean.afterPropertiesSet(SchedulerFactoryBean.java:476)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1689)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1627)
... 13 more
As you can see, the error is org.quartz.SchedulerException: Operation not supported for remote schedulers, and it happens during bean initialization. What's wrong here?
After client fails to start, server shows in logs:
[2019-09-03] [10:11:53.462] INFO QuartzScheduler:666 - Scheduler schedulerFactoryBean_$_NON_CLUSTERED shutting down.
[2019-09-03] [10:11:53.462] INFO QuartzScheduler:585 - Scheduler schedulerFactoryBean_$_NON_CLUSTERED paused.
[2019-09-03] [10:11:53.871] INFO QuartzScheduler:447 - Scheduler un-bound from name 'schedulerFactoryBean_$_NON_CLUSTERED' in RMI registry
[2019-09-03] [10:11:53.872] INFO QuartzScheduler:740 - Scheduler schedulerFactoryBean_$_NON_CLUSTERED shutdown complete.
I am using Spring 4.3.20 and Quartz 2.3.0.
Update:
I see the registerListeners() call in SchedulerFactoryBean:
@Override
public void afterPropertiesSet() throws Exception {
...
try {
registerListeners();
registerJobsAndTriggers();
}
catch (Exception ex) {
try {
this.scheduler.shutdown(true);
}
catch (Exception ex2) {
logger.debug("Scheduler shutdown exception after registration failure", ex2);
}
throw ex;
}
}
and registerListeners() calls getScheduler().getListenerManager(), which fails with the "Operation not supported for remote schedulers" exception. How can I avoid calling registerListeners() for a RemoteScheduler?

Finally I got it working the following way:
I stopped using Spring's SchedulerFactoryBean on the client and instead create an instance of StdSchedulerFactory from the Quartz library directly (new StdSchedulerFactory()); it then properly creates a RemoteScheduler and connects to the server's Quartz instance.
Since I cannot obtain a StdSchedulerFactory from a SchedulerFactoryBean, I simply pass both factories to the beans that need them. So in the client app SchedulerFactoryBean is null, and in the server app StdSchedulerFactory is null.
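For reference, a minimal sketch of the client-side wiring with this approach (assuming the client quartz.properties shown above is on the classpath; the bean method name is illustrative):
import org.quartz.impl.StdSchedulerFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class ClientConfig {

    // Plain Quartz factory instead of Spring's SchedulerFactoryBean.
    // quartz.properties is picked up from the classpath when getScheduler()
    // is first called, and org.quartz.scheduler.rmi.proxy=true makes that
    // call return a RemoteScheduler proxy for the server's scheduler.
    @Bean
    public StdSchedulerFactory quartzSchedulerFactory() {
        return new StdSchedulerFactory();
    }
}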
The job scheduling part is the same for both factories, as all I do with them is obtain the scheduler:
private Scheduler getScheduler() throws SchedulerException {
if (schedulerFactoryBean != null) return schedulerFactoryBean.getScheduler();
if (schedulerFactory != null) return schedulerFactory.getScheduler();
else return null;
}
and then...
Scheduler scheduler = getScheduler();
if (scheduler != null) {
scheduler.scheduleJob(job, trigger);
}

Related

Hikari Connection Pool - Slow, Block, Connection is not available: SpringBoot

Problem:
I have a Spring Boot application with Hikari configured (auto-configuration). I'm getting the error
Connection is not available, request timed out after 30113ms
when I just do an insert operation in the database. The flow is Controller > Service > Repository > save(entity). I'm not using @Transactional in the repository, but the result is the same if I do.
While load testing this service with 50 requests/1 sec sequentially, 20-30 requests succeed and the rest fail with the exception below.
2019-03-28 20:58:29.507 ERROR 90260 --- [http-nio-8080-exec-234] o.a.c.c.C.[.[.[/].[dispatcherServlet] : Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Request processing failed; nested exception is org.springframework.transaction.CannotCreateTransactionException: Could not open JPA EntityManager for transaction; nested exception is org.hibernate.exception.JDBCConnectionException: Unable to acquire JDBC Connection] with root cause
java.sql.SQLTransientConnectionException: HikariPool-1 - Connection is not available, request timed out after 30113ms.
at com.zaxxer.hikari.pool.HikariPool.createTimeoutException(HikariPool.java:697) ~[HikariCP-3.3.1.jar:na]
I am doing a kind of load test, triggering 50 requests/1 sec, and roughly half succeed and half fail. I also enabled leak detection, but there is no trace of it in the log.
Am I overdoing the configuration for this test, do I need to tune the pool connections, or is that simply all it supports?
Also, why does Hikari's getConnection block and take 5+ seconds longer for the second and subsequent requests? Why isn't it parallel? Please help me or guide me on how much I need to tune to handle about 200 requests per minute.
application.yml
spring:
  application:
    name: demo
  datasource:
    hikari:
      connection-timeout: 20000
      minimum-idle: 5
      maximum-pool-size: 50
      idle-timeout: 300000
      max-lifetime: 1200000
      auto-commit: true
      driver-class-name: com.microsoft.sqlserver.jdbc.SQLServerDriver
      jdbc-url: jdbc:sqlserver://ip:port;databaseName=sample
      username: username
      leak-detection-threshold: 30000
BootApplication.java
@SpringBootApplication
public class Sample{
public static void main(String[] args) {
SpringApplication.run(Sample.class, args);
}
@Bean
@ConfigurationProperties(prefix = "spring.datasource.hikari")
public DataSource dataSource() {
HikariDataSource dataSource=new HikariDataSource();
//configuring pass from vault
return dataSource;
}
}
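For comparison, a rough programmatic equivalent of the pool settings above (a sketch only: the class name is illustrative, the values are copied from the application.yml, and the password lookup from the vault is reduced to a method parameter):
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

import javax.sql.DataSource;

public final class HikariConfigExample {

    static DataSource buildDataSource(String password) {
        HikariConfig config = new HikariConfig();
        config.setDriverClassName("com.microsoft.sqlserver.jdbc.SQLServerDriver");
        config.setJdbcUrl("jdbc:sqlserver://ip:port;databaseName=sample");
        config.setUsername("username");
        config.setPassword(password);            // e.g. resolved from the vault
        config.setMinimumIdle(5);
        config.setMaximumPoolSize(50);           // must cover the expected concurrency
        config.setConnectionTimeout(20_000);     // ms a caller waits for a free connection
        config.setIdleTimeout(300_000);
        config.setMaxLifetime(1_200_000);
        config.setAutoCommit(true);
        config.setLeakDetectionThreshold(30_000);
        return new HikariDataSource(config);
    }
}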
SampleServiceImpl.java
@Service
public class SampleServiceImpl implements SampleService {
@Autowired
private SampleRepository sampleRepository;
@Override
public List<String> getAll() {
return (List<String>) sampleRepository.findAll();
}
@Override
public String saveOrUpdate(Sample obj) {
return sampleRepository.save(obj);
}
}

Quartz 2.2.3 JobStore Property Being Overridden by Default Settings - Spring Boot, Liquibase, Oracle

I am trying to implement a Quartz scheduler in my Spring Boot application; however, I continue to receive the following error after startup:
Caused by: java.sql.SQLSyntaxErrorException: ORA-00904: "SCHED_NAME": invalid identifier.
In the startup logs before the error, I see:
liquibase : Successfully acquired change log lock
liquibase : Reading from PARTSVOICE_APP.DATABASECHANGELOG
liquibase : Successfully released change log lock
o.q.i.StdSchedulerFactory : Using default implementation for ThreadExecutor
o.q.c.SchedulerSignalerImpl : Initialized Scheduler Signaller of type: class org.quartz.core.SchedulerSignalerImpl
o.q.c.QuartzScheduler : Quartz Scheduler v.2.2.3 created.
o.s.s.q.LocalDataSourceJobStore : Using db table-based data access locking (synchronization).
o.s.s.q.LocalDataSourceJobStore : JobStoreCMT initialized.
o.q.c.QuartzScheduler : Scheduler meta-data: Quartz Scheduler (v2.2.3) 'scheduler' with instanceId 'NON_CLUSTERED'
Scheduler class: 'org.quartz.core.QuartzScheduler' - running locally.
NOT STARTED.
Currently in standby mode.
Number of jobs executed: 0
Using thread pool 'org.quartz.simpl.SimpleThreadPool' - with 25 threads.
Using job-store 'org.springframework.scheduling.quartz.LocalDataSourceJobStore' - which supports persistence. and is not clustered.
This message is concerning because in my application.yml, my setup is the following:
org:
  quartz:
    scheduler:
      instanceName: cdp-scheduler
      instanceId: AUTO
    threadPool:
      threadCount: 25
      class: org.quartz.simpl.SimpleThreadPool
    jobStore:
      class: org.quartz.impl.jdbcjobstore.JobStoreTX
      tablePrefix: c_
I read these properties and put them within a SchedulerFactoryBean here:
@Bean
public SchedulerFactoryBean scheduler(DataSource ds, SpringLiquibase dependent) throws SchedulerException, ConnectionException {
SchedulerFactoryBean factory = new SchedulerFactoryBean();
Properties quartzProps = new Properties();
quartzProps.setProperty("org.quartz.scheduler.instanceId", quartzConfig.getInstanceId());
quartzProps.setProperty("org.quartz.scheduler.instanceName", quartzConfig.getInstanceName());
quartzProps.setProperty("org.quartz.threadPool.threadCount", quartzConfig.getThreadCount());
quartzProps.setProperty("org.quartz.jobStore.class", quartzConfig.getJobStoreClass());
quartzProps.setProperty("org.quartz.jobStore.driverDelegateClass", "org.quartz.impl.jdbcjobstore.oracle.OracleDelegate");
quartzProps.setProperty("org.quartz.threadPool.class", quartzConfig.getThreadPoolClass());
factory.setOverwriteExistingJobs(true);
factory.setAutoStartup(false);
factory.setDataSource(ds);
factory.setQuartzProperties(quartzProps);
factory.setExposeSchedulerInRepository(true);
factory.setAutoStartup(true);
AutowiringSpringBeanJobFactory jobFactory = new AutowiringSpringBeanJobFactory();
jobFactory.setApplicationContext(applicationContext);
factory.setJobFactory(jobFactory);
return factory;
}
I am trying to use the org.quartz.impl.jdbcjobstore.JobStoreTX class so that I can store the Quartz information in our database. However, judging from the log message above, I believe this is getting overridden. A quick Google search has shown me that if you provide a DataSource, the JobStoreCMT class is used automatically, but that doesn't make sense to me.
I'm also using Liquibase to execute a SQL script that creates the Quartz tables. This is being executed fine; it drops and creates the tables I need. I've also tried using Flyway, but I get the same error, so it is definitely related to my Quartz setup.
Has anybody had a similar experience? Any suggestions? Please let me know if you think I should supply more information. Thanks.
From the error I can infer that the tables aren't properly created (either not at all, or using the wrong scripts). For Quartz, you need to create the tables using the DB scripts for your database type, which you can find in the Quartz download.
Here is a similar case: https://groups.google.com/forum/#!topic/axonframework/IlWZ0UHK2hk
If you think that the script is OK, please try to create the table directly in your database.
If it still does not work, go to the next step and check the configuration. Maybe the tablePrefix that you are using is not interpreted correctly. Try to use the default QRTZ_.
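Note that the quartzProps built in the scheduler(...) bean above never pass a table prefix to Quartz at all, so Quartz falls back to its default. A hedged sketch of the extra line that would align it with whatever prefix the DDL scripts actually created (QRTZ_ is the Quartz default; use c_ if that is what the Liquibase script created):
// alongside the other quartzProps entries in the scheduler(...) bean:
// the prefix must match what the table-creation scripts actually used
quartzProps.setProperty("org.quartz.jobStore.tablePrefix", "QRTZ_");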
Here is a setup example with a lower version of Quartz:
http://www.opencodez.com/java/quartz-scheduler-with-spring-boot.htm
Let me know if it works!

FailedToCreateRouteException Deploying Camel-CDI application to Wildfly 8.2

I am developing an application that reads messages from a Kafka queue and processes them by creating and sending an e-mail.
I was able to get a simple camel route working in a standalone java application, however when I package it up using CDI and Wildfly 8.2 it fails to deploy.
I am following the directions from here:
camel CDI Properties example
I used the example as a template for my application.
I am using Wildfly 8.2.0, Camel 2.18.1, Kafka 0.10.1.1
What am I doing wrong? I don't understand the error "org.apache.camel.FailedToCreateRouteException: Failed to create route Deep-email-notify-route: Route(Deep-email-notify-route)[[From[no uri or ref supplied!... because of Either 'uri' or 'ref' must be specified on: org.apache.camel.impl.DefaultRouteContext@55edcc54"
This is most of my code:
@ApplicationScoped
public class OrderEmailApplication {
/** The logger. */
private static Logger logger = LogManager.getLogger(OrderEmailRoute.class);
public OrderEmailApplication() {}
@ContextName("OrderNotifications")
static class OrderEmailRoute extends RouteBuilder {
@Uri("kafka://{{deep.notify.kafka.server}}?topic={{deep.notify.order.topics}}&groupId={{deep.notify.order.groupId}}&"+
"keyDeserializer=org.apache.kafka.common.serialization.StringDeserializer&" +
"valueDeserializer=org.apache.kafka.common.serialization.StringDeserializer")
Endpoint kafkaEndpoint;
@Uri("file:///home/whomer/tmp/Notify/deadMessages")
private Endpoint deadLetter;
@BeanInject
BulkEmailProcessor bulkEmailProcessor;
@BeanInject
FailedMessageEnrichmentProcessor failedEmailEnrichement;
@Override
public void configure() throws Exception {
errorHandler(defaultErrorHandler().maximumRedeliveries(0));
from(kafkaEndpoint).routeId("Deep-email-notify-route")
.unmarshal().json(JsonLibrary.Jackson, LinkedHashMap.class)
.bean("velocityBean")
.process(bulkEmailProcessor)
.marshal().json(JsonLibrary.Jackson, Map.class)
.to("Log:?level=DEBUG");
from("direct:getTemplate").to("velocity://dummy");
}
}
#Named("velocityBean")
static class VelocityBean {
#Uri("direct:getTemplate")
private ProducerTemplate template;
public void process(Exchange exchange) {
// removed for brevity
}
}
@Produces
@Named("properties")
// "properties" component bean that Camel uses to lookup properties
PropertiesComponent properties(PropertiesParser parser) {
PropertiesComponent component = new PropertiesComponent();
// Use DeltaSpike as configuration source for Camel CDI
component.setPropertiesParser(parser);
return component;
}
// PropertiesParser bean that uses DeltaSpike to resolve properties
static class DeltaSpikeParser extends DefaultPropertiesParser {
@Override
public String parseProperty(String key, String value, Properties properties) {
return ConfigResolver.getPropertyValue(key);
}
}
}
This is the stack trace:
14:51:37,674 INFO [org.apache.camel.cdi.CdiCamelExtension] (MSC service thread 1-4) Camel CDI is starting Camel context [OrderNotifications]
14:51:37,680 INFO [org.apache.camel.impl.DefaultCamelContext] (MSC service thread 1-4) Apache Camel 2.18.1 (CamelContext: OrderNotifications) is starting
14:51:37,682 INFO [org.apache.camel.management.ManagedManagementStrategy] (MSC service thread 1-4) JMX is enabled
14:51:37,821 INFO [org.apache.camel.impl.DefaultRuntimeEndpointRegistry] (MSC service thread 1-4) Runtime endpoint registry is in extended mode gathering usage statistics of all incoming and outgoing endpoints (cache limit: 1000)
14:51:37,827 INFO [org.apache.camel.impl.DefaultCamelContext] (MSC service thread 1-4) Apache Camel 2.18.1 (CamelContext: OrderNotifications) is shutting down
14:51:37,849 INFO [org.apache.camel.impl.DefaultCamelContext] (MSC service thread 1-4) Apache Camel 2.18.1 (CamelContext: OrderNotifications) uptime 0.169 seconds
14:51:37,849 INFO [org.apache.camel.impl.DefaultCamelContext] (MSC service thread 1-4) Apache Camel 2.18.1 (CamelContext: OrderNotifications) is shutdown in 0.022 seconds
14:51:37,859 ERROR [org.jboss.msc.service.fail] (MSC service thread 1-4) MSC000001: Failed to start service jboss.deployment.unit."NotificationService.war".WeldStartService: org.jboss.msc.service.StartException in service jboss.deployment.unit."NotificationService.war".WeldStartService: Failed to start service
at org.jboss.msc.service.ServiceControllerImpl$StartTask.run(ServiceControllerImpl.java:1904) [jboss-msc-1.2.2.Final.jar:1.2.2.Final]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [rt.jar:1.8.0_51]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [rt.jar:1.8.0_51]
at java.lang.Thread.run(Thread.java:745) [rt.jar:1.8.0_51]
Caused by: org.jboss.weld.exceptions.DeploymentException: Exception List with 1 exceptions:
Exception 0 :
org.apache.camel.FailedToCreateRouteException: Failed to create route Deep-email-notify-route: Route(Deep-email-notify-route)[[From[no uri or ref supplied!... because of Either 'uri' or 'ref' must be specified on: org.apache.camel.impl.DefaultRouteContext@55edcc54
at org.apache.camel.model.RouteDefinition.addRoutes(RouteDefinition.java:201)
at org.apache.camel.impl.DefaultCamelContext.startRoute(DefaultCamelContext.java:1008)
at org.apache.camel.impl.DefaultCamelContext.startRouteDefinitions(DefaultCamelContext.java:3397)
at org.apache.camel.impl.DefaultCamelContext.doStartCamel(DefaultCamelContext.java:3128)
at org.apache.camel.impl.DefaultCamelContext.access$000(DefaultCamelContext.java:182)
at org.apache.camel.impl.DefaultCamelContext$2.call(DefaultCamelContext.java:2957)
at org.apache.camel.impl.DefaultCamelContext$2.call(DefaultCamelContext.java:2953)
at org.apache.camel.impl.DefaultCamelContext.doWithDefinedClassLoader(DefaultCamelContext.java:2976)
at org.apache.camel.impl.DefaultCamelContext.doStart(DefaultCamelContext.java:2953)
at org.apache.camel.support.ServiceSupport.start(ServiceSupport.java:61)
at org.apache.camel.impl.DefaultCamelContext.start(DefaultCamelContext.java:2920)
at org.apache.camel.impl.DefaultCamelContext$Proxy$_$$_WeldClientProxy.start(Unknown Source)
at org.apache.camel.cdi.CdiCamelExtension.afterDeploymentValidation(CdiCamelExtension.java:422)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.jboss.weld.injection.MethodInjectionPoint.invokeOnInstanceWithSpecialValue(MethodInjectionPoint.java:90)
at org.jboss.weld.event.ObserverMethodImpl.sendEvent(ObserverMethodImpl.java:271)
at org.jboss.weld.event.ExtensionObserverMethodImpl.sendEvent(ExtensionObserverMethodImpl.java:121)
at org.jboss.weld.event.ObserverMethodImpl.sendEvent(ObserverMethodImpl.java:258)
at org.jboss.weld.event.ObserverMethodImpl.notify(ObserverMethodImpl.java:237)
at org.jboss.weld.event.ObserverNotifier.notifyObserver(ObserverNotifier.java:174)
at org.jboss.weld.event.ObserverNotifier.notifyObservers(ObserverNotifier.java:133)
at org.jboss.weld.event.ObserverNotifier.fireEvent(ObserverNotifier.java:107)
at org.jboss.weld.bootstrap.events.AbstractContainerEvent.fire(AbstractContainerEvent.java:54)
at org.jboss.weld.bootstrap.events.AbstractDeploymentContainerEvent.fire(AbstractDeploymentContainerEvent.java:35)
at org.jboss.weld.bootstrap.events.AfterDeploymentValidationImpl.fire(AfterDeploymentValidationImpl.java:28)
at org.jboss.weld.bootstrap.WeldStartup.validateBeans(WeldStartup.java:439)
at org.jboss.weld.bootstrap.WeldBootstrap.validateBeans(WeldBootstrap.java:90)
at org.jboss.as.weld.WeldStartService.start(WeldStartService.java:93)
at org.jboss.msc.service.ServiceControllerImpl$StartTask.startService(ServiceControllerImpl.java:1948)
at org.jboss.msc.service.ServiceControllerImpl$StartTask.run(ServiceControllerImpl.java:1881)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.IllegalArgumentException: Either 'uri' or 'ref' must be specified on: org.apache.camel.impl.DefaultRouteContext@55edcc54
at org.apache.camel.impl.DefaultRouteContext.resolveEndpoint(DefaultRouteContext.java:136)
at org.apache.camel.model.FromDefinition.resolveEndpoint(FromDefinition.java:69)
at org.apache.camel.impl.DefaultRouteContext.getEndpoint(DefaultRouteContext.java:90)
at org.apache.camel.model.RouteDefinition.addRoutes(RouteDefinition.java:1051)
at org.apache.camel.model.RouteDefinition.addRoutes(RouteDefinition.java:196)
... 35 more
at org.jboss.weld.bootstrap.events.AbstractDeploymentContainerEvent.fire(AbstractDeploymentContainerEvent.java:37)
at org.jboss.weld.bootstrap.events.AfterDeploymentValidationImpl.fire(AfterDeploymentValidationImpl.java:28)
at org.jboss.weld.bootstrap.WeldStartup.validateBeans(WeldStartup.java:439)
at org.jboss.weld.bootstrap.WeldBootstrap.validateBeans(WeldBootstrap.java:90)
at org.jboss.as.weld.WeldStartService.start(WeldStartService.java:93)
at org.jboss.msc.service.ServiceControllerImpl$StartTask.startService(ServiceControllerImpl.java:1948) [jboss-msc-1.2.2.Final.jar:1.2.2.Final]
at org.jboss.msc.service.ServiceControllerImpl$StartTask.run(ServiceControllerImpl.java:1881) [jboss-msc-1.2.2.Final.jar:1.2.2.Final]
... 3 more
OOPS. Programmer error!
The main issue turned out to be the Endpoint injections.
Instead of
#Uri("kafka:...")
Endpoint fubar;
the injection should have been:
#EndpointInjection(uri = "kafka:...")
Endpoint fubar;
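Applied to the route above, the corrected injections would look roughly like this (a sketch; the URIs are the ones from the question):
// inside OrderEmailRoute, replacing the @Uri-annotated fields
@EndpointInject(uri = "kafka://{{deep.notify.kafka.server}}?topic={{deep.notify.order.topics}}&groupId={{deep.notify.order.groupId}}&" +
        "keyDeserializer=org.apache.kafka.common.serialization.StringDeserializer&" +
        "valueDeserializer=org.apache.kafka.common.serialization.StringDeserializer")
Endpoint kafkaEndpoint;

@EndpointInject(uri = "file:///home/whomer/tmp/Notify/deadMessages")
private Endpoint deadLetter;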

Spring Quartz Scheduler race condition

What I suspect the problem to be is SchedulerFactoryBean's setOverwriteExistingJobs not offering enough protection.
One node will be initializing the scheduler and will decide to replace the trigger (breakpoint: org.quartz.impl.jdbcjobstore.SimpleTriggerPersistenceDelegate#deleteExtendedTriggerProperties).
Right after it executes this method, the trigger will no longer be in the database, so when another node in the cluster tries to read it (org.quartz.impl.jdbcjobstore.JobStoreSupport#retrieveTrigger) it fails with the exception below. Because of this exception, the whole application fails to start (not just the scheduler).
Caused by: org.quartz.JobPersistenceException: Couldn't retrieve
trigger: No record found for selection of Trigger with key:
The logs can be found at https://github.com/apixandru/case-study/tree/master/spring-boot-quartz/logs
(The exception can be found on the Server-1 node after the 4th restart)
For the whole project that demonstrates this issue go to https://github.com/apixandru/case-study/tree/master/spring-boot-quartz
The way that we configure the scheduler is here
@Bean
JobDetailFactoryBean jobFactoryBean() {
JobDetailFactoryBean bean = new JobDetailFactoryBean();
bean.setDurability(true);
bean.setName("Sampler");
bean.setJobClass(SampleJob.class);
return bean;
}
@Bean
SimpleTriggerFactoryBean triggerFactoryBean(JobDetailFactoryBean jobFactoryBean) {
SimpleTriggerFactoryBean bean = new SimpleTriggerFactoryBean();
bean.setName("Sampler Trigger");
bean.setRepeatInterval(20_000);
bean.setJobDetail(jobFactoryBean.getObject());
return bean;
}
@Bean
SchedulerFactoryBean schedulerFactoryBean(SimpleTriggerFactoryBean triggerFactoryBean, DataSource dataSource, Dependency dependency) {
Properties props = new Properties();
props.put("org.quartz.scheduler.instanceId", "AUTO");
props.put("org.quartz.jobStore.isClustered", "true");
SchedulerFactoryBean bean = new SchedulerFactoryBean();
bean.setTriggers(triggerFactoryBean.getObject());
bean.setSchedulerName("Demo Scheduler");
bean.setSchedulerContextAsMap(Collections.singletonMap("dependency", dependency));
bean.setOverwriteExistingJobs(true);
bean.setDataSource(dataSource);
bean.setQuartzProperties(props);
return bean;
}
This happens a lot on our work servers but it's much harder to reproduce locally (possibly because the actual servers are dedicated and have a lot more power than my local machine?).
To reproduce the bug on any machine, start one server in debug mode with a breakpoint on SimpleTriggerPersistenceDelegate.deleteExtendedTriggerProperties; just after it executes, start the second server and you will get this exception.
Anyway, I managed to get this error locally as well after about 40 redeploys to my local clustered WebLogic server.
The problem is that by default no transaction manager is used, so no locking is used.
To solve the issue, you need to call the SchedulerFactoryBean's setTransactionManager method.
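A sketch of what that change could look like in the schedulerFactoryBean definition above (assuming a Spring-managed PlatformTransactionManager for the same DataSource, e.g. a DataSourceTransactionManager, is available for injection):
@Bean
SchedulerFactoryBean schedulerFactoryBean(SimpleTriggerFactoryBean triggerFactoryBean,
                                          DataSource dataSource,
                                          PlatformTransactionManager transactionManager,
                                          Dependency dependency) {
    SchedulerFactoryBean bean = new SchedulerFactoryBean();
    // ... same triggers, scheduler name, context map and Quartz properties as above ...
    bean.setDataSource(dataSource);
    // run the startup-time job/trigger (re)registration inside a transaction
    bean.setTransactionManager(transactionManager);
    return bean;
}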

Testing EJB with stub and OpenEJB framework

I am trying to test an EJB that has another EJB injected into it.
For testing purposes I want to use a stub for the injected EJB. I am using OpenEJB as the EJB container for the tests.
Here is the EJB:
@Stateless
@Local(IService.class)
public class Service implements IService {
@EJB
private IBean bean;
@Override
public String doService(String data) {
return bean.process(data);
}
}
The real injected EJB:
@Stateless
@Local(IBean.class)
public class Bean implements IBean {
private static Logger logger = Logger.getLogger(Bean.class);
@Override
public String process(String data) {
logger.info("Bean processing : " + data);
return "Bean processing : " + data;
}
}
The stub version of the EJB:
@Stateless
@Local(IBean.class)
public class BeanStub implements IBean {
private static Logger logger = Logger.getLogger(BeanStub.class);
@Override
public String process(String data) {
logger.info("Stub processing : " + data);
return "Stub processing : " + data;
}
}
And the JUnit test used:
public class ServiceTest {
private static Logger logger = Logger.getLogger(ServiceTest.class);
private static InitialContext context;
@BeforeClass
public static void setUpBeforeClass() throws Exception {
// openEJB
Properties p = new Properties();
p.put(Context.INITIAL_CONTEXT_FACTORY,"org.apache.openejb.client.LocalInitialContextFactory");
p.put("openejb.altdd.prefix", "stub"); // use specific ejb-jar
p.put("openejb.descriptors.output", "true");
context = new InitialContext(p);
}
@Test
public void testServiceStub() {
try {
IService service = (IService) context.lookup("ServiceStubLocal");
assertNotNull(service);
String msg = service.doService("service");
assertEquals("Stub processing : service", msg);
} catch (NamingException e) {
logger.error(e);
fail(e.getMessage());
}
}
}
I tried to override the use of the real EJB with the stub one, using a specific ejb-jar (I want to use "BeanStub" instead of the default "Bean" in my service):
<ejb-jar>
<enterprise-beans>
<session id="ServiceStub">
<ejb-name>ServiceStub</ejb-name>
<ejb-class>tests.Service</ejb-class>
<ejb-local-ref>
<ejb-ref-name>tests.Service/bean</ejb-ref-name>
<ejb-link>BeanStub</ejb-link>
</ejb-local-ref>
</session>
</enterprise-beans>
</ejb-jar>
Unfortunately I have a problem when the EJBs are deployed:
Apache OpenEJB 3.1.4 build: 20101112-03:32
http://openejb.apache.org/
17:14:29,225 INFO startup:70 - openejb.home = D:\Workspace_Java\tests\testejb
17:14:29,225 INFO startup:70 - openejb.base = D:\Workspace_Java\tests\testejb
17:14:29,350 INFO config:70 - Configuring Service(id=Default Security Service, type=SecurityService, provider-id=Default Security Service)
17:14:29,350 INFO config:70 - Configuring Service(id=Default Transaction Manager, type=TransactionManager, provider-id=Default Transaction Manager)
17:14:29,381 INFO config:70 - Found EjbModule in classpath: D:\Workspace_Java\tests\testejb\target\test-classes
17:14:29,412 INFO config:70 - Found EjbModule in classpath: D:\Workspace_Java\tests\testejb\target\classes
17:14:29,428 INFO config:70 - Beginning load: D:\Workspace_Java\tests\testejb\target\test-classes
17:14:29,428 INFO config:70 - AltDD ejb-jar.xml -> file:/D:/Workspace_Java/tests/testejb/target/test-classes/META-INF/stub.ejb-jar.xml
17:14:29,850 INFO config:70 - Beginning load: D:\Workspace_Java\tests\testejb\target\classes
17:14:29,850 INFO config:70 - AltDD ejb-jar.xml -> file:/D:/Workspace_Java/tests/testejb/target/classes/META-INF/stub.ejb-jar.xml
17:14:29,850 INFO config:70 - Configuring enterprise application: classpath.ear
17:14:29,912 INFO config:70 - Configuring Service(id=Default Stateless Container, type=Container, provider-id=Default Stateless Container)
17:14:29,912 INFO config:70 - Auto-creating a container for bean ServiceStub: Container(type=STATELESS, id=Default Stateless Container)
17:14:29,912 INFO options:70 - Using 'openejb.descriptors.output=true'
17:14:29,912 INFO options:70 - Using 'openejb.descriptors.output=true'
17:14:29,928 INFO config:70 - Dumping Generated ejb-jar.xml to: C:\TEMP\ejb-jar-6391test-classes.xml
17:14:29,959 INFO config:70 - Dumping Generated openejb-jar.xml to: C:\TEMP\openejb-jar-6392test-classes.xml
17:14:29,959 INFO options:70 - Using 'openejb.descriptors.output=true'
17:14:29,959 INFO config:70 - Dumping Generated ejb-jar.xml to: C:\TEMP\ejb-jar-6393classes.xml
17:14:29,975 INFO config:70 - Dumping Generated openejb-jar.xml to: C:\TEMP\openejb-jar-6394classes.xml
17:14:30,006 INFO config:70 - Enterprise application "classpath.ear" loaded.
17:14:30,084 INFO startup:70 - Assembling app: classpath.ear
17:14:30,131 INFO startup:70 - Jndi(name=ServiceStubLocal) --> Ejb(deployment-id=ServiceStub)
17:14:30,131 ERROR startup:46 - Jndi name could not be bound; it may be taken by another ejb. Jndi(name=openejb/Deployment/ServiceStub/tests.IService!Local)
17:14:30,131 INFO startup:70 - Undeploying app: classpath.ear
17:14:30,147 ERROR startup:50 - Application could not be deployed: classpath.ear
org.apache.openejb.OpenEJBException: Creating application failed: classpath.ear: Unable to bind business local interface for deployment ServiceStub
at org.apache.openejb.assembler.classic.Assembler.createApplication(Assembler.java:679)
at org.apache.openejb.assembler.classic.Assembler.createApplication(Assembler.java:450)
Is there something wrong with the approach, or with the way the ejb-jar is written?
I had similar problems and hurdles with OpenEJB. If you need stubbing and mocking for tests (who doesn't?), have a look at how I finally managed to handle it (with great help from David, the OpenEJB co-founder). In the latest version (3.1.4) OpenEJB works pretty much like Arquillian, allowing an inner-class test driver without ejb-jar.xml and without classpath scanning.
I've described my hurdles here: http://jakub.marchwicki.pl/posts/2011/07/01/testing-ejb-application-openejb-without-classpath-scanning/. Have a look, maybe that will make your testing easier.
Why don't you simply use a mocking framework like EasyMock or Mockito to test this? You wouldn't need any deployment descriptor, EJB container, JNDI lookup, etc. Just this kind of code:
@Test
public void testDoService() {
IBean mockBean = EasyMock.createMock(IBean.class);
EasyMock.expect(mockBean.process("data")).andReturn("processed : data");
EasyMock.replay(mockBean);
// assumes Service exposes a constructor (or setter) for injecting IBean
Service serviceToTest = new Service(mockBean);
assertEquals("processed : data", serviceToTest.doService("data"));
EasyMock.verify(mockBean);
}
And it would certainly run much faster, too.
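For reference, a roughly equivalent sketch with Mockito (like the EasyMock version, it assumes Service exposes a constructor or setter that takes the IBean collaborator):
import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import org.junit.Test;

public class ServiceMockitoTest {

    @Test
    public void testDoService() {
        IBean mockBean = mock(IBean.class);
        when(mockBean.process("data")).thenReturn("processed : data");

        // assumes a constructor (or setter) was added to Service for injection
        Service serviceToTest = new Service(mockBean);

        assertEquals("processed : data", serviceToTest.doService("data"));
        verify(mockBean).process("data");
    }
}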
