Spring - Switch SchedulerFactoryBean To Be Used - java

I'm using Spring's SchedulerFactoryBean to run some Quartz jobs within a Spring-based Java application. At present, this is a single-instance application in development, but as soon as we start horizontally scaling it we will want to use a JDBC-based JobStore for Quartz so that no more than one app instance runs a given job.
Right now, SchedulerFactoryBean is configured as follows:
<bean id="schedulerFactoryBean" class="org.springframework.scheduling.quartz.SchedulerFactoryBean" >
<property name="taskExecutor" ref="taskExecutor"/>
<property name="triggers">
<list>
<!-- a bunch of triggers here -->
</list>
</property>
<property name="applicationContextSchedulerContextKey">
<value>applicationContext</value>
</property>
</bean>
and with a JDBC-based JobStore it will look like this:
<bean id="schedulerFactoryBean" class="org.springframework.scheduling.quartz.SchedulerFactoryBean" >
<property name="dataSource" ref="mysqlJobDataSource"/>
<property name="taskExecutor" ref="taskExecutor"/>
<property name="triggers">
<list>
<!-- a bunch of triggers here -->
</list>
</property>
<property name="applicationContextSchedulerContextKey">
<value>applicationContext</value>
</property>
<property name="quartzProperties">
<props>
<prop key="org.quartz.jobStore.class">org.quartz.impl.jdbcjobstore.JobStoreTX</prop>
<prop key="org.quartz.jobStore.driverDelegateClass">org.quartz.impl.jdbcjobstore.StdJDBCDelegate</prop>
<!-- and a bunch of other quartz props -->
</props>
</property>
</bean>
Ideally, I'd like to continue using the default RAMJobStore version (the first one) for developers, but use the JDBC version for deployed environments. However, there doesn't seem to be a good way to switch between the two through something like a property, since the JDBC store involves a lot more configuration, and the mere presence of the dataSource property on SchedulerFactoryBean means it tries to use a JDBC-based job store.
Also, since SchedulerFactoryBean is an initializing bean whose initialization essentially starts running all of the jobs, I can't have both of those beans defined in a config file loaded into the Spring context either; that would mean parallel jobs running.
I've also read through this answer, but this situation differs in that I'm dealing with two InitializingBeans that should never be in the same context at the same time.
What would be the simplest way to configure switching between these two configurations of SchedulerFactoryBean?

From Spring 3.1 you can use Spring profiles; note that the profile attribute goes on a (nested) <beans> element, not on <bean> itself:
<beans profile="dev"> <bean name="schedulerFactoryBean" ... /> </beans>
<beans profile="prd"> <bean name="schedulerFactoryBean" ... /> </beans>
Then you can instruct the Spring container which profile to use; see How to set active spring 3.1 environment profile via a properites file and not via an env variable or system property and Spring autowire a stubbed service - duplicate bean.
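For example, the active profile can be selected per environment without touching the bean definitions, either as a JVM argument (-Dspring.profiles.active=prd) or, in a web app, via a context param in web.xml (a minimal sketch; the dev/prd names match the beans above):
<context-param>
<param-name>spring.profiles.active</param-name>
<param-value>dev</param-value>
</context-param>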
If you can't use 3.1 or profiles, the old-school way of solving such issues is to have two context files: schedulerContext-dev.xml and schedulerContext-prd.xml. Then you can import them selectively:
<import resource="schedulerContext-${some.property}"/>
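For example, starting the JVM with -Dsome.property=dev.xml (an illustrative property name) pulls in schedulerContext-dev.xml; placeholders in <import> resource locations are resolved against system and environment properties.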

A better option would be to use a Quartz properties file. As part of your release you can have a different file per environment; the Spring context stays the same for all environments, and only the configuration file changes. You can solve the file selection with Maven profiles.
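A minimal sketch of that idea, assuming per-environment files such as quartz-dev.properties and quartz-prd.properties on the classpath and an env property supplied by the Maven profile:
<bean id="schedulerFactoryBean" class="org.springframework.scheduling.quartz.SchedulerFactoryBean">
<!-- configLocation points Quartz at an environment-specific properties file -->
<property name="configLocation" value="classpath:quartz-${env}.properties"/>
<property name="taskExecutor" ref="taskExecutor"/>
<property name="triggers">
<list>
<!-- a bunch of triggers here -->
</list>
</property>
</bean>
Note that the JDBC variant still needs a data source, either via the bean's dataSource property or via org.quartz.dataSource.* entries in the properties file.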

Related

How to register Spring Cloud config server before any XML bean is declared?

I am trying to migrate a Spring app that uses PropertyPlaceholderConfigurer to resolve all the XML placeholders in its bean declarations over to Spring Cloud Config. I can see that the config server is contacted and responds with the respective data generated from a Git repository; however, at server startup, during BeanFactoryPostProcessor registration, the XML context fails to resolve the placeholders.
I assumed that by removing the bean definition:
<bean
class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
<property name="systemPropertiesModeName" value="SYSTEM_PROPERTIES_MODE_OVERRIDE" />
<property name="ignoreUnresolvablePlaceholders" value="false" />
<property name="properties">
<bean class="org.apache.commons.configuration.ConfigurationConverter"
factory-method="getProperties">
<constructor-arg>
<ref bean="domainConfiguration" />
</constructor-arg>
</bean>
</property>
</bean>
and adding the POM dependency for the config client (plus the respective environment variables), the placeholders should work, but they don't.
Can I manually give the config server a higher priority?
Or as an alternative, teach PropertyPlaceholderConfigurer to consume a config server?
If you are using spring-cloud-config, this should work out of the box. When Spring builds/starts the ApplicationContext, it first creates a bootstrap (parent) context before creating the main context. Fetching the properties from the config server happens in that bootstrap phase, so the beans created in the normal context can already see those properties.
Check out the Client Side Usage part of the documentation for an example, and note the usage of the bootstrap.properties file.
If you don't have spring-boot (it should work without it as well, but the docs are spring-boot centric), check out this repo or this GitHub issue; you will need a ConfigServicePropertySourceLocator.
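A minimal bootstrap.properties sketch (the application name and the config server URI are assumptions; adjust them to your setup):
spring.application.name=myapp
spring.cloud.config.uri=http://localhost:8888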

Using batch mode in Quartz

With the intent of improving Quartz performance, I want to use batching as suggested in Performance Tuning on Quartz Scheduler.
We create our Quartz scheduler, integrated with Spring, as below:
<bean class="org.springframework.scheduling.quartz.SchedulerFactoryBean">
<!-- Quartz requires a separate 'quartz.properties' file -->
<property name="configLocation" value="classpath:/quartz.properties"/>
<!-- Naturally, Quartz with the DB requires references
to the data source and transaction manager beans -->
<property name="dataSource" ref="quartzDataSource"/>
<!--<property name="transactionManager" ref="transactionManager"/>-->
<!-- reference to our 'autowiring job factory bean', defined above: -->
<property name="jobFactory" ref="quartzJobFactory"/>
<!-- Boolean controlling whether you want to override
the job definitions in the DB on the app start up.
We'll talk about it more in the next section. -->
<property name="overwriteExistingJobs" value="true"/>
<!-- I will not explain the next three properties, just use it as shown: -->
<property name="autoStartup" value="${isQuartzEnabled}" />
<property name="schedulerName" value="quartzScheduler"/>
<property name="applicationContextSchedulerContextKey" value="applicationContext"/>
<!-- Controls whether to wait for jobs completion on app shutdown, we use 'true' -->
<property name="waitForJobsToCompleteOnShutdown"
value="true"/>
<!-- Tell the Quartz scheduler about the triggers.
We have implemented the 'quartzTriggers' bean in the 'Jobs and triggers' section.
No need to pass job definitions, since triggers created via Spring know their jobs.
Later we'll see a case when we'll have to disable this and pass the jobs explicitly.-->
<property name="triggers" ref="quartzTriggers"/>
</bean>
How do I specify the maxBatchSize and batchTimeWindow of createScheduler in DirectSchedulerFactory?
I found that we can achieve this by configuring the following property in the quartz.properties file:
org.quartz.scheduler.batchTriggerAcquisitionMaxCount
Reference : Configure Main Scheduler Settings
org.quartz.scheduler.batchTriggerAcquisitionFireAheadTimeWindow will set the batchTimeWindow.
org.quartz.scheduler.batchTriggerAcquisitionMaxCount will set the maxBatchSize property.
Source : https://github.com/quartz-scheduler/quartz/blob/master/quartz-core/src/main/java/org/quartz/impl/StdSchedulerFactory.java
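For example, the corresponding quartz.properties entries could look like this (the values are illustrative, not recommendations):
# fire-ahead time window is in milliseconds
org.quartz.scheduler.batchTriggerAcquisitionMaxCount = 10
org.quartz.scheduler.batchTriggerAcquisitionFireAheadTimeWindow = 1000
# with a max count > 1 and a JDBC job store, acquiring triggers within a lock is generally advised
org.quartz.jobStore.acquireTriggersWithinLock = true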

Setting annotation based SpyMemcached configuration in spring mvc

I am new to the Spring framework. I want to use spymemcached in my application, but I can't find the proper annotation-based configuration to set up the bean. Currently I am using a static Memcached object in my controller, which looks like really bad programming. Please provide a simple way to wire memcached into the Spring configuration, just with the default memcached address "127.0.0.1:11211". Thank you.
Edit:
How do I convert this XML configuration into proper annotation-based config, and what do I autowire in the controller?
<bean name="defaultMemcachedClient" class="com.google.code.ssm.CacheFactory">
<property name="cacheClientFactory">
<bean name="cacheClientFactory" class="com.google.code.ssm.providers.spymemcached.Mem
</property>
<property name="addressProvider">
<bean class="com.google.code.ssm.config.DefaultAddressProvider">
<property name="address" value="127.0.0.1:11211" />
</bean>
</property>
<property name="configuration">
<bean class="com.google.code.ssm.providers.CacheConfiguration">
<property name="consistentHashing" value="true" />
</bean>
</property>
</bean>
Take a look at Simple Spring Memcached (SSM) library.
It provides integration to memcached (via spymemcached or xmemcached client) using:
Spring Cache annotations (@Cacheable)
custom annotations (like @ReadThroughSingleCache).

Seam/Hibernate: liquibase before JPA startup

I have a Java EE web application (hibernate3, Seam) that I'm running in a WebLogic container.
I want to introduce Liquibase for schema migrations.
Currently we use
<property name="hibernate.hbm2ddl.auto" value="update"/>
which we want to drop because it can be dangerous.
I want the migration to automatically happen at deployments, so I'm using the servlet listener integration.
In web.xml, the first listener is:
<listener>
<listener-class>liquibase.integration.servlet.LiquibaseServletListener</listener-class>
</listener>
Sadly, this listener comes into play after the Hibernate initialization and it throws missing table errors (because the schema is empty).
I've been googling like a boss for hours and I'm a bit confused now.
Thanks in advance
UPDATE
If I set <property name="hibernate.hbm2ddl.auto" value="none" />, Liquibase finishes its job successfully and the app starts up as expected. If I set validate, it seems like the Hibernate schema validation takes place before Liquibase and it complains about the missing tables.
UPDATE
It seems like Seam initializes Hibernate, but the Liquibase listener is listed before SeamListener, so I have no clue how to enable schema validation and Liquibase at the same time...
My understanding is that the LiquibaseServletListener requires the path to the change log file, which is passed using the liquibase.changelog context param. So you already have a change log generated, or am I missing something here?
You can take a look at the liquibase hibernate integration library provided by Liquibase.
This library works with both the classic hibernate configuration (via .cfg and .xml files) as well as JPA configuration via persistence.xml.
AFAIK, generating the change log and running the change log are two separate processes. The Liquibase Hibernate integration library helps in generating the change log from the diff between the current state of the entities in the persistence unit and the current database state.
How to determine the order of listeners in web.xml
You should place:
<listener>
<listener-class>liquibase.integration.servlet.LiquibaseServletListener</listener-class>
</listener>
before the ORM or other framework-related listeners.
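For example, in web.xml the Liquibase listener and its context params would sit above the Seam listener (the change log path and the JNDI datasource name below are assumptions; adjust to your project):
<context-param>
<param-name>liquibase.changelog</param-name>
<param-value>changelog/master.xml</param-value>
</context-param>
<context-param>
<param-name>liquibase.datasource</param-name>
<param-value>java:comp/env/jdbc/myDataSource</param-value>
</context-param>
<listener>
<listener-class>liquibase.integration.servlet.LiquibaseServletListener</listener-class>
</listener>
<listener>
<listener-class>org.jboss.seam.servlet.SeamListener</listener-class>
</listener>
Servlet listeners are invoked in the order they are declared, so Liquibase runs its migrations before Seam bootstraps Hibernate.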
I use Spring-based Liquibase activation to reduce duplication of DB authentication data, by reusing the already provided dataSource bean:
<bean id="liquibase" class="liquibase.integration.spring.SpringLiquibase">
<property name="dataSource" ref="dataSource" />
<property name="changeLog" value="classpath:sql/master.sql" />
<property name="defaultSchema" value="PRODUCT" />
</bean>
To enforce the ordering, use the depends-on attribute:
<bean id="entityManagerFactory"
class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean"
depends-on="liquibase">
<property name="dataSource" ref="dataSource" />
<property name="jpaVendorAdapter">
<bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter" />
</property>
<property name="packagesToScan" value="product.domain" />
<property name="jpaProperties">
<props>
<prop key="hibernate.dialect">org.hibernate.dialect.MySQLDialect</prop>
<prop key="hibernate.hbm2ddl.auto">validate</prop>
</props>
</property>
</bean>

How do I read JVM arguments in the Spring applicationContext.xml

I have a JSF web application with Spring and I am trying to figure out a way to reference the JVM arguments from the applicationContext.xml. I am starting the JVM with an environment argument (-Denv=development, for example). I have found and tried a few different approaches including:
<bean id="myBean" class="com.foo.bar.myClass">
<property name="environment">
<value>${environment}</value>
</property>
</bean>
But when the setter method is invoked in MyClass, the string "${environment}" is passed instead of "development". I have a workaround in place using System.getProperty(), but it would be nicer, and cleaner, to be able to set these values via Spring. Is there any way to do this?
Edit:
What I should have mentioned before is that I am loading properties from my database using a JDBC connection. This seems to add complexity, because when I add a property placeholder to my configuration, the properties loaded from the database are overridden by the property placeholder. I'm not sure if it's order-dependent or something. It's like I can do one or the other, but not both.
Edit:
I'm currently loading the properties using the following configuration:
<bean id="myDataSource" class="org.springframework.jndi.JndiObjectFactoryBean">
<property name="jndiName" value="jdbc.mydb.myschema"/>
</bean>
<bean id="props" class="com.foo.bar.JdbcPropertiesFactoryBean">
<property name="jdbcTemplate">
<bean class="org.springframework.jdbc.core.JdbcTemplate">
<constructor-arg ref="myDataSource" />
</bean>
</property>
</bean>
<context:property-placeholder properties-ref="props" />
You can use Spring EL expressions; then it is #{systemProperties.test} for -Dtest="hallo welt".
In your case it should be:
<bean id="myBean" class="com.foo.bar.myClass">
<property name="environment">
<value>#{systemProperties.environment}</value>
</property>
</bean>
The # instead of $ is no mistake!
$ would refer to placeholders, while # refers to beans, and systemProperties is a bean.
Maybe it is only a spelling error, but maybe it is the cause of your problem: in your example command line you name the variable env (-Denv=development), but in the Spring configuration you name it environment. Both must match, of course!
If you register a PropertyPlaceholderConfigurer it will use system properties as a fallback.
For example, add
<context:property-placeholder/>
to your configuration. Then you can use ${environment} in either your XML configuration or in #Value annotations.
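In the asker's setup, where the properties come from the database via properties-ref, one way to keep both sources is to let system properties act as a fallback. A sketch, assuming the props bean from the question:
<context:property-placeholder properties-ref="props" system-properties-mode="FALLBACK"/>
With FALLBACK, a placeholder such as ${environment} is resolved from the database-backed properties first, and from JVM arguments (-Denvironment=development) only when the key is missing there.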
You can load a property file based on system property env like this:
<bean id="applicationProperties"
class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
<property name="ignoreResourceNotFound" value="false" />
<property name="ignoreUnresolvablePlaceholders" value="true" />
<property name="searchSystemEnvironment" value="false" />
<property name="locations">
<list>
<value>classpath:myapp-${env:prod}.properties</value>
</list>
</property>
</bean>
If env is not set, it defaults to prod; otherwise the development and testing teams can have their own flavor of the app by setting -Denv=development or -Denv=testing accordingly.
Use #{systemProperties['env']}. Basically, pass the property name used on the Java command line as -DpropertyName=value. In this case it was -Denv=development, so env is used.
Interestingly, Spring has evolved to handled this need more gracefully with PropertySources:
http://spring.io/blog/2011/02/15/spring-3-1-m1-unified-property-management/
With a few configurations, and perhaps a custom ApplicationContextInitializer if you are working on a web app, you can have the property placeholder handle System, Environment, and custom properties. Spring provides PropertySourcesPlaceholderConfigurer, which is used when you have <context:property-placeholder/> in your Spring config. That one will look for properties in your properties files, then System, and then finally Environment.
Spring 3.0.7
<context:property-placeholder location="classpath:${env:config-prd.properties}" />
And at runtime set:
-Denv=config-dev.properties
If "env" is not set, the default "config-prd.properties" will be used.
