How to dynamically create multiple CronTriggerBean beans.
My system currently uses a property file to get the times for running the job.
For example:
SCHEDULE_TIME=09:30,10:55,17:35
I read these values and create a cron expression for each of them.
Now I want to create multiple CronTriggerBean beans from the cron expressions I have.
How can I do it?
<bean id="crontestJobTriggerTradeCheck"
class="frameworkx.springframework.scheduling.quartz.InitializingCronTrigger">
<property name="jobDetail">
<ref bean="tradeCheck" />
</property>
</bean>
Try creating a class such as InitializingCronTrigger that extends CronTriggerFactoryBean, and set the cron expression inside InitializingCronTrigger.
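A minimal sketch of such a class, assuming each bean is given a single "HH:mm" value from the property file (the setScheduleTime property name and the HH:mm-to-cron conversion are illustrative assumptions, not part of your setup):

import org.springframework.scheduling.quartz.CronTriggerFactoryBean;

public class InitializingCronTrigger extends CronTriggerFactoryBean {

    // Receives one "HH:mm" value (e.g. "09:30") taken from SCHEDULE_TIME
    // and turns it into a daily cron expression.
    public void setScheduleTime(String scheduleTime) {
        String[] parts = scheduleTime.split(":");
        int hour = Integer.parseInt(parts[0]);
        int minute = Integer.parseInt(parts[1]);
        setCronExpression("0 " + minute + " " + hour + " * * ?");
    }
}

You would still declare one such bean per schedule time, passing it the corresponding value from SCHEDULE_TIME; making it fully dynamic would mean registering the trigger beans programmatically, for example from a BeanFactoryPostProcessor.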
Currently, I am storing database details in a property file and then creating a datasource using:
<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource">
<property name="driverClassName">
<value>${driverClassName}</value>
</property>
<property name="url">
<value>${url}</value>
</property>
<property name="username">
<value>${username}</value>
</property>
<property name="password">
<value>${password}</value>
</property>
</bean>
My client asked us to add a config database; this DB will store all the i18n keys and the main database's details.
So I need to create two datasources: one for the config database and the other for the main database.
I can create the config datasource the same way as above. But how can I create the second datasource, since all its database details are stored in the config database?
Any pointers would be very helpful.
You might take a look at Java-based configuration for Spring. You can combine that with your current XML configuration using <context:component-scan base-package="..."/>.
The general approach would be to configure the first datasource (the one for configuration, as in your current setup) in XML. The XML should also refer to a 'configuration class'.
That is a special class, annotated with @Configuration, which gets the first datasource (or maybe some DAO) injected, and then defines a method like so:
@Bean
public DataSource secondDataSource() {
    // Construct the second datasource using the configuration
    // retrieved from the first datasource.
    return new BasicDataSource();
}
Note that you might want to add a qualifier to either (or even both) datasources so you can distinguish between the two when injecting them into other beans with @Inject or @Autowired.
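A slightly fuller sketch of such a configuration class, assuming the main database's settings live in a key/value table in the config database (the qualifier name, the table and column names, and the keys below are made up for illustration):

import javax.sql.DataSource;

import org.apache.commons.dbcp.BasicDataSource;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.core.JdbcTemplate;

@Configuration
public class MainDataSourceConfig {

    // The config datasource defined in XML (qualifier name is an assumption).
    @Autowired
    @Qualifier("configDataSource")
    private DataSource configDataSource;

    @Bean(destroyMethod = "close")
    public DataSource mainDataSource() {
        // Hypothetical CONFIG table holding key/value pairs for the main DB.
        JdbcTemplate jdbc = new JdbcTemplate(configDataSource);
        String sql = "select config_value from config where config_key = ?";

        BasicDataSource ds = new BasicDataSource();
        ds.setDriverClassName(jdbc.queryForObject(sql, String.class, "main.driverClassName"));
        ds.setUrl(jdbc.queryForObject(sql, String.class, "main.url"));
        ds.setUsername(jdbc.queryForObject(sql, String.class, "main.username"));
        ds.setPassword(jdbc.queryForObject(sql, String.class, "main.password"));
        return ds;
    }
}

With names or qualifiers on both datasources, other beans can then state explicitly which of the two they want injected.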
Does Spring Batch provide any dynamic/generic file writers? For example, if I have multiple requirements for generating a file and I have one view created for each purpose, all I want to do is specify the view name and have Spring Batch extract the data from the view to a flat file with column headings. This is as simple as exporting the result of a query to a file in a tool like dbviz or SQL Developer.
Recently I had 4 different requirements for extracting data to a file, and for each file I repeated the config and created a record bean plus a record mapper to map the bean to the columns of the view. Rather than repeating this entire process every time, I am looking to see whether Spring Batch or any other Java framework provides a generic approach to extracting a file based on a table and its columns, without writing result set mappers or dealing with field extractors.
I can build a generic Spring Batch file extractor, but wanted to check whether Spring Batch already does that, since it seems like a basic thing.
Also, if I create a generic extractor, the attributes of the bean will be somewhat dynamic, based on the column names of the view. In that scenario, if I have about 50 columns I don't want to specify each attribute in the fieldExtractor; I want all the attributes to be extracted. Currently I am specifying the attributes in the Spring config as given below, but I don't want to specify the attribute names. I just want to say "extract all attributes". Is that possible?
<bean id="CsvItemWriter" class="some.class.FileWriter" scope="step">
<property name="resource" value="file://#{jobParameters['file.name']}"/>
<property name="shouldDeleteIfExists" value="true" />
<property name="lineAggregator">
<bean class="org.springframework.batch.item.file.transform.DelimitedLineAggregator">
<property name="delimiter">
<util:constant static-field="org.springframework.batch.item.file.transform.DelimitedLineTokenizer.DELIMITER_TAB" />
</property>
<property name="fieldExtractor"> <bean class="org.springframework.batch.item.file.transform.BeanWrapperFieldExtractor">
<property name="names" value="name.last, name.first, name.middle, birthDate, gender, homePhone, cellPhone"/>
</bean>
</property>
</bean>
</property>
In case someone needs it: I found a similar question on SO that was difficult to find. I am using ColumnMapRowMapper as mentioned here.
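A minimal sketch of that approach, assuming a JdbcCursorItemReader with ColumnMapRowMapper (each row becomes a Map keyed by column name) and a PassThroughFieldExtractor on the writer side, which writes out all of the map's values without naming them; the view name and datasource are placeholders:

import java.util.Map;
import javax.sql.DataSource;

import org.springframework.batch.item.database.JdbcCursorItemReader;
import org.springframework.batch.item.file.FlatFileItemWriter;
import org.springframework.batch.item.file.transform.DelimitedLineAggregator;
import org.springframework.batch.item.file.transform.PassThroughFieldExtractor;
import org.springframework.core.io.FileSystemResource;
import org.springframework.jdbc.core.ColumnMapRowMapper;

public class GenericViewExtractor {

    // Reads every column of the given view without a dedicated record bean.
    public JdbcCursorItemReader<Map<String, Object>> reader(DataSource dataSource, String viewName) {
        JdbcCursorItemReader<Map<String, Object>> reader = new JdbcCursorItemReader<Map<String, Object>>();
        reader.setDataSource(dataSource);
        reader.setSql("select * from " + viewName);
        reader.setRowMapper(new ColumnMapRowMapper());
        return reader;
    }

    // Writes all values of each row map, tab-delimited, without listing field names.
    public FlatFileItemWriter<Map<String, Object>> writer(String fileName) {
        DelimitedLineAggregator<Map<String, Object>> aggregator =
                new DelimitedLineAggregator<Map<String, Object>>();
        aggregator.setDelimiter("\t");
        aggregator.setFieldExtractor(new PassThroughFieldExtractor<Map<String, Object>>());

        FlatFileItemWriter<Map<String, Object>> writer = new FlatFileItemWriter<Map<String, Object>>();
        writer.setResource(new FileSystemResource(fileName));
        writer.setLineAggregator(aggregator);
        return writer;
    }
}

Column headings could then be produced with a FlatFileHeaderCallback that reads the view's metadata, if needed.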
I have 3 projects:
framework
product-a
product-b
Each of the products depends on the framework, but they don't know each other.
I have 3 spring configuration files: one for each project. The configuration file of each product includes (with <import resource="classpath:/...) the configuration file of the framework.
In the framework there is a bean called "manager", which has a property List<AnInterface> theList. The "manager" has an addXxx(anImplementation) method, which adds elements to the list.
The framework and each of the products provide implementations of AnInterface, which have to be added to theList.
So in the end, when product-a is running, the manager contains the implementations from the framework and from product-a; likewise for product-b.
What is the best practice for performing this initialization with Spring?
The only solution I could think of is to create a dedicated class whose constructor takes the manager and a list of contributions and adds them to the manager, but it's ugly because 1/ it manipulates external objects in the constructor, and 2/ I have to create a dummy class just to initialize other classes... I don't like that.
I think that code should not know about Spring if it is not really needed. Therefore I would do all initialization in Spring config.
We can use bean definition inheritance and property overriding to do it.
Framework class
public class Manager {
    private List<AnInterface> theList;

    public void setTheList(List<AnInterface> theList) {
        this.theList = theList;
    }

    public void init() {
        // here we use the list initialized by the product
    }
}
Framework context
<bean id="manager"
init-method="init"
abstract="true"
class="Manager">
<property name="theList">
<list/> <!-- this will be overriden or extnded -->
</property>
</bean>
Product A context
<bean id="managerA"
parent="manager"
scope="singleton"
lazy-init="false">
<property name="theList">
<list>
<ref bean="impl1"/>
<ref bean="impl2"/>
</list>
</property>
</bean>
Watch out for parent and child properties in such a configuration. Not all of them are inherited from the parent. The Spring documentation specifies:
The remaining settings are always taken from the child definition: depends on, autowire mode, dependency check, singleton, scope, lazy init.
Moreover, there is also collection merging in Spring, so by specifying in the child bean
<list merge="true">
you can merge parent and child lists.
I have observed this pattern in a number of projects and some extendable Web frameworks based on Spring.
I have accepted Grzegorz's answer because it's a clean solution to my initial problem, but here, as an alternative answer, is a technical solution for contributing to a list property of an existing bean.
<bean class="org.springframework.beans.factory.config.MethodInvokingFactoryBean">
<property name="targetObject" ref="manager"/>
<property name="targetMethod"><value>addXxx</value></property>
<property name="arguments"><list value-type="com.xxx.AnInterface">
<value ref="impl1" />
<value ref="impl2" />
...
</list></property>
</bean>
I have the following definition:
<bean id="logger" factory-method="createLog" scope="prototype" class="com.test.beans.LogBean" ></bean>
<bean id="aone" class="com.test.beans.AggregationOne">
<property name="log" ref="logger"></property>
</bean>
<bean id="atwo" class="com.test.beans.AggregationTwo">
<property name="log" ref="logger"></property>
</bean>
Is it possible to recognize for which object (aone or atwo) bean 'logger' is being created?
Why I'm asking: in a legacy application I have one log instance for all classes. I want to change the level for some packages, but can't do that (except by using filters, which I don't want). For that purpose I want to use some Spring magic, if it exists for this case.
I don't think it can be done this way. What you could try is a BeanPostProcessor implementation that detects the common logger object in beans and replaces it with a specific one.
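A rough sketch of that idea, assuming the beans expose their logger through the setLog(...) setter implied by the XML above, and that LogBean has (or can be given) a per-class variant of its createLog factory method; both of those details are assumptions:

import org.springframework.beans.BeansException;
import org.springframework.beans.factory.config.BeanPostProcessor;

import com.test.beans.AggregationOne;
import com.test.beans.LogBean;

public class PerClassLoggerPostProcessor implements BeanPostProcessor {

    public Object postProcessBeforeInitialization(Object bean, String beanName) throws BeansException {
        // Swap the shared logger for one tied to the bean's own class,
        // so its level can be configured per package.
        if (bean instanceof AggregationOne) {
            // createLog(Class) is a hypothetical per-class overload of the existing createLog() factory method
            ((AggregationOne) bean).setLog(LogBean.createLog(AggregationOne.class));
        }
        return bean;
    }

    public Object postProcessAfterInitialization(Object bean, String beanName) throws BeansException {
        return bean;
    }
}

Declaring this class as a bean in the context is enough; Spring applies BeanPostProcessor implementations automatically.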
Currently I'm creating proxy classes from interfaces with Spring 3 XML config like this:
<bean id="abstractDaoTarget" class="mypackage.GenericDaoImpl" abstract="true" />
<bean id="abstractDao" class="org.springframework.aop.framework.ProxyFactoryBean" abstract="true" />
<bean id="personDao" parent="abstractDao">
<property name="proxyInterfaces">
<value>mypackage.CustomerDao</value>
</property>
<property name="target">
<bean parent="abstractDaoTarget">
</bean>
</property>
</bean>
Note that I have only one interface named PersonDao and NO implementation of this interface. The above XML snippet works fine; I can create an 'instance' of the interface.
My question is: how can I achieve this with pure Spring 3 annotations, without the above XML snippet?
Is it possible without XML?
Have a look at Spring Data JPA. Here's an introductory tutorial. They are doing pretty much exactly what you are.
Are you looking for a way to generate beans with a factory completely written in Java, without XML? Then use @Configuration to annotate the class and @Bean to annotate the method that creates the bean. See 3.11.1 Basic concepts: @Configuration and @Bean.
If this is not what you mean, then have a look at the code of Hades. That project does (I guess) the same thing as you: creating DAOs from interfaces.
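For the @Configuration route, here is a minimal sketch of how the XML above might look in Java config, using Spring's ProxyFactory directly (class and interface names are taken from the XML snippet; that GenericDaoImpl can be instantiated with a no-arg constructor is an assumption):

import org.springframework.aop.framework.ProxyFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

import mypackage.CustomerDao;
import mypackage.GenericDaoImpl;

@Configuration
public class DaoConfig {

    @Bean
    public CustomerDao personDao() {
        // Equivalent of the ProxyFactoryBean definition: proxy the interface,
        // backed by a GenericDaoImpl target.
        ProxyFactory factory = new ProxyFactory();
        factory.setInterfaces(CustomerDao.class);
        factory.setTarget(new GenericDaoImpl());
        return (CustomerDao) factory.getProxy();
    }
}

Spring Data JPA (and its predecessor Hades) essentially automates this interface-to-proxy step for repository interfaces, which is why it was suggested above.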