I am writing data retrieved from the database to a .txt file in a particular format, and I need to pad a particular field with leading zeros.
For example: if the value of a column retrieved from the database is 123, I need to make it 5 digits, preceded by 2 zeros, i.e. "00123".
For a number like 2, it should be "00002".
But instead of 0s I actually need spaces, and I am unable to figure it out.
The XML configuration I use for it is:
<bean id="neiertzItemWriter" class="org.springframework.batch.item.file.FlatFileItemWriter">
<property name="resource" value="${renewalNeiertzData}" />
<property name="lineAggregator">
<bean
class="org.springframework.batch.item.file.transform.FormatterLineAggregator">
<property name="fieldExtractor">
<bean
class="com.accord.batch.neiertz.tasklet.FormattingBeanWrapperFieldExtractor">
<property name="names"
value="contractNumber,engagementNumber,creationDate,commercantNumber,productCode,externalReference" />
<property name="dateFormatting">
<map key-type="java.lang.String" value-type="java.lang.String">
<entry key="creationDate" value="yyyyMMdd" />
</map>
</property>
<property name="numberFormatting">
<map key-type="java.lang.String" value-type="java.lang.String">
<entry key="contractNumber" value="%015d" />
<entry key="engagementNumber" value="%011d" />
<entry key="commercantNumber" value="%09d" />
<entry key="productCode" value="%05d" />
<entry key="externalReference" value="%030d" />
</map>
</property>
</bean>
</property>
<property name="format" value="%-15s%-11s%-8s%-9s%-5s%-30s" />
</bean>
</property>
</bean>
How do I change this to pad with spaces instead of zeros? I don't understand the equivalent format specifier for spaces in Java, nor how to put it in the XML.
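For reference, FormatterLineAggregator delegates to java.util.Formatter (the same syntax as String.format), so the two padding styles can be compared directly. In the format string, the `0` flag selects zero padding; dropping it pads with spaces instead, so in the numberFormatting map a `%015d` would presumably become `%15d` for space padding. A minimal sketch:

```java
public class PaddingDemo {
    public static void main(String[] args) {
        // "0" flag before the width pads with zeros:
        System.out.println(String.format("%05d", 123));   // "00123"
        // Without the "0" flag the padding is spaces (right-aligned):
        System.out.println(String.format("%5d", 123));    // "  123"
        // "-" flag left-aligns, padding with spaces on the right:
        System.out.println(String.format("%-5d", 123));   // "123  "
    }
}
```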
I am developing a Spring batch to get data from a table and into a CSV file. This is the table I am extracting data from:
CREATE TABLE TMP_SYNCHRONIZED_RSLT
(SERIAL_QA NUMBER(20) UNIQUE
, NB_RECENT VARCHAR2(10)
, NB_ARTICLE VARCHAR2(2)
);
I have my reader as follows:
<bean id="serialNumbersReader" class="org.springframework.batch.item.database.JdbcCursorItemReader" scope="step">
<property name="fetchSize" value="${batch.job.fetch.interval}" />
<property name="dataSource" ref="dataSource" />
<property name="rowMapper">
<bean class="org.springframework.jdbc.core.BeanPropertyRowMapper">
<property name="mappedClass" value="ca.org.serialNumbersGenerationBatch.batch.synchronization.dto.ExtractSerialNumbersDto" />
</bean>
</property>
<property name="sql">
<value>
<![CDATA[
SELECT SERIAL_QA, NB_RECENT, NB_ARTICLE from TMP_SYNCHRONIZED_RSLT
]]>
</value>
</property>
</bean>
And my writer:
<bean id="csvWriteSerialNumbers"
class="org.springframework.batch.item.file.FlatFileItemWriter">
<property name="lineAggregator">
<bean class="org.springframework.batch.item.file.transform.DelimitedLineAggregator">
<property name="delimiter" value=";" />
<property name="fieldExtractor">
<bean class="org.springframework.batch.item.file.transform.BeanWrapperFieldExtractor">
<property name="names" value="serialQa, nbRecent, nbArticle" />
</bean>
</property>
</bean>
</property>
<property name="encoding" value="${csv.encoding}" />
<property name="headerCallback">
<bean class="fr.canalplus.cgaweb.batch.common.writer.StringHeaderFooterCallback">
<property name="header" value="SERIAL_QA;NB_RECENT;NB_ARTICLE" />
</bean>
</property>
<property name="resource" value="file:${tmp.dir}/serialNumbers.csv" />
</bean>
My csv.encoding is ISO-8859-1, and I declared all of my DTO attributes as String.
This generates weird numbers like 2,49254E+12 instead of 24925418071 in my CSV under SERIAL_QA. Any idea how to resolve this without changing the column type?
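For what it's worth, here is where scientific notation can sneak in on the Java side (a sketch; whether the culprit is the DTO mapping or the spreadsheet used to open the file has to be checked against the raw file): a value kept as a long or String retains every digit, but once it passes through a double, the default formatting of large values switches to scientific notation.

```java
public class SerialFormatDemo {
    public static void main(String[] args) {
        long serial = 24925418071L;
        // As a long (or String) the value keeps every digit:
        System.out.println(String.valueOf(serial));          // "24925418071"
        // Routed through a double, default formatting switches
        // to scientific notation for values >= 10^7:
        System.out.println(Double.toString((double) serial)); // "2.4925418071E10"
    }
}
```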
I read a CSV file as input using Spring Batch, and I have 2 CSV files as output.
The input file contains about 100 lines, with the columns id, typeProduct and price.
I have just 2 types of product.
I iterate through all these lines and write two output files: for each file, a single line containing the product type and the sum of the prices of all the products that have that type.
So before writing the output files, I want to collect all the lines in a list so I can apply conditions and add a new attribute to my object, for example result: if the sum > 5000 it takes the value "good", else "not good". That result should then appear on the output line in each file.
Here is My product
public class Product {
    private Long idt;
    private String typeProduct;
    private Double price;
    private String result;
    // getters and setters omitted (required by BeanWrapperFieldSetMapper/Extractor)
}
and here is the definition of my job
<batch:job id="exampleMultiWritersJob">
<batch:step id="stepMultiWriters">
<batch:tasklet transaction-manager="txManager">
<batch:chunk reader="exampleFileSourceReader" writer="exampleMultiWriters" commit-interval="10">
<batch:streams>
<batch:stream ref="cnraCosWriter" />
<batch:stream ref="cnraCopWriter" />
<batch:stream ref="rcarCosWriter" />
<batch:stream ref="rcarCopWriter" />
</batch:streams>
</batch:chunk>
</batch:tasklet>
</batch:step>
</batch:job>
<bean id="exampleFileSourceReader" class="org.springframework.batch.item.file.FlatFileItemReader" scope="step">
<property name="resource" value="file:#{jobParameters['file']}" />
<property name="lineMapper">
<bean class="org.springframework.batch.item.file.mapping.DefaultLineMapper">
<!-- split it -->
<property name="lineTokenizer">
<bean class="org.springframework.batch.item.file.transform.DelimitedLineTokenizer">
<!-- this is missing -->
<property name="delimiter" value=";"/>
<property name="names" value="idt,productType,price" />
</bean>
</property>
<property name="fieldSetMapper">
<!-- map to an object -->
<bean class="org.springframework.batch.item.file.mapping.BeanWrapperFieldSetMapper">
<property name="prototypeBeanName" value="exampleFileMapper" />
</bean>
</property>
</bean>
</property>
</bean>
<bean id="exampleFileMapper" class="ma.controle.gestion.modele.Product" scope="prototype"/>
And here is the classifier method:
public class ExampleWriterRouteImpl {
    @Classifier
    public String classify(Product batch) {
        if (batch.getTypeProduct().equals("Telephone"))
            return "tel";
        else if (batch.getTypeProduct().equals("PC"))
            return "pc";
        return null;
    }
}
<bean id="classifier" class="org.springframework.batch.classify.BackToBackPatternClassifier">
<property name="routerDelegate">
<bean class="ma.controle.gestion.springbatch.ExampleWriterRouteImpl" />
</property>
<property name="matcherMap">
<map>
<entry key="tel" value-ref="telWriter" />
<entry key="pc" value-ref="pcWriter" />
</map>
</property>
</bean>
<bean id="telWriter" class="org.springframework.batch.item.file.FlatFileItemWriter">
<!-- write to this csv file -->
<property name="resource" value="file:C:/output/tel.csv" />
<property name="shouldDeleteIfExists" value="true" />
<property name="shouldDeleteIfEmpty" value="true" />
<property name="appendAllowed" value="true" />
<property name="lineAggregator">
<bean class="org.springframework.batch.item.file.transform.DelimitedLineAggregator">
<property name="delimiter" value=";" />
<property name="fieldExtractor">
<bean class="org.springframework.batch.item.file.transform.BeanWrapperFieldExtractor">
<property name="names" value="productType,price,result" /> <!-- price holds the sum over all products of this type -->
</bean>
</property>
</bean>
</property>
</bean>
<bean id="pcWriter" class="org.springframework.batch.item.file.FlatFileItemWriter">
<!-- write to this csv file -->
<property name="resource" value="file:C:/output/pc.csv" />
<property name="shouldDeleteIfExists" value="true" />
<property name="shouldDeleteIfEmpty" value="true" />
<property name="appendAllowed" value="true" />
<property name="lineAggregator">
<bean class="org.springframework.batch.item.file.transform.DelimitedLineAggregator">
<property name="delimiter" value=";" />
<property name="fieldExtractor">
<bean class="org.springframework.batch.item.file.transform.BeanWrapperFieldExtractor">
<property name="names" value="productType,price,result" /> <!-- price holds the sum over all products of this type -->
</bean>
</property>
</bean>
</property>
</bean>
So I need to sum the prices of all the products of the same type and produce only one output line per file, containing the product type, the sum of the prices, and the result.
I don't know how to collect the list of these objects and emit only one line at the end.
Thanks.
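Leaving the Spring wiring aside, the aggregation itself can be sketched in plain Java (the field names and the 5000 threshold are taken from the question; the sample rows are made up):

```java
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class ProductSumDemo {

    // Threshold from the question: sum > 5000 is "good", else "not good"
    static String grade(double sum) {
        return sum > 5000 ? "good" : "not good";
    }

    public static void main(String[] args) {
        // Sample rows standing in for the parsed CSV lines: (typeProduct, price)
        List<String[]> rows = Arrays.asList(
                new String[] { "Telephone", "3000" },
                new String[] { "PC", "4000" },
                new String[] { "Telephone", "2500" });

        // Aggregate the prices per product type before writing
        Map<String, Double> sums = new LinkedHashMap<>();
        for (String[] r : rows) {
            sums.merge(r[0], Double.parseDouble(r[1]), Double::sum);
        }

        // One output line per type: typeProduct;sum;result
        for (Map.Entry<String, Double> e : sums.entrySet()) {
            System.out.println(e.getKey() + ";" + e.getValue() + ";" + grade(e.getValue()));
        }
    }
}
```

In a real job this would sit in an ItemProcessor or a step listener that accumulates the sums before a second step writes the two one-line files.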
I used Spring Batch to read from a database and write an XML file, but I can't find a configuration option to produce a file with one XML record per line. I need it this way:
<xmlRecord><name>RecordOne</name></xmlRecord>
<xmlRecord><name>RecordTwo</name></xmlRecord>
each record on its own line,
but I can only create it this way:
<xmlRecord><name>RecordOne</name></xmlRecord><xmlRecord><name>RecordTwo</name></xmlRecord>
This is my configuration:
<bean id="itemWriter" class="org.springframework.batch.item.xml.StaxEventItemWriter" scope="step">
<property name="resource"
value="file:/var/opt/result.tmp" />
<property name="marshaller" ref="userUnmarshaller" />
<property name="overwriteOutput" value="true" />
<property name="rootTagName" value="!-- --"/>
</bean>
Marshal bean configuration:
<bean id="userUnmarshaller" class="org.springframework.oxm.jaxb.Jaxb2Marshaller">
<property name="classesToBeBound">
<list>
<value>my.jaxb.data.TCRMService</value>
</list>
</property>
<property name="marshallerProperties">
<map>
<entry>
<key><util:constant static-field="javax.xml.bind.helpers.AbstractMarshallerImpl.JAXB_SCHEMA_LOCATION"/></key>
<value>http://www.ibm.com/mdm/schema MDMDomains.xsd</value>
</entry>
</map>
</property>
</bean>
Can someone help me or provide some configuration to solve my problem?
Use the marshaller configuration below:
<bean id="userUnmarshaller" class="org.springframework.oxm.jaxb.Jaxb2Marshaller">
<property name="classesToBeBound">
<list>
<value>your_class</value>
</list>
</property>
<property name="marshallerProperties">
<map>
<entry>
<key>
<util:constant static-field="javax.xml.bind.Marshaller.JAXB_FORMATTED_OUTPUT" />
</key>
<value type="java.lang.Boolean">true</value>
</entry>
</map>
</property>
</bean>
You can also do something like this:
<property name="marshallerProperties">
<map>
<entry key="jaxb.formatted.output">
<value type="boolean">true</value>
</entry>
</map>
</property>
I'm working on an app that extracts records from an Oracle database, which are then exported as one single tab-delimited file.
However, when I attempt to read from the DB using JdbcPagingItemReader and write to a file, I only get the number of records specified by pageSize. So if the pageSize is 10, I get a file with 10 lines and the rest of the records seem to be ignored. So far I haven't been able to find out what is really going on, and any help would be most welcome.
Here is the JdbcPagingItemReader config:
<bean id="databaseItemReader"
class="org.springframework.batch.item.database.JdbcPagingItemReader" >
<property name="dataSource" ref="dataSourceTest" />
<property name="queryProvider">
<bean
class="org.springframework.batch.item.database.support.SqlPagingQueryProviderFactoryBean">
<property name="dataSource" ref="dataSourceTest" />
<property name="selectClause" value="SELECT *" />
<property name="fromClause" value="FROM *****" />
<property name="whereClause" value="where snapshot_id=:name" />
<property name="sortKey" value="snapshot_id" />
</bean>
</property>
<property name="parameterValues">
<map>
<entry key="name" value="18596" />
</map>
</property>
<property name="pageSize" value="100000" />
<property name="rowMapper">
<bean class="com.mkyong.ViewRowMapper" />
</property>
</bean>
<bean id="itemWriter" class="org.springframework.batch.item.file.FlatFileItemWriter">
<!-- write to this csv file -->
<property name="resource" value="file:cvs/report.csv" />
<property name="shouldDeleteIfExists" value="true" />
<property name="lineAggregator">
<bean
class="org.springframework.batch.item.file.transform.DelimitedLineAggregator">
<property name="delimiter" value=";" />
<property name="fieldExtractor">
<bean class="org.springframework.batch.item.file.transform.BeanWrapperFieldExtractor">
<property name="names" value="ID" />
</bean>
</property>
</bean>
</property>
</bean>
<job id="testJob" xmlns="http://www.springframework.org/schema/batch">
<step id="step1">
<tasklet>
<chunk reader="databaseItemReader" writer="itemWriter" commit-interval="1" />
</tasklet>
</step>
</job>
thanks
It was the scope="step" that was missing; it should be:
<bean id="databaseItemReader"
class="org.springframework.batch.item.database.JdbcPagingItemReader" scope="step">
Your settings seem incorrect: the whereClause column and the sort key cannot be the same, because pageSize works hand in hand with your sorting column.
Check what your data in the corresponding table looks like.
With your configuration, Spring Batch will build and execute queries along the following lines.
The first query, with pageSize = 10, looks like:
SELECT * FROM tableName WHERE snapshot_id = 18596 ORDER BY snapshot_id FETCH FIRST 10 ROWS ONLY
The second and remaining queries depend on your sort key, restarting after the last sort-key value read:
SELECT * FROM tableName WHERE snapshot_id = 18596 AND snapshot_id > 18596 ORDER BY snapshot_id FETCH FIRST 10 ROWS ONLY
and so on. Try running that query in the database; doesn't it look weird? :-) Since the where clause pins snapshot_id to 18596, every page after the first returns nothing.
If you don't need the where clause, remove it.
And if possible keep pageSize and commit-interval the same, because that's how you decide to process and persist. But of course that depends on your design, so you decide.
Adding @StepScope made my item reader take off with paging capability.
@Bean
@StepScope
ItemReader<Account> itemReader(@Value("#{jobParameters[id]}") String id) {
    JdbcPagingItemReader<Account> databaseReader = new JdbcPagingItemReader<>();
    databaseReader.setDataSource(dataSource);
    databaseReader.setPageSize(100);
    databaseReader.setFetchSize(100);
    PagingQueryProvider queryProvider = createQueryProvider(id);
    databaseReader.setQueryProvider(queryProvider);
    databaseReader.setRowMapper(new BeanPropertyRowMapper<>(Account.class));
    return databaseReader;
}
Here a property sets durability=true:
<bean name="complexJobDetail" class="org.springframework.scheduling.quartz.JobDetailFactoryBean">
<property name="jobClass" value="com.websystique.spring.quartz.ScheduledJob" />
<property name="jobDataMap">
<map>
<entry key="anotherBean" value-ref="anotherBean1" />
<entry key="myBean" value-ref="myBean" />
</map>
</property>
<property name="durability" value="true" />
</bean>
Could you please explain what the use of durability=true is?
From here:
Specify the job's durability, i.e. whether it should remain stored in the job store even if no triggers point to it anymore.
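The same setting is expressed programmatically through Quartz's builder API; a sketch of the equivalent of the XML bean above (the job class is the one named in that bean):

```java
import org.quartz.JobBuilder;
import org.quartz.JobDetail;

public class DurableJobConfig {

    public static JobDetail complexJobDetail() {
        return JobBuilder.newJob(com.websystique.spring.quartz.ScheduledJob.class)
                .withIdentity("complexJobDetail")
                // Equivalent of durability=true: keep the JobDetail in the
                // job store even when no trigger references it anymore
                .storeDurably(true)
                .build();
    }
}
```

Without durability, Quartz deletes an orphaned JobDetail as soon as its last trigger fires, so a job you intend to trigger ad hoc (or register before its triggers) needs this flag.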