Java & Apache Camel: From direct-endpoint to file-endpoint

I've tried to build a route to copy files from one directory to another. But instead of using:
from(file://source-directory).to(file://destination-directory)
I want to do something like this:
from(direct:start)
.to(direct:doStuff)
.to(direct:readDirectory)
.to(file://destination-folder)
I've done the following stuff:
Route
@Component
public class Route extends AbstractRouteBuilder {

    @Override
    public void configure() throws Exception {
        from("direct:start")
            .bean(lookup(ReadDirectory.class))
            .split(body())
            .setHeader("FILENAME", method(lookup(CreateFilename.class)))
            .to("file:///path/to/my/output/directory/?fileName=${header.FILENAME}");
    }
}
Processor
@Component
public class ReadDirectory implements CamelProcessorBean {

    @Handler
    public ImmutableList<File> apply(@Header("SOURCE_DIR") final String sourceDir) {
        final File directory = new File(sourceDir);
        final File[] files = directory.listFiles();
        if (files == null) {
            return ImmutableList.of();
        }
        return ImmutableList.copyOf(files);
    }
}
I can start my route with the following pseudo-test (the point is that I can start my route manually via producer.sendBodyAndHeaders(..)):
public class RouteIT extends StandardIT {

    @Produce
    private ProducerTemplate producer;

    @Test
    public void testRoute() throws Exception {
        final String uri = "direct:start";
        producer.sendBodyAndHeaders(uri, ExchangePattern.InOut, null, header());
    }

    private Map<String, Object> header() {
        final Map<String, Object> header = Maps.newHashMap();
        header.put("SOURCE_DIR", "/path/to/my/input/directory/");
        return header;
    }
}
AbstractRouteBuilder extends SpringRouteBuilder.
CamelProcessorBean is only a marker interface.
StandardIT loads the Spring context and related setup.
The problem is that I must set the filename myself. I've read that Camel sets the header CamelFileNameProduced (at the file endpoint). It is a generic string with a timestamp, and if I don't set the filename, the written files get this generic string as their filename.
My question is: Is there a nicer way to copy files (starting with a direct-endpoint and reading the directory in the middle of the route) while keeping the original filename at the destination? (I don't have to set the filename when I use from("file:source").to("file:destination"), so why must I do it now?)

You can set the file name when you send the message with the producer template. As long as the header is propagated during routing between the routes, you are fine, and Camel propagates headers by default.
For example
@Test
public void testRoute() throws Exception {
    final String uri = "direct:start";
    Map<String, Object> headers = ...
    headers.put(Exchange.FILE_NAME, "myfile.txt");
    producer.sendBodyAndHeaders(uri, ExchangePattern.InOut, null, headers);
}
The file component documentation explains in more detail how to control the file name:
http://camel.apache.org/file2
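As an aside, if the body of each split message is the java.io.File itself (as with the ReadDirectory bean above), the original name can also be taken straight from the body instead of a custom CreateFilename bean. A small untested sketch:

from("direct:start")
    .bean(lookup(ReadDirectory.class))
    .split(body())
    .setHeader(Exchange.FILE_NAME, simple("${body.name}"))
    .to("file:///path/to/my/output/directory/");

The simple expression ${body.name} invokes File.getName(), so the destination keeps the source filename.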

Related

Invalid character \u0000 in Spring PropertiesPersistingMetadataStore file

As shown in the code below, we have an FtpInboundFileSynchronizingMessageSource with a FileSystemPersistentAcceptOnceFileListFilter using PropertiesPersistingMetadataStore.
@Bean
public PropertiesPersistingMetadataStore getMetadataStore() {
    final PropertiesPersistingMetadataStore metadataStore = new PropertiesPersistingMetadataStore() {
        @Override
        public String putIfAbsent(final String key, final String value) {
            try {
                super.afterPropertiesSet();
            } catch (final Exception e) {
                e.printStackTrace();
            }
            return super.putIfAbsent(key, value);
        }
    };
    metadataStore.setBaseDirectory(getRegistryValue("LOCALMETASTOREDIRECTORY"));
    return metadataStore;
}
@Bean
@InboundChannelAdapter(value = "CSVChannel", poller = @Poller(fixedRate = "30000", maxMessagesPerPoll = "1"))
public MessageSource<File> ftpMessageSource() {
    final String METHODNAME = "ftpMessageSource()";
    if (LoggingHelper.isEntryExitTraceEnabled(LOGGER)) {
        LOGGER.entering(CLASSNAME, METHODNAME);
    }
    final Comparator<File> fileLastModifiedDateComparator = new Comparator<File>() {
        @Override
        public int compare(final File f1, final File f2) {
            return Long.valueOf(f1.lastModified())
                    .compareTo(f2.lastModified());
        }
    };
    final FtpInboundFileSynchronizingMessageSource source =
            new FtpInboundFileSynchronizingMessageSource(ftpInboundFileSynchronizer(), fileLastModifiedDateComparator);
    source.setLocalDirectory(new File(getRegistryValue("LOCALDIRECTORY")));
    final FileSystemPersistentAcceptOnceFileListFilter fileSystemPersistentAcceptOnceFileListFilter =
            new FileSystemPersistentAcceptOnceFileListFilter(getMetadataStore(),
                    getRegistryValue("REMOTEFILENAMEPATTERN_ANAG_CLI"));
    fileSystemPersistentAcceptOnceFileListFilter.setFlushOnUpdate(true);
    source.setLocalFilter(fileSystemPersistentAcceptOnceFileListFilter);
    if (LoggingHelper.isEntryExitTraceEnabled(LOGGER)) {
        LOGGER.exiting(CLASSNAME, METHODNAME);
    }
    return source;
}
We have 4 instances of the application running in production, and the local directory and the metadata store directory are on a location shared by all 4 instances.
The problem we are facing now is that we see invalid characters written to the metadata-store.properties file. Sometimes some process keeps writing the character \u0000 continuously, which causes the file to grow very large, for example to 1 GB within a few minutes. Since the framework reads the metadata into memory, a very large file causes an OutOfMemoryError.
Please see below some entries from the metadata-store.properties file below.
ANAG_CLI_*.CSV/opt/user-integration/anagcli/input/20200609113855907_ANAG_CLI_20200609113846.CSV.a=1591695480000
\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000=
ANAG_CLI_*.CSV/opt/user-integration/anagcli/input/20200610105125916_ANAG_CLI_20200610105118.CSV.a.writing=1591779085951
\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000 ... (runs of hundreds of \u0000 characters, truncated) ... =
\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000=
ANAG_CLI_*.CSV/opt/user-integration/anagcli/input/20200609133155929_ANAG_CLI_20200609133146.CSV.a=1591702315917
Is it safe to use the PropertiesPersistingMetadataStore like this, in a location shared between more than one application instance? How can I find out what is causing this invalid character issue, and how can I avoid it?
Any help would be appreciated!
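One thing worth noting: PropertiesPersistingMetadataStore persists to a single properties file, and a plain properties file is not a mechanism designed for concurrent writes from several JVMs, which matches the corruption seen above. When multiple instances must share state, a database-backed store is the usual choice. A minimal sketch (assuming spring-integration-jdbc is on the classpath and dataSource is an existing bean; the names are illustrative):

@Bean
public ConcurrentMetadataStore metadataStore(DataSource dataSource) {
    // JdbcMetadataStore keeps entries in the INT_METADATA_STORE table and
    // delegates cross-instance consistency to the database, so several
    // application instances can share it safely.
    return new JdbcMetadataStore(new JdbcTemplate(dataSource));
}

FileSystemPersistentAcceptOnceFileListFilter accepts any ConcurrentMetadataStore, so the rest of the configuration can stay the same.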

spring batch file writer to write directly to amazon s3 storage without PutObjectRequest

I'm trying to upload a file to Amazon S3. Instead of uploading an existing file, I want to read the data from a database using Spring Batch and write the file directly to S3 storage. Is there any way we can do that?
Spring Cloud AWS adds support for the Amazon S3 service to load and write resources with the resource loader and the s3 protocol. Once you have configured the AWS resource loader, you can write a custom Spring Batch writer like this:
import java.io.OutputStream;
import java.util.List;
import org.springframework.batch.item.ItemWriter;
import org.springframework.core.io.ResourceLoader;
import org.springframework.core.io.WritableResource;

public class AwsS3ItemWriter implements ItemWriter<String> {

    private ResourceLoader resourceLoader;
    private WritableResource resource;

    public AwsS3ItemWriter(ResourceLoader resourceLoader, String resource) {
        this.resourceLoader = resourceLoader;
        this.resource = (WritableResource) this.resourceLoader.getResource(resource);
    }

    @Override
    public void write(List<? extends String> items) throws Exception {
        try (OutputStream outputStream = resource.getOutputStream()) {
            for (String item : items) {
                outputStream.write(item.getBytes());
            }
        }
    }
}
Then you should be able to use this writer with an S3 resource like s3://myBucket/myFile.log.
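For example, wiring the writer might look like this (a sketch; the bucket and key are placeholders):

@Bean
public AwsS3ItemWriter itemWriter(ResourceLoader resourceLoader) {
    // The s3:// URI is resolved by Spring Cloud AWS's resource loader
    // into a WritableResource backed by the S3 object.
    return new AwsS3ItemWriter(resourceLoader, "s3://myBucket/myFile.log");
}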
Is there any way we can do that?
Please note that I did not compile/test the previous code. I just wanted to give you a starting point for how to do it.
Hope this helps.
The problem is that the OutputStream will only write the last List of items sent by the step...
I think you might need to write a temporary file on the file system and then send the whole file in a separate tasklet.
See this example:
https://github.com/TerrenceMiao/AWS/blob/master/dynamodb-java/src/main/java/org/paradise/microservice/userpreference/service/writer/CSVFileWriter.java
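A sketch of that idea (untested; assumes the AWS SDK v1 AmazonS3 client, and that a previous step wrote the file to /tmp/myFile.log, both placeholders):

public class S3UploadTasklet implements Tasklet {

    private final AmazonS3 amazonS3;

    public S3UploadTasklet(AmazonS3 amazonS3) {
        this.amazonS3 = amazonS3;
    }

    @Override
    public RepeatStatus execute(StepContribution contribution, ChunkContext chunkContext) throws Exception {
        // Upload the complete file written by the previous step in one call.
        amazonS3.putObject("myBucket", "myFile.log", new File("/tmp/myFile.log"));
        return RepeatStatus.FINISHED;
    }
}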
I had the same thing to do. Because Spring has no class that writes to a stream alone, I made one myself, like the example above.
You need two classes for this. A Resource class which implements WritableResource and extends AbstractResource:
...
public class S3Resource extends AbstractResource implements WritableResource {

    ByteArrayOutputStream resource = new ByteArrayOutputStream();

    @Override
    public String getDescription() {
        return null;
    }

    @Override
    public InputStream getInputStream() throws IOException {
        return new ByteArrayInputStream(resource.toByteArray());
    }

    @Override
    public OutputStream getOutputStream() throws IOException {
        return resource;
    }
}
And your writer, which implements ItemWriter:
public class AmazonStreamWriter<T> implements ItemWriter<T> {

    private WritableResource resource;
    private LineAggregator<T> lineAggregator;
    private String lineSeparator;

    AmazonStreamWriter(WritableResource resource) {
        this.resource = resource;
    }

    public String getLineSeparator() {
        return lineSeparator;
    }

    public void setLineSeparator(String lineSeparator) {
        this.lineSeparator = lineSeparator;
    }

    public WritableResource getResource() {
        return resource;
    }

    public void setResource(WritableResource resource) {
        this.resource = resource;
    }

    public LineAggregator<T> getLineAggregator() {
        return lineAggregator;
    }

    public void setLineAggregator(LineAggregator<T> lineAggregator) {
        this.lineAggregator = lineAggregator;
    }

    @Override
    public void write(List<? extends T> items) throws Exception {
        try (OutputStream outputStream = resource.getOutputStream()) {
            StringBuilder lines = new StringBuilder();
            for (T item : items) {
                lines.append(lineAggregator.aggregate(item)).append(lineSeparator);
            }
            outputStream.write(lines.toString().getBytes());
        }
    }
}
With this setup you write the item information you receive from your database to your custom resource via an OutputStream. The filled resource can then be used in one of your steps to open an InputStream and upload to S3 via the client.
I did it with: amazonS3.putObject(awsBucketName, awsBucketKey, resource.getInputStream(), new ObjectMetadata());
My solution may not be the perfect approach, but from here on you can optimize it.
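Putting the two classes together might look like this (a sketch; Person and the aggregator choice are placeholders):

S3Resource resource = new S3Resource();
AmazonStreamWriter<Person> writer = new AmazonStreamWriter<>(resource);
writer.setLineAggregator(new PassThroughLineAggregator<>());
writer.setLineSeparator(System.lineSeparator());
// ... run the step that uses the writer, then upload the buffered bytes:
amazonS3.putObject(awsBucketName, awsBucketKey, resource.getInputStream(), new ObjectMetadata());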

How to process two files paired with Apache Camel

I am building an application using Apache Camel.
In this application, two files with the same base name, one with an .xml extension and one with a .jpg extension, are placed in a specific directory.
I process these files using Apache Camel's file2 component.
I'm using Apache Camel version 2.19.0.
I would like to meet the following specifications.
1. When the processing is completed, move the xml file and the paired jpg file to the done directory
2. When the processing fails, move the xml file and the paired jpg file to the error directory
The directory structure is as follows.
main/ftp/20170605-110000.xml
main/ftp/20170605-110000.jpg
main/done/20170604-090000.xml
main/done/20170604-090000.jpg
main/error/20170604-090000.xml
main/error/20170604-090000.jpg
I have achieved the desired behavior with the following code.
public class ExampleRoute extends RouteBuilder {

    private final File ftpDir;
    private final File doneDir;
    private final File errorDir;

    public ExampleRoute() {
        this.ftpDir = new File("work/main/ftp");
        this.doneDir = new File("work/main/done");
        this.errorDir = new File("work/main/error");
    }

    @Override
    public void configure() throws Exception {
        String format = "file://%s?include=.*\\.xml&move=%s&moveFailed=%s";
        String from = String.format(format,
                ftpDir.getAbsolutePath(),
                doneDir.getAbsolutePath(),
                errorDir.getAbsolutePath());

        // @formatter:off
        onException(Exception.class)
            .handled(false)
            .process(new MoveResourceProcessor(errorDir))
            .stop();
        // @formatter:on

        // @formatter:off
        from(from)
            .process(exchange -> {
                // Nothing to do...
            })
            .process(new MoveResourceProcessor(doneDir))
            .end();
        // @formatter:on
    }

    private class MoveResourceProcessor implements Processor {

        private final File dir;

        public MoveResourceProcessor(File dir) {
            this.dir = dir;
        }

        @Override
        public void process(Exchange exchange) throws Exception {
            String parent = (String) exchange.getIn().getHeader(Exchange.FILE_PARENT);
            File parentDir = new File(parent);
            String filename = (String) exchange.getIn().getHeader(Exchange.FILE_NAME_ONLY);
            String baseName = FilenameUtils.getBaseName(filename);
            File source = new File(parentDir, baseName + ".jpg");
            if (source.exists()) {
                File dest = new File(dir, source.getName());
                FileUtils.moveFile(source, dest);
            }
        }
    }
}
But is there a better way to handle multiple files that belong together like this?

How to create MIME-attachment text/xml in Java?

How can I create a MIME attachment of type text/xml for my SOAPMessage?
I have a function which sends a binary file of XML, but I don't know how to do it.
Use a DataHandler/DataSource to push the binary data into the message on the client side.
On the server side, you need to create a DataContentHandler implementation and register it with the activation framework.
Step 1 - Adding the binary attachment
Implement a simple DataSource for getting the data:
import javax.activation.*;
import java.io.*;

class BinaryDataSource implements DataSource {

    InputStream _is;

    public BinaryDataSource(InputStream is) {
        _is = is;
    }

    public String getContentType() { return "application/binary"; }
    public InputStream getInputStream() throws IOException { return _is; }
    public String getName() { return "some file"; }

    public OutputStream getOutputStream() throws IOException {
        throw new IOException("Cannot write to this file");
    }
}
Now use this code to add the attachment:
InputStream data = ...
SOAPMessage msg = ...
DataHandler dh = new DataHandler(new BinaryDataSource(data));
AttachmentPart attachment = msg.createAttachmentPart(dh);
msg.addAttachmentPart(attachment);
Step 2 - Setup the server side
[Note: this worked for me]
Create a DataContentHandler which handles the incoming attachment of type "application/binary".
import javax.activation.*;
import java.io.*;

public class BinaryDataHandler implements DataContentHandler {

    /** Creates a new instance of BinaryDataHandler */
    public BinaryDataHandler() {
    }

    /** This is the key, it just returns the data uninterpreted. */
    public Object getContent(javax.activation.DataSource dataSource) throws java.io.IOException {
        System.out.println("BinaryDataHandler: getContent called with: " + dataSource);
        return dataSource.getInputStream();
    }

    public Object getTransferData(java.awt.datatransfer.DataFlavor dataFlavor,
            javax.activation.DataSource dataSource)
            throws java.awt.datatransfer.UnsupportedFlavorException, java.io.IOException {
        return null;
    }

    public java.awt.datatransfer.DataFlavor[] getTransferDataFlavors() {
        return new java.awt.datatransfer.DataFlavor[0];
    }

    public void writeTo(Object obj, String str, java.io.OutputStream outputStream)
            throws java.io.IOException {
        // You would need to implement this to have
        // the conversion done automatically based on
        // mime type on the client side.
    }
}
Now, you can use this code to get the data of the attachment:
SOAPMessage msg = ... // received message
Iterator ats = msg.getAttachments();
if (ats.hasNext()) {
    AttachmentPart attachment = (AttachmentPart) ats.next();
    InputStream contents = (InputStream) attachment.getContent();
}
Finally, you need to register your DataContentHandler so that the activation framework will use it. There are a couple of ways (see MailcapCommandMap in the activation framework javadocs). What I did was to create a file called "mailcap" in the lib directory used by my "java" interpreter.
This file looks like this:
application/binary;; x-java-content-handler=BinaryDataHandler
This tells the activation framework to use your handler for the indicated MIME type.
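Alternatively (an untested sketch), the handler can be registered programmatically through MailcapCommandMap instead of a mailcap file:

// Register BinaryDataHandler for application/binary at runtime.
MailcapCommandMap mc = (MailcapCommandMap) CommandMap.getDefaultCommandMap();
mc.addMailcap("application/binary;;x-java-content-handler=BinaryDataHandler");
CommandMap.setDefaultCommandMap(mc);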

Is it possible to get the absolute request url without path parameters

I am trying to get the request URL without the values of the path parameters in it.
Consider that my complete URL is:
URL: http://localhost:8080/aaa/mock/abcd/1234/true
Path parameters: abcd, true
Output needed: /aaa/mock/abcd
My web service method looks like this.
#Path(value = "/aaa/mock")
#Component
public class MockService
{
private static Log log = LogFactory.getLog(MockService.class);
//address
#GET
#Path(value = "/{mockrequest}/{status}")
#Produces(MediaType.JSON)
public String mockEngagement(#Context ContainerRequestContext request,#PathParam("mockrequest") String mockrequest,#PathParam("status") String status )
{
log.info("The mock url is"+request.getUriInfo().getRequestUri());
log.info("The mock url is"+request.getUriInfo().getAbsolutePath());
log.info("The mock url is"+request.getUriInfo().getBaseUri());
log.info("The mock url is"+request.getUriInfo().getMatchedURIs());
**//Out put needed /aaa/mock/abcd**
return "ajaja";
}
}
None of the above calls returns the required info.
I am wondering if there is a generic way to get the desired output irrespective of the number of path parameters.
Are there any such methods?
Try UriInfo#getPath(), UriInfo#getPath(boolean), or UriInfo#getPathSegments(). The boolean argument is whether the path should be encoded or not.
https://jersey.java.net/apidocs/2.3.1/jersey/index.html
You could also get the absolute path and the base path and then use URI#relativize(URI).
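For example, a small sketch of the relativize approach:

// Yields the request path relative to the application base,
// e.g. aaa/mock/abcd/1234/true (path parameter values still included).
URI base = request.getUriInfo().getBaseUri();
URI absolute = request.getUriInfo().getAbsolutePath();
String path = "/" + base.relativize(absolute).getPath();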
Try this:
request.getUriInfo().getPathSegments().get(0).getPath()
public void filter(ContainerRequestContext context) throws IOException {
    Message message = PhaseInterceptorChain.getCurrentMessage();
    Set<Map.Entry<String, Object>> o = (Set<Map.Entry<String, Object>>) message.entrySet();
    for (Map.Entry<String, Object> oo : o) {
        String key = oo.getKey();
        Object val = oo.getValue();
        // These two properties give the path of the web service:
        // path_to_match_slash
        // org.apache.cxf.request.uri
        if (key.equals("path_to_match_slash")) {
            String v = (String) val;
            System.out.println(key);
            System.out.println(v);
        }
        if (key.equals("org.apache.cxf.request.uri")) {
            String v = (String) val;
            System.out.println(key);
            System.out.println(v);
        }
    }
}
This code works only for Apache CXF REST.
The path_to_match_slash and org.apache.cxf.request.uri properties can be found in the current CXF message (via PhaseInterceptorChain.getCurrentMessage()).
