How to set filename and timestamp using Spring Integration SFTP? - java

I need to set the filename and timestamp of a file using an SftpOutboundGateway object.
How do I do it?
I know it can be done through SpEL, but I'm not sure what the syntax looks like.

It would be better to just use the SftpRemoteFileTemplate directly in your code, something like this:
template.get(pathToFile, inputStream -> ...);
template.rename(...); // or template.remove(...);
For the timestamp:
@Bean
public IntegrationFlow sftpInboundFlow() {
    return IntegrationFlows
            .from(Sftp.inboundAdapter(this.sftpSessionFactory)
                            .preserveTimestamp(true)
                            .remoteDirectory("foo")
                            .regexFilter(".*\\.txt$")
                            .localFilenameExpression("#this.toUpperCase() + '.a'")
                            .localDirectory(new File("sftp-inbound")),
                    e -> e.id("sftpInboundAdapter")
                            .autoStartup(true)
                            .poller(Pollers.fixedDelay(5000)))
            .handle(m -> System.out.println(m.getPayload()))
            .get();
}
You can also refer to the documentation.
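As a side note, localFilenameExpression evaluates the SpEL expression against the remote file name (#this), so the expression above uppercases the name and appends '.a'. A plain-Java equivalent of that mapping (class name hypothetical):

```java
public class LocalFilenameDemo {

    // Plain-Java equivalent of the SpEL expression "#this.toUpperCase() + '.a'",
    // where #this is the remote file name.
    static String localName(String remoteName) {
        return remoteName.toUpperCase() + ".a";
    }

    public static void main(String[] args) {
        System.out.println(localName("data.txt")); // DATA.TXT.a
    }
}
```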

Setting the timestamp on the remote file is not a gateway responsibility.
See SftpRemoteFileTemplate.executeWithClient(ClientCallback<C, T> callback):
public void handleMessage(Message<?> message) throws MessagingException {
    String remoteFile = (String) message.getPayload();
    Integer newModTime = message.getHeaders().get("newModTime", Integer.class);
    template.executeWithClient((ClientCallbackWithoutResult<ChannelSftp>) client -> {
        try {
            SftpATTRS attrs = client.lstat(remoteFile);
            attrs.setACMODTIME(attrs.getATime(), newModTime);
            client.setStat(remoteFile, attrs);
        }
        catch (SftpException e) {
            throw new RuntimeException(e);
        }
    });
}
This can be used in a service activator method, where you get access to the Message.


Spring generate Flux<Part> from File

I built a utility class to upload a file to AWS S3 using a full WebFlux reactive stack.
The controller class method looks like this:
@Override
@Transactional(propagation = Propagation.REQUIRES_NEW)
@Timed(value = "timed.upload_customer_media", description = "Time taken to upload customer media")
public Mono<ServerResponse> uploadCustomerMedia(ServerRequest serverRequest) {
    return serverRequest.body(BodyExtractors.toMultipartData())
            .flatMap(parts -> {
                Map<String, Part> partMap = parts.toSingleValueMap();
                partMap.forEach((partName, value) -> log.info("Name: {}, value: {}", partName, value));
                FilePart filePart = (FilePart) partMap.get("file");
                log.info("File name is : [{}]", filePart.filename());
                FormFieldPart formFieldPart = (FormFieldPart) partMap.get("mediaDTO");
                log.info("mediaDTO is : [{}]", formFieldPart.value());
                MediaDTO mediaDTO;
                try {
                    mediaDTO = objectMapper.readValue(formFieldPart.value(), MediaDTO.class);
                    log.info("mediaDTO is : [{}]", mediaDTO);
                    var customerId = Long.parseLong(serverRequest.pathVariable(CUSTOMER_ID));
                    log.info("customerId is : [{}]", customerId);
                    return s3FileHandlerService.multipartUploadHandler(customerId, mediaDTO, Flux.just(filePart))
                            .elapsed()
                            .flatMap(tr -> {
                                log.info("Duration to upload file to S3 [fileName : {}, duration : {}]", filePart.filename(), tr.getT1());
                                log.debug("Now deleting file part from temp folder.");
                                return Mono.just(tr.getT2());
                            })
                            .flatMap(s -> filePart.delete()
                                    .then(Mono.just(s)));
                } catch (Exception ex) {
                    log.error("Error parsing mediaDTO: {}", ex.getMessage());
                    return Mono.error(() -> new CustomerProcessingException(HttpStatus.INTERNAL_SERVER_ERROR, "Error processing request.", ex));
                }
            })
            .flatMap(body -> ServerResponse.status(HttpStatus.CREATED)
                    .contentType(MediaType.APPLICATION_JSON).body(BodyInserters.fromValue(body)))
            .metrics();
}
The signature for the method looks like this:
public Mono<String> multipartUploadHandler(Long customerId, MediaDTO mediaDTO, Flux<Part> parts) {
So, my multipart file upload controller works like a dream. I can extract the form payload and the attached file, upload it to S3, and all is well.
A new requirement is to take an existing file that has been downloaded to the local OS using WebClient and submit it to this method.
For the life of me, I cannot find a way to construct an instance of the Part interface from the file contents to submit.
I have been looking at the org.springframework.http.codec.multipart.Part and FilePart interface JavaDoc, but all the known implementations are private classes.
Example: DefaultFilePart is private static final in DefaultParts.
So my question: has anybody ever needed to do something like this, or does anyone have any pointers?

How to wait for an @Async annotated method to complete execution for all elements of a List<String> with 130k+ elements before executing the next step

I have used the ThreadPoolTaskExecutor class to call my @Async annotated method. Since there are more than 130k API calls to make, I am trying to make them asynchronously through the executor framework. However, once the list I am streaming over has been fully submitted, the next part of the flow executes immediately. I want to wait until all of the async calls have completed, i.e. until I have received an API response for all 130k+ calls made while streaming the list.
public void downloadData(Map.Entry<String, String> entry, String downloadPath,
        Locale locale, ApiClient apiClient, Task task,
        Set<Locale> downloadFailedLocales) {
    String targetFileName = entry.getKey() + ".xml";
    Path filePath = null;
    try {
        filePath = getTargetDestination(downloadPath, "2", entry.getKey(), targetFileName);
        MultiValueMap<String, String> queryParameters = restelApiClient.fetchQueryParameters();
        if (downloadPath != null && !downloadFileService.localFileExists(filePath)) {
            fetchCountryAndHotelList(entry.getValue(), filePath, task, downloadFailedLocales, locale, queryParameters);
            // After fetching hotelList, proceed to fetch hotelInfo from the hotelList XML data
            if (entry.getKey().equals(HotelConstants.HOTEL_LIST)) {
                // Fetch hotelCodes from the downloaded hotelList XML to make API calls for hotelInfo
                List<String> hotelInfoArray = getHotelCodeList(filePath);
                AtomicInteger hotelCounter = new AtomicInteger();
                String hotelInfoXml = apiClient.getApiClientSettings().getEndpoints()
                        .get(HotelConstants.HOTEL_INFO);
                /* Fetching data from the HotelInfo API asynchronously, but once the stream over the
                   hotelInfo list completes, the next part of the code executes; it doesn't wait for
                   all API calls to be made and for the responses to come back. */
                hotelInfoArray.stream().forEach(hotel -> {
                    StringBuilder fileName = new StringBuilder();
                    fileName.append(HotelConstants.HOTEL_INFO).append(hotelCounter.getAndIncrement()).append(".xml");
                    Path path = getTargetDestination(downloadPath, "2", HotelConstants.HOTEL_INFO,
                            fileName.toString());
                    StringBuilder hotelCode = new StringBuilder();
                    hotelCode.append("<codigo>").append(hotel).append("</codigo>");
                    String xml = String.format(hotelInfoXml).replace("<codigo></codigo>", hotelCode);
                    try {
                        hotelDataFetchThreadService.fetchHotelInfo(xml, path, task, downloadFailedLocales, locale, queryParameters);
                    } catch (DownloadFailedException e) {
                        log.info("Download failed for hotel code {} with exception {}", hotel, e);
                        downloadFileService.deleteIncompleteFiles(path);
                    }
                });
            }
        } else {
            log.info("file already exists, skipping download");
        }
    } catch (DownloadException e) {
        downloadFileService.deleteIncompleteFiles(filePath);
        log.info("Download failed for endpoint {} with exception {}", entry.getKey(), e);
    } catch (DownloadFailedException e) {
        throw new RuntimeException(e);
    }
}
/*
 * This method makes the API call and writes the XML response to a local file asynchronously.
 */
@Async("TestExecutor")
public void fetchHotelInfo(String xml, Path path, Task task, Set<Locale> downloadFailedLocales, Locale locale,
        MultiValueMap<String, String> queryParameters) throws DownloadFailedException {
    Flux<DataBuffer> bufferedData;
    try {
        // log.info("using thread {}", Thread.currentThread().getName());
        bufferedData = apiClient.getWebClient()
                .uri(uriBuilder -> uriBuilder
                        .queryParams(queryParameters)
                        .queryParam(HotelConstants.XML, xml.trim())
                        .build()
                ).retrieve()
                .bodyToFlux(DataBuffer.class)
                .retryWhen(Retry.fixedDelay(maxRetryAttempts, Duration.ofSeconds(maxRetryDelay))
                        .onRetryExhaustedThrow(
                                (RetryBackoffSpec retryBackoffSpec, Retry.RetrySignal retrySignal) -> {
                                    throw new DownloadException(
                                            "External Service failed to process after max retries");
                                }));
        writeBufferDataToFile(bufferedData, path);
    } catch (DownloadException e) {
        downloadFileService.deleteIncompleteFiles(path);
        downloadFailedLocales.add(locale);
        if (locale.equals(task.getJob().getProvider().getDefaultLocale().getLocale())) {
            throw new DownloadFailedException(
                    String.format("Network issue during download, Max retry reached: %s", e.getMessage()), e);
        }
        log.info("Download failed with exception ", e);
    }
}
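For completeness, the usual way to block until every async call is done is to have the @Async method return a CompletableFuture and then join on all of them with CompletableFuture.allOf. A minimal plain-JDK sketch of that pattern, with the Spring specifics stripped out and all names hypothetical:

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.stream.IntStream;

public class WaitForAllAsync {

    // Submits one task per element and blocks until every task has finished,
    // mirroring @Async methods that return CompletableFuture<Void> collected
    // into a list and joined with CompletableFuture.allOf(...).
    static int runAll(int tasks) {
        ExecutorService executor = Executors.newFixedThreadPool(8);
        AtomicInteger completed = new AtomicInteger();
        try {
            List<CompletableFuture<Void>> futures = IntStream.range(0, tasks)
                    .mapToObj(i -> CompletableFuture.runAsync(
                            completed::incrementAndGet, // stand-in for the API call + file write
                            executor))
                    .toList();
            // Wait for all submitted tasks before moving on to the next flow step.
            CompletableFuture.allOf(futures.toArray(new CompletableFuture[0])).join();
            return completed.get();
        } finally {
            executor.shutdown();
        }
    }

    public static void main(String[] args) {
        System.out.println(runAll(1000)); // 1000
    }
}
```

With Spring, the equivalent would be to declare fetchHotelInfo as returning CompletableFuture<Void> (Spring wraps the async call for you), collect the futures while streaming the list, and join on allOf before the next step.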

What's the procedure to re-download locally deleted files using SFTP Inbound

As per this doc, I couldn't find the right process for re-downloading a locally removed file from the remote SFTP server.
The requirement is to delete a local file that has already been fetched from the remote SFTP server, and to use the sftp-inbound-adapter (DSL configuration) to re-fetch that same file when required. In this implementation, the MetadataStore hasn't been persisted to any external system such as PropertiesPersistingMetadataStore or a Redis metadata store; as per the doc, the MetadataStore is held in memory.
I couldn't find any way to remove the metadata of the remote file from the MetadataStore in order to re-fetch the locally deleted file by file_name, and I have no clue how the removeRemoteFileMetadata() callback needs to be implemented (according to this doc).
My configuration class contains the following:
@Bean
public IntegrationFlow fileFlow() {
    SftpInboundChannelAdapterSpec spec = Sftp.inboundAdapter(sftpConfig.getSftpSessionFactory())
            .preserveTimestamp(true)
            .patternFilter(Constants.FILE_NAME_CONVENTION)
            .remoteDirectory(sftpConfig.getSourceLocation())
            .autoCreateLocalDirectory(true)
            .deleteRemoteFiles(false)
            .localDirectory(new File(sftpConfig.getDestinationLocation()));
    return IntegrationFlows
            .from(spec, e -> e.id("sftpInboundAdapter").autoStartup(false)
                    .poller(Pollers.fixedDelay(5000).get()))
            .channel(MessageChannels.direct().get())
            .handle(message -> {
                log.info("Fetching File : " + message.getHeaders().get("file_name").toString());
            })
            .get();
}
I tried to solve this using Tanvir Hossain's reference code. I coded it like this:
@Bean
public IntegrationFlow fileFlow() {
    SftpInboundChannelAdapterSpec spec = Sftp
            .inboundAdapter(sftpConfig.getSftpSessionFactory())
            .preserveTimestamp(true)
            .filter(sftpFileListFilter())
            .localFilter(systemFileListFilter())
            .remoteDirectory(sftpConfig.getSourceLocation())
            .autoCreateLocalDirectory(true)
            .deleteRemoteFiles(false)
            .localDirectory(new File(sftpConfig.getDestinationLocation()));
    return IntegrationFlows
            .from(spec, e -> e.id("sftpInboundAdapter").autoStartup(false)
                    .poller(Pollers.fixedDelay(5000).get()))
            .channel(MessageChannels.direct().get())
            .handle(message -> {
                log.info("Fetching File : "
                        + message.getHeaders().get("file_name").toString());
            })
            .get();
}

private FileSystemPersistentAcceptOnceFileListFilter systemFileListFilter() {
    return new FileSystemPersistentAcceptOnceFileListFilter(store(), prefix);
}

private ChainFileListFilter<ChannelSftp.LsEntry> sftpFileListFilter() {
    ChainFileListFilter<ChannelSftp.LsEntry> chainFileListFilter =
            new ChainFileListFilter<>();
    chainFileListFilter.addFilters(
            new SftpPersistentAcceptOnceFileListFilter(store(), prefix),
            new SftpSimplePatternFileListFilter(sftpConfig.getFileFilterValue())
    );
    return chainFileListFilter;
}

@Bean
public SimpleMetadataStore store() {
    return new SimpleMetadataStore();
}
And my controller for removing metadata is like below:
public class Controller {

    private final SimpleMetadataStore simpleMetadataStore;

    public Controller(SimpleMetadataStore simpleMetadataStore) {
        this.simpleMetadataStore = simpleMetadataStore;
    }

    @GetMapping("/test/remove-metadata/{type}/{fileName}")
    @ResponseBody
    public String removeFileMetadata(
            @PathVariable("fileName") String fileName,
            @PathVariable("type") String type
    ) {
        String prefix = definedPrefix;
        String filePath = "";
        if (type.equals("local")) {
            filePath = "/local/storage/path/" + fileName;
        } else if (type.equals("remote")) {
            filePath = fileName;
        }
        String key = prefix + filePath;
        simpleMetadataStore.remove(key);
        return key;
    }
}
I am getting my desired file; it is re-fetching the file for me.
Use a ChainFileListFilter with an SftpSimplePatternFileListFilter and an SftpPersistentAcceptOnceFileListFilter.
Use a SimpleMetadataStore to store the state in memory (or some other MetadataStore).
new SftpPersistentAcceptOnceFileListFilter(store, "somePrefix");
Then, store.remove(key), where key is somePrefix + fileName.
Use a similar filter in the localFilter with a FileSystemPersistentAcceptOnceFileListFilter.
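Conceptually, SimpleMetadataStore is just an in-memory key/value map: the persistent accept-once filters record each accepted file under prefix + fileName, and removing that key makes the file eligible to be picked up again. A plain-JDK sketch of that bookkeeping (class and key names hypothetical):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal stand-in for SimpleMetadataStore: the filters record each
// accepted file under prefix + fileName; removing that key makes the
// file eligible to be fetched again.
public class InMemoryMetadataStore {

    private final Map<String, String> metadata = new ConcurrentHashMap<>();

    public void put(String key, String value) {
        metadata.put(key, value);
    }

    public String remove(String key) {
        return metadata.remove(key);
    }

    public boolean contains(String key) {
        return metadata.containsKey(key);
    }

    public static void main(String[] args) {
        InMemoryMetadataStore store = new InMemoryMetadataStore();
        String prefix = "somePrefix";
        String fileName = "report.txt";

        store.put(prefix + fileName, "accepted");               // filter accepted the file
        store.remove(prefix + fileName);                        // forget it ...
        System.out.println(store.contains(prefix + fileName));  // ... so it can be re-fetched
    }
}
```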

Spring Integration error is attaching complete payload

I have a listener on a JMS queue. Once I read the message, I convert it to my custom object:
public IntegrationFlow queueProcessorFlow() {
    return IntegrationFlows.from(Jms.inboundAdapter(jmsTemplate)
                    .destination("test_queue"),
            c -> c.poller(Pollers.fixedDelay(5000L)
                    .maxMessagesPerPoll(1)))
            // convert JSON to our custom object
            .transform(new JsonToQueueEventConverterTransformer(springBeanFactory))
            .transform(new CustomTransformer(springBeanFactory))
            .handle(o -> {
            }).get();
}
The transformer:
public class CustomTransformer implements GenericTransformer<CustomPojo, CustomPojo> {

    private final QueueDataProcessorSpringBeanFactory factory;

    @Override
    public CustomPojo transform(CustomPojo customPojo) {
        try {
            // do something, e.g. a service call
            throw new Exception("This failed mate !! SOS");
        } catch (Exception e) {
            // ISSUE here:
            // e contains the original payload in the stack trace
            throw new RuntimeException(e);
        }
    }
}
Now when I throw my custom exception, the stack trace contains everything; it even contains the payload. I am not interested in the payload in the case of an exception.
How do I update this to not include the payload?
** Update **
After changing as per the answer, I still see the issue:
org.springframework.integration.transformer.MessageTransformationException: Failed to transform Message; nested exception is org.springframework.messaging.MessageHandlingException: nested exception is org.springframework.integration.transformer.MessageTransformationException: Error initiliazing the :; nested exception is CustomException Error lab lab lab , failedMessage=GenericMessage [payload=
My error handler:
@Bean
public IntegrationFlow errorHandlingFlow() {
    return IntegrationFlows.from("errorChannel")
            .handle(message -> {
                try {
                    ErrorMessage e = (ErrorMessage) message;
                    if (e.getPayload() instanceof MessageTransformationException) {
                        String stackTrace = ExceptionUtils.getStackTrace(e.getPayload());
                        LOG.info("Exception trace {} ", stackTrace);
Not sure what the business purpose is of losing the payload in the stack trace, but you can achieve that by throwing a MessageTransformationException instead of that RuntimeException.
To avoid a message in stack trace with the mentioned payload, you need to use one of these constructors:
public MessageTransformationException(String description, Throwable cause) {
    super(description, cause);
}

public MessageTransformationException(String description) {
    super(description);
}
instead of those based on the Message<?>.
This way, the wrapping MessageTransformingHandler will apply the appropriate logic:
protected Object handleRequestMessage(Message<?> message) {
    try {
        return this.transformer.transform(message);
    }
    catch (Exception e) {
        if (e instanceof MessageTransformationException) {
            throw (MessageTransformationException) e;
        }
        throw new MessageTransformationException(message, "Failed to transform Message", e);
    }
}
UPDATE
It turned out that MessageTransformationException is not enough, since AbstractMessageHandler checks for MessageHandlingException when wrapping in IntegrationUtils.wrapInHandlingExceptionIfNecessary(). Therefore I suggest throwing a MessageHandlingException from your code instead, using this constructor with null for the message arg:
MessageHandlingException(Message<?> failedMessage, Throwable cause)
I had almost the same issue; maybe this can help you. If you use the default errorChannel bean, it has already been subscribed to a LoggingHandler, which prints the full message. If you want to avoid printing the payload, you can create your own errorChannel; this way you'll override the default behavior:
@Bean
@Qualifier(IntegrationContextUtils.ERROR_CHANNEL_BEAN_NAME)
public MessageChannel errorChannel() {
    return new PublishSubscribeChannel();
}
If your problem is with the .log() handler, you can always use a function to decide which part of the Message you want to show:
@Bean
public IntegrationFlow errorFlow(IntegrationFlow createOutFileInCaseErrorFlow) {
    return IntegrationFlows.from(IntegrationContextUtils.ERROR_CHANNEL_BEAN_NAME)
            .log(LoggingHandler.Level.ERROR, m -> m.getHeaders())
            .<MessagingException>log(Level.ERROR, p -> p.getPayload().getMessage())
            .get();
}

Using flatMap with Observable in Java

Can someone help me understand this portion of code?
I'm trying to get some config files from a database using a dataRepository class that returns an Observable of the config files in a special form (it was developed by another developer):
final List<LegalBookDescriptor> legalBookDescriptors = dataRepository.findAllConfigFiles(legalBookDescriptorsDir)
        .flatMap(new Func1<ConfigFile, Observable<LegalBookDescriptor>>() {
            @Override
            public Observable<LegalBookDescriptor> call(ConfigFile configFile) {
                try {
                    final LegalBookDescriptor legalBookDescriptor = conversionService.convert(configFile.getContent(), LegalBookDescriptor.class);
                    LOG.info(String.format("Successfully loaded [Legal Book Descriptor] from file [%s]", configFile.getPath()));
                    return Observable.just(legalBookDescriptor);
                } catch (Exception e) {
                    LOG.error(String.format("Failed to load [Legal Book Descriptor] from file [%s]", configFile.getPath()), e);
                    return Observable.empty();
                }
            }
        })
        .toList()
        .toBlocking()
        .single();
if (legalBookDescriptors.isEmpty()) {
    LOG.warn(String.format("Hasn't found any valid Legal Book Descriptor file in the root directory [%s].", legalBookDescriptorsDir));
}
Thank you in advance!
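For readers unfamiliar with this Rx pattern: the flatMap converts each ConfigFile, drops any that fail conversion by returning Observable.empty(), and collects the survivors into a list. The same "convert, skip failures, collect" shape can be sketched with plain Java streams (all names hypothetical, using a toy convert function):

```java
import java.util.List;
import java.util.Optional;

public class ConvertSkipCollect {

    // Stand-in for conversionService.convert: fails on malformed content.
    static String convert(String content) {
        if (content.isBlank()) {
            throw new IllegalArgumentException("empty config");
        }
        return content.toUpperCase();
    }

    // Convert each element, skip the ones that fail, collect the rest --
    // the same shape as flatMap returning Observable.just(..) on success
    // and Observable.empty() on failure.
    static List<String> loadAll(List<String> contents) {
        return contents.stream()
                .map(c -> {
                    try {
                        return Optional.of(convert(c));
                    } catch (Exception e) {
                        return Optional.<String>empty();
                    }
                })
                .flatMap(Optional::stream)
                .toList();
    }

    public static void main(String[] args) {
        System.out.println(loadAll(List.of("a", "", "b"))); // [A, B]
    }
}
```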
