Spring: generate Flux<Part> from a file - Java

I built a utility class to upload a file to AWS S3 using a full WebFlux reactive stack.
The controller class method looks like this:
@Override
@Transactional(propagation = Propagation.REQUIRES_NEW)
@Timed(value = "timed.upload_customer_media", description = "Time taken to upload customer media")
public Mono<ServerResponse> uploadCustomerMedia(ServerRequest serverRequest) {
    return serverRequest.body(BodyExtractors.toMultipartData())
        .flatMap(parts -> {
            Map<String, Part> partMap = parts.toSingleValueMap();
            partMap.forEach((partName, value) -> log.info("Name: {}, value: {}", partName, value));
            FilePart filePart = (FilePart) partMap.get("file");
            log.info("File name is : [{}]", filePart.filename());
            FormFieldPart formFieldPart = (FormFieldPart) partMap.get("mediaDTO");
            log.info("mediaDTO is : [{}]", formFieldPart.value());
            MediaDTO mediaDTO;
            try {
                mediaDTO = objectMapper.readValue(formFieldPart.value(), MediaDTO.class);
                log.info("mediaDTO is : [{}]", mediaDTO);
                var customerId = Long.parseLong(serverRequest.pathVariable(CUSTOMER_ID));
                log.info("customerId is : [{}]", customerId);
                return s3FileHandlerService.multipartUploadHandler(customerId, mediaDTO, Flux.just(filePart))
                    .elapsed()
                    .flatMap(tr -> {
                        log.info("Duration to upload file to S3 [fileName : {}, duration : {}]", filePart.filename(), tr.getT1());
                        log.debug("Now deleting file part from temp folder.");
                        return Mono.just(tr.getT2());
                    })
                    .flatMap(s -> filePart.delete()
                        .then(Mono.just(s)));
            } catch (Exception ex) {
                log.error("Error parsing mediaDTO: {}", ex.getMessage());
                return Mono.error(() -> new CustomerProcessingException(HttpStatus.INTERNAL_SERVER_ERROR, "Error processing request.", ex));
            }
        })
        .flatMap(body -> ServerResponse.status(HttpStatus.CREATED)
            .contentType(MediaType.APPLICATION_JSON).body(BodyInserters.fromValue(body)))
        .metrics();
}
The signature for the method looks like this:
public Mono<String> multipartUploadHandler(Long customerId, MediaDTO mediaDTO, Flux<Part> parts) {
So, my multipart upload controller works like a dream. I can extract the form payload and the attached file, upload it to S3, and happiness is.
A new requirement is to take an existing file that has already been downloaded to the local filesystem using WebClient, and submit it to this method.
For the life of me, I cannot find a way to construct an instance of the Part interface from the file contents.
I have been looking at the JavaDoc for the org.springframework.http.codec.multipart.Part and FilePart interfaces, but all the known implementations are private classes.
Example: DefaultFilePart is a private static final class inside DefaultParts.
So my question: has anybody ever needed to do something like this, or have any pointers?
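One possible direction (a minimal, untested sketch; not from the original post, and every name below is mine rather than Spring's): since the concrete classes are private, implement the FilePart interface yourself over a local java.nio.file.Path, streaming the content with DataBufferUtils.read:
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import org.springframework.core.io.buffer.DataBuffer;
import org.springframework.core.io.buffer.DataBufferUtils;
import org.springframework.core.io.buffer.DefaultDataBufferFactory;
import org.springframework.http.HttpHeaders;
import org.springframework.http.codec.multipart.FilePart;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

// Adapts a local file to the FilePart contract so it can be passed to
// multipartUploadHandler(customerId, mediaDTO, Flux.just(part)).
public class LocalFilePart implements FilePart {

    private final Path path;
    private final String partName;

    public LocalFilePart(Path path, String partName) {
        this.path = path;
        this.partName = partName;
    }

    @Override
    public String name() {
        return partName;
    }

    @Override
    public String filename() {
        return path.getFileName().toString();
    }

    @Override
    public HttpHeaders headers() {
        return new HttpHeaders(); // add Content-Disposition/Content-Type here if needed
    }

    @Override
    public Flux<DataBuffer> content() {
        // Stream the file lazily in 4 KB buffers instead of loading it into memory.
        return DataBufferUtils.read(path, new DefaultDataBufferFactory(), 4096);
    }

    @Override
    public Mono<Void> transferTo(Path dest) {
        return Mono.fromRunnable(() -> {
            try {
                Files.copy(path, dest, StandardCopyOption.REPLACE_EXISTING);
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
        });
    }

    @Override
    public Mono<Void> delete() {
        // Overrides Part's default no-op so the controller's filePart.delete() removes the local file.
        return Mono.fromRunnable(() -> {
            try {
                Files.deleteIfExists(path);
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
        });
    }
}
Usage would then be something like s3FileHandlerService.multipartUploadHandler(customerId, mediaDTO, Flux.just(new LocalFilePart(downloadedPath, "file"))).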

Related

Spring Reactive: how to wait for all Monos to finish?

I have the following code where I call external APIs via WebClient and return a Mono.
I need to execute some logic when I receive the data, and after all requests are processed, execute one piece of logic over all the gathered data. I could collect all the Monos, put them into a Flux, and then execute some logic at the end. But I have a serviceName field which is accessible only inside the loop, so I need to execute the per-Mono logic in the loop, and here I'm stuck: I don't know how to wait for all the data to complete, and to do it in a reactive way.
@Scheduled(fixedDelay = 50000)
public void refreshSwaggerConfigurations() {
    log.debug("Starting Service Definition Context refresh");
    List<SwaggerServiceData> allServicesApi = new ArrayList<>();
    swaggerProperties.getUrls().forEach((serviceName, serviceSwaggerUrl) -> {
        log.debug("Attempting service definition refresh for Service : {} ", serviceName);
        Mono<SwaggerServiceData> swaggerData = getSwaggerDefinitionForAPI(serviceName, serviceSwaggerUrl);
        swaggerData.subscribe(swaggerServiceData -> {
            if (swaggerServiceData != null) {
                allServicesApi.add(swaggerServiceData);
                String content = getJSON(swaggerServiceData);
                definitionContext.addServiceDefinition(serviceName, content);
            } else {
                log.error("Skipping service id : {} Error : Could not get Swagger definition from API ", serviceName);
            }
        });
    });
    // I need to wait here for all Monos to complete, and after that proceed with all the gathered data...
    // Right now the list is empty, and I know why; I just don't know how to fix it.
    Optional<SwaggerServiceData> swaggerAllServicesData = getAllServicesApiSwagger(allServicesApi);
    if (swaggerAllServicesData.isPresent()) {
        String allApiContent = getJSON(swaggerAllServicesData.get());
        definitionContext.addServiceDefinition("All", allApiContent);
    }
}

private Mono<SwaggerServiceData> getSwaggerDefinitionForAPI(String serviceName, String url) {
    log.debug("Accessing the SwaggerDefinition JSON for Service : {} : URL : {} ", serviceName, url);
    Mono<SwaggerServiceData> swaggerServiceDataMono = webClient.get()
        .uri(url)
        .exchangeToMono(clientResponse -> clientResponse.bodyToMono(SwaggerServiceData.class));
    return swaggerServiceDataMono;
}
I would add a temporary record to group the data and the service name:
record SwaggerService(SwaggerServiceData swaggerServiceData, String serviceName) {
    boolean hasData() {
        return swaggerServiceData != null;
    }
}
And then change your pipeline:
Flux.fromStream(swaggerProperties.getUrls().entrySet().stream())
    .flatMap(e -> {
        Mono<SwaggerServiceData> swaggerDefinitionForAPI = getSwaggerDefinitionForAPI(e.getKey(), e.getValue());
        return swaggerDefinitionForAPI.map(swaggerServiceData -> new SwaggerService(swaggerServiceData, e.getKey()));
    })
    .filter(SwaggerService::hasData)
    .map(swaggerService -> {
        String content = getJSON(swaggerService.swaggerServiceData());
        definitionContext.addServiceDefinition(swaggerService.serviceName(), content);
        return swaggerService.swaggerServiceData();
    })
    // here we collect all the data; it is emitted as a single Mono holding a list of SwaggerServiceData
    .collectList()
    .map(this::getAllServicesApiSwagger)
    .filter(Optional::isPresent)
    .map(Optional::get)
    .subscribe(e -> {
        String allApiContent = getJSON(e);
        definitionContext.addServiceDefinition("All", allApiContent);
    });
This does not deal with logging an error when SwaggerServiceData is null, but you can change it further if you want. Also, I assume that DefinitionContext is thread-safe.
Solution with error logging (using flatMap and Mono.empty()):
Flux.fromStream(swaggerProperties.getUrls().entrySet().stream())
    .flatMap(e -> {
        Mono<SwaggerServiceData> swaggerDefinitionForAPI = getSwaggerDefinitionForAPI(e.getKey(), e.getValue());
        return swaggerDefinitionForAPI
            .flatMap(swaggerServiceData -> {
                if (swaggerServiceData != null) {
                    return Mono.just(new SwaggerService(swaggerServiceData, e.getKey()));
                } else {
                    log.error("Skipping service id : {} Error : Could not get Swagger definition from API ", e.getKey());
                    return Mono.empty();
                }
            });
    })
    .map(swaggerService -> {
        String content = getJSON(swaggerService.swaggerServiceData());
        definitionContext.addServiceDefinition(swaggerService.serviceName(), content);
        return swaggerService.swaggerServiceData();
    })
    .collectList()
    .map(this::getAllServicesApiSwagger)
    .filter(Optional::isPresent)
    .map(Optional::get)
    .subscribe(e -> {
        String allApiContent = getJSON(e);
        definitionContext.addServiceDefinition("All", allApiContent);
    });
You can also wrap those lambdas in some meaningful methods to improve readability.
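A side note (my addition, not part of the original answer): if the scheduled method itself must not return before the refresh completes, you can block at the end instead of subscribing; @Scheduled methods run on their own scheduler thread, so a bounded block() is acceptable there. A sketch, assuming the same helpers as above:
// Blocking variant: collectList().block() waits for the whole pipeline to finish.
List<SwaggerServiceData> collected = Flux.fromIterable(swaggerProperties.getUrls().entrySet())
    .flatMap(e -> getSwaggerDefinitionForAPI(e.getKey(), e.getValue())
        .map(data -> new SwaggerService(data, e.getKey())))
    .filter(SwaggerService::hasData)
    .doOnNext(s -> definitionContext.addServiceDefinition(s.serviceName(), getJSON(s.swaggerServiceData())))
    .map(SwaggerService::swaggerServiceData)
    .collectList()
    .block(Duration.ofMinutes(1)); // wait here instead of subscribe()

getAllServicesApiSwagger(collected)
    .ifPresent(all -> definitionContext.addServiceDefinition("All", getJSON(all)));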

How to set filename and timestamp using Spring Integration SFTP?

I need to set the filename & timestamp of a file using an SFTP outbound gateway object.
How do I do it?
I know it can be done through the SpEL language, but I'm not sure what the syntax looks like.
It would be better to just use the SftpRemoteFileTemplate directly in your code, something like this:
template.rename(...);
template.get(pathToFile, inputStream -> ...);
template.rename(...); // or template.remove(...);
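(A sketch of wiring that up; sftpSessionFactory and the paths below are illustrative assumptions, not from the original answer:)
// Build the template from an existing SFTP session factory, then rename to set the remote filename.
SftpRemoteFileTemplate template = new SftpRemoteFileTemplate(sftpSessionFactory);
template.rename("upload/report.tmp", "upload/report-final.xml");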
For the timestamp:
@Bean
public IntegrationFlow sftpInboundFlow() {
    return IntegrationFlows
        .from(Sftp.inboundAdapter(this.sftpSessionFactory)
                .preserveTimestamp(true)
                .remoteDirectory("foo")
                .regexFilter(".*\\.txt$")
                .localFilenameExpression("#this.toUpperCase() + '.a'")
                .localDirectory(new File("sftp-inbound")),
            e -> e.id("sftpInboundAdapter")
                .autoStartup(true)
                .poller(Pollers.fixedDelay(5000)))
        .handle(m -> System.out.println(m.getPayload()))
        .get();
}
You can also refer to this documentation.
Setting the timestamp on the remote file is not a gateway responsibility.
See SftpRemoteFileTemplate.executeWithClient(ClientCallback<C, T> callback):
public void handleMessage(Message<?> message) throws MessagingException {
    String remoteFile = (String) message.getPayload();
    Integer newModTime = message.getHeaders().get("newModTime", Integer.class);
    template.executeWithClient((ClientCallbackWithoutResult<ChannelSftp>) client -> {
        try {
            SftpATTRS attrs = client.lstat(remoteFile);
            attrs.setACMODTIME(attrs.getATime(), newModTime);
            client.setStat(remoteFile, attrs);
        } catch (SftpException e) {
            throw new RuntimeException(e);
        }
    });
}
This one can be used in a service activator method where you get access to the Message.

How to wait for an @Async annotated method to complete for all elements of a List<String> with 130k+ elements, then execute the next step

I have used the ThreadPoolTaskExecutor class to call my @Async annotated method. The number of API calls is more than 130k+, so I am trying to make them asynchronously using the executor framework. But once the list I am streaming over (making async calls as I go) is exhausted, the next flow executes immediately; here I want to wait until all the async calls have completed. In other words, I want to wait until I get an API response for all 130k+ calls made asynchronously while streaming the list.
public void downloadData(Map.Entry<String, String> entry, String downloadPath,
                         Locale locale, ApiClient apiClient, Task task,
                         Set<Locale> downloadFailedLocales) {
    String targetFileName = entry.getKey() + ".xml";
    Path filePath = null;
    try {
        filePath = getTargetDestination(downloadPath, "2", entry.getKey(), targetFileName);
        MultiValueMap<String, String> queryParameters = restelApiClient.fetchQueryParameters();
        if (downloadPath != null && !downloadFileService.localFileExists(filePath)) {
            fetchCountryAndHotelList(entry.getValue(), filePath, task, downloadFailedLocales, locale, queryParameters);
            // After fetching the hotel list, proceed to fetch hotelInfo from the hotel-list XML data.
            if (entry.getKey().equals(HotelConstants.HOTEL_LIST)) {
                // Fetch hotel codes from the downloaded hotel-list XML, to make API calls for hotelInfo.
                List<String> hotelInfoArray = getHotelCodeList(filePath);
                AtomicInteger hotelCounter = new AtomicInteger();
                String hotelInfoXml = apiClient.getApiClientSettings().getEndpoints()
                    .get(HotelConstants.HOTEL_INFO);
                /* Fetch data from the HotelInfo API asynchronously. The problem: once the stream
                   over the hotel list is exhausted, the next code executes without waiting for
                   all the API calls to be made and their responses to come back. */
                hotelInfoArray.stream().forEach(hotel -> {
                    StringBuilder fileName = new StringBuilder();
                    fileName.append(HotelConstants.HOTEL_INFO).append(hotelCounter.getAndIncrement()).append(".xml");
                    Path path = getTargetDestination(downloadPath, "2", HotelConstants.HOTEL_INFO, fileName.toString());
                    StringBuilder hotelCode = new StringBuilder();
                    hotelCode.append("<codigo>").append(hotel).append("</codigo>");
                    String xml = String.format(hotelInfoXml).replace("<codigo></codigo>", hotelCode);
                    try {
                        hotelDataFetchThreadService.fetchHotelInfo(xml, path, task, downloadFailedLocales, locale, queryParameters);
                    } catch (DownloadFailedException e) {
                        log.info("Download failed for hotel code {} with exception {}", hotel, e);
                        downloadFileService.deleteIncompleteFiles(path);
                    }
                });
            }
        } else {
            log.info("File already exists; skipping download.");
        }
    } catch (DownloadException e) {
        downloadFileService.deleteIncompleteFiles(filePath);
        log.info("Download failed for endpoint {} with exception {}", entry.getKey(), e);
    } catch (DownloadFailedException e) {
        throw new RuntimeException(e);
    }
}
/*
 * This method makes the API call and writes the XML response to a local file, asynchronously.
 */
@Async("TestExecutor")
public void fetchHotelInfo(String xml, Path path, Task task, Set<Locale> downloadFailedLocales, Locale locale,
                           MultiValueMap<String, String> queryParameters) throws DownloadFailedException {
    Flux<DataBuffer> bufferedData;
    try {
        // log.info("using thread {}", Thread.currentThread().getName());
        bufferedData = apiClient.getWebClient()
            .uri(uriBuilder -> uriBuilder
                .queryParams(queryParameters)
                .queryParam(HotelConstants.XML, xml.trim())
                .build())
            .retrieve()
            .bodyToFlux(DataBuffer.class)
            .retryWhen(Retry.fixedDelay(maxRetryAttempts, Duration.ofSeconds(maxRetryDelay))
                .onRetryExhaustedThrow((RetryBackoffSpec retryBackoffSpec, Retry.RetrySignal retrySignal) -> {
                    throw new DownloadException("External Service failed to process after max retries");
                }));
        writeBufferDataToFile(bufferedData, path);
    } catch (DownloadException e) {
        downloadFileService.deleteIncompleteFiles(path);
        downloadFailedLocales.add(locale);
        if (locale.equals(task.getJob().getProvider().getDefaultLocale().getLocale())) {
            throw new DownloadFailedException(
                String.format("Network issue during download, Max retry reached: %s", e.getMessage()), e);
        }
        log.info("Download failed with exception ", e);
    }
}
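One common way to get the waiting behavior (my sketch, not from the original post): have the @Async method return a CompletableFuture, collect the futures while streaming, and join on all of them before moving on. buildXml and buildPath below are hypothetical helpers standing in for the per-hotel setup code above:
// Sketch of the service method returning a future (body elided; same logic as above).
@Async("TestExecutor")
public CompletableFuture<Void> fetchHotelInfo(String xml, Path path /* , ... other params ... */) {
    // ... perform the WebClient call and write the response file, as in the original method ...
    return CompletableFuture.completedFuture(null); // signals this async task's completion
}

// Caller side, inside downloadData(...): collect every future while streaming,
// then block until all 130k+ calls have finished (successfully or exceptionally).
List<CompletableFuture<Void>> futures = hotelInfoArray.stream()
    .map(hotel -> hotelDataFetchThreadService.fetchHotelInfo(buildXml(hotel), buildPath(hotel)))
    .collect(Collectors.toList());
CompletableFuture.allOf(futures.toArray(new CompletableFuture[0])).join();
Note that the @Async call still has to go through the Spring proxy (i.e. via the injected hotelDataFetchThreadService bean, as in the original code) for the executor to be used.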

What's the procedure to re-download locally deleted files using the SFTP inbound adapter?

As per this doc, I couldn't find the right process to re-download a locally removed file from the remote SFTP server.
The requirement is to delete a local file which has already been fetched from the remote SFTP server, and to use the sftp-inbound-adapter (DSL configuration) to re-fetch that same file when required. In this implementation, the MetadataStore hasn't been persisted to any external system like PropertiesPersistingMetadataStore or a Redis metadata store. So, as per the doc, the MetadataStore is held in memory.
I couldn't find any way to remove the metadata of that remote file from the MetadataStore, so as to re-fetch the locally deleted file by its file_name. And I don't have any clue how the removeRemoteFileMetadata() callback needs to be implemented (according to this doc).
The configuration class contains the following:
@Bean
public IntegrationFlow fileFlow() {
    SftpInboundChannelAdapterSpec spec = Sftp.inboundAdapter(sftpConfig.getSftpSessionFactory())
        .preserveTimestamp(true)
        .patternFilter(Constants.FILE_NAME_CONVENTION)
        .remoteDirectory(sftpConfig.getSourceLocation())
        .autoCreateLocalDirectory(true)
        .deleteRemoteFiles(false)
        .localDirectory(new File(sftpConfig.getDestinationLocation()));
    return IntegrationFlows
        .from(spec, e -> e.id("sftpInboundAdapter").autoStartup(false)
            .poller(Pollers.fixedDelay(5000).get()))
        .channel(MessageChannels.direct().get())
        .handle(message -> {
            log.info("Fetching File : " + message.getHeaders().get("file_name").toString());
        })
        .get();
}
I tried to solve this using Tanvir Hossain's reference code, and I coded it like this:
@Bean
public IntegrationFlow fileFlow() {
    SftpInboundChannelAdapterSpec spec = Sftp
        .inboundAdapter(sftpConfig.getSftpSessionFactory())
        .preserveTimestamp(true)
        .filter(sftpFileListFilter())
        .localFilter(systemFileListFilter())
        .remoteDirectory(sftpConfig.getSourceLocation())
        .autoCreateLocalDirectory(true)
        .deleteRemoteFiles(false)
        .localDirectory(new File(sftpConfig.getDestinationLocation()));
    return IntegrationFlows
        .from(spec, e -> e.id("sftpInboundAdapter").autoStartup(false)
            .poller(Pollers.fixedDelay(5000).get()))
        .channel(MessageChannels.direct().get())
        .handle(message -> {
            log.info("Fetching File : " + message.getHeaders().get("file_name").toString());
        })
        .get();
}

private FileSystemPersistentAcceptOnceFileListFilter systemFileListFilter() {
    return new FileSystemPersistentAcceptOnceFileListFilter(store(), prefix);
}

private ChainFileListFilter<ChannelSftp.LsEntry> sftpFileListFilter() {
    ChainFileListFilter<ChannelSftp.LsEntry> chainFileListFilter = new ChainFileListFilter<>();
    chainFileListFilter.addFilters(
        new SftpPersistentAcceptOnceFileListFilter(store(), prefix),
        new SftpSimplePatternFileListFilter(sftpConfig.getFileFilterValue())
    );
    return chainFileListFilter;
}

@Bean
public SimpleMetadataStore store() {
    return new SimpleMetadataStore();
}
And my controller for removing the metadata is like below:
public class Controller {

    private final SimpleMetadataStore simpleMetadataStore;

    public Controller(SimpleMetadataStore simpleMetadataStore) {
        this.simpleMetadataStore = simpleMetadataStore;
    }

    @GetMapping("/test/remove-metadata/{type}/{fileName}")
    @ResponseBody
    public String removeFileMetadata(
            @PathVariable("fileName") String fileName,
            @PathVariable("type") String type) {
        String prefix = definedPrefix;
        String filePath = "";
        if (type.equals("local")) {
            filePath = "/local/storage/path/" + fileName;
        } else if (type.equals("remote")) {
            filePath = fileName;
        }
        String key = prefix + filePath;
        simpleMetadataStore.remove(key);
        return key;
    }
}
I am getting my desired file; it is re-fetching the file for me.
Use a ChainFileListFilter, with a SftpSimplePatternFileListFilter and a SftpPersistentAcceptOnceFileListFilter.
Use a SimpleMetadataStore to store the state in memory (or some other MetadataStore).
new SftpPersistentAcceptOnceFileListFilter(store, "somePrefix");
Then, store.remove(key) where key is somePrefix + fileName.
Use a similar filter in the localFilter with FileSystemPersistentAcceptOnceFileListFilter.
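Tying that together, a minimal sketch (my own; only the filter and store classes come from the answer) of the removeRemoteFileMetadata() step the question asks about:
// Clear both filter entries so the adapter will re-fetch the file.
// The remote filter keys on prefix + remote file name; the local filter keys
// on prefix + the file's absolute local path (matching the controller above).
public void removeRemoteFileMetadata(String fileName, File localDirectory) {
    store().remove(prefix + fileName);
    store().remove(prefix + new File(localDirectory, fileName).getAbsolutePath());
}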

Using flatMap with Observable in Java

Can someone help me understand this portion of code?
I'm trying to get some config files from a database using a dataRepository class that returns an Observable of the config files in a special form (it was developed by another developer).
final List<LegalBookDescriptor> legalBookDescriptors = dataRepository.findAllConfigFiles(legalBookDescriptorsDir)
    .flatMap(new Func1<ConfigFile, Observable<LegalBookDescriptor>>() {
        @Override
        public Observable<LegalBookDescriptor> call(ConfigFile configFile) {
            try {
                final LegalBookDescriptor legalBookDescriptor = conversionService.convert(configFile.getContent(), LegalBookDescriptor.class);
                LOG.info(String.format("Successfully loaded [Legal Book Descriptor] from file [%s]", configFile.getPath()));
                return Observable.just(legalBookDescriptor);
            } catch (Exception e) {
                LOG.error(String.format("Failed to load [Legal Book Descriptor] from file [%s]", configFile.getPath()), e);
                return Observable.empty();
            }
        }
    })
    .toList()
    .toBlocking()
    .single();

if (legalBookDescriptors.isEmpty()) {
    LOG.warn(String.format("Hasn't found any valid Legal Book Descriptor file in the root directory [%s].", legalBookDescriptorsDir));
}
Thank you in advance!
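In short: flatMap turns each emitted ConfigFile into either a one-element Observable (the converted descriptor) or an empty Observable (when conversion fails), so failed files are silently skipped; toList() gathers the survivors into a single list, and toBlocking().single() synchronously waits for that one list. The same pipeline in lambda form (a sketch; behavior unchanged, logging omitted for brevity):
final List<LegalBookDescriptor> legalBookDescriptors = dataRepository.findAllConfigFiles(legalBookDescriptorsDir)
    .flatMap(configFile -> {
        try {
            // Convert one config file; emit the result as a single-element Observable.
            return Observable.just(conversionService.convert(configFile.getContent(), LegalBookDescriptor.class));
        } catch (Exception e) {
            // Conversion failed: emit nothing, so this file is dropped from the stream.
            return Observable.empty();
        }
    })
    .toList()        // collect every successfully converted descriptor into one List
    .toBlocking()    // block the calling thread until the source completes
    .single();       // exactly one emission (the list) is expected; return it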
