Apache Camel SFTP get file content null - java

Java Code
@Component
@RequiredArgsConstructor
@Slf4j // assumption: a Lombok logger annotation, since the class calls log.info(...)
public class WireInboundFileListener extends RouteBuilder {

    private final JwtWireTokenService jwtWireTokenService;
    private static final ObjectMapper OBJECT_MAPPER = new ObjectMapper();

    @Value("${wire.ftp.protocol}")
    private String ftpProtocol;
    @Value("${wire.ftp.server}")
    private String ftpServer;
    @Value("${wire.ftp.server.port}")
    private String ftpServerPort;
    @Value("${wire.ftp.username}")
    private String ftpUsername;
    @Value("${wire.ftp.password}")
    private String ftpPassword;
    @Value("${wire.ftp.private.key.file}")
    private String ftpPrivateKeyFile;
    @Value("${wire.ftp.private.key.passphrase}")
    private String privateKeyPassphrase;
    @Value("${wire.file.inbound.dir}")
    private String ftpListenDir;
    @Value("${wire.file.inbound.url}")
    private String inboundUrl;

    @Override
    public void configure() {
        var fromFtpUri = String.format("%s:%s:%s/%s?username=%s&delete=true&antInclude=*.txt",
                ftpProtocol, ftpServer, ftpServerPort, ftpListenDir, ftpUsername);
        log.info("SFTP inbound listen dir : " + ftpListenDir);
        if (Environment.getExecutionEnvironment().equals(Environment.ExecutionEnvironment.AWS)) {
            fromFtpUri += "&privateKeyFile=" + ftpPrivateKeyFile + "&privateKeyPassphrase=" + privateKeyPassphrase;
        } else {
            fromFtpUri += "&password=" + ftpPassword;
        }
        from(fromFtpUri)
                //.delay(10000) // even when I added a delay, the file content was still null
                .convertBodyTo(String.class)
                .process(exchange -> {
                    final var requestBody = new HashMap<String, Object>();
                    final var content = exchange.getIn().getBody();
                    final var fileName = exchange.getIn().getHeader("CamelFileName");
                    requestBody.put("content", content);
                    requestBody.put("name", fileName);
                    exchange.getIn().setBody(OBJECT_MAPPER.writeValueAsString(requestBody));
                })
                .to("log:com.test.wire.inbound.listener.SftpRoute")
                .setHeader(Exchange.HTTP_METHOD, constant("POST"))
                .setHeader("Content-Type", constant("application/json"))
                .setHeader("Authorization", method(this, "clientToken"))
                .to(inboundUrl + "?throwExceptionOnFailure=false")
                .log("Response body from wire inbound : ${body}")
                .end();
    }

    public String clientToken() {
        return "Bearer " + jwtWireTokenService.getToken();
    }
}
Success Request
2022-04-20 03:46:47.910 INFO 1 --- [read #6 - Delay] c.test.wire.inbound.listener.SftpRoute : Exchange[ExchangePattern: InOnly, BodyType: String, Body: {
"name": "sample-inbound-transfer-20220414024722.txt",
"content": "file content"
}]
Fail Request
2022-04-21 09:36:54.148 INFO 1 --- [read #4 - Delay] c.test.wire.inbound.listener.SftpRoute : Exchange[ExchangePattern: InOnly, BodyType: String, Body: {
"name": "sample-inbound-transfer-20220414024722.txt",
"content": ""
}]
Main issue
final var content = exchange.getIn().getBody(); // sometimes null, sometimes the actual file contents
When I test by dropping a file onto an SFTP server running locally, it works as expected (Success Request), presumably because the upload completes quickly on a local machine (FileZilla).
But when I drop a file onto an SFTP server hosted on a real server, it sometimes works and sometimes fails (Fail Request). It looks like an SFTP file-consumption issue. Could you please help me with that? Thanks

The file is probably (sometimes) consumed by your Camel route before the upload has finished.
What you can try is configuring your endpoint to poll a file only once it holds an exclusive read lock on it (i.e. the file is no longer in progress or being written); see the readLock parameter.
I think the default is no read lock; you should set readLock=changed, but I do not remember whether that mode is also available on SFTP endpoints.
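For what it's worth, readLock=changed is a documented option on the Camel FTP/SFTP component. A minimal sketch of how the URI built in the question could be extended; the interval and timeout values below are illustrative, not recommendations:
// Sketch: only consume a file once its size has stopped changing between polls.
fromFtpUri += "&readLock=changed"
        + "&readLockCheckInterval=1000"   // re-check the file size every second
        + "&readLockTimeout=20000";       // give up waiting for the read lock after 20 s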

Related

Spring generate Flux<Part> from File

I built a utility class to upload a file to AWS S3 using a full WebFlux reactive stack.
The controller class method looks like this:
@Override
@Transactional(propagation = Propagation.REQUIRES_NEW)
@Timed(value = "timed.upload_customer_media", description = "Time taken to upload customer media")
public Mono<ServerResponse> uploadCustomerMedia(ServerRequest serverRequest) {
    return serverRequest.body(BodyExtractors.toMultipartData())
            .flatMap(parts -> {
                Map<String, Part> partMap = parts.toSingleValueMap();
                partMap.forEach((partName, value) -> log.info("Name: {}, value: {}", partName, value));
                FilePart filePart = (FilePart) partMap.get("file");
                log.info("File name is : [{}]", filePart.filename());
                FormFieldPart formFieldPart = (FormFieldPart) partMap.get("mediaDTO");
                log.info("mediaDTO is : [{}]", formFieldPart.value());
                MediaDTO mediaDTO;
                try {
                    mediaDTO = objectMapper.readValue(formFieldPart.value(), MediaDTO.class);
                    log.info("mediaDTO is : [{}]", mediaDTO);
                    var customerId = Long.parseLong(serverRequest.pathVariable(CUSTOMER_ID));
                    log.info("customerId is : [{}]", customerId);
                    return s3FileHandlerService.multipartUploadHandler(customerId, mediaDTO, Flux.just(filePart))
                            .elapsed()
                            .flatMap(tr -> {
                                log.info("Duration to upload file to S3 [fileName : {}, duration : {}]", filePart.filename(), tr.getT1());
                                log.debug("Now deleting file part from temp folder.");
                                return Mono.just(tr.getT2());
                            })
                            .flatMap(s -> filePart.delete()
                                    .then(Mono.just(s)));
                } catch (Exception ex) {
                    log.error("Error parsing mediaDTO: {}", ex.getMessage());
                    return Mono.error(() -> new CustomerProcessingException(HttpStatus.INTERNAL_SERVER_ERROR, "Error processing request.", ex));
                }
            })
            .flatMap(body -> ServerResponse.status(HttpStatus.CREATED)
                    .contentType(MediaType.APPLICATION_JSON).body(BodyInserters.fromValue(body)))
            .metrics();
}
The signature for the method looks like this:
public Mono<String> multipartUploadHandler(Long customerId, MediaDTO mediaDTO, Flux<Part> parts) {
So, my multipart file upload controller works like a dream. I can extract the form payload and the attached file, upload it to S3, and happiness ensues.
A new requirement is to take an existing file that has been downloaded to the local OS using WebClient and submit it to this method.
For the life of me, I cannot find a way to construct an instance of the Part interface from the file contents to submit.
I have been looking at the JavaDoc for the org.springframework.http.codec.multipart.Part and FilePart interfaces, but all the known implementations are private classes.
Example: DefaultFilePart is a private static final class in DefaultParts.
So my question: has anybody ever needed to do something like this, or does anyone have pointers?
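Not a definitive answer, but since Spring's own FilePart implementations are all private, one pragmatic route is to hand-roll a minimal adapter around the downloaded file. The sketch below assumes multipartUploadHandler only relies on filename() and content(); the class name LocalFilePart and the part name "file" are made up for illustration:
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

import org.springframework.core.io.buffer.DataBuffer;
import org.springframework.core.io.buffer.DataBufferUtils;
import org.springframework.core.io.buffer.DefaultDataBufferFactory;
import org.springframework.http.HttpHeaders;
import org.springframework.http.codec.multipart.FilePart;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

// Hypothetical adapter: exposes a local file as a FilePart so it can be fed
// to multipartUploadHandler(customerId, mediaDTO, Flux.just(part)).
public class LocalFilePart implements FilePart {

    private final Path path;

    public LocalFilePart(Path path) {
        this.path = path;
    }

    @Override
    public String name() {
        return "file"; // form field name; an assumption, adjust to what the handler expects
    }

    @Override
    public String filename() {
        return path.getFileName().toString();
    }

    @Override
    public HttpHeaders headers() {
        return new HttpHeaders(); // no multipart headers needed for a local file
    }

    @Override
    public Flux<DataBuffer> content() {
        // Stream the file in 4 KiB buffers instead of loading it into memory.
        return DataBufferUtils.read(path, new DefaultDataBufferFactory(), 4096);
    }

    @Override
    public Mono<Void> transferTo(Path dest) {
        return Mono.fromCallable(() -> Files.copy(path, dest, StandardCopyOption.REPLACE_EXISTING))
                .then();
    }

    @Override
    public Mono<Void> delete() {
        return Mono.empty(); // the caller manages the local file's lifecycle
    }
}
Then Flux.just(new LocalFilePart(downloadedFile)) can be passed where the controller passes Flux.just(filePart); whether the S3 handler depends on any other FilePart behaviour is worth verifying.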

Audio URI `gc://...` is an invalid GCS path

Help me, please, if you can :)
I am trying to use the Google Speech-to-Text API. I have uploaded an audio file (record1638730550724.wav) to a Google Storage bucket named "summaryapp". But when I make a long-running recognize request, I get this error:
2021-12-05 18:56:20 INFO RecordController:60 -
com.google.api.gax.rpc.InvalidArgumentException:
io.grpc.StatusRuntimeException: INVALID_ARGUMENT: Audio URI
gc://summaryapp/record1638730255219.wav is an invalid GCS path.
RecognizeRequestHandler class:
public class RecognizeRequestHandler {

    private static CallsInfo callsInfo;

    @Autowired
    private static CallsInfoDaoImpl callsInfoDao;

    public static void asyncRecognizeGcs(String gcsUri, int oid) throws Exception {
        // Configure polling algorithm
        SpeechSettings.Builder speechSettings = SpeechSettings.newBuilder();
        TimedRetryAlgorithm timedRetryAlgorithm =
                OperationTimedPollAlgorithm.create(
                        RetrySettings.newBuilder()
                                .setInitialRetryDelay(Duration.ofMillis(500L))
                                .setRetryDelayMultiplier(1.5)
                                .setMaxRetryDelay(Duration.ofMillis(5000L))
                                .setInitialRpcTimeout(Duration.ZERO) // ignored
                                .setRpcTimeoutMultiplier(1.0) // ignored
                                .setMaxRpcTimeout(Duration.ZERO) // ignored
                                .setTotalTimeout(Duration.ofHours(24L)) // set polling timeout to 24 hours
                                .build());
        speechSettings.longRunningRecognizeOperationSettings().setPollingAlgorithm(timedRetryAlgorithm);
        // Instantiates a client with GOOGLE_APPLICATION_CREDENTIALS
        try (SpeechClient speech = SpeechClient.create(speechSettings.build())) {
            // Configure remote file request for FLAC
            RecognitionConfig config =
                    RecognitionConfig.newBuilder()
                            .setEncoding(AudioEncoding.LINEAR16)
                            .setLanguageCode("ru-RU")
                            .setSampleRateHertz(16000)
                            .build();
            RecognitionAudio audio = RecognitionAudio.newBuilder().setUri(gcsUri).build();
            // Use a non-blocking call to get the file transcription
            OperationFuture<LongRunningRecognizeResponse, LongRunningRecognizeMetadata> response =
                    speech.longRunningRecognizeAsync(config, audio);
            while (!response.isDone()) {
                System.out.println("Waiting for response...");
                Thread.sleep(10000);
            }
            List<SpeechRecognitionResult> results = response.get().getResultsList();
            for (SpeechRecognitionResult result : results) {
                // There can be several alternative transcripts for a given chunk of speech.
                // Just use the first (most likely) one here.
                SpeechRecognitionAlternative alternative = result.getAlternativesList().get(0);
                System.out.printf("Transcription: %s\n", alternative.getTranscript());
                // Trying to write the response to the database
                Date date = new Date(System.currentTimeMillis());
                String text = alternative.getTranscript();
                callsInfo = new CallsInfo(oid, date, text);
                callsInfoDao.create(callsInfo);
            }
        }
    }
}
And this is the RecordController class, which calls the recognize handler with the GCS URL:
public class RecordController {

    public static Logger logger = LoggerFactory.getLogger(RecordController.class);

    @Autowired
    AuthToken authToken;
    @Autowired
    AuthTokenDaoImpl authTokenDao;
    @Autowired
    AudioRecord audioRecord;
    @Autowired
    RecordDaoImpl recordDao;

    Thread t;

    private String RESULT = "NO MATCHING";
    private String GCS_URL = "gc://summaryapp/";
    private String RECORD_FILE_NAME = "";

    @PostMapping(value = "/newAudioRecord", consumes = "application/json",
            produces = "text/plain;charset=UTF-8")
    public String postNewRecord(@RequestParam(value = "token", required = true) String token,
                                @RequestBody AudioRecord audioRecord) throws JsonProcessingException {
        System.out.println(audioRecord.getOid() + " " + audioRecord.getRecordFileName());
        authToken = authTokenDao.readByToken(token);
        if (authToken != null) {
            RECORD_FILE_NAME = audioRecord.getRecordFileName();
            audioRecord.setOid(authToken.getOid());
            recordDao.create(audioRecord);
            AudioRecord resultAudioRecord = recordDao.readByName(audioRecord.getOid(),
                    audioRecord.getRecordFileName());
            ObjectMapper mapper = new ObjectMapper();
            RESULT = mapper.writeValueAsString(resultAudioRecord);
            try {
                RecognizeRequestHandler.asyncRecognizeGcs(GCS_URL + RECORD_FILE_NAME, audioRecord.getOid());
            } catch (Exception e) {
                logger.info(e.getMessage());
            }
        }
        return RESULT;
    }
}
UPD: After changing gc:// to gs:// in the path, I no longer get any messages about an "invalid path". But now the process ends with the following lines in the logs:
2021-12-06 18:37:34 INFO RecognizeRequestHandler:108 - --------> Result list size: 0
2021-12-06 18:37:34 DEBUG NettyClientHandler:214 - [id: 0xa576e0da, L:/172.18.0.2:49394 - R:speech.googleapis.com/64.233.164.95:443] OUTBOUND GO_AWAY: lastStreamId=0 errorCode=0 length=0 bytes=
2021-12-06 18:37:35 DEBUG PoolThreadCache:229 - Freed 24 thread-local buffer(s) from thread: grpc-default-worker-ELG-1-4
As the "sout debug" shows, this line of code
List<SpeechRecognitionResult> results = response.get().getResultsList();
returns a result list of size 0:
--------> Result list size: 0
UPD: The audio was recorded with the following settings:
int mic = MediaRecorder.AudioSource.MIC;
recorder.setAudioSource(mic);
recorder.setOutputFormat(output_formats[currentFormat]); //MediaRecorder.OutputFormat.AAC_ADTS
recorder.setAudioEncoder(encoder); //MediaRecorder.AudioEncoder.AAC;
recorder.setAudioSamplingRate(sampleRate); //16000
recorder.setOutputFile(CurrentAudioFilePath);
recorder.setOnErrorListener(errorListener);
recorder.setOnInfoListener(infoListener);
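One possible explanation for the empty result list (an assumption on my part, not confirmed in the thread): the recorder above encodes AAC (MediaRecorder.AudioEncoder.AAC in an AAC_ADTS container), while the request declares LINEAR16, and Speech-to-Text typically returns zero results when the declared encoding does not match the audio. AAC is not a supported input encoding, so the audio would first have to be recorded or converted to a supported format such as 16-bit PCM WAV or FLAC. For a genuine WAV file the header already carries the encoding and sample rate, so a config along these lines can leave them unspecified:
// Sketch: for WAV (and FLAC) input, encoding and sample rate are read from
// the file header, so they may be left unspecified in the request.
RecognitionConfig config =
        RecognitionConfig.newBuilder()
                .setEncoding(AudioEncoding.ENCODING_UNSPECIFIED)
                .setLanguageCode("ru-RU")
                .build();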

Creating Text Stream Using Spring WebFlux

I've been using Spring WebFlux to create a text stream; here is the code.
@SpringBootApplication
@RestController
public class ReactiveServer {

    private static final String FILE_PATH = "c:/test/";

    @GetMapping(produces = MediaType.TEXT_EVENT_STREAM_VALUE, value = "/events")
    Flux<String> events() {
        Flux<String> eventFlux = Flux.fromStream(Stream.generate(() -> FileReader.readFile()));
        Flux<Long> durationFlux = Flux.interval(Duration.ofMillis(500));
        return Flux.zip(eventFlux, durationFlux).map(Tuple2::getT1);
    }

    public static void main(String[] args) {
        SpringApplication.run(ReactiveServer.class, args);
    }
}
When I access the /events URL in the browser I get this, which is almost what I want:
data:{"type":"HystrixCommand","name":"GetConsumerCommand","group":"ConsumerRemoteGroup","currentTime":1542379993662,"isCircuitBreakerOpen":false,"errorPercentage":0,"errorCount":0,"requestCount":0,"rollingCountBadRequests":0}
data:{"type":"HystrixCommand","name":"GetConsumerCommand","group":"ConsumerRemoteGroup","currentTime":1542379994203,"isCircuitBreakerOpen":false,"errorPercentage":0,"errorCount":0,"requestCount":2,"rollingCountBadRequests":0}
data:{"type":"HystrixCommand","name":"GetConsumerCommand","group":"ConsumerRemoteGroup","currentTime":1542379994706,"isCircuitBreakerOpen":false,"errorPercentage":0,"errorCount":0,"requestCount":2,"rollingCountBadRequests":0}
data:{"type":"HystrixCommand","name":"GetConsumerCommand","group":"ConsumerRemoteGroup","currentTime":1542379995213,"isCircuitBreakerOpen":false,"errorPercentage":0,"errorCount":0,"requestCount":3,"rollingCountBadRequests":0}
What I need to do is to insert a "ping:" in between iterations to get:
ping:
data:{"type":"HystrixCommand","name":"GetConsumerCommand","group":"ConsumerRemoteGroup","currentTime":1542379993662,"isCircuitBreakerOpen":false,"errorPercentage":0,"errorCount":0,"requestCount":0,"rollingCountBadRequests":0}
data:{"type":"HystrixCommand","name":"GetConsumerCommand","group":"ConsumerRemoteGroup","currentTime":1542379994203,"isCircuitBreakerOpen":false,"errorPercentage":0,"errorCount":0,"requestCount":2,"rollingCountBadRequests":0}
ping:
data:{"type":"HystrixCommand","name":"GetConsumerCommand","group":"ConsumerRemoteGroup","currentTime":1542379994706,"isCircuitBreakerOpen":false,"errorPercentage":0,"errorCount":0,"requestCount":2,"rollingCountBadRequests":0}
data:{"type":"HystrixCommand","name":"GetConsumerCommand","group":"ConsumerRemoteGroup","currentTime":1542379995213,"isCircuitBreakerOpen":false,"errorPercentage":0,"errorCount":0,"requestCount":3,"rollingCountBadRequests":0}
But, the best I could get was:
data: ping:
data:{"type":"HystrixCommand","name":"GetConsumerCommand","group":"ConsumerRemoteGroup","currentTime":1542379993662,"isCircuitBreakerOpen":false,"errorPercentage":0,"errorCount":0,"requestCount":0,"rollingCountBadRequests":0}
data:{"type":"HystrixCommand","name":"GetConsumerCommand","group":"ConsumerRemoteGroup","currentTime":1542379994203,"isCircuitBreakerOpen":false,"errorPercentage":0,"errorCount":0,"requestCount":2,"rollingCountBadRequests":0}
data: ping:
data:{"type":"HystrixCommand","name":"GetConsumerCommand","group":"ConsumerRemoteGroup","currentTime":1542379994706,"isCircuitBreakerOpen":false,"errorPercentage":0,"errorCount":0,"requestCount":2,"rollingCountBadRequests":0}
data:{"type":"HystrixCommand","name":"GetConsumerCommand","group":"ConsumerRemoteGroup","currentTime":1542379995213,"isCircuitBreakerOpen":false,"errorPercentage":0,"errorCount":0,"requestCount":3,"rollingCountBadRequests":0}
Does anyone know of a way to do what I need?
You could try returning a Flux<ServerSentEvent> and specifying the type of event you're trying to send, like this:
@RestController
public class TestController {

    @GetMapping(produces = MediaType.TEXT_EVENT_STREAM_VALUE, path = "/events")
    Flux<ServerSentEvent<String>> events() {
        Flux<String> events = Flux.interval(Duration.ofMillis(200)).map(String::valueOf);
        Flux<ServerSentEvent<String>> sseData = events.map(event -> ServerSentEvent.builder(event).build());
        Flux<ServerSentEvent<String>> ping = Flux.interval(Duration.ofMillis(500))
                .map(l -> ServerSentEvent.builder("").event("ping").build());
        return Flux.merge(sseData, ping);
    }
}
With that code snippet, I'm getting the following output:
$ http --stream :8080/events
HTTP/1.1 200 OK
Content-Type: text/event-stream;charset=UTF-8
transfer-encoding: chunked
data:0
data:1
event:ping
data:
data:2
data:3
data:4
event:ping
data:
Which is consistent with Server-Sent Events. Is the ping: prefix something specific to Hystrix? If it is, I don't think it's consistent with the SSE spec, and I don't think it's something Spring Framework supports.

A processor receives messages with the same payload multiple times

I'm starting a new project with spring-cloud-dataflow, developing a bunch of jars to fit my needs.
One of these is a processor to untar files coming from a file source; this application uses a customized version of integration-zip with features to handle tar and gunzip file compression.
So my problem is the following one: while my source sends a single message with the file reference, the processor receives that message multiple times, with the same payload but a different id.
Here are the log files of both components.
As you can see, the file source produces only one message:
2017-10-02 12:38:28.013 INFO 17615 --- [ask-scheduler-3] o.s.i.file.FileReadingMessageSource : Created message: [GenericMessage [payload=/tmp/patent/CNINO_im_201733_batch108.tgz, headers={id=0b99b840-e3b3-f742-44ec-707aeea638c8, timestamp=1506940708013}]]
while the processor has 3 messages incoming:
2017-10-02 12:38:28.077 INFO 17591 --- [ -L-1] o.s.i.codec.kryo.CompositeKryoRegistrar : registering [40, java.io.File] with serializer org.springframework.integration.codec.kryo.FileSerializer
2017-10-02 12:38:28.080 INFO 17591 --- [ -L-1] .w.c.s.a.c.p.AbstractCompressTransformer : Message 'GenericMessage [payload=/tmp/patent/CNINO_im_201733_batch108.tgz, headers={kafka_offset=1, id=1a4d4b9c-86fe-d3a8-d800-8013e8ae7027, kafka_receivedPartitionId=0, contentType=application/x-java-object;type=java.io.File, kafka_receivedTopic=untar.file, timestamp=1506940708079}]' unpacking started...
2017-10-02 12:38:28.080 INFO 17591 --- [ -L-1] .w.c.s.a.c.p.AbstractCompressTransformer : Check message's payload type to decompress
2017-10-02 12:38:29.106 INFO 17591 --- [ -L-1] .w.c.s.a.c.p.AbstractCompressTransformer : Message 'GenericMessage [payload=/tmp/patent/CNINO_im_201733_batch108.tgz, headers={kafka_offset=1, id=cd611ca4-4cd9-0624-0871-dcf93a9a0051, kafka_receivedPartitionId=0, contentType=application/x-java-object;type=java.io.File, kafka_receivedTopic=untar.file, timestamp=1506940709106}]' unpacking started...
2017-10-02 12:38:29.107 INFO 17591 --- [ -L-1] .w.c.s.a.c.p.AbstractCompressTransformer : Check message's payload type to decompress
2017-10-02 12:38:31.108 INFO 17591 --- [ -L-1] .w.c.s.a.c.p.AbstractCompressTransformer : Message 'GenericMessage [payload=/tmp/patent/CNINO_im_201733_batch108.tgz, headers={kafka_offset=1, id=97171a2e-29ac-2111-b838-3da7220f5e3c, kafka_receivedPartitionId=0, contentType=application/x-java-object;type=java.io.File, kafka_receivedTopic=untar.file, timestamp=1506940711108}]' unpacking started...
2017-10-02 12:38:31.108 INFO 17591 --- [ -L-1] .w.c.s.a.c.p.AbstractCompressTransformer : Check message's payload type to decompress
2017-10-02 12:38:31.116 ERROR 17591 --- [ -L-1] o.s.integration.handler.LoggingHandler : org.springframework.integration.transformer.MessageTransformationException: failed to transform message; nested exception is org.springframework.messaging.MessageHandlingException: Failed to apply Zip transformation.; nested exception is java.io.FileNotFoundException: /tmp/patent/CNINO_im_201733_batch108.tgz (File o directory non esistente), failedMessage=GenericMessage [payload=/tmp/patent/CNINO_im_201733_batch108.tgz, headers={kafka_offset=1, id=97171a2e-29ac-2111-b838-3da7220f5e3c, kafka_receivedPartitionId=0, contentType=application/x-java-object;type=java.io.File, kafka_receivedTopic=untar.file, timestamp=1506940711108}], failedMessage=GenericMessage [payload=/tmp/patent/CNINO_im_201733_batch108.tgz, headers={kafka_offset=1, id=97171a2e-29ac-2111-b838-3da7220f5e3c, kafka_receivedPartitionId=0, contentType=application/x-java-object;type=java.io.File, kafka_receivedTopic=untar.file, timestamp=1506940711108}]
at org.springframework.integration.transformer.AbstractTransformer.transform(AbstractTransformer.java:44)
I can't find any solution to this problem. Does anybody have the same problem and has found a way to fix it? Or is there any configuration I'm missing?
EDIT:
I'm using the local version of SCDF, version 1.2.2.RELEASE, so IO file operations work on the same filesystem, and I use version Ditmars.BUILD-SNAPSHOT of SCS.
Unfortunately, if I disable the file delete operation, the app still processes the message multiple times. Here are some code snippets, and if you like, this is my project repo:
This is my processor class:
@EnableBinding(Processor.class)
@EnableConfigurationProperties(UnTarProperties.class)
public class UnTarProcessor {

    @Autowired
    private UnTarProperties properties;

    @Autowired
    private Processor processor;

    @Bean
    public UncompressedResultSplitter splitter() {
        return new UncompressedResultSplitter();
    }

    @Bean
    public UnTarGzTransformer transformer() {
        UnTarGzTransformer unTarGzTransformer = new UnTarGzTransformer(properties.isUseGzCompression());
        unTarGzTransformer.setExpectSingleResult(properties.isSingleResult());
        unTarGzTransformer.setWorkDirectory(new File(properties.getWorkDirectory()));
        unTarGzTransformer.setDeleteFiles(properties.isDeleteFile());
        return unTarGzTransformer;
    }

    @Bean
    public IntegrationFlow process() {
        return IntegrationFlows.from(processor.input())
                .transform(transformer())
                .split(splitter())
                .channel(processor.output())
                .get();
    }
}
This is the core method used to decompress the file:
@Override
protected Object doCompressTransform(final Message<?> message) throws Exception {
    logger.info(String.format("Message '%s' unpacking started...", message));
    try (InputStream checkMessage = checkMessage(message);
         InputStream inputStream = (gzCompression
                 ? new BufferedInputStream(new GZIPInputStream(checkMessage))
                 : new BufferedInputStream(checkMessage))) {
        final Object payload = message.getPayload();
        final Object unzippedData;
        try (TarArchiveInputStream tarIn = new TarArchiveInputStream(inputStream)) {
            TarArchiveEntry entry = null;
            final SortedMap<String, Object> uncompressedData = new TreeMap<String, Object>();
            while ((entry = (TarArchiveEntry) tarIn.getNextEntry()) != null) {
                final String zipEntryName = entry.getName();
                final Date zipEntryTime = entry.getLastModifiedDate();
                final long zipEntryCompressedSize = entry.getSize();
                final String type = entry.isDirectory() ? "directory" : "file";
                final File tempDir = new File(workDirectory, message.getHeaders().getId().toString());
                tempDir.mkdirs(); // NOSONAR false positive
                final File destinationFile = new File(tempDir, zipEntryName);
                if (entry.isDirectory()) {
                    destinationFile.mkdirs(); // NOSONAR false positive
                }
                else {
                    unpackEntries(tarIn, entry, tempDir);
                    uncompressedData.put(zipEntryName, destinationFile);
                }
            }
            if (uncompressedData.isEmpty()) {
                unzippedData = null;
            }
            else {
                if (this.expectSingleResult) {
                    if (uncompressedData.size() == 1) {
                        unzippedData = uncompressedData.values().iterator().next();
                    }
                    else {
                        throw new MessagingException(message, String.format("The UnZip operation extracted %s "
                                + "result objects but expectSingleResult was 'true'.", uncompressedData.size()));
                    }
                }
                else {
                    unzippedData = uncompressedData;
                }
            }
            logger.info("Payload unpacking completed...");
        }
        finally {
            if (payload instanceof File && this.deleteFiles) {
                final File filePayload = (File) payload;
                if (!filePayload.delete() && logger.isWarnEnabled()) {
                    logger.warn("failed to delete File '" + filePayload + "'");
                }
            }
        }
        return unzippedData;
    }
    catch (Exception e) {
        throw new MessageHandlingException(message, "Failed to apply Zip transformation.", e);
    }
}
The exception is thrown by the checkMessage() method:
protected InputStream checkMessage(Message<?> message) throws FileNotFoundException {
    logger.info("Check message's payload type to decompress");
    InputStream inputStream;
    Object payload = message.getPayload();
    if (payload instanceof File) {
        final File filePayload = (File) payload;
        if (filePayload.isDirectory()) {
            throw new UnsupportedOperationException(String.format("Cannot unzip a directory: '%s'",
                    filePayload.getAbsolutePath()));
        }
        inputStream = new FileInputStream(filePayload);
    }
    else if (payload instanceof InputStream) {
        inputStream = (InputStream) payload;
    }
    else if (payload instanceof byte[]) {
        inputStream = new ByteArrayInputStream((byte[]) payload);
    }
    else {
        throw new IllegalArgumentException(String.format("Unsupported payload type '%s'. " +
                "The only supported payload types are java.io.File, byte[] and java.io.InputStream",
                payload.getClass().getSimpleName()));
    }
    return inputStream;
}
I really appreciate any help.
Thanks a lot
We would need more information: the versions of SCDF and the SCS apps, and at the very least the DSL showing how you deployed your apps.
Just checked your logs. Did you realize your consumer is failing to consume the message due to a FileNotFoundException? You are not receiving the same message multiple times; SCS is just trying to redeliver it before failing. Check your full logs and how you are failing to open the file in the specified location.
The exception is happening in your transformer, and you receive the message multiple times due to the retry configuration of SCS. Since the error is in your logic, it is hard to follow; it says FileNotFoundException, and I don't know what in your process puts the file there, so that may be the reason. It does not appear to be anything to do with SCS.
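If the redeliveries themselves are getting in the way of debugging, one knob worth trying is Spring Cloud Stream's consumer retry. A sketch, assuming the default binding name input from the Processor interface and the default retry settings (maxAttempts is 3 by default):
# Sketch: disable Spring Cloud Stream's consumer retry so a failing
# transform is attempted once instead of being redelivered with the same payload.
spring.cloud.stream.bindings.input.consumer.maxAttempts=1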

struts2-rest-plugin: sending JSON data to PUT/POST

I'm stuck trying to send JSON data to my Struts2 REST server using the struts2-rest-plugin.
It works with XML, but I can't seem to figure out the right JSON format to send it in.
Anybody has any experience with this?
Thanks,
Shaun
Update:
Sorry I wasn't clear. The problem is that Struts2 doesn't seem to be mapping the JSON data I send to my model in the controller.
Here's the code:
Controller:
public class ClientfeatureController extends ControllerParent implements ModelDriven<Object> {

    private ClientFeatureService clientFeatureService;
    private ClientFeature clientFeature = new ClientFeature();
    private List<ClientFeature> clientFeatureList;

    // Client ID
    private String id;

    public ClientfeatureController() {
        super(ClientfeatureController.class);
    }

    @Override
    public Object getModel() {
        return (clientFeatureList != null ? clientFeatureList : clientFeature);
    }

    /**
     * @return clientFeatureList through Struts2 model-driven design
     */
    public HttpHeaders show() {
        // logic to return all client features here. this works fine..
        // todo: add ETag and lastModified information for client caching purposes
        return new DefaultHttpHeaders("show").disableCaching();
    }

    // PUT request
    public String update() {
        logger.info("client id: " + clientFeature.getClientId());
        logger.info("clientFeature updated: " + clientFeature.getFeature().getDescription());
        return "update";
    }

    public HttpHeaders create() {
        logger.info("client id: " + clientFeature.getClientId());
        logger.info("feature description: " + clientFeature.getFeature().getDescription());
        return new DefaultHttpHeaders("create");
    }

    public String getId() {
        return id;
    }

    public void setId(String id) {
        this.id = id;
    }

    public void setClientFeatureService(ClientFeatureService clientFeatureService) {
        this.clientFeatureService = clientFeatureService;
    }

    public List<ClientFeature> getClientFeatureList() {
        return clientFeatureList;
    }

    public void setClientFeatureList(List<ClientFeature> clientFeatureList) {
        this.clientFeatureList = clientFeatureList;
    }

    public ClientFeature getClientFeature() {
        return clientFeature;
    }

    public void setClientFeature(ClientFeature clientFeature) {
        this.clientFeature = clientFeature;
    }
}
This is the URL I'm making the request to:
http://localhost:8080/coreserviceswrapper/clientfeature.json
- Method: POST or PUT (tried both; POST maps to create() and PUT maps to update())
- Header: Content-Type: application/json
Payload:
{
  "clientFeature": {
    "feature": {
      "id": 2,
      "enabled": true,
      "description": "description1",
      "type": "type1"
    },
    "countries": ["SG"],
    "clientId": 10
  }
}
And this is the output in the Struts2 logs when I make the request:
1356436 [http-bio-8080-exec-5] WARN net.sf.json.JSONObject - Tried to assign property clientFeature:java.lang.Object to bean of class com.foo.bar.entity.ClientFeature
1359043 [http-bio-8080-exec-5] INFO com.foo.bar.rest.ClientfeatureController - client id: null
Let me also add that XML requests work just fine:
URL: http://localhost:8080/coreserviceswrapper/clientfeature.xml
Method: POST/PUT
Content-Type: text/xml
Payload:
<com.foo.bar.entity.ClientFeature>
  <clientId>100</clientId>
  <feature>
    <description>test</description>
  </feature>
</com.foo.bar.entity.ClientFeature>
Output:
1738685 [http-bio-8080-exec-7] INFO com.foo.bar.rest.ClientfeatureController - client id: 100
1738685 [http-bio-8080-exec-7] INFO com.foo.bar.rest.ClientfeatureController - feature description: test
1738717 [http-bio-8080-exec-7] INFO org.apache.struts2.rest.RestActionInvocation - Executed action [/clientfeature!create!xml!200] took 1466 ms (execution: 1436 ms, result: 30 ms)
I also encountered the same issue. My environment:
Struts 2.3.16.3, jQuery 1.11, struts2-rest-plugin
Symptom: on posting JSON data, the REST controller does not parse the JSON data into the model.
Solution:
Since the controller is model-driven, the browser client can just post the JSON string, but it seems you have to force jQuery to change the content type of the AJAX call.
_self.update = function(model, callback) {
    $.ajax({
        beforeSend: function(xhrObj) {
            xhrObj.setRequestHeader("Content-Type", "application/json");
            xhrObj.setRequestHeader("Accept", "application/json");
        },
        type: 'PUT',
        url: this.svrUrl + "/" + model.id + this.extension,
        data: JSON.stringify(model), // '{"name":"' + model.name + '"}',
        //contentType: this.contentType,
        //dataType: this.dataType,
        processData: false,
        success: callback,
        error: function(req, status, ex) {},
        timeout: 60000
    });
};
The model data format is:
var model = {"id":"2",
"name":"name2",
"author":"author2",
"key":"key2"
}
When you PUT or POST data with "Content-Type: application/json", the plugin will handle it with the JSON handler automatically.
I had the same problem. Strangely, it got solved by changing the name 'clientFeature' to 'model'.
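That fix matches the model-driven behaviour described above: the JSON handler binds the request body directly to the model, so no wrapper key is expected. Presumably the question's payload would then look like this (same fields, no clientFeature wrapper; a sketch, not verified against that codebase):
{
  "feature": {
    "id": 2,
    "enabled": true,
    "description": "description1",
    "type": "type1"
  },
  "countries": ["SG"],
  "clientId": 10
}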
