Spring WebFlux and AWS SDK 2.x: S3AsyncClient timeout - java

I'm implementing a reactive project with Spring Boot 2.3.1, WebFlux, Spring Data with the reactive MongoDB driver, and AWS SDK 2.14.6.
I have a CRUD service that persists an entity in MongoDB and must upload a file to S3. I'm using the SDK's asynchronous method s3AsyncClient.putObject and I'm facing some issues. The CompletableFuture throws the following exception:
java.util.concurrent.CompletionException: software.amazon.awssdk.core.exception.ApiCallTimeoutException: Client execution did not complete before the specified timeout configuration: 60000 millis
at java.base/java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:314) ~[na:na]
Suppressed: reactor.core.publisher.FluxOnAssembly$OnAssemblyException:
Assembly trace from producer [reactor.core.publisher.MonoMapFuseable] :
reactor.core.publisher.Mono.map(Mono.java:3054)
br.com.wareline.waredrive.service.S3Service.uploadFile(S3Service.java:94)
The file I'm trying to upload is about 34 KB; it is a simple text file.
The upload method is in my S3Service.java class, which is autowired into DocumentoService.java:
@Component
public class S3Service {

    @Autowired
    private final ConfiguracaoService configuracaoService;

    public Mono<PutObjectResponse> uploadFile(final HttpHeaders headers, final Flux<ByteBuffer> body, final String fileKey, final String cliente) {
        return configuracaoService.findByClienteId(cliente)
                .switchIfEmpty(Mono.error(new ResponseStatusException(HttpStatus.NOT_FOUND, String.format("Configuration with id %s not found", cliente))))
                .map(configuracao -> uploadFileToS3(headers, body, fileKey, configuracao))
                .doOnSuccess(response -> checkResult(response));
    }
    private PutObjectResponse uploadFileToS3(final HttpHeaders headers, final Flux<ByteBuffer> body, final String fileKey, final Configuracao configuracao) {
        final long length = headers.getContentLength();
        if (length < 0) {
            throw new UploadFailedException(HttpStatus.BAD_REQUEST.value(), Optional.of("required header missing: Content-Length"));
        }
        final Map<String, String> metadata = new HashMap<>();
        final MediaType mediaType = headers.getContentType() != null ? headers.getContentType() : MediaType.APPLICATION_OCTET_STREAM;
        final S3AsyncClient s3AsyncClient = getS3AsyncClient(configuracao);
        return s3AsyncClient.putObject(
                PutObjectRequest.builder()
                        .bucket(configuracao.getBucket())
                        .contentLength(length)
                        .key(fileKey)
                        .contentType(mediaType.toString())
                        .metadata(metadata)
                        .build(),
                AsyncRequestBody.fromPublisher(body))
                .whenComplete((resp, err) -> s3AsyncClient.close())
                .join();
    }
    public S3AsyncClient getS3AsyncClient(final Configuracao s3Props) {
        final SdkAsyncHttpClient httpClient = NettyNioAsyncHttpClient.builder()
                .readTimeout(Duration.ofMinutes(1))
                .writeTimeout(Duration.ofMinutes(1))
                .connectionTimeout(Duration.ofMinutes(1))
                .maxConcurrency(64)
                .build();
        final S3Configuration serviceConfiguration = S3Configuration.builder()
                .checksumValidationEnabled(false)
                .chunkedEncodingEnabled(true)
                .build();
        return S3AsyncClient.builder()
                .httpClient(httpClient)
                .region(Region.of(s3Props.getRegion()))
                .credentialsProvider(() -> AwsBasicCredentials.create(s3Props.getAccessKey(), s3Props.getSecretKey()))
                .serviceConfiguration(serviceConfiguration)
                .overrideConfiguration(builder -> builder.apiCallTimeout(Duration.ofMinutes(1)).apiCallAttemptTimeout(Duration.ofMinutes(1)))
                .build();
    }
}
I based my implementation on the AWS SDK documentation and the code examples at https://github.com/awsdocs/aws-doc-sdk-examples/blob/master/javav2/example_code/s3/src/main/java/com/example/s3/S3AsyncOps.java
I can't figure out the cause of the async client timeout. The weird thing is that when I use the same S3AsyncClient to download files from the bucket, it works. I tried increasing the timeout in the S3AsyncClient to about 5 minutes without success. I don't know what I'm doing wrong.

I found the error.
When defining the content length in PutObjectRequest.builder().contentLength(length), I used headers.getContentLength(), which is the size of the whole request. Other information is passed along in my request, which makes the content length greater than the real file length.
I found this in the Amazon documentation:
The number of bytes set in the "Content-Length" header is more than
the actual file size
When you send an HTTP request to Amazon S3, Amazon S3 expects to
receive the amount of data specified in the Content-Length header. If
the expected amount of data isn't received by Amazon S3, and the
connection is idle for 20 seconds or longer, then the connection is
closed. Be sure to verify that the actual file size that you're
sending to Amazon S3 aligns with the file size that is specified in
the Content-Length header.
https://aws.amazon.com/pt/premiumsupport/knowledge-center/s3-socket-connection-timeout-error/
The timeout error occurred because S3 waits until the number of bytes received reaches the content length declared by the client; the file finished being transmitted before reaching that declared length, so the connection stayed idle and S3 closed the socket.
I changed the content length to the real file size and the upload was successful.
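The size mismatch is easy to see in isolation. The sketch below (plain Java, no AWS SDK; the boundary string and part layout are illustrative, not the questioner's actual request) builds a minimal multipart body around a fake file payload and compares the total request body length — which is what headers.getContentLength() reflects — with the actual file length that should go into contentLength():

```java
import java.nio.charset.StandardCharsets;

public class ContentLengthDemo {

    // Builds a minimal multipart/form-data body around the file bytes.
    static byte[] multipartBody(byte[] fileBytes) {
        String boundary = "----demo-boundary";
        String head = "--" + boundary + "\r\n"
                + "Content-Disposition: form-data; name=\"file\"; filename=\"doc.txt\"\r\n"
                + "Content-Type: text/plain\r\n\r\n";
        String tail = "\r\n--" + boundary + "--\r\n";
        byte[] headBytes = head.getBytes(StandardCharsets.UTF_8);
        byte[] tailBytes = tail.getBytes(StandardCharsets.UTF_8);
        byte[] body = new byte[headBytes.length + fileBytes.length + tailBytes.length];
        System.arraycopy(headBytes, 0, body, 0, headBytes.length);
        System.arraycopy(fileBytes, 0, body, headBytes.length, fileBytes.length);
        System.arraycopy(tailBytes, 0, body, headBytes.length + fileBytes.length, tailBytes.length);
        return body;
    }

    public static void main(String[] args) {
        byte[] file = new byte[34 * 1024];                       // a 34 KB file, as in the question
        long requestContentLength = multipartBody(file).length;  // what headers.getContentLength() reflects
        long realFileLength = file.length;                       // what S3 should be told
        System.out.println("request Content-Length: " + requestContentLength);
        System.out.println("real file length:       " + realFileLength);
        // S3 waits for requestContentLength bytes but only realFileLength arrive,
        // so the connection goes idle and S3 eventually closes it.
    }
}
```

The gap between the two numbers is exactly the multipart framing overhead that made S3 keep waiting.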

Related

Unable to subscribe web-hook for SharePoint online

We are unable to subscribe a webhook for SharePoint Online from our Spring Boot application.
We are providing a valid notification URL (HTTPS enabled, publicly accessible, valid domain name, POST method) as a parameter while consuming the REST API to subscribe the webhook.
@PostMapping(value = "/spnotification")
@ResponseBody
public ResponseEntity<String> handleSPValidation(@RequestParam final String validationtoken) {
    LOG.info("validationToken : " + validationtoken);
    return ResponseEntity.ok().contentType(MediaType.TEXT_PLAIN)
            .body(validationtoken);
}
On this notification URL endpoint we are able to receive the validation string from SharePoint as a parameter, and we return the same string in less than 5 seconds with content type text/plain and HTTP status code 200.
We are still getting a 400 Bad Request with the error message below:
400 Bad Request: [{"error":{"code":"-1, System.InvalidOperationException","message":{"lang":"en-US","value":"Failed to validate the notification URL 'https://example.com/notification-listener-service/api/webhook/spnotification'."}}}]
Note: We are following this API documentation to subscribe the webhook.
We also tried the Graph API for the same purpose, but in that case we get the error below:
"error": {
    "code": "InvalidRequest",
    "message": "The server committed a protocol violation. Section=ResponseHeader Detail=CR must be followed by LF"
}
Please refer to the diagram (not reproduced here) for more context on this issue.
We would really appreciate it if someone could help us with this.
Try adding the headers attribute to the mapping:
@PostMapping(value = "/notification", headers = { "content-type=text/plain" })
@ResponseBody
public ResponseEntity<String> handleSPValidation(@RequestParam final String validationtoken) {
    LOG.info("validationToken : " + validationtoken);
    return ResponseEntity.ok().contentType(MediaType.TEXT_PLAIN)
            .body(validationtoken);
}
GitHub Code

Combine file upload and request body on a single endpoint in rest controller

The UI for my web app can either upload a file (CSV) or send the data as JSON in the request body. However, only one of the two — a file upload or a JSON body — will be present in any given request, never both. I am creating a Spring REST controller that combines file upload with accepting JSON request values.
With the endpoint below, tested from Postman, I am getting the following exception:
org.apache.tomcat.util.http.fileupload.FileUploadException: the request was rejected because no multipart boundary was found
@RestController
public class MovieController {

    private static final Logger LOGGER = LoggerFactory.getLogger(MovieController.class);

    @PostMapping(value = "/movies", consumes = {"multipart/form-data", "application/json"})
    public void postMovies(@RequestPart String movieJson, @RequestPart(value = "moviesFile") MultipartFile movieFile) {
        // Only one of the two values should be present; the other will be null
        LOGGER.info("Movies Json Body {}", movieJson);
        LOGGER.info("Movies File Upload {}", movieFile);
    }
}
I'd appreciate any help in getting this issue solved.
Note: I was able to build two separate endpoints for file upload and JSON request, but that won't satisfy my requirement, hence I'm looking for a solution that combines both.
Try something like:
@RequestMapping(value = "/movies", method = RequestMethod.POST, consumes = { "multipart/form-data", "application/json" })
public void postMovies(
        @RequestParam(value = "moviesFile", required = false) MultipartFile file,
        UploadRequestBody request) {
In UploadRequestBody you can add the parameters you want to send.
This will not send the data as JSON.
Edit: I forgot to add the variable for the multipart file, and I had mistakenly named the parameter class RequestBody, which is already an annotation name in Spring.
Hope it helps.
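For context, UploadRequestBody here is not a Spring type — it's a plain command object you define yourself, onto which Spring binds the non-file form fields by name. A minimal sketch (the field names below are made up for illustration, not from the original answer):

```java
// Hypothetical command object: Spring populates these fields from
// form parameters with matching names (e.g. title=...&year=...).
public class UploadRequestBody {

    private String title;
    private Integer year;

    public String getTitle() { return title; }
    public void setTitle(String title) { this.title = title; }

    public Integer getYear() { return year; }
    public void setYear(Integer year) { this.year = year; }
}
```

Note that this binds simple form fields, not a JSON document — which is what the answer means by "this will not send the data as JSON."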
I would suggest creating two separate endpoints. This splits and isolates the different functionality and reduces the complexity of your code. In addition, testing becomes easier and readability improves.
Your client has to know which variable to use anyway, so just choose different endpoints for your requests instead of using different variables on the same endpoint.
@PostMapping(value = "/movies-file-upload", consumes = {"multipart/form-data"})
public void postMoviesFile(@RequestPart(value = "moviesFile") MultipartFile movieFile) {
    LOGGER.info("Movies File Upload {}", movieFile);
}

@PostMapping(value = "/movies-upload", consumes = {"application/json"})
public void postMoviesJson(@RequestBody String movieJson) {
    LOGGER.info("Movies Json Body {}", movieJson);
}

API call with Java + STS returning "Content type 'application/octet-stream' not supported"

I am working on part of an API which requires calling an external API to retrieve data for one of its functions. The call was returning an HTTP 500 error with the description "Content type 'application/octet-stream' not supported." The call is expected to return 'application/json'.
I found that this happens because the response received doesn't explicitly specify a content type in its headers, even though its content is formatted as JSON, so my API defaulted to treating it as an octet stream.
The problem is, I'm not sure how to adjust for this. How would I get my API to treat the data it receives from the other API as application/json even if the other API doesn't specify a content type? Changing the other API to include a Content-Type header in its response is infeasible.
Code:
The API class:
@RestController
@RequestMapping(path = {Constants.API_DISPATCH_PROFILE_CONTEXT_PATH}, produces = {MediaType.APPLICATION_JSON_VALUE})
public class GetProfileApi {

    @Autowired
    private GetProfile GetProfile;

    @GetMapping(path = {"/{id}"})
    public Mono<GetProfileResponse> getProfile(@Valid @PathVariable String id) {
        return GetProfile.getDispatchProfile(id);
    }
}
The service calling the external API:
@Autowired
private RestClient restClient;

@Value("${dispatch.api.get_profile}")
private String getDispatchProfileUrl;

@Override
public Mono<GetProfileResponse> getDispatchProfile(String id) {
    return Mono.just(id)
            .flatMap(profileId -> {
                MultiValueMap<String, String> headers = new HttpHeaders();
                headers.add(HttpHeaders.CONTENT_TYPE, MediaType.APPLICATION_JSON_VALUE);
                return restClient.get(getDispatchProfileUrl, headers);
            })
            .flatMap(clientResponse -> {
                HttpStatus status = clientResponse.statusCode();
                log.info("HTTP Status : {}", status.value());
                // the code does not get past the line below before returning the error
                return clientResponse.bodyToMono(GetProfileClientResponse.class);
            })
            .map(clientResponseBody -> {
                log.debug("Response : {}", clientResponseBody);
                String subscriberId = clientResponseBody.getId();
                log.info("SubscriberResponse Code : {}", subscriberId);
                return GetProfileResponse.builder()
                        // builder call to be completed later
                        .build();
            });
}
The GET method for the RestClient:
public <T> Mono<ClientResponse> get(String baseURL, MultiValueMap<String, String> headers) {
    log.info("Executing REST GET method for URL : {}", baseURL);
    WebClient client = WebClient.builder()
            .baseUrl(baseURL)
            .defaultHeaders(httpHeaders -> httpHeaders.addAll(headers))
            .build();
    return client.get()
            .exchange();
}
One solution I attempted was changing produces = {MediaType.APPLICATION_JSON_VALUE} in the @RequestMapping of the API to produces = {MediaType.APPLICATION_OCTET_STREAM_VALUE}, but this caused a different error, HTTP 406 Not Acceptable: the server could not return the data in a representation the client requested, and I could not figure out how to correct that either.
How can I treat the response as JSON successfully even though it does not come with a content type?
Hopefully I have framed my question well enough; I've kind of been thrust into this and I'm still trying to figure out what's going on.
Are you using the Jackson library or the JAXB library for marshalling/unmarshalling?
Try annotating the Mono entity class with @XmlRootElement and see what happens.
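As a supplement: a common workaround for mislabeled responses (a stdlib-only sketch, not the thread's actual fix — in the real service you would register extra WebClient codecs or run a Jackson ObjectMapper over the raw string) is to read the body as a plain String, which ignores the declared Content-Type entirely, and deserialize it yourself. The demo below spins up a local server that returns JSON labeled application/octet-stream, standing in for the external API:

```java
import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class OctetStreamDemo {

    public static String fetchRawBody() throws Exception {
        // Local stand-in for the external API: JSON content, wrong content type.
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/profile", exchange -> {
            byte[] json = "{\"id\":\"42\"}".getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().set("Content-Type", "application/octet-stream");
            exchange.sendResponseHeaders(200, json.length);
            exchange.getResponseBody().write(json);
            exchange.close();
        });
        server.start();
        try {
            // BodyHandlers.ofString() does not care about the declared Content-Type,
            // so the JSON text comes through and can be fed to any JSON parser.
            HttpRequest request = HttpRequest.newBuilder(
                    URI.create("http://localhost:" + server.getAddress().getPort() + "/profile")).build();
            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            return response.body();
        } finally {
            server.stop(0);
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(fetchRawBody()); // prints {"id":"42"}
    }
}
```

In the WebFlux code above, the analogous move would be clientResponse.bodyToMono(String.class) followed by manual deserialization.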

OkHttp POST error "connection reset by peer" for unauthorized call and large payload

I've been struggling with the following issue:
I have a Spring Boot application which allows a user to post JSON content to an API endpoint. To use this endpoint, the user has to authenticate via basic authentication. I use OkHttp (3.6.0) as the HTTP client.
Now, if I post a large payload (> 4 MB) while being unauthorized, the OkHttp client fails with the following error:
java.net.SocketException: Connection reset by peer: socket write error
To reproduce the issue, I created a minimal client and server:
Server (Spring Boot Web App)
@SpringBootApplication
@RestController
public class App {

    @PostMapping
    public String create(@RequestBody Object obj) {
        System.out.println(obj);
        return "success";
    }

    public static void main(String[] args) {
        SpringApplication.run(App.class);
    }
}
Client (OkHttp 3.6.0)
public class Main {

    public static void main(String[] args) {
        OkHttpClient client = new OkHttpClient.Builder()
                .build();
        Request request = new Request.Builder()
                .url("http://localhost:8080")
                .header("Content-Type", "application/json")
                .post(RequestBody.create(MediaType.parse("application/json"), new File("src/main/java/content.json")))
                // .post(RequestBody.create(MediaType.parse("application/json"), new File("src/main/java/content-small.json")))
                .build();
        try {
            Response response = client.newCall(request).execute();
            System.out.println(response);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
Instead of the previously mentioned exception ("java.net.SocketException: Connection reset by peer: socket write error"), I would expect the response to be a default error message with HTTP status code 401, e.g. {"timestamp":1508767498068,"status":401,"error":"Unauthorized","message":"Full authentication is required to access this resource","path":"/"}. This is the result I get when using cURL and Postman as clients.
When I use the smaller payload (content-small.json; approx. 1 KB) instead of the large payload (content.json; approx. 4881 KB), I receive the expected response, i.e. Response{protocol=http/1.1, code=401, message=, url=http://localhost:8080/}.
The issue is actually embedded in a larger project with Eureka and Feign clients. Therefore, I would like to continue using the OkHttp client, and I need the expected behavior.
My problem analysis
Of course, I tried to solve this problem myself for quite some time. The IOException occurs when the request body is written to the HTTP stream:
if (permitsRequestBody(request) && request.body() != null) {
    Sink requestBodyOut = httpStream.createRequestBody(request, request.body().contentLength());
    BufferedSink bufferedRequestBody = Okio.buffer(requestBodyOut);
    request.body().writeTo(bufferedRequestBody);
    bufferedRequestBody.close();
}
My assumption is that the server closes the connection as soon as it receives the headers (as the request is unauthorized), but the client continues trying to write to the stream although it is already closed.
Update
I've also implemented a simple client with Unirest, which shows the same behavior:
public class UnirestMain {

    public static void main(String[] args) throws IOException, UnirestException {
        HttpResponse response = Unirest
                .post("http://localhost:8080")
                .header("Content-Type", "application/json")
                .body(Files.readAllBytes(new File("src/main/java/content.json").toPath()))
                // .body(Files.readAllBytes(new File("src/main/java/content-small.json").toPath()))
                .asJson();
        System.out.println(response.getStatus());
        System.out.println(response.getStatusText());
        System.out.println(response.getBody());
    }
}
Expected output: {"path":"/","error":"Unauthorized","message":"Full authentication is required to access this resource","timestamp":1508769862951,"status":401}
Actual output: java.net.SocketException: Connection reset by peer: socket write error
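The failure mode described in the problem analysis can be reproduced without OkHttp or Spring at all. In this stdlib-only sketch (all names are illustrative), the server closes the connection as soon as it has peeked at the request — like a server rejecting an unauthorized request without draining the body — while the client keeps writing a multi-megabyte payload, which surfaces as an IOException (connection reset / broken pipe) on the write path:

```java
import java.io.IOException;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class EarlyCloseDemo {

    public static boolean clientWriteFails() throws Exception {
        ServerSocket server = new ServerSocket(0);
        Thread serverThread = new Thread(() -> {
            try (Socket s = server.accept()) {
                s.getInputStream().read(new byte[1024]); // read a little of the request...
            } catch (IOException ignored) {
            }                                            // ...then close immediately
        });
        serverThread.start();

        boolean failed = false;
        try (Socket client = new Socket("localhost", server.getLocalPort())) {
            OutputStream out = client.getOutputStream();
            out.write("POST / HTTP/1.1\r\nContent-Length: 8388608\r\n\r\n".getBytes());
            byte[] chunk = new byte[8192];
            for (int i = 0; i < 1024; i++) {             // try to push 8 MB of body
                out.write(chunk);
                out.flush();
            }
        } catch (IOException e) {                        // "Connection reset" / "Broken pipe"
            failed = true;
        }
        serverThread.join();
        server.close();
        return failed;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("large write failed: " + clientWriteFails());
    }
}
```

A small body fits in the socket buffers before the close is noticed, which is why the 1 KB payload still yields the 401 response while the 4 MB payload dies mid-write.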

POST Streaming Audio over HTTP/2 in Android

Some background:
I am trying to develop a voice-related feature in an Android app where a user can search using voice; the server sends intermediate results while the user is speaking (which in turn update the UI) and the final result when the query is complete. Since the server accepts only a single HTTP/2 socket connection and Android's HttpURLConnection doesn't support HTTP/2 yet, I am using Retrofit 2.
I have looked at this, this and this, but each example has fixed-length data, or the size can be determined beforehand... which is not the case for audio search.
Here's what my method for POST looks like:
public interface Service {
    @Streaming
    @Multipart
    @POST("/api/1.0/voice/audio")
    Call<ResponseBody> post(
            @Part("configuration") RequestBody configuration,
            @Part("audio") RequestBody audio);
}
The method sends a configuration part (containing audio parameters as a JSON structure) and streaming audio in the following manner (expected POST request):
Content-Type = multipart/form-data;boundary=----------------------------41464684449247792368259
//HEADERS
----------------------------414646844492477923682591
Content-Type: application/json; charset=utf-8
Content-Disposition: form-data; name="configuration"
//JSON data structure with different audio parameters.
----------------------------414646844492477923682591
Content-Type: audio/wav; charset=utf-8
Content-Disposition: form-data; name="audio"
<audio_data>
----------------------------414646844492477923682591--
I'm not really sure how to send a streaming(!!) <audio_data> part. I tried using Okio to create the audio part this way (from: https://github.com/square/okhttp/wiki/Recipes#post-streaming):
public RequestBody createPartForAudio(final byte[] samples) {
    return new RequestBody() {
        @Override
        public MediaType contentType() {
            return MediaType.parse("audio/wav; charset=utf-8");
        }

        @Override
        public void writeTo(BufferedSink sink) throws IOException {
            sink.write(samples);
        }
    };
}
This didn't work, of course. Is this the right way to keep writing audio samples to the RequestBody? Where exactly should I call the Service.post(config, audio) method so that I don't end up posting the configuration part every time there is something in the audio buffer?
Also, since I have to keep sending streaming audio, how can I keep the same POST connection open and not close it until the user has stopped speaking?
I am basically new to OkHttp and Okio. If I have missed anything or part of the code is not clear, please let me know and I'll upload that snippet. Thank you.
You might be able to use a Pipe to produce data from your audio thread and consume it on your networking thread.
From a newly-created OkHttp recipe:
/**
 * This request body makes it possible for another
 * thread to stream data to the uploading request.
 * This is potentially useful for posting live event
 * streams like video capture. Callers should write
 * to {@code sink()} and close it to complete the post.
 */
static final class PipeBody extends RequestBody {
    private final Pipe pipe = new Pipe(8192);
    private final BufferedSink sink = Okio.buffer(pipe.sink());

    public BufferedSink sink() {
        return sink;
    }

    @Override public MediaType contentType() {
        ...
    }

    @Override public void writeTo(BufferedSink sink) throws IOException {
        sink.writeAll(pipe.source());
    }
}
This approach will work best if your data can be written as a continuous stream. If it can’t, you might be better off doing something similar with a BlockingQueue<byte[]> or similar.
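The producer/consumer idea behind that recipe can be seen with the JDK's own piped streams — a stdlib analogue of Okio's Pipe, shown here as a sketch rather than the OkHttp code itself. An "audio thread" writes chunks into one end as they become available, while the "network thread" drains the other end as one continuous stream, like writeTo(sink) calling sink.writeAll(pipe.source()):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.PipedInputStream;
import java.io.PipedOutputStream;

public class PipeDemo {

    public static byte[] streamThroughPipe(byte[][] chunks) throws Exception {
        PipedOutputStream producerEnd = new PipedOutputStream();
        PipedInputStream consumerEnd = new PipedInputStream(producerEnd, 8192);

        // "Audio thread": writes chunks as they arrive, then closes to end the stream.
        Thread producer = new Thread(() -> {
            try (PipedOutputStream out = producerEnd) {
                for (byte[] chunk : chunks) {
                    out.write(chunk);
                }
            } catch (IOException ignored) {
            }
        });
        producer.start();

        // "Network thread": drains the pipe as one continuous stream until close.
        ByteArrayOutputStream received = new ByteArrayOutputStream();
        byte[] buffer = new byte[1024];
        int n;
        while ((n = consumerEnd.read(buffer)) != -1) {
            received.write(buffer, 0, n);
        }
        producer.join();
        return received.toByteArray();
    }

    public static void main(String[] args) throws Exception {
        byte[][] samples = { {1, 2, 3}, {4, 5}, {6} };
        System.out.println("bytes received: " + streamThroughPipe(samples).length); // 6
    }
}
```

Closing the producer's end is what ends the upload — the same role as closing sink() in the PipeBody recipe.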
