I'm trying to load an S3 bucket from my Kinesis data streams (at the closure of each stream), but I am receiving an error that prevents me from doing this. Is there a way to do this with the Java SDK?
Error -
Error: The website redirect location must have a prefix of 'http://' or 'https://' or '/'. (Service: Amazon S3; Status Code: 400; Error Code: InvalidRedirectLocation; Request ID: ***; S3 Extended Request ID: ***; Proxy: null)
Method to load the S3 bucket with the stream -
public void uploadStreamToS3Bucket() {
    AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
            .withRegion(String.valueOf(awsRegion))
            .build();
    try {
        String fileName = connectionRequestRepository.findStream() + ".json";
        String bucketName = "downloadable-cases";
        String locationData = "arn:aws-us-***-1:kinesis:***:stream/" + connectionRequestRepository.findStream();
        s3Client.putObject(new PutObjectRequest(bucketName, fileName, locationData));
    } catch (AmazonServiceException ex) {
        System.out.println("Error: " + ex.getMessage());
    }
}
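The 400 InvalidRedirectLocation arises because the PutObjectRequest(String bucketName, String key, String redirectLocation) constructor treats its third string argument as a website redirect location, not as object content. A minimal sketch of the likely fix, uploading the string as the object body instead (reusing the bucketName, fileName, locationData, and s3Client variables from the snippet above):

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
// AWS SDK for Java v1 classes, as used in the question
import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.model.PutObjectRequest;

// Inside uploadStreamToS3Bucket(), replacing the putObject call:
byte[] bytes = locationData.getBytes(StandardCharsets.UTF_8);
ObjectMetadata metadata = new ObjectMetadata();
metadata.setContentType("application/json");
metadata.setContentLength(bytes.length);
// The InputStream + metadata overload uploads content rather than setting a redirect
s3Client.putObject(new PutObjectRequest(bucketName, fileName,
        new ByteArrayInputStream(bytes), metadata));
```

This is a sketch under the question's setup, not a definitive implementation; a later answer in this thread explains the same constructor-overload mix-up in more detail.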
Related
We have S3 storage with a lot of files: jpg, mp3, and others.
What do I need to do?
I need to redirect the client to get the file from our S3 without uploading it to our server, and I want the client to get the file on his PC with its name and extension.
So the client sends us a UUID, we find the link to this file on S3, and redirect like this:
@GetMapping("/test/{uuid}")
public ResponseEntity<Void> getFile(@PathVariable UUID uuid) {
    var url = storageServiceS3.getUrl(uuid);
    try {
        var name = storageServiceS3.getName(uuid);
        return ResponseEntity.status(HttpStatus.MOVED_PERMANENTLY)
                .header(HttpHeaders.LOCATION, url)
                .header(HttpHeaders.CONTENT_TYPE, new MimetypesFileTypeMap().getContentType(name))
                .header(HttpHeaders.CONTENT_DISPOSITION, "attachment; filename=" + name)
                .build();
    } catch (NoSuchKeyException ex) {
        return ResponseEntity.status(HttpStatus.NOT_FOUND)
                .build();
    }
}
Everything works well and the file downloads, but there is one problem: the file has no name (its name is still the key from S3) and no extension.
I think this code does not work correctly:
.header(HttpHeaders.CONTENT_DISPOSITION, "attachment; filename=" + name)
Is there any way to do this, or do I still need to upload the file to the server and then send it to the client?
Finally I found a solution: I use S3Presigner to make a presigned URL and redirect to it with a simple HTTP response.
@Bean
public S3Presigner getS3Presigner() {
    return S3Presigner.builder()
            .credentialsProvider(StaticCredentialsProvider.create(AwsBasicCredentials.create(ACCESS_KEY, SECRET_KEY)))
            .region(Region.of(REGION))
            .endpointOverride(URI.create(END_POINT))
            .build();
}
public String getPresignedURL(UUID uuid) {
    var name = getName(uuid);
    var contentDisposition = "attachment;filename=" + name;
    var contentType = new MimetypesFileTypeMap().getContentType(name);
    GetObjectRequest getObjectRequest = GetObjectRequest.builder()
            .bucket(BUCKET)
            .key(uuid.toString())
            .responseContentDisposition(contentDisposition)
            .responseContentType(contentType)
            .build();
    GetObjectPresignRequest getObjectPresignRequest = GetObjectPresignRequest.builder()
            .signatureDuration(Duration.ofMinutes(5))
            .getObjectRequest(getObjectRequest)
            .build();
    PresignedGetObjectRequest presignedGetObjectRequest = s3Presigner.presignGetObject(getObjectPresignRequest);
    return presignedGetObjectRequest.url().toString();
}
@GetMapping("/redirect/{uuid}")
public void redirectToS3(@PathVariable UUID uuid, HttpServletResponse response) {
    try {
        var url = storageServiceS3.getPresignedURL(uuid);
        response.sendRedirect(url);
    } catch (NoSuchKeyException | IOException e) {
        response.setStatus(404);
    }
}
It works pretty good ;)
@Алексеев станислав
A workaround for this is to consume your REST service from JavaScript, add the file's name in a new response header, and rename the file when the client downloads it.
// don't forget to allow the X-File-Name header in CORS in Spring
headers.add("X-File-Name", nameToBeDownloaded);
Example in Angular, but it can be ported to other languages:
this.http.get(uri_link_spring_redirecting_to_S3, {
    responseType: 'blob',
    observe: 'response'
}).subscribe(
    (response) => {
        var link = document.createElement('a');
        var file = new Blob([response.body], {
            type: 'text/csv;charset=utf-8;'
        });
        link.href = window.URL.createObjectURL(file);
        // fall back to a default name if the header is missing
        link.download = response?.headers?.get('X-File-Name') ?? 'download.csv';
        link.click();
    },
    error => {
        ...
    }
)
An article, AWS S3 with Java – Reactive, describes how to use the AWS SDK 2.0 client with WebFlux.
In the example, they use the following handler to upload to S3 and then return an HTTP Created response:
@PostMapping
public Mono<ResponseEntity<UploadResult>> uploadHandler(@RequestHeader HttpHeaders headers,
                                                        @RequestBody Flux<ByteBuffer> body) {
    long length = headers.getContentLength();
    String fileKey = UUID.randomUUID().toString();
    Map<String, String> metadata = new HashMap<String, String>();
    CompletableFuture<PutObjectResponse> future = s3client
            .putObject(PutObjectRequest.builder()
                            .bucket(s3config.getBucket())
                            .contentLength(length)
                            .key(fileKey)
                            .contentType(MediaType.APPLICATION_OCTET_STREAM.toString())
                            .metadata(metadata)
                            .build(),
                    AsyncRequestBody.fromPublisher(body));
    return Mono.fromFuture(future)
            .map((response) -> {
                checkResult(response);
                return ResponseEntity
                        .status(HttpStatus.CREATED)
                        .body(new UploadResult(HttpStatus.CREATED, new String[] {fileKey}));
            });
}
This works as intended. Trying to learn WebFlux, I expected that the following would complete the HTTP upload to S3 asynchronously in the same thread that the subscribe method is called on:
@PostMapping
public Mono<ResponseEntity<UploadResult>> uploadHandler(@RequestHeader HttpHeaders headers, @RequestBody Flux<ByteBuffer> body) {
    long length = headers.getContentLength();
    String fileKey = UUID.randomUUID().toString();
    Map<String, String> metadata = new HashMap<String, String>();
    Mono<PutObjectResponse> putObjectResponseMono = Mono.fromFuture(s3client
            .putObject(PutObjectRequest.builder()
                            .bucket(s3config.getBucket())
                            .contentLength(length)
                            .key(fileKey)
                            .contentType(MediaType.APPLICATION_OCTET_STREAM.toString())
                            .metadata(metadata)
                            .build(),
                    AsyncRequestBody.fromPublisher(body)));
    putObjectResponseMono
            .doOnError((e) -> {
                log.error("Error putting object to S3 " + Thread.currentThread().getName(), e);
            })
            .subscribe((response) -> {
                log.info("Response from S3: " + response.toString() + " on " + Thread.currentThread().getName());
            });
    return Mono.just(ResponseEntity
            .status(HttpStatus.CREATED)
            .body(new UploadResult(HttpStatus.CREATED, new String[]{fileKey})));
}
The HTTP POST completes as expected, but the S3 put request fails with this log message:
2020-06-10 12:31:22.275 ERROR 800 --- [tyEventLoop-0-4] c.b.aws.reactive.s3.UploadResource : Error happened on aws-java-sdk-NettyEventLoop-0-4
software.amazon.awssdk.core.exception.SdkClientException: 400 BAD_REQUEST "Request body is missing: public reactor.core.publisher.Mono<org.springframework.http.ResponseEntity<com.baeldung.aws.reactive.s3.UploadResult>> com.baeldung.aws.reactive.s3.UploadResource.uploadHandler(org.springframework.http.HttpHeaders,reactor.core.publisher.Flux<java.nio.ByteBuffer>)"
at software.amazon.awssdk.core.exception.SdkClientException$BuilderImpl.build(SdkClientException.java:97) ~[sdk-core-2.10.27.jar:na]
at software.amazon.awssdk.core.internal.util.ThrowableUtils.asSdkException(ThrowableUtils.java:98) ~[sdk-core-2.10.27.jar:na]
at software.amazon.awssdk.core.internal.http.pipeline.stages.AsyncRetryableStage$RetryExecutor.retryIfNeeded(AsyncRetryableStage.java:125) ~[sdk-core-2.10.27.jar:na]
at software.amazon.awssdk.core.internal.http.pipeline.stages.AsyncRetryableStage$RetryExecutor.lambda$execute$0(AsyncRetryableStage.java:107) ~[sdk-core-2.10.27.jar:na]
........
I suspect the explanation involves the request to S3 being run on its own thread, but I'm stumped working out what is going wrong. Can you shed any light on it?
Try this:
@RequestBody Flux<ByteBuffer> body
>>> replace with @RequestBody byte[] body
and
AsyncRequestBody.fromPublisher(body)
>>> replace with AsyncRequestBody.fromBytes(body)
And if you want to subscribe from another thread, use .subscribeOn(Scheduler).
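Put together, that suggestion would look roughly like this (a sketch only, assuming the same s3client, s3config, and UploadResult from the question; note that it buffers the whole upload in memory, and that chaining the put into the returned Mono also ensures the request body is consumed before the response is sent):

```java
@PostMapping
public Mono<ResponseEntity<UploadResult>> uploadHandler(@RequestHeader HttpHeaders headers,
                                                        @RequestBody byte[] body) {
    String fileKey = UUID.randomUUID().toString();
    // fromBytes hands the SDK a fully-buffered body, avoiding the
    // "Request body is missing" failure seen with the unconsumed Flux
    CompletableFuture<PutObjectResponse> future = s3client.putObject(
            PutObjectRequest.builder()
                    .bucket(s3config.getBucket())
                    .contentLength((long) body.length)
                    .key(fileKey)
                    .contentType(MediaType.APPLICATION_OCTET_STREAM.toString())
                    .build(),
            AsyncRequestBody.fromBytes(body));
    // Complete the HTTP response only after the S3 put finishes
    return Mono.fromFuture(future)
            .map(response -> ResponseEntity.status(HttpStatus.CREATED)
                    .body(new UploadResult(HttpStatus.CREATED, new String[]{fileKey})));
}
```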
I am writing a Java method that takes 3 string parameters: bucketName, objectKey, objectContent. The method then puts the object into the bucket. The following code works with no problems.
AmazonS3 s3 = AmazonS3ClientBuilder.standard().withRegion(REGION).build();
s3.putObject(bucketName, objectKey, content);
Now I want to set a content type for the objects, because I will be using the method to store e.g. "text/plain" or "text/xml" files. So I use the following code.
AmazonS3 s3 = AmazonS3ClientBuilder.standard().withRegion(REGION).build();
byte[] fileContentBytes = content.getBytes(StandardCharsets.UTF_8);
InputStream fileInputStream = new ByteArrayInputStream(fileContentBytes);
ObjectMetadata metaData = new ObjectMetadata();
metaData.setContentType(contentType);
metaData.setContentLength(fileContentBytes.length);
PutObjectRequest putObjReq = new PutObjectRequest(bucketName, objectKey, content);
putObjReq.setMetadata(metaData);
s3.putObject(putObjReq);
When I run this code, I get an exception, as listed below. Why?
com.amazonaws.services.s3.model.AmazonS3Exception: The website redirect location must have a prefix of 'http://' or 'https://' or '/'. (Service: Amazon S3; Status Code: 400; Error Code: InvalidRedirectLocation; Request ID: F8032DFF52EBF6F2; S3 Extended Request ID: vZX1/oTjeWU0Fok6twiyB5mEi2d0GDXYWT+akeETrapXo9CUbG+DgcabAaiFVlGXOu072vGghD4=), S3 Extended Request ID: vZX1/oTjeWU0Fok6twiyB5mEi2d0GDXYWT+akeETrapXo9CUbG+DgcabAaiFVlGXOu072vGghD4=
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1712)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1367)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1113)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:770)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:744)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:726)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:686)
at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:668)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:532)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:512)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4926)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4872)
at com.amazonaws.services.s3.AmazonS3Client.access$300(AmazonS3Client.java:390)
at com.amazonaws.services.s3.AmazonS3Client$PutObjectStrategy.invokeServiceCall(AmazonS3Client.java:5806)
at com.amazonaws.services.s3.AmazonS3Client.uploadObject(AmazonS3Client.java:1794)
at com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1754)
at util.DataUtils.saveContentToS3(DataUtils.java:155)
at builder.SEOGenerator.main(SEOGenerator.java:53)
I should note that I use this S3 bucket to host a static website. I use CloudFront in front of S3 and then Route 53 for my domain. My S3 bucket policy is as follows.
{
    "Version": "2012-10-17",
    "Id": "http referer policy - my-domain.com",
    "Statement": [
        {
            "Sid": "Allow get requests originating from my domain",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::my-s3-bucket/*",
            "Condition": {
                "StringLike": {
                    "aws:Referer": [
                        "http://www.my-domain.com/*",
                        "http://my-domain.com/*",
                        "https://www.my-domain.com/*",
                        "https://my-domain.com/*"
                    ]
                }
            }
        }
    ]
}
There are 3 different ways to call the PutObjectRequest constructor. You're using this one:
PutObjectRequest(String bucketName, String key, String redirectLocation)
So, your 'content' is being treated as a redirect location, hence that error.
I think your intent is to use this one instead:
PutObjectRequest(String bucketName, String key, InputStream input, ObjectMetadata metadata)
Which means you'd have to do something like:
AmazonS3 s3 = AmazonS3ClientBuilder.standard().withRegion(REGION).build();
byte[] fileContentBytes = content.getBytes(StandardCharsets.UTF_8);
InputStream in = new ByteArrayInputStream(fileContentBytes);
ObjectMetadata metaData = new ObjectMetadata();
metaData.setContentType(contentType);
metaData.setContentLength(fileContentBytes.length);
PutObjectRequest putObjReq = new PutObjectRequest(bucketName, objectKey, in, metaData);
s3.putObject(putObjReq);
I'm trying to connect to an S3 bucket to upload/download images.
My code to create the S3 client is as follows:
AmazonS3 s3 = AmazonS3ClientBuilder
        .standard()
        .withRegion("EU-WEST-2")
        .build();
I'm getting exceptions as follows:
com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 8574612863BD8DC2; S3 Extended Request ID: ueyZy/RLMerNtHeYaOTlRVAqD7w1CksWrjfNLuMgxPWXQbNGDF1Y04RUs4Gh9HeHMwLXxjBc+5o=), S3 Extended Request ID: ueyZy/RLMerNtHeYaOTlRVAqD7w1CksWrjfNLuMgxPWXQbNGDF1Y04RUs4Gh9HeHMwLXxjBc+5o=
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1630)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1302)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1056)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:743)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:717)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:699)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:667)
at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:649)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:513)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4330)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4277)
at com.amazonaws.services.s3.AmazonS3Client.getObject(AmazonS3Client.java:1410)
at uk.nhs.digital.cid.pyi.services.paycasso.PaycassoService.registerDocument(PaycassoService.java:80)
at uk.nhs.digital.cid.pyi.harness.PaycassoClientTestHarness.testVeriSure(PaycassoClientTestHarness.java:61)
at uk.nhs.digital.cid.pyi.harness.PaycassoClientTestHarness.main(PaycassoClientTestHarness.java:36)
Try this; you need to change env.getProperty("amazon.accessKey") and env.getProperty("amazon.secretKey") as per your access key and secret.
public AmazonS3 getAmazonS3Client() {
    ClientConfiguration clientConfig = new ClientConfiguration();
    clientConfig.setProtocol(Protocol.HTTP);
    AmazonS3 s3client = new AmazonS3Client(getAmazonCredentials(), clientConfig);
    s3client.setS3ClientOptions(S3ClientOptions
            .builder()
            .setPathStyleAccess(true)
            .disableChunkedEncoding().build());
    return s3client;
}
public AWSCredentials getAmazonCredentials() {
    AWSCredentials credentials = new BasicAWSCredentials(
            env.getProperty("amazon.accessKey"),
            env.getProperty("amazon.secretKey")
    );
    return credentials;
}
To check that the bucket exists and to upload a directory, check this:
AmazonS3 s3client = amazonS3ClientService.getAmazonS3Client();
if (!s3client.doesBucketExistV2(env.getProperty("amazon.bucket"))) {
    System.out.println("Bucket does not exist.");
    return RepeatStatus.FINISHED;
}
// Upload Dir
TransferManager transferManager = new TransferManager(amazonS3ClientService.getAmazonCredentials());
MultipleFileUpload upload =
        transferManager.uploadDirectory(env.getProperty("amazon.bucket"), file.getName(), file, true);
If you want to upload a single file, then try this:
s3client.putObject(bucket_name, key_name, new File(file_path));
You have two problems.
You are using a string for the region. You need to use .withRegion(Regions.EU_WEST_2).
From the comments to your question, I understand that you are not using credentials. Even if your bucket is public, you must use AWS credentials to use AWS APIs. Anonymous credentials are not supported.
If you want to use anonymous credentials (which means no credentials) use the normal HTTP URL: https://s3.amazonaws.com/bucket/object with a library such as HttpUrlConnection.
In some cases you are allowed to use a string for .withRegion(), but only if the region is not in the Regions enum.
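Putting both fixes together, a minimal sketch of the client setup (the profile credentials provider here is just one option; any configured AWS credentials provider works):

```java
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

// Use the Regions enum for the region and supply explicit credentials,
// rather than the raw string "EU-WEST-2" with no credentials configured
AmazonS3 s3 = AmazonS3ClientBuilder.standard()
        .withRegion(Regions.EU_WEST_2)
        .withCredentials(new ProfileCredentialsProvider())
        .build();
```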
Give your IAM role programmatic access, and in the bucket policy grant write permission:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "mybucketpolicy",
            "Effect": "Allow",
            "Principal": {"Service": "s3.amazonaws.com"},
            "Action": ["s3:PutObject"],
            "Resource": ["arn:aws:s3:::destination-bucket/*"],
            "Condition": {
                "ArnLike": {
                    "aws:SourceArn": "arn:aws:s3:::source-bucket"
                },
                "StringEquals": {
                    "aws:SourceAccount": "accid",
                    "s3:x-amz-acl": "bucket-owner-full-control"
                }
            }
        }
    ]
}
I have tried this as well:
AWSCredentials credentials;
try {
    credentials = new ProfileCredentialsProvider().getCredentials();
} catch (Exception e) {
    throw new AmazonClientException("Cannot load the credentials from the credential profiles file. "
            + "Please make sure that your correct credentials file is at the correct "
            + "location (/Users/userid/.aws/credentials), and is in valid format.", e);
}
AWSSecurityTokenServiceClient stsClient = new AWSSecurityTokenServiceClient(credentials);
AssumeRoleRequest assumeRequest = new AssumeRoleRequest().withRoleArn(ROLE_ARN).withDurationSeconds(3600)
        .withRoleSessionName("demo");
AssumeRoleResult assumeResult = stsClient.assumeRole(assumeRequest);
BasicSessionCredentials temporaryCredentials = new BasicSessionCredentials(
        assumeResult.getCredentials().getAccessKeyId(), assumeResult.getCredentials().getSecretAccessKey(),
        assumeResult.getCredentials().getSessionToken());
s3Client = new AmazonS3Client(temporaryCredentials).withRegion(Regions.EU_WEST_2);
I have to send a file from the server (from its file system) to the client (another PC, storing the file in a particular folder) through Java code and a REST web service. These files can be as large as 120 MB.
I used MultipartFile to upload files from my web page, but I don't know how to download on the client side.
Ideally, the best way would be a REST web service that returns both the file and a message with the result of the method (true or false if there was an error).
Do you have an idea?
At the moment I use this code on the server:
@Override
@RequestMapping(value = "/oldmethod", method = RequestMethod.GET)
public @ResponseBody Response getAcquisition(@RequestParam(value = "path", defaultValue = "/home") String path) {
    File file;
    try {
        file = matlabClientServices.getFile(path);
        if (file.exists()) {
            InputStream inputStream = new FileInputStream(path);
            byte[] out = org.apache.commons.io.IOUtils.toByteArray(inputStream);
            return new Response(true, true, out, null);
        }
        else
            return new Response(false, false, "File doesn't exist!", null);
    } catch (Exception e) {
        ErrorResponse errorResponse = ErrorResponseBuilder.buildErrorResponse(e);
        LOG.error("Threw exception in MatlabClientControllerImpl::getAcquisition :" + errorResponse.getStacktrace());
        return new Response(false, false, "Error during file retrieving!", errorResponse);
    }
}
but on the client the code below doesn't work:
public Response getFileTest(@RequestParam(value = "path", defaultValue = "/home") String path) {
    RestTemplate restTemplate = new RestTemplate();
    Response response = restTemplate.getForObject("http://localhost:8086/ATS/client/file/oldmethod/?path={path}", Response.class, path);
    if (response.isStatus() && response.isSuccess()) {
        try {
            Files.write(Paths.get("PROVIAMOCI.txt"), org.apache.commons.io.IOUtils.toByteArray(response.getResult().toString()));
        } catch (IOException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }
    }
    return response;
}
It writes the string representation of the byte[] rather than the original text.
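The client symptom is consistent with Jackson Base64-encoding the byte[] field when the Response is serialized to JSON: getResult() then comes back as a Base64 string, and calling IOUtils.toByteArray on that string yields the bytes of the encoded text, not the original file. A hedged sketch of decoding it on the client (assuming getResult().toString() returns that Base64 string):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class DecodeResult {
    // Jackson serializes byte[] fields as Base64 strings in JSON;
    // decode the string to recover the original bytes.
    public static byte[] decodePayload(String base64Result) {
        return Base64.getDecoder().decode(base64Result);
    }

    public static void main(String[] args) {
        // Simulate what JSON serialization does to a byte[] field on the server
        String encoded = Base64.getEncoder()
                .encodeToString("original text".getBytes(StandardCharsets.UTF_8));
        byte[] decoded = decodePayload(encoded);
        System.out.println(new String(decoded, StandardCharsets.UTF_8)); // prints "original text"
        // On the client, the write would become:
        // Files.write(Paths.get("PROVIAMOCI.txt"),
        //         decodePayload(response.getResult().toString()));
    }
}
```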
As per my understanding, FileSystemResource is used to access files through a file-system URI, not over HTTP. With FileSystemResource you can access files on your local machine from local code, and files on the server from server code, but you cannot access the server's file system from the local machine.