I have two buckets, one private and one public. Files in the private bucket are uploaded with CannedAccessControlList.Private and files in the public bucket with CannedAccessControlList.PublicRead. Apart from that, the buckets are configured identically. Both use the following CORS configuration:
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<CORSRule>
<AllowedOrigin>*</AllowedOrigin>
<AllowedMethod>GET</AllowedMethod>
<MaxAgeSeconds>3000</MaxAgeSeconds>
<AllowedHeader>Authorization</AllowedHeader>
</CORSRule>
<CORSRule>
<AllowedOrigin>*</AllowedOrigin>
<AllowedMethod>PUT</AllowedMethod>
<MaxAgeSeconds>3000</MaxAgeSeconds>
<AllowedHeader>*</AllowedHeader>
</CORSRule>
</CORSConfiguration>
AmazonS3 s3client = new AmazonS3Client(new BasicAWSCredentials(AWS_ACCESS_KEY, AWS_SECRET_KEY));
// Build a pre-signed PUT request; the signed custom headers must also be sent by the client.
GeneratePresignedUrlRequest generatePresignedUrlRequest =
        new GeneratePresignedUrlRequest(AWS_BUCKET_PRIVATE_NAME, path, HttpMethod.PUT);
generatePresignedUrlRequest.setExpiration(expiration);
generatePresignedUrlRequest.putCustomRequestHeader("x-amz-acl", CannedAccessControlList.Private.toString());
generatePresignedUrlRequest.putCustomRequestHeader("content-type", fileType);
URL url = s3client.generatePresignedUrl(generatePresignedUrlRequest);
I am able to upload files to S3 in the scenarios below. All generated URLs are https by default.
1. Private bucket over https: works.
2. Public bucket over https: fails; after replacing https with http it works.
The problem is why the upload to the public bucket fails over https. I can't use http on the production system because it has SSL installed.
There are two things I have learned.
S3 has two different styles of writing URLs: path style and virtual host style. (You have to be careful when your bucket name looks like a hostname.)
Virtual Host Style
https://xyz.com.s3.amazonaws.com/myObjectKey
Path style
https://s3.amazonaws.com/xyz.com/myObjectKey
The Ajax upload fails in the first case over https, because the SSL certificate covers only s3.amazonaws.com and its direct subdomains; if the bucket name contains dots (like a hostname), the certificate check fails and the browser blocks the Ajax upload call.
The solution for this in Java:
s3client.setS3ClientOptions(new S3ClientOptions().withPathStyleAccess(true));
I still haven't figured out how the S3 client decides which region to use when forming the URL; sometimes it picks the proper "s3-ap-southeast-1.amazonaws.com" and sometimes it picks "s3.amazonaws.com".
In the latter case the upload will again fail with CORS errors: if the pre-signed URL points at s3.amazonaws.com, the response won't include "Access-Control-Allow-Origin" even though CORS is enabled on the bucket. So make sure you set the proper region endpoint using the code below.
s3client.setEndpoint("s3-ap-southeast-1.amazonaws.com"); // or whatever region your bucket is in
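Putting the two fixes together, the client setup from the question would look roughly like this. This is only a sketch: the region is the one used in this answer, and the request-building code is the same as in the question.

AmazonS3 s3client = new AmazonS3Client(new BasicAWSCredentials(AWS_ACCESS_KEY, AWS_SECRET_KEY));
// Force path-style URLs so the host stays under *.amazonaws.com and the certificate matches
s3client.setS3ClientOptions(new S3ClientOptions().withPathStyleAccess(true));
// Point the client at the bucket's region so the CORS headers are actually returned
s3client.setEndpoint("s3-ap-southeast-1.amazonaws.com");
// ...then generate the pre-signed URL exactly as in the question
URL url = s3client.generatePresignedUrl(generatePresignedUrlRequest);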
Reference: http://shlomoswidler.com/2009/08/amazon-s3-gotcha-using-virtual-host.html
I resolved this issue by creating a folder inside my bucket and generating the pre-signed URL for "my-bucket/folder" instead of "my-bucket".
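In terms of the code from the question, that just means prefixing the object key with the folder name; the bucket name here is illustrative:

GeneratePresignedUrlRequest generatePresignedUrlRequest =
        new GeneratePresignedUrlRequest("my-bucket", "folder/" + path, HttpMethod.PUT);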
I have two buckets, each with a Private ACL.
I have an authenticated link to the source:
String source = "https://bucket-name.s3.region.amazonaws.com/key?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Date=...&X-Amz-SignedHeaders=host&X-Amz-Expires=86400&X-Amz-Credential=...Signature=..."
and have been trying to use the Java SDK CopyObjectRequest to copy it into another bucket using:
AWSCredentials credentials = new BasicAWSCredentials(accessKey, secretKey);
AWSCredentialsProvider provider = new AWSStaticCredentialsProvider(credentials);
AmazonS3 s3Client = AmazonS3ClientBuilder
        .standard()
        .withCredentials(provider)
        .build();
AmazonS3URI sourceURI = new AmazonS3URI(source);
CopyObjectRequest request = new CopyObjectRequest(
        sourceURI.getBucket(), sourceURI.getKey(), destinationBucket, destinationKey);
s3Client.copyObject(request);
However I get AmazonS3Exception: Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied), because the AWS credentials I've set the SDK up with do not have access to the source file.
Is there a way I can provide an authenticated source URL instead of just the bucket and key?
This isn't supported. The PUT+Copy service API, which is used by s3Client.copyObject(), uses an internal S3 mechanism to copy the object, and the source object is passed as /bucket/key -- not as a full URL. There is no API functionality for fetching from a URL, S3 or otherwise.
With PUT+Copy, the user making the request to S3...
must have READ access to the source object and WRITE access to the destination bucket
https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectCOPY.html
The only alternative is download followed by upload.
Doing this from EC2... or a Lambda function running in the source region would be the most cost-effective, but if the object is larger than the Lambda temp space, you'll have to write hooks and handlers to read from the stream and juggle the chunks into a multipart upload... not impossible, but requires some mental gyrations in order to understand what you're actually trying to persuade your code to do.
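For objects small enough to stream straight through in one request (so none of the multipart juggling mentioned above), a rough download-then-upload sketch could look like the following. The class and method names are mine, and it assumes the pre-signed source URL grants GET access and the response carries a Content-Length header.

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.ObjectMetadata;

public class PresignedCopy {

    // Fetch the object via the authenticated (pre-signed) source URL,
    // then upload it to the destination bucket with your own credentials.
    public static void copyViaDownload(String presignedSourceUrl,
                                       String destinationBucket,
                                       String destinationKey) throws Exception {
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

        HttpURLConnection connection =
                (HttpURLConnection) new URL(presignedSourceUrl).openConnection();
        long contentLength = connection.getContentLengthLong();

        try (InputStream in = connection.getInputStream()) {
            ObjectMetadata metadata = new ObjectMetadata();
            metadata.setContentLength(contentLength); // lets the SDK stream instead of buffering
            s3.putObject(destinationBucket, destinationKey, in, metadata);
        }
    }
}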
I am using the Java OPC-UA client Eclipse Milo. Whenever I create a session using the server's endpoint URL, the method UaTcpStackClient.getEndpoints() changes the URL to localhost.
String endpointUrl = "opc.tcp://10.8.0.104:48809";
EndpointDescription[] endpoints = UaTcpStackClient.getEndpoints(endpointUrl).get();
EndpointDescription endpoint = Arrays.stream(endpoints)
.filter(e -> e.getSecurityPolicyUri().equals(securityPolicy.getSecurityPolicyUri()))
.findFirst().orElseThrow(() -> new Exception("no desired endpoints returned"));
However, endpoint.getEndpointUrl() returns opc.tcp://127.0.0.1:4880/, which causes the connection to fail. I have no idea why my OPC URL gets changed.
This is a pretty common problem when implementing a UA client.
The server is ultimately responsible for the contents of the endpoints you get back, and the one you're connecting to is (mis)configured to return 127.0.0.1 in the endpoint URLs, apparently.
You need to check the endpoints you get back from the server and then, depending on the nature of your application, either replace them right away with copied EndpointDescriptions containing corrected URLs, or let the user know and ask for permission first.
Either way, you need to create a new set of EndpointDescriptions in which you've corrected the URL before you go on to create the OpcUaClient.
Alternatively... you could figure out how to get your server configured properly so it returns endpoints containing a publicly reachable hostname or IP address.
Update 2:
Since v0.2.2 there has been EndpointUtil.updateUrl that can be used to modify EndpointDescriptions.
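A minimal sketch of using it, assuming the hostname from the question and that updateUrl(EndpointDescription, String) is the overload available in your Milo version:

import org.eclipse.milo.opcua.stack.core.types.structured.EndpointDescription;
import org.eclipse.milo.opcua.stack.core.util.EndpointUtil;

// Replace the (mis)configured host in the returned endpoint with the address
// you actually connected to; the security settings and certificate are preserved.
EndpointDescription updated = EndpointUtil.updateUrl(endpoint, "10.8.0.104");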
Update:
The code to replace the endpoint URL could be some variation of this:
private static EndpointDescription updateEndpointUrl(
EndpointDescription original, String hostname) throws URISyntaxException {
URI uri = new URI(original.getEndpointUrl()).parseServerAuthority();
String endpointUrl = String.format(
"%s://%s:%s%s",
uri.getScheme(),
hostname,
uri.getPort(),
uri.getPath()
);
return new EndpointDescription(
endpointUrl,
original.getServer(),
original.getServerCertificate(),
original.getSecurityMode(),
original.getSecurityPolicyUri(),
original.getUserIdentityTokens(),
original.getTransportProfileUri(),
original.getSecurityLevel()
);
}
Caveat: this works in most cases, but one notable case where it does not work is when the remote endpoint URL contains characters that aren't allowed in a URL hostname (according to the RFC), such as an underscore ('_'), which unfortunately IS allowed in e.g. the hostname of a Windows machine. So you may need some other method of parsing the endpoint URL rather than relying on the URI class to do it.
I'm trying to generate a pre-signed URL a client can use to upload an image to a specific S3 bucket. I've successfully generated requests to GET files, like so:
GeneratePresignedUrlRequest urlRequest = new GeneratePresignedUrlRequest(bucket, filename);
urlRequest.setMethod(method);
urlRequest.setExpiration(expiration);
where expiration and method are Date and HttpMethod objects respectively.
Now I'm trying to create a URL to allow users to PUT a file, but I can't figure out how to set the maximum content-length. I did find information on POST policies, but I'd prefer to use PUT here - I'd also like to avoid constructing the JSON, though that doesn't seem possible.
Lastly, an alternative answer could be some way to pass an image upload from the API Gateway to Lambda so I can upload it from Lambda to S3 after validating file type and size (which isn't ideal).
While I haven't managed to limit the file size on upload, I ended up creating a Lambda function that is activated on upload to a temporary bucket. The function has a signature like the one below:
public static void checkUpload(S3EventNotification event) {
(This is notable because all the guides I found online refer to an S3Event class that doesn't seem to exist anymore.)
The function pulls the file's metadata (not the file itself, as that potentially counts as a large download) and checks the file size. If it's acceptable, it downloads the file then uploads it to the destination bucket. If not, it simply deletes the file.
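A rough sketch of such a handler is below. The class name, bucket name, and size limit are my own placeholders, and it uses a server-side copy as a shortcut where the answer above describes a download and re-upload.

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.event.S3EventNotification;
import com.amazonaws.services.s3.model.ObjectMetadata;

public class UploadValidator {

    private static final long MAX_SIZE_BYTES = 5L * 1024 * 1024;              // assumed limit
    private static final String DESTINATION_BUCKET = "my-destination-bucket"; // hypothetical

    private final AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

    public void checkUpload(S3EventNotification event) {
        for (S3EventNotification.S3EventNotificationRecord record : event.getRecords()) {
            String bucket = record.getS3().getBucket().getName();
            String key = record.getS3().getObject().getKey();

            // HEAD request only: inspect the size without downloading the object.
            ObjectMetadata metadata = s3.getObjectMetadata(bucket, key);

            if (metadata.getContentLength() <= MAX_SIZE_BYTES) {
                // Acceptable: move it to the real bucket (server-side copy).
                s3.copyObject(bucket, key, DESTINATION_BUCKET, key);
            }

            // Either way, clean up the temporary object.
            s3.deleteObject(bucket, key);
        }
    }
}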
This is far from ideal, as uploads failing to meet the criteria will seem to work but then simply never show up (as S3 will issue a 200 status code on upload without caring what Lambda's response is).
This is effectively a workaround rather than a solution, so I won't be accepting this answer.
I found the follow note, which describes exactly what I'd like to do:
Note: If your users are only uploading resources (writing) to an access-controlled bucket, you can use the resumable uploads functionality of Google Cloud Storage, and avoid signing URLs or requiring a Google account. In a resumable upload scenario, your (server-side) code authenticates and initiates an upload to Google Cloud Storage without actually uploading any data. The initiation request returns an upload ID, which can then be used in a client request to upload the data. The client request does not need to be signed because the upload ID, in effect, acts as an authentication token. If you choose this path, be sure to transmit the upload ID over HTTPS.
https://cloud.google.com/storage/docs/access-control#Signed-URLs
However, I cannot figure out how to do this with the Google Cloud Storage Library for Java.
https://developers.google.com/resources/api-libraries/documentation/storage/v1/java/latest/
I can't find any reference to resumable files, or getting the URL for a file anywhere in this API. How can I do this?
That library does not expose the URLs that it creates to its caller, which means you can't use it to accomplish this. If you want to use either signed URLs or the trick you mention above, you'll need to implement it manually.
I would advise going with the signed URL solution over the solution where the server initializes the resumable upload, if possible. It's more flexible and easier to get right, and there are some odd edge cases with the latter method that you could run into.
Someone wrote up a quick example of signing a URL from App Engine a while back in another question: Cloud storage and secure download strategy on app engine. GCS acl or blobstore
You can build the URL yourself. Here is an example:
// "credential" here is assumed to be an App Engine AppIdentityCredential
// (it exposes getAppIdentityService() and getScopes()).
OkHttpClient client = new OkHttpClient();
AppIdentityService appIdentityService = credential.getAppIdentityService();
Collection<String> scopes = credential.getScopes();
String accessToken = appIdentityService.getAccessToken(scopes).getAccessToken();

// Initiate a resumable upload session; the response's Location header is the
// session URI the browser client can then upload the data to.
Request request = new Request.Builder()
    .url("https://www.googleapis.com/upload/storage/v1/b/" + bucket + "/o?name=" + fileName + "&uploadType=resumable")
    .post(RequestBody.create(MediaType.parse(mimeType), new byte[0]))
    .addHeader("X-Upload-Content-Type", mimeType)
    .addHeader("X-Upload-Content-Length", "" + length)
    .addHeader("Origin", "http://localhost:8080") // must match the origin the browser will upload from
    .addHeader("authorization", "Bearer " + accessToken)
    .build();
Response response = client.newCall(request).execute();
return response.header("location");
It took some digging, but I came up with the following which does the right thing. Some official documentation on how to do this would have been nice, especially because the endpoint for actually triggering the resumable upload is different from what the docs call out. What is here came from using the gsutil tool to sign requests and then working out what was being done. The under-documented additional thing is that the code which POSTs to this URL to get a resumable session URL must include the "x-goog-resumable: start" header to trigger the upload. From there, everything is the same as the docs for performing a resumable upload to GCS.
import base64
import datetime
import time
import urllib
from google.appengine.api import app_identity
SIGNED_URL_EXPIRATION = datetime.timedelta(days=7)
def SignResumableUploadUrl(gcs_resource_path):
"""Generates a signed resumable upload URL.
Note that documentation on this ability is sketchy. The canonical source
is derived from running the gsutil program to generate a RESUMABLE URL
with the "-m RESUMABLE" argument. Run "gsutil help signurl" for info and
the following for an example:
gsutil signurl -m RESUMABLE -d 10m keyfile gs://bucket/file/name
Note that this generates a URL different from the standard mechanism for
deriving a resumable start URL and the initiator needs to add the header:
x-goog-resumable:start
Args:
gcs_resource_path: The path of the GCS resource, including bucket name.
Returns:
A full signed URL.
"""
method = "POST"
expiration = datetime.datetime.utcnow() + SIGNED_URL_EXPIRATION
expiration = int(time.mktime(expiration.timetuple()))
signature_string = "\n".join([
method,
"", # content md5
"", # content type
str(expiration),
"x-goog-resumable:start",
gcs_resource_path
])
_, signature_bytes = app_identity.sign_blob(signature_string)
signature = base64.b64encode(signature_bytes)
query_params = {
"GoogleAccessId": app_identity.get_service_account_name(),
"Expires": str(expiration),
"Signature": signature,
}
return "{endpoint}{resource}?{querystring}".format(
endpoint="https://storage.googleapis.com",
resource=gcs_resource_path,
querystring=urllib.urlencode(query_params))
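For completeness, the client side that consumes this signed URL might look roughly like the following, in Java with OkHttp 3 (matching the earlier answer); the class and method names are mine. No Content-Type is sent because the signature above signs an empty content type.

import java.io.IOException;

import okhttp3.OkHttpClient;
import okhttp3.Request;
import okhttp3.RequestBody;
import okhttp3.Response;

public class ResumableStart {

    // Exchange the signed URL for a resumable upload session URI.
    public static String startResumableUpload(String signedUrl) throws IOException {
        OkHttpClient client = new OkHttpClient();

        Request request = new Request.Builder()
                .url(signedUrl)
                .post(RequestBody.create(null, new byte[0]))
                .addHeader("x-goog-resumable", "start") // the under-documented trigger header
                .build();

        try (Response response = client.newCall(request).execute()) {
            // GCS answers 201 Created; the Location header carries the session URI
            // that the client then uploads the actual data to.
            return response.header("Location");
        }
    }
}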
I'm trying to upload a file with the Amazon Java SDK, via multipart upload. The idea is to pass an upload-id to an applet, which puts the file parts into a readonly-bucket. Going this way, I avoid to store AWS credentials in the applet.
In my tests, I generate an upload-id with boto (python) and store a file into the bucket. That works well.
My applet gets a "403 Access denied" from S3, and I have no idea why.
Here's my code (which is partially taken from http://docs.amazonwebservices.com/AmazonS3/latest/dev/llJavaUploadFile.html):
AmazonS3 s3Client = new AmazonS3Client();
List<PartETag> partETags = new ArrayList<PartETag>();
long contentLength = file.length();
long partSize = Config.getInstance().getInt("part_size");
String bucketName = Config.getInstance().getString("bucket");
String keyName = "mykey";
String uploadId = getParameter("upload_id");
try {
    long filePosition = 0;
    for (int i = 1; filePosition < contentLength; i++) {
        partSize = Math.min(partSize, (contentLength - filePosition));
        // Create request to upload a part.
        UploadPartRequest uploadRequest = new UploadPartRequest()
            .withBucketName(bucketName).withKey(keyName)
            .withUploadId(uploadId).withPartNumber(i)
            .withFileOffset(filePosition)
            .withFile(file)
            .withPartSize(partSize);
        // Upload part and add response to our list.
        partETags.add(s3Client.uploadPart(uploadRequest).getPartETag());
        filePosition += partSize;
    }
    System.out.println("Completing upload");
    CompleteMultipartUploadRequest compRequest = new CompleteMultipartUploadRequest(
        bucketName, keyName, uploadId, partETags);
    s3Client.completeMultipartUpload(compRequest);
} catch (Exception e) {
    s3Client.abortMultipartUpload(new AbortMultipartUploadRequest(
        bucketName, keyName, uploadId));
}
In the applet debug log, I then find this:
INFO: Sending Request: PUT https://mybucket.s3.amazonaws.com /mykey Parameters: (uploadId: V4hwobOLQ1rYof54zRW0pfk2EfhN7B0fpMJTOpHOcmaUl8k_ejSo_znPI540.lpO.ZO.bGjh.3cx8a12ZMODfA--, partNumber: 1, ) Headers: (Content-Length: 4288546, Content-Type: application/x-www-form-urlencoded; charset=utf-8, )
24.01.2012 16:48:42 com.amazonaws.http.AmazonHttpClient handleErrorResponse
INFO: Received error response: Status Code: 403, AWS Service: null, AWS Request ID: DECF32CCFEE9EBF0, AWS Error Code: AccessDenied, AWS Error Message: Access Denied, S3 Extended Request ID: xtL1ixsGM2/vsxJ+cZRHpkPZ23SMfP8hZZjQCQnp8oWGwdS2/aGfYgomihyqaDCQ
Do you find any obvious failures in the code?
Thanks,
Stefan
While your use case is sound and this is an obvious approach to try, I don't think the Multipart Upload API was designed to allow this, and you are actually running into a security barrier:
The upload ID is merely an identifier to assist the Multipart Upload API in assembling the parts together (i.e. more like a temporary object key), not a dedicated security mechanism (see below). Consequently you still require proper access credentials, but since you are calling AmazonS3Client(), which "constructs a new Amazon S3 client that will make anonymous requests to Amazon S3", your request yields a 403 Access Denied accordingly.
What you are trying to achieve is possible via Uploading Objects Using Pre-Signed URLs, albeit only without the multipart functionality, unfortunately:
A pre-signed URL gives you access to the object identified in the URL,
provided that the creator of the pre-signed URL has permissions to
access that object. That is, if you receive a pre-signed URL to upload
an object, you can upload the object only if the creator of the
pre-signed URL has the necessary permissions to upload that object.
[...] The pre-signed URLs
are useful if you want your user/customer to be able upload a specific
object [...], but you don't require them to have AWS security
credentials or permissions. When you create a pre-signed URL, you must
provide your security credentials, specify a bucket name, an object
key, an HTTP method (PUT for uploading objects) and an expiration date
and time. [...]
The lengthy quote illustrates why a system like this likely needs a more complex security design than 'just' handing out an upload ID (however similar both might appear at first sight).
Obviously one would like to be able to use both features together, but this doesn't appear to be available yet.
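For reference, a minimal sketch of the single-request pre-signed PUT alternative the quoted documentation describes. The class, bucket, key, and the 15-minute expiry are placeholders; the applet would then simply HTTP PUT the file body to the returned URL.

import java.net.URL;
import java.util.Date;

import com.amazonaws.HttpMethod;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.GeneratePresignedUrlRequest;

public class PresignedPut {

    // Generate the URL server-side, where the credentials live, and hand only
    // the URL to the applet.
    public static URL createUploadUrl(String accessKey, String secretKey,
                                      String bucketName, String keyName) {
        AmazonS3 s3 = new AmazonS3Client(new BasicAWSCredentials(accessKey, secretKey));

        GeneratePresignedUrlRequest request =
                new GeneratePresignedUrlRequest(bucketName, keyName, HttpMethod.PUT);
        request.setExpiration(new Date(System.currentTimeMillis() + 15 * 60 * 1000L));

        return s3.generatePresignedUrl(request);
    }
}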