Azure Blob copy from presigned URL to presigned URL - Java

I'm trying to copy an Azure blob from one storage account to another.
I have a destination and a source URL, each with its shared access signature (SAS).
The requests I've tried are:
PUT on [url with and without sastoken]
Authorization: "SharedAccessSignature [sas token encoded and decoded]"
x-ms-copy-source: "[sourceUri with and without sastoken]"
x-ms-copy-source-authorization: "SharedAccessSignature [sas token encoded and decoded]"
x-ms-requires-sync: "true"
x-ms-date: [example: 2023-01-03T17:27:02Z]
x-ms-version: [taken from destination sastoken]
empty body
The documentation does not specify the content of the Authorization header; I found it on the internet.
I've tried with and without the x-ms-copy-source-authorization header, which is documented for Copy Blob From URL but not for Copy Blob.
I'm sure the source URL is valid, because I uploaded a file to that blob right before making these requests.
The destination URL is retrieved from an external service and does not resolve from my machine, so I can only test after deploying.
If you have any ideas, you are welcome!

If you have SAS URLs for both source and destination blobs, then you would do a PUT operation using the destination blob SAS URL.
You don't need the Authorization header, as the destination blob SAS URL already carries the authorization information.
You would specify the source blob SAS URL in the x-ms-copy-source header.
You need not specify x-ms-copy-source-authorization, as the source blob SAS URL already carries the authorization information.
You also need not specify x-ms-date and x-ms-version.
To summarize, your request should look something like:
PUT on [destination blob SAS URL]
x-ms-copy-source: "[source blob SAS URL]"
x-ms-requires-sync: "true"
empty body
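For reference, here is a minimal sketch of that request using the Java 11+ HttpClient; destinationSasUrl and sourceSasUrl stand in for your two full SAS URLs:
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

HttpClient client = HttpClient.newHttpClient();
HttpRequest request = HttpRequest.newBuilder()
        .uri(URI.create(destinationSasUrl))        // destination blob SAS URL
        .header("x-ms-copy-source", sourceSasUrl)  // source blob SAS URL
        .header("x-ms-requires-sync", "true")
        .PUT(HttpRequest.BodyPublishers.noBody())  // empty body
        .build();
HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
// A 202 Accepted response indicates the copy request was accepted.
System.out.println(response.statusCode());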

Related

What are the proper arguments for the generation of a SAS token to connect to Azure eventhub?

While reading the question HTTP POST between Postman and EventHub, I was directed to this:
https://learn.microsoft.com/en-us/rest/api/eventhub/generate-sas-token#java
I don't know what resourceUri, keyName, and key to use. Do I use the full URL for the Event Hub?
Maybe somebody here could clarify where to get these three parameters.
You can get these three parameters from the Azure portal:
Go to your Event Hub namespace → Shared access policies → select the default shared access policy (RootManageSharedAccessKey) → copy "Connection string–primary key".
The connection string contains the parameters we need to pass to that method for generating the SAS token:
Endpoint=sb://mmxxxxxxxdows.net/;
SharedAccessKeyName=RootManageSharedAccessKey;
SharedAccessKey=K4Qxxxxxxxxxxxx9o=
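Mapping the connection string onto the method's parameters: resourceUri is the endpoint plus the Event Hub name (for REST calls, typically the https:// form of the namespace URL), keyName is SharedAccessKeyName, and key is SharedAccessKey. Below is a minimal sketch of the token generation along the lines of the linked docs (HMAC-SHA256 over the URL-encoded resource URI and expiry); the names and values are hypothetical and error handling is simplified:
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.net.URLEncoder;
import java.util.Base64;

static String getSasToken(String resourceUri, String keyName, String key) throws Exception {
    String expiry = Long.toString(System.currentTimeMillis() / 1000L + 3600); // valid for 1 hour
    String stringToSign = URLEncoder.encode(resourceUri, "UTF-8") + "\n" + expiry;
    Mac hmac = Mac.getInstance("HmacSHA256");
    hmac.init(new SecretKeySpec(key.getBytes("UTF-8"), "HmacSHA256"));
    String signature = Base64.getEncoder()
            .encodeToString(hmac.doFinal(stringToSign.getBytes("UTF-8")));
    return "SharedAccessSignature sr=" + URLEncoder.encode(resourceUri, "UTF-8")
            + "&sig=" + URLEncoder.encode(signature, "UTF-8")
            + "&se=" + expiry + "&skn=" + keyName;
}

// Hypothetical usage with the values from the connection string:
String token = getSasToken("https://my-namespace.servicebus.windows.net/my-event-hub",
        "RootManageSharedAccessKey", "K4Qxxxxxxxxxxxx9o=");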

Uploading image withV4Signature() defined with MD5 hash is permitted, even though hashes don't match

I'm using the Google Cloud Storage Java library to create a signed URL with the withV4Signature() method, and it seems to work well. However, I was under the impression that if you specify the MD5 hash of an image when creating the signed URL, any attempt to upload an image with that URL that did NOT match the specified MD5 hash would be rejected.
This does not seem to be the case... I can specify any MD5 hash I want in the setMd5() method, and Google Storage will accept my uploaded file.
BlobInfo blobInfo = BlobInfo.newBuilder(
        BlobId.of("mybucket", "myobject.jpeg"))
    .setMd5("some-Random-Md5-Hash-Unrelated-To-The-Image")
    .setContentType("image/jpeg")
    .build();
URL url = storage.signUrl(blobInfo,
    30,
    TimeUnit.SECONDS,
    Storage.SignUrlOption.withV4Signature(),
    Storage.SignUrlOption.signWith(myServiceAccountCredentials),
    Storage.SignUrlOption.withMd5(),
    Storage.SignUrlOption.httpMethod(com.google.cloud.storage.HttpMethod.PUT),
    Storage.SignUrlOption.withContentType());
And then, using an image with a totally different MD5 hash:
curl -X PUT --upload-file myobject.jpeg "https://storage.googleapis.com/mybucket/myobject.jpeg?X-Goog-Algorithm=GOOG4-RSA-SHA256&X-Goog-Credential=[My Service Account Credential]%2F20210501%2Fauto%2Fstorage%2Fgoog4_request&X-Goog-Date=20210501T035401Z&X-Goog-Expires=30&X-Goog-SignedHeaders=host&X-Goog-Signature=[Google API Provided Signature]"
Google Storage accepts this image upload without complaint. My question is, does anyone else have experience with explicitly setting the MD5 hash on signed URLs for the Google Storage API and can see where I've gone wrong? Perhaps I've misunderstood the nature of this feature and encoding an MD5 restriction into a signed URL just isn't possible?
A bit late, but looking at the description of Storage.SignUrlOption.withMd5() in the docs:
Use it if signature should include the blob's md5. When used, users of the signed URL should include the blob's md5 with their request.
What's missing from your code is a canonical header on your signed URL. Adding a Content-MD5 header to the signed URL ensures that a request with a wrong MD5 hash, or with the header missing, won't execute the upload.
Note that you must convert your MD5 hash from hex to Base64. Here's an updated version of your code:
Map<String, String> extHeaders = new HashMap<String, String>(); // create header map
extHeaders.put("Content-MD5", "BASE64_MD5");
BlobInfo blobInfo = BlobInfo.newBuilder(
        BlobId.of("my-bucket", "myobject.jpeg"))
    .setMd5("BASE64_MD5")
    .setContentType("image/jpeg")
    .build();
URL url = storage.signUrl(blobInfo,
    30,
    TimeUnit.SECONDS,
    Storage.SignUrlOption.withV4Signature(),
    Storage.SignUrlOption.signWith(myServiceAccountCredentials),
    Storage.SignUrlOption.withMd5(),
    Storage.SignUrlOption.withExtHeaders(extHeaders), // attach the header as a canonical header
    Storage.SignUrlOption.httpMethod(com.google.cloud.storage.HttpMethod.PUT),
    Storage.SignUrlOption.withContentType());
Then finally, your curl request:
curl -X PUT --header "Content-MD5: BASE64_MD5" --upload-file myobject.jpeg "SIGNED_URL"
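If you need the Base64 MD5 value itself, here is a minimal sketch for computing it from the file using only the JDK standard library (exception handling omitted):
import java.nio.file.Files;
import java.nio.file.Paths;
import java.security.MessageDigest;
import java.util.Base64;

byte[] fileBytes = Files.readAllBytes(Paths.get("myobject.jpeg"));
byte[] md5 = MessageDigest.getInstance("MD5").digest(fileBytes);
// Use this value for both setMd5() and the Content-MD5 header.
String base64Md5 = Base64.getEncoder().encodeToString(md5);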

Unable to authenticate the request. Provided 'signature' is not valid for the provided client ID. Java, Camel, Google Maps API, Signature, Geocode

I am trying to use the Google Maps API for getting location, and subsequently nearby places.
I am sending the encoded URL to the API, for example, if I search for "Las Vegas, Nevada" the URL sent to API is: https://maps.googleapis.com/maps/api/geocode/json?address=Las+Vegas%2C+Nevada&client=gme-XXXXXXXXXX&signature=xxxxxxxxxxxxx.
Here, clientID is fixed and does not change, but the signature is generated on the basis of the address "Las+Vegas%2C+Nevada", or whatever is searched.
Note that in the URL, the keyword, written as the address, is UTF-8 encoded (space replaced by '+', and comma by '%2C').
However, the URL for which the API performs the search after the request is sent is: https://maps.googleapis.com/maps/api/geocode/json?address=Las+Vegas,+Nevada&client=gme-XXXXXXXXXX&signature=zzzzzzzzzzzzz.
Note that the spaces are still encoded as '+' but there is a comma present in this URL instead of '%2C' which results in a different signature being generated by the API, as the signature is generated on the basis of address.
I'm getting the following error because of this:
"Unable to authenticate the request. Provided 'signature' is not valid for the provided client ID, or the provided 'client' is not valid.
The signature was checked against the URL: /maps/api/geocode/json?address=Las+Vegas,+Nevada&clientID=gme-XXXXXXXXXX&signature=zzzzzzzzzzzzz.
If this does not match the URL you requested, please ensure that your request is URL encoded correctly. Learn more: developers.google.com/maps/documentation/business/webservices/auth"
Why is the comma not encoded in the URL that Maps API is using?
And is there any way to resolve this issue?
Simply wrap each parameter in the following:
URLEncoder.encode(VARIABLE_NAME, "UTF-8")
This will cause it to be sent URL-safe!
Example:
URL url = new URL("https://maps.googleapis.com/maps/api/geocode/json?address=" + URLEncoder.encode("Las Vegas, Nevada", "UTF-8") + "&client=gme-XXXXXXXXXX&signature=xxxxxxxxxxxxx");
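Keep in mind that the signature has to be computed over the exact encoded path and query string that will be sent, so encode the address first and sign afterwards. Here is a minimal sketch of Google's documented URL-signing scheme (HMAC-SHA1 with a URL-safe Base64 key); privateKey stands in for your URL-signing secret and exception handling is omitted:
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.net.URLEncoder;
import java.util.Base64;

String pathAndQuery = "/maps/api/geocode/json?address="
        + URLEncoder.encode("Las Vegas, Nevada", "UTF-8")
        + "&client=gme-XXXXXXXXXX";

byte[] key = Base64.getUrlDecoder().decode(privateKey); // the signing key is URL-safe Base64
Mac mac = Mac.getInstance("HmacSHA1");
mac.init(new SecretKeySpec(key, "HmacSHA1"));
String signature = Base64.getUrlEncoder()
        .encodeToString(mac.doFinal(pathAndQuery.getBytes("UTF-8")));

String signedUrl = "https://maps.googleapis.com" + pathAndQuery + "&signature=" + signature;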

S3 presigned URL file upload failing with secured/https URL

I have two buckets, one private and one public. The private one stores files with CannedAccessControlList.Private, and the public one with CannedAccessControlList.PublicRead. Apart from that, they are identical.
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <CORSRule>
        <AllowedOrigin>*</AllowedOrigin>
        <AllowedMethod>GET</AllowedMethod>
        <MaxAgeSeconds>3000</MaxAgeSeconds>
        <AllowedHeader>Authorization</AllowedHeader>
    </CORSRule>
    <CORSRule>
        <AllowedOrigin>*</AllowedOrigin>
        <AllowedMethod>PUT</AllowedMethod>
        <MaxAgeSeconds>3000</MaxAgeSeconds>
        <AllowedHeader>*</AllowedHeader>
    </CORSRule>
</CORSConfiguration>
AmazonS3 s3client = new AmazonS3Client(new BasicAWSCredentials(AWS_ACCESS_KEY, AWS_SECRET_KEY));
generatePresignedUrlRequest = new GeneratePresignedUrlRequest(AWS_BUCKET_PRIVATE_NAME, path, HttpMethod.PUT);
generatePresignedUrlRequest.setExpiration(expiration);
generatePresignedUrlRequest.putCustomRequestHeader("x-amz-acl", CannedAccessControlList.Private.toString());
generatePresignedUrlRequest.putCustomRequestHeader("content-type", fileType);
url = s3client.generatePresignedUrl(generatePresignedUrlRequest);
I am able to upload files to S3 in the scenarios below. All generated URLs are https by default.
1. Private bucket over https: works.
2. Public bucket over https: fails; after replacing https with http it worked.
The problem is why the public bucket upload fails over https. I can't use http on the production system, since it has SSL installed.
There are two things I have learned.
S3 has two different URL styles: path style and virtual-hosted style. (You have to be careful when your bucket name looks like a hostname.)
Virtual-hosted style
https://xyz.com.s3.amazonaws.com/myObjectKey
Path style
https://s3.amazonaws.com/xyz.com/myObjectKey
The Ajax upload call fails in the first case if you are on https: the SSL wildcard certificate only covers *.s3.amazonaws.com (a single label), so a bucket name containing a dot, like xyz.com, fails the certificate check and the browser blocks the Ajax upload call.
Solution for this in Java
s3client.setS3ClientOptions(new S3ClientOptions().withPathStyleAccess(true));
I am still not able to figure out how the S3 client decides which region to use for URL formation; sometimes it picks the proper "s3-ap-southeast-1.amazonaws.com" and sometimes it picks "s3.amazonaws.com".
In the latter case your upload will fail again, this time with CORS errors: if your presigned URL points at s3.amazonaws.com, the browser won't pick up "Access-Control-Allow-Origin" even if you have enabled CORS on your bucket. So you need to make sure you set the proper region endpoint using the code below.
s3client.setEndpoint("s3-ap-southeast-1.amazonaws.com"); // or whatever region your bucket is in
Reference: http://shlomoswidler.com/2009/08/amazon-s3-gotcha-using-virtual-host.html
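Putting both fixes together, here is a minimal sketch using the same (legacy) AWS SDK for Java v1 client as above:
AmazonS3 s3client = new AmazonS3Client(new BasicAWSCredentials(AWS_ACCESS_KEY, AWS_SECRET_KEY));
// Point the client at the bucket's region so the presigned host matches,
// and force path-style URLs so the wildcard SSL certificate stays valid.
s3client.setEndpoint("s3-ap-southeast-1.amazonaws.com"); // your bucket's region
s3client.setS3ClientOptions(new S3ClientOptions().withPathStyleAccess(true));
URL url = s3client.generatePresignedUrl(generatePresignedUrlRequest);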
I resolved this issue by creating a folder inside my bucket and generating the pre-signed URL for "my-bucket/folder" instead of "my-bucket".

How do I create a Google Cloud Storage resumable upload URL with Google Client Library for Java on App Engine?

I found the follow note, which describes exactly what I'd like to do:
Note: If your users are only uploading resources (writing) to an access-controlled bucket, you can use the resumable uploads functionality of Google Cloud Storage, and avoid signing URLs or requiring a Google account. In a resumable upload scenario, your (server-side) code authenticates and initiates an upload to Google Cloud Storage without actually uploading any data. The initiation request returns an upload ID, which can then be used in a client request to upload the data. The client request does not need to be signed because the upload ID, in effect, acts as an authentication token. If you choose this path, be sure to transmit the upload ID over HTTPS.
https://cloud.google.com/storage/docs/access-control#Signed-URLs
However, I cannot figure out how to do this with the Google Cloud Storage Library for Java.
https://developers.google.com/resources/api-libraries/documentation/storage/v1/java/latest/
I can't find any reference to resumable uploads, or to getting the URL for a file, anywhere in this API. How can I do this?
That library does not expose the URLs that it creates to its caller, which means you can't use it to accomplish this. If you want to use either signed URLs or the trick you mention above, you'll need to implement it manually.
I would advise going with the signed URL solution over the solution where the server initializes the resumable upload, if possible. It's more flexible and easier to get right, and there are some odd edge cases with the latter method that you could run into.
Someone wrote up a quick example of signing a URL from App Engine a while back in another question: Cloud storage and secure download strategy on app engine. GCS acl or blobstore
You can build the URL yourself. Here is an example:
OkHttpClient client = new OkHttpClient();
// Assuming `credential` is an AppIdentityCredential for the App Engine default service account.
AppIdentityService appIdentityService = credential.getAppIdentityService();
Collection<String> scopes = credential.getScopes();
String accessToken = appIdentityService.getAccessToken(scopes).getAccessToken();
Request request = new Request.Builder()
        .url("https://www.googleapis.com/upload/storage/v1/b/" + bucket + "/o?name=" + fileName + "&uploadType=resumable")
        .post(RequestBody.create(MediaType.parse(mimeType), new byte[0])) // initiation request carries no data
        .addHeader("X-Upload-Content-Type", mimeType)
        .addHeader("X-Upload-Content-Length", String.valueOf(length))
        .addHeader("Origin", "http://localhost:8080") // must match the origin the client will upload from
        .addHeader("Authorization", "Bearer " + accessToken)
        .build();
Response response = client.newCall(request).execute();
// The Location header holds the resumable session URL to hand back to the client.
return response.header("location");
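The client then uploads the actual bytes with a PUT to that session URL. A hypothetical follow-up with the same OkHttp client, where sessionUrl is the "location" header returned above and fileBytes stands in for the data:
Request upload = new Request.Builder()
        .url(sessionUrl)
        .put(RequestBody.create(MediaType.parse(mimeType), fileBytes))
        .build();
Response uploadResponse = client.newCall(upload).execute(); // 200/201 on success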
It took some digging, but I came up with the following, which does the right thing. Some official documentation on how to do this would have been nice, especially because the endpoint for actually triggering the resumable upload is different from what the docs call out. What is here came from using the gsutil tool to sign requests and then working out what was being done. The under-documented extra detail is that the code which POSTs to this URL to get a resumable session URL must include the "x-goog-resumable: start" header to trigger the upload. From there, everything is the same as the docs describe for performing a resumable upload to GCS.
import base64
import datetime
import time
import urllib

from google.appengine.api import app_identity

SIGNED_URL_EXPIRATION = datetime.timedelta(days=7)


def SignResumableUploadUrl(gcs_resource_path):
  """Generates a signed resumable upload URL.

  Note that documentation on this ability is sketchy. The canonical source
  is derived from running the gsutil program to generate a RESUMABLE URL
  with the "-m RESUMABLE" argument. Run "gsutil help signurl" for info and
  the following for an example:

    gsutil -m RESUMABLE -d 10m keyfile gs://bucket/file/name

  Note that this generates a URL different from the standard mechanism for
  deriving a resumable start URL and the initiator needs to add the header:

    x-goog-resumable:start

  Args:
    gcs_resource_path: The path of the GCS resource, including bucket name.

  Returns:
    A full signed URL.
  """
  method = "POST"
  expiration = datetime.datetime.utcnow() + SIGNED_URL_EXPIRATION
  expiration = int(time.mktime(expiration.timetuple()))
  signature_string = "\n".join([
      method,
      "",  # content md5
      "",  # content type
      str(expiration),
      "x-goog-resumable:start",
      gcs_resource_path
  ])
  _, signature_bytes = app_identity.sign_blob(signature_string)
  signature = base64.b64encode(signature_bytes)
  query_params = {
      "GoogleAccessId": app_identity.get_service_account_name(),
      "Expires": str(expiration),
      "Signature": signature,
  }
  return "{endpoint}{resource}?{querystring}".format(
      endpoint="https://storage.googleapis.com",
      resource=gcs_resource_path,
      querystring=urllib.urlencode(query_params))
