Enabling versioning in Amazon S3 bucket - java

I am trying to enable versioning for an Amazon S3 bucket using Java, but I am not able to. I get an exception:
"Exception Status Code: 400, AWS Request ID: DC53C8220CEC7D4C, AWS
Error Code: MalformedXML, AWS Error Message: The XML you provided was not well-formed or did not validate against our published schema, S3 Extended Request ID: qAdibjSkoFltjoYTFZSdTOnh8JXwZrxkjgrTcgaXqZYGIgVdbRxr8VXzwkO4ilaG"
Can somebody please point out the error in the code? I am attaching the portion of the code responsible for enabling bucket versioning.
public void enableVersioning(String bucketName) {
    SetBucketVersioningConfigurationRequest request =
        new SetBucketVersioningConfigurationRequest(bucketName,
            new BucketVersioningConfiguration("ENABLED"));
    AmazonS3 s3 = new AmazonS3Client(credentials); // I have the credentials
    s3.setBucketVersioningConfiguration(request);
}
Thanks in advance.

They should be the same, but I would use BucketVersioningConfiguration.ENABLED instead of the String literal if I were you. Use a static import if you think it clutters up the code too much. (Who knows, it might even mysteriously fix your problem.)
I just did pretty much exactly this myself and it worked; this was the only difference I could find.

OhHiThere is correct - you should be using the constant:
SetBucketVersioningConfigurationRequest request =
    new SetBucketVersioningConfigurationRequest(bucketName,
        new BucketVersioningConfiguration(BucketVersioningConfiguration.ENABLED));
The error is almost certainly because "ENABLED" is not the same as BucketVersioningConfiguration.ENABLED (which is defined as "Enabled").
I have also seen this
"The XML you provided was not well-formed"
error when trying to turn versioning off after it has been on (only suspending is allowed in that case).
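If you do later need to stop versioning, the closest S3 allows is suspending it. A minimal sketch of that call, reusing the same request/client pattern as above:
SetBucketVersioningConfigurationRequest suspendRequest =
    new SetBucketVersioningConfigurationRequest(bucketName,
        new BucketVersioningConfiguration(BucketVersioningConfiguration.SUSPENDED));
s3.setBucketVersioningConfiguration(suspendRequest);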

Related

SAML implementation using OneLogin in ColdFusion throwing error

As part of learning how to integrate OneLogin SSO into my ColdFusion app, I pulled this git repo -
https://github.com/GiancarloGomez/ColdFusion-OneLogin - and set it up locally. But while sending the auth request to OneLogin, we get an error message saying "We're sorry, but something went wrong.
We've been notified about this issue and we'll take a look at it shortly."
I could not find the root cause of this issue. I would appreciate your timely help on this.
Configuration on OneLogin looks like below. Note that I modified the consumer URL to http://127.0.0.1:8500/coldfusion-onelogin/consume.cfm instead of the format shown (http://127.0.0.1:8500/coldfusion-onelogin/consume/) in the YouTube video linked from the readme file of this git repo. I also tried changing the consumer URL back to http://127.0.0.1:8500/coldfusion-onelogin/consume/, but we are still getting the error message.
Access Tab in OneLogin looks like below,
Below is the code which sends auth request to OneLogin.
<cfscript>
    try {
        // used to encode string - chose to use Java version just in case CF did not encode correctly
        // encodeForURL appears to work, but to keep the same as the samples from OneLogin I will use the Java reference
        urlEncoder = createObject("java", "java.net.URLEncoder");
        // the appSettings object contains application-specific settings used by the SAML library
        appSettings = createObject("java", "com.onelogin.AppSettings");
        // set the URL of the consume file for this app. The SAML Response will be posted to this URL
        appSettings.setAssertionConsumerServiceUrl(request.company.getConsumeUrl());
        // set the issuer of the authentication request. This would usually be the URL of the issuing web application
        appSettings.setIssuer(request.company.getIssuerUrl());
        // the accSettings object contains settings specific to the user's account
        accSettings = createObject("java", "com.onelogin.AccountSettings");
        // the URL at the identity provider where the authentication request should be sent
        accSettings.setIdpSsoTargetUrl("https://app.onelogin.com/saml/signon/" & request.company.getIssuerID());
        // generate an AuthRequest and send it to the identity provider
        authReq = createObject("java", "com.onelogin.saml.AuthRequest").init(appSettings, accSettings);
        // now send to OneLogin
        location(accSettings.getIdp_sso_target_url() & "?SAMLRequest=" & authReq.getRidOfCRLF(urlEncoder.encode(authReq.getRequest(authReq.base64), "UTF-8")), false);
    }
    catch (Any e) {
        writeDump(e);
    }
</cfscript>
Below is the format of the auth request URL:
https://app.onelogin.com/saml/signon/[issuerId]?SAMLRequest=[SamlRequest].
I am not providing the actual URL here since I am not sure whether someone could tamper with it. But please let me know if it is really required to solve this issue.
Below is the screenshot of the SAML login page; from here I click on the button and send the auth request to OneLogin.
Also, in index.cfm the form action attribute is "/post/". Since it was throwing an error I had to replace it with "/coldfusion-onelogin/post.cfm" (coldfusion-onelogin is a folder under wwwroot). Is there any setting in ColdFusion that would let it work with the form action attribute left as "/post/"?
Hmmm. The consumer URL validator is supposed to be a regular expression, and I'm not sure how it will handle a literal HTTP URL (since it will try to evaluate it as a regex).
So try changing the URL validator to something permissive like .* (match everything).
That should hopefully clear the error until you can sort out what you want the validation to be in production.
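Once that works, a stricter validator would just be the consumer URL with the regex metacharacters escaped, for example (assuming the consume.cfm URL from the question):
^http://127\.0\.0\.1:8500/coldfusion-onelogin/consume\.cfm$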
To successfully test the demo app, you first need to log out from the OneLogin Admin Panel:
https://app.onelogin.com/logout

Generating a pre-signed PUT url for Amazon S3 with a maximum content-length

I'm trying to generate a pre-signed URL a client can use to upload an image to a specific S3 bucket. I've successfully generated requests to GET files, like so:
GeneratePresignedUrlRequest urlRequest = new GeneratePresignedUrlRequest(bucket, filename);
urlRequest.setMethod(method);
urlRequest.setExpiration(expiration);
where expiration and method are Date and HttpMethod objects respectively.
Now I'm trying to create a URL to allow users to PUT a file, but I can't figure out how to set the maximum content-length. I did find information on POST policies, but I'd prefer to use PUT here - I'd also like to avoid constructing the JSON, though that doesn't seem possible.
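For reference, the PUT variant is generated the same way as the GET request above, just with the method switched; a minimal sketch (note it does not enforce any size limit, which is exactly the open problem):
GeneratePresignedUrlRequest putRequest = new GeneratePresignedUrlRequest(bucket, filename);
putRequest.setMethod(HttpMethod.PUT);
putRequest.setExpiration(expiration);
URL uploadUrl = s3.generatePresignedUrl(putRequest); // s3 is the AmazonS3 client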
Lastly, an alternative answer could be some way to pass an image upload from the API Gateway to Lambda so I can upload it from Lambda to S3 after validating file type and size (which isn't ideal).
While I haven't managed to limit the file size on upload, I ended up creating a Lambda function that is activated on upload to a temporary bucket. The function has a signature like the one below:
public static void checkUpload(S3EventNotification event) {
(this is notable because all the guides I found online refer to an S3Event class that doesn't seem to exist anymore)
The function pulls the file's metadata (not the file itself, as that potentially counts as a large download) and checks the file size. If it's acceptable, it downloads the file then uploads it to the destination bucket. If not, it simply deletes the file.
This is far from ideal, as uploads failing to meet the criteria will seem to work but then simply never show up (as S3 will issue a 200 status code on upload without caring what Lambda's response is).
This is effectively a workaround rather than a solution, so I won't be accepting this answer.
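For anyone curious, here is a rough sketch of what such a handler could look like; the size limit, destination bucket constant, and the use of a server-side copy (instead of download + re-upload) are my own assumptions, not the original code:
// requires aws-lambda-java-events (S3EventNotification) and the AWS SDK for Java v1
public static void checkUpload(S3EventNotification event) {
    AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
    for (S3EventNotification.S3EventNotificationRecord record : event.getRecords()) {
        String bucket = record.getS3().getBucket().getName();
        String key = record.getS3().getObject().getKey();
        // fetch only the metadata (a HEAD request), not the object body
        ObjectMetadata meta = s3.getObjectMetadata(bucket, key);
        if (meta.getContentLength() <= MAX_SIZE_BYTES) {           // MAX_SIZE_BYTES: assumed constant
            s3.copyObject(bucket, key, DESTINATION_BUCKET, key);   // DESTINATION_BUCKET: assumed constant
        } else {
            s3.deleteObject(bucket, key);                          // too large: drop it from the temporary bucket
        }
    }
}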

jclouds IOExpection: Error writing request body to server

We are using jclouds with Rackspace, and when uploading lots of files via the Cloud Files API (multi-threaded), once in a while we get an exception on the objectApi.put line (see example code at the bottom).
Exception
16-Jul-2015 11:58:00.811 SEVERE [threadsPool-1] org.jclouds.logging.jdk.JDKLogger.logError error after writing 8192/streaming bytes to https://*****/****.jpg
java.io.IOException: Error writing request body to server
at sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.checkError(HttpURLConnection.java:3478)
at sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.write(HttpURLConnection.java:3461)
at com.google.common.io.CountingOutputStream.write(CountingOutputStream.java:53)
at com.google.common.io.ByteStreams.copy(ByteStreams.java:74)
at org.jclouds.http.internal.JavaUrlHttpCommandExecutorService.writePayloadToConnection(JavaUrlHttpCommandExecutorService.java:297)
at org.jclouds.http.internal.JavaUrlHttpCommandExecutorService.convert(JavaUrlHttpCommandExecutorService.java:160)
at org.jclouds.http.internal.JavaUrlHttpCommandExecutorService.convert(JavaUrlHttpCommandExecutorService.java:64)
at org.jclouds.http.internal.BaseHttpCommandExecutorService.invoke(BaseHttpCommandExecutorService.java:91)
at org.jclouds.rest.internal.InvokeHttpMethod.invoke(InvokeHttpMethod.java:90)
at org.jclouds.rest.internal.InvokeHttpMethod.apply(InvokeHttpMethod.java:73)
at org.jclouds.rest.internal.InvokeHttpMethod.apply(InvokeHttpMethod.java:44)
at org.jclouds.reflect.FunctionalReflection$FunctionalInvocationHandler.handleInvocation(FunctionalReflection.java:117)
at com.google.common.reflect.AbstractInvocationHandler.invoke(AbstractInvocationHandler.java:87)
at com.sun.proxy.$Proxy176.put(Unknown Source)
at
A similar issue with S3 can be found here.
Example Code
ObjectApi objectApi = cloudFiles.getObjectApi(REGION, container);
ByteSource byteSource = Files.asByteSource(file);
Payload payload = Payloads.newByteSourcePayload(byteSource);
objectApi.put(hashedName, payload);
The question:
Has anyone experienced behavior like this? Does anyone have a workaround for this kind of issue?
Thanks,
Alon
Networks are unreliable, so expect some exceptions when using cloud services, especially when dealing with many files. Specifically for jclouds uploads, we have some example code here:
https://github.com/jclouds/jclouds-examples/tree/master/blobstore-uploader
Edit: I have also added a JIRA issue to make sure we add a test specifically for this situation in Swift:
https://issues.apache.org/jira/browse/JCLOUDS-965
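In the meantime, the practical workaround is to treat a failed put as retryable and re-run it with a fresh payload built from the same ByteSource; a minimal sketch (the retry count and backoff are arbitrary assumptions):
static String putWithRetry(ObjectApi objectApi, String hashedName, ByteSource byteSource) throws InterruptedException {
    int maxAttempts = 3; // assumed value
    for (int attempt = 1; ; attempt++) {
        try {
            // build a fresh Payload per attempt; the file-backed ByteSource can be re-read
            return objectApi.put(hashedName, Payloads.newByteSourcePayload(byteSource));
        } catch (RuntimeException e) {
            if (attempt == maxAttempts) {
                throw e;
            }
            Thread.sleep(1000L * attempt); // simple linear backoff between attempts
        }
    }
}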

google app engine java error using blobstore api to retrieve file from cloud storage

I started running into the 30MB file size limitation when using the Cloud Storage client API. Upon researching I found the GAE documentation suggesting I use the Blobstore API.
I created an initial test to upload to a bucket in my cloud service. One thing I notice is that the filenames are lost and instead a key is used. No biggie, I now persist the mapping between the blob key and the file metadata I want to store.
I then tried to download the file using the blobstoreService.createGsBlobKey() method. I am passing in the following:
"/gs/" + bucketName + "/" + fBlob.getBlobKey()) where fBlob is my mapping object that contains the file info and the blob key.
** Edit ** I also tried using the actual name of the file instead of the blob key (I wasn't quite sure what the documentation was asking for) but the results were the same.
The method getBlobKey() does just that. It returns the string I retrieved from the BlobKey.getKeyString() method.
All seems well until I access the servlet that passes in my parameters to retrieve the file. All of my log dumps look good. Making things more frustrating is that there aren't any errors in the logs. The only indication of a problem is the generic web page that says a 500 error has occurred and to post a message in the support forum if I continue to encounter this error.
So here I am :)
Any assistance would be greatly appreciated. I would provide more info (i.e. stack traces) but as I mentioned there aren't any. Here is a summary of the servlet:
protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
    String iopsKey = request.getParameter("iopsId");
    String itemId = request.getParameter("itemId");
    String digitalId = request.getParameter("resourceId");

    Logger.getAnonymousLogger().info("Site Id: " + iopsKey);
    Logger.getAnonymousLogger().info("Item Id: " + itemId);
    Logger.getAnonymousLogger().info("Digital Good Id: " + digitalId);

    final DigitalResource resource = delegate.getDigitalResource(iopsKey, itemId, digitalId);
    Logger.getAnonymousLogger().info("Contents of Digital Resource: " + resource);

    FileBlobMap fBlob = delegate.findBLOBByFilename(resource.getInternalName());
    Logger.getAnonymousLogger().info("Contents of FileBlogMap: " + fBlob);

    BlobstoreService blobstoreService = BlobstoreServiceFactory.getBlobstoreService();
    BlobKey blobKey = blobstoreService.createGsBlobKey("/gs/vendor-imports/" + fBlob.getBlobKey());
    blobstoreService.serve(blobKey, response);
}
** Edit #2 **
After some playing around I realized that the key/name generated by the Blobstore API does not seem to correlate with what is actually stored in cloud storage.
When using the Blobstore API I am aware of only two fields that would be useful in mapping back to the stored file: the key string from the blob key object and the actual name of the file. However, the name/key value that is created in cloud storage is yet another key string and doesn't seem to match anything AFAIK.
Now if I copy the key/name from the cloud store and hard-code it into my object that stores the filename/blobkey mapping via the Datastore Viewer, then everything works! So it seems there is a disconnect between the reference in the cloud store and what I am getting in my callback handler.
** Resolved ** There is an additional method that is part of the FileInfo object I was using that I had not noticed. This method, getGsObjectName(), returns a fully mapped string that already includes the "/gs/" prefix along with the token string I see in the cloud store. This wasn't immediately obvious to me, so hopefully this post will save someone else time in the future.
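For completeness, a minimal sketch of the working serve path; fileInfo here is assumed to come from blobstoreService.getFileInfos(request) in the upload callback (that part is not shown in the original post):
String gsObjectName = fileInfo.getGsObjectName();  // already includes the "/gs/bucket/object" prefix
BlobKey blobKey = blobstoreService.createGsBlobKey(gsObjectName);
blobstoreService.serve(blobKey, response);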
I think you might have a problem with creating the blobKey. Your fBlob.getBlobKey() already returns a blob key, but then you try to put it into another blobKey. Did you try just passing that key into the blobstoreService.serve method:
blobstoreService.serve(fBlob.getBlobKey(), response);
Also, the generic page that you see is the error page served by App Engine when there is a server error. You should check your console for any exceptions. If you're running in production you can check the logs in the App Engine console to see the details of the error.
** Resolved ** (as noted in the edit above): the FileInfo object has a getGsObjectName() method that returns the fully mapped string, "/gs/" prefix included, along with the token string seen in the cloud store.
Sorry if my formatting is off and I have "solutions" in multiple places. This is my first time posting and I got a little ahead of myself.

Amazon MWS - error in requesting inventory report

I am fully capable of generating LookupItem requests with the Product Advertising API, including building the URL string with parameters and signing the request, but when I tried to take the model I had and modify it for MWS RequestReport requests, I get this error message:
"Invalid Section name or version provided - onca/2011-01-01"
For some mysterious reason, it keeps adding "onca/" to the beginning of the Version value when clearly my parameters are:
Map<String, String> params = new HashMap<String, String>();
params.put("Action", "RequestReport");
params.put("Version", "2011-01-01"); //NOT "onca/2011-01-01" (version may be old)
params.put("SellerId", MERCHANT_ID);
params.put("SignatureVersion", "2");
params.put("SignatureMethod", "HmacSHA256");
params.put("ReportType", "_GET_MERCHANT_LISTINGS_DATA_");
//timestamp and signature params are added in the method that signs this request
requestUrl = helper.sign(params);
What am I missing here? The method that signs this "canonical query string" does not add it either, as evidenced by its success at signing LookupItem requests, as I mentioned earlier. Does this have something to do with the way Amazon interprets the signature? But then wouldn't it say the URL/encoding don't match? Any theories? Need any more code or info?
I discovered the solution: with the Product Advertising API, requests start with ecs.amazonaws.com/onca/xml? and with MWS they start with mws.amazonservices.com? (in the US). When I changed the endpoint to the MWS endpoint, I failed to remove the concatenation of "/onca/xml" immediately after it, located somewhere in my code. For some reason, Amazon interpreted my "Version" parameter as beginning with "/onca/xml" despite that not being the case in the URL or it being the first parameter in the signature. Oh well.
For anyone modifying an AWS signed request helper, make sure to remove any concatenation of "/onca/xml" after changing the endpoint!
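For illustration, this is roughly what the Signature Version 2 string-to-sign looks like for an MWS reports call; canonicalize() and hmacSha256Base64() are hypothetical helper names, and the point is that the request path is just "/" (or the MWS section path), never "/onca/xml":
String host = "mws.amazonservices.com";
String path = "/"; // with the Product Advertising API this was "/onca/xml"; leaving that in breaks MWS
String canonicalQuery = canonicalize(params); // sorted, URL-encoded key=value pairs joined with "&"
String stringToSign = "POST" + "\n" + host + "\n" + path + "\n" + canonicalQuery;
String signature = hmacSha256Base64(stringToSign, secretKey);
String requestUrl = "https://" + host + path + "?" + canonicalQuery + "&Signature=" + URLEncoder.encode(signature, "UTF-8");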
