I am using a third-party server which returns me the following things:
1) url
2) acl
3) policy
4) awsAccessKeyId
5) signature
6) key
I can upload a file using the following code:
final PostMethod filePost = new PostMethod(targetURL); // targetURL is the upload URL returned by the server
final File localFile = new File(localFilePath);
final Part[] parts = { new StringPart("acl", acl),
new StringPart("policy", policy),
new StringPart("AWSAccessKeyId", awsAccessKeyId),
new StringPart("signature", signature),
new StringPart("key", key, HTTP.UTF_8),
new FilePart("file", localFile) };
final MultipartRequestEntity mpRequestEntity = new MultipartRequestEntity(parts, filePost.getParams());
filePost.setRequestEntity(mpRequestEntity);
final HttpClient client = new HttpClient();
try
{
status = client.executeMethod(filePost);
}
But now I want to use AmazonS3Client with the following code, and it throws this exception:
10-31 16:21:36.070: INFO/com.amazonaws.request(13882): Received error
response: Status Code: 403, AWS Request ID: 51F7CB27E58F88FD, AWS
Error Code: SignatureDoesNotMatch, AWS Error Message: The request
signature we calculated does not match the signature you provided.
Check your key and signing method., S3 Extended Request ID:
YwNNsWOXg71vXY1VS0apHnHpHp4YVWRJ63xm8C7w36SYg1MNuIykw75YhQco5Lk7
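// Note: BasicAWSCredentials expects (accessKeyId, secretKey); passing the object key as the
// second argument is what makes the computed signature differ from the one AWS expects.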
final AmazonS3Client s3Client = new AmazonS3Client(new BasicAWSCredentials(awsAccessKeyId, key));
// Create a list to collect the PartETags; you get one
// for each uploaded part.
final List<PartETag> partETags = new ArrayList<PartETag>();
// Step 1: Initialize.
final InitiateMultipartUploadRequest initRequest = new InitiateMultipartUploadRequest(targetURL, key);
final InitiateMultipartUploadResult initResponse = s3Client.initiateMultipartUpload(initRequest);
final File file = new File(localFilePath);
final long contentLength = file.length();
long partSize = 5242880; // Set part size to 5 MB.
try
{
// Step 2: Upload parts.
long filePosition = 0;
for ( int i = 1; filePosition < contentLength; i++ )
{
// Last part can be less than 5 MB. Adjust part size.
partSize = Math.min(partSize, (contentLength - filePosition));
// Create request to upload a part.
final UploadPartRequest uploadRequest = new UploadPartRequest().withBucketName(targetURL).withKey(key)
.withUploadId(initResponse.getUploadId()).withPartNumber(i).withFileOffset(filePosition)
.withFile(file).withPartSize(partSize);
// Upload part and add response to our list.
partETags.add(s3Client.uploadPart(uploadRequest).getPartETag());
filePosition += partSize;
}
// Step 3: complete.
final CompleteMultipartUploadRequest compRequest = new CompleteMultipartUploadRequest(targetURL, key,
initResponse.getUploadId(), partETags);
s3Client.completeMultipartUpload(compRequest);
}
catch ( final Exception e )
{
s3Client.abortMultipartUpload(new AbortMultipartUploadRequest(targetURL, key, initResponse.getUploadId()));
return false;
}
return true;
Am I missing something here?
I found that the server sends a signature for uploading the file in one shot. A multipart upload needs multiple signatures, one for each step along the way.
There is no way to upload the file in multiple parts unless the server shares the secret key :(.
http://dextercoder.blogspot.in/2012/02/multipart-upload-to-amazon-s3-in-three.html
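To make the constraint concrete: every multipart call (initiate, each part, complete) carries its own signature, and computing one requires the secret key, so only the server can produce them. Below is a minimal sketch of the Signature Version 2 HMAC each request needs; stringToSign is the canonical request description from the S3 docs, and this is illustrative, not a drop-in signer.

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Illustrative SigV2 signing step: without the secret key, the client
// cannot compute this for the initiate/part/complete requests.
static String signV2(String secretKey, String stringToSign) throws Exception {
    Mac mac = Mac.getInstance("HmacSHA1");
    mac.init(new SecretKeySpec(secretKey.getBytes(StandardCharsets.UTF_8), "HmacSHA1"));
    return Base64.getEncoder().encodeToString(
            mac.doFinal(stringToSign.getBytes(StandardCharsets.UTF_8)));
}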
I'm trying to copy files from folder to folder within the same bucket in S3.
I need to copy files that are larger than 5 GB, and I saw in the docs that a regular copy doesn't support this; see here.
This link in the docs shows how to do it, but that code is for SDK version 1.x, not 2.x.
I searched the new docs but found only this, and there is no code showing a multipart copy, only a regular copy.
It should be noted that another user asked about this, but got no replies.
This code will help you copy an object with multipart in the Java S3 SDK v2.
import java.util.ArrayList;
import java.util.List;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.*;

private final S3Client s3Client = S3Client.builder().build();
public void copyObjectWithMultiPart() {
String destBucketName = "destination-bucket";
String destObjectKey = "destination-object-key";
String sourceBucketName = "source-bucket";
String sourceObjectKey = "source-object-key";
// Initiate the multipart upload.
CreateMultipartUploadRequest createMultipartUploadRequest = CreateMultipartUploadRequest.builder()
.bucket(destBucketName)
.key(destObjectKey)
.build();
CreateMultipartUploadResponse multipartUploadResponse = s3Client.createMultipartUpload(createMultipartUploadRequest);
// Get the object size to track the end of the copy operation.
HeadObjectRequest headObjectRequest = HeadObjectRequest.builder()
.bucket(sourceBucketName)
.key(sourceObjectKey)
.build();
long objectSize = s3Client.headObject(headObjectRequest).contentLength();
// Copy the object using 5 MB parts.
long partSize = 5 * 1024 * 1024;
long bytePosition = 0;
int partNum = 1;
List<CompletedPart> etags = new ArrayList<>();
while (bytePosition < objectSize) {
// The last part might be smaller than partSize, so check to make sure
// that lastByte isn't beyond the end of the object.
long lastByte = Math.min(bytePosition + partSize - 1, objectSize - 1);
// Copy this part.
UploadPartCopyRequest uploadPartCopyRequest = UploadPartCopyRequest.builder()
.sourceBucket(sourceBucketName)
.sourceKey(sourceObjectKey)
.destinationBucket(destBucketName)
.destinationKey(destObjectKey)
.uploadId(multipartUploadResponse.uploadId())
.partNumber(partNum)
.copySourceRange(String.format("bytes=%d-%d", bytePosition, lastByte))
.build();
UploadPartCopyResponse uploadPartCopyResponse = s3Client.uploadPartCopy(uploadPartCopyRequest);
etags.add(
CompletedPart.builder()
.partNumber(partNum++)
.eTag(uploadPartCopyResponse.copyPartResult().eTag())
.build()
);
bytePosition += partSize;
}
// Complete the upload request to concatenate all uploaded parts and make the copied object available.
CompletedMultipartUpload completedMultipartUpload = CompletedMultipartUpload.builder()
.parts(etags)
.build();
CompleteMultipartUploadRequest completeMultipartUploadRequest =
CompleteMultipartUploadRequest.builder()
.bucket(destBucketName)
.key(destObjectKey)
.uploadId(multipartUploadResponse.uploadId())
.multipartUpload(completedMultipartUpload)
.build();
s3Client.completeMultipartUpload(completeMultipartUploadRequest);
}
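One caveat to hedge against: S3 allows at most 10,000 parts per multipart operation, and every part except the last must be at least 5 MB, so for very large objects it is safer to derive the part size from the object size instead of hard-coding 5 MB. A small adjustment, reusing the objectSize variable from the method above:

// Keep the 5 MB floor, but scale the part size up so the copy
// never needs more than 10,000 parts.
long minPartSize = 5L * 1024 * 1024;
long partSize = Math.max(minPartSize, (objectSize / 10_000) + 1);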
I'm trying to upload a large file to a server that uses a token, and the token expires after 10 minutes. A small file uploads fine, but if the file is big the token expires mid-upload, and the transfer keeps retrying forever while access is denied.
So I need to refresh the token in the BasicAWSCredentials, which is then used for the AWSStaticCredentialsProvider, but I'm not sure how to do it. Please help =)
Worth mentioning that we use a local server (not the Amazon cloud) which provides the token; for convenience we use Amazon's code.
Here is my code:
public void uploadMultipart(File file) throws Exception {
//this method gives you an initial token for a given user,
//then works out when a new token is needed and refreshes it just in time
String token = getUsetToken();
String existingBucketName = myTenant.toLowerCase() + ".package.upload";
String endPoint = urlAPI + "s3/buckets/";
String strSize = FileUtils.byteCountToDisplaySize(FileUtils.sizeOf(file));
System.out.println("File size: " + strSize);
AwsClientBuilder.EndpointConfiguration endpointConfiguration = new AwsClientBuilder.EndpointConfiguration(endPoint, null);//note: Region has to be null
//AWSCredentialsProvider
BasicAWSCredentials sessionCredentials = new BasicAWSCredentials(token, "NOT_USED");//secretKey should be set to NOT_USED
AmazonS3 s3 = AmazonS3ClientBuilder
.standard()
.withCredentials(new AWSStaticCredentialsProvider(sessionCredentials))
.withEndpointConfiguration(endpointConfiguration)
.enablePathStyleAccess()
.build();
int maxUploadThreads = 5;
TransferManager tm = TransferManagerBuilder
.standard()
.withS3Client(s3)
.withMultipartUploadThreshold((long) (5 * 1024 * 1024))
.withExecutorFactory(() -> Executors.newFixedThreadPool(maxUploadThreads))
.build();
PutObjectRequest request = new PutObjectRequest(existingBucketName, file.getName(), file);
//request.putCustomRequestHeader("Access-Token", token);
ProgressListener progressListener = progressEvent -> System.out.println("Transferred bytes: " + progressEvent.getBytesTransferred());
request.setGeneralProgressListener(progressListener);
Upload upload = tm.upload(request);
LocalDateTime uploadStartedAt = LocalDateTime.now();
log.info("Starting upload at: " + uploadStartedAt);
try {
upload.waitForCompletion();
//upload.waitForUploadResult();
log.info("Upload completed. " + strSize);
} catch (Exception e) {//AmazonClientException
log.error("Error occurred while uploading file - " + strSize);
e.printStackTrace();
}
}
Solution found!
I found a way to get this working and, to be honest, I'm quite happy with the result. I've tested it a lot with big files (a ~50 GB zip) and it worked well in every scenario.
My solution is: remove the line BasicAWSCredentials sessionCredentials = new BasicAWSCredentials(token, "NOT_USED");
AWSCredentials is an interface, so we can override it with something dynamic. The logic that decides when the token is expired and fetches a fresh one lives inside the getToken() method, meaning you can call it every time with no harm:
AWSCredentials sessionCredentials = new AWSCredentials() {
    @Override
    public String getAWSAccessKeyId() {
        try {
            return getToken(); // getToken() returns a String
        } catch (Exception e) {
            return null;
        }
    }

    @Override
    public String getAWSSecretKey() {
        return "NOT_USED";
    }
};
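An equivalent shape, if you prefer a named abstraction: implement AWSCredentialsProvider yourself instead of wrapping AWSCredentials. getCredentials() is consulted when requests are signed, so each call can hand back a fresh token. A sketch, under the same assumption that getToken() hides the refresh logic:

AWSCredentialsProvider refreshingProvider = new AWSCredentialsProvider() {
    @Override
    public AWSCredentials getCredentials() {
        try {
            // getToken() (assumed, as above) refreshes only when the token is stale.
            return new BasicAWSCredentials(getToken(), "NOT_USED");
        } catch (Exception e) {
            throw new RuntimeException("Could not obtain token", e);
        }
    }

    @Override
    public void refresh() {
        // Nothing cached here; getToken() already handles expiry.
    }
};

Then pass refreshingProvider to withCredentials(...) in place of the AWSStaticCredentialsProvider.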
When uploading a file (or the parts of a multipart upload), the credentials you use must last long enough for the upload to complete. You CANNOT refresh the credentials mid-flight: there is no way to tell AWS S3 that an already signed request should now be validated against new credentials.
You could break the upload into smaller parts that upload more quickly: upload X parts, refresh your credentials, upload the next Y parts, and repeat until all parts are uploaded. You then have to finish by combining the parts, which is a separate command. This is not a perfect solution, as transfer speeds cannot be accurately controlled and it means writing your own upload code (which is not hard); see the sketch below.
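A rough sketch of that batching idea against the low-level v1 API. newClientWithFreshToken() is a hypothetical helper, not an AWS API; it stands for rebuilding the client around a freshly fetched token. Whether parts signed with a newer token are accepted mid-upload depends entirely on how your local server validates them:

// Sketch: upload a batch of parts, refresh the token, upload the next batch.
AmazonS3 s3 = newClientWithFreshToken();   // hypothetical factory
File file = new File(localFilePath);       // the large file from the question
String bucket = "your-bucket";             // illustrative names
String key = file.getName();
InitiateMultipartUploadResult init = s3.initiateMultipartUpload(
        new InitiateMultipartUploadRequest(bucket, key));
List<PartETag> partETags = new ArrayList<>();
long partSize = 5L * 1024 * 1024;
long filePosition = 0;
for (int partNumber = 1; filePosition < file.length(); partNumber++) {
    if (partNumber > 1 && partNumber % 10 == 1) {
        s3 = newClientWithFreshToken();    // refresh every 10 parts, say
    }
    long size = Math.min(partSize, file.length() - filePosition);
    partETags.add(s3.uploadPart(new UploadPartRequest()
            .withBucketName(bucket).withKey(key)
            .withUploadId(init.getUploadId()).withPartNumber(partNumber)
            .withFileOffset(filePosition).withFile(file)
            .withPartSize(size)).getPartETag());
    filePosition += size;
}
// The separate "combine the parts" command the answer mentions:
s3.completeMultipartUpload(new CompleteMultipartUploadRequest(
        bucket, key, init.getUploadId(), partETags));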
How do I write Java code for an Egnyte chunked upload and send it to the Egnyte API's REST service?
https://developers.egnyte.com/docs/read/File_System_Management_API_Documentation#Chunked-Upload
long size = f.length(); // the file's size in bytes (getTotalSpace() returns the disk size, not the file's)
int sizeOfFiles = 1024 * 1024; // 1 MB
int partCounter = 1;
byte[] buffer = new byte[sizeOfFiles];
ResponseEntity<String> responseEntity = null;
String fileName = f.getName();
String url = DOWNLOAD_OR_UPLOAD + "-chunked" + egnyteSourcePath + f.getName();
HttpHeaders headers = buildEgnyteEntity();
HttpEntity entity = new HttpEntity<>(headers);
//try-with-resources to ensure closing stream
try (FileInputStream fis = new FileInputStream(f);
BufferedInputStream bis = new BufferedInputStream(fis)) {
int bytesAmount = 0;
while ((bytesAmount = bis.read(buffer)) > 0) {
//write each chunk of data into separate file with different number in name
String filePartName = String.format("%s.%03d", fileName, partCounter++);
File newFile = new File(f.getParent(), filePartName);
responseEntity = restTemplate.exchange(url, HttpMethod.POST, entity, String.class);
}
}
return responseEntity;
I think there are a couple of things missing in your code.
First, you don't specify the required headers. You should send X-Egnyte-Chunk-Num with an int value holding the number of your chunk, starting from 1, and in the X-Egnyte-Chunk-Sha512-Checksum header you should send the chunk's SHA-512 checksum.
Second, the first request will give you an UploadId in the X-Egnyte-Upload-Id response header. You need to send that back as a header in your second and subsequent requests.
Third, I don't see you using bytesAmount (or the buffer) in the request, so I don't think you're actually sending the chunk data.
I'm not a Java guy, more of a C# one, but I've written a post on how to upload and download big files with the Egnyte API on my blog: http://www.michalbialecki.com/2018/02/11/sending-receiving-big-files-using-egnyte-api-nuget-package/. It can give you an idea of how the sending loop can be structured.
Hope that helps.
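To sketch those three points in the asker's Java: this is an untested outline that reuses restTemplate, buildEgnyteEntity() and url from the question; the header names come from the Egnyte docs linked above, and I'm assuming the checksum is hex-encoded.

import java.io.BufferedInputStream;
import java.io.File;
import java.io.FileInputStream;
import java.security.MessageDigest;
import java.util.Arrays;
import org.springframework.http.*;

// Untested outline: send each chunk with the required headers and the chunk bytes as the body.
ResponseEntity<String> uploadChunked(File f, String url) throws Exception {
    int chunkNum = 1;
    String uploadId = null;
    byte[] buffer = new byte[1024 * 1024]; // 1 MB chunks, as in the question
    ResponseEntity<String> response = null;
    try (BufferedInputStream bis = new BufferedInputStream(new FileInputStream(f))) {
        int bytesRead;
        while ((bytesRead = bis.read(buffer)) > 0) {
            byte[] chunk = Arrays.copyOf(buffer, bytesRead); // only the bytes actually read
            HttpHeaders headers = buildEgnyteEntity();       // auth headers, from the question
            headers.set("X-Egnyte-Chunk-Num", String.valueOf(chunkNum));
            headers.set("X-Egnyte-Chunk-Sha512-Checksum", sha512Hex(chunk));
            if (uploadId != null) {
                headers.set("X-Egnyte-Upload-Id", uploadId); // required on chunks 2..n
            }
            response = restTemplate.exchange(url, HttpMethod.POST,
                    new HttpEntity<>(chunk, headers), String.class);
            if (chunkNum == 1) {
                uploadId = response.getHeaders().getFirst("X-Egnyte-Upload-Id");
            }
            chunkNum++;
        }
    }
    return response;
}

// Hex-encoded SHA-512 of one chunk (I'm assuming hex; verify against the docs).
static String sha512Hex(byte[] data) throws Exception {
    StringBuilder sb = new StringBuilder();
    for (byte b : MessageDigest.getInstance("SHA-512").digest(data)) {
        sb.append(String.format("%02x", b));
    }
    return sb.toString();
}

The docs linked above also describe how the final chunk is marked; check them before relying on this outline.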
Sample C# code:
static void UploadFile(string sasUrl, string filepath)
{
using (var client = new HttpClient())
{
client.DefaultRequestHeaders.Add("x-ms-version", Version);
client.DefaultRequestHeaders.Add("x-ms-client-request-id", SessionGuid);
StringBuilder sb = new StringBuilder("<?xml version=\"1.0\" encoding=\"utf-8\"?><BlockList>");
foreach (byte[] chunk in GetFileChunks(filepath))
{
var blockid = GetHash(chunk);
HttpRequestMessage chunkMessage = new HttpRequestMessage()
{
Method = HttpMethod.Put,
RequestUri = new Uri(sasUrl + "&timeout=90&comp=block&blockid=" + WebUtility.UrlEncode(blockid)),
Content = new ByteArrayContent(chunk)
};
chunkMessage.Headers.Add("x-ms-blob-type", "BlockBlob");
chunkMessage.Content.Headers.Add("MD5-Content", blockid);
TimeAction("Uploading chunk " + blockid + " took {0} ms", () =>
{
var response = client.SendAsync(chunkMessage).Result;
});
sb.Append("<Latest>");
sb.Append(blockid);
sb.Append("</Latest>");
}
sb.Append("</BlockList>");
Trace.WriteLine(sb.ToString());
HttpRequestMessage commitMessage = new HttpRequestMessage()
{
Method = HttpMethod.Put,
RequestUri = new Uri(sasUrl + "&timeout=90&comp=blocklist"),
Content = new StringContent(sb.ToString())
};
TimeAction("Commiting the blocks took {0} ms", () =>
{
var commit = client.SendAsync(commitMessage).Result;
});
}
}
I am stuck at the point where I have to upload the file. I also want to know the reason for the commit in the given code.
My progress so far is:
public static void uploadFile(String sasUrl, String filepath, String sessionGuid)
{
    File file = new File(filepath);
    FileInputStream fileInputStream = null;
    Response reply = new Response();
    HttpClient client = HttpClientBuilder.create().build();
    HttpPost request = new HttpPost(sasUrl);
    request.setHeader("x-ms-version", "2013-08-15");
    request.setHeader("x-ms-client-request-id", sessionGuid);
    StringBuilder sb = new StringBuilder("<?xml version=\"1.0\" encoding=\"utf-8\"?><BlockList>");
}
Note: I cannot run the code repeatedly, as that would spam the server. Any suggestions will be appreciated.
Referring to: https://msdn.microsoft.com/en-us/library/windows/hardware/dn800660(v=vs.85).aspx
According to the reference code in C#, it uses the REST API Put Block List to upload a file as a block blob. That also answers the "why commit" question: each Put Block call only stages an uncommitted block, and the final Put Block List request commits the staged blocks into the blob; nothing becomes visible until that commit.
So you can work from the REST API reference, rather than the C# sample, to construct the upload requests with your HTTP client.
However, the simple way is to use the Azure Storage SDK for Java. To upload a file, you just need the CloudBlockBlob class with its upload(InputStream sourceStream, long length) method; please refer to the tutorial https://azure.microsoft.com/en-us/documentation/articles/storage-java-how-to-use-blob-storage/#upload-a-blob-into-a-container.
The SAS URL looks like https://myaccount.blob.core.windows.net/mycontainer/myblob?comp=blocklist&...
Here is the code as example.
URI sasUri = new URI("<sas-url>");
try
{
    CloudBlockBlob blob = new CloudBlockBlob(sasUri);
    File source = new File(filePath);
    blob.upload(new FileInputStream(source), source.length());
}
catch (Exception e)
{
    // Output the stack trace.
    e.printStackTrace();
}
As reference, please see the javadocs for Azure Java Storage SDK.
I am trying to upload files to Amazon S3, nothing special. I have managed to do the actual upload, and the file uploads successfully. The only remaining issue is that I cannot change the name of the file in S3. It seems that by default the name of the file is being set to the same value as the secret key, so I may be sending the secret key as a parameter where I should be sending the name of the file instead. However, when I tried swapping the parameters around, errors cropped up.
Below please find the code I am using:
Bucket bucket = client.createBucket("testBucket", Region.EU_Ireland);
List<PartETag> partTags = new ArrayList<>();
InitiateMultipartUploadRequest request = new InitiateMultipartUploadRequest(
bucket.getName(), secretAmazonKey);
InitiateMultipartUploadResult result = client
.initiateMultipartUpload(request);
File file = new File(filePath);
long contentLength = file.length();
long partSize = 8 * 1024 * 1024;
try {
// Uploading the file, part by part.
long filePosition = 0;
for (int i = 1; filePosition < contentLength; i++) {
// Last part can be less than 8 MB therefore the partSize needs
// to be adjusted accordingly
partSize = Math.min(partSize, (contentLength - filePosition));
// Creating the request for a part upload
UploadPartRequest uploadRequest = new UploadPartRequest()
.withBucketName(bucket.getName()).withKey(secretAmazonKey)
.withUploadId(result.getUploadId()).withPartNumber(i)
.withFileOffset(filePosition).withFile(file)
.withPartSize(partSize);
// Upload part and add response to the result list.
partTags.add(client.uploadPart(uploadRequest).getPartETag());
filePosition += partSize;
}
}
catch (Exception e) {
client.abortMultipartUpload(new AbortMultipartUploadRequest(bucket
.getName(), secretAmazonKey, result.getUploadId()));
}
CompleteMultipartUploadRequest compRequest = new CompleteMultipartUploadRequest(
bucket.getName(), secretAmazonKey, result.getUploadId(), partTags);
client.completeMultipartUpload(compRequest);
Any help is much appreciated.
Thanks a lot :)
The key in your upload requests is actually your object (file) key (name), not your AWS secret key. The place to specify your AWS credentials is when you instantiate the client instance.
Could you be more specific regarding the errors you are seeing when doing this?
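Concretely, a minimal sketch of the separation (the names here are illustrative, not from your code): credentials are fixed when the client is built, and the key argument is simply the name the object will get in the bucket:

// Credentials identify you and are set once, on the client.
AmazonS3Client client = new AmazonS3Client(
        new BasicAWSCredentials(awsAccessKeyId, awsSecretKey));

// The key names the object: use whatever file name you want to see in S3.
String objectKey = "reports/my-upload.pdf"; // illustrative
InitiateMultipartUploadRequest request = new InitiateMultipartUploadRequest(
        bucket.getName(), objectKey);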
Well, I used Amazon S3 for the first time recently and was able to upload a file as below:
public void saveMinutes(Minutes minutes, byte [] data)
{
AmazonS3 s3 = new AmazonS3Client(new BasicAWSCredentials(amazonS3AccessKey, amazonS3SecretAccessKey));
ObjectMetadata metaData = new ObjectMetadata();
metaData.setContentLength(data.length);
metaData.setContentType("application/pdf");
s3.putObject(new PutObjectRequest(amazonS3MinutesBucketName, minutes.getFileName(), new ByteArrayInputStream(data), metaData));
}