I am trying to upload files to Amazon S3, nothing special. I have managed to do the actual upload, and the file uploads successfully. The only issue left is that I cannot change the name of the file in S3. It seems that by default, the name of the file is being set to the same value as the secret key. It could be that I am sending the secret key as a parameter where I should be sending the name of the file instead. However, when I tried swapping the parameters around, errors cropped up.
Below please find the code I am using:
Bucket bucket = client.createBucket("testBucket", Region.EU_Ireland);
List<PartETag> partTags = new ArrayList<>();
InitiateMultipartUploadRequest request = new InitiateMultipartUploadRequest(
        bucket.getName(), secretAmazonKey);
InitiateMultipartUploadResult result = client.initiateMultipartUpload(request);
File file = new File(filePath);
long contentLength = file.length();
long partSize = 8 * 1024 * 1024;

try {
    // Uploading the file, part by part.
    long filePosition = 0;
    for (int i = 1; filePosition < contentLength; i++) {
        // The last part can be less than 8 MB, therefore the partSize needs
        // to be adjusted accordingly.
        partSize = Math.min(partSize, (contentLength - filePosition));

        // Creating the request for a part upload
        UploadPartRequest uploadRequest = new UploadPartRequest()
                .withBucketName(bucket.getName()).withKey(secretAmazonKey)
                .withUploadId(result.getUploadId()).withPartNumber(i)
                .withFileOffset(filePosition).withFile(file)
                .withPartSize(partSize);

        // Upload the part and add the response to the result list.
        partTags.add(client.uploadPart(uploadRequest).getPartETag());
        filePosition += partSize;
    }
}
catch (Exception e) {
    client.abortMultipartUpload(new AbortMultipartUploadRequest(
            bucket.getName(), secretAmazonKey, result.getUploadId()));
}

CompleteMultipartUploadRequest compRequest = new CompleteMultipartUploadRequest(
        bucket.getName(), secretAmazonKey, result.getUploadId(), partTags);
client.completeMultipartUpload(compRequest);
Any help is much appreciated.
Thanks a lot :)
The key in your upload requests is actually your object (file) key (name), not your AWS secret key. Your AWS credentials are specified when you instantiate the client instance, not in the individual requests.
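For illustration, a minimal sketch (using hypothetical accessKeyId / secretAccessKey variables) of how the credentials stay on the client while the object key carries the desired file name, in the same SDK v1 style as your code:

// Credentials belong to the client, not to the upload requests
// (accessKeyId / secretAccessKey are hypothetical variables here).
AmazonS3 client = new AmazonS3Client(
        new BasicAWSCredentials(accessKeyId, secretAccessKey));

// The second argument is the object key, i.e. the name the file will have in S3.
File file = new File(filePath);
InitiateMultipartUploadRequest request = new InitiateMultipartUploadRequest(
        bucket.getName(), file.getName());

// The same key must then be used in every UploadPartRequest,
// AbortMultipartUploadRequest and CompleteMultipartUploadRequest.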
Could you be more specific regarding the errors you are seeing when doing this?
Well, I used Amazon S3 for the first time recently and was able to upload a file as below:
public void saveMinutes(Minutes minutes, byte[] data)
{
    AmazonS3 s3 = new AmazonS3Client(
            new BasicAWSCredentials(amazonS3AccessKey, amazonS3SecretAccessKey));
    ObjectMetadata metaData = new ObjectMetadata();
    metaData.setContentLength(data.length);
    metaData.setContentType("application/pdf");
    s3.putObject(new PutObjectRequest(amazonS3MinutesBucketName, minutes.getFileName(),
            new ByteArrayInputStream(data), metaData));
}
I'm trying to copy files from folder to folder in the same bucket in S3 storage.
I need to copy files that are greater than 5 GB, and I saw in the docs that a regular copy doesn't support this kind of copy (see here).
This link in the docs shows how to do it, but that code is for version 1.x, not 2.x.
I searched the new docs but found only this, and there is no code that shows how to do a multipart copy, only a regular copy.
It should be noted that another user asked about this but got no replies.
This code will help you copy an object with multipart copy in the Java S3 SDK v2.
private final S3Client s3Client = S3Client.builder().build();

public void copyObjectWithMultiPart() {
    String destBucketName = "destination-bucket";
    String destObjectKey = "destination-object-key";
    String sourceBucketName = "source-bucket";
    String sourceObjectKey = "source-object-key";

    // Initiate the multipart upload.
    CreateMultipartUploadRequest createMultipartUploadRequest = CreateMultipartUploadRequest.builder()
            .bucket(destBucketName)
            .key(destObjectKey)
            .build();
    CreateMultipartUploadResponse multipartUploadResponse = s3Client.createMultipartUpload(createMultipartUploadRequest);

    // Get the object size to track the end of the copy operation.
    HeadObjectRequest headObjectRequest = HeadObjectRequest.builder()
            .bucket(sourceBucketName)
            .key(sourceObjectKey)
            .build();
    long objectSize = s3Client.headObject(headObjectRequest).contentLength();

    // Copy the object using 5 MB parts.
    long partSize = 5 * 1024 * 1024;
    long bytePosition = 0;
    int partNum = 1;
    List<CompletedPart> etags = new ArrayList<>();
    while (bytePosition < objectSize) {
        // The last part might be smaller than partSize, so check to make sure
        // that lastByte isn't beyond the end of the object.
        long lastByte = Math.min(bytePosition + partSize - 1, objectSize - 1);

        // Copy this part.
        UploadPartCopyRequest uploadPartCopyRequest = UploadPartCopyRequest.builder()
                .sourceBucket(sourceBucketName)
                .sourceKey(sourceObjectKey)
                .destinationBucket(destBucketName)
                .destinationKey(destObjectKey)
                .uploadId(multipartUploadResponse.uploadId())
                .partNumber(partNum)
                .copySourceRange(String.format("bytes=%d-%d", bytePosition, lastByte))
                .build();
        UploadPartCopyResponse uploadPartCopyResponse = s3Client.uploadPartCopy(uploadPartCopyRequest);
        etags.add(
                CompletedPart.builder()
                        .partNumber(partNum++)
                        .eTag(uploadPartCopyResponse.copyPartResult().eTag())
                        .build()
        );
        bytePosition += partSize;
    }

    // Complete the upload request to concatenate all uploaded parts and make the copied object available.
    CompletedMultipartUpload completedMultipartUpload = CompletedMultipartUpload.builder()
            .parts(etags)
            .build();
    CompleteMultipartUploadRequest completeMultipartUploadRequest =
            CompleteMultipartUploadRequest.builder()
                    .bucket(destBucketName)
                    .key(destObjectKey)
                    .uploadId(multipartUploadResponse.uploadId())
                    .multipartUpload(completedMultipartUpload)
                    .build();
    s3Client.completeMultipartUpload(completeMultipartUploadRequest);
}
I am looking for some help/an example of how to perform a resumable upload to Google Drive using the new v3 REST API in Java.
I know there is a low-level description here: Upload files | Google Drive API. But for the moment I would rather not deal with these low-level requests if there is another, simpler method (like the former MediaHttpUploader, which is deprecated now...).
What I currently do is:
File fileMetadata = new File();
fileMetadata.setName(name);
fileMetadata.setDescription(...);
fileMetadata.setParents(parents);
fileMetadata.setProperties(...);
FileContent mediaContent = new FileContent(..., file);
drive.files().create(fileMetadata, mediaContent).execute();
But for large files, this isn't good if the connection interrupts.
I've just created an implementation of that recently. It will create a new file in your Drive folder and return its metadata when the task succeeds. While uploading, it will also update the listener with upload progress info. I added comments to make it self-explanatory:
public Task<File> createFile(java.io.File yourfile, MediaHttpUploaderProgressListener uploadListener) {
    return Tasks.call(mExecutor, () -> {
        // Generates an input stream with your file content to be uploaded
        FileContent mediaContent = new FileContent("yourFileMimeType", yourfile);

        // Creates an empty Drive file
        File metadata = new File()
                .setParents(parents)
                .setMimeType(yourFileMimeType)
                .setName(yourFileName);

        // Builds up the upload request
        Drive.Files.Create uploadFile = mDriveService.files().create(metadata, mediaContent);

        // This will handle the resumable upload
        MediaHttpUploader uploader = uploadFile.getMediaHttpUploader();

        // Choose your chunk size and it will automatically divide parts
        uploader.setChunkSize(MediaHttpUploader.MINIMUM_CHUNK_SIZE);

        // According to Google, this enables gzip in future versions (optional)
        uploader.setDisableGZipContent(false);

        // Important: this enables resumable upload
        uploader.setDirectUploadEnabled(false);

        // Listener to be updated
        uploader.setProgressListener(uploadListener);

        return uploadFile.execute();
    });
}
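For reference, a rough usage sketch; the call site, file path, and TAG are hypothetical, and it assumes the calling Activity itself is the MediaHttpUploaderProgressListener:

// Hypothetical call site: the Activity passes itself as the progress listener.
createFile(new java.io.File("/path/to/backup.zip"), this)
        .addOnSuccessListener(driveFile -> Log.d(TAG, "Uploaded file id: " + driveFile.getId()))
        .addOnFailureListener(e -> Log.e(TAG, "Upload failed", e));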
And make your Activity implement MediaHttpUploaderProgressListener so you get real-time updates on the upload progress:
@Override
public void progressChanged(MediaHttpUploader uploader) {
    // totalFileSize is assumed to be a field holding the size of the file being uploaded
    String sizeTemp = "Uploading"
            + ": "
            + Formatter.formatShortFileSize(this, uploader.getNumBytesUploaded())
            + "/"
            + Formatter.formatShortFileSize(this, totalFileSize);
    runOnUiThread(() -> textView.setText(sizeTemp));
}
For calculating the progress fraction yourself, you simply do (note the cast, to avoid integer division):
double percentage = (double) uploader.getNumBytesUploaded() / totalFileSize;
Or use this one:
uploader.getProgress()
It gives you the fraction of bytes that have been uploaded, between 0.0 (0%) and 1.0 (100%). But be sure your content length is specified, otherwise it will throw an IllegalArgumentException.
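A minimal sketch of a listener based on getProgress() (assuming the content length is known and textView is the same hypothetical view as above):

@Override
public void progressChanged(MediaHttpUploader uploader) throws IOException {
    double fraction = uploader.getProgress();        // between 0.0 and 1.0
    int percent = (int) Math.round(fraction * 100);  // e.g. 42
    runOnUiThread(() -> textView.setText(percent + "%"));
}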
I'm trying to upload a large file to a server which uses a token, and the token expires after 10 minutes. If I upload a small file it works, but if the file is big I run into problems: the upload keeps trying forever while access is denied.
So I need to refresh the token in the BasicAWSCredentials, which is then used for the AWSStaticCredentialsProvider, but I'm not sure how I can do it. Please help =)
Worth mentioning that we use a local server (not the Amazon cloud) which provides the token, and for convenience we use Amazon's SDK code.
here is my code:
public void uploadMultipart(File file) throws Exception {
    // This method will give you an initial token for a given user, then calculates
    // when a new token is needed and refreshes it just when necessary.
    String token = getUsetToken();
    String existingBucketName = myTenant.toLowerCase() + ".package.upload";
    String endPoint = urlAPI + "s3/buckets/";

    String strSize = FileUtils.byteCountToDisplaySize(FileUtils.sizeOf(file));
    System.out.println("File size: " + strSize);

    AwsClientBuilder.EndpointConfiguration endpointConfiguration =
            new AwsClientBuilder.EndpointConfiguration(endPoint, null); // note: Region has to be null

    // AWSCredentialsProvider
    BasicAWSCredentials sessionCredentials = new BasicAWSCredentials(token, "NOT_USED"); // secretKey should be set to NOT_USED

    AmazonS3 s3 = AmazonS3ClientBuilder
            .standard()
            .withCredentials(new AWSStaticCredentialsProvider(sessionCredentials))
            .withEndpointConfiguration(endpointConfiguration)
            .enablePathStyleAccess()
            .build();

    int maxUploadThreads = 5;
    TransferManager tm = TransferManagerBuilder
            .standard()
            .withS3Client(s3)
            .withMultipartUploadThreshold((long) (5 * 1024 * 1024))
            .withExecutorFactory(() -> Executors.newFixedThreadPool(maxUploadThreads))
            .build();

    PutObjectRequest request = new PutObjectRequest(existingBucketName, file.getName(), file);
    //request.putCustomRequestHeader("Access-Token", token);

    ProgressListener progressListener =
            progressEvent -> System.out.println("Transferred bytes: " + progressEvent.getBytesTransferred());
    request.setGeneralProgressListener(progressListener);

    Upload upload = tm.upload(request);
    LocalDateTime uploadStartedAt = LocalDateTime.now();
    log.info("Starting upload at: " + uploadStartedAt);

    try {
        upload.waitForCompletion();
        //upload.waitForUploadResult();
        log.info("Upload completed. " + strSize);
    } catch (Exception e) { // AmazonClientException
        log.error("Error occurred while uploading file - " + strSize);
        e.printStackTrace();
    }
}
Solution found!
I found a way to get this working and, to be honest, I'm quite happy with the result. I've done many tests with big files (50gd.zip) and it worked very well in every scenario.
My solution is to remove the line: BasicAWSCredentials sessionCredentials = new BasicAWSCredentials(token, "NOT_USED");
AWSCredentials is an interface, so we can implement it with something dynamic. The logic for deciding when the token is expired and fetching a fresh one is held inside the getToken() method, meaning you can call it every time with no harm:
AWSCredentials sessionCredentials = new AWSCredentials() {
    @Override
    public String getAWSAccessKeyId() {
        try {
            return getToken(); // getToken() method returns a string
        } catch (Exception e) {
            return null;
        }
    }

    @Override
    public String getAWSSecretKey() {
        return "NOT_USED";
    }
};
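The rest of the builder code stays exactly the same; a brief sketch of how this plugs into the existing AWSStaticCredentialsProvider (as far as I can tell, the SDK reads the access key from the credentials object when signing each request, which is how the fresh token gets picked up):

// Same builder as before; only the credentials object changed.
AmazonS3 s3 = AmazonS3ClientBuilder
        .standard()
        .withCredentials(new AWSStaticCredentialsProvider(sessionCredentials))
        .withEndpointConfiguration(endpointConfiguration)
        .enablePathStyleAccess()
        .build();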
When uploading a file (or parts of a multipart file), the credentials that you use must last long enough for the upload to complete. You CANNOT refresh the credentials mid-request, as there is no way to tell AWS S3 that an already signed request should now use new credentials.
You could break the upload into smaller parts that upload more quickly. Then upload only X parts, refresh your credentials, and upload Y parts. Repeat until all parts are uploaded. Then you will need to finish by combining the parts (which is a separate command). This is not a perfect solution, as transfer speeds cannot be accurately controlled AND it means that you will have to write your own upload code (which is not hard); see the sketch below.
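A rough sketch of that batching idea with the low-level SDK v1 multipart API, reusing variables from the question's code (existingBucketName, file, endpointConfiguration) and assuming a hypothetical fetchFreshCredentials() helper that returns fresh BasicAWSCredentials; whether the server honours a refreshed token for the same upload ID is an assumption:

// Build an initial client and initiate the upload once, then rebuild the client
// with fresh credentials every few parts (fetchFreshCredentials() is hypothetical).
AmazonS3 s3 = AmazonS3ClientBuilder.standard()
        .withCredentials(new AWSStaticCredentialsProvider(fetchFreshCredentials()))
        .withEndpointConfiguration(endpointConfiguration)
        .enablePathStyleAccess()
        .build();
InitiateMultipartUploadResult init = s3.initiateMultipartUpload(
        new InitiateMultipartUploadRequest(existingBucketName, file.getName()));

List<PartETag> etags = new ArrayList<>();
long contentLength = file.length();
long partSize = 5 * 1024 * 1024;
long offset = 0;
for (int partNumber = 1; offset < contentLength; partNumber++) {
    if (partNumber > 1 && partNumber % 10 == 1) {
        // Arbitrary batch size: refresh credentials every 10 parts.
        s3 = AmazonS3ClientBuilder.standard()
                .withCredentials(new AWSStaticCredentialsProvider(fetchFreshCredentials()))
                .withEndpointConfiguration(endpointConfiguration)
                .enablePathStyleAccess()
                .build();
    }
    long size = Math.min(partSize, contentLength - offset);
    etags.add(s3.uploadPart(new UploadPartRequest()
            .withBucketName(existingBucketName).withKey(file.getName())
            .withUploadId(init.getUploadId()).withPartNumber(partNumber)
            .withFile(file).withFileOffset(offset).withPartSize(size))
            .getPartETag());
    offset += size;
}
s3.completeMultipartUpload(new CompleteMultipartUploadRequest(
        existingBucketName, file.getName(), init.getUploadId(), etags));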
I have a shapefile and I need to read it from my Java code. I used the code below for reading the shapefile:
public class App {
    public static void main(String[] args) {
        File file = new File("C:\\Test\\sample.shp");
        Map<String, Object> map = new HashMap<>();
        try {
            map.put("url", URLs.fileToUrl(file));
            DataStore dataStore = DataStoreFinder.getDataStore(map);
            String typeName = dataStore.getTypeNames()[0];
            SimpleFeatureSource source = dataStore.getFeatureSource(typeName);
            SimpleFeatureCollection collection = source.getFeatures();
            try (FeatureIterator<SimpleFeature> features = collection.features()) {
                while (features.hasNext()) {
                    SimpleFeature feature = features.next();
                    SimpleFeatureType schema = feature.getFeatureType();
                    Class<?> geomType = schema.getGeometryDescriptor().getType().getBinding();
                    String type = "";
                    if (Polygon.class.isAssignableFrom(geomType) || MultiPolygon.class.isAssignableFrom(geomType)) {
                        MultiPolygon geom = (MultiPolygon) feature.getDefaultGeometry();
                        type = "Polygon";
                        if (geom.getNumGeometries() > 1) {
                            type = "MultiPolygon";
                        }
                    } else if (LineString.class.isAssignableFrom(geomType)
                            || MultiLineString.class.isAssignableFrom(geomType)) {
                    } else {
                    }
                    System.out.println(feature.getDefaultGeometryProperty().getValue().toString());
                }
            }
        } catch (Exception e) {
            // TODO: handle exception
        }
    }
}
I got the desired output. But my requirement is to write an AWS Lambda function to read the shapefile. For this:
1. I created a Lambda Java project for an S3 event and wrote the same code inside handleRequest. I uploaded the Java project as a Lambda function and added a trigger, so that when a .shp file is uploaded to the S3 bucket the Lambda function is invoked automatically. But I am getting an error like below:
java.lang.RuntimeException: java.io.FileNotFoundException: /sample.shp (No such file or directory)
I have the sample.shp file inside my S3 bucket. I went through the link below.
How to write an S3 object to a file?
I am getting the same error. I tried to change my code like below
S3Object object = s3.getObject(new GetObjectRequest(bucket, key));
InputStream objectData = object.getObjectContent();
map.put("url", objectData );
instead of
File file = new File("C:\\Test\\sample.shp");
map.put("url", URLs.fileToUrl(file));
:-( Now I am getting an error like below:
java.lang.NullPointerException
Also I tried the below code
DataStore dataStore = DataStoreFinder.getDataStore(objectData);
instead of
DataStore dataStore = DataStoreFinder.getDataStore(map);
the error was like below
java.lang.ClassCastException: com.amazonaws.services.s3.model.S3ObjectInputStream cannot be cast to java.util.Map
Also I tried adding the key directly to the map, and also as a DataStore object. Everything went wrong. :-(
Is there anyone who can help me?
It will be very helpful if someone can do it for me...
The DataStoreFinder.getDataStore method in geotools requires you to provide a map containing a key/value pair with key "url". The value associated with that "url" key needs to be a file URL like "file://host/path/my.shp".
You're trying to insert a Java input stream into the map. That won't work, because it's not a file URL.
The geotools library does not accept http/https URLs (see the geotools code here and here), so you need a file:// URL. That means you will need to download the file from S3 to the local Lambda filesystem and then provide a file:// URL pointing to that local file. To do that, here's Java code that should work:
// get the shape file from S3 to local filesystem
File localshp = new File("/tmp/download.shp");
s3.getObject(new GetObjectRequest(bucket, key), localshp);
// now store file:// URL in the map
map.put("url", localshp.toURI().toURL().toString());
If the geotools library had accepted real URLs (not just file:// URLs) then you could have avoided the download and simply created a time-limited, pre-signed URL for the S3 object and put that URL into the map.
Here's an example of how to do that:
// get current time and add one hour
java.util.Date expiration = new java.util.Date();
long msec = expiration.getTime();
msec += 1000 * 60 * 60;
expiration.setTime(msec);
// request pre-signed URL that will allow bearer to GET the object
GeneratePresignedUrlRequest gpur = new GeneratePresignedUrlRequest(bucket, key);
gpur.setMethod(HttpMethod.GET);
gpur.setExpiration(expiration);
// get URL that will expire in one hour
URL url = s3.generatePresignedUrl(gpur);
I am using a third-party server which returns me the following things:
1) url
2) acl
3) policy
4) awsAccessKeyId
5) signature
6) key
I can upload a file using the following code:
final File localFile = new File(localFilePath);
final Part[] parts = { new StringPart("acl", acl),
        new StringPart("policy", policy),
        new StringPart("AWSAccessKeyId", awsAccessKeyId),
        new StringPart("signature", signature),
        new StringPart("key", key, HTTP.UTF_8),
        new FilePart("file", localFile) };
final MultipartRequestEntity mpRequestEntity = new MultipartRequestEntity(parts, filePost.getParams());
filePost.setRequestEntity(mpRequestEntity);
final HttpClient client = new HttpClient();
try
{
    status = client.executeMethod(filePost);
}
But now I want to use AmazonS3Client with the following code, but it's throwing this exception:
10-31 16:21:36.070: INFO/com.amazonaws.request(13882): Received error
response: Status Code: 403, AWS Request ID: 51F7CB27E58F88FD, AWS
Error Code: SignatureDoesNotMatch, AWS Error Message: The request
signature we calculated does not match the signature you provided.
Check your key and signing method., S3 Extended Request ID:
YwNNsWOXg71vXY1VS0apHnHpHp4YVWRJ63xm8C7w36SYg1MNuIykw75YhQco5Lk7
final AmazonS3Client s3Client = new AmazonS3Client(new BasicAWSCredentials(awsAccessKeyId, key));

// Create a list of UploadPartResponse objects. You get one of these
// for each part upload.
final List<PartETag> partETags = new ArrayList<PartETag>();

// Step 1: Initialize.
final InitiateMultipartUploadRequest initRequest = new InitiateMultipartUploadRequest(targetURL, key);
final InitiateMultipartUploadResult initResponse = s3Client.initiateMultipartUpload(initRequest);

final File file = new File(localFilePath);
final long contentLength = file.length();
long partSize = 5242880; // Set part size to 5 MB.

try
{
    // Step 2: Upload parts.
    long filePosition = 0;
    for ( int i = 1; filePosition < contentLength; i++ )
    {
        // Last part can be less than 5 MB. Adjust part size.
        partSize = Math.min(partSize, (contentLength - filePosition));

        // Create request to upload a part.
        final UploadPartRequest uploadRequest = new UploadPartRequest().withBucketName(targetURL).withKey(key)
                .withUploadId(initResponse.getUploadId()).withPartNumber(i).withFileOffset(filePosition)
                .withFile(file).withPartSize(partSize);

        // Upload part and add response to our list.
        partETags.add(s3Client.uploadPart(uploadRequest).getPartETag());
        filePosition += partSize;
    }

    // Step 3: complete.
    final CompleteMultipartUploadRequest compRequest = new CompleteMultipartUploadRequest(targetURL, key,
            initResponse.getUploadId(), partETags);
    s3Client.completeMultipartUpload(compRequest);
}
catch ( final Exception e )
{
    s3Client.abortMultipartUpload(new AbortMultipartUploadRequest(targetURL, key, initResponse.getUploadId()));
    return false;
}
return true;
am I missing something here?
I found that the server is sending a signature for uploading the file in one shot. In the case of a multipart upload, multiple signatures are needed, at various steps.
There is no way to upload the file in multiple parts unless the server shares the secret key :(.
http://dextercoder.blogspot.in/2012/02/multipart-upload-to-amazon-s3-in-three.html