I'm trying to read files stored on Amazon S3, as the title explains. I couldn't find an alternative for the deprecated constructor.
Here's the code:
private String AccessKeyID = "xxxxxxxxxxxxxxxxxxxx";
private String SecretAccessKey = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxx";
private static String bucketName = "documentcontainer";
private static String keyName = "test";
//private static String uploadFileName = "/PATH TO FILE WHICH WANT TO UPLOAD/abc.txt";
AWSCredentials credentials = new BasicAWSCredentials(AccessKeyID, SecretAccessKey);
void downloadfile() throws IOException
{
// Problem lies here - AmazonS3Client is deprecated
AmazonS3 s3client = new AmazonS3Client(credentials);
try {
System.out.println("Downloading an object...");
S3Object s3object = s3client.getObject(new GetObjectRequest(
bucketName, keyName));
System.out.println("Content-Type: " +
s3object.getObjectMetadata().getContentType());
InputStream input = s3object.getObjectContent();
BufferedReader reader = new BufferedReader(new InputStreamReader(input));
String line;
while ((line = reader.readLine()) != null) {
    System.out.println(" " + line);
}
System.out.println();
} catch (AmazonServiceException ase) {
//do something
} catch (AmazonClientException ace) {
// do something
}
}
Any help? If more explanation is needed, please say so.
I have checked the sample code provided in the SDK's .zip file, and it uses the same deprecated constructor.
You can use either AmazonS3ClientBuilder or
AwsClientBuilder as an alternative.
For S3, the simplest option is AmazonS3ClientBuilder:
BasicAWSCredentials creds = new BasicAWSCredentials("access_key", "secret_key");
AmazonS3 s3Client = AmazonS3ClientBuilder
.standard()
.withCredentials(new AWSStaticCredentialsProvider(creds))
.build();
Use the code below to create an S3 client without explicit credentials; it falls back to the default credentials provider chain (environment variables, Java system properties, the shared credentials file, or an instance/container role):
AmazonS3 s3Client = AmazonS3ClientBuilder.standard().build();
A typical usage example is a Lambda function calling S3, where the execution role supplies the credentials.
You need to pass the region information through the com.amazonaws.regions.Region object:
Use AmazonS3Client(credentials, Region.getRegion(Regions.REPLACE_WITH_YOUR_REGION))
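If you are on the builder API rather than the deprecated constructor, a sketch of the equivalent (reusing the creds object from the builder answer above; the region is a placeholder, substitute your own):
AmazonS3 s3Client = AmazonS3ClientBuilder
    .standard()
    .withRegion(Regions.US_EAST_1) // replace with your region
    .withCredentials(new AWSStaticCredentialsProvider(creds))
    .build();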
You can create the default S3 client as follows (with aws-java-sdk-s3 1.11.232):
AmazonS3ClientBuilder.defaultClient();
The constructor that takes only credentials is deprecated; instead you can use something like this (Kotlin, AWS Android SDK with Cognito):
val awsConfiguration = AWSConfiguration(context)
val awsCreds = CognitoCachingCredentialsProvider(context, awsConfiguration)
val s3Client = AmazonS3Client(awsCreds, Region.getRegion(Regions.EU_CENTRAL_1))
Using the AWS SDK for Java 2.x, one can also build one's own credentials provider, like so:
// Credential provider
package com.myproxylib.aws;
import software.amazon.awssdk.auth.credentials.AwsCredentials;
import software.amazon.awssdk.auth.credentials.AwsCredentialsProvider;
public class CustomCredentialsProvider implements AwsCredentialsProvider {
private final String accessKeyId;
private final String secretAccessKey;
public CustomCredentialsProvider(String accessKeyId, String secretAccessKey) {
this.secretAccessKey = secretAccessKey;
this.accessKeyId = accessKeyId;
}
@Override
public AwsCredentials resolveCredentials() {
return new CustomAwsCredentialsResolver(accessKeyId, secretAccessKey);
}
}
// Credentials resolver
package com.myproxylib.aws;
import software.amazon.awssdk.auth.credentials.AwsCredentials;
public class CustomAwsCredentialsResolver implements AwsCredentials {
private final String accessKeyId;
private final String secretAccessKey;
CustomAwsCredentialsResolver(String accessKeyId, String secretAccessKey) {
this.secretAccessKey = secretAccessKey;
this.accessKeyId = accessKeyId;
}
@Override
public String accessKeyId() {
return accessKeyId;
}
@Override
public String secretAccessKey() {
return secretAccessKey;
}
}
// Usage of the provider
package com.myproxylib.aws.s3;
public class S3Storage implements IS3StorageCapable {
private final S3Client s3Client;
public S3Storage(String accessKeyId, String secretAccessKey, String region) {
this.s3Client = S3Client.builder().credentialsProvider(new CustomCredentialsProvider(accessKeyId, secretAccessKey)).region(of(region)).build();
    }
}
NOTE: of course, the library user can get the credentials from wherever they want and parse them into Java Properties before calling the S3 constructor.
When possible, favour the other methods mentioned in the other answers and the docs; this approach was only necessary for my use case.
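Note that for the plain static-keys case, SDK V2 already ships an equivalent, so a custom provider like the one above is only needed for more exotic credential sources. A minimal sketch (assuming hard-coded keys, which you would normally avoid; the region is an example):
import software.amazon.awssdk.auth.credentials.AwsBasicCredentials;
import software.amazon.awssdk.auth.credentials.StaticCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;

S3Client s3 = S3Client.builder()
        .credentialsProvider(StaticCredentialsProvider.create(
                AwsBasicCredentials.create(accessKeyId, secretAccessKey)))
        .region(Region.EU_WEST_1) // example region
        .build();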
implementation 'com.amazonaws:aws-android-sdk-s3:2.16.12'
val key = "XXX"
val secret = "XXX"
val credentials = BasicAWSCredentials(key, secret)
val s3 = AmazonS3Client(
credentials, com.amazonaws.regions.Region.getRegion(
Regions.US_EAST_2
)
)
val expires = Date(Date().time + 1000 * 60 * 60) // presigned URL valid for one hour
val keyFile = "13/thumbnail_800x600_13_photo.jpeg"
val generatePresignedUrlRequest = GeneratePresignedUrlRequest(
"bucket_name",
keyFile
)
generatePresignedUrlRequest.expiration = expires
val url: URL = s3.generatePresignedUrl(generatePresignedUrlRequest)
GlideApp.with(this)
.load(url.toString())
.apply(RequestOptions.centerCropTransform())
.into(image)
I have a user defined object, say MyClass, saved in an Amazon S3 bucket inside a folder. The bucket structure looks like the following:
<bucket>
-<folder>
--<my unique numbered file containing MyClass>
I need to read this file and deserialize it to MyClass.
I went through the AWS docs, and they suggest using something like the following:
Regions clientRegion = Regions.DEFAULT_REGION;
String bucketName = "*** Bucket name ***";
String key = "*** Object key ***";
S3Object fullObject = null, objectPortion = null, headerOverrideObject = null;
try {
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
.withRegion(clientRegion)
.withCredentials(new ProfileCredentialsProvider())
.build();
fullObject = s3Client.getObject(new GetObjectRequest(bucketName, key));
Example: https://docs.aws.amazon.com/AmazonS3/latest/userguide/download-objects.html
However, I need to deserialize it to MyClass instead of S3Object.
I couldn't get an example of how to do this. Can anyone suggest how this is done?
You are looking in the correct guide, but at the wrong topic: the topic you are looking at is old V1 code.
Look at this topic instead to use AWS SDK for Java V2, which is considered best practice. V2 packages start with software.amazon.awssdk.
https://docs.aws.amazon.com/AmazonS3/latest/userguide/example_s3_GetObject_section.html
Once you get an object from an S3 bucket, you can convert it into a byte array and from there do whatever you need to. This V2 example shows how to get a byte array from an object.
package com.example.s3;
// snippet-start:[s3.java2.getobjectdata.import]
import software.amazon.awssdk.auth.credentials.ProfileCredentialsProvider;
import software.amazon.awssdk.core.ResponseBytes;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.GetObjectRequest;
import software.amazon.awssdk.services.s3.model.S3Exception;
import software.amazon.awssdk.services.s3.model.GetObjectResponse;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;
// snippet-end:[s3.java2.getobjectdata.import]
/**
* Before running this Java V2 code example, set up your development environment, including your credentials.
*
* For more information, see the following documentation topic:
*
* https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html
*/
public class GetObjectData {
public static void main(String[] args) {
final String usage = "\n" +
"Usage:\n" +
" <bucketName> <keyName> <path>\n\n" +
"Where:\n" +
" bucketName - The Amazon S3 bucket name. \n\n"+
" keyName - The key name. \n\n"+
" path - The path where the file is written to. \n\n";
if (args.length != 3) {
System.out.println(usage);
System.exit(1);
}
String bucketName = args[0];
String keyName = args[1];
String path = args[2];
ProfileCredentialsProvider credentialsProvider = ProfileCredentialsProvider.create();
Region region = Region.US_EAST_1;
S3Client s3 = S3Client.builder()
.region(region)
.credentialsProvider(credentialsProvider)
.build();
getObjectBytes(s3, bucketName, keyName, path);
s3.close();
}
// snippet-start:[s3.java2.getobjectdata.main]
public static void getObjectBytes (S3Client s3, String bucketName, String keyName, String path ) {
try {
GetObjectRequest objectRequest = GetObjectRequest
.builder()
.key(keyName)
.bucket(bucketName)
.build();
ResponseBytes<GetObjectResponse> objectBytes = s3.getObjectAsBytes(objectRequest);
byte[] data = objectBytes.asByteArray();
// Write the data to a local file.
File myFile = new File(path);
OutputStream os = new FileOutputStream(myFile);
os.write(data);
System.out.println("Successfully obtained bytes from an S3 object");
os.close();
} catch (IOException ex) {
ex.printStackTrace();
} catch (S3Exception e) {
System.err.println(e.awsErrorDetails().errorMessage());
System.exit(1);
}
}
// snippet-end:[s3.java2.getobjectdata.main]
}
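For the deserialization the question actually asks about, a minimal sketch of the missing step, assuming MyClass implements Serializable and the object was originally written with ObjectOutputStream (otherwise use whatever format it was serialized in, e.g. a JSON mapper). The surrounding method must declare IOException and ClassNotFoundException:
import java.io.ByteArrayInputStream;
import java.io.ObjectInputStream;

byte[] data = s3.getObjectAsBytes(objectRequest).asByteArray();
try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(data))) {
    MyClass obj = (MyClass) in.readObject(); // throws ClassNotFoundException if MyClass is not on the classpath
}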
I am using a Spring Boot app to expose an API tasked with uploading files to AWS S3.
I am using a single instance of S3 client in the application:
@Bean
public AmazonS3 s3Client() {
BasicAWSCredentials credentials = new BasicAWSCredentials(awsAccessKey, awsSecretAccessKey);
AmazonS3ClientBuilder amazonS3ClientBuilder = AmazonS3ClientBuilder
.standard()
.withCredentials(new AWSStaticCredentialsProvider(credentials))
.withEndpointConfiguration(
new AwsClientBuilder.EndpointConfiguration(
endpointUrl, Regions.EU_WEST_1.getName()));
return amazonS3ClientBuilder.build();
}
Can I create a bean for TransferManager in a similar fashion and reuse the same instance across requests or do I need to create a fresh instance for every API call to upload a file?
What is the recommended practice here?
The recommended practice is to move away from AWS SDK for Java V1: Amazon recommends that you use AWS SDK for Java V2. To upload content to an Amazon S3 bucket, you can now use the S3TransferManager.
If you are not familiar with using the V2 SDK, I recommend that you read the Developer Guide.
Here is a code example of how to use it to upload an object. You can use the same instance to upload different objects; all you need to do is pass the correct values to transferManager.uploadFile(). (For the V1 question as asked: TransferManager is thread-safe, so sharing a single bean across requests is fine too.)
package com.example.transfermanager;
import software.amazon.awssdk.auth.credentials.ProfileCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.transfer.s3.FileUpload;
import software.amazon.awssdk.transfer.s3.S3TransferManager;
import java.nio.file.Paths;
/**
* Before running this Java V2 code example, set up your development environment, including your credentials.
*
* For more information, see the following documentation topic:
*
* https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html
*/
public class UploadObject {
public static void main(String[] args) {
final String usage = "\n" +
"Usage:\n" +
" <bucketName> <objectKey> <objectPath> \n\n" +
"Where:\n" +
" bucketName - The Amazon S3 bucket to upload an object into.\n" +
" objectKey - The object to upload (for example, book.pdf).\n" +
" objectPath - The path where the file is located (for example, C:/AWS/book2.pdf). \n\n" ;
if (args.length != 3) {
System.out.println(usage);
System.exit(1);
}
long mb = 1024 * 1024; // one MiB in bytes, so minimumPartSizeInBytes below is 10 MiB
String bucketName = args[0];
String objectKey = args[1];
String objectPath = args[2];
System.out.println("Putting an object into bucket "+bucketName +" using the S3TransferManager");
ProfileCredentialsProvider credentialsProvider = ProfileCredentialsProvider.create();
Region region = Region.US_EAST_1;
S3TransferManager transferManager = S3TransferManager.builder()
.s3ClientConfiguration(cfg -> cfg.region(region)
.credentialsProvider(credentialsProvider)
.targetThroughputInGbps(20.0)
.minimumPartSizeInBytes(10 * mb))
.build();
uploadObjectTM(transferManager, bucketName, objectKey, objectPath);
System.out.println("Object was successfully uploaded using the Transfer Manager.");
transferManager.close();
}
public static void uploadObjectTM( S3TransferManager transferManager, String bucketName, String objectKey, String objectPath) {
FileUpload upload =
transferManager.uploadFile(u -> u.source(Paths.get(objectPath))
.putObjectRequest(p -> p.bucket(bucketName).key(objectKey)));
upload.completionFuture().join();
}
}
UPDATE
As the above API still appears to be in Preview Mode (judging from Maven repo):
https://mvnrepository.com/artifact/software.amazon.awssdk/s3-transfer-manager
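Note that in the GA releases that followed the preview, the builder API changed: it takes an S3AsyncClient instead of s3ClientConfiguration. A sketch against the GA module, worth double-checking against the version you actually pull in:
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3AsyncClient;
import software.amazon.awssdk.transfer.s3.S3TransferManager;

S3AsyncClient s3AsyncClient = S3AsyncClient.builder().region(Region.US_EAST_1).build();
S3TransferManager transferManager = S3TransferManager.builder().s3Client(s3AsyncClient).build();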
You can use the Amazon S3 V2 service client to upload objects instead. V2 is still better practice than V1.
You can use this code:
package com.example.s3;
// snippet-start:[s3.java2.s3_object_upload.import]
import software.amazon.awssdk.auth.credentials.ProfileCredentialsProvider;
import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;
import software.amazon.awssdk.services.s3.model.PutObjectResponse;
import software.amazon.awssdk.services.s3.model.S3Exception;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
// snippet-end:[s3.java2.s3_object_upload.import]
/**
* Before running this Java V2 code example, set up your development environment, including your credentials.
*
* For more information, see the following documentation topic:
*
* https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html
*/
public class PutObject {
public static void main(String[] args) {
final String usage = "\n" +
"Usage:\n" +
" <bucketName> <objectKey> <objectPath> \n\n" +
"Where:\n" +
" bucketName - The Amazon S3 bucket to upload an object into.\n" +
" objectKey - The object to upload (for example, book.pdf).\n" +
" objectPath - The path where the file is located (for example, C:/AWS/book2.pdf). \n\n" ;
if (args.length != 3) {
System.out.println(usage);
System.exit(1);
}
String bucketName = args[0];
String objectKey = args[1];
String objectPath = args[2];
System.out.println("Putting object " + objectKey +" into bucket "+bucketName);
System.out.println(" in bucket: " + bucketName);
ProfileCredentialsProvider credentialsProvider = ProfileCredentialsProvider.create();
Region region = Region.US_EAST_1;
S3Client s3 = S3Client.builder()
.region(region)
.credentialsProvider(credentialsProvider)
.build();
String result = putS3Object(s3, bucketName, objectKey, objectPath);
System.out.println("Tag information: "+result);
s3.close();
}
// snippet-start:[s3.java2.s3_object_upload.main]
public static String putS3Object(S3Client s3,
String bucketName,
String objectKey,
String objectPath) {
try {
Map<String, String> metadata = new HashMap<>();
metadata.put("x-amz-meta-myVal", "test");
PutObjectRequest putOb = PutObjectRequest.builder()
.bucket(bucketName)
.key(objectKey)
.metadata(metadata)
.build();
PutObjectResponse response = s3.putObject(putOb,
RequestBody.fromBytes(getObjectFile(objectPath)));
return response.eTag();
} catch (S3Exception e) {
System.err.println(e.getMessage());
System.exit(1);
}
return "";
}
// Return a byte array.
private static byte[] getObjectFile(String filePath) {
FileInputStream fileInputStream = null;
byte[] bytesArray = null;
try {
File file = new File(filePath);
bytesArray = new byte[(int) file.length()];
fileInputStream = new FileInputStream(file);
fileInputStream.read(bytesArray); // note: a single read() may not fill the buffer for very large files
} catch (IOException e) {
e.printStackTrace();
} finally {
if (fileInputStream != null) {
try {
fileInputStream.close();
} catch (IOException e) {
e.printStackTrace();
}
}
}
return bytesArray;
}
// snippet-end:[s3.java2.s3_object_upload.main]
}
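As a side note, if the goal is simply to upload a file from disk, V2's RequestBody can stream it directly instead of reading the whole file into memory as getObjectFile() does (Paths is java.nio.file.Paths):
PutObjectResponse response = s3.putObject(putOb, RequestBody.fromFile(Paths.get(objectPath)));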
I created an app for getting info from upwork.com. I use the Java lib and Upwork OAuth 1.0. The problem is that a local request to the API works fine, but when I deploy to Google Cloud, my code does not work; I get ({"error":{"code":"503","message":"Exception: IOException"}}).
I create UpworkAuthClient to return an OAuthClient, which is then used for requests in JobClient.
void run() {
UpworkAuthClient upworkClient = new UpworkAuthClient();
upworkClient.setTokenWithSecret("USER TOKEN", "USER SECRET");
OAuthClient client = upworkClient.getOAuthClient();
//set query
JobQuery jobQuery = new JobQuery();
jobQuery.setQuery("query");
List<JobQuery> jobQueries = new ArrayList<>();
jobQueries.add(jobQuery);
// Get request of job
JobClient jobClient = new JobClient(client, jobQuery);
List<Job> result = jobClient.getJob();
}
public class UpworkAuthClient {
public static final String CONSUMERKEY = "UPWORK KEY";
public static final String CONSUMERSECRET = "UPWORK SECRET";
public static final String OAUTH_CALLBACK = "https://my-app.com/main";
OAuthClient client ;
public UpworkAuthClient() {
Properties keys = new Properties();
keys.setProperty("consumerKey", CONSUMERKEY);
keys.setProperty("consumerSecret", CONSUMERSECRET);
Config config = new Config(keys);
client = new OAuthClient(config);
}
public void setTokenWithSecret (String token, String secret){
client.setTokenWithSecret(token, secret);
}
public OAuthClient getOAuthClient() {
return client;
}
public String getAuthorizationUrl() {
return this.client.getAuthorizationUrl(OAUTH_CALLBACK);
}
}
public class JobClient {
private JobQuery jobQuery;
private Search jobs;
public JobClient(OAuthClient oAuthClient, JobQuery jobQuery) {
jobs = new Search(oAuthClient);
this.jobQuery = jobQuery;
}
public List<Job> getJob() throws JSONException {
JSONObject job = jobs.find(jobQuery.getQueryParam());
List<Job> jobList = parseResponse(job); // parseResponse not shown by the asker
return jobList;
}
}
The local dev server works fine and I get results on my local machine, but in the Cloud I don't.
I will be glad of any ideas, thanks!
{"error":{"code":"503","message":"Exception: IOException"}}
doesn't look like a response returned by the Upwork API. Could you please provide the full response, including the returned headers? Then we can take a more precise look into it.
I have code in C# for blob storage; I need some reference code in Java to get a SAS token for File storage in Azure. The SAS may apply to the account or to services.
Code 1:
static string GetAccountSASToken()
{
// To create the account SAS, you need to use your shared key credentials. Modify for your account.
const string ConnectionString = "DefaultEndpointsProtocol=https;AccountName=account-name;AccountKey=account-key";
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(ConnectionString);
// Create a new access policy for the account.
SharedAccessAccountPolicy policy = new SharedAccessAccountPolicy()
{
Permissions = SharedAccessAccountPermissions.Read | SharedAccessAccountPermissions.Write | SharedAccessAccountPermissions.List,
Services = SharedAccessAccountServices.Blob | SharedAccessAccountServices.File,
ResourceTypes = SharedAccessAccountResourceTypes.Service,
SharedAccessExpiryTime = DateTime.UtcNow.AddHours(24),
Protocols = SharedAccessProtocol.HttpsOnly
};
// Return the SAS token.
return storageAccount.GetSharedAccessSignature(policy);
}
This code creates a SAS token with account verification and an expiry time. I need the same in Java. There are a few things I am not getting, like how to write the 'Permissions' in Java in the first snippet, I mean multiple permissions in one line. Please provide equivalent Java code for this.
Code 2:
@Test
public String testFileSAS(CloudFileShare share, CloudFile file) throws InvalidKeyException,
IllegalArgumentException, StorageException, URISyntaxException, InterruptedException {
SharedAccessFilePolicy policy = createSharedAccessPolicy(EnumSet.of(SharedAccessFilePermissions.READ,
SharedAccessFilePermissions.LIST, SharedAccessFilePermissions.WRITE), 24);
FileSharePermissions perms = new FileSharePermissions();
// SharedAccessProtocols protocol = SharedAccessProtocols.HTTPS_ONLY;
perms.getSharedAccessPolicies().put("readperm", policy);
share.uploadPermissions(perms);
// Thread.sleep(30000);
CloudFile sasFile = new CloudFile(
new URI(file.getUri().toString() + "?" + file.generateSharedAccessSignature(null, "readperm")));
sasFile.download(new ByteArrayOutputStream());
// do not give the client and check that the new file's client has the
// correct permissions
CloudFile fileFromUri = new CloudFile(
PathUtility.addToQuery(file.getStorageUri(), file.generateSharedAccessSignature(null, "readperm")));
assertEquals(StorageCredentialsSharedAccessSignature.class.toString(),
fileFromUri.getServiceClient().getCredentials().getClass().toString());
// create credentials from sas
StorageCredentials creds = new StorageCredentialsSharedAccessSignature(
file.generateSharedAccessSignature(policy, null, null));
System.out.println("Generated SAS token is : " + file.generateSharedAccessSignature(policy, null, null));
String token = file.generateSharedAccessSignature(policy, null, null);
CloudFileClient client = new CloudFileClient(sasFile.getServiceClient().getStorageUri(), creds);
CloudFile fileFromClient = client.getShareReference(file.getShare().getName()).getRootDirectoryReference()
.getFileReference(file.getName());
assertEquals(StorageCredentialsSharedAccessSignature.class.toString(),
fileFromClient.getServiceClient().getCredentials().getClass().toString());
assertEquals(client, fileFromClient.getServiceClient());
// self written
// String sharedUri =
// file.generateSharedAccessSignature(policy,null,null);
// System.out.println("token created is : "+sharedUri);
return token;
}
private final static SharedAccessFilePolicy createSharedAccessPolicy(EnumSet<SharedAccessFilePermissions> sap,
int expireTimeInHours) {
Calendar calendar = new GregorianCalendar(TimeZone.getTimeZone("UTC"));
calendar.setTime(new Date());
calendar.add(Calendar.HOUR, expireTimeInHours);
SharedAccessFilePolicy policy = new SharedAccessFilePolicy();
policy.setPermissions(sap);
policy.setSharedAccessExpiryTime(calendar.getTime());
return policy;
}
This code is a jUnit test. I don' want to import jUnit library. Want to do it in pure java.How I can convert the code? What I can use instead of assertEqauls?
Please consider the following code snippet in Java.
public static final String storageConnectionString = "DefaultEndpointsProtocol=https;AccountName=account-name;AccountKey=account-key";
public String getAccountSASToken() throws InvalidKeyException, URISyntaxException, StorageException {
CloudStorageAccount account = CloudStorageAccount.parse(storageConnectionString);
SharedAccessAccountPolicy policy = new SharedAccessAccountPolicy();
policy.setPermissions(EnumSet.of(SharedAccessAccountPermissions.READ, SharedAccessAccountPermissions.WRITE, SharedAccessAccountPermissions.LIST));
policy.setServices(EnumSet.of(SharedAccessAccountService.BLOB, SharedAccessAccountService.FILE) );
policy.setResourceTypes(EnumSet.of(SharedAccessAccountResourceType.SERVICE));
policy.setSharedAccessExpiryTime(Date.from(ZonedDateTime.now(ZoneOffset.UTC).plusHours(24L).toInstant()));
policy.setProtocols(SharedAccessProtocols.HTTPS_ONLY);
return account.generateSharedAccessSignature(policy);
}
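On the two specific sub-questions: EnumSet.of(...) above is the Java counterpart of OR-ing the C# flags together in one line, and outside JUnit you can replace assertEquals with a plain runtime check, for example:
if (!expected.equals(actual)) {
    throw new IllegalStateException("Expected " + expected + " but got " + actual);
}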
I'm working on an application where the user uploads a ZIP file to my server; on the server that ZIP file is expanded, and then I need to upload its contents to an S3 bucket. Now my question is: how do I upload a directory with multiple files and sub-folders to an S3 bucket using Java? Are there any examples of that? Currently I'm using JetS3t to manage all my communications with S3.
Hi, this is a simple way to upload a directory to an S3 bucket:
BasicAWSCredentials awsCreds = new BasicAWSCredentials(access_key_id,
secret_access_key);
AmazonS3 s3Client = new AmazonS3Client(awsCreds);
TransferManager tm = TransferManagerBuilder.standard().withS3Client(s3Client).build();
MultipleFileUpload upload = tm.uploadDirectory(existingBucketName,
"BuildNumber#1", "FilePathYouWant", true);
I built something very similar. After expanding the zip on the server, call FileUtils.listFiles(), which will recursively list files in a folder. Just iterate the list, create S3Objects, and upload the files to S3. Make use of the threaded storage service so that multiple files can be uploaded at the same time. Also ensure you process the upload events: if some files couldn't be uploaded, the JetS3t library will tell you.
I'll post the code I wrote once I get into the office.
EDIT: Here's the code:
private static ProviderCredentials credentials;
private static S3Service s3service;
private static ThreadedS3Service storageService;
private static S3Bucket bucket;
private List<S3Object> s3Objs=new ArrayList<S3Object>();
private Set<String> s3ObjsCompleted=new HashSet<String>();
private boolean isErrorOccured=true;
private final ByteFormatter byteFormatter = new ByteFormatter();
private final TimeFormatter timeFormatter = new TimeFormatter();
private void initialise() throws ServiceException, S3ServiceException {
credentials=<create your credentials>;
s3service = new RestS3Service(credentials);
bucket = new S3Bucket(<bucket details>);
storageService=new ThreadedS3Service(s3service, this);
}
private void uploadFolder(File folder) throws NoSuchAlgorithmException, IOException {
readFolderContents(folder);
uploadFilesInList(folder);
}
private void readFolderContents(File folder) throws NoSuchAlgorithmException, IOException {
Iterator<File> filesinFolder=FileUtils.iterateFiles(folder,null,null);
while(filesinFolder.hasNext()) {
File file=filesinFolder.next();
String key = <create your key from the filename or something>;
S3Object s3Obj=new S3Object(bucket, file);
s3Obj.setKey(key);
s3Obj.setContentType(Mimetypes.getInstance().getMimetype(s3Obj.getKey()));
s3Objs.add(s3Obj);
}
}
private void uploadFilesInList(File folder) {
logger.debug("Uploading files in folder "+folder.getAbsolutePath());
isErrorOccured=false;
s3ObjsCompleted.clear();
storageService.putObjects(bucket.getName(), s3Objs.toArray(new S3Object[s3Objs.size()]));
if(isErrorOccured || s3Objs.size()!=s3ObjsCompleted.size()) {
logger.debug("Have to try uploading a few objects again for folder "+folder.getAbsolutePath()+" - Completed = "+s3ObjsCompleted.size()+" and Total ="+s3Objs.size());
List<S3Object> s3ObjsRemaining=new ArrayList<S3Object>();
for(S3Object s3Obj : s3Objs) {
if(!s3ObjsCompleted.contains(s3Obj.getKey())) {
s3ObjsRemaining.add(s3Obj);
}
}
s3Objs=s3ObjsRemaining;
uploadFilesInList(folder);
}
}
@Override
public void event(CreateObjectsEvent event) {
super.event(event);
if (ServiceEvent.EVENT_IGNORED_ERRORS == event.getEventCode()) {
Throwable[] throwables = event.getIgnoredErrors();
for (int i = 0; i < throwables.length; i++) {
logger.error("Ignoring error: " + throwables[i].getMessage());
}
}else if(ServiceEvent.EVENT_STARTED == event.getEventCode()) {
logger.debug("**********************************Upload Event Started***********************************");
}else if(event.getEventCode()==ServiceEvent.EVENT_ERROR) {
isErrorOccured=true;
}else if(event.getEventCode()==ServiceEvent.EVENT_IN_PROGRESS) {
StorageObject[] storeObjs=event.getCreatedObjects();
for(StorageObject storeObj : storeObjs) {
s3ObjsCompleted.add(storeObj.getKey());
}
ThreadWatcher watcher = event.getThreadWatcher();
if (watcher.getBytesTransferred() >= watcher.getBytesTotal()) {
logger.debug("Upload Completed.. Verifying");
}else {
int percentage = (int) (((double) watcher.getBytesTransferred() / watcher.getBytesTotal()) * 100);
long bytesPerSecond = watcher.getBytesPerSecond();
StringBuilder transferDetailsText=new StringBuilder("Uploading.... ");
transferDetailsText.append("Speed: " + byteFormatter.formatByteSize(bytesPerSecond) + "/s");
if (watcher.isTimeRemainingAvailable()) {
long secondsRemaining = watcher.getTimeRemaining();
if (transferDetailsText.length() > 0) {
transferDetailsText.append(" - ");
}
transferDetailsText.append("Time remaining: " + timeFormatter.formatTime(secondsRemaining));
}
logger.debug(transferDetailsText.toString()+" "+percentage);
}
}else if(ServiceEvent.EVENT_COMPLETED==event.getEventCode()) {
logger.debug("**********************************Upload Event Completed***********************************");
if(isErrorOccured) {
logger.debug("**********************But with errors, have to retry failed uploads**************************");
}
}
}
Here is how I did it in December of 2021, now that the AmazonS3Client constructors are deprecated.
AWSCredentials awsCredentials = new BasicAWSCredentials(env.getProperty("AWS_ACCESS_KEY_ID"),
env.getProperty("AWS_SECRET_ACCESS_KEY"));
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
.withRegion(Regions.US_EAST_1).withCredentials(new AWSStaticCredentialsProvider(awsCredentials))
.build();
TransferManager tm = TransferManagerBuilder.standard().withS3Client(s3Client).build();
MultipleFileUpload upload = tm.uploadDirectory(existingBucketName,
"BuildNumber#1", "FilePathYouWant", true);