Creating a folder / uploading a file in an Amazon S3 bucket using the API - Java

I am a total newbie to Amazon and Java, and I am trying two things:
I am trying to create a folder in my Amazon S3 bucket that I have already created and have the credentials for.
I am trying to upload a file to this bucket.
As per my understanding, I can use the PutObjectRequest constructor for achieving both of my tasks:
PutObjectRequest(bucketName, keyName, file)
for uploading a file.
I am not sure if I should use this constructor
PutObjectRequest(String bucketName, String key, InputStream input,
ObjectMetadata metadata)
for just creating a folder. I am struggling with InputStream and ObjectMetadata. I don't know exactly what they are for and how I can use them.

You do not need to create a folder in Amazon S3. In fact, folders do not exist!
Rather, the Key (filename) contains the full path and the object name.
For example, if a file called cat.jpg is in the animals folder, then the Key (filename) is: animals/cat.jpg
Simply Put an object with that Key and the folder is automatically created. (Actually, this isn't true because there are no folders, but it's a nice simple way to imagine the concept.)
As to which function to use... always use the simplest one that meets your needs. Therefore, just use PutObjectRequest(bucketName, keyName, file).
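A minimal sketch of that, assuming a v1 Java SDK where AmazonS3ClientBuilder is available; the bucket name and file path are placeholders:
// Upload cat.jpg so that it appears "inside" the animals folder.
AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient(); // picks up credentials from the default chain
s3.putObject(new PutObjectRequest("my-bucket", "animals/cat.jpg", new File("cat.jpg")));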

Yes, you can use PutObjectRequest(bucketName, keyName, file) to achieve both tasks.
1. Create an S3 folder
With the AWS S3 Java SDK, just add "/" at the end of the key name and it will create an empty folder.
String folderKey = key + "/"; // end the key name with "/"
Sample code:
final InputStream im = new InputStream() {
    @Override
    public int read() throws IOException {
        return -1; // empty stream: nothing to read
    }
};
final ObjectMetadata om = new ObjectMetadata();
om.setContentLength(0L);
PutObjectRequest putObjectRequest = new PutObjectRequest(bucketName, folderKey, im, om);
s3.putObject(putObjectRequest);
2. Upload a file
It is similar: you can either open an InputStream on your local file (and set the content length in the ObjectMetadata), or pass the File itself.
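For example (a sketch only; bucketName and s3 are the same placeholders used above, the path is made up, and exception handling is omitted):
File localFile = new File("/path/to/report.pdf");

// Stream-based variant: you must supply the content length via ObjectMetadata yourself.
ObjectMetadata metadata = new ObjectMetadata();
metadata.setContentLength(localFile.length());
try (InputStream in = new FileInputStream(localFile)) {
    s3.putObject(new PutObjectRequest(bucketName, "docs/report.pdf", in, metadata));
}

// Or simply pass the File and let the SDK fill in the metadata:
s3.putObject(new PutObjectRequest(bucketName, "docs/report.pdf", localFile));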

Alternatively, you can use the MinIO client Java library.
You can follow the MakeBucket.java example to create a bucket and the PutObject.java example to add an object.
Hope it helps.
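If you go that route, a rough sketch against an older io.minio SDK might look like the following; the endpoint, keys, and method signatures here are assumptions (they differ between SDK versions), so check them against the linked examples:
// Checked exceptions omitted for brevity.
MinioClient minioClient = new MinioClient("https://play.min.io", "ACCESS_KEY", "SECRET_KEY");

// Create the bucket if it does not exist yet.
if (!minioClient.bucketExists("my-bucket")) {
    minioClient.makeBucket("my-bucket");
}

// Upload a local file as an object.
minioClient.putObject("my-bucket", "animals/cat.jpg", "/path/to/cat.jpg");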

Related

How to copy from one minio server to another?

I am using Minio's Java SDK. I managed to copy objects within the same Minio server. Is there a way to copy objects from one Minio server to another?
I have tried using the below code:
InputStream inputStream = minioClientServer1.getObject(getBucket(), fileName);
minioClientServer2.putObject(getBucket(), fileName, inputStream, (long) inputStream.available(), null, null, contentType);
That is, I get the object from one server and then upload it to the other. The problem I'm facing is that the contentType is unknown.
Is there a way to do this without hard-coding the content type?
Or is downloading the object to a file and then uploading it a better way?
I'm not sure this applies to you, because you don't provide enough information for me to be sure we're talking about the same SDK. But from what you show, you should be able to call statObject in the same way you call getObject, and get back an ObjectStat instance (rather than an InputStream) for a particular S3 object. Once you have an ObjectStat instance, you should be able to call its contentType method to get the content type of the S3 object.
This should work to do what you're asking:
ObjectStat objectStat = minioClientServer1.statObject(getBucket(), fileName);
InputStream inputStream = minioClientServer1.getObject(getBucket(), fileName);
minioClientServer2.putObject(getBucket(), fileName, inputStream, (long) inputStream.available(), null, null, objectStat.contentType());
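As a side note, inputStream.available() is not a reliable way to get the full object size. Assuming the ObjectStat in this SDK version also exposes the object length (the length() accessor below is my assumption), you can take the size from the stat call as well:
ObjectStat objectStat = minioClientServer1.statObject(getBucket(), fileName);
InputStream inputStream = minioClientServer1.getObject(getBucket(), fileName);
// objectStat.length() is assumed to return the object size in this SDK version.
minioClientServer2.putObject(getBucket(), fileName, inputStream, objectStat.length(), null, null, objectStat.contentType());
inputStream.close();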

Uploading to OpenStack Multipart With Path

I'm using Java jclouds to upload to a container in OpenStack Swift. The upload works fine at the root, but when I pass a path (folders followed by the file), the file is uploaded but another folder is also created with the same name as the file. The original file name is 8mb.bin.
The code is:
try {
    File file = new File(filePath);
    ByteSource fileBytes = Files.asByteSource(file);
    String name = "test/test2/" + file.getName();
    Blob blob = blobStore.blobBuilder(name)
            .userMetadata(ImmutableMap.of("ContentType", contentType, "test", String.valueOf(test)))
            .payload(fileBytes)
            .contentLength(file.length())
            .contentType(contentType)
            .build();
    // sending the request
    blobStore.putBlob(ContainerName, blob, multipart());
    return contentLength;
}
and in the designated path it looks like this:
the folder 8mb.bin contains the path /slo/1522766773.076000/8200000/33554432 and then a file called 00000000 with the same size as the original file.
How can I solve this?
Thanks
jclouds implements Swift multipart uploads using Static Large Objects (SLO). This has the limitation that the parts exist in the same namespace as the manifest, and modifying or deleting the parts invalidates the manifest. JCLOUDS-1285 suggests putting the parts in another container to keep them out of the object listing, although this requires some extra logic for deletes and overwrites.

Download file to stream instead of File

I'm implementing a helper class to handle transfers to and from AWS S3 storage in my web application.
In a first version of my class I was using an AmazonS3Client directly to handle upload and download, but now I have discovered TransferManager and I'd like to refactor my code to use it.
The problem is that my download method returns the stored file as a byte[]. TransferManager, however, only has methods that use a File as the download destination (for example download(GetObjectRequest getObjectRequest, File file)).
My previous code was like this:
GetObjectRequest getObjectRequest = new GetObjectRequest(bucket, key);
S3Object s3Object = amazonS3Client.getObject(getObjectRequest);
S3ObjectInputStream objectInputStream = s3Object.getObjectContent();
byte[] bytes = IOUtils.toByteArray(objectInputStream);
Is there a way to use TransferManager the same way or should I simply continue using an AmazonS3Client instance?
The TransferManager uses File objects to support things like file locking when downloading pieces in parallel. It's not possible to use an OutputStream directly. If your requirements are simple, like downloading small files from S3 one at a time, stick with getObject.
Otherwise, you can create a temporary file with File.createTempFile and read the contents into a byte array when the download is done.
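A minimal sketch of that temp-file approach, assuming the v1 SDK's TransferManager (bucket and key are placeholders):
import com.amazonaws.services.s3.model.GetObjectRequest;
import com.amazonaws.services.s3.transfer.Download;
import com.amazonaws.services.s3.transfer.TransferManager;

import java.io.File;
import java.nio.file.Files;

byte[] downloadToBytes(TransferManager transferManager, String bucket, String key) throws Exception {
    // Download to a temporary file, then read it back into memory.
    File tempFile = File.createTempFile("s3-download-", ".tmp");
    try {
        Download download = transferManager.download(new GetObjectRequest(bucket, key), tempFile);
        download.waitForCompletion(); // blocks until the transfer finishes
        return Files.readAllBytes(tempFile.toPath());
    } finally {
        tempFile.delete(); // remove the temporary file
    }
}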

How do you make an S3 object public via the aws SDK Android?

I need to make my upload public. I was able to upload an image into an S3 bucket, but I was unable to make it public programmatically. I am using AWS SDK 2.1.10. Below is my code for uploading an image into the S3 bucket:
mUpload = getTransferManager().upload(
        AmazonConstants.BUCKET_NAME.toLowerCase(Locale.US),
        /* getPrefix(getContext()) // "android_uploads/" */
        locationPath + super.getFileName() /* + "." + mExtension */,
        mFile);
mUpload.waitForCompletion();
Please help me. Thank you.
You need to pass a PutObjectRequest to upload() and set a canned ACL on it:
getTransferManager().upload(
        new PutObjectRequest(bucketName, key, file)
                .withCannedAcl(CannedAccessControlList.PublicRead));
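Applied to the code in the question, that would look roughly like this (a sketch reusing the names already shown above):
mUpload = getTransferManager().upload(
        new PutObjectRequest(
                AmazonConstants.BUCKET_NAME.toLowerCase(Locale.US),
                locationPath + super.getFileName(),
                mFile)
                .withCannedAcl(CannedAccessControlList.PublicRead));
mUpload.waitForCompletion();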

Replace specific file inside Zip archive without extracting the whole archive in Java

I'm trying to get a specific file inside a Zip archive, extract it, encrypt it, and then put it back inside the archive, replacing the original one.
Here's what I've tried so far:
public static boolean encryptXML(File ZipArchive, String key) throws ZipException, IOException, Exception {
    ZipFile zipFile = new ZipFile(ZipArchive);
    List<FileHeader> fileHeaderList = zipFile.getFileHeaders();
    for (FileHeader fh : fileHeaderList) {
        if (fh.getFileName().equals("META-INF/file.xml")) {
            Path tempdir = Files.createTempDirectory("Temp");
            zipFile.extractFile(fh, tempdir.toString());
            File XMLFile = new File(tempdir.toFile(), fh.getFileName());
            // Encrypting XMLFile, ignore this part
            // Here, replace the original XMLFile inside ZipArchive with the encrypted one <<<<<<<<
            return true;
        }
    }
    return false;
}
I'm stuck at the replacing part of the code. Is there any way I can do this without having to extract the whole Zip archive?
Any help is appreciated, thanks in advance.
Not sure if this will help you, as you are using a different library, but the solution with ZT Zip would be the following.
ZipUtil.unpackEntry(new File("/tmp/demo.zip"), "foo.txt", new File("foo.txt"));
// encrypt the foo.txt
ZipUtil.replaceEntry(new File("/tmp/demo.zip"), "foo.txt", new File("foo.txt"));
This will unpack the foo.txt file and then after you encrypt it you can replace the previous entry with the new one.
You may use the zip file system provider (available since Java 7), as explained in the Oracle documentation, to read and write within a zip file as if it were its own file system.
However, on my machine this unpacks and re-packs the zip file under the hood anyway (tested with Java 7 and 8). I am not sure if there is a way to reliably change zip files in the way you describe.
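For reference, a minimal sketch of the zip file system approach; the archive path, entry name, and encrypted-file path are placeholders:
import java.net.URI;
import java.nio.file.*;
import java.util.Collections;

Path zipPath = Paths.get("/tmp/archive.zip");
URI zipUri = URI.create("jar:" + zipPath.toUri());

// Open the existing archive as a file system ("create" = "false" means it must already exist).
try (FileSystem zipFs = FileSystems.newFileSystem(zipUri, Collections.singletonMap("create", "false"))) {
    Path entry = zipFs.getPath("META-INF/file.xml");
    Path encrypted = Paths.get("/tmp/file.xml.enc"); // the re-encrypted copy on disk
    // Overwrite the entry inside the archive with the encrypted file.
    Files.copy(encrypted, entry, StandardCopyOption.REPLACE_EXISTING);
}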
Bingo!
I was able to do it this way:
ZipParameters parameters = new ZipParameters();
parameters.setIncludeRootFolder(true);
zipFile.removeFile(fh);
zipFile.addFolder(new File(tempdir.toFile(), "META-INF"), parameters);
