I am familiar with the AWS Java SDK, and I tried browsing the corresponding Javadoc, but I could not figure out how to create a sub-directory, i.e., a directory object within a bucket, or how to upload files to it.
Assume bucketName and dirName correspond to an already existing bucket (with public permissions) and a new (object) directory that needs to be created within the bucket (i.e., bucketName/dirName/).
I have tried the following:
AmazonS3Client s3 = new AmazonS3Client(
new BasicAWSCredentials(ACCESS_KEY, SECRET_KEY));
s3.createBucket(bucketName + "/" + dirName); //throws exception
which throws an exception on the second line.
A short snippet which creates a sub-directory and uploads files to it will be deeply appreciated.
There are no "sub-directories" in S3. There are buckets and there are keys within buckets.
You can emulate traditional directories by using prefix searches. For example, you can store the following keys in a bucket:
foo/bar1
foo/bar2
foo/bar3
blah/baz1
blah/baz2
and then do a prefix search for foo/ and you will get back:
foo/bar1
foo/bar2
foo/bar3
See AmazonS3.listObjects for more details.
Update: Assuming you already have an existing bucket, the key under that bucket would contain the /:
s3.putObject("someBucket", "foo/bar1", file1);
s3.putObject("someBucket", "foo/bar2", file2);
...
Then you can list all keys starting with foo/:
ObjectListing listing = s3.listObjects("someBucket", "foo/");
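To make the flat-namespace point concrete, here is a minimal sketch (plain Java, no AWS calls; the class and method names are made up for illustration) that emulates the prefix filtering listObjects performs over a flat key space:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class PrefixSearch {
    // Emulates S3's listObjects prefix filtering: keys are flat strings,
    // and a "directory" listing is just a startsWith match on the prefix.
    static List<String> listByPrefix(List<String> keys, String prefix) {
        List<String> result = new ArrayList<>();
        for (String key : keys) {
            if (key.startsWith(prefix)) {
                result.add(key);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        List<String> keys = Arrays.asList(
                "foo/bar1", "foo/bar2", "foo/bar3", "blah/baz1", "blah/baz2");
        // prints [foo/bar1, foo/bar2, foo/bar3]
        System.out.println(listByPrefix(keys, "foo/"));
    }
}
```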
S3 doesn't treat directories the way traditional operating systems do.
Here is how you can create a directory:
private static final String SUFFIX = "/";

public static void createFolder(String bucketName, String folderName, AmazonS3 client) {
// create metadata for your folder and set content-length to 0
ObjectMetadata metadata = new ObjectMetadata();
metadata.setContentLength(0);
// create empty content
InputStream emptyContent = new ByteArrayInputStream(new byte[0]);
// create a PutObjectRequest passing the folder name suffixed by /
PutObjectRequest putObjectRequest = new PutObjectRequest(bucketName,
folderName + SUFFIX, emptyContent, metadata);
// send request to S3 to create folder
client.putObject(putObjectRequest);
}
As casablanca already said, you can upload files to "directories" like this:
s3.putObject("someBucket", "foo/bar1", file1);
Read the whole tutorial here for details; most importantly, it also explains how to delete the directories.
In newer versions of the SDK (the AWS SDK for Java 2.x builder style shown below), you can do something like this (no need to create an empty InputStream) to create an empty folder:
String key = parentKey + newFolderName;
if (!StringUtils.endsWith(key, "/")) {
key += "/";
}
PutObjectRequest putRequest = PutObjectRequest.builder()
.bucket(parent.getBucket())
.key(key)
.acl("public-read")
.build();
s3Client.putObject(putRequest, RequestBody.empty());
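The key-normalization step above (appending the / that marks an S3 "folder" object) can be isolated into a small pure helper; the class and method names here are hypothetical, for illustration only:

```java
public class FolderKeys {
    // An S3 "folder" is just a zero-byte object whose key ends in "/";
    // this ensures the combined key carries that trailing slash.
    static String normalizeFolderKey(String parentKey, String newFolderName) {
        String key = parentKey + newFolderName;
        return key.endsWith("/") ? key : key + "/";
    }

    public static void main(String[] args) {
        System.out.println(normalizeFolderKey("photos/", "2023")); // photos/2023/
        System.out.println(normalizeFolderKey("", "photos/"));     // photos/
    }
}
```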
Leaving this answer here in case someone stumbles upon it. I have been using AWS SDK version 1.11.875, and the following successfully created a folder for me when uploading a file into an S3 bucket. I did not have to explicitly create the folder as mentioned in the earlier answers.
private void uploadFileToS3Bucket(final String bucketName, final File file) {
final String fileName = "parent/child/" + file.getName();
final PutObjectRequest putObjectRequest = new PutObjectRequest(bucketName, fileName, file);
amazonS3.putObject(putObjectRequest);
}
This will create the parent and parent/child folders in the specified S3 bucket and upload the file into the child folder.
This worked for me. I use Spring Boot, and files are uploaded via the multipart mechanism. I wanted to save my images inside the photos folder in my AWS S3 bucket, i.e., as photos/mypic.jpg.
----controller class method----
@PostMapping("/uploadFile")
public String uploadFile(@RequestPart(value = "file") MultipartFile file) throws IOException {
return this.amazonClient.uploadFile(file);
}
----service class (Implementation of controller)----
public String uploadFile(MultipartFile multipartFile) throws IOException {
String fileName = null;
try {
File file = convertMultiPartToFile(multipartFile);
fileName = "photos/" + generateFileName(multipartFile); // here give any folder name you want
uploadFileTos3bucket(fileName, file);
} catch (AmazonServiceException ase) {
logger.info("Caught an AmazonServiceException while uploading, rejected reasons:");
}
return fileName;
}
The point is to concatenate the folder name you want as a prefix of the fileName.
Additionally, I will show you how to delete a folder. The point is to pass the folder name as the keyName (the key name is the uploaded object's name in the S3 bucket). A code snippet follows.
----controller class method----
@DeleteMapping("/deleteFile")
public String deleteFile(@RequestPart(value = "keyName") String keyName) {
return this.amazonClient.deleteFile(keyName);
}
----service class (Implementation of controller)----
public String deleteFile(String keyName){
try {
s3client.deleteObject(new DeleteObjectRequest(bucketName, keyName));
} catch (SdkClientException e) {
e.printStackTrace();
}
return "deleted file successfully!";
}
To delete the photos folder we created, call the method like this: deleteFile("photos/").
Important: the trailing / is mandatory.
If you want to create a folder, you can use a put request with the following keys to create folder1:
in the root of the bucket -> folder1/folder1_$folder$
under the path folder2/folder3/ -> folder2/folder3/folder1/folder1_$folder$
It is always all_previous_folders/folderName/folderName_$folder$
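The key pattern above can be captured in a tiny helper. Note that the _$folder$ placeholder is a legacy convention used by some tools (for example Hadoop's S3 connectors) rather than something the S3 API itself requires; the class and method names below are made up for illustration:

```java
public class FolderPlaceholder {
    // Builds the legacy "_$folder$" placeholder key:
    // all_previous_folders/folderName/folderName_$folder$
    static String folderPlaceholderKey(String parentPath, String folderName) {
        String prefix = (parentPath == null) ? "" : parentPath;
        return prefix + folderName + "/" + folderName + "_$folder$";
    }

    public static void main(String[] args) {
        System.out.println(folderPlaceholderKey("", "folder1"));                 // folder1/folder1_$folder$
        System.out.println(folderPlaceholderKey("folder2/folder3/", "folder1")); // folder2/folder3/folder1/folder1_$folder$
    }
}
```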
Related
I am currently storing and downloading my Thymeleaf templates in S3.
I am using the following function to retrieve the Template from S3:
public String getTemplateFile(String name, File localFile) {
// getObject(GetObjectRequest, File) downloads the object into localFile
// and returns the object's metadata
ObjectMetadata metadata = s3Client.getObject(new GetObjectRequest(connectionProperties.getBucket(), name), localFile);
boolean success = localFile.exists() && localFile.canRead();
return localFile.getPath();
}
After doing this the file is successfully downloaded in the desired location.
But when trying to access the file from the FlyingSaucer PDF generator, the file doesn't exist, even though it has already been downloaded to FILE_LOCATION_PATH. (I can open the file... the file is there, but the function doesn't see it.)
String xHtmlStringDocument =
convertHtmlToXhtml(templateEngine
.process(FILE_LOCATION_PATH,
initializeLetterHtmlTemplateContext(letter)));
When I run the program again and again I get the same result. But when I STOP the program and RUN it AGAIN, everything works, because the file from the last execution is now recognized by the program.
This sounds to me like an asynchronous function issue.
Does anybody know how can I fix this?
Thanks in advance.
EDITED (following the suggestion)
New function, same result:
(And the file was created; the download from S3 was successful.)
java.io.FileNotFoundException: ClassLoader resource "static/templates/template.html" could not be resolved
public String getTemplateFileN(String name, File localFile) throws IOException {
S3Object fullObject = null;
InputStream in = null;
try {
fullObject = s3Client.getObject(new GetObjectRequest(connectionProperties.getBucket(), name));
System.out.println("Content-Type: " + fullObject.getObjectMetadata().getContentType());
System.out.println("Content: ");
displayTextInputStream(fullObject.getObjectContent());
in = fullObject.getObjectContent();
System.out.println(localFile.toPath());
Files.copy(in, localFile.toPath());
} //then later
finally {
// To ensure that the network connection doesn't remain open, close any open input streams.
if (fullObject != null) {
fullObject.close();
}
if (in != null) {
in.close();
}
}
return localFile.getPath();
}
Checking the Javadoc:
https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/AmazonS3Client.html#getObject-com.amazonaws.services.s3.model.GetObjectRequest-java.io.File-
there is no method signature ObjectMetadata getObject(GetObjectRequest getObjectRequest, String file).
There is
ObjectMetadata getObject(GetObjectRequest getObjectRequest,
File destinationFile)
where you provide a File (not a String) as the second argument. Make sure the file is not still open for writing before you try to read it!
I want to upload a simple image file to Google Cloud Storage.
When the upload goes to the root of the bucket, it happens smoothly, but when I try to upload the image to a folder within the bucket, it fails.
The following is my code:
static Storage sStorage;
public static void uploadFileToServer(Context context, String filePath) {
Storage storage = getStorage(context);
StorageObject object = new StorageObject();
object.setBucket(BUCKET_NAME);
File sdCard = Environment.getExternalStorageDirectory();
File file = new File(sdCard + filePath);
try {
InputStream stream = new FileInputStream(file);
String contentType = URLConnection.guessContentTypeFromStream(stream);
InputStreamContent content = new InputStreamContent(contentType, stream);
Storage.Objects.Insert insert = storage.objects().insert(BUCKET_NAME, null, content);
insert.setName(file.getName());
insert.execute();
} catch (Exception e) {
e.printStackTrace();
}
}
I tried putting the bucket name like [PARENT_BUCKET]/[CHILD_FOLDER],
but it doesn't work.
GCS has a flat namespace, with "folders" being just a client-supported abstraction (basically, treating "/" characters in object names as the folder delimiter).
So, to upload to a folder you should put just the bucket name in the bucket field of the request, and put the folder at the beginning of the object field. For example, to upload to bucket 'my-bucket', folder 'my-folder' and file-within-folder 'file', you'd set the bucket name to 'my-bucket' and the object name to 'my-folder/file'.
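That naming rule is easy to factor out. Below is a minimal sketch (hypothetical helper, no GCS calls) that builds such an object name; in the question's code this value would be passed to insert.setName(...) instead of file.getName():

```java
public class GcsNaming {
    // GCS has a flat namespace: "folders" exist only as "/" delimiters
    // inside object names, so the bucket name stays unchanged.
    static String objectNameInFolder(String folder, String fileName) {
        return folder.endsWith("/") ? folder + fileName : folder + "/" + fileName;
    }

    public static void main(String[] args) {
        System.out.println(objectNameInFolder("my-folder", "file")); // my-folder/file
    }
}
```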
Trying to rename an internal file within a zip file without having to extract and then re-zip programmatically.
Example: test.zip contains test.txt; I want to change it so that test.zip will contain newtest.txt (test.txt renamed to newtest.txt, contents remaining the same).
I came across this link that works, but unfortunately it expects test.txt to exist on the file system. In that example, the source file must exist on the server.
Rename file in zip with zip4j
Then I came across zipnote on Linux, which does the trick, but unfortunately the version I have doesn't work for files >4GB.
Any suggestions on how to accomplish this, preferably in Java?
This should be possible using the Java 7 zip FileSystem provider, something like:
// syntax defined in java.net.JarURLConnection
URI uri = URI.create("jar:file:/directoryPath/file.zip");
try (FileSystem zipfs = FileSystems.newFileSystem(uri, Collections.<String, Object>emptyMap())) {
Path sourceURI = zipfs.getPath("/pathToDirectoryInsideZip/file.txt");
Path destinationURI = zipfs.getPath("/pathToDirectoryInsideZip/renamed.txt");
Files.move(sourceURI, destinationURI);
}
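A self-contained round trip of the approach above can be sketched as follows: create a zip in a temp location, rename the entry in place with Files.move, and read it back (the class and method names are made up for the demo):

```java
import java.io.IOException;
import java.net.URI;
import java.nio.file.FileSystem;
import java.nio.file.FileSystems;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.HashMap;
import java.util.Map;

public class ZipRenameDemo {
    // Creates a zip containing test.txt, renames the entry to newtest.txt
    // in place (no extract/re-zip), and returns the renamed entry's content.
    static String renameAndRead() throws IOException {
        Path zip = Files.createTempFile("demo", ".zip");
        Files.delete(zip); // the zip filesystem creates the file itself
        URI uri = URI.create("jar:" + zip.toUri());

        Map<String, String> create = new HashMap<>();
        create.put("create", "true");
        try (FileSystem zipfs = FileSystems.newFileSystem(uri, create)) {
            Files.write(zipfs.getPath("/test.txt"), "hello".getBytes());
        }

        // Reopen the existing zip and rename the entry without extracting
        try (FileSystem zipfs = FileSystems.newFileSystem(uri, new HashMap<String, String>())) {
            Files.move(zipfs.getPath("/test.txt"), zipfs.getPath("/newtest.txt"));
        }

        String content;
        try (FileSystem zipfs = FileSystems.newFileSystem(uri, new HashMap<String, String>())) {
            content = new String(Files.readAllBytes(zipfs.getPath("/newtest.txt")));
        }
        Files.delete(zip);
        return content;
    }

    public static void main(String[] args) throws IOException {
        System.out.println(renameAndRead()); // hello
    }
}
```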
Using zip4j, I am modifying and re-writing the file headers inside the central directory section to avoid rewriting the entire zip file:
ArrayList<FileHeader> FHs = (ArrayList<FileHeader>) zipFile.getFileHeaders();
FHs.get(0).setFileName("namename.mp4");
FHs.get(0).setFileNameLength("namename.mp4".getBytes("UTF-8").length);
zipFile.updateHeaders();
//where updateHeaders is :
public void updateHeaders() throws ZipException, IOException {
checkZipModel();
if (this.zipModel == null) {
throw new ZipException("internal error: zip model is null");
}
if (Zip4jUtil.checkFileExists(file)) {
if (zipModel.isSplitArchive()) {
throw new ZipException("Zip file already exists. Zip file format does not allow updating split/spanned files");
}
}
long offset = zipModel.getEndCentralDirRecord().getOffsetOfStartOfCentralDir();
HeaderWriter headerWriter = new HeaderWriter();
SplitOutputStream splitOutputStream = new SplitOutputStream(new File(zipModel.getZipFile()), -1);
splitOutputStream.seek(offset);
headerWriter.finalizeZipFile(zipModel, splitOutputStream);
splitOutputStream.close();
}
The name field in the local file header section remains unchanged, so there will be a mismatch exception in this library.
It's a tricky approach and may be problematic, I don't know..
I have been uploading files from a website quite happily using a combination of the Play Framework (1.2.4) with Java and jQuery/JavaScript.
On the client side I attach a blob object to a FormData object and then send it to my Play Framework controller, which accepts the file. I have written a class UploadImg to upload this file to Amazon S3. I instantiate the class by passing in a File object and a filename (a String), and call the doUpload() method:
public static void myController(File f, String filename){
UploadImg imgToUpload = new UploadImg(f, filename);
imgToUpload.doUpload();
// ...
I now have a bunch of images on my desktop, and I am writing a 'bulk uploader'. I did something like:
File img = new File("/pics/Repin 301.jpg");
UploadImg fileToUpload = new UploadImg(img);
fileToUpload.doUpload();
But I get an error telling me that my input is null.
The path /pics/ doesn't look like it would point to your desktop; if pics is relative to where you are running the app from, then drop the leading slash.
Try this to confirm the file is being found:
File img = new File("/pics/Repin 301.jpg");
if (img.exists()) {
UploadImg fileToUpload = new UploadImg(img);
fileToUpload.doUpload();
} else {
System.out.println("File not found");
}
I've successfully modified the contents of an existing zip file using the FileSystem support provided by Java 7, but when I tried to create a NEW zip file with this method it failed with the error message "zip END header not found". That is logical given the way I'm doing it: first I create the file (Files.createFile), which produces a completely empty file, and then I try to access its file system; since the file is empty, no zip header can be found inside it. My question: is there any way to create a new, completely empty zip file using this method? The hack I've considered is adding an empty ZipEntry to the zip file and then using that file to create the file system, but I really want to believe the folks at Oracle implemented a better (easier) way to do this with NIO and file systems...
This is my code (the error appears when creating the file system):
if (!zipLocation.toFile().exists()) {
if (creatingFile) {
Files.createFile(zipLocation);
}else {
return false;
}
} else if (zipLocation.toFile().exists() && !replacing) {
return false;
}
final FileSystem fs = FileSystems.newFileSystem(zipLocation, null);
...
zipLocation is a Path
creatingFile is a boolean
ANSWER:
In my particular case the answer given didn't work properly because of the spaces in the path, so I had to do it the way I didn't want to:
Files.createFile(zipLocation);
ZipOutputStream out = new ZipOutputStream(
new FileOutputStream(zipLocation.toFile()));
out.putNextEntry(new ZipEntry(""));
out.closeEntry();
out.close();
This does not mean that the given answer is wrong; it just didn't work in my particular case.
As described on the Oracle site:
public static void createZip(Path zipLocation, Path toBeAdded, String internalPath) throws Throwable {
Map<String, String> env = new HashMap<String, String>();
// create the zip file only if it does not yet exist
env.put("create", String.valueOf(Files.notExists(zipLocation)));
// use a Zip filesystem URI
URI fileUri = zipLocation.toUri(); // here
URI zipUri = new URI("jar:" + fileUri.getScheme(), fileUri.getPath(), null);
System.out.println(zipUri);
// URI uri = URI.create("jar:file:"+zipLocation); // here creates the
// zip
// try with resource
try (FileSystem zipfs = FileSystems.newFileSystem(zipUri, env)) {
// Create internal path in the zipfs
Path internalTargetPath = zipfs.getPath(internalPath);
// Create parent directory
Files.createDirectories(internalTargetPath.getParent());
// copy a file into the zip file
Files.copy(toBeAdded, internalTargetPath, StandardCopyOption.REPLACE_EXISTING);
}
}
public static void main(String[] args) throws Throwable {
Path zipLocation = FileSystems.getDefault().getPath("a.zip").toAbsolutePath();
Path toBeAdded = FileSystems.getDefault().getPath("a.txt").toAbsolutePath();
createZip(zipLocation, toBeAdded, "aa/aa.txt");
}