We are using a Java class to download a file from an AWS S3 bucket with the following code:
inputStream = AWSFileUtil.getInputStream(
        AWSConnectionUtil.getS3Object(null),
        "cdn.generalsentiment.com", filePath);
AWSFileUtil is a class which checks the credentials and gets the InputStream from the S3 bucket using the getInputStream method. The filePath is the file inside the cdn.generalsentiment.com bucket.
We want to write a method which simply checks whether a particular file exists in the AWS S3 bucket and returns a boolean or some other value.
Please suggest a solution for this.
public static boolean isValidFile(AmazonS3 s3,
                                  String bucketName,
                                  String path) throws AmazonClientException {
    try {
        ObjectMetadata objectMetadata =
                s3.getObjectMetadata(bucketName, path);
    } catch (NotFoundException nfe) {
        nfe.printStackTrace();
    }
    return true;
}
If the file exists it returns true; otherwise it throws NotFoundException, which I want to catch so that the isValidFile method returns false.
Any other alternative for the method body or return type would be great.
The updated version:
public static boolean doesFileExist(AmazonS3 s3,
                                    String bucketName,
                                    String path) throws AmazonClientException,
                                                        AmazonServiceException {
    boolean isValidFile = true;
    try {
        ObjectMetadata objectMetadata =
                s3.getObjectMetadata(bucketName, path);
    } catch (NotFoundException nfe) {
        isValidFile = false;
    } catch (Exception exception) {
        exception.printStackTrace();
        isValidFile = false;
    }
    return isValidFile;
}
Daan's answer using GET Bucket (List Objects) (via the respective wrapper from the AWS SDK for Java, see below) is the most efficient approach to get the desired information for many objects at once (+1); you'll need to post-process the response accordingly, of course.
This is done most easily via one of the respective methods of the class AmazonS3Client, e.g. listObjects(String bucketName):
AmazonS3 s3 = new AmazonS3Client(); // provide credentials, if need be
ObjectListing objectListing = s3.listObjects(new ListObjectsRequest()
        .withBucketName("cdn.generalsentiment.com"));
for (S3ObjectSummary objectSummary : objectListing.getObjectSummaries()) {
    System.out.println(objectSummary.getKey());
}
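Note that the listing is paginated (S3 returns at most 1000 keys per response); a minimal sketch of walking the remaining batches with the same client:
// Keep fetching while the previous response was truncated.
while (objectListing.isTruncated()) {
    objectListing = s3.listNextBatchOfObjects(objectListing);
    for (S3ObjectSummary objectSummary : objectListing.getObjectSummaries()) {
        System.out.println(objectSummary.getKey());
    }
}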
Alternative
If you are only interested in a single object (file) at a time, using HEAD Object will be much more efficient, insofar as you can deduce existence straight from the respective HTTP response code (see Error Responses for details), i.e. 404 Not Found for a response of NoSuchKey - The specified key does not exist.
Again, this is done most easily via the class AmazonS3Client, namely getObjectMetadata(String bucketName, String key), e.g.:
public static boolean isValidFile(AmazonS3 s3,
                                  String bucketName,
                                  String path) throws AmazonClientException, AmazonServiceException {
    boolean isValidFile = true;
    try {
        ObjectMetadata objectMetadata = s3.getObjectMetadata(bucketName, path);
    } catch (AmazonS3Exception s3e) {
        if (s3e.getStatusCode() == 404) {
            // i.e. 404: NoSuchKey - The specified key does not exist
            isValidFile = false;
        } else {
            throw s3e; // rethrow all S3 exceptions other than 404
        }
    }
    return isValidFile;
}
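Example usage, with the question's bucket and a placeholder key:
boolean exists = isValidFile(s3, "cdn.generalsentiment.com", "path/to/file.txt");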
Use the GET Bucket S3 API:
http://docs.amazonwebservices.com/AmazonS3/latest/API/RESTBucketGET.html
and specify the full file name as a prefix.
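A minimal sketch of that idea with the Java SDK (v1); the bucket name and filePath are the placeholders from the question:
// List at most one key, using the exact file name as the prefix.
ObjectListing listing = s3.listObjects(new ListObjectsRequest()
        .withBucketName("cdn.generalsentiment.com")
        .withPrefix(filePath)
        .withMaxKeys(1));
// A non-empty result whose first key equals filePath means the file exists.
boolean exists = !listing.getObjectSummaries().isEmpty()
        && listing.getObjectSummaries().get(0).getKey().equals(filePath);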
This is a simple way to check whether a folder exists in a bucket (the answers above are also valid). A folder key ends with '/', so if the folder exists this returns true.
Note: mybucket/userProfileModule/abc.pdf is my folder structure.
boolean result1 = s3client.doesObjectExist("mybucket", "userProfileModule/");
System.out.println(result1);
Related
I have the following method, which deletes a file from an AWS S3 bucket. However:
there is no exception thrown if the file doesn't exist
there is no success code or flag to see if the file has been deleted successfully
Is there any workaround to deal with this situation?
@Override
public void deleteFile(String fileName) {
    try {
        this.client.deleteObject(builder ->
                builder
                        .bucket(this.bucketName).key(fileName)
                        .build());
    } catch (S3Exception ex) {
        ex.printStackTrace();
    }
}
If your request succeeded, then your object is deleted. Note that due to eventual consistency, the object is not guaranteed to disappear immediately. You need to check the HTTP status code:
AmazonS3 as3 = new AmazonS3();
Status myStatus = as3.DeleteObject(<fill in parameters here>);
if (myStatus.Code >= 200 && myStatus.Code < 300)
{
    // Success
}
else
{
    // Delete failed
    // Handle specific error codes below
    if (myStatus.Description == "AllAccessDisabled")
    {
        // Do something
    }
    if (myStatus.Description == "NoSuchKey")
    {
        // Do something
    }
}
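With the AWS SDK for Java v2 client used in the question, a sketch (assuming client, bucketName, and fileName as in the method above) can inspect the HTTP status carried on the response:
import software.amazon.awssdk.services.s3.model.DeleteObjectRequest;
import software.amazon.awssdk.services.s3.model.DeleteObjectResponse;

DeleteObjectResponse response = this.client.deleteObject(
        DeleteObjectRequest.builder().bucket(this.bucketName).key(fileName).build());
// S3 answers 204 No Content whether or not the key existed (deletes are
// idempotent), so a successful status confirms the request, not prior existence.
boolean succeeded = response.sdkHttpResponse().isSuccessful();
int statusCode = response.sdkHttpResponse().statusCode();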
Also, there is an API available to check if an object exists in S3:
doesObjectExist
https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/AmazonS3.html#doesObjectExist-java.lang.String-java.lang.String-
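A minimal sketch with the v1 SDK (the bucket and key are placeholders):
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
boolean exists = s3.doesObjectExist("mybucket", "userProfileModule/abc.pdf");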
I'm trying to send files to an FTPS server.
Connection method: FTPS, ACTIVE, EXPLICIT
setFileType(FTP.BINARY_FILE_TYPE);
setFileTransferMode(FTP.BLOCK_TRANSFER_MODE);
Checking the reply string right after connect I got:
234 AUTH command ok. Expecting TLS Negotiation.
From here:
234 Specifies that the server accepts the authentication mechanism specified by the client, and the exchange of security data is complete. A higher level nonstandard code created by Microsoft.
While trying to send a file with storeFile or storeUniqueFile I get false.
Checking the reply string right after storeFile I got: 501 Server cannot accept argument.
What is weird is that I was able to create a directory on this server without any issues
with makeDirectory("test1");
I tried both these links: link1, link2
For example, when I tried to use ftp.enterLocalPassiveMode(); before ftp.storeFile(destinationfile, in);
I got a timeout error.
Does anyone have any idea how to solve it?
public static void main(String[] args) throws Exception {
    FTPSProvider ftps = new FTPSProvider();
    String json = "connection details";
    Gson gson = new Gson();
    DeliveryDetailsFTPS details = gson.fromJson(json, DeliveryDetailsFTPS.class);
    File file = File.createTempFile("test", ".txt");
    FileUtils.write(file, " some test", true);
    try (FileInputStream stream = new FileInputStream(file)) {
        ftps.sendInternal(ftps.getClient(details), details, stream, file.getName());
    }
}

protected void sendInternal(FTPClient client, DeliveryDetailsFTPS details, InputStream stream, String filename) throws Exception {
    try {
        DeliveryDetailsFTPS ftpDetails = (DeliveryDetailsFTPS) details;
        setClient(client, ftpDetails);
        boolean isSaved = false;
        try (BufferedInputStream bis = new BufferedInputStream(stream)) {
            isSaved = client.storeFile(filename, bis);
        }
        client.makeDirectory("test1");
        client.logout();
        if (!isSaved) {
            throw new IOException("Unable to upload file to FTP");
        }
    } catch (Exception ex) {
        LOG.debug("Unable to send to FTP", ex);
        throw ex;
    } finally {
        client.disconnect();
    }
}
@Override
protected FTPClient getClient(DeliveryDetails details) {
    return new FTPSClient(isImplicitSSL((DeliveryDetailsFTPS) details));
}
protected void setClient(FTPClient client, DeliveryDetailsFTPS details) throws Exception {
    DeliveryDetailsFTPS ftpDetails = (DeliveryDetailsFTPS) details;
    client.setConnectTimeout(100000);
    client.setDefaultTimeout(10000 * 60 * 2);
    client.setControlKeepAliveReplyTimeout(300);
    client.setControlKeepAliveTimeout(300);
    client.setDataTimeout(15000);
    client.connect(ftpDetails.host, ftpDetails.port);
    client.setBufferSize(1024 * 1024);
    client.login(ftpDetails.username, ftpDetails.getSensitiveData());
    client.setControlEncoding("UTF-8");
    int code = client.getReplyCode();
    if (code == 530) {
        throw new IOException(client.getReplyString());
    }
    // Set binary file transfer
    client.setFileType(FTP.BINARY_FILE_TYPE);
    client.setFileTransferMode(FTP.BLOCK_TRANSFER_MODE);
    if (ftpDetails.ftpMode == FtpMode.PASSIVE) {
        client.enterLocalPassiveMode();
    }
    client.changeWorkingDirectory(ftpDetails.path);
}
I have tried this solution as well; it didn't solve the problem.
The only way I was able to send a file was with FileZilla, which uses FTPES.
But I need my Java code to do it. Can anyone give me a clue?
I tried almost every possible solution offered on different websites and could not make it work with the Apache FTPS client,
so I had to use a different class, which worked like a charm. Here is a snippet using
com.jscape.inet.ftps (Link):
private Ftps sendWithFtpsJSCAPE(ConnDetails details, InputStream stream, String filename) throws FtpException, IOException {
    Ftps ftp;
    FtpConnectionDetails ftpDetails = (FtpConnectionDetails) details;
    ftp = new Ftps(ftpDetails.getHost(), ftpDetails.getUsername(), ftpDetails.getPassword());
    if (ftpDetails.getSecurityMode().equals(FtpConnectionDetails.SecurityMode.EXPLICIT)) {
        ftp.setConnectionType(Ftps.AUTH_TLS);
    } else {
        ftp.setConnectionType(Ftps.IMPLICIT_SSL);
    }
    ftp.setPort(ftpDetails.getPort());
    if (!ftpDetails.getFtpMode().equals(FtpMode.ACTIVE)) {
        ftp.setPassive(true);
    }
    ftp.setTimeout(FTPS_JSCAPE_TIME_OUT);
    ftp.connect();
    ftp.setBinary();
    ftp.setDir(ftpDetails.getPath());
    ftp.upload(stream, filename);
    return ftp;
}
I am battling with trying to download files using the Google Drive API. I'm just writing code that should download files from my Drive onto my computer. I've finally got to a stage where I am authenticated and can view the file metadata. For some reason, I'm still unable to download files. The downloadUrl I get looks like:
https://doc-04-as-docs.googleusercontent.com/docs/securesc/XXXXXXXXXXXXXX/0B4dSSlLzQCbOXzAxNGxuRUhVNEE?e=download&gd=true
This URL isn't downloading anything when I run my code or when I copy and paste it into a browser. But in the browser, when I remove the "&gd=true" part of the URL, it downloads the file.
My download method is straight out of the Google Drive API documentation:
public static InputStream downloadFile(Drive service, File file) {
    if (file.getDownloadUrl() != null && file.getDownloadUrl().length() > 0) {
        try {
            System.out.println("Downloading: " + file.getTitle());
            return service.files().get(file.getId()).executeMediaAsInputStream();
        } catch (IOException e) {
            // An error occurred.
            e.printStackTrace();
            return null;
        }
    } else {
        // The file doesn't have any content stored on Drive.
        return null;
    }
}
Anyone know what's going on here?
Thanks in advance.
Since you're using Drive v2, a different approach (also in the documentation) is to get the InputStream through the HttpRequest object.
/**
 * Download a file's content.
 *
 * @param service Drive API service instance.
 * @param file Drive File instance.
 * @return InputStream containing the file's content if successful,
 *         {@code null} otherwise.
 */
private static InputStream downloadFile(Drive service, File file) {
    if (file.getDownloadUrl() != null && file.getDownloadUrl().length() > 0) {
        try {
            HttpResponse resp =
                    service.getRequestFactory().buildGetRequest(new GenericUrl(file.getDownloadUrl()))
                            .execute();
            return resp.getContent();
        } catch (IOException e) {
            // An error occurred.
            e.printStackTrace();
            return null;
        }
    } else {
        // The file doesn't have any content stored on Drive.
        return null;
    }
}
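Either way, the returned stream still has to be copied to disk to land the file on your computer; a minimal sketch, assuming service and file as above and using the file's title as the local file name:
import java.io.FileOutputStream;
import java.io.InputStream;
import java.io.OutputStream;

InputStream in = downloadFile(service, file);
if (in != null) {
    try (OutputStream out = new FileOutputStream(file.getTitle())) {
        byte[] buffer = new byte[8192];
        int read;
        while ((read = in.read(buffer)) != -1) {
            out.write(buffer, 0, read);
        }
    } finally {
        in.close();
    }
}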
I have cloud storage at Strato, namely HiDrive. It uses the WebDAV protocol, which is based on HTTP. The client application they provide is poor and buggy, so I tried various other tools for synchronization, but none worked the way I need.
I'm therefore trying to implement it in Java using the Sardine project. Is there any code for hard-copying a local source folder to an external cloud folder? I haven't found anything in that direction.
The following code is supposed to upload the file...
Sardine sardine = SardineFactory.begin("username", "password");
InputStream fis = new FileInputStream(new File("some/file/test.txt"));
sardine.put("https://webdav.hidrive.strato.com/users/username/Backup", fis);
... but throws an exception instead:
Exception in thread "main" com.github.sardine.impl.SardineException: Unexpected response (301 Moved Permanently)
at com.github.sardine.impl.handler.ValidatingResponseHandler.validateResponse(ValidatingResponseHandler.java:48)
at com.github.sardine.impl.handler.VoidResponseHandler.handleResponse(VoidResponseHandler.java:34)
at com.github.sardine.impl.handler.VoidResponseHandler.handleResponse(VoidResponseHandler.java:1)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:218)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:160)
at com.github.sardine.impl.SardineImpl.execute(SardineImpl.java:828)
at com.github.sardine.impl.SardineImpl.put(SardineImpl.java:755)
at com.github.sardine.impl.SardineImpl.put(SardineImpl.java:738)
at com.github.sardine.impl.SardineImpl.put(SardineImpl.java:726)
at com.github.sardine.impl.SardineImpl.put(SardineImpl.java:696)
at com.github.sardine.impl.SardineImpl.put(SardineImpl.java:689)
at com.github.sardine.impl.SardineImpl.put(SardineImpl.java:682)
at com.github.sardine.impl.SardineImpl.put(SardineImpl.java:676)
Printing out the folders in that directory works, so the connection/authentication did succeed:
List<DavResource> resources = sardine.list("https://webdav.hidrive.strato.com/users/username/Backup");
for (DavResource res : resources) {
    System.out.println(res);
}
Please either help me fix my code or link me to some file synchronization library that works for my purpose.
Sardine internally uses HttpClient. There is a similar question here where you can find an answer: Httpclient 4, error 302. How to redirect?
Try converting the InputStream object into a byte array before you call put(). Something like the below:
byte[] fisByte = IOUtils.toByteArray(fis);
sardine.put("https://webdav.hidrive.strato.com/users/username/Backup", fisByte);
It worked for me. Let me know.
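A self-contained version of that suggestion (a sketch; IOUtils comes from Apache Commons IO, and the file path and URL are the ones from the question):
import java.io.File;
import java.io.FileInputStream;
import java.io.InputStream;
import org.apache.commons.io.IOUtils;
import com.github.sardine.Sardine;
import com.github.sardine.SardineFactory;

Sardine sardine = SardineFactory.begin("username", "password");
try (InputStream fis = new FileInputStream(new File("some/file/test.txt"))) {
    // Buffering the whole file gives HttpClient a repeatable entity,
    // so the request body can be re-sent after a redirect.
    byte[] fisByte = IOUtils.toByteArray(fis);
    sardine.put("https://webdav.hidrive.strato.com/users/username/Backup", fisByte);
}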
I had to extend org.apache.http.impl.client.LaxRedirectStrategy and also the getRedirect() method of org.apache.http.impl.client.DefaultRedirectStrategy with handling for the needed methods: PUT, MKCOL, etc. By default only GET is redirected.
It looks like this:
private static final String[] REDIRECT_METHODS = new String[] { HttpGet.METHOD_NAME, HttpPost.METHOD_NAME, HttpHead.METHOD_NAME, HttpPut.METHOD_NAME, HttpDelete.METHOD_NAME, HttpMkCol.METHOD_NAME };
The isRedirectable method:
for (final String m : REDIRECT_METHODS) {
    if (m.equalsIgnoreCase(method)) {
        System.out.println("isRedirectable true");
        return true;
    }
}
return method.equalsIgnoreCase(HttpPropFind.METHOD_NAME);
The getRedirect method:
final URI uri = getLocationURI(request, response, context);
final String method = request.getRequestLine().getMethod();
if (method.equalsIgnoreCase(HttpHead.METHOD_NAME)) {
    return new HttpHead(uri);
} else if (method.equalsIgnoreCase(HttpGet.METHOD_NAME)) {
    return new HttpGet(uri);
} else if (method.equalsIgnoreCase(HttpPut.METHOD_NAME)) {
    HttpPut httpPut = new HttpPut(uri);
    httpPut.setEntity(((HttpEntityEnclosingRequest) request).getEntity());
    return httpPut;
} else if (method.equalsIgnoreCase("MKCOL")) {
    return new HttpMkCol(uri);
} else if (method.equalsIgnoreCase("DELETE")) {
    return new HttpDelete(uri);
} else {
    final int status = response.getStatusLine().getStatusCode();
    if (status == HttpStatus.SC_TEMPORARY_REDIRECT) {
        return RequestBuilder.copy(request).setUri(uri).build();
    } else {
        return new HttpGet(uri);
    }
}
That worked for me.
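Assembled into one class, the above might look like the following sketch. The class name WebDavRedirectStrategy is mine; it extends DefaultRedirectStrategy directly (LaxRedirectStrategy only widens isRedirectable, which we override anyway), and HttpMkCol/HttpPropFind are Sardine's request classes from com.github.sardine.impl.methods:
import java.net.URI;
import org.apache.http.HttpEntityEnclosingRequest;
import org.apache.http.HttpRequest;
import org.apache.http.HttpResponse;
import org.apache.http.ProtocolException;
import org.apache.http.client.methods.*;
import org.apache.http.impl.client.DefaultRedirectStrategy;
import org.apache.http.protocol.HttpContext;
import com.github.sardine.impl.methods.HttpMkCol;
import com.github.sardine.impl.methods.HttpPropFind;

public class WebDavRedirectStrategy extends DefaultRedirectStrategy {

    private static final String[] REDIRECT_METHODS = new String[] {
            HttpGet.METHOD_NAME, HttpPost.METHOD_NAME, HttpHead.METHOD_NAME,
            HttpPut.METHOD_NAME, HttpDelete.METHOD_NAME, HttpMkCol.METHOD_NAME };

    @Override
    protected boolean isRedirectable(final String method) {
        for (final String m : REDIRECT_METHODS) {
            if (m.equalsIgnoreCase(method)) {
                return true;
            }
        }
        return method.equalsIgnoreCase(HttpPropFind.METHOD_NAME);
    }

    @Override
    public HttpUriRequest getRedirect(final HttpRequest request, final HttpResponse response,
                                      final HttpContext context) throws ProtocolException {
        final URI uri = getLocationURI(request, response, context);
        final String method = request.getRequestLine().getMethod();
        if (method.equalsIgnoreCase(HttpPut.METHOD_NAME)) {
            // Re-attach the entity so the body is sent again to the new location.
            HttpPut httpPut = new HttpPut(uri);
            httpPut.setEntity(((HttpEntityEnclosingRequest) request).getEntity());
            return httpPut;
        } else if (method.equalsIgnoreCase("MKCOL")) {
            return new HttpMkCol(uri);
        }
        // GET, HEAD, and temporary redirects are handled by the default strategy.
        return super.getRedirect(request, response, context);
    }
}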
I am wondering what the correct way is with Play 2.3.x (Java) to upload images to Amazon S3 in a non-blocking way.
Right now I am wrapping the amazonS3.putObject method inside a promise. However, I fear that I am basically just blocking another thread with this logic. My code looks like the following:
return Promise.promise(
    new Function0<Boolean>() {
        public Boolean apply() {
            if (S3Plugin.amazonS3 != null) {
                try {
                    PutObjectRequest putObjectRequest = new PutObjectRequest(
                            S3Plugin.s3Bucket, name + "." + format, file.getFile());
                    ObjectMetadata metadata = putObjectRequest.getMetadata();
                    if (metadata == null) {
                        metadata = new ObjectMetadata();
                    }
                    metadata.setContentType(file.getContentType());
                    putObjectRequest.setMetadata(metadata);
                    putObjectRequest.withCannedAcl(CannedAccessControlList.PublicRead);
                    S3Plugin.amazonS3.putObject(putObjectRequest);
                    return true;
                } catch (AmazonServiceException e) {
                    // error uploading image to s3
                    Logger.error("AmazonServiceException: " + e.toString());
                } catch (AmazonClientException e) {
                    // error uploading image to s3
                    Logger.error("AmazonClientException: " + e.toString());
                }
            }
            return false;
        }
    }
);
What is the best way to make the upload process non-blocking?
The Amazon library also provides the TransferManager class for asynchronous uploads, but I am not sure how to utilize this in a non-blocking way either...
SOLUTION:
After spending quite a while figuring out how to use the Promise/Future in Java, I came up with the following solution, thanks to Will Sargent:
import akka.dispatch.Futures;

final scala.concurrent.Promise<Boolean> promise = Futures.promise();

... create AmazonS3 upload object ...

upload.addProgressListener(new ProgressListener() {
    @Override
    public void progressChanged(ProgressEvent progressEvent) {
        if (progressEvent.getEventCode() == ProgressEvent.COMPLETED_EVENT_CODE) {
            promise.success(true);
        } else if (progressEvent.getEventCode() == ProgressEvent.FAILED_EVENT_CODE) {
            promise.success(false);
        }
    }
});

return Promise.wrap(promise.future());
It is important to note that I have to use the Scala promise, not the Play Framework promise. The return value, however, is a play.libs.F.Promise.
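For reference, the elided upload object can be created with TransferManager along these lines (a sketch, assuming S3Plugin.amazonS3 and the putObjectRequest from the question; the progress listener from the solution then attaches to this upload):
import com.amazonaws.services.s3.transfer.TransferManager;
import com.amazonaws.services.s3.transfer.Upload;

TransferManager transferManager = new TransferManager(S3Plugin.amazonS3);
// upload() returns immediately; the transfer runs on the TransferManager's
// internal thread pool, so the calling thread is not blocked.
Upload upload = transferManager.upload(putObjectRequest);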
You can do the upload process and return a Future by creating a Promise, returning the promise's future, and only writing to the promise in the TransferManager's progress listener:
Examples of promises / futures: http://docs.scala-lang.org/overviews/core/futures.html
Return the Scala future from Promise: http://www.playframework.com/documentation/2.3.x/api/java/play/libs/F.Promise.html#wrapped()
TransferManager docs: http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/transfer/TransferManager.html