I am trying to test the Amazon S3 API using FakeS3. I have installed and started it as the instructions describe. When I run the code below, the server responds and the log shows no errors. The bucket is created in the specified root directory. However, the uploaded file is not present in it. Instead, there is another directory named after the keyName, with no file inside it. What am I doing wrong?
import java.io.File;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.S3ClientOptions;

public class FileUpload {

    private static String bucketName = "bucket1";
    private static String keyName = "File1";
    private static String uploadFileName = "message.txt";

    public static void main(String[] args) {
        // Create a new AmazonS3Client pointed at the local FakeS3 server
        BasicAWSCredentials credentials = new BasicAWSCredentials("foo", "bar");
        AmazonS3 client = new AmazonS3Client(credentials);
        client.setEndpoint("http://localhost:10000");
        client.setS3ClientOptions(new S3ClientOptions().withPathStyleAccess(true));

        System.out.println("Creating a new bucket......");
        client.createBucket(bucketName); // create a new bucket

        System.out.println("Uploading text file into specified bucket......");
        File fin = new File(uploadFileName);
        client.putObject(bucketName, keyName, fin);
        System.out.println("Done");
    }
}
Example server log:
localhost - - [09/Jul/2015:19:10:56 CDT] "PUT /bucket1/ HTTP/1.1" 200 0
- -> /bucket1/
localhost - - [09/Jul/2015:19:10:56 CDT] "PUT /bucket1/File1 HTTP/1.1" 200 0
- -> /bucket1/File1
import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.model.PutObjectRequest;
import java.io.File;
import java.io.IOException;

public class UploadObject {
    public static void main(String[] args) throws IOException {
        Regions clientRegion = Regions.US_EAST_1;
        String fileObjKeyName = "N.pdf";
        String fileName = "C:\\home\\aws\\N.pdf";

        // To test the file upload
        String accessKeyId = "AKIAZGSMNGVXXXZ73VXE";
        String secretAccessKey = "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";
        String bucketName = "fioprod-s3-addon-us-east-12";

        try {
            // Build the client with explicit static credentials; see
            // https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/setup-credentials.html
            final BasicAWSCredentials basicAWSCredentials = new BasicAWSCredentials(accessKeyId, secretAccessKey);
            AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
                    .withRegion(clientRegion)
                    .withCredentials(new AWSStaticCredentialsProvider(basicAWSCredentials))
                    .build();

            // Upload a file as a new object with ContentType and title specified.
            PutObjectRequest request = new PutObjectRequest(bucketName, fileObjKeyName, new File(fileName));
            ObjectMetadata metadata = new ObjectMetadata();
            metadata.setContentType("application/pdf"); // "plain/text" is not a valid MIME type
            metadata.addUserMetadata("title", "someTitle");
            request.setMetadata(metadata);
            s3Client.putObject(request);
        } catch (AmazonServiceException e) {
            // The call was transmitted successfully, but Amazon S3 couldn't process
            // it, so it returned an error response.
            e.printStackTrace();
        } catch (SdkClientException e) {
            // Amazon S3 couldn't be contacted for a response, or the client
            // couldn't parse the response from Amazon S3.
            e.printStackTrace();
        }
    }
}
The above code works fine for loading a PDF file into S3 when I run it from the IntelliJ IDE. I want to move this code into a Pentaho "User Defined Java Class" step, but when I do that it throws the error: "Imported class "com.amazonaws.auth.AWSStaticCredentialsProvider" could not be loaded".
How do I resolve that? My ultimate goal is to load a .pdf or .zip file into S3 using Pentaho.
Thank you for your time.
You have written nice code, and it works as well. You just need to put the aws-java-sdk jar into your data-integration/lib location.
You can download the SDK jar file from Here.
You can also look at my KTR from Here, where I have included your code and made small changes to make it work in a User Defined Java Class step in PDI.
import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.model.PutObjectRequest;
import java.io.File;
import java.io.IOException;

public boolean processRow(StepMetaInterface smi, StepDataInterface sdi) throws KettleException {
    Object[] r = getRow();
    if (r == null) {
        setOutputDone();
        return false;
    }

    Regions clientRegion = Regions.US_EAST_1;
    String fileObjKeyName = "N.pdf";
    String fileName = "C:\\home\\aws\\N.pdf";

    // To test the file upload
    String accessKeyId = "AKIAZGSMNGVXXXZ73VXE";
    String secretAccessKey = "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";
    String bucketName = "fioprod-s3-addon-us-east-12";

    // Build the client with explicit static credentials; see
    // https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/setup-credentials.html
    final BasicAWSCredentials basicAWSCredentials = new BasicAWSCredentials(accessKeyId, secretAccessKey);
    AmazonS3 s3Client = (AmazonS3) AmazonS3ClientBuilder.standard()
            .withRegion(clientRegion)
            .withCredentials(new AWSStaticCredentialsProvider(basicAWSCredentials))
            .build();

    // Upload a file as a new object with ContentType and title specified.
    PutObjectRequest request = new PutObjectRequest(bucketName, fileObjKeyName, new File(fileName));
    ObjectMetadata metadata = new ObjectMetadata();
    metadata.setContentType("application/pdf"); // "plain/text" is not a valid MIME type
    metadata.addUserMetadata("title", "someTitle");
    request.setMetadata(metadata);
    s3Client.putObject(request);

    putRow(data.outputRowMeta, r);
    return true;
}
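If you would rather not hardcode the path and key inside the step, here is a minimal, untested sketch of the same processRow (same imports as above) that reads them from incoming row fields instead; the field names "filename" and "s3key" are hypothetical and must match whatever your previous steps actually output:
public boolean processRow(StepMetaInterface smi, StepDataInterface sdi) throws KettleException {
    Object[] r = getRow();
    if (r == null) {
        setOutputDone();
        return false;
    }

    // Hypothetical incoming field names; adjust them to your stream.
    String fileName = get(Fields.In, "filename").getString(r);
    String fileObjKeyName = get(Fields.In, "s3key").getString(r);

    // For simplicity the client is rebuilt per row; in practice create it once and reuse it.
    AmazonS3 s3Client = (AmazonS3) AmazonS3ClientBuilder.standard()
            .withRegion(Regions.US_EAST_1)
            .withCredentials(new AWSStaticCredentialsProvider(
                    new BasicAWSCredentials("yourAccessKeyId", "yourSecretAccessKey")))
            .build();
    s3Client.putObject(new PutObjectRequest("your-bucket", fileObjKeyName, new File(fileName)));

    putRow(data.outputRowMeta, r);
    return true;
}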
I have around 100 files in my Dropbox account and I am trying to make a shareable link for each of them using the Dropbox API.
I tried using
DbxClient = new DbxClient(config, accessToken);
client.createShareableUrl(path);
but got an error on DbxClient: cannot find symbol / class not found.
import com.dropbox.core.DbxRequestConfig;
import com.dropbox.core.v2.*;
import static com.dropbox.core.v2.files.AlphaGetMetadataError.path;
import com.dropbox.core.v2.files.FileMetadata;
import com.dropbox.core.v2.files.ListFolderResult;
import com.dropbox.core.v2.files.Metadata;
import com.dropbox.core.v2.sharing.RequestedVisibility;
import com.dropbox.core.v2.sharing.SharedLinkMetadata;
import com.dropbox.core.v2.sharing.SharedLinkSettings;
import com.dropbox.core.v2.users.FullAccount;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.InputStream;

public class DBX {
    static boolean doYouWantMeToUpload = false;
    private static final String ACCESS_TOKEN = "My access token here I removed it";

    public static void main(String args[]) throws DbxException, FileNotFoundException, IOException {
        // Create Dropbox client
        DbxRequestConfig config = DbxRequestConfig.newBuilder("dropbox/java-tutorial").build();
        DbxClientV2 client = new DbxClientV2(config, ACCESS_TOKEN);

        // Get current account info
        FullAccount account = client.users().getCurrentAccount();
        System.out.println(account.getName().getDisplayName());

        if (doYouWantMeToUpload == true) {
            // Get files and folder metadata from Dropbox root directory
            ListFolderResult result = client.files().listFolder("");
            while (true) {
                for (Metadata metadata : result.getEntries()) {
                    System.out.println(metadata.getPathLower());
                }
                if (!result.getHasMore()) {
                    break;
                }
                result = client.files().listFolderContinue(result.getCursor());
            }

            // Upload "test.txt" to Dropbox
            try (InputStream in = new FileInputStream("test.txt")) {
                FileMetadata metadata = client.files().uploadBuilder("/test.txt")
                        .uploadAndFinish(in);
            }

            // Get shareable link for a file
            DbxClient = new DbxClient(config, ACCESS_TOKEN);
            client.createShareableUrl(test.txt);
        }
    }
}
I want to get a shareable link for all files in my Dropbox.
I followed these instructions in the Dropbox GitHub.
You're attempting to use the old createShareableUrl which is for Dropbox API v1, which is now retired.
You should instead use Dropbox API v2, via DbxClientV2, like you do for the other calls in your code.
Specifically, to create a shared link, you should use createSharedLinkWithSettings. That would look something like:
DbxClientV2 client = new DbxClientV2(config, ACCESS_TOKEN);
client.sharing().createSharedLinkWithSettings(path);
I am working on copying Box files to an S3 bucket. How do I get the file object from a Box file so I can copy it into the S3 bucket using box-java-sdk?
I have tried to get the file's metadata from the Box folder, but ended up with limited documentation on getting the file object.
import com.box.sdk.BoxAPIConnection;
import com.box.sdk.BoxFile;
import com.box.sdk.BoxFolder;
import com.box.sdk.BoxItem;
import com.box.sdk.Metadata;

String access_token = "some_access_token";
String refresh_token = "some_refresh_token";

BoxAPIConnection api = new BoxAPIConnection(client_id,
        client_secret,
        access_token,
        refresh_token);

for (BoxItem.Info itemInfo : folder) {
    if (itemInfo instanceof BoxFile.Info) {
        // getting file info, metadata
        // have to upload the file content here to S3 bucket
    } else if (itemInfo instanceof BoxFolder.Info) {
        BoxFolder.Info folderInfo = (BoxFolder.Info) itemInfo;
        // Do something with the folder.
    }
}
The goal is to upload Box content to an S3 bucket.
So I came up with this Java code to copy files from a Box folder to AWS S3. I have used box-java-sdk and aws-java-sdk here.
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.CompleteMultipartUploadRequest;
import com.amazonaws.services.s3.model.InitiateMultipartUploadRequest;
import com.amazonaws.services.s3.model.InitiateMultipartUploadResult;
import com.amazonaws.services.s3.model.PartETag;
import com.amazonaws.services.s3.model.UploadPartRequest;
import com.amazonaws.services.s3.model.UploadPartResult;
import com.box.sdk.BoxAPIConnection;
import com.box.sdk.BoxFile;
import com.box.sdk.BoxFolder;
import com.box.sdk.BoxItem;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.apache.commons.io.FileUtils;

public class BoxToS3Copy { // wrapper class and main method added so the snippet compiles as posted

    public static String fileObjKeyName = "";
    public static String bucketName = "s3Bucket";
    // Store credentials on your local machine in the AWS config / credentials file.
    public static ProfileCredentialsProvider credentialsProvider = new ProfileCredentialsProvider();
    public static String regionOfS3Bucket = "us-east-1"; // placeholder: your bucket's region
    public static AmazonS3 s3Client = AmazonS3ClientBuilder
            .standard()
            .withCredentials(credentialsProvider)
            .withRegion(regionOfS3Bucket)
            .build();

    public static void main(String[] args) {
        // Placeholders: use your own Box app credentials and folder id.
        String client_id = "some_client_id";
        String client_secret = "some_client_secret";
        String access_token = "some_access_token";
        String refresh_token = "some_refresh_token";
        String folderId = "some_folder_id";

        BoxAPIConnection api = new BoxAPIConnection(client_id,
                client_secret,
                access_token,
                refresh_token);
        BoxFolder folder = new BoxFolder(api, folderId);

        for (BoxItem.Info itemInfo : folder) {
            if (itemInfo instanceof BoxFile.Info) {
                // Getting file info and metadata; upload the file content to the S3 bucket here.
                BoxFile file = new BoxFile(api, itemInfo.getID());
                BoxFile.Info info = file.getInfo();
                System.out.println(info.getName());

                // Download the Box file to a local temporary file.
                try (FileOutputStream stream = new FileOutputStream(info.getName())) {
                    file.download(stream);
                } catch (IOException e) { // was FileNotFoundException, but close() can also throw IOException
                    e.printStackTrace();
                }

                File file_new = FileUtils.getFile(info.getName());
                fileObjKeyName = itemInfo.getID() + "_" + info.getName();
                long contentLength = file_new.length();
                System.out.println(contentLength);
                long partSize = 5 * 1024 * 1024;

                // Multipart upload of the local copy to S3.
                List<PartETag> partETags = new ArrayList<PartETag>();
                InitiateMultipartUploadRequest initRequest = new InitiateMultipartUploadRequest(bucketName, fileObjKeyName);
                InitiateMultipartUploadResult initResponse = s3Client.initiateMultipartUpload(initRequest);

                long filePosition = 0;
                for (int i = 1; filePosition < contentLength; i++) {
                    // Because the last part could be less than 5 MB, adjust the part size as needed.
                    partSize = Math.min(partSize, (contentLength - filePosition));

                    // Create the request to upload a part.
                    UploadPartRequest uploadRequest = new UploadPartRequest()
                            .withBucketName(bucketName).withKey(fileObjKeyName)
                            .withUploadId(initResponse.getUploadId()).withPartNumber(i)
                            .withFileOffset(filePosition).withFile(file_new).withPartSize(partSize);

                    // Upload the part and add the response's ETag to our list.
                    UploadPartResult uploadResult = s3Client.uploadPart(uploadRequest);
                    partETags.add(uploadResult.getPartETag());
                    filePosition += partSize;
                }

                CompleteMultipartUploadRequest compRequest = new CompleteMultipartUploadRequest(bucketName, fileObjKeyName, initResponse.getUploadId(), partETags);
                s3Client.completeMultipartUpload(compRequest);

                // Remove the local temporary copy.
                file_new.delete();
            } else if (itemInfo instanceof BoxFolder.Info) {
                BoxFolder.Info folderInfo = (BoxFolder.Info) itemInfo;
                // Do something with the folder.
            }
        }
    }
}
package com.Main;

import java.io.BufferedInputStream;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

public class Main {
    public static void main(String[] args) throws IOException {
        // Source file in the local file system
        String localSrc = args[0];
        // Destination file in HDFS
        String dst = args[1];

        // Input stream for the file in the local file system to be written to HDFS
        InputStream in = new BufferedInputStream(new FileInputStream(localSrc));

        // Get configuration of the Hadoop system
        Configuration conf = new Configuration();
        System.out.println("Connecting to -- " + conf.get("fs.defaultFS"));

        // Destination file in HDFS
        FileSystem fs = FileSystem.get(URI.create(dst), conf);
        OutputStream out = fs.create(new Path(dst));

        // Copy file from local to HDFS
        IOUtils.copyBytes(in, out, 4096, true);
        System.out.println(dst + " copied to HDFS");
    }
}
I am getting the following error message:
Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: 0
    at com.Main.Main.main(Main.java:22)
I have a JSON file on my local machine that I have to move to HDFS.
Ex:
{"Del":"Ef77xvP","time":1509073785106},
{"Del":"2YXsF7r","time":1509073795109}
Specify command-line arguments to your program. Your code snippet expects the first argument to be the source path and the second argument to be the destination path.
For more details, refer to What is "String args[]"? parameter in main method Java.
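For example, a small guard at the top of main makes the expected arguments explicit and fails fast instead of throwing ArrayIndexOutOfBoundsException (the jar name and paths below are placeholders for illustration):
// At the beginning of main(String[] args)
if (args.length < 2) {
    System.err.println("Usage: hadoop jar your-app.jar com.Main.Main <localSrc> <hdfsDst>");
    System.err.println("Example: hadoop jar your-app.jar com.Main.Main /tmp/data.json hdfs:///user/you/data.json");
    System.exit(1);
}
String localSrc = args[0]; // e.g. /tmp/data.json
String dst = args[1];      // e.g. hdfs:///user/you/data.json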
I'm trying to upload a file to my Google Drive account using Java. The file is uploaded, but I'm getting a warning: WARNING: Application name is not set. Call Builder#setApplicationName.
The code is:
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.util.Arrays;
import com.google.api.client.googleapis.auth.oauth2.GoogleAuthorizationCodeFlow;
import com.google.api.client.googleapis.auth.oauth2.GoogleCredential;
import com.google.api.client.googleapis.auth.oauth2.GoogleTokenResponse;
import com.google.api.client.http.FileContent;
import com.google.api.client.http.HttpTransport;
import com.google.api.client.http.javanet.NetHttpTransport;
import com.google.api.client.json.JsonFactory;
import com.google.api.client.json.jackson2.JacksonFactory;
import com.google.api.services.drive.Drive;
import com.google.api.services.drive.DriveScopes;
import com.google.api.services.drive.model.File;

public class UploadFileGoogleDrive {
    private static String CLIENT_ID = "*****";
    private static String CLIENT_SECRET = "******";
    private static String REDIRECT_URI = "****";

    public static void main(String[] args) throws IOException {
        HttpTransport httpTransport = new NetHttpTransport();
        JsonFactory jsonFactory = new JacksonFactory();

        GoogleAuthorizationCodeFlow flow = new GoogleAuthorizationCodeFlow.Builder(
                httpTransport, jsonFactory, CLIENT_ID, CLIENT_SECRET, Arrays.asList(DriveScopes.DRIVE))
                .setAccessType("online")
                .setApprovalPrompt("auto").build();

        String url = flow.newAuthorizationUrl().setRedirectUri(REDIRECT_URI).build();
        System.out.println("Please open the following URL in your browser then type the authorization code:");
        System.out.println(" " + url);
        BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
        String code = br.readLine();

        GoogleTokenResponse response = flow.newTokenRequest(code).setRedirectUri(REDIRECT_URI).execute();
        GoogleCredential credential = new GoogleCredential().setFromTokenResponse(response);

        //Create a new authorized API client
        Drive service = new Drive.Builder(httpTransport, jsonFactory, credential).build();

        //Insert a file
        File body = new File();
        body.setTitle("My document");
        body.setDescription("A test document");
        body.setMimeType("text/plain");
        java.io.File fileContent = new java.io.File("document.txt");
        FileContent mediaContent = new FileContent("text/plain", fileContent);

        File file = service.files().insert(body, mediaContent).execute();
        System.out.println("File ID: " + file.getId());
    }
}
The output is:
May 24, 2014 11:59:14 AM com.google.api.client.googleapis.services.AbstractGoogleClient
WARNING: Application name is not set. Call Builder#setApplicationName.
File ID: 1Ze77mqHtKWDU3eVljATlHQ0U
The Google APIs Client Library for Java strongly encourages you to set an application name when constructing your clients, for use when troubleshooting requests against our server logs. You can see an example on how to set the application name in the Google Drive Java command line sample.
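Concretely, in the code above the warning goes away once the Drive builder is given an application name; the string itself is arbitrary ("UploadFileGoogleDrive/1.0" below is just an example):
// Set an application name so the client library can identify your app in its request logs
Drive service = new Drive.Builder(httpTransport, jsonFactory, credential)
        .setApplicationName("UploadFileGoogleDrive/1.0")
        .build();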