How to copy an object to a different Amazon account using AWS SDK v2? - java

I am working on a Java project in which I am using the AWS SDK v2 for Amazon S3 services.
I am performing a copy operation; it works within the same account but not across accounts.
Code:
public void copyObjects(S3Object[] s3DestObjects, String sDestBucket, String sSourceBucket, String sSourceObject) {
    try {
        AwsBasicCredentials awsCreds = AwsBasicCredentials.create(ACCESS_KEY, SECRET_KEY);
        S3ClientBuilder s3ClientBuilder =
                S3Client.builder().credentialsProvider(StaticCredentialsProvider.create(awsCreds));
        s3ClientBuilder.region(Region.US_EAST_2);
        S3Client s3Client = s3ClientBuilder.build();

        String encodedUrl = null;
        try {
            encodedUrl = URLEncoder.encode(sSourceBucket + "/" + sSourceObject, StandardCharsets.UTF_8.toString());
        } catch (UnsupportedEncodingException e) {
            System.out.println("URL could not be encoded: " + e.getMessage());
        }

        for (S3Object s3DestObject : s3DestObjects) {
            //CopyObjectRequest copyObjectRequest = CopyObjectRequest.builder().destinationBucket(dstBucket).destinationKey(dstS3Object.key).copySource(encodedUrl).build();
            CopyObjectRequest copyObjectRequest = CopyObjectRequest.builder()
                    .copySource(encodedUrl)
                    .destinationBucket(sDestBucket)
                    .destinationKey(s3DestObject.key)
                    .metadata(s3DestObject.getMetadata())
                    .metadataDirective(MetadataDirective.REPLACE)
                    .build();
            CopyObjectResponse copyObjectResponse = s3Client.copyObject(copyObjectRequest);
        }
    } catch (S3Exception e) {
        throw e;
    }
}
The above code works when the destination bucket is in the same account, but fails for a bucket in a different account with this error:
Access Denied (Service: S3, Status Code: 403, Request ID: 4VCND27Z6P3CEJ8H, Extended Request ID: 2T88jx4+R+LjO74pBHOhJj8uOUx6M4Hx3UYYkWm4Sbf6cb9NVM8f5DvFcanv0rbXhZUfEkqpSuI=)
Please suggest how I can copy objects to a different account.

It appears your situation is:
- You have Amazon S3 buckets in different AWS Accounts
- You wish to copy objects between the buckets
There are two ways to do this:
1. 'Push' the objects
If your code is running in Account A and you wish to copy from a bucket in Account A to a bucket in Account B, then you will need:
- Permission on the IAM Entity (eg IAM User or IAM Role) that is being used by your program to write to the bucket in Account B, AND
- A bucket policy on the bucket in Account B that permits the IAM Entity used by your program to write to the bucket
When copying the object, you must set ACL=bucket-owner-full-control to 'hand over' ownership of the object to the destination AWS Account (see the sketch after this answer)
OR
2. 'Pull' the objects
If your code is running in Account B and you wish to copy from a bucket in Account A to a bucket in Account B, then you will need:
- Permission on the IAM Entity (eg IAM User or IAM Role) that is being used by your program to read from the bucket in Account A, AND
- A bucket policy on the bucket in Account A that permits the IAM Entity used by your program to read from the bucket
Note that in both cases, your program needs permission from the AWS Account it is running in AND a bucket policy on the bucket in the other AWS Account.
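For the 'push' case, the only code change needed in the SDK v2 snippet from the question is adding the canned ACL to the copy request. A minimal sketch, reusing the question's variables (destKey stands in for the destination key):

CopyObjectRequest request = CopyObjectRequest.builder()
        .copySource(encodedUrl) // "sourceBucket/sourceKey", URL-encoded, as in the question
        .destinationBucket(sDestBucket)
        .destinationKey(destKey)
        .acl(ObjectCannedACL.BUCKET_OWNER_FULL_CONTROL) // hands ownership to the destination account
        .build();
s3Client.copyObject(request);

With the 'pull' approach no ACL is needed, because the object is written by a principal in the destination account and is therefore owned by it.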

Related

Unable to validate google recaptcha enterprise. getting error: java.io.IOException: The Application Default Credentials are not available

When validating the token on the server I get:
java.io.IOException: The Application Default Credentials are not available. They are available if running in Google Compute Engine. Otherwise, the environment variable GOOGLE_APPLICATION_CREDENTIALS must be defined pointing to a file defining the credentials.
I have created a credentials JSON for a service account and set it in the environment variable GOOGLE_APPLICATION_CREDENTIALS, and alternatively created a credentials JSON for an AWS external account and set that in the environment variable, but then I got a "required parameters must be specified" error.
Note: I am able to get the token on the client side; this error occurs on the server side.
Code blocks used to get credentials:
Method 1:
// If you don't specify credentials when constructing the client, the client library will
// look for credentials via the environment variable GOOGLE_APPLICATION_CREDENTIALS.
Storage storage = StorageOptions.getDefaultInstance().getService();
System.out.println("Buckets:");
Page<Bucket> buckets = storage.list();
for (Bucket bucket : buckets.iterateAll()) {
    System.out.println(bucket.toString());
}
Method 2:
// Note: System.setProperty sets a JVM system property, not the OS environment
// variable that the client library reads; here the credentials are therefore
// also loaded explicitly from the file.
System.setProperty("GOOGLE_APPLICATION_CREDENTIALS", jsonPath);
System.out.println("GOOGLE_APPLICATION_CREDENTIALS");
System.out.println(System.getProperty("GOOGLE_APPLICATION_CREDENTIALS"));
FileInputStream fileInputStream = new FileInputStream(jsonPath);
GoogleCredentials credentials = GoogleCredentials.fromStream(fileInputStream)
        .createScoped(Lists.newArrayList("https://www.googleapis.com/auth/cloud-platform"));
Storage storage = StorageOptions.newBuilder().setCredentials(credentials).build().getService();
System.out.println("Buckets:");
Page<Bucket> buckets = storage.list();
for (Bucket bucket : buckets.iterateAll()) {
    System.out.println(bucket.toString());
}
Interpreting the assessment:
/**
 * Create an assessment to analyze the risk of a UI action.
 *
 * @param projectID: GCloud Project ID
 * @param recaptchaSiteKey: Site key obtained by registering a domain/app to use recaptcha services.
 * @param token: The token obtained from the client on passing the recaptchaSiteKey.
 * @param recaptchaAction: Action name corresponding to the token.
 */
public static void createAssessment(String projectID, String recaptchaSiteKey, String token,
        String recaptchaAction) throws IOException {
    // Initialize a client that will be used to send requests. This client needs to be created only
    // once, and can be reused for multiple requests. After completing all of your requests, call
    // the `client.close()` method on the client to safely clean up any remaining background resources.
    try (RecaptchaEnterpriseServiceClient client = RecaptchaEnterpriseServiceClient.create()) {
        // Specify a name for this assessment.
        String assessmentName = "assessment-name";
        // Set the properties of the event to be tracked.
        Event event = Event.newBuilder()
                .setSiteKey(recaptchaSiteKey)
                .setToken(token)
                .build();
        // Build the assessment request.
        CreateAssessmentRequest createAssessmentRequest = CreateAssessmentRequest.newBuilder()
                .setParent(ProjectName.of(projectID).toString())
                .setAssessment(Assessment.newBuilder().setEvent(event).setName(assessmentName).build())
                .build();
        Assessment response = client.createAssessment(createAssessmentRequest);
        // Check if the token is valid.
        if (!response.getTokenProperties().getValid()) {
            System.out.println("The CreateAssessment call failed because the token was: " +
                    response.getTokenProperties().getInvalidReason().name());
            return;
        }
        // Check if the expected action was executed.
        if (!response.getTokenProperties().getAction().equals(recaptchaAction)) {
            System.out.println("The action attribute in your reCAPTCHA tag " +
                    "does not match the action you are expecting to score");
            return;
        }
        // Get the risk score and the reason(s).
        // For more information on interpreting the assessment,
        // see: https://cloud.google.com/recaptcha-enterprise/docs/interpret-assessment
        float recaptchaScore = response.getRiskAnalysis().getScore();
        System.out.println("The reCAPTCHA score is: " + recaptchaScore);
        for (ClassificationReason reason : response.getRiskAnalysis().getReasonsList()) {
            System.out.println(reason);
        }
    }
}
I was able to achieve the same using reCAPTCHA v3 instead of the Enterprise version. Since we needed score-based assessment and v3 supports score-based validation, that was sufficient.
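Alternatively, if you stay on the Enterprise client, the Application Default Credentials lookup can be bypassed by handing credentials to the client explicitly. A sketch, assuming the standard settings pattern of the google-cloud-recaptchaenterprise client and the jsonPath variable from Method 2 (FixedCredentialsProvider comes from com.google.api.gax.core):

GoogleCredentials credentials = GoogleCredentials.fromStream(new FileInputStream(jsonPath))
        .createScoped(Lists.newArrayList("https://www.googleapis.com/auth/cloud-platform"));
RecaptchaEnterpriseServiceSettings settings = RecaptchaEnterpriseServiceSettings.newBuilder()
        .setCredentialsProvider(FixedCredentialsProvider.create(credentials))
        .build();
try (RecaptchaEnterpriseServiceClient client = RecaptchaEnterpriseServiceClient.create(settings)) {
    // createAssessment(...) calls as in the question
}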

Which Java API from the Azure SDK to delete a NetworkSecurityRule?

I can't find the Java API from the Azure SDK to delete a NetworkSecurityRule resource.
The REST API is documented here.
I use this Maven dependency: com.microsoft.azure:azure-mgmt-network:jar:1.31.0
In my code I hold a reference to a NetworkManager instance and I have a collection of NetworkSecurityRule objects.
Does anyone know how to do it?
Thanks,
Chris
According to my test, we can use the following code. For more details, please refer to the document.
1. Create a service principal and assign the Contributor role to it:
az login
az account set --subscription "<your subscription id>"
# it will assign Contributor to the sp at subscription level
az ad sp create-for-rbac -n "mysample" --role Contributor
2. Code:
public static void main(String[] args) throws Exception {
    String clientId = "your sp appId";
    String secret = "your sp password";
    String domain = "your tenant domain";
    ApplicationTokenCredentials credentials = new ApplicationTokenCredentials(clientId, domain, secret,
            AzureEnvironment.AZURE);
    Azure azure = Azure.configure().withLogLevel(LogLevel.BASIC).authenticate(credentials)
            .withDefaultSubscription();
    NetworkSecurityGroup group = azure.networkSecurityGroups().getById("your nsg resource id");
    for (String i : group.securityRules().keySet()) {
        System.out.println(i);
    }
    // Remove a rule by name, then re-read the NSG to verify it is gone.
    group.update().withoutRule("your rule name").apply();
    group = azure.networkSecurityGroups().getById(
            "/subscriptions/e5b0fcfa-e859-43f3-8d84-5e5fe29f4c68/resourceGroups/testgroup/providers/Microsoft.Network/networkSecurityGroups/test0123");
    System.out.println(group.name());
    for (String i : group.securityRules().keySet()) {
        System.out.println(i);
    }
}
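Given the collection of NetworkSecurityRule objects mentioned in the question, each rule can be removed by name the same way. A sketch (rulesToDelete is a hypothetical variable holding that collection):

for (NetworkSecurityRule rule : rulesToDelete) {
    group.update().withoutRule(rule.name()).apply();
}

Since each apply() issues a separate update to the NSG, it should also be possible to chain several withoutRule(...) calls on one update() before a single apply().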

Create AWS lambda to copy csv file from S3 to RDS MySQL

I am trying to load at least 4 CSV files from my S3 bucket into my RDS MySQL database. Every time the files are put in the bucket they will have a different name, since the filenames have the date appended at the end. I would like them to be automatically uploaded to the database when they are put in the S3 bucket. So far all I have is the load function to connect to the database, and at this point I'm just trying to load one file. What would I do to have the file automatically loaded once it's put in the S3 bucket? Thanks for the help!
LambdaFunctionHandler file:
public class LambdaFunctionHandler implements RequestHandler<Service, ResponseClass> {

    public void loadService() {
        try {
            Connection conn = DriverManager.getConnection("jdbc:mysql://connection/db", "user", "password");
            log.info("Connected to database.");
            // Load the CSV directly from S3 (Aurora MySQL "LOAD DATA FROM S3" syntax).
            String query = "LOAD DATA FROM S3 '" + S3_BUCKET_NAME + "' INTO TABLE " + sTablename
                    + " FIELDS TERMINATED BY ',' ENCLOSED BY '\"' "
                    + "LINES TERMINATED BY '\r\n' " + "IGNORE " + ignoreLines + " LINES";
            Statement stmt = conn.createStatement();
            stmt.executeUpdate(query);
            System.out.println("loaded table.");
            conn.close();
        } catch (SQLException e) {
            e.printStackTrace();
        }
    }

    @Override
    public ResponseClass handleRequest(Service arg0, Context arg1) {
        String path = "";
        return null;
    }
}
If you know the full key that the file you're uploading to S3 will have, then the standard AmazonS3 client object has this method: boolean doesObjectExist(String bucketName, String objectName). By the "rules" of S3, uploading a file is atomic: the specified S3 key will not return true for this call until the file is completely uploaded.
So you can trigger the upload of your file and test for completeness with the doesObjectExist call. Once done, perform your lambda function.
Alternatively, S3 has bucket event notifications that can invoke a Lambda function when an object is created, which avoids polling entirely.
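To make the load automatic, the usual pattern is an S3 event notification that invokes the Lambda whenever an object is created, passing the bucket and key in the event. A minimal sketch (assumptions: the aws-lambda-java-events dependency, an ObjectCreated trigger configured on the bucket, Aurora MySQL, since plain RDS MySQL does not support LOAD DATA FROM S3, and placeholder connection/table values):

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.S3Event;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class S3CsvLoader implements RequestHandler<S3Event, String> {

    @Override
    public String handleRequest(S3Event event, Context context) {
        // Each record carries the bucket and (URL-encoded) key of the new object.
        event.getRecords().forEach(record -> {
            String bucket = record.getS3().getBucket().getName();
            String key = record.getS3().getObject().getKey();
            loadCsv(bucket, key, context);
        });
        return "ok";
    }

    private void loadCsv(String bucket, String key, Context context) {
        // Aurora MySQL syntax; "mytable" and the connection values are placeholders.
        String query = "LOAD DATA FROM S3 's3://" + bucket + "/" + key + "'"
                + " INTO TABLE mytable"
                + " FIELDS TERMINATED BY ',' ENCLOSED BY '\"'"
                + " LINES TERMINATED BY '\\r\\n' IGNORE 1 LINES";
        try (Connection conn = DriverManager.getConnection("jdbc:mysql://connection/db", "user", "password");
             Statement stmt = conn.createStatement()) {
            stmt.executeUpdate(query);
            context.getLogger().log("Loaded " + key + " into table.");
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}

Because the notification fires on every ObjectCreated event, each new CSV triggers the load regardless of its dated filename.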

AWS S3 using grails

I am trying to establish a connection to AWS to perform basic operations on an S3 bucket. Following is the code:
def list() {
    AWSCredentials credentials = new BasicAWSCredentials("Access key", "Secret Key");
    AmazonS3 s3client = new AmazonS3Client(credentials);
    String bucketName = "sample-bucket-from-java-code";
    System.out.println("Listing all buckets : ");
    for (Bucket bucket : s3client.listBuckets()) {
        System.out.println(" - " + bucket.getName());
    }
}
This gives me the error:
request- Received error response: com.amazonaws.services.s3.model.AmazonS3Exception: The request signature we calculated does not match the signature you provided. Check your key and signing method.
I have also double checked the access key and secret key that I am using. Cannot figure out the issue.
It's generally better to use a Cognito identity pool (account ID) rather than a raw access key and secret key, because the Cognito identity has limited access to AWS and its permissions can always be changed.
// Initialize the Amazon Cognito credentials provider.
AWSCognitoCredentialsProvider *credentialsProvider = [AWSCognitoCredentialsProvider
    credentialsWithRegionType:AWSRegionUSEast1
    accountId:@"xxxxxxxxxxx"
    identityPoolId:@"xxxxxxxxxxx"
    unauthRoleArn:@"arn:aws:iam::xxxxxxxxxxx"
    authRoleArn:nil];
AWSServiceConfiguration *configuration = [AWSServiceConfiguration configurationWithRegion:AWSRegionUSWest2
    credentialsProvider:credentialsProvider];
[AWSServiceManager defaultServiceManager].defaultServiceConfiguration = configuration;
You can also find running code on my blog:
https://tauheeda.wordpress.com/2015/10/15/use-aws-service-to-downloadupload-files-in-your-applications/
Do not forget to add your credential info:
accountId:@"xxxxxxxx"
identityPoolId:@"xxxxxxxx-xxxxxxxx"
unauthRoleArn:@"xxxxxxxx-xxxxxxxx-xxxxxxxx-xxxxxxxx"
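For a Grails/Java application, a rough counterpart of the Objective-C snippet above uses the v1 Java SDK's com.amazonaws.auth.CognitoCredentialsProvider. This is only a sketch: check the constructor overloads available in your SDK version, and all IDs/ARNs below are placeholders.

AWSCredentialsProvider provider = new CognitoCredentialsProvider(
        "xxxxxxxxxxx",                          // AWS account id
        "us-east-1:xxxxxxxx-xxxx-xxxx",         // identity pool id
        "arn:aws:iam::xxxxxxxxxxx:role/unauth", // unauthenticated role ARN
        null);                                  // authenticated role ARN (unauth-only access)
AmazonS3 s3client = new AmazonS3Client(provider);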

AWS Cognito does not generate tokens when run from a servlet

I am trying to get Amazon Cognito to work. If I run the code to generate a login token from a standalone Java program, it works.
public class cognito extends HttpServlet {

    public static void main(String[] args) throws Exception {
        AWSCredentials credentials = new BasicAWSCredentials("*******", "********");
        AmazonCognitoIdentityClient client = new AmazonCognitoIdentityClient(credentials);
        client.setRegion(Region.getRegion(Regions.EU_WEST_1));
        GetOpenIdTokenForDeveloperIdentityRequest tokenRequest =
                new GetOpenIdTokenForDeveloperIdentityRequest();
        tokenRequest.setIdentityPoolId("*************");
        HashMap<String, String> map = new HashMap<String, String>();
        // Key -> Developer Provider Name used when creating the identity pool
        // Value -> Unique identifier of the user in your backend
        map.put("test", "AmazonCognitoIdentity");
        // Duration of the generated OpenID Connect Token
        tokenRequest.setLogins(map);
        tokenRequest.setTokenDuration(1000L);
        GetOpenIdTokenForDeveloperIdentityResult result =
                client.getOpenIdTokenForDeveloperIdentity(tokenRequest);
        String identityId = result.getIdentityId();
        String token = result.getToken();
        System.out.println("id = " + identityId + " token = " + token);
    }
}
However, when I run this code from a servlet on a Red Hat Linux server, it always times out.
Any suggestions would be helpful.
map.put("test", "AmazonCognitoIdentity");
are you sure your developer provider name is "test"?
you can see it in your cognito identity pool edit page.
And "AmazonCognitoIdentity" should be your own unique user-id.
Without the actual exception it is hard to tell what the exact issue is. It could be that something else running in your servlet engine is setting a much more aggressive socket timeout than the default used when it runs from the command line. You might want to explicitly set the connection and socket timeouts using this class: http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/ClientConfiguration.html and pass it in to the identity client constructor.
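A sketch of that suggestion against the v1 SDK: build a ClientConfiguration with explicit timeouts and pass it to the client constructor (the timeout values here are arbitrary):

ClientConfiguration config = new ClientConfiguration()
        .withConnectionTimeout(10000) // milliseconds
        .withSocketTimeout(30000);
AmazonCognitoIdentityClient client = new AmazonCognitoIdentityClient(credentials, config);
client.setRegion(Region.getRegion(Regions.EU_WEST_1));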
