After creating a MinIO bucket, I set the bucket's lifecycle rules. The LifecycleRule takes an expiration that is set to just 1 day. When I check the status of my bucket through the MinIO client (mc) with mc ilm ls mycloud/bucketName, I see that the lifecycle rule was successfully applied to the designated bucket. However, when checking back on MinIO after 1 day, the bucket is still there. Is there something else I need to add to the LifecycleRule in order to delete the MinIO bucket properly?
Note: I've been using the MinIO SDK's Java Client API as a reference.
fun createBucket(bucketName: String) {
    client.makeBucket(MakeBucketArgs.builder().bucket(bucketName).build())
    setBucketLifeCycle(bucketName)
}
private fun setBucketLifeCycle(bucketName: String) {
    // Set the expiration to one day.
    val expiration = Expiration(null as ZonedDateTime?, 1, null)
    val lifecycleRuleList = mutableListOf<LifecycleRule>()
    val lifecycleRuleExpiry = LifecycleRule(
        Status.ENABLED,
        null,
        expiration,
        RuleFilter("expiry/logs"),
        "rule 1",
        null,
        null,
        null)
    lifecycleRuleList.add(lifecycleRuleExpiry)
    val lifecycleConfig = LifecycleConfiguration(lifecycleRuleList)
    // Apply the lifecycle configuration to the target bucket.
    client.setBucketLifecycle(SetBucketLifecycleArgs.builder()
        .bucket(bucketName).config(lifecycleConfig).build())
}
Questions
Am I missing something more on my LifeCycleRule?
Could it be that the bucket does not get automatically deleted because it has objects inside of it?
I did notice with the MinIO client that when the bucket has items in it, mc rb mycloud/bucketName fails to remove the bucket, but forcing it with mc rb --force mycloud/bucketName removes it successfully. Is there a way to specify "force" in the lifecycle parameters?
Lifecycle rules apply to objects within a bucket, not to the bucket itself.
An S3 Lifecycle configuration is a set of rules that define actions that Amazon S3 applies to a group of objects.
(ref: https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lifecycle-mgmt.html)
So the bucket itself will not be deleted, even when all the objects in it have expired via ILM policies; removing the (now empty) bucket is a separate operation.
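If you want the code-level equivalent of mc rb --force after the objects have expired, you have to empty the bucket and then remove it yourself. Below is a rough sketch using the MinIO Java client (the Kotlin equivalent is analogous); it assumes client is the same MinioClient instance as in the question and does not handle versioned buckets:
import io.minio.ListObjectsArgs;
import io.minio.MinioClient;
import io.minio.RemoveBucketArgs;
import io.minio.RemoveObjectArgs;
import io.minio.Result;
import io.minio.messages.Item;

// Delete every remaining object, then remove the empty bucket.
void forceRemoveBucket(MinioClient client, String bucketName) throws Exception {
    Iterable<Result<Item>> objects = client.listObjects(
            ListObjectsArgs.builder().bucket(bucketName).recursive(true).build());
    for (Result<Item> object : objects) {
        client.removeObject(RemoveObjectArgs.builder()
                .bucket(bucketName)
                .object(object.get().objectName())
                .build());
    }
    client.removeBucket(RemoveBucketArgs.builder().bucket(bucketName).build());
}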
Related
I have a List filteredList and I am streaming over each element, using forEach to set some items:
filteredList.parallelStream().forEach(s -> {
    ARChaic option = new ARChaic();
    option.setCpu(s.getNoOfCPU());
    option.setMem(s.getMemory());
    option.setStorage(s.getStorage());
    option.setOperatingSystem(s.getOperationSystem());
    ARChaic newOption = providerDes.getLatest(option); // this is an external service
    s.setCloudMemory(newOption.getMem());
    s.setCloudCPU(newOption.getCpu());
    s.setCloudStorage(newOption.getStorage());
    s.setCloudOS(newOption.getOperatingSystem());
});
The goal is to call this service, but if the option is the same as one I have already built, reuse the previous result instead of calling the service again.
For example, if two servers have the same memory, CPU, OS, and storage, then getLatest should be called only once.
Suppose the elements at positions 1 and 7 in filteredList have the same config; I shouldn't call getLatest again for 7, since I already have the previous result and can set it on 7 (the work done after the service call).
You can add equals and hashCode to your Server class to define when two Server instances are equal. From your description, you will have to compare the memory, CPU, OS, and storage.
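A sketch of what that could look like, assuming a Server class with the getters used in the question (field names and types are illustrative):
import java.util.Objects;

public class Server {
    private int noOfCPU;
    private int memory;
    private int storage;
    private String operationSystem;
    // getters, setters and the cloud* fields omitted for brevity

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Server)) return false;
        Server other = (Server) o;
        return noOfCPU == other.noOfCPU
                && memory == other.memory
                && storage == other.storage
                && Objects.equals(operationSystem, other.operationSystem);
    }

    @Override
    public int hashCode() {
        return Objects.hash(noOfCPU, memory, storage, operationSystem);
    }
}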
After this, you can map the filteredList as a Map<Server, List<Server>> to get unique servers as the key and the value will have all the repeated server instances. You will call the service once for each key in the map, but after you get the result, you can update all the server instances that are the value of the map with the result.
Map<Server, List<Server>> uniqueServers = filteredList.stream()
        .collect(Collectors.groupingBy(Function.identity()));
uniqueServers.entrySet().parallelStream().forEach(entry -> {
    Server currentServer = entry.getKey(); // representative server for this config
    ARChaic option = new ARChaic();
    option.setCpu(currentServer.getNoOfCPU());
    option.setMem(currentServer.getMemory());
    option.setStorage(currentServer.getStorage());
    option.setOperatingSystem(currentServer.getOperationSystem());
    ARChaic newOption = providerDes.getLatest(option); // this is an external service
    // Update all servers that share this configuration with the result.
    entry.getValue().forEach(server -> {
        server.setCloudMemory(newOption.getMem());
        server.setCloudCPU(newOption.getCpu());
        server.setCloudStorage(newOption.getStorage());
        server.setCloudOS(newOption.getOperatingSystem());
    });
});
I am trying to set up an Amazon SES receipt rule set that puts incoming emails into an S3 bucket. I have created an S3 bucket and I want these mails to be sorted into folders according to the recipient address. For example, if an email comes to 1@mydomain.com it should go into mytestbucket/1, and if it comes to 2@mydomain.com it should go into mytestbucket/2.
AWSCredentials awsCredentials = new BasicAWSCredentials(accessKey, secretKey);
AmazonSimpleEmailServiceClient sesClient = new AmazonSimpleEmailServiceClient(awsCredentials);
if (sesClient != null) {
    CreateReceiptRuleRequest req = new CreateReceiptRuleRequest();
    req.withRuleSetName(ruleSetName);
    ReceiptRule rule = new ReceiptRule();
    rule.setEnabled(true);
    rule.setName(customerIdString + "-email");
    rule.withRecipients(customerIdString + "@mydomain.com");
    List<ReceiptAction> actions = new ArrayList<ReceiptAction>();
    ReceiptAction action = new ReceiptAction();
    S3Action s3Action = new S3Action();
    s3Action.setBucketName(mytestbucket);
    s3Action.setObjectKeyPrefix(customerIdString);
    action.setS3Action(s3Action);
    actions.add(action);
    rule.setActions(actions);
    req.setRule(rule);
    CreateReceiptRuleResult response = sesClient.createReceiptRule(req);
    return true;
}
Whenever I add a customer, I call this method to add a rule to my active rule set. But it looks like only 100 rules can be added, and my use case will need at least 100,000. How can I achieve this?
What I am expecting to do is:
Have a single receipt rule that says: for any email that comes to my subdomain, invoke a Lambda function.
The Lambda function should put the email into the appropriate subfolder of S3.
Follow these steps to achieve what you desire...
Create a single SES rule to place ALL emails into a single S3 folder unsorted_emails (you can call it anything).
Create a Lambda function that places emails into their proper folders.
Set unsorted_emails as an event source to trigger your Lambda function.
Now, whenever a new email is added to unsorted_emails, your Lambda function will be triggered and can move the email into the proper folder.
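As a rough illustration, such a handler could look like the sketch below. It assumes the aws-lambda-java-events library (2.x, matching the v1 SDK in the question), that the raw emails land under an unsorted_emails/ prefix, and it naively scans the raw MIME for the To: header to pick the destination folder; in production you would use a proper MIME parser (or SES's own Lambda action, which hands you the recipients directly) and add error handling:
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.S3Event;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.event.S3EventNotification.S3EventNotificationRecord;
import java.io.BufferedReader;
import java.io.InputStreamReader;

// Sketch only: sorts raw SES emails from unsorted_emails/ into per-recipient folders.
public class SortEmailHandler implements RequestHandler<S3Event, Void> {

    private final AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

    @Override
    public Void handleRequest(S3Event event, Context context) {
        for (S3EventNotificationRecord rec : event.getRecords()) {
            String bucket = rec.getS3().getBucket().getName();
            String key = rec.getS3().getObject().getKey();
            String recipient = extractRecipientLocalPart(bucket, key);
            if (recipient != null) {
                // e.g. unsorted_emails/abc123 -> 1/abc123
                String newKey = recipient + key.substring(key.lastIndexOf('/'));
                s3.copyObject(bucket, key, bucket, newKey);
                s3.deleteObject(bucket, key);
            }
        }
        return null;
    }

    // Naive header scan: returns "1" for a header line like "To: 1@mydomain.com".
    private String extractRecipientLocalPart(String bucket, String key) {
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(s3.getObject(bucket, key).getObjectContent()))) {
            String line;
            while ((line = reader.readLine()) != null && !line.isEmpty()) {
                if (line.toLowerCase().startsWith("to:")) {
                    String address = line.substring(3).trim();
                    int lt = address.lastIndexOf('<');
                    if (lt >= 0) {
                        address = address.substring(lt + 1);
                    }
                    int at = address.indexOf('@');
                    return at > 0 ? address.substring(0, at) : null;
                }
            }
        } catch (Exception e) {
            return null;
        }
        return null;
    }
}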
Let me know if these steps make sense, if you have any questions, or if I can clarify more.
In the AWS Java SDK 1.10.69, I can launch an instance and specify EBS volume mappings for the instance:
RunInstancesRequest runInstancesRequest = new RunInstancesRequest();
String userDataString = Base64.encodeBase64String(userData.toString().getBytes());
runInstancesRequest
    .withImageId(machineImageId)
    .withInstanceType(instanceType.toString())
    .withMinCount(minCount)
    .withMaxCount(maxCount)
    .withKeyName(sshKeyName)
    .withSecurityGroupIds(securityGroupIds)
    .withSubnetId(subnetId)
    .withUserData(userDataString)
    .setEbsOptimized(true);
final EbsBlockDevice ebsBlockDevice = new EbsBlockDevice();
ebsBlockDevice.setDeleteOnTermination(true);
ebsBlockDevice.setVolumeType(VolumeType.Gp2);
ebsBlockDevice.setVolumeSize(256);
ebsBlockDevice.setEncrypted(true);

final BlockDeviceMapping mapping = new BlockDeviceMapping();
mapping.setDeviceName("/dev/sdb");
mapping.setEbs(ebsBlockDevice);
// Attach the mapping to the launch request.
runInstancesRequest.withBlockDeviceMappings(mapping);
It seems that currently I can only enable / disable encryption on the volume, and not specify which KMS Customer Master Key to use for the volume.
Is there a way around this?
Edit: See my other answer below (https://stackoverflow.com/a/47602790/7692970) for the much easier solution now available
To specify a Customer Master Key (CMK) for an EBS volume for an instance, you have to combine the RunInstancesRequest with a CreateVolumeRequest and an AttachVolumeRequest. Otherwise, if you just specify true for encryption on the EbsBlockDevice it will use the default CMK.
First create the instance(s), without specifying the EBS volumes in the block device mapping of the RunInstancesRequest, then separately create the volumes, then attach them.
CreateVolumeRequest has withKmsKeyId()/setKmsKeyId() options.
For example, updating your code might look like:
RunInstancesRequest runInstancesRequest = new RunInstancesRequest();
String userDataString = Base64.encodeBase64String(userData.toString().getBytes());
runInstancesRequest
    .withImageId(machineImageId)
    .withInstanceType(instanceType.toString())
    .withMinCount(minCount)
    .withMaxCount(maxCount)
    .withKeyName(sshKeyName)
    .withSecurityGroupIds(securityGroupIds)
    .withSubnetId(subnetId)
    .withUserData(userDataString)
    .setEbsOptimized(true);
RunInstancesResult runInstancesResult = ec2Client.runInstances(runInstancesRequest);
for (Instance instance : runInstancesResult.getReservation().getInstances()) {
    CreateVolumeRequest volumeRequest = new CreateVolumeRequest()
        .withAvailabilityZone(instance.getPlacement().getAvailabilityZone())
        .withKmsKeyId(/* CMK id or alias/yourkeyaliashere */)
        .withEncrypted(true)
        .withSize(256)
        .withVolumeType(VolumeType.Gp2);
    CreateVolumeResult volumeResult = ec2Client.createVolume(volumeRequest);

    AttachVolumeRequest attachRequest = new AttachVolumeRequest()
        .withDevice("/dev/sdb")
        .withInstanceId(instance.getInstanceId())
        .withVolumeId(volumeResult.getVolume().getVolumeId());
    ec2Client.attachVolume(attachRequest);
}
Note: If you make use of the block device mapping in instance metadata, it does not get updated when you attach a volume to a running instance. To bring it up to date, you can stop/start the instance.
Good news! AWS has just added the ability to specify CMK key ids in the block device mapping when launching instances.
https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/ec2/model/EbsBlockDevice.html#setKmsKeyId-java.lang.String-
This was added to the AWS Java SDK in version 1.11.237.
Therefore in your original code you now just add
ebsBlockDevice.setKmsKeyId(keyId);
where keyId can be a CMK alias (in the form alias/<alias name>), key id (looks like 1234abcd-12ab-34cd-56ef-1234567890ab) or full CMK ARN (arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab).
I'm trying to create a simple application that executes a "git push --mirror" operation from the Java domain.
The JGit library, specifically the PushCommand class, doesn't seem to support the "--mirror" option even though it supports "--all" and "--tags".
Am I missing something? How do we use JGit to do "git push --mirror ..."?
Try it manually by using the following ref spec:
git.push().setRefSpecs(new RefSpec("+refs/*:refs/*")).call();
There is no exact equivalent to --mirror in JGit yet, but you should be able to emulate this behaviour. To force-push all local refs you can configure the PushCommand with
PushCommand pushCommand = git.push();
pushCommand.setForce(true);
pushCommand.add("refs/*:refs/*");
That would, however, leave behind refs that have been deleted locally. To publish those deletions as well, you can obtain a list of remote refs, determine which ones no longer exist locally, and push the deletions to the remote:
Collection<Ref> remoteRefs = git.lsRemote().setRemote("origin").setHeads(true).setTags(true).call();
Collection<String> deletedRefs = new ArrayList<String>();
for (Ref remoteRef : remoteRefs) {
    if (git.getRepository().getRef(remoteRef.getName()) == null) {
        deletedRefs.add(remoteRef.getName());
    }
}
for (String deletedRef : deletedRefs) {
    pushCommand.add(":" + deletedRef);
}
The git variable references the repository that you want to push from, i.e. the one from the first block. The LsRemoteCommand returns all heads and tags from the remote repository that is configured as origin in the local repository's configuration. In the usual case, the one you cloned from.
Please note that there is a small gap in how deleted local refs are propagated: the LsRemoteCommand only returns refs under heads and tags (e.g. no custom refs like pulls), hence you would not detect the local deletion of e.g. refs/foo/bar.
Does that work for you?
I'm looking at the AWS API and I can't seem to find a method to help me get info on an existing RDS database. I also tried to use a method that gets a list of all the RDS databases but failed at that too.
I looked at 2 methods and apparently they aren't what I'm looking for or I'm using them wrong.
Method 1:
I looked at ModifyDBInstanceRequest to see if I could specify the name of an existing database and query it for its properties (MySQL version, storage size, etc.).
The following piece of code didn't do what I expected. ad-dash-test is an existing DB in RDS, yet when I ran my code it said the engine version is null, even though I specified the instance by its DB instance identifier.
ModifyDBInstanceRequest blah = new ModifyDBInstanceRequest("ad-dash-test");
System.out.println("the engine ver is " + blah.getEngineVersion());
Method 2:
I tried using the DescribeDBInstancesResult class, but it looks like it's meant for newly created RDS databases, not existing ones.
DescribeDBInstancesResult db = new DescribeDBInstancesResult();
List<DBInstance> list = db.getDBInstances();
System.out.println("list length = " + list.size());
The list length that returns is 0 and I have 8 RDS instances.
I didn't find any examples in Amazon's SDK for RDS and using my logic and the API docs didn't seem to help. Hopefully someone can point me in the right direction. Thanks in advance for your help.
In both of your methods, you are only constructing request/result objects locally; you never actually send a request to AWS.
Try the following in your second example:
// Instantiating rdsClient directly is deprecated, use AmazonRDSClientBuilder.
// AmazonRDSClient rdsClient = new AmazonRDSClient(/*add your credentials and the proper constructor overload*/);
AmazonRDS rdsClient = AmazonRDSClientBuilder.defaultClient();
DescribeDBInstancesRequest request = new DescribeDBInstancesRequest();
DescribeDBInstancesResult result = rdsClient.describeDBInstances(request);
List<DBInstance> list = result.getDBInstances();
System.out.println("list length = " + list.size());
An example for method 1 (for modifying your instance(s)) should be similar.
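If what you actually want for method 1 is to inspect a single existing instance (rather than modify it), a sketch along these lines should work; it reuses the rdsClient from above and filters the describe call by the instance identifier from your question:
// Look up one existing instance by its identifier and print a few properties.
DescribeDBInstancesRequest singleRequest = new DescribeDBInstancesRequest()
        .withDBInstanceIdentifier("ad-dash-test");
DescribeDBInstancesResult singleResult = rdsClient.describeDBInstances(singleRequest);
for (DBInstance instance : singleResult.getDBInstances()) {
    System.out.println("engine = " + instance.getEngine()
            + ", version = " + instance.getEngineVersion()
            + ", storage (GB) = " + instance.getAllocatedStorage());
}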