AWS Java SDK: Specifying KMS Key Id For EBS

In the AWS Java SDK 1.10.69, I can launch an instance and specify EBS volume mappings for the instance:
RunInstancesRequest runInstancesRequest = new RunInstancesRequest();
String userDataString = Base64.encodeBase64String(userData.toString().getBytes());
runInstancesRequest
.withImageId(machineImageId)
.withInstanceType(instanceType.toString())
.withMinCount(minCount)
.withMaxCount(maxCount)
.withKeyName(sshKeyName)
.withSecurityGroupIds(securityGroupIds)
.withSubnetId(subnetId)
.withUserData(userDataString)
.setEbsOptimized(true);
final EbsBlockDevice ebsBlockDevice = new EbsBlockDevice();
ebsBlockDevice.setDeleteOnTermination(true);
ebsBlockDevice.setVolumeType(VolumeType.Gp2);
ebsBlockDevice.setVolumeSize(256);
ebsBlockDevice.setEncrypted(true);
final BlockDeviceMapping mapping = new BlockDeviceMapping();
mapping.setDeviceName("/dev/sdb");
mapping.setEbs(ebsBlockDevice);
It seems that currently I can only enable / disable encryption on the volume, and not specify which KMS Customer Master Key to use for the volume.
Is there a way around this?

Edit: See my other answer below (https://stackoverflow.com/a/47602790/7692970) for the much easier solution now available
To specify a Customer Master Key (CMK) for an EBS volume for an instance, you have to combine the RunInstancesRequest with a CreateVolumeRequest and an AttachVolumeRequest. Otherwise, if you just specify true for encryption on the EbsBlockDevice it will use the default CMK.
First create the instance(s), without specifying the EBS volumes in the block device mapping of the RunInstancesRequest, then separately create the volumes, then attach them.
CreateVolumeRequest has withKmsKeyId()/setKmsKeyId() options.
For example, updating your code might look like:
RunInstancesRequest runInstancesRequest = new RunInstancesRequest();
String userDataString = Base64.encodeBase64String(userData.toString().getBytes());
runInstancesRequest
.withImageId(machineImageId)
.withInstanceType(instanceType.toString())
.withMinCount(minCount)
.withMaxCount(maxCount)
.withKeyName(sshKeyName)
.withSecurityGroupIds(securityGroupIds)
.withSubnetId(subnetId)
.withUserData(userDataString)
.setEbsOptimized(true);
RunInstancesResult runInstancesResult = ec2Client.runInstances(runInstancesRequest);
for (Instance instance : runInstancesResult.getReservation().getInstances()) {
CreateVolumeRequest volumeRequest = new CreateVolumeRequest()
.withAvailabilityZone(instance.getPlacement().getAvailabilityZone())
.withKmsKeyId(/* CMK id or alias/yourkeyaliashere */)
.withEncrypted(true)
.withSize(256)
.withVolumeType(VolumeType.Gp2);
CreateVolumeResult volumeResult = ec2Client.createVolume(volumeRequest);
AttachVolumeRequest attachRequest = new AttachVolumeRequest()
.withDevice("/dev/sdb")
.withInstanceId(instance.getInstanceId())
.withVolumeId(volumeResult.getVolume().getVolumeId());
ec2Client.attachVolume(attachRequest);
}
Note: If you make use of the block device mapping in instance metadata, it does not get updated when you attach a volume to a running instance. To bring it up to date, you can stop/start the instance.

Good news! AWS has just added the ability to specify CMK key ids in the block device mapping when launching instances.
https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/ec2/model/EbsBlockDevice.html#setKmsKeyId-java.lang.String-
This was added to the AWS Java SDK in version 1.11.237.
Therefore in your original code you now just add
ebsBlockDevice.setKmsKeyId(keyId);
where keyId can be a CMK alias (in the form alias/<alias name>), key id (looks like 1234abcd-12ab-34cd-56ef-1234567890ab) or full CMK ARN (arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab).
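Putting it together, the original block-device code can be updated as sketched below (the key alias is a placeholder; a raw key id or full CMK ARN works equally well):

```java
import com.amazonaws.services.ec2.model.BlockDeviceMapping;
import com.amazonaws.services.ec2.model.EbsBlockDevice;
import com.amazonaws.services.ec2.model.VolumeType;

public class EncryptedMappingExample {
    // Builds the same mapping as in the question, now with an explicit CMK.
    static BlockDeviceMapping encryptedMapping(String kmsKeyId) {
        EbsBlockDevice ebs = new EbsBlockDevice()
                .withDeleteOnTermination(true)
                .withVolumeType(VolumeType.Gp2)
                .withVolumeSize(256)
                .withEncrypted(true)
                .withKmsKeyId(kmsKeyId); // requires AWS Java SDK >= 1.11.237
        return new BlockDeviceMapping()
                .withDeviceName("/dev/sdb")
                .withEbs(ebs);
    }

    public static void main(String[] args) {
        BlockDeviceMapping mapping = encryptedMapping("alias/your-key-alias");
        System.out.println(mapping.getEbs().getKmsKeyId());
    }
}
```

The mapping is then passed to the RunInstancesRequest as before; no separate CreateVolumeRequest/AttachVolumeRequest dance is needed.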

Related

How to rename a container in Azure Cosmos DB with Java SQL API?

It looks like it is not possible to rename a container in Azure Cosmos DB. Instead, the data should be copied to a new container via a bulk operation. How can I do this with the Java SDK? Are there any samples for it?
Yes, you are right: changing a container's name is currently not possible. As I understand it, you want to discard the old container (which you originally wanted to rename) and migrate its data to a new one.
The Data Migration tool is a great tool to do so: Tutorial: Use Data migration tool to migrate your data to Azure Cosmos DB
Also check out the Bulk Executor library for Java, its API documentation, and samples.
You can use importAll in the DocumentBulkExecutor class:
ConnectionPolicy connectionPolicy = new ConnectionPolicy();
RetryOptions retryOptions = new RetryOptions();
// Set client's retry options high for initialization
retryOptions.setMaxRetryWaitTimeInSeconds(120);
retryOptions.setMaxRetryAttemptsOnThrottledRequests(100);
connectionPolicy.setRetryOptions(retryOptions);
connectionPolicy.setMaxPoolSize(1000);
DocumentClient client = new DocumentClient(HOST, MASTER_KEY, connectionPolicy, null);
String collectionLink = String.format("/dbs/%s/colls/%s", "mydb", "mycol");
DocumentCollection collection = client.readCollection(collectionLink, null).getResource();
DocumentBulkExecutor executor = DocumentBulkExecutor.builder().from(client, collection,
collection.getPartitionKey(), collectionOfferThroughput).build();
// Set retries to 0 to pass control to bulk executor
client.getConnectionPolicy().getRetryOptions().setMaxRetryWaitTimeInSeconds(0);
client.getConnectionPolicy().getRetryOptions().setMaxRetryAttemptsOnThrottledRequests(0);
for(int i = 0; i < 10; i++) {
List<Document> documents = documentSource.getMoreDocuments();
BulkImportResponse bulkImportResponse = executor.importAll(documents, false, true, 40);
// Validate that all documents inserted to ensure no failure.
if (bulkImportResponse.getNumberOfDocumentsImported() < documents.size()) {
for(Exception e: bulkImportResponse.getErrors()) {
// Validate why there were some failures.
e.printStackTrace();
}
break;
}
}
executor.close();
client.close();
I solved the problem by linking instead of copying all the data, which simulates a rename of the container. Instead of one container I now use two: the first contains only the name of the second.
Now I can build the new version of the container. When I am finished, I update the saved container name and then drop the old container.
The tricky part is informing all nodes of the app to use the new container name.
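The indirection described above can be sketched in plain Java (the container names are hypothetical; in a real deployment the pointer record would itself live in a small Cosmos DB container that every node reads before querying):

```java
import java.util.HashMap;
import java.util.Map;

public class ContainerPointerExample {
    public static void main(String[] args) {
        // The "pointer" container holds a single record naming the live container.
        Map<String, String> pointer = new HashMap<>();
        pointer.put("containerName", "orders_v1"); // current live container

        // Build and fully populate the replacement container (orders_v2),
        // then flip the pointer; readers always resolve the name first.
        pointer.put("containerName", "orders_v2");
        System.out.println(pointer.get("containerName")); // orders_v2

        // Finally, drop orders_v1 and notify all app nodes to re-read the pointer.
    }
}
```

The swap itself is a single-document update, so the "rename" is effectively atomic from the readers' point of view.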

How to provide name of the Instance while creating aws ec2 using java aws sdk

I am finding it very hard to figure out how to provide the name of an EC2 instance while creating it using the AWS Java SDK.
I am using the following code to create an EC2 instance:
RunInstancesRequest runInstancesRequest = new RunInstancesRequest();
runInstancesRequest.withImageId(ec2Configuration.getImageId())
.withInstanceType(ec2Configuration.getInstanceType())
.withMinCount(ec2Configuration.getMincount())
.withMaxCount(ec2Configuration.getMaxcount())
.withKeyName(ec2Configuration.getKeyPairName())
.withSecurityGroupIds(Arrays.asList(ec2Configuration.getSgId()));
if (ec2Configuration.isEbsOptimized())
runInstancesRequest.withMonitoring(true);
if (ec2Configuration.isEbsOptimized())
runInstancesRequest.withEbsOptimized(true);
try {
RunInstancesResult result = amazonEC2Client.runInstances(
runInstancesRequest);
} catch (Exception e) {
// all exception stuffs
}
I could not find anything like .withName("myVmName").withInstanceType(...) or .define("myVmName").withInstanceType(...).
What is the way to set the name of an instance while creating it?
I want to give it a name like the name 'cpanel' shown in this image.
Instances do not have a name as such; instead they have tags, which are metadata key/value pairs attached to the instance. The name shown in the console is simply a tag with the key "Name".
What you want to look at is the withTagSpecifications argument on RunInstancesRequest.
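A sketch of tagging at launch time (the AMI id and name are placeholders; withTagSpecifications is available from AWS Java SDK 1.11.124 onwards):

```java
import com.amazonaws.services.ec2.model.ResourceType;
import com.amazonaws.services.ec2.model.RunInstancesRequest;
import com.amazonaws.services.ec2.model.Tag;
import com.amazonaws.services.ec2.model.TagSpecification;

public class NameTagExample {
    // Attaches a "Name" tag to the instance at creation time.
    static RunInstancesRequest namedRequest(String name) {
        TagSpecification nameTag = new TagSpecification()
                .withResourceType(ResourceType.Instance)
                .withTags(new Tag("Name", name));
        return new RunInstancesRequest()
                .withImageId("ami-12345678") // placeholder AMI
                .withMinCount(1)
                .withMaxCount(1)
                .withTagSpecifications(nameTag);
    }

    public static void main(String[] args) {
        RunInstancesRequest request = namedRequest("myVmName");
        System.out.println(
                request.getTagSpecifications().get(0).getTags().get(0).getValue());
    }
}
```

Because the tag is part of the RunInstancesRequest, the instance appears in the console with its name immediately, with no follow-up CreateTags call.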

How to get JanusGraphManagement from Java

I can't understand how to get a JanusGraphManagement instance from a graph created with the ConfiguredGraphFactory.
I tried doing something like this:
JanusGraphFactory.Builder config = JanusGraphFactory.build();
config.set("storage.hostname", storageHostname);
config.set("storage.port", storagePort);
config.set("storage.backend", STORAGE_BACKEND);
config.set("index.search.backend", SEARCH_BACKEND);
config.set("index.search.hostname", indexHostname);
config.set("index.search.port", indexPort);
config.set("graph.graphname", graphName);
JanusGraph graph = config.open();
JanusGraphManagement mgmt = graph.openManagement();
But it generates the following exception:
java.lang.NullPointerException: Gremlin Server must be configured to use the JanusGraphManager.
The Gremlin Server is running with the following configuration:
host: 0.0.0.0
port: 8182
scriptEvaluationTimeout: 180000
# channelizer: org.apache.tinkerpop.gremlin.server.channel.WebSocketChannelizer
channelizer: org.janusgraph.channelizers.JanusGraphWebSocketChannelizer
graphManager: org.janusgraph.graphdb.management.JanusGraphManager
graphs: {
#graph: conf/gremlin-server/janusgraph-cql-es-server.properties,
ConfigurationManagementGraph: conf/gremlin-server/janusgraph-cql-es-server-configured.properties
}
.....
And the JanusGraph's one is this:
gremlin.graph=org.janusgraph.core.ConfiguredGraphFactory
graph.graphname=ConfigurationManagementGraph
storage.backend=cql
storage.hostname=127.0.0.1
storage.cql.keyspace=janusgraph
cache.db-cache = true
cache.db-cache-time = 180000
cache.db-cache-size = 0.25
index.search.backend=elasticsearch
index.search.hostname=127.0.0.1
index.search.elasticsearch.client-only=true
What I'd like to do is define the graph schema directly from Java code; that's why I need a management instance, and a traversal source is not enough.
They really don't seem to want you to do this from Java. Check my initial commit to an example repo I built.
The general deal is that there is a bunch of internal magic happening. You need to make a new embedded instance of the ConfigurationManagementGraph and a few other things. The steps to get ConfiguredGraphFactory up and running are:
JanusGraphManager(Settings())
// the configuration file you used for your ConfigurationManagementGraph in your `janusgraph-server.yaml` file
val mgrConfFile = File("conf/janusgraph-cql-configurationgraph.properties")
// load the configuration
val base = CommonsConfiguration(ConfigurationUtil.loadPropertiesConfig(mgrConfFile))
// modify a few things specific to the ConfigurationManagementGraph
base.set("graph.graphname", "name-of-this-graph-instance")
base.set("graph.unique-instance-id", "some-super-unique-id")
base.set("storage.lock.local-mediator-group", "tmp")
// duplicate the config for some reason?
val local = ModifiableConfiguration(GraphDatabaseConfiguration.ROOT_NS, base, BasicConfiguration.Restriction.NONE)
// build another type of configuration?
val config = GraphDatabaseConfiguration(base, local, instanceId, local)
// create the new ConfigurationManagementGraph instance
return ConfigurationManagementGraph(StandardJanusGraph(config))
Don't forget that you will still need to create a template configuration first.
Now, you can use the singleton ConfiguredGraphFactory anywhere in your application, just like the docs say.
val myGraph = ConfiguredGraphFactory.open("myGraph")
Keep in mind that you may not need to do this. The Client.submit() function comes in handy for most things. For example:
// connect to the gremlin server
val cluster = Cluster.build("localhost").create()
val client = cluster.connect<Client.ClusteredClient>()
// example: get a list of existing graph names
val existingGraphs = client.submit("ConfiguredGraphFactory.getGraphNames()").all().get()
// check if a graph exists
val exists = existingGraphs.any { it.string == "myGraph" }
// create a new graph with the existing template
// (note: this *cannot* be cast to a JanusGraph, even though that would make this really useful)
val myGraph: TinkerGraph = client.submit("ConfiguredGraphFactory.create('myGraph')").all().get().first().get(TinkerGraph::class.java)
EDIT:
As @FlorianHockmann pointed out on the JanusGraph Discord server, it's preferable not to use these objects directly from your Java. Instead, it's better to use a Client.SessionedClient when you connect, like so:
val cluster = Cluster.build("localhost").create()
val session = cluster.connect<Client.SessionedClient>()
Since you've established a session, you can now save and re-use variables on the server. As Florian put it,
session.submit("mgmt = ConfiguredGraphFactory.open('myGraph').openManagement()").all().get()
// afterwards you can use it:
session.submit("// do stuff with mgmt").all().get()
Just don't forget to call session.close() when you're done!
Check out this gist I made for an example

List of Places using Platform SDK

Background
My application connects to the Genesys Interaction Server in order to receive events for actions performed on the Interaction Workspace. I am using the Platform SDK 8.5 for Java.
I make the connection to the Interaction Server using the method described in the API reference.
InteractionServerProtocol interactionServerProtocol =
new InteractionServerProtocol(
new Endpoint(
endpointName,
interactionServerHost,
interactionServerPort));
interactionServerProtocol.setClientType(InteractionClient.AgentApplication);
interactionServerProtocol.open();
Next, I need to register a listener for each Place I wish to receive events for.
RequestStartPlaceAgentStateReporting requestStartPlaceAgentStateReporting = RequestStartPlaceAgentStateReporting.create();
requestStartPlaceAgentStateReporting.setPlaceId("PlaceOfGold");
requestStartPlaceAgentStateReporting.setTenantId(101);
interactionServerProtocol.send(requestStartPlaceAgentStateReporting);
The way it is now, my application requires the user to manually specify each Place he wishes to observe. This requires him to know the names of all the Places, which he may not necessarily have [easy] access to.
Question
How do I programmatically obtain a list of Places available? Preferably from the Interaction Server to limit the number of connections needed.
There is a method you can use. If you check the methods of the application blocks you will see the cfg and query objects, which you can use to get a list of all DNs. When building the query, try leaving DBID, name, and number blank.
Here is .NET code that is similar to the Java code (actually exactly the same):
List<CfgDN> list = new List<CfgDN>();
List<DN> dnlist = new List<DN>();
CfgDNQuery query = new CfgDNQuery(m_ConfService);
list = m_ConfService.RetrieveMultipleObjects<CfgDN>(query).ToList();
foreach (CfgDN item in list)
{
DN foo = new DN(item); // DN wraps the CfgDN properties we need
......
dnlist.Add(foo);
}
Note: DN is my own class, which holds some properties from the Platform SDK.
KeyValueCollection tenantList = new KeyValueCollection();
tenantList.addString("tenant", "Resources");
RequestStartPlaceAgentStateReportingAll all = RequestStartPlaceAgentStateReportingAll.create(tenantList);
interactionServerProtocol.send(all);

How can attach new EBS volume to existing EC2 instance using java sdk?

I have tried using AttachVolumeRequest, but in response I get the following error:
Caught Exception: The request must contain the parameter volume
Reponse Status Code: 400
Error Code: MissingParameter
Here is my code; ec2 is my AmazonEC2 client object, and it works fine so far:
AttachVolumeRequest attachRequest=new AttachVolumeRequest()
.withInstanceId("my instance id");
attachRequest.setRequestCredentials(credentials);
EbsBlockDevice ebs=new EbsBlockDevice();
ebs.setVolumeSize(2);
//attachRequest.withVolumeId(ebs.getSnapshotId());
AttachVolumeResult result=ec2.attachVolume(attachRequest);
Any help is highly appreciated. Thanks in advance.
Cause
Class EbsBlockDevice from the AWS SDK for Java serves a different purpose; accordingly, its method getSnapshotId() only returns the ID of the snapshot from which the volume will be created, i.e. not a volume ID, hence the respective exception.
Solution
You most likely want to use class CreateVolumeRequest instead, e.g. (from the top of my head):
CreateVolumeRequest createVolumeRequest = new CreateVolumeRequest()
.withAvailabilityZone("my instance's AZ") // The AZ in which to create the volume.
.withSize(2); // The size of the volume, in gigabytes.
CreateVolumeResult createVolumeResult = ec2.createVolume(createVolumeRequest);
AttachVolumeRequest attachRequest = new AttachVolumeRequest()
.withInstanceId("my instance id")
.withVolumeId(createVolumeResult.getVolume().getVolumeId());
AttachVolumeResult attachResult = ec2.attachVolume(attachRequest);
