CDK Java - Referencing nested JSON context values within a Stack

So I'm just getting started with CDK using Java and I'd like to find out how to extract context info from cdk.context.json.
Essentially I'm looking to hold parameters externally to the Stack, per environment (dev, test etc.).
I'll be looking at incorporating this into a pipeline (probably GitLab), so the cdk.context.json will be version controlled.
For instance, a cut of my context is as follows:
{
  "vpc-provider:account=999999999:filter.isDefault=true:filter.vpc-id=vpc-w4w4w4w4w4:region=eu-west-2:returnAsymmetricSubnets=true": {
    "vpcId": "vpc-w4w4w4w4w4",
    .......
  },
  "environments": {
    "dev": {
      "documentDb": [
        {
          "port": "27017"
        }
      ],
      "subnetGroups": [
        {
          "name": "Private",
          "type": "Private",
          "subnets": [
            {
              "cidr": "10.0.1.0/24",
              "availabilityZone": "eu-west-2a"
            },
            {
              "cidr": "10.0.2.0/24",
              "availabilityZone": "eu-west-2b"
            }
          ]
        }
      ]
    },
    "prod": {
      "documentDb": [
        {
          "port": "27018"
        }
      ],
      "subnetGroups": [
        {
          "name": "Private",
          "type": "Private",
          "subnets": [
            {
              "cidr": "20.0.1.0/24",
              "availabilityZone": "eu-west-2a"
            },
            {
              "cidr": "20.0.2.0/24",
              "availabilityZone": "eu-west-2b"
            }
          ]
        }
      ]
    }
  }
}
I'd like to extract the values for dev --> documentDb --> port, for instance, in the most elegant CDK way. If in my Stack I use:
this.getNode().tryGetContext("environments")
I am returned the whole JSON block:
{dev={documentDb=[{port=27017}], subnetGroups=[{name=Private, type=Private, subnets=[{cidr=10.0.1.0/24, availabilityZone=eu-west-2a}, {cidr=10.0.2.0/24, availabilityZone=eu-west-2b}]}]}, prod={documentDb=[{port=27018}], subnetGroups=..............
and I'm not sure how to traverse further down the tree. If I synth passing in the config:
cdk synth -c config=environments > target/DocumentDbAppStack.yaml
and in my Stack use:
this.getNode().tryGetContext("config")
I get "environments".
I can parse the LinkedHashMap using a JSON parser, but that's obviously the wrong approach. I have looked at loads of examples / AWS documentation etc. but can't seem to find the answer. There seems to be a wealth of info for TypeScript (I think that was the first language used for CDK), but I've never used it.
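For illustration, the brute-force approach I'm trying to avoid looks roughly like this (a minimal sketch, assuming the context value comes back as nested java.util.Map / java.util.List instances, which matches the LinkedHashMap I'm seeing):
// Sketch only: walk the structure returned by tryGetContext by casting each level.
// Requires java.util.List and java.util.Map imports; key names match the context above.
Map<String, Object> environments =
        (Map<String, Object>) this.getNode().tryGetContext("environments");
Map<String, Object> dev = (Map<String, Object>) environments.get("dev");
List<Map<String, Object>> documentDb = (List<Map<String, Object>>) dev.get("documentDb");
String port = (String) documentDb.get(0).get("port"); // "27017"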
Thanks in advance.

Related

OpenApi specification generator - Supply values from multiple Enum classes for a String field

I'm writing a Spring Boot application in Kotlin, and I'm currently struggling to generate a specification for a DTO class that has a backing field of type String, which I want to later parse into one of two enum classes in the adapter layer.
I've tried the following approach using the oneOf annotation value, which seemed like it would do what I want:
data class MyDto(
    @Schema(
        type = "string",
        oneOf = [MyFirstEnum::class, MySecondEnum::class]
    )
    val identifier: String,
    val someOtherField: String
) {
    fun transform() { ... } // this will use the string identifier to pick the correct enum type later
}
Which results in the following OpenApi Spec:
"MyDto": {
"required": [
"someOtherField",
"identifier"
],
"type": "object",
"properties": {
"identifier": {
"type": "object", // <--- this should be string
"oneOf": [{
"type": "string",
"enum": [
"FirstEnumValue1",
"FirstEnumValue2",
"FirstEnumValue3"
]
}, {
"type": "string",
"enum": [
"SecondEnumValue1",
"SecondEnumValue2",
"SecondEnumValue3"
]
}
]
},
"someOtherField": {
"type": "string"
}
}
}
As you can see, the enum constants are (I think) correctly inlined into the specification, but the type annotation on the field, which I set to string, is bypassed, resulting in an object type, which I suppose is incorrect in this case.
My questions are:
Is my current code and the resulting spec valid with the object declaration instead of string?
Is there a better way to embed the enum values into the spec?
Edited to add: I'm using Spring Boot v2.7.8 in combination with springdoc-openapi v1.6.13 to automatically generate the OpenApi Spec.
The annotation-based approach that I showed in my question does not seem to generate a valid OpenApi spec with springdoc-openapi 1.6.13. The type of the field identifier needs to be String, as Helen mentioned in the comments.
I was able to solve the issue by creating the Schema for this particular class manually, using a GlobalOpenApiCustomizer Bean:
@Bean
fun myDtoCustomizer(): GlobalOpenApiCustomizer {
    val firstEnum = StringSchema()
    firstEnum.description = "First Enum"
    MyFirstEnum.values().forEach { firstEnum.addEnumItem(it.name) }

    val secondEnum = StringSchema()
    secondEnum.description = "Second Enum"
    MySecondEnum.values().forEach { secondEnum.addEnumItem(it.name) }

    return GlobalOpenApiCustomizer {
        it.components.schemas[MyDto::class.simpleName] = ObjectSchema()
            .addProperty(
                MyDto::identifier.name,
                StringSchema().oneOf(
                    listOf(
                        firstEnum,
                        secondEnum
                    )
                )
            )
            .addProperty(MyDto::someOtherField.name, StringSchema())
    }
}
Which in turn produces the following Spec:
"MyDto": {
"type": "object",
"properties": {
"identifier": {
"type": "string",
"oneOf": [{
"type": "string",
"description": "First Enum",
"enum": [
"FirstEnumValue1",
"FirstEnumValue2",
"FirstEnumValue3"
]
}, {
"type": "string",
"description": "Second Enum",
"enum": [
"SecondEnumValue1",
"SecondEnumValue2",
"SecondEnumValue3"
]
}
]
},
"someOtherField": {
"type": "string"
}
}
}

How to properly store event.duration field in Elastic?

I have a Java service that writes logs in JSON format; they are then picked up by Filebeat and sent to Elastic. I would like to be able to set one of the ECS fields (event.duration) described here.
I set up a net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder encoder, and I set the event.duration field in MDC before calling the logging method. The output looks like this:
{
  "@timestamp": "2021-12-07T10:41:59.589+01:00",
  "message": "message",
  "event.duration": "5606000000",
  "service": {
    "name": "logging.application.name_IS_UNDEFINED",
    "type": "java"
  },
  "log": {
    "logger": "com.demo.Demo",
    "level": "WARN"
  },
  "process": {
    "thread": {
      "name": "main"
    }
  },
  "error": {}
}
However, in Kibana I see event.duration as JSON inside the flat field:
flat
{
  "event.duration": "10051000000"
}
How can I make it on the same level as other ECS fields like event.name?
You should create an ingest pipeline using the dot_expander processor in order to transform your dotted field into an object:
PUT _ingest/pipeline/de-dot
{
  "processors": [
    {
      "dot_expander": {
        "field": "event.duration"
      }
    }
  ]
}
Then you need to make sure that your indexing process references this pipeline, i.e. ...?pipeline=de-dot
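If you're shipping straight from Filebeat to Elasticsearch, one way to do that (a sketch; the hosts value is illustrative) is the pipeline option on the Elasticsearch output:
# filebeat.yml -- route events through the de-dot ingest pipeline created above
output.elasticsearch:
  hosts: ["localhost:9200"]   # illustrative; point this at your cluster
  pipeline: de-dot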

AWS Step Functions workflow using Fargate as worker - how do I send the output to the next step?

I need to make an API with Step Functions, but the problem is: how do I get the output of the first task as input for the next?
Here is what I have so far:
{
  "Comment": "Match",
  "StartAt": "Search",
  "States": {
    "Search": {
      "Type": "Task",
      "Resource": "arn:aws:states:::ecs:runTask.sync",
      "Parameters": {
        "Cluster": "Search-cluster",
        "TaskDefinition": "Search-task",
        "Overrides": {
          "ContainerOverrides": [
            {
              "Name": "search",
              "Command.$": "$.commands"
            }
          ]
        }
      },
      "Next": "Save"
    },
    "Save": {
      "Type": "Task",
      "Resource": "arn:aws:states:::ecs:runTask.sync",
      "Parameters": {
        "Cluster": "save-cluster",
        "TaskDefinition": "save-task",
        "Overrides": {
          "ContainerOverrides": [
            {
              "Name": "save",
              "Command.$": "$.commands"
            }
          ]
        }
      },
      "Next": "Send"
    },
    "Send": {
      "Type": "Task",
      "Resource": "arn:aws:states:::ecs:runTask.sync",
      "Parameters": {
        "Cluster": "send-cluster",
        "TaskDefinition": "send-task",
        "Overrides": {
          "ContainerOverrides": [
            {
              "Name": "send",
              "Command.$": "$.commands"
            }
          ]
        }
      },
      "End": true
    }
  }
}
I was facing the same issue and contacted AWS Support. I was told that it is not possible to directly return the result of a Fargate task like you can with Lambdas. One option is to store the result of your task in a separate DB like DynamoDB and write a Lambda to retrieve the value and update your input JSON with the output from the previous task.
Sidenote: In your ASL, you should look at using ResultPath. The default behaviour is to replace the input node with the output (result). Meaning, if your input JSON has values that you would like to use in subsequent states and you don't specify ResultPath, they'd be lost after the first state. Ref: https://docs.aws.amazon.com/step-functions/latest/dg/input-output-resultpath.html#input-output-resultpath-amend
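For example, here is a sketch of the Search state keeping its input intact while putting the task result under a separate key (the searchResult key name is illustrative; Overrides omitted for brevity):
"Search": {
  "Type": "Task",
  "Resource": "arn:aws:states:::ecs:runTask.sync",
  "Parameters": {
    "Cluster": "Search-cluster",
    "TaskDefinition": "Search-task"
  },
  "ResultPath": "$.searchResult",
  "Next": "Save"
}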
You don't have to manually manage this. A Lambda function's event parameter contains the previous function's return output.

swagger-codegen not generating classes for schemas defined in referenced json files

I am unable to get swagger-codegen to generate classes for JSON schemas defined in files separate from the main API definition file. The command I am using is:
$ java -jar swagger-codegen-cli-2.2.2.jar generate -i api.json -l java -o gen -v
This is what api.json looks like:
{
  "swagger": "2.0",
  "info": {
    "version": "1.0.0",
    "title": "Simple API",
    "description": "A simple API to learn how to write OpenAPI Specification"
  },
  "schemes": [
    "https"
  ],
  "host": "simple.api",
  "basePath": "/openapi101",
  "paths": {
    "/persons": {
      "get": {
        "summary": "Gets some persons",
        "description": "Returns a list containing all persons.",
        "responses": {
          "200": {
            "description": "A list of Person",
            "schema": {
              "$ref": "person.json#/definitions/person"
            }
          }
        }
      }
    }
  }
}
The person.json file referenced here lives alongside api.json (i.e. at the same level) and contains the following:
{"definitions": {
“person”: {
"type": "object",
"description": "",
"properties": {
"requestId": {
"type": "string",
"example": "1234"
}
}
}}}
I would expect the code generation to generate a class called Person.java, but it does not; in fact it does not generate any model classes. Also, the verbose logging logs the following right at the start, which makes me think it is interpreting the reference incorrectly and for some reason prepending #/definitions/ to the $ref.
[main] INFO io.swagger.parser.Swagger20Parser - reading from api.json
{
  "swagger" : "2.0",
  "info" : {
    "description" : "A simple API to learn how to write OpenAPI Specification",
    "version" : "1.0.0",
    "title" : "Simple API"
  },
  "host" : "simple.api",
  "basePath" : "/openapi101",
  "schemes" : [ "https" ],
  "paths" : {
    "/persons" : {
      "get" : {
        "summary" : "Gets some persons",
        "description" : "Returns a list containing all persons.",
        "parameters" : [ ],
        "responses" : {
          "200" : {
            "description" : "A list of Person",
            "schema" : {
              "$ref" : "#/definitions/person.json#/definitions/person"
            }
          }
        }
      }
    }
  }
}
Does anybody know what is going on here, and what is the correct way to reference a schema definition that lives in a local file?
Adding ./ to the $ref makes it work:
./person.json#/definitions/person
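In other words, the response schema in api.json becomes:
"schema": {
  "$ref": "./person.json#/definitions/person"
}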

Validate JSON to a Particular Format

I have a JSON document which may come from another application, and I need to check whether it is in a particular format. The JSON template I have is as follows:
{
  "Types": {
    "Type1": {
      "attribute1": "value1",
      "attribute2": "value2",
      "attribute3": "value3",
      "recordList": {
        "record1": [
          {
            "field": "value"
          },
          {
            "field": {
              "subrecord1": [
                {
                  "subfield1": "subvalue1",
                  "subfield2": "subvalue2"
                }
              ]
            }
          }
        ]
      },
      "date": "2010-08-21 03:05:03"
    }
  }
}
Is there any way to validate the JSON against a particular template or format?
You can use JSON Schema for that. JSON Schema lets you describe the format of the object graph you expect to receive, and then software implementing it lets you validate what you receive against your schema. There's an OSS Java implementation called json-schema-validator.
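As a rough sketch of what that can look like with the com.github.fge json-schema-validator library (the file names and the schema content are illustrative, not part of the question):
import com.fasterxml.jackson.databind.JsonNode;
import com.github.fge.jackson.JsonLoader;
import com.github.fge.jsonschema.core.report.ProcessingReport;
import com.github.fge.jsonschema.main.JsonSchema;
import com.github.fge.jsonschema.main.JsonSchemaFactory;

public class TemplateValidator {
    public static void main(String[] args) throws Exception {
        // Load the schema describing the expected format, and the incoming JSON to check
        JsonNode schemaNode = JsonLoader.fromPath("types-schema.json"); // illustrative file name
        JsonNode dataNode = JsonLoader.fromPath("incoming.json");       // illustrative file name

        JsonSchema schema = JsonSchemaFactory.byDefault().getJsonSchema(schemaNode);
        ProcessingReport report = schema.validate(dataNode);

        System.out.println(report.isSuccess() ? "valid" : "invalid:\n" + report);
    }
}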
