I have a Java service that writes logs in JSON format; they are then picked up by Filebeat and sent to Elastic. I would like to be able to set one of the ECS fields (event.duration) described here.
I set up a net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder encoder, and I set the event.duration field in MDC before calling the logging method.
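A minimal sketch of that call site, using org.slf4j.MDC (log and durationNanos stand for my logger and the measured duration):
// Put the ECS field into the MDC, log, then clean up again.
MDC.put("event.duration", String.valueOf(durationNanos));
log.warn("message");
MDC.remove("event.duration");
The output looks like this: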
{
"#timestamp": "2021-12-07T10:41:59.589+01:00",
"message": "message",
"event.duration": "5606000000",
"service": {
"name": "logging.application.name_IS_UNDEFINED",
"type": "java"
},
"log": {
"logger": "com.demo.Demo",
"level": "WARN"
},
"process": {
"thread": {
"name": "main"
}
},
"error": {}
}
However, in Kibana I see event.duration as JSON nested inside the flat field:
{
"event.duration": "10051000000"
}
How can I make it appear at the same level as other ECS fields like event.name?
You should create an ingest pipeline using the dot_expander processor in order to transform your dotted field into an object:
PUT _ingest/pipeline/de-dot
{
"processors" : [
{
"dot_expander": {
"field": "event.duration"
}
}
]
}
Then you need to make sure that your indexing process references this pipeline, i.e. ...?pipeline=de-dot on the index request. Since your documents are shipped by Filebeat, the pipeline setting of Filebeat's Elasticsearch output achieves the same thing.
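For comparison, if you were indexing from Java directly rather than through Filebeat, a sketch with the Elasticsearch high-level REST client might look like this (client is a RestHighLevelClient; jsonLogLine and the index name my-logs are placeholders):
IndexRequest request = new IndexRequest("my-logs")
        .source(jsonLogLine, XContentType.JSON)
        .setPipeline("de-dot"); // run the ingest pipeline defined above
client.index(request, RequestOptions.DEFAULT);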
I'm just getting started with CDK using Java and I'd like to find out how to extract context info from cdk.context.json.
Essentially I'm looking to hold parameters externally to the Stack, per environment (dev, test, etc.).
This will be incorporated into a pipeline (probably GitLab), so cdk.context.json will be version controlled.
For instance, a cut of my context is as follows:
{
"vpc-provider:account=999999999:filter.isDefault=true:filter.vpc-id=vpc-w4w4w4w4w4:region=eu-west-2:returnAsymmetricSubnets=true": {
"vpcId": "vpc-w4w4w4w4w4",
.......
"environments": {
"dev": {
"documentDb": [
{
"port": "27017"
}
],
"subnetGroups": [
{
"name": "Private",
"type": "Private",
"subnets": [
{
"cidr": "10.0.1.0/24/24",
"availabilityZone": "eu-west-2a"
},
{
"cidr": "10.0.2.0/24",
"availabilityZone": "eu-west-2b"
}
]
}
]
},
"prod": {
"documentDb": [
{
"port": "27018"
}
],
"subnetGroups": [
{
"name": "Private",
"type": "Private",
"subnets": [
{
"cidr": "20.0.1.0/24/24",
"availabilityZone": "eu-west-2a"
},
{
"cidr": "20.0.2.0/24",
"availabilityZone": "eu-west-2b"
}
]
}
]
}
}
}
I'd like to extract the values for dev --> documentDb --> port, for instance, in the most elegant CDK way. If in my Stack I use:
this.getNode().tryGetContext("environments")
I am returned the whole JSON block:
{dev={documentDb=[{port=27017}], subnetGroups=[{name=Private, type=Private, subnets=[{cidr=10.0.1.0/24, availabilityZone=eu-west-2a}, {cidr=10.0.2.0/24, availabilityZone=eu-west-2b}]}]}, prod={documentDb=[{port=27018}], subnetGroups=..............
& not sure how to progress up the tree. If I synch passing in the config;
cdk synth -c config=environments > target/DocumentDbAppStack.yaml
& in my Stack;
this.getNode().tryGetContext("config")
I get "environments".
I can parse the LinkedHashMap using a JSON parser, but that's obviously the wrong approach. I have looked at loads of examples, AWS documentation, etc. but can't seem to find the answer. There seems to be a wealth of info using TypeScript (I think that was the first language used for CDK), but I've never used it.
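For illustration, the cast-heavy navigation I'm hoping to avoid looks something like this (the keys match my context above; the casts are unchecked):
Map<String, Object> environments =
        (Map<String, Object>) this.getNode().tryGetContext("environments");
Map<String, Object> dev = (Map<String, Object>) environments.get("dev");
List<Map<String, Object>> documentDb =
        (List<Map<String, Object>>) dev.get("documentDb");
String port = (String) documentDb.get(0).get("port"); // "27017"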
Thanks in advance.
I am using a RESTful web service that gives responses in JSON:API format. There is a relationships attribute that has id and type params. Based on the id reference, it displays values in the included attribute. The id is created after the two requests that process as a final output; until then I save my data in the database as one single object. Now when I fetch the data from the database using the REST web service, the output shows all the attributes except included. I believe this is because it isn't able to find the reference, so it isn't displayed. But in the database all the values are present perfectly. I am not sure whether JSON:API supports multiple ids for a relationship attribute or not.
Example:
Request Body:
{
"data": {
"type": "orders",
"attributes": {
"name": "new order",
"updateDate": "",
"register":"yes",
"items":[
{
"description": "newly added item",
"type": "new item",
"amount": [
{
"deliveryfee": "123",
"mrp": "456"
}
]
}
]
}
}
}
Expected Response Body:
{
"data": {
"type": "orders",
"id": "1",
"attributes": {
"name": "new order",
"updateDate": "",
},
"relationships": {
"items": {
"data": [
{
"type": "items",
"id": null
}
]
}
}
},
"included": [
{
"type": "items",
"id": null,
"attributes": {
"type": "new item",
"description": "newly added item",
"amount": [
{
"deliveryfee": "123",
"mrp": "456"
}
]
}
}
]
}
Actual Response Body:
{
"data": {
"type": "orders",
"id": "1",
"attributes": {
"name": "new order",
"updateDate": "",
},
"relationships": {
"items": {
"data": [
{
"type": "items",
"id": null
}
]
}
}
}
}
I'm not fully sure I understand your question correctly, but let me try to answer.
The combination of a type and an id is used in the JSON:API specification to identify a resource:
Within a given API, each resource object’s type and id pair MUST identify a single, unique resource.
null, as used in your example, is not a valid value for id:
The values of the id and type members MUST be strings.
The API may combine multiple identifiers used in its internal database to construct the value used for id in a JSON:API document. From the specification's point of view this is a valid id as long as it's guaranteed to be unique for the given type, for example:
{
"type": "posts":
"id": "post_id:5,locale:en"
}
The API may deserialize that id into two different identifiers: a post with the id 5 and a locale with the id "en". That would be an internal implementation detail of the API. Consumers should not care whether a meaning is encoded within the id value.
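A minimal sketch of such encoding and decoding on the API side (the scheme and helper names are illustrative, not part of the specification):
// Compose a unique, opaque id from two internal identifiers.
static String toCompositeId(long postId, String locale) {
    return "post_id:" + postId + ",locale:" + locale;
}

// Split it back into its parts, e.g. "post_id:5,locale:en" -> ["5", "en"].
static String[] fromCompositeId(String id) {
    String[] parts = id.split(",", 2);
    return new String[] {
            parts[0].substring("post_id:".length()),
            parts[1].substring("locale:".length())
    };
}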
The request and response bodies given in your question do not fit together. Both contain a field items, but in the request body the items field is an attribute, while it's a relationship in the response.
It seems as if you are trying to create multiple resources at once. This is not supported by JSON:API specification v1. It is supported in the third release candidate for v1.1 of the specification through the official extension Atomic Operations.
I need to make an API with Step Functions, but the problem is: how do I get the output of the first task as input for the next?
Here is what I have so far:
{
"Comment": "Match",
"StartAt": "Search",
"States": {
"Search": {
"Type": "Task",
"Resource": "arn:aws:states:::ecs:runTask.sync",
"Parameters": {
"Cluster": "Search-cluster",
"TaskDefinition": "Search-task",
"Overrides": {
"ContainerOverrides": [
{
"Name": "search",
"Command.$": "$.commands"
}
]
}
},
"Next": "Save"
},
"Save": {
"Type": "Task",
"Resource": "arn:aws:states:::ecs:runTask.sync",
"Parameters": {
"Cluster": "save-cluster",
"TaskDefinition": "save-task",
"Overrides": {
"ContainerOverrides": [
{
"Name": "save",
"Command.$": "$.commands"
}
]
}
},
"Next": "Send"
},
"Send": {
"Type": "Task",
"Resource": "arn:aws:states:::ecs:runTask.sync",
"Parameters": {
"Cluster": "send-cluster",
"TaskDefinition": "send-task",
"Overrides": {
"ContainerOverrides": [
{
"Name": "send",
"Command.$": "$.commands"
}
]
}
},
"End": true
}
}
}
I was facing the same issue and contacted AWS Support. I was told that it is not possible to directly return the result of a Fargate task the way you can with Lambdas. One of the options is to store the result of your task in a separate DB like DynamoDB and write a Lambda to retrieve the value and update your input JSON with the output from the previous task.
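A rough sketch of such a Lambda, using the AWS SDK for Java v1 (the table name task-results and the executionId/commands names are made up for illustration):
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.model.AttributeValue;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import java.util.HashMap;
import java.util.Map;

// Sits between two task states: loads the previous task's result
// (written to DynamoDB by the task) and attaches it to the state input.
public class FetchTaskResult implements RequestHandler<Map<String, Object>, Map<String, Object>> {
    private final AmazonDynamoDB dynamo = AmazonDynamoDBClientBuilder.defaultClient();

    @Override
    public Map<String, Object> handleRequest(Map<String, Object> input, Context context) {
        Map<String, AttributeValue> key = new HashMap<>();
        key.put("executionId", new AttributeValue((String) input.get("executionId")));
        Map<String, AttributeValue> item =
                dynamo.getItem("task-results", key).getItem();
        input.put("commands", item.get("commands").getS()); // expose result to the next state
        return input;
    }
}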
Sidenote: in your ASL, you should look at using ResultPath. The default behaviour is to replace the input node with the output (result). This means that if your input JSON has values you would like to use in subsequent states and you don't specify ResultPath, they'd be lost after the first state. For example, "ResultPath": "$.searchResult" on the Search state would keep the original input and attach the task's result under searchResult. Ref: https://docs.aws.amazon.com/step-functions/latest/dg/input-output-resultpath.html#input-output-resultpath-amend
You don't have to manually manage this. A Lambda function's event parameter contains the previous function's return output.
I get the below JSON string as the request body for my REST API. I don't like this JSON structure, but I don't have any control over it: somebody else posts this message, and I have to create a REST API (POST method) and consume it. So I have to deserialize it into Java objects in my REST controller. It contains a list of lists of objects. I tried several ways with fasterxml (Jackson), but I was not successful.
{
"messages": [
[
{
"message": "message1_a",
"info": {
"timestamp": "2521013204"
}
},
{
"message": "message1_b",
"info": [
{
"message": "message1_c",
"info": {
"id": "asfa-14fs-df"
}
},
{
"message": "message1_d",
"info": {
"reason": "msg_reason",
}
}
]
}
]
]
}
Can anybody help me with how my Java POJOs should look?
It seems like an array of messages.
In Java you can use Spring to transform the JSON into an object.
String url = "http://your/json/url";
ResponseEntity<Message[]> responseEntity = new RestTemplate().getForEntity(url, Message[].class);
Be sure that your entity has all the attributes of the JSON.
The doc: https://docs.spring.io/spring-framework/docs/current/javadoc-api/org/springframework/web/client/RestTemplate.html
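As for the POJOs themselves, a minimal Jackson-style sketch could look like this (class names are made up; since info is an object in some entries and an array in others, a JsonNode field can absorb both shapes):
import com.fasterxml.jackson.databind.JsonNode;
import java.util.List;

public class MessagesWrapper {
    private List<List<Message>> messages; // "messages" is an array of arrays
    // getters and setters omitted

    public static class Message {
        private String message;
        private JsonNode info; // sometimes an object, sometimes an array
        // getters and setters omitted
    }
}
A controller method could then bind the payload with @RequestBody MessagesWrapper body.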
I have a JSON document which may come from another application, and I need to check whether it is in a particular format. The JSON template I have is as follows:
{
"Types": {
"Type1": {
"attribute1": "value1",
"attribute2": "value2",
"attribute3": "value3",
"recordList": {
"record1": [
{
"field": "value"
},
{
"field": {
"subrecord1": [
{
"subfield1": "subvalue1",
"subfield2": "subvalue2"
}
]
}
}
]
},
"date": "2010-08-21 03:05:03"
}
}
}
Is there any way to validate the JSON based on a particular template or format?
You can use JSON Schema for that. JSON Schema lets you describe the format of the object graph you expect to receive, and then software implementing it lets you validate what you receive against your schema. There's an OSS Java implementation called json-schema-validator.
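For instance, a sketch using that library's API (the schema fragment is illustrative and only checks that Types is a required object):
import com.fasterxml.jackson.databind.JsonNode;
import com.github.fge.jackson.JsonLoader;
import com.github.fge.jsonschema.core.report.ProcessingReport;
import com.github.fge.jsonschema.main.JsonSchema;
import com.github.fge.jsonschema.main.JsonSchemaFactory;

public class SchemaCheck {
    public static void main(String[] args) throws Exception {
        // Describe the expected format as a JSON Schema.
        JsonNode schemaNode = JsonLoader.fromString(
                "{\"type\":\"object\",\"required\":[\"Types\"],"
                + "\"properties\":{\"Types\":{\"type\":\"object\"}}}");
        // The incoming document to check.
        JsonNode data = JsonLoader.fromString("{\"Types\":{}}");

        JsonSchema schema = JsonSchemaFactory.byDefault().getJsonSchema(schemaNode);
        ProcessingReport report = schema.validate(data);
        System.out.println(report.isSuccess() ? "valid" : report);
    }
}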