So far, I've been able to create a KStream from a topic.
KStream<String, String> testqa2 = builder.stream("testqa2", Consumed.with(Serdes.String(), Serdes.String()))
.mapValues(value -> {
System.out.println(value);
return value;
});
It doesn't print anything, so on debugging I realized I am just creating my KStream; there is no data in it.
I am having a little trouble creating a serializer/deserializer for my Worker class.
package com.copart.mwa.Avro;
public class Worker {
private static String WorkerActivityName;
private static String WorkerSid;
private static String WorkerPreviousActivityName;
private static String WorkerPreviousActivitySid;
public String getWorkerActivityName() {
return WorkerActivityName;
}
public void setWorkerActivityName(String workerActivityName) {
WorkerActivityName = workerActivityName;
}
public static String getWorkerSid() {
return WorkerSid;
}
public void setWorkerSid(String workerSid) {
WorkerSid = workerSid;
}
public String getWorkerPreviousActivityName() {
return WorkerPreviousActivityName;
}
public void setWorkerPreviousActivityName(String workerPreviousActivityName) {
WorkerPreviousActivityName = workerPreviousActivityName;
}
public String getWorkerPreviousActivitySid() {
return WorkerPreviousActivitySid;
}
public void setWorkerPreviousActivitySid(String workerPreviousActivitySid) {
WorkerPreviousActivitySid = workerPreviousActivitySid;
}
@Override
public String toString() {
return "Worker(" + WorkerSid + ", " + WorkerActivityName + ")";
} }
And the message from the producer to the consumer is JSON:
{
"WorkerActivityName": "Available",
"EventType": "worker.activity.update",
"ResourceType": "worker",
"WorkerTimeInPreviousActivityMs": "237",
"Timestamp": "1626114642",
"WorkerActivitySid": "WAc9030ef021bc1786d3ae11544f4d9883",
"WorkerPreviousActivitySid": "WAf4feb231e97c1878fecc58b26fdb95f3",
"WorkerTimeInPreviousActivity": "0",
"AccountSid": "AC8c5cd8c9ba538090da104b26d68a12ec",
"WorkerName": "Dorothy.Finegan#Copart.Com",
"Sid": "EV284c8a8bc27480e40865263f0b42e5cf",
"TimestampMs": "1626114642204",
"P": "WKe638256376188fab2a98cccb3c803d67",
"WorkspaceSid": "WS38b10d521442ecb74fcc263d5a4d726e",
"WorkspaceName": "Copart-MiPhone",
"WorkerPreviousActivityName": "Unavailable(RNA)",
"EventDescription": "Worker Dorothy.Finegan#Copart.Com updated to Available Activity",
"ResourceSid": "WKe638256376188fab2a98cccb3c803d67",
"WorkerAttributes": "{\"miphone_dept\":[\"USA_YRD_OPS\"],\"languages\":[\"en\"],\"home_region\":\"GL\",\"roles\":[\"supervisor\"],\"miphone_yards\":[\"81\"],\"miphone_enabled\":true,\"miphone_states\":[\"IL\"],\"home_state\":\"IL\",\"skills\":[\"YD_SELLER\",\"YD_TITLE\"],\"home_division\":\"Northern\",\"miphone_divisions\":[\"Northern\"],\"miphone_functions\":[\"outbound_only\"],\"full_name\":\"Dorothy Finegan\",\"miphone_regions\":[\"GL\"],\"home_country\":\"USA\",\"copart_user_id\":\"USA3204\",\"home_yard\":\"81\",\"home_dept\":\"USA_YRD_OPS\",\"email\":\"dorothy.finegan#copart.com\",\"home_dept_category\":\"OPS\",\"contact_uri\":\"client:Dorothy_2EFinegan_40Copart_2ECom\",\"queue_activity\":\"Available\",\"teams\":[],\"remote_employee\":false,\"miphone_call_center_units\":[\"USA_YRD_OPS|81\"],\"miphone_call_center_teams\":[]}"
}
I want to implement a custom deserializer where
"WorkspaceSid": "WS38b10d521442ecb74fcc263d5a4d726e" is the key and the remaining attributes act as the value of the key-value pair.
Thanks,
Anmol
It doesn't print anything
If there is data in the testqa2 topic, and you have auto.offset.reset=earliest, then it should.
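For example, a minimal sketch of the Streams configuration (the application id and bootstrap servers are placeholders, and builder is the StreamsBuilder from your snippet):
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsConfig;
Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "testqa2-app");        // placeholder id
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");  // placeholder broker
// without this, a brand-new consumer group starts at "latest" and sees no existing records
props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
KafkaStreams streams = new KafkaStreams(builder.build(), props);
streams.start();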
having a little trouble creating serializer/deserializer for worker class
Kafka has built-in JSON serializers that you can build a Serde for. You don't need to make your own.
"WorkspaceSid", is the key
Use selectKey (or map) if you want to modify the key, not mapValues.
Serializer<JsonNode> jsonNodeSerializer = new JsonSerializer();
Deserializer<JsonNode> jsonNodeDeserializer = new JsonDeserializer();
final Serde<JsonNode> jsonNodeSerde = Serdes.serdeFrom(jsonNodeSerializer,jsonNodeDeserializer);
KStream<String, JsonNode> testqa2 = builder.stream("testqa2", Consumed.with(Serdes.String(), jsonNodeSerde))
        .selectKey((k, json) -> json.get("WorkspaceSid").asText());
testqa2.print(Printed.toSysOut());
Alternatively, fix your producer code to get the Sid from the value, and set the key there...
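For instance, if the producer uses the plain Java client, the key could be set there along these lines (a sketch only; the topic name and the Jackson parsing are assumptions based on the payload above):
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;
public final class WorkerEventSender {
    private static final ObjectMapper MAPPER = new ObjectMapper();
    // extracts WorkspaceSid from the JSON payload and uses it as the record key
    public static void send(Producer<String, String> producer, String rawJson) throws Exception {
        JsonNode event = MAPPER.readTree(rawJson);
        String key = event.get("WorkspaceSid").asText();
        producer.send(new ProducerRecord<>("testqa2", key, rawJson));
    }
}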
If you want to use Avro, you wouldn't write a Worker class - you would generate it from an Avro schema.
I am getting JSON from DynamoDB that looks like this -
{
"id": "1234",
"payment": {
"payment_id": "2345",
"user_defined": {
"some_id": "3456"
}
}
}
My aim is to get the user_defined field into a Java HashMap<String, Object>, as the user_defined field can contain arbitrary user-defined fields, which are unknown until the data arrives. Everything works fine except my DynamoDBMapper cannot convert the user_defined field to a Java HashMap. It is throwing this error -
Exception occured Response[payment]; could not unconvert attribute
This is what the classes look like -
@DynamoDBTable(tableName = "PaymentDetails")
public class Response {
private String id;
public Response() {
}
private Payment payment = new Payment();
@DynamoDBHashKey(attributeName="id")
public String getId() { return id; }
public void setId(String id) { this.id = id; }
public Payment getPayment() {
return payment;
}
public void setPayment(Payment payment) {
this.payment = payment;
}
}
The payment field mapper -
@DynamoDBDocument
public class Payment {
private String payment_id;
private HashMap<String, Object> user_defined;
public Payment() {}
public String getPayment_id() {
return payment_id;
}
public void setPayment_id(String payment_id) {
this.payment_id = payment_id;
}
@DynamoDBTypeConverted(converter = HashMapMarshaller.class)
public HashMap<String, Object> getUser_defined() {
return user_defined;
}
public void setUser_defined(HashMap<String, Object> user_defined) {
this.user_defined = user_defined;
}
}
The HashMapMarshaller (just to check whether the Gson marshalling was the problem, I simply defined a HashMap, put in a value and returned it, but it still doesn't seem to work) -
public class HashMapMarshaller implements DynamoDBTypeConverter<String, HashMap<String, Object>> {
@Override
public String convert(HashMap<String, Object> hashMap) {
return new Gson().toJson(hashMap);
}
@Override
public HashMap<String, Object> unconvert(String jsonString) {
System.out.println("jsonString received for unconverting is " + jsonString);
System.out.println("Unconverting attribute");
HashMap<String, Object> hashMap = new HashMap<>();
hashMap.put("key", "value");
return hashMap;
//return new Gson().fromJson(jsonString, new TypeToken<HashMap<String, Object>>(){}.getType());
}
}
The marshaller approach has not worked for me so far. It is also not printing any of the printlns I've put in there. I've also tried using @DynamoDBTyped(DynamoDBMapperFieldModel.DynamoDBAttributeType.M) and using Map instead of HashMap above my user_defined getter, to no avail.
I want to find out how to convert the user_defined field to Java HashMap or Map. Any help is appreciated. Thank you!
Change Map<String, Object> to Map<String, String>. It should work without any custom converters. Otherwise, be specific about the Map's value type. For example, Map<String, SimplePojo> should work. Don't forget to annotate the SimplePojo class with @DynamoDBDocument.
With Object as the type of the Map's value, DynamoDB will not be able to decide which object it has to create while reading the entry from DynamoDB. It needs to know a specific type such as String, Integer, SimplePojo, etc.
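For example, a minimal sketch of the typed approach, reusing the Payment class from the question and assuming the user_defined values are all strings:
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBDocument;
import java.util.Map;
@DynamoDBDocument
public class Payment {
    private String payment_id;
    // concrete value type - the mapper knows how to unconvert String values, so no custom converter is needed
    private Map<String, String> user_defined;
    public String getPayment_id() { return payment_id; }
    public void setPayment_id(String payment_id) { this.payment_id = payment_id; }
    public Map<String, String> getUser_defined() { return user_defined; }
    public void setUser_defined(Map<String, String> user_defined) { this.user_defined = user_defined; }
}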
According to the documentation described here: http://docs.aws.amazon.com/lambda/latest/dg/java-programming-model-req-resp.html one can create their own POJO to serialise input and output for a Java AWS Lambda.
However, it appears that it doesn't work properly for input requests where fields are capitalised. For instance, the input format for a custom resource lambda looks like:
{"RequestType":"Create",
"ServiceToken":"arn:aws:lambda:....",
"ResponseURL":"https://cloudformation-custom-resource-response-e...",
...}
This can be easily tested via this simple MCVE code:
package test;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
public class TestLambda implements RequestHandler<TestLambda.TestEvent, String> {
private static final Logger logger = LogManager.getLogger(TestLambda.class);
@Override
public String handleRequest(TestEvent event, Context context) {
logger.debug(event.toString());
return null;
}
public static final class TestEvent {
private String key1;
private String Key2;
private String key3;
public String getKey1() {
return key1;
}
public void setKey1(String key1) {
this.key1 = key1;
}
public String getKey2() {
return Key2;
}
public void setKey2(String key2) {
Key2 = key2;
}
public String getKey3() {
return key3;
}
public void setKey3(String key3) {
this.key3 = key3;
}
@Override
public String toString() {
return "TestEvent{" +
"key1='" + key1 + '\'' +
", Key2='" + Key2 + '\'' +
", key3='" + key3 + '\'' +
'}';
}
}
}
Then create a test in the AWS Console for this lambda and pass the following JSON as the request:
{
"key3": "value3",
"Key2": "value2",
"Key1": "value1"
}
The result in logs will be:
2017-11-06 09:30:13 16849696-c2d5-11e7-80c3-150a37863c42 DEBUG TestLambda:15 - TestEvent{key1='null', Key2='null', key3='value3'}
Is there any way to deserialise this input without dealing with a raw byte stream as they suggest in that topic?
You shouldn't rely on any other features of serialization frameworks
such as annotations. If you need to customize the serialization
behavior, you can use the raw byte stream to use your own
serialization.
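For reference, the raw-stream approach the documentation points to would look roughly like this (a sketch assuming Jackson is on the classpath; the case-insensitive setting is one way to tolerate the capitalised field names):
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestStreamHandler;
import com.fasterxml.jackson.databind.MapperFeature;
import com.fasterxml.jackson.databind.ObjectMapper;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
public class TestStreamLambda implements RequestStreamHandler {
    private static final ObjectMapper MAPPER = new ObjectMapper()
            .configure(MapperFeature.ACCEPT_CASE_INSENSITIVE_PROPERTIES, true);
    @Override
    public void handleRequest(InputStream input, OutputStream output, Context context) throws IOException {
        // the built-in POJO mapping is bypassed entirely; deserialize however you like
        TestLambda.TestEvent event = MAPPER.readValue(input, TestLambda.TestEvent.class);
        MAPPER.writeValue(output, event.toString());
    }
}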
This seems to me to be a great limitation of Java AWS Lambdas if we cannot freely create a POJO for any type of event.
In the POJO, make the fields public and exactly the same case as the JSON fields. This means you should have upper camel case fields, e.g.:
public class TestEvent {
public String Key1;
public String Key2;
public String key3;
}
I cannot explain why this works, but I just tried this today based on a work colleague's suggestion and it works. I know it doesn't look elegant. But at least it's fewer lines of code than deserializing streams.
I've been stuck on the same issue for a long time, and finally found the answer:
There is a blueprint for Lambda :
https://github.com/awslabs/aws-apigateway-lambda-authorizer-blueprints/blob/0f7f3d933741a48c08c85feff267793f60b61a60/blueprints/java/src/io/AuthPolicy.java#L68
So basically, what you need to do is override the getter for your JSON and have it return a Map<String, Object>. The map can then contain capitalized keys, which resolves the problem:
public Map<String, Object> getPolicyDocument() {
Map<String, Object> serializablePolicy = new HashMap<>();
serializablePolicy.put(VERSION, policyDocumentObject.Version);
Statement[] statements = policyDocumentObject.getStatement();
Map<String, Object>[] serializableStatementArray = new Map[statements.length];
for (int i = 0; i < statements.length; i++) {
Map<String, Object> serializableStatement = new HashMap<>();
AuthPolicy.Statement statement = statements[i];
serializableStatement.put(EFFECT, statement.Effect);
serializableStatement.put(ACTION, statement.Action);
serializableStatement.put(RESOURCE, statement.getResource());
serializableStatement.put(CONDITION, statement.getCondition());
serializableStatementArray[i] = serializableStatement;
}
serializablePolicy.put(STATEMENT, serializableStatementArray);
return serializablePolicy;
}
The property on your Java bean is still key2, all lower case, because the property name visible to the serialization framework is derived from the getter, not the private field name. So it still goes into the input event looking for key2 not Key2.
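You can see the derived names by serializing the bean with any mapper that follows the JavaBeans convention (Jackson is used here purely to illustrate the naming rule):
import com.fasterxml.jackson.databind.ObjectMapper;
public class PropertyNameDemo {
    public static void main(String[] args) throws Exception {
        TestLambda.TestEvent event = new TestLambda.TestEvent();
        event.setKey1("value1");
        event.setKey2("value2");
        event.setKey3("value3");
        // the serialized names are "key1", "key2", "key3" - all lower case,
        // because getKey2() yields the bean property "key2" regardless of the field name
        System.out.println(new ObjectMapper().writeValueAsString(event));
    }
}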
I'm using Jackson in a Spring MVC application. I want to use a String value as the key name when converting a Java POJO to JSON:
"record": {
"<Dynamic record name String>": {
"value": {
....
}
}
}
So the dynamic record name String could be "abcd", "xyz" or any other string value. How can I define my "record" POJO to have a key like that?
Unfortunately, you cannot have dynamic fields in Java classes (unlike some other languages), so you have two choices:
Using Maps
Using JSON objects (i.e. JsonNode in case of Jackson)
Suppose you have data like this:
{
"record": {
"jon-skeet": {
"name": "Jon Skeet",
"rep": 982706
},
"darin-dimitrov": {
"name": "Darin Dimitrov",
"rep": 762173
},
"novice-user": {
"name": "Novice User",
"rep": 766
}
}
}
Create two classes to capture it, one for user and another for the object itself:
User.java:
public class User {
private String name;
private Long rep;
public String getName() { return name; }
public void setName(String name) { this.name = name; }
public Long getRep() { return rep; }
public void setRep(Long rep) { this.rep = rep; }
@Override
public String toString() {
return "User{" +
"name='" + name + '\'' +
", rep=" + rep +
'}';
}
}
Data.java:
public class Data {
private Map<String, User> record;
public Map<String, User> getRecord() { return record; }
public void setRecord(Map<String, User> record) { this.record = record; }
@Override
public String toString() {
return "Data{" +
"record=" + record +
'}';
}
}
Now, parse the JSON (I assume there is a data.json file in the root of your classpath):
public class App {
public static void main(String[] args) throws Exception {
final ObjectMapper objectMapper = new ObjectMapper();
System.out.println(objectMapper.readValue(App.class.getResourceAsStream("/data.json"), Data.class));
System.out.println(objectMapper.readTree(App.class.getResourceAsStream("/data.json")));
}
}
This will output:
Data{record={jon-skeet=User{name='Jon Skeet', rep=982706}, darin-dimitrov=User{name='Darin Dimitrov', rep=762173}, novice-user=User{name='Novice User', rep=766}}}
{"record":{"jon-skeet":{"name":"Jon Skeet","rep":982706},"darin-dimitrov":{"name":"Darin Dimitrov","rep":762173},"novice-user":{"name":"Novice User","rep":766}}}
In the case of a Map, you can use static classes, like User here, or go completely dynamic by using maps of maps (Map<String, Map<String, ...>>). However, if you find yourself using too many maps, consider switching to JsonNode. It is basically the same as a Map and was "invented" specifically for highly dynamic data, though you may have a harder time working with it later; see the sketch below.
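For instance, a rough sketch of navigating the same data with JsonNode instead of POJOs (same data.json assumption as above):
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import java.util.Iterator;
import java.util.Map;
public class JsonNodeApp {
    public static void main(String[] args) throws Exception {
        JsonNode root = new ObjectMapper().readTree(JsonNodeApp.class.getResourceAsStream("/data.json"));
        Iterator<Map.Entry<String, JsonNode>> records = root.get("record").fields();
        while (records.hasNext()) {
            Map.Entry<String, JsonNode> entry = records.next();
            // the dynamic key ("jon-skeet", ...) and the nested object it points to
            System.out.println(entry.getKey() + " -> " + entry.getValue().get("name").asText()
                    + " (" + entry.getValue().get("rep").asLong() + ")");
        }
    }
}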
Take a look at the complete example I've prepared for you here.
This is in Kotlin, but I have found a solution to the same problem using Jackson.
You don't need the root node "record", so you will need to get rid of it or start one node deeper (you're on your own there). To turn the records nested under their id into a list of records with the id inside each object:
val node = ObjectMapper().reader().readTree(json)
val recordList = mutableListOf<Record>()
node.fields().iterator().forEach {
    val record = Record(
        it.key,
        it.value.get("name").asText(),
        it.value.get("rep").asText()
    )
    recordList.add(record)
}
node.fields() returns an iterator over the children (key/value entries).
Iterating through it, you get the id from the key, and the nested data is in the value (which is another node).
Each child of fields() is key : value, where
key = record id
value = nested data (node)
With this solution, you don't need multiple classes to deserialize a list of objects.
I have my data in this format:
{
"0" : {"a": {}}, {"b": {}}, ...
"1" : {"c": {}}, {"d": {}}, ...
.
.
.
}
I am able to capture it into a map using the dynamic capture feature of Jackson via the @JsonAnySetter annotation.
public class Destination{
Map<String, Object> destination = new LinkedHashMap<>();
@JsonAnySetter
void setDestination(String key, Object value) {
destination.put(key, value);
}
}
I am trying to parse a JSON string into a Java object, but I am not sure about the object hierarchy.
Below is the JSON string:
{
"TABLE_Detail":{
"1":{
"TABLE":"table1",
"RUN_DATE":"20170313",
"SEQ_NUM":"123",
"START_TIME":"20170313133144",
"END_TIME":"20170313133655"
},
"2":{
"TABLE":"table2",
"RUN_DATE":"20170313",
"SEQ_NUM":"123",
"START_TIME":"20170313133142",
"END_TIME":"20170313133723"
}
}
}
Here the numbers 1, 2 are dynamic and can go up to any number. I tried to create an outer object with a Map whose key type is String and whose value type is the TableData object (the map variable is named TABLE_Detail), but the TableData object is always null. The TableData object has all the variables.
Please help me with how to convert this JSON string to an object.
Change 1 to table1 and 2 to table2:
public class TableDetails {
private TableData table1;
private TableData table2;
public TableDetails(){
}
// getter and setter
}
And if you modify the JSON format as "Koen Van Looveren" mentioned:
public class TableDetails {
List<TableData> tables;
public TableDetails() {
}
// getter and setter
}
The table class, TableData.java:
public class TableData {
private String table;
private String run_date;
private String seq_num;
private String start_time;
private String end_time;
public TableData() {
}
// getter and setter
}
You have two choices for such a painful JSON structure when using Gson.
Use Gson to parse the JSON as a Map, and write some classes that access the returned Map. This mode works fine for accessing data only!
//usage
TableDetails details=new TableDetails(gson.fromJson(json, Map.class));
//classes
class TableDetails {
private Map<String, Map> context;
public TableDetails(Map root) {
this.context = (Map<String, Map>) root.get("TABLE_Detail");
}
public int size() {
return context.size();
}
public Table get(String key) {
return new Table(context.get(key));
}
}
class Table {
private Map context;
public Table(Map context) {
this.context = context;
}
public String getName() {
return get("TABLE");
}
private <T> T get(String name) {
return (T) context.get(name);
}
...
}
Write your own Gson TypeAdapter, but this way may be more complex. If you are interested in writing a custom TypeAdapter, there is a demo that I wrote for extracting the JSON root: gson-enclosing-plugin.
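If a full streaming TypeAdapter feels like overkill, a custom JsonDeserializer registered with GsonBuilder is a slightly simpler variant of the same idea (a sketch; TableListDeserializer is my own name, and TableData is the class from the other answers):
import com.google.gson.*;
import com.google.gson.reflect.TypeToken;
import java.lang.reflect.Type;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
// unwraps the "TABLE_Detail" root and collects the numbered entries into a list
class TableListDeserializer implements JsonDeserializer<List<TableData>> {
    @Override
    public List<TableData> deserialize(JsonElement json, Type typeOfT, JsonDeserializationContext ctx) {
        JsonObject root = json.getAsJsonObject().getAsJsonObject("TABLE_Detail");
        List<TableData> tables = new ArrayList<>();
        for (Map.Entry<String, JsonElement> entry : root.entrySet()) {
            // the numeric keys ("1", "2", ...) are dropped; only the nested objects are kept
            tables.add(ctx.deserialize(entry.getValue(), TableData.class));
        }
        return tables;
    }
}
Register and use it like this:
Type listType = new TypeToken<List<TableData>>() {}.getType();
Gson gson = new GsonBuilder().registerTypeAdapter(listType, new TableListDeserializer()).create();
List<TableData> tables = gson.fromJson(json, listType);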
You can try deserializing it into a Map<String, Map<String, TableData>>. The reason why Map<String, TableData> doesn't work is that the pseudo-array is wrapped in another object.
The following example converts a response into a List<TableData>:
public List<TableData> deserialize(String json) {
    return new Gson().<Map<String, Map<String, TableData>>>fromJson(json, new TypeToken<Map<String, Map<String, TableData>>>(){}.getType())
            .values().iterator().next()
            .entrySet().stream()
            .sorted(Comparator.comparingInt((Map.Entry<String, TableData> e) -> Integer.parseInt(e.getKey())))
            .map(Map.Entry::getValue)
            .collect(Collectors.toList());
}
I was searching for the solution and came across a site where it worked; I want to credit the site below. Thanks for all the support.
I am able to map the dynamic values 1, 2 as map keys, with the values mapped correspondingly to the TableData object properties, using the @SerializedName Gson annotation (see the sketch after the link below).
http://findnerd.com/list/view/Parse-Json-Object-with-dynamic-keys-using-Gson-/24094/
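Roughly, the wrapper for that approach can look like this (a sketch; the Java field and getter names are my own choice):
import com.google.gson.annotations.SerializedName;
import java.util.Map;
public class TableDetails {
    // the dynamic numeric keys ("1", "2", ...) become the map keys,
    // and each nested object is bound to a TableData value
    @SerializedName("TABLE_Detail")
    private Map<String, TableData> tableDetail;
    public Map<String, TableData> getTableDetail() { return tableDetail; }
}
class TableData {
    @SerializedName("TABLE") private String table;
    @SerializedName("RUN_DATE") private String runDate;
    @SerializedName("SEQ_NUM") private String seqNum;
    @SerializedName("START_TIME") private String startTime;
    @SerializedName("END_TIME") private String endTime;
    public String getTable() { return table; }
}
Then new Gson().fromJson(json, TableDetails.class) fills the map, keyed by "1", "2", and so on.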
When using an array in JSON, you need to use [ for opening and ] for closing:
{
"TABLE_Detail": [
{
"TABLE": "table1",
"RUN_DATE": "20170313",
"SEQ_NUM": "123",
"START_TIME": "20170313133144",
"END_TIME": "20170313133655"
},
{
"TABLE": "table2",
"RUN_DATE": "20170313",
"SEQ_NUM": "123",
"START_TIME": "20170313133142",
"END_TIME": "20170313133723"
}
]
}
I need to make a Builder class with the fields below, so that when I populate those fields in my Builder class and then call a toJson method on it (which I need to create as well), it produces a JSON structure like the one shown below:
{
"id": "hello",
"type": "process",
"makers": {
"typesAndCount": {
"abc": 4,
"def": 3,
"pqr": 2
}
}
}
The keys in my above JSON are always fixed; only the values will change. But in the typesAndCount field I have three different keys: abc, def and pqr. Sometimes I will have one key there, or two keys, or all the keys, so the contents of the typesAndCount key can change depending on what's being passed. The case below is also possible:
{
"id": "hello",
"type": "process",
"makers": {
"typesAndCount": {
"abc": 4,
"def": 3,
}
}
}
I started with the code below in my Builder class but am not sure how I should proceed further.
public class Key {
private final String id;
private final String type;
// confused now
}
I just want to populate data in my class and then call some method (it can be toJson) to produce a string in the above JSON format.
Use the Builder pattern to fluently configure your data. E.g.:
class Builder {
private final String id;
private final String type;
private Map<String, Integer> map = new HashMap<>();
// mandatory fields are always passed through constructor
Builder(String id, String type) {
this.id = id;
this.type = type;
}
Builder typeAndCount(String type, int count) {
map.put(type, count);
return this;
}
JsonObject toJson() {
JsonObjectBuilder internal = null;
if (!map.isEmpty()) {
internal = Json.createObjectBuilder();
for (Map.Entry<String, Integer> e: map.entrySet()) {
internal.add(e.getKey(), e.getValue());
}
}
// mandatory fields
JsonObjectBuilder ob = Json.createObjectBuilder()
.add("id", id)
.add("type", type);
if (internal != null) {
ob.add("makers", Json.createObjectBuilder().add("typesAndCount", internal));
}
return ob.build();
}
public static void main(String[] args) {
Builder b = new Builder("id_value", "type_value")
.typeAndCount("abs", 1)
.typeAndCount("rty", 2);
String result = b.toJson().toString();
System.out.println(result);
}
}
As you can see, you can call typeAndCount as many times as you need, or even not call it at all. The toJson method handles this without any problem.
UPDATE: the output for the example in the main method is:
{"id":"id_value","type":"type_value","makers":{"typesAndCount":{"abs":1,"rty":2}}}
UPDATE 2: the builder without any typeAndCount method call at all will produce this output:
{"id":"id_value","type":"type_value"}