I need to serialize a map to JSON in a certain order.
This is the map:
Map<String, String> dataMap = new HashMap<>();
dataMap.put("CompanyCode", "4");
dataMap.put("EntyyCode", "2002296");
dataMap.put("SubEntityCode", "000");
dataMap.put("ContractNumber", "52504467115");
dataMap.put("Progressive Contract", "0");
dataMap.put("DocumentNumber", "200003333494028");
dataMap.put("LogonUserName", "AR333");
dataMap.put("Progressive Title", "0");
This is the json model I would like:
{
  "Policy": {
    "ContractNumber": "52504467115",
    "ProgressiveContract": "0"
  },
  "Title": {
    "LogonUserName": "AR333",
    "ProgressiveTitle": "0"
  },
  "BusinessChannel": {
    "CompanyCode": "4",
    "EntyyCode": "2002296",
    "SubEntityCode": "000"
  },
  "Document": {
    "DocumentNumber": "200003333494028"
  }
}
I need to convert this map into a JSON string. I know that this can be done using Jackson as below:
new ObjectMapper().writeValueAsString(map);
How do I do this using Jackson? Or is there any other way to do this in Java?
Thank you
First of all, the solution you request contains a second problem: partitioning. Not only must the items appear in a particular order, they must also be divided over different categories. In Java, these categories usually correspond to their own classes or, more recently, records. The top-level class (corresponding to the unnamed outer object of the JSON) then determines the ordering, like so (the name Contract is my choice):
record Contract(
Policy policy,
Title title,
BusinessChannel businessChannel,
Document document )
{
}
with each of the properties of Contract having their own class, e.g.:
record Policy( String contractNumber, int progressiveContract ) { }
etc.
Serializing Contract then recursively serializes each of its parameters, with the required outcome as the result.
This would be the 'standard' way.
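For illustration, a minimal sketch of that standard way with Jackson (2.12+ for record support) could look like the following. The @JsonProperty annotations are my addition, needed only to match the PascalCase keys of the desired JSON, and everything is kept as String to match the sample data; Jackson generally preserves the declaration order of record components, and @JsonPropertyOrder can pin the order down explicitly if needed.
import com.fasterxml.jackson.annotation.JsonProperty;
import com.fasterxml.jackson.databind.ObjectMapper;

record Policy(@JsonProperty("ContractNumber") String contractNumber,
              @JsonProperty("ProgressiveContract") String progressiveContract) { }

record Title(@JsonProperty("LogonUserName") String logonUserName,
             @JsonProperty("ProgressiveTitle") String progressiveTitle) { }

record BusinessChannel(@JsonProperty("CompanyCode") String companyCode,
                       @JsonProperty("EntyyCode") String entityCode,
                       @JsonProperty("SubEntityCode") String subEntityCode) { }

record Document(@JsonProperty("DocumentNumber") String documentNumber) { }

record Contract(@JsonProperty("Policy") Policy policy,
                @JsonProperty("Title") Title title,
                @JsonProperty("BusinessChannel") BusinessChannel businessChannel,
                @JsonProperty("Document") Document document) { }

// Building the Contract from the original map and serializing it:
Contract contract = new Contract(
        new Policy(dataMap.get("ContractNumber"), dataMap.get("Progressive Contract")),
        new Title(dataMap.get("LogonUserName"), dataMap.get("Progressive Title")),
        new BusinessChannel(dataMap.get("CompanyCode"), dataMap.get("EntyyCode"), dataMap.get("SubEntityCode")),
        new Document(dataMap.get("DocumentNumber")));
String json = new ObjectMapper().writeValueAsString(contract);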
So, since you start with a HashMap, which by contract offers no guarantee of ordering, let alone an easy way to partition its contents into sub-objects, you could try two things:
Rethink the use of a map. Switching to the class structure takes care of the structure automatically.
Manually stream and convert the values in order (or use e.g. a TreeMap with custom Comparator) and then partition the values themselves. This probably requires more work than a map saves.
So I've been trying to aggregate some stream data into a KTable using Kafka Streams. My JSON from the topic looks like:
{
  "id": "d04a6184-e805-4ceb-9aaf-b2ab0139ee84",
  "person": {
    "id": "d04a6184-e805-4ceb-9aaf-b2ab0139ee84",
    "createdBy": "user",
    "createdDate": "2023-01-01T00:28:58.161Z",
    "name": "person 1",
    "description": "test1"
  }
}
....
KStream<Object, String> firstStream = builder.stream("topic-1").mapValues(value -> {
    JSONObject json = new JSONObject(String.valueOf(value));
    JSONObject json2 = new JSONObject(json.getJSONObject("person").toString());
    return json2.toString();
});
I get something like
null{"createdDate":"2023-01-01T00:28:58.161Z","createdBy":"user","name":"person 1","description":"test1","id":"d04a6184-e805-4ceb-9aaf-b2ab0139ee84"}
null{"createdDate":"2023-01-01T00:29:07.862Z","createdBy":"user","name":"person 2","description":"test 2","id":"48d8b895-eb27-4977-9dbc-adb8fbf649d8"}
null{"createdDate":"2023-01-01T00:29:12.261Z","createdBy":"anonymousUser","name":"person 2","description":"test 2 updated","id":"d8b895-eb27-4977-9dbc-adb8fbf649d8"}
I want to group this data in such a way that
person 1 will hold one JSON associated with it
person 2 will hold a List of both JSON associated with it
I have checked this Kafka Streams API GroupBy behaviour question, which describes the same problem, but the solution given there doesn't work for me. Do I have to perform any extra operations? Please help.
In order to use groupBy, you need a pairing key. So, use map to extract the name of each person as the key.
Then, as the linked answer says, you need to aggregate after grouping to "combine data per person" across events.
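For example, here is a rough sketch of that re-key + aggregate step, assuming String serdes and reusing the org.json types already in your snippet (the store name "persons-by-name" is just a placeholder):
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Grouped;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.state.KeyValueStore;
import org.json.JSONArray;
import org.json.JSONObject;

StreamsBuilder builder = new StreamsBuilder();

KTable<String, String> personsByName = builder.<String, String>stream("topic-1")
        // re-key each event by the person's name so grouping has a real key (the original keys were null)
        .map((key, value) -> {
            JSONObject person = new JSONObject(value).getJSONObject("person");
            return KeyValue.pair(person.getString("name"), person.toString());
        })
        .groupByKey(Grouped.with(Serdes.String(), Serdes.String()))
        // collect every event seen for the same person into one JSON array
        .aggregate(
                () -> "[]",
                (name, personJson, aggregate) -> {
                    JSONArray all = new JSONArray(aggregate);
                    all.put(new JSONObject(personJson));
                    return all.toString();
                },
                Materialized.<String, String, KeyValueStore<Bytes, byte[]>>as("persons-by-name")
                        .withKeySerde(Serdes.String())
                        .withValueSerde(Serdes.String()));
With that, person 1 ends up with a single-element array and person 2 with both of its events.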
By the way, you should set up the Streams config with a JsonSerde for values rather than the String serde, in order to reduce the need to manually parse each event.
I'm using a Java, Spring Boot, Hibernate stack with protocol buffers as DTOs for communication among micro-services. At the reverse proxy, I convert the protobuf object to JSON using protobuf's Java support.
I have the following structure
message Item {
int64 id = 1;
string name = 2;
int64 price = 3;
}
message MultipleItems {
repeated Item items = 1;
}
Converting the MultipleItems DTO to json gives me the following result:
{
  "items": [
    {
      "id": 1,
      "name": "ABC",
      "price": 10
    },
    {
      "id": 2,
      "name": "XYZ",
      "price": 20
    }
  ]
}
In the generated JSON, I've got the key items, which maps to the JSON array.
I want to remove the key and return only the JSON array as the result. Is there a clean way to achieve this?
I think it's not possible.
repeated must appear as a modifier on a field and fields must be named.
https://developers.google.com/protocol-buffers/docs/proto3#json
There's no obvious reason why Protobuf could not support this¹, but it would require that its grammar be extended to support the use of repeated at the message level rather than its current use at the field level. This, of course, makes everything downstream of the proto messages more complex too.
JSON, of course, does permit it.
It's possible that it complicates en/decoding too (an on-the-wire message could be either a message or an array of messages).
¹ Perhaps the concern is that generated code (!) then must necessarily be more complex too? Methods would all need to check whether the message is an array type or a struct type, e.g.:
func (x *X) SomeMethod(ctx context.Context, []*pb.SomeMethodRequest) ...
And, in Golang pre-generics, it's not possible to overload methods this way and they would need to have distinct names:
func (x *X) SomeMethodArray(ctx context.Context, []*pb.SomeMethodRequest) ...
func (x *X) SomeMethodMessage(ctx context.Context, *pb.SomeMethodRequest) ...
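If what you need at the reverse proxy is just the array, one workaround (outside of protobuf itself, and only a sketch; multipleItems stands in for your built MultipleItems message) is to convert the wrapper as usual and then strip the outer key afterwards:
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.google.protobuf.util.JsonFormat;

// Convert the wrapper message to JSON, then keep only the "items" array
String wrapperJson = JsonFormat.printer().print(multipleItems);
JsonNode items = new ObjectMapper().readTree(wrapperJson).get("items");
String arrayOnly = items.toString();   // a bare JSON array, without the "items" key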
I have a class Fact which extends java.util.HashMap. I am passing an object of this class as a fact to Drools.
Now an instance of Fact looks like this (Map<String, Object>):
{
  "key1": "value",
  "attributes": [{"name": "name1", "value": "value1"}, {"name": "name2", "value": "value2"}, {"name": "name3", "value": "value3"}, ...],
  "locks": [{"type": "type1", "value": "value1", "attributes": {"key_a1": "val_a1", "key_a2": "val_a2", ...}}]
}
Running validations on root-level entries in this map is straightforward, e.g. running validations on key1.
Now, I want to run some validations on attributes and locks.
For attributes, I want to ensure that all required attributes are present in this map and that their corresponding values are correct. So I do this in the when block:
fact: Fact(this["key1"] != null && this.containsKey("attributes"));
attributesEntries: Entry(key == "attributes") from fact.entrySet();
attributesMaps: LinkedHashMap() from attributesEntries;
fact is HashMap
attributes are of type ArrayList<LinkedHashMap<String, String>> (an id key is also added for the LinkedHashMap whose value is the value of key name only).
locks are of type ArrayList<LinkedHashMap<String, Object>>
locks have attributes of type Map<String, String>
but it is not working. When I evaluate attributesEntries it is an ArrayList<LinkedHashMap> and has all the expected values, but attributesMaps comes up empty. I also tried passing filters like LinkedHashMap(key == 'key1', value == 'val1'), but that didn't work either. I tried looking for solutions, but none were available for this sort of structure, and whatever was available I tried to extend without success.
Is this possible to achieve, and if so, how? Also, how do I validate a value (not empty and matching a pattern) once I am able to get it from the Map?
I am new to Drools and we are using Drools 5.4.0.Final.
Also, how can I work with the next level of nested Map in locks?
I once had the misfortune of working on a project where we made this same mistake and had our class extend HashMap. (Fair warning: HashMap doesn't serialize well so you're going to use a lot of extra memory.)
I'm going to assume several things about your model because you neglected to share the class definition itself.
Based on your example JSON, I'll assume the following:
You have added a string value ("value") with the key "key1"
You have added a List<Map<String, ?>> value (possibly a List<Fact>) with the key "locks"
You have added a List<Map<String, ?>> value (possibly a List<Fact>) with the key "attributes"
The HashMap's get(key) method will return an object value; you've already noted the special this[ key ] syntax.
From your partial rule attempt, it's not entirely clear what you're trying to do. I think you're trying to get the List<Map<String, ?>> that is saved in your map under the "attributes" key.
rule "Do something with the attributes"
when
$fact: Fact( this["key1"] != null,
$attr: this["attributes"] != null )
then
System.out.println("Found " + $attr.size() + " attributes");
end
this["attributes"] returns the value associated with the key attributes. In this case, it's a List or whatever you shoved in there. If the key doesn't exist, the null check handles that.
You also asked how you could do stuff with a child map inside one of those lists. Let's say you want to do something with the attribute that has "name": "name1" ...
rule "Do something with the 'name = name1' attribute"
when
$fact: Fact( this["key1"] != null,
$attributes: this["attributes"] != null )
$nameAttr: Map( this["name"] == "name1" ) from $attributes
then
// do something with $nameAttr
end
The pattern repeats, of course. Let's say you've shoved yet another List<Map<String, ?>> into your attribute maps:
rule "Do something with a child of 'name' attribute"
when
$fact: Fact( this["key1"] != null,
$attributes: this["attributes"] != null )
$nameAttr: Map( this["name"] == "name1",
$attrKids: this["children"] != null ) from $attributes
$childNameAttr: Map( this["name"] == "child1" ) from $attrKids
then
// etc.
end
I strongly recommend reconsidering your object model so that it is not Map-based. At the company I mentioned, where all of our projects were built against a nested Map-based model running Drools 5.0.1, I spent significant time and effort upgrading parts of it to Drools 7 and a proper model that passed in just the data we needed. It saved a ton of resources and ended up being much faster.
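For example (purely illustrative, with field names guessed from the JSON sample in the question), a typed model replacing the Map-based Fact could look like the following; rules can then match on real getters instead of this[...] lookups:
import java.util.List;
import java.util.Map;

public class Fact {
    private String key1;
    private List<Attribute> attributes;
    private List<Lock> locks;
    // getters and setters omitted
}

class Attribute {
    private String name;
    private String value;
}

class Lock {
    private String type;
    private String value;
    private Map<String, String> attributes;
}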
I'm using Jackson to read/write data from/to JSON files and I have an issue with the User POJO. It has a Map which is supposed to hold the ways to contact the User (so it can have from 0 to 7 entries, depending on the Enum). I want to be able to add ways to contact using a form in JSF.
I tried something like value="#{config.user.contacts[EMAIL_PRO]}"
where of course EMAIL_PRO is an Enum (later, the user should be able to choose the Enum himself, but right now I'm keeping it simple).
But when I do so, the error is
Null key for a Map not allowed in JSON
which I understand, because my debug shows that the value returned is {null = null}. Now, the first question: since the map is empty, is JSF supposed to simply work like that? The key "EMAIL_PRO" doesn't exist yet, but shouldn't JSF do the work for me and put the right value with the key?
The other question is much more about Jackson and Maps. As I said, my POJO User contains a Map, and the JSON file is a Map itself (containing multiple users).
Is it really possible to write a Map into this file using Jackson, where the Map is Map<String, Object> and the Object contains a Map<Enum, Object>? And if so, how?
Thanks for the help
PS: I cannot change either my APIs or my POJOs.
I think this is a duplicate post; see How to convert hashmap to JSON object in Java.
And as one of the responses there says:
Map<String, Object> data = new HashMap<String, Object>();
data.put( "name", "Mars" );
data.put( "age", 32 );
data.put( "city", "NY" );
JSONObject json = new JSONObject();
json.putAll( data );
System.out.printf( "JSON: %s", json.toString(2) );
output:
JSON: {
"age": 32,
"name": "Mars",
"city": "NY"
}
You can also try Google's Gson, a widely used library for converting Java objects into their JSON representation.
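Regarding the second question (writing a Map<String, Object> whose values contain a Map<Enum, Object> with Jackson): yes, that works out of the box, because Jackson writes enum map keys by their names. A minimal sketch, with a placeholder ContactType enum and a simplified User:
import com.fasterxml.jackson.databind.ObjectMapper;
import java.util.EnumMap;
import java.util.HashMap;
import java.util.Map;

enum ContactType { EMAIL_PRO, PHONE_PRO }

class User {
    public Map<ContactType, String> contacts = new EnumMap<>(ContactType.class);
}

public class EnumMapDemo {
    public static void main(String[] args) throws Exception {
        User user = new User();
        user.contacts.put(ContactType.EMAIL_PRO, "user@example.com");

        Map<String, Object> file = new HashMap<>();
        file.put("user1", user);

        // Enum keys are serialized by name, e.g. "EMAIL_PRO"
        String json = new ObjectMapper().writeValueAsString(file);
        System.out.println(json);   // {"user1":{"contacts":{"EMAIL_PRO":"user@example.com"}}}
    }
}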
Suppose that I serialise two different objects and save them to a directory.
Problem: Upon application start-up, parsing the JSON files is not a problem - since GSON is employed, I can write my own serialisers and deserialisers for both JSON files so that their respective objects are constructed.
But the problem is: how can I differentiate between the numerous JSON files in terms of what they store, so that I can apply the correct deserialiser to each one?
Thank you, best.
Consider standardizing your JSON structure to include a document type. You can even store the target object type in that field. Good practice is to include a document version number as well. The example below shows two different versions of the 'account' document and a transaction document. All three can be stored in, say, the same Couchbase bucket. The way to differentiate between documents is to look at the "doc_type" field and the document version (if required). From the GSON serializer-selection standpoint, you can look at the "doc_type" in a switch/if-else statement, or store the target object type in place of "account" or "transaction" and then, at the expense of performance, dynamically parse the JSON to a POJO.
{
  "doc_type": "account",
  "doc_ver": 1,
  "content": {
    "accnt_no": "12321645645484",
    "name": "Name or alias",
    "email": "Email address",
    "password": "Password in raw format",
    "exp_date": "06/10/2017"
  }
}

{
  "doc_type": "account",
  "doc_ver": 2,
  "content": {
    "accnt_no": "12321645645484",
    "name": "customer name",
    "email": "customer email",
    "password": "pass",
    "timezone": "customer timezone",
    "ip": "IP address",
    "spoken_languages": [ "EN", "RU" ],
    "exp_date": "06/10/2017"
  }
}

{
  "doc_type": "transaction",
  "doc_ver": 1,
  "content": {
    "accnt_no": "12321645645484",
    "tran_date": "06/04/2017",
    "tran_time": "09:15:84.953"
  }
}
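The serializer selection itself can then be a simple dispatch on "doc_type"; here is a rough sketch with Gson (Account and Transaction are placeholders for whatever POJOs you deserialise into, and JsonParser.parseString needs Gson 2.8.6+):
import com.google.gson.Gson;
import com.google.gson.JsonObject;
import com.google.gson.JsonParser;

static Object deserialize(String jsonText) {
    Gson gson = new Gson();
    JsonObject doc = JsonParser.parseString(jsonText).getAsJsonObject();
    String docType = doc.get("doc_type").getAsString();
    int docVer = doc.get("doc_ver").getAsInt();

    switch (docType) {
        case "account":
            // branch further on docVer if versions 1 and 2 need different mapping
            return gson.fromJson(doc.get("content"), Account.class);
        case "transaction":
            return gson.fromJson(doc.get("content"), Transaction.class);
        default:
            throw new IllegalArgumentException("Unknown doc_type: " + docType);
    }
}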
Hope this helps.
I think the best way is to parse the JSON into a multi-level HashMap<String, Object>. Gson will parse your JSON into a HashMap whose keys are the field names and whose values are one of three kinds of object: a Map for a nested JSON object, a List for a JSON array, or a String for a JSON string. To use this HashMap you need to iterate through it with a recursive method.
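A small sketch of that recursive walk (purely illustrative; the sample JSON string is only there to make it runnable):
import com.google.gson.Gson;
import com.google.gson.reflect.TypeToken;
import java.util.List;
import java.util.Map;

public class JsonWalker {
    public static void main(String[] args) {
        String jsonText = "{\"person\":{\"name\":\"person 1\",\"tags\":[\"a\",\"b\"]}}";
        Map<String, Object> root = new Gson().fromJson(jsonText,
                new TypeToken<Map<String, Object>>() {}.getType());
        walk(root, "");
    }

    // Recursively visit nested maps, lists and leaf values
    static void walk(Object node, String path) {
        if (node instanceof Map) {
            ((Map<?, ?>) node).forEach((k, v) -> walk(v, path + "/" + k));
        } else if (node instanceof List) {
            List<?> list = (List<?>) node;
            for (int i = 0; i < list.size(); i++) {
                walk(list.get(i), path + "[" + i + "]");
            }
        } else {
            System.out.println(path + " = " + node);
        }
    }
}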