Mule ESB Create map payload - java

I need to transform the inbound payload into a map (java.util.Map). Is there any way to create a map in Mule XML configs?
Regards
EDIT:
Payload type is com.novell.LDAPAttributeSet, which is a set of LDAPAttribute objects. An LDAPAttribute object contains name and value fields. I need to extract the name and value fields and convert them to a map. Extracting the fields will be done with JXPath expressions, but I don't know how to create a map from these fields.

I suggest you use a Groovy transformer:
<script:transformer>
    <script:script engine="groovy">
        [key1: payload.attr1,
         key2: payload.attr2]
    </script:script>
</script:transformer>
Where key1, key2 are your choice of keys to use in the map, and attr1, attr2 are attributes of the LDAPAttributeSet (or any other valid expression that gets the desired values out of this object).
PS. In case you wonder, the script namespace is declared this way:
xmlns:script="http://www.mulesoft.org/schema/mule/scripting"
xsi:schemaLocation="
http://www.mulesoft.org/schema/mule/scripting
http://www.mulesoft.org/schema/mule/scripting/3.1/mule-scripting.xsd"
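If you prefer plain Java over Groovy, a custom transformer can do the same job. Below is a minimal sketch, assuming Mule 3's AbstractTransformer and JLDAP's LDAPAttribute accessors getName() and getStringValue(); multi-valued attributes would need extra handling:
import java.util.HashMap;
import java.util.Map;

import com.novell.ldap.LDAPAttribute;
import com.novell.ldap.LDAPAttributeSet;
import org.mule.api.transformer.TransformerException;
import org.mule.transformer.AbstractTransformer;

// Builds a name -> value map from the inbound LDAPAttributeSet payload.
public class AttributeSetToMapTransformer extends AbstractTransformer {
    @Override
    protected Object doTransform(Object src, String encoding) throws TransformerException {
        LDAPAttributeSet attributes = (LDAPAttributeSet) src;
        Map<String, String> result = new HashMap<String, String>();
        for (Object element : attributes) {
            LDAPAttribute attribute = (LDAPAttribute) element;
            // getStringValue() returns the first value of the attribute.
            result.put(attribute.getName(), attribute.getStringValue());
        }
        return result;
    }
}
It would be wired into the flow with a <custom-transformer class="..."/> element pointing at the fully qualified class name.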

Validate JSON data against YAML schema

I have a Java object (let's name it JsonValidator) that can be configured in a YAML file.
I would like to describe a schema of JSON objects in YAML notation, something like this.
And then I need to validate JSON objects according to the schema. Does anybody know any Java libs I can use, or any examples?
Thanks
The schema of a JSON document can be defined using JSON Schema (actually, OpenAPI uses a custom flavor of JSON Schema). It is essentially a JSON document defining the structure of another JSON document. There are a few Java implementations out there. If you want to stick to YAML for defining the schema, you will first need to convert the YAML to JSON and then use the JSON Schema validator; see this SO question for doing that.
You can find some Java validators for JSON Schema here.
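A minimal sketch of that pipeline, assuming Jackson's YAML module for the conversion and the networknt json-schema-validator as the JSON Schema implementation (other validators have similar APIs):
import java.util.Set;

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.dataformat.yaml.YAMLMapper;
import com.networknt.schema.JsonSchema;
import com.networknt.schema.JsonSchemaFactory;
import com.networknt.schema.SpecVersion;
import com.networknt.schema.ValidationMessage;

public class YamlSchemaValidator {
    public static void main(String[] args) throws Exception {
        // The schema, written in YAML notation.
        String yamlSchema =
                "type: object\n"
              + "properties:\n"
              + "  name:\n"
              + "    type: string\n"
              + "required: [name]\n";

        // YAMLMapper parses YAML into the same JsonNode tree Jackson uses for JSON.
        JsonNode schemaNode = new YAMLMapper().readTree(yamlSchema);
        JsonSchema schema = JsonSchemaFactory
                .getInstance(SpecVersion.VersionFlag.V7)
                .getSchema(schemaNode);

        // This document fails validation because "name" is not a string.
        JsonNode document = new ObjectMapper().readTree("{\"name\": 42}");
        Set<ValidationMessage> errors = schema.validate(document);
        errors.forEach(e -> System.out.println(e.getMessage()));
    }
}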

How to convert JSON request body to Avro schema based Java Class?

I have a Kotlin Gradle Spring Boot and Spring WebFlux application which accepts JSON as its request body. I have an openapi-generated Java class which the JSON request body can be cast into.
On the other hand, I also have an Avro schema which is the same as the OpenAPI one, except the field names have been cleaned up. The JSON request body has fields whose names start with a special character, e.g. $device_version, +ip_address, ~country. So, in the Avro schema, I removed them, as Avro only allows a field name to start with a letter or an underscore.
Casting from the JSON request body to the openapi-generated Java class is no problem; however, casting it to the Avro-schema-generated Java class is a problem due to the field names.
I mean, I can manually set the fields, but the JSON object is quite huge.
Is there an elegant and quick solution to convert that JSON request body, containing different field names due to the special characters, to the Avro-schema-generated class?
Packages used
org.hidetake.swagger.generator
org.openapitools:openapi-generator-cli (with swaggerCodeGen)
com.commercehub.gradle.plugin:gradle-avro-plugin
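One possible approach (a sketch, not a definitive solution, and it assumes the only mismatch is the leading special character) is to rewrite the field names on the Jackson tree before binding. DeviceEvent below is a placeholder name for the Avro-generated class:
import java.util.Iterator;
import java.util.Map;

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.node.ArrayNode;
import com.fasterxml.jackson.databind.node.ObjectNode;

public class AvroFieldCleaner {
    private static final ObjectMapper MAPPER = new ObjectMapper();

    // Recursively strips leading non-letter, non-underscore characters from
    // field names, e.g. "$device_version" -> "device_version", "~country" -> "country".
    public static JsonNode clean(JsonNode node) {
        if (node.isObject()) {
            ObjectNode renamed = MAPPER.createObjectNode();
            Iterator<Map.Entry<String, JsonNode>> fields = node.fields();
            while (fields.hasNext()) {
                Map.Entry<String, JsonNode> field = fields.next();
                String cleanName = field.getKey().replaceAll("^[^A-Za-z_]+", "");
                renamed.set(cleanName, clean(field.getValue()));
            }
            return renamed;
        }
        if (node.isArray()) {
            ArrayNode cleaned = MAPPER.createArrayNode();
            node.forEach(item -> cleaned.add(clean(item)));
            return cleaned;
        }
        return node;
    }
}
Whether MAPPER.treeToValue(AvroFieldCleaner.clean(root), DeviceEvent.class) binds directly depends on the generated setters; if Jackson balks at the Avro class, the cleaned tree could instead be fed to Avro's own JSON decoding.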

Elasticsearch - what to do if fields have the same name but multiple mapping

I use Elasticsearch for storing data sent from multiple sources outside of my system, i.e. I don't control the incoming data - I just receive a JSON document and store it. I have no Logstash with its filters in the middle, only ES and Kibana. Each data source sends its own data type, and all of them are stored in the same index (per tenant) but in different types. However, since I cannot control the data that is sent to me, it is possible to receive documents of different types with a field having the same name but a different structure.
For example, assume that I have type1 and type2 with a field FLD, which is an object in both cases, but the structure of this object is not the same. Specifically, FLD.name is a string field in type1 but an object in type2. In this case, when type1 data arrives it is stored successfully, but when type2 data arrives, it is rejected:
failed to put mappings on indices [[myindex]], type [type2]
java.lang.IllegalArgumentException: Mapper for [FLD] conflicts with existing mapping in other types[Can't merge a non object mapping [FLD.name] with an object mapping [FLD.name]]
The ES documentation clearly states that fields with the same name, in the same index, in different mapping types are mapped to the same field internally and must have the same mapping (see here).
My question is: what can I do in this case? I'd prefer to keep all the types in the same index. Is it possible to add a unique-per-type suffix to field names, or something like that? Any other solution? I'm a newbie with Elasticsearch, so maybe I'm missing something simple... Thanks in advance.
There is no way to index arbitrary JSON without pre-processing before it's indexed - not even Dynamic templates are flexible enough.
You can flatten nested objects into key-value pairs and use a Nested datatype, Multi-fields, and ignore_malformed to index arbitrary JSON (even with type conflicts) as described here. Unfortunately, Elasticsearch can still throw an exception at query time if you try to, for example, match a string against kv_pairs.value.long, so you'll have to choose appropriate fields based on the format of the value.
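As a rough illustration of the flattening step (a sketch using Jackson; the kv_pairs field name follows the linked write-up and is not an Elasticsearch built-in):
import java.util.AbstractMap;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.Map;

import com.fasterxml.jackson.databind.JsonNode;

public class KvFlattener {
    // Turns {"FLD": {"name": {"first": "x"}}} into a ("FLD.name.first", "x")
    // pair, so conflicting structures all land under one nested kv_pairs field.
    public static List<Map.Entry<String, String>> flatten(JsonNode node, String prefix) {
        List<Map.Entry<String, String>> pairs = new ArrayList<>();
        if (node.isObject()) {
            Iterator<Map.Entry<String, JsonNode>> fields = node.fields();
            while (fields.hasNext()) {
                Map.Entry<String, JsonNode> field = fields.next();
                String path = prefix.isEmpty() ? field.getKey() : prefix + "." + field.getKey();
                pairs.addAll(flatten(field.getValue(), path));
            }
        } else if (node.isArray()) {
            for (JsonNode item : node) {
                pairs.addAll(flatten(item, prefix)); // array items share the parent path
            }
        } else {
            pairs.add(new AbstractMap.SimpleEntry<>(prefix, node.asText()));
        }
        return pairs;
    }
}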
It's not best practice, I suppose, but you can store the field content as a String and do the deserialization manually after retrieving the information.
So, imagine a class like:
class Person {
    private final Object name;
}
It can receive a List of Strings or a list of any other object - just an example.
So, instead of serializing the Person and saving it directly, you can serialize it to a String and store that content in another class, like:
String personContent = new ObjectMapper().writeValueAsString(person);
RequestDto dto = new RequestDto(personContent);
String dtoContent = new ObjectMapper().writeValueAsString(dto);
And save the dtoContent:
IndexRequest request = new IndexRequest("persons");
request.source(dtoContent, XContentType.JSON);
IndexResponse response = client.index(request, RequestOptions.DEFAULT);
The RequestDto will be a simple class with a String field:
class RequestDto {
    private String content;
    RequestDto(String content) { this.content = content; }
}
I'm not an expert on Elasticsearch, but you will probably lose a lot of Elasticsearch features by bypassing its validations like that.
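For completeness, the manual read-back this approach implies - a sketch assuming the high-level REST client and a getContent() getter on RequestDto:
// id is the identifier of the stored document.
GetRequest get = new GetRequest("persons", id);
GetResponse response = client.get(get, RequestOptions.DEFAULT);
// Two-step deserialization: first the wrapper, then the real payload.
RequestDto dto = new ObjectMapper().readValue(response.getSourceAsString(), RequestDto.class);
Person person = new ObjectMapper().readValue(dto.getContent(), Person.class);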

Retrieve GAE Datastore's data types in Java

How do I retrieve the data types of properties of entities stored in the Google App Engine Datastore using Java? I didn't find a property.getType() or similar method in the Java Datastore API.
There is no direct method provided, but you can retrieve it by comparing the Java type of the property value with the table given at this link:
Map<String, Object> properties = entity.getProperties();
for (Map.Entry<String, Object> property : properties.entrySet()) {
    Object value = property.getValue();
    String javaType = (value == null) ? null : value.getClass().getName();
    // "com.google.appengine.api.users.User" -> a Google Accounts user
    // "java.lang.Long" -> an integer value (the low-level API returns integers as Long)
    // ... and so on, per the table linked above.
}
Hope this helps
You need to use Property Metadata Queries.
Be aware that, as stated in the documentation, the property representations returned by these queries are App Engine representations and do not map one-to-one to Java classes. But you can at least get the general data type.
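A sketch of such a query with the low-level Datastore API: each result's key encodes a kind/property pair, and its property_representation property lists Datastore-level representations such as INT64 or STRING rather than Java classes:
import com.google.appengine.api.datastore.DatastoreService;
import com.google.appengine.api.datastore.DatastoreServiceFactory;
import com.google.appengine.api.datastore.Entities;
import com.google.appengine.api.datastore.Entity;
import com.google.appengine.api.datastore.Query;

DatastoreService datastore = DatastoreServiceFactory.getDatastoreService();
// __property__ entities describe every indexed property of every kind.
Query q = new Query(Entities.PROPERTY_METADATA_KIND);
for (Entity e : datastore.prepare(q).asIterable()) {
    String kind = e.getKey().getParent().getName();  // the entity kind
    String property = e.getKey().getName();          // the property name
    Object representations = e.getProperty("property_representation");
    System.out.println(kind + "." + property + " -> " + representations);
}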

What data type should be used to reference serialized objects by ID?

I have a bunch of classes that will be instantiated and passed between my JS front-end and Spring MVC.
I'm using Simple to serialize my objects as XML for persistent storage and Jackson to pass them to my UI as JSON. Consequently, I need an ID attribute that is used to reference an object, and this needs to be consistent across the JSON, POJO and XML.
This means I need an ID attribute in my Java class. What type should I declare it as? I've seen int being used in the Simple library tutorial and I've also seen UUID being used here.
The Simple library creates id and ref attributes (or any other names that you provide) to maintain references:
<parent name="john" id="1">
<children>
<child id="2" name="tom">
<parent ref="1"/>
</child>
</children>
</parent>
The accompanying Java to read it back in:
Strategy strategy = new CycleStrategy("id", "ref");
Serializer serializer = new Persister(strategy);
File source = new File("example.xml");
Parent parent = serializer.read(Parent.class, source);
The id isn't in the original Java object, however, so it won't be passed with the JSON to the UI. Should I define a private UUID uid; and pass that, while also letting Simple generate the auxiliary id and ref attributes it uses, or is there a way to use a single Java attribute for both?
EDIT: Hibernate has an @Id annotation, but I can't find something similar for Simple. Will this do?
@Attribute
private int id;
The problem is that I'll need to instantiate it and pass it as JSON, but it needs to be unique as well (meaning it'd be easier to use a UUID). Also, will Simple use this id for its ref?
EDIT 2:
Using id as an attribute and then using the CycleStrategy makes Simple use the value you define in the class rather than the internal ones it writes to the XML, which the ref attributes point to... I'm using two attributes for now - a uuid I generate, together with the id Simple uses internally.
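A sketch of that two-attribute setup (sid/sref are arbitrary names, chosen here so Simple's internal reference attributes don't clash with the application-level uuid):
import java.util.UUID;

import org.simpleframework.xml.Attribute;
import org.simpleframework.xml.Root;
import org.simpleframework.xml.Serializer;
import org.simpleframework.xml.core.Persister;
import org.simpleframework.xml.strategy.CycleStrategy;
import org.simpleframework.xml.strategy.Strategy;

@Root
public class Parent {
    // Application-level identifier, visible to both Simple (XML) and Jackson (JSON).
    @Attribute
    private String uuid = UUID.randomUUID().toString();

    @Attribute
    private String name;
}

// Simple keeps its cycle bookkeeping under different attribute names:
Strategy strategy = new CycleStrategy("sid", "sref");
Serializer serializer = new Persister(strategy);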
