YAMLMapper MismatchedInputException YAML key with periods - java

Error: com.fasterxml.jackson.databind.exc.MismatchedInputException: No content to map due to end-of-input
YAML file:
formatting.template:
  fields:
    - name: birthdate
      type: java.lang.String
      subType: java.util.Date
      lenght: 10
ConfigurationProperties:
@Data
public class FormattingConfigurationProperties {

    private List<Field> fields;

    @Data
    public static class Field {
        private String name;
        private String type;
        private String subType;
        private String lenght;
    }
}
Method to read the YAML:
private static FormattingConfigurationProperties buildFormattingConfigurationProperties() throws IOException {
    InputStream inputStream = new FileInputStream(new File("./src/test/resources/" + "application_formatting.yaml"));
    YAMLMapper mapper = new YAMLMapper();
    mapper.disable(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES);
    mapper.enable(DeserializationFeature.ACCEPT_SINGLE_VALUE_AS_ARRAY);
    return mapper.readerFor(FormattingConfigurationProperties.class)
            .at("/formatting/template")
            .readValue(inputStream);
}
I actually solved it by changing the YAML file, splitting formatting.template across separate lines:
formatting:
  template:
    fields:
      - name: birthdate
        type: java.lang.String
        subType: java.util.Date
        lenght: 10
This suggests that it is not able to read a key containing a dot (period).
Does anyone know how to avoid the MismatchedInputException when the prefix is on the same line, separated by dots?

You're using the JSON Pointer /formatting/template. That is for nested mappings, as shown in your second YAML file. If you have the condensed key formatting.template, you need the JSON Pointer /formatting.template instead.
YAML is perfectly able to read keys with a dot; it just does not do what you think it does. The dot is not a special character in YAML, just part of the content.
You may have worked with Spring, which loads YAML files by rewriting them as Properties files, where . is a separator. Since the existing dots are not escaped, for YAML files used with Spring a dot is the same as nested keys. However, when you directly use a YAML loader, such as Jackson's, that is not the case.
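For illustration, a minimal sketch of the corrected reader for the original one-line key, reusing the classes and file path from the question (untested against your setup):
YAMLMapper mapper = new YAMLMapper();
mapper.disable(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES);
mapper.enable(DeserializationFeature.ACCEPT_SINGLE_VALUE_AS_ARRAY);
// "formatting.template" is a single key, so the pointer has one segment;
// the dot is part of the key, not a path separator.
FormattingConfigurationProperties props = mapper
        .readerFor(FormattingConfigurationProperties.class)
        .at("/formatting.template")
        .readValue(new File("./src/test/resources/application_formatting.yaml"));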

Related

RedisGraph: how to persist properties in data containing BOTH single AND double quotes?

I am testing RedisGraph as a way to store my data which originates from a client as JSON.
The JSON passes through a bean for validation etc., and I use Jackson to serialise the bean so the RedisGraph string is in the correct format. For completeness on that formatting step, see the sample code at the end.
The data properties might contain single quotes in valid JSON format, e.g. O'Toole:
{ "name" : "Peter O'Toole", "desc" : "An actors actor" }
I can use a formatter, as per the code block at the end, to get the JSON into a format the RedisGraph command will allow, which copes with the single quotes (without me needing to escape the data content, i.e. it can use what the client sends). E.g. this works:
GRAPH.QUERY movies "CREATE (:Actor {name:\"Peter O'Toole\", desc:\"An actors actor\", actor_id:1})"
So far, so good.
Now, the problem: I am having trouble with the syntax to persist original JSON where it ALSO contains escaped double quotes, e.g.:
{ "name" : "Peter O'Toole", "desc" : "An \"actors\" actor" }
I don't want to have to escape or wrap the desc property value because it is already escaped as valid JSON. But then how do I construct the RedisGraph command so it persists the properties using the values it is given, i.e. containing escaped double quotes?
In other words, this throws a parsing error because of the \" in the desc property.
GRAPH.QUERY movies "CREATE (:Actor {name:\"Peter O'Toole\", desc:\"An \"actors\" actor\", actor_id:1})"
Given it would be quite common to want to persist data containing valid JSON escaped double quotes \" AND unescaped single quotes (e.g. name and address data), there must be a way to do this.
Any ideas?
Thanks, Murray.
PS: this doesn't work either; it chokes on the embedded ' in O'Toole:
GRAPH.QUERY movies "CREATE (:Actor {name:\'Peter O'Toole\', desc:\'an \"actors\" actor\', actor_id:3})"
// \u007F is the "delete" character.
// This is the highest char value Jackson allows and is
// unlikely to be in the JSON (hopefully!)
JsonFactory builder = new JsonFactoryBuilder().quoteChar('\u007F').build();
ObjectMapper objectMapper = new ObjectMapper(builder);
// Set pretty printing of json
objectMapper.enable(SerializationFeature.INDENT_OUTPUT);
// Do not surround property names with quotes. ie { firstName : "Peter" }
objectMapper.configure(JsonWriteFeature.QUOTE_FIELD_NAMES.mappedFeature(), false);
// Make a Person
Person person = new Person("Peter", "O'Toole");
// Set the desc property using embedded quotes
person.setDesc("An \"actors\" actor");
// Convert Person to JSON
String json = objectMapper.writeValueAsString(person);
// Now convert your json to escape the double quotes around the string properties:
String j2 = json.replaceAll("\u007F", "\\\\\"");
System.out.println(j2);
This yields:
{
  firstName : \"Peter\",
  lastName : \"O'Toole\",
  desc : \"An \"actors\" actor\"
}
which is in a format Redis GRAPH.QUERY movies "CREATE..." can use (apart from the issue with \"actors\" as discussed above).
OK. The issue was an artefact of trying to test the syntax by entering the commands into RedisInsight directly. As it turns out, all one needs to do is remove the double quotes around the property names in otherwise valid JSON.
So, to be clear, based on normal valid json coming from the client app,
the formatter test is:
ObjectMapper objectMapper = new ObjectMapper();
// (Optional) Set pretty printing of json
objectMapper.enable(SerializationFeature.INDENT_OUTPUT);
// Do not surround property names with quotes. ie { firstname : "Peter" }
objectMapper.configure(JsonWriteFeature.QUOTE_FIELD_NAMES.mappedFeature(), false);
// Make a Person
// For this example this is done directly,
// although in the Java this is done using
// objectMapper.readValue(incomingJson, Person.class)
Person person = new Person("Peter", "O'Toole");
// Set the desc property using escaped double quotes
person.setDesc("An \"actor's\" actor");
// Convert Person to JSON without quoted property names
String json = objectMapper.writeValueAsString(person);
System.out.println(json);
yields:
{
  firstname : "Peter",
  lastname : "O'Toole",
  desc : "An \"actor's\" actor"
}
and the command string is consumed by the Vert.x Redis client:
Vertx vertx = Vertx.vertx();
private final Redis redisClient;
// ...
redisClient = Redis.createClient(vertx);

String cmdStr = "CREATE (:Actor {firstname:\"Peter\", lastname:\"O'Toole\", desc:\"An \\\"actor's\\\" actor\", actor_id:1})";
Future<String> futureResponse = redisClient.send(Request.cmd(Command.GRAPH_QUERY).arg("movies").arg(cmdStr))
        .compose(response -> {
            Log.info("createRequest response=" + response.toString());
            return Future.succeededFuture("OK");
        })
        .onFailure(failure -> {
            Log.error("createRequest failure=" + failure.toString());
        });
:-)

How to extract values from a String that cannot be converted to Json

While processing the DialogFlow Response object, I get the string below as textPayload. If this were a JSON string, I could easily convert it to a JSONObject and then extract the values. However, I could not convert this to a JSON object. How do I get the values for the keys in this string? What is a good way to parse this string in Java?
String to be processed:
Dialogflow Response : id: "XXXXXXXXXXXX"
lang: "en"
session_id: "XXXXX"
timestamp: "2020-04-26T16:38:26.162Z"
result {
  source: "agent"
  resolved_query: "Yes"
  score: 1.0
  parameters {
  }
  contexts {
    name: "enaccaccountblocked-followup"
    lifespan: 1
    parameters {
    }
  }
  metadata {
    intent_id: "XXXXXXXXXXXX"
    intent_name: "EN : ACC : Freezing Process - Yes"
    end_conversation: true
    webhook_used: "false"
    webhook_for_slot_filling_used: "false"
    is_fallback_intent: "false"
  }
  fulfillment {
    speech: "Since you have been permanently blocked, please request to unblock your account"
    messages {
      lang: "en"
      type {
        number_value: 0.0
      }
      speech {
        string_value: "Since you have been permanently blocked, please request to unblock your account."
      }
    }
  }
}
status {
  code: 200
  error_type: "success"
}
Convert it to valid JSON, then map it using one of the many libraries out there.
You'll only need to:
- replace "Dialogflow Response :" with {
- add } to the end
- add commas between attributes, i.e.
  - at the end of every line with a ":"
  - after "}", except when the next non-whitespace is also "}"
Jackson (at least) can be configured to treat quotes around attribute names as optional.
Deserializing to a Map<String, Object> works for all valid JSON (except an array, which this isn't). A sketch of these steps follows below.
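A rough sketch of those steps with Jackson; the regexes are assumptions tuned to this particular dump (note the protobuf text format also omits the colon before nested blocks, so the sketch adds it):
String json = dump.replaceFirst("Dialogflow Response :", "{") + "}";
// protobuf text format writes `result {`, but JSON needs `result: {`
json = json.replaceAll("(\\w)\\s*\\{", "$1: {");
// add commas after value lines and closing braces, except when the
// next non-whitespace character is another closing brace
json = json.replaceAll("([\"\\de}])\\n(?!\\s*})", "$1,\n");

ObjectMapper mapper = new ObjectMapper();
// treat quotes around attribute names as optional
mapper.configure(JsonParser.Feature.ALLOW_UNQUOTED_FIELD_NAMES, true);
Map<String, Object> map = mapper.readValue(json, new TypeReference<Map<String, Object>>() {});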
If I understand you correctly, the issue here is that the keys do not have quotation marks, hence a JSON parser will reject this.
Since the keys all start on a new line with some whitespace and all end with a colon :, you can fix this easily with a regular expression (a one-line sketch follows below).
See How to Fix JSON Key Values without double-quotes?
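For instance, a minimal sketch of that regex (an assumption based on the description above; braces and missing commas still need the fixes from the previous answer):
// quote every key that starts a line and ends with a colon
String quoted = textPayload.replaceAll("(?m)^(\\s*)(\\w+)\\s*:", "$1\"$2\":");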
You can then parse it to a Map via:
Map<String, Object> map = objectMapper.readValue(json, new TypeReference<Map<String, Object>>() {});
(but I assume you are aware of this).
Create a class for the TextPayload object like this:
public class TextPayload {
    private int session_id;
    private String lang;
    private String timestamp;
    private String[] metadata;
    // Other attributes
    // getters and setters
}
Then, using an ObjectMapper, extract the values from the textPayload like this:
ObjectMapper mapper = new ObjectMapper();
TextPayload textPayload = mapper.readValue(output, TextPayload.class);
To get more hands-on with ObjectMapper, follow a Jackson tutorial.
You can use the Node.js package parse-dialogflow-log to parse the textResponse string:
- replace "Dialogflow Response :" with "{"
- add "}" to the end
- run the package on the result and you'll get a nice JSON.

unable to parse json using objectMapper of jackson where json value contains \\

I am deserializing a JSON string to a plain Java object using Jackson's ObjectMapper class, and ObjectMapper is throwing an exception.
The JSON string I am trying to deserialize is:
String input="{\"id\":\"30329\",\"appId\":\"3301\",\"nodeId\":1556537156187,\"data\":\"select id,obt_marks,'\\\\m' as dummy from ltc_test_1\"}";
The value for the data key contains \, which is causing the problem. Is there a way to escape this? I want this value as-is in my POJO.
It can work by replacing each occurrence of \ with \\, so the string will look like:
\"data\":\"select id,obt_marks,'\\\\m' as dummy from ltc_test_1\"
Question: How can this be achieved using Java? Is there any setting in ObjectMapper or Jackson to tackle this problem?
Below is the POJO which I will get after deserialization:
public class WorkflowProcessInfo {
    private Long id;
    private Long appId;
    private Long nodeId;
    private String data;
}
// Code I am using for deserialization:
ObjectMapper mapper = new ObjectMapper();
mapper.configure(Feature.ALLOW_BACKSLASH_ESCAPING_ANY_CHARACTER, false);
mapper.configure(Feature.ALLOW_UNQUOTED_FIELD_NAMES, true);
mapper.configure(Feature.ALLOW_UNQUOTED_CONTROL_CHARS, true);
mapper.configure(Feature.ALLOW_SINGLE_QUOTES, true);
mapper.configure(DeserializationFeature.ACCEPT_SINGLE_VALUE_AS_ARRAY, true);
mapper.setSerializationInclusion(Include.NON_NULL);
try {
    return mapper.readValue(inputJson, WorkflowProcessInfo.class);
} catch (Exception e) {
    System.out.println(e.getMessage());
}
I am expecting a WorkflowProcessInfo object with values as present in the JSON, meaning the data attribute of the POJO should look like below:
WorkflowProcessInfo.data="select id,obt_marks,'\\m' as dummy from ltc_test_1"
Instead I am getting the below exception:
com.fasterxml.jackson.core.JsonParseException: Unrecognized character escape 'm' (code 109)
at [Source: java.io.StringReader@1ea9f6af; line: 1, column: 84]
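For what it's worth, this exception text matches what Jackson throws when ALLOW_BACKSLASH_ESCAPING_ANY_CHARACTER is disabled and the input contains a non-standard escape such as \m. A minimal sketch, assuming that is the case here; note the parser consumes the backslash (\m becomes m), so to keep a literal backslash the JSON itself must contain \\:
ObjectMapper mapper = new ObjectMapper();
// accept non-standard escapes such as \m instead of throwing
// "Unrecognized character escape 'm'"
mapper.configure(JsonParser.Feature.ALLOW_BACKSLASH_ESCAPING_ANY_CHARACTER, true);
WorkflowProcessInfo info = mapper.readValue(input, WorkflowProcessInfo.class);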

How to map csv file to pojo class in java

I am using a Java Maven project. I want to fetch employee.csv file records into a POJO class.
I am generating this POJO class from the employee.csv header, and all fields of the POJO class are of String type. Now I want to map employee.csv to the generated POJO class. My requirement is that I don't want to specify the column names manually, because if I change the CSV file I would have to change my code again; it should map dynamically to any file. For instance:
firstName,lastName,title,salary
john,karter,manager,54372
I want to map this to the POJO which I already have:
public class Employee {
    private String firstName;
    private String lastName;
    // ...
    // getters and setters
    // toString()
}
uniVocity-parsers allows you to map your POJO easily:
class Employee {

    @Trim
    @LowerCase
    @Parsed
    private String firstName;

    @Parsed
    private String lastName;

    // if the value parsed in the salary column is "?" or "-", it will be replaced by null.
    @NullString(nulls = { "?", "-" })
    // if a value resolves to null, it will be converted to the String "0".
    @Parsed(defaultNullRead = "0")
    private Integer salary; // The attribute name will be matched against the column header in the file automatically.

    ...
}
To parse:
BeanListProcessor<Employee> rowProcessor = new BeanListProcessor<Employee>(Employee.class);

CsvParserSettings parserSettings = new CsvParserSettings();
parserSettings.setRowProcessor(rowProcessor);
parserSettings.setHeaderExtractionEnabled(true);

CsvParser parser = new CsvParser(parserSettings);

// And parse!
// This submits all rows parsed from the input to the BeanListProcessor
parser.parse(new FileReader(new File("/path/to/your.csv")));

List<Employee> beans = rowProcessor.getBeans();
Disclosure: I am the author of this library. It's open-source and free (Apache V2.0 license).
You can use the openCSV jar to read the data and then map each column value to the class attributes; a sketch of that approach follows below.
Due to security reasons, I cannot share my code with you.
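A minimal openCSV sketch, assuming opencsv 4+ and header names that match the Employee field names (the default header-name mapping strategy then needs no hard-coded columns):
try (Reader reader = new FileReader("employee.csv")) {
    List<Employee> employees = new CsvToBeanBuilder<Employee>(reader)
            .withType(Employee.class)
            .build()
            .parse();
    employees.forEach(System.out::println);
}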

Converting a single CSV/TSV string into a Java object?

Instead of converting an entire CSV file to an object, is there a simple API that takes in one CSV or TSV string and converts it to an object? The APIs I've found so far are geared towards converting a CSV/TSV file to a list of objects.
Obviously I could just split the string and call a constructor, but I was wondering if there was a clean API I could use.
You can do this with Jackson. It looks pretty similar to the other answers, but it seems to perform better than SuperCSV according to their tests.
Define your POJO (both the annotation and the constructor seem to be necessary):
@JsonPropertyOrder({ "foo", "bar" })
public class FooBar {
    private String foo;
    private String bar;

    public FooBar() {
    }

    // Setters, getters, toString()
}
Then parse it:
String input = "1,2\n3,4";
StringReader reader = new StringReader(input);

CsvMapper m = new CsvMapper();
CsvSchema schema = m.schemaFor(FooBar.class).withoutHeader().withLineSeparator("\n").withColumnSeparator(',');
try {
    MappingIterator<FooBar> r = m.readerFor(FooBar.class).with(schema).readValues(reader);
    while (r.hasNext()) {
        System.out.println(r.nextValue());
    }
} catch (JsonProcessingException e) {
    e.printStackTrace();
} catch (IOException e) {
    e.printStackTrace();
}
Go with uniVocity-parsers, as it is at least twice as fast as SuperCSV and has way more features.
For example, let's say your bean is:
class TestBean {

    // if the value parsed in the quantity column is "?" or "-", it will be replaced by null.
    @NullString(nulls = { "?", "-" })
    // if a value resolves to null, it will be converted to the String "0".
    @Parsed(defaultNullRead = "0")
    // The attribute type defines which conversion will be executed when processing the value.
    // In this case, IntegerConversion will be used.
    // The attribute name will be matched against the column header in the file automatically.
    private Integer quantity;

    @Trim
    @LowerCase
    // the value for the comments attribute is in the column at index 4 (0 is the first column, so this means the fifth column in the file)
    @Parsed(index = 4)
    private String comments;

    // you can also explicitly give the name of a column in the file.
    @Parsed(field = "amount")
    private BigDecimal amount;

    @Trim
    @LowerCase
    // values "no", "n" and "null" will be converted to false; values "yes" and "y" will be converted to true
    @BooleanString(falseStrings = { "no", "n", "null" }, trueStrings = { "yes", "y" })
    @Parsed
    private Boolean pending;
}
Now, to read your input as a list of TestBean:
// BeanListProcessor converts each parsed row to an instance of a given class, then stores each instance in a list.
BeanListProcessor<TestBean> rowProcessor = new BeanListProcessor<TestBean>(TestBean.class);

CsvParserSettings parserSettings = new CsvParserSettings();
parserSettings.setRowProcessor(rowProcessor);
parserSettings.setHeaderExtractionEnabled(true);

CsvParser parser = new CsvParser(parserSettings);
parser.parse(getReader("/examples/bean_test.csv"));

// The BeanListProcessor provides a list of objects extracted from the input.
List<TestBean> beans = rowProcessor.getBeans();
To parse TSV files, just change the combination of CsvParserSettings & CsvParser to TsvParserSettings & TsvParser.
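For instance, a minimal sketch of the TSV variant (same bean and processor; the TSV settings mirror the CSV ones shown above, and the file path is a placeholder):
BeanListProcessor<TestBean> rowProcessor = new BeanListProcessor<TestBean>(TestBean.class);

TsvParserSettings parserSettings = new TsvParserSettings();
parserSettings.setRowProcessor(rowProcessor);
parserSettings.setHeaderExtractionEnabled(true);

TsvParser parser = new TsvParser(parserSettings);
parser.parse(getReader("/examples/bean_test.tsv"));

List<TestBean> beans = rowProcessor.getBeans();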
Disclosure: I am the author of this library. It's open-source and free (Apache V2.0 license).
I'm using this API:
http://jsefa.sourceforge.net/
You can use annotations to convert your entities to CSV.
In the case of SuperCSV, which you mentioned in a comment, you could pass it a String wrapped in a StringReader, i.e.:
CsvBeanReader beanReader = new CsvBeanReader(new StringReader(theString), preferences);
beanReader.read(theBean, nameMapping);
I was dealing with a similar issue recently. In my case I wanted to import a single CSV row at a time into a single POJO, as I was getting my data in the form of discrete single-line WebSocket updates. In the end Jackson worked best for me, as I didn't have to put everything into a list of POJOs first.
Here is the code:
String csvString = "rick|sanchez|99";
CsvMapper mapper = new CsvMapper();
CsvSchema schema = mapper.schemaFor(Pojo.class).withColumnSeparator('|');
ObjectReader r = mapper.readerFor(Pojo.class).with(schema);
Pojo pojo = r.readValue(csvString);
For this to work you also need to add the following annotation to your POJO:
@JsonPropertyOrder({"firstName","lastName","age"})
As far as I know it's the only one that easily lets you parse a single CSV line into a single POJO instance. Obviously you could also do this by hand via a constructor, but these libraries deal with type conversions for you, so it's particularly useful if your POJO contains lots of different attributes.
