How to convert pubsub payload to LogEntry object in log export - java

I have enabled log exports to a Pub/Sub topic. I am using Dataflow to process these logs and store the relevant columns in BigQuery. Can someone please help with the conversion of the Pub/Sub message payload to a LogEntry object?
I have tried the following code:
@ProcessElement
public void processElement(ProcessContext c) throws Exception {
PubsubMessage pubsubMessage = c.element();
ObjectMapper mapper = new ObjectMapper();
byte[] payload = pubsubMessage.getPayload();
String s = new String(payload, "UTF8");
LogEntry logEntry = mapper.readValue(s, LogEntry.class);
}
But I got the following error:
com.fasterxml.jackson.databind.JsonMappingException: Can not find a (Map) Key deserializer for type [simple type, class com.google.protobuf.Descriptors$FieldDescriptor]
Edit:
I tried the following code:
try {
ByteArrayInputStream stream = new ByteArrayInputStream(Base64.decodeBase64(pubsubMessage.getPayload()));
LogEntry logEntry = LogEntry.parseDelimitedFrom(stream);
System.out.println("Log Entry = " + logEntry);
} catch (InvalidProtocolBufferException e) {
e.printStackTrace();
}
But I get the following error now:
com.google.protobuf.InvalidProtocolBufferException: Protocol message end-group tag did not match expected tag

The JSON format parser should be able to do this. Java's not my strength, but I think you're looking for something like:
@ProcessElement
public void processElement(ProcessContext c) throws Exception {
  String json = new String(c.element().getPayload(), StandardCharsets.UTF_8);
  LogEntry.Builder entryBuilder = LogEntry.newBuilder();
  JsonFormat.parser()
      .usingTypeRegistry(
          JsonFormat.TypeRegistry.newBuilder()
              .add(LogEntry.getDescriptor())
              .build())
      .ignoringUnknownFields()
      .merge(json, entryBuilder);
  LogEntry entry = entryBuilder.build();
  ...
}
You might be able to get away without registering the type. I think in C++ the proto types are linked into a global registry.
You'll want "ignoringUnknownFields" in case the service adds new fields and exports them and you haven't updated your copy of the proto descriptor. Any "#type" fields in the exported JSON that will cause problems too.
You may need special handling of the payload (i.e. strip it from the JSON and then parse it separately), as in the sketch below. If it's JSON, I'd expect the parser to try populating sub-messages that don't exist. If it's proto ... it actually might work if you register the Any type too.
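For the payload-stripping idea, a minimal sketch might look like the following. It assumes the exported entry is JSON, uses Jackson for the pre-processing step, and that the payload field is named "protoPayload" as in typical Cloud Logging exports; the helper class name is made up:
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.node.ObjectNode;
import com.google.logging.v2.LogEntry;
import com.google.protobuf.util.JsonFormat;
import java.nio.charset.StandardCharsets;

public class LogEntryJsonHelper {
    private static final ObjectMapper MAPPER = new ObjectMapper();

    // Parses an exported log entry, dropping the protoPayload (with its "@type"
    // field) before handing the rest to the proto JSON parser. The stripped
    // payload could be re-parsed separately if it is needed.
    public static LogEntry parse(byte[] pubsubPayload) throws Exception {
        String json = new String(pubsubPayload, StandardCharsets.UTF_8);
        ObjectNode root = (ObjectNode) MAPPER.readTree(json);
        root.remove("protoPayload");

        LogEntry.Builder builder = LogEntry.newBuilder();
        JsonFormat.parser()
                .ignoringUnknownFields()
                .merge(MAPPER.writeValueAsString(root), builder);
        return builder.build();
    }
}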


How to deserialize avro data using Apache Beam (KafkaIO)

I've only seen one thread containing information about the topic I've mentioned, which is:
How to Deserialising Kafka AVRO messages using Apache Beam
However, after trying a few variations of Kafka deserializers I still cannot deserialize the Kafka messages. Here's my code:
public class Readkafka {
private static final Logger LOG = LoggerFactory.getLogger(Readkafka.class);
public static void main(String[] args) throws IOException {
// Create the Pipeline object with the options we defined above.
Pipeline p = Pipeline.create(
PipelineOptionsFactory.fromArgs(args).withValidation().create());
PTransform<PBegin, PCollection<KV<action_states_pkey, String>>> kafka =
KafkaIO.<action_states_pkey, String>read()
.withBootstrapServers("mybootstrapserver")
.withTopic("action_States")
.withKeyDeserializer(MyClassKafkaAvroDeserializer.class)
.withValueDeserializer(StringDeserializer.class)
.updateConsumerProperties(ImmutableMap.of("schema.registry.url", (Object)"schemaregistryurl"))
.withMaxNumRecords(5)
.withoutMetadata();
p.apply(kafka)
.apply(Keys.<action_states_pkey>create());
}
where MyClassKafkaAvroDeserializer is
public class MyClassKafkaAvroDeserializer extends
AbstractKafkaAvroDeserializer implements Deserializer<action_states_pkey> {
@Override
public void configure(Map<String, ?> configs, boolean isKey) {
configure(new KafkaAvroDeserializerConfig(configs));
}
@Override
public action_states_pkey deserialize(String s, byte[] bytes) {
return (action_states_pkey) this.deserialize(bytes);
}
@Override
public void close() {} }
and the class action_states_pkey is code generated from avro tools using
java -jar pathtoavrotools/avro-tools-1.8.1.jar compile schema pathtoschema/action_states_pkey.avsc destination path
where the action_states_pkey.avsc is literally
{"type":"record","name":"action_states_pkey","namespace":"namespace","fields":[{"name":"ad_id","type":["null","int"]},{"name":"action_id","type":["null","int"]},{"name":"state_id","type":["null","int"]}]}
With this code I'm getting the error:
Caused by: java.lang.ClassCastException: org.apache.avro.generic.GenericData$Record cannot be cast to my.mudah.beam.test.action_states_pkey
at my.mudah.beam.test.MyClassKafkaAvroDeserializer.deserialize(MyClassKafkaAvroDeserializer.java:20)
at my.mudah.beam.test.MyClassKafkaAvroDeserializer.deserialize(MyClassKafkaAvroDeserializer.java:1)
at org.apache.beam.sdk.io.kafka.KafkaUnboundedReader.advance(KafkaUnboundedReader.java:221)
at org.apache.beam.sdk.io.BoundedReadFromUnboundedSource$UnboundedToBoundedSourceAdapter$Reader.advanceWithBackoff(BoundedReadFromUnboundedSource.java:279)
at org.apache.beam.sdk.io.BoundedReadFromUnboundedSource$UnboundedToBoundedSourceAdapter$Reader.start(BoundedReadFromUnboundedSource.java:256)
at com.google.cloud.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.start(WorkerCustomSources.java:592)
... 14 more
It seems there's an error in trying to map the Avro data to my custom class?
Alternatively, I've tried the following code:
PTransform<PBegin, PCollection<KV<action_states_pkey, String>>> kafka =
KafkaIO.<action_states_pkey, String>read()
.withBootstrapServers("bootstrapserver")
.withTopic("action_states")
.withKeyDeserializerAndCoder((Class)KafkaAvroDeserializer.class, AvroCoder.of(action_states_pkey.class))
.withValueDeserializer(StringDeserializer.class)
.updateConsumerProperties(ImmutableMap.of("schema.registry.url", (Object)"schemaregistry"))
.withMaxNumRecords(5)
.withoutMetadata();
p.apply(kafka)
.apply(Keys.<action_states_pkey>create());
// .apply("ExtractWords", ParDo.of(new DoFn<action_states_pkey, String>() {
// @ProcessElement
// public void processElement(ProcessContext c) {
// action_states_pkey key = c.element();
// c.output(key.getAdId().toString());
// }
// }));
which does not give me any error until I try to print out the data. I have to verify that I'm successfully reading the data one way or another, so my intent here is to log the data to the console. If I uncomment the commented section I get the same error once again:
SEVERE: 2019-09-13T07:53:56.168Z: java.lang.ClassCastException: org.apache.avro.generic.GenericData$Record cannot be cast to my.mudah.beam.test.action_states_pkey
at my.mudah.beam.test.Readkafka$1.processElement(Readkafka.java:151)
Another thing to note is that if I specify:
.updateConsumerProperties(ImmutableMap.of("specific.avro.reader", (Object)"true"))
I always get the error:
Caused by: org.apache.kafka.common.errors.SerializationException: Error deserializing Avro message for id 443
Caused by: org.apache.kafka.common.errors.SerializationException: Could not find class NAMESPACE.action_states_pkey specified in writer's schema whilst finding reader's schema for a SpecificRecord.
Is there something wrong with my approach?
If anyone has any experience reading AVRO data from Kafka Streams using Apache Beam, please do help me out. I greatly appreciate it.
Here's a snapshot of my package with the schema and class in it as well:
(screenshot: package/working path details)
Thanks.
public class MyClassKafkaAvroDeserializer extends
AbstractKafkaAvroDeserializer
Your class is extending AbstractKafkaAvroDeserializer, which returns a GenericRecord.
You need to convert that GenericRecord to your custom object, for example as in the sketch below.
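A rough sketch of that conversion, assuming Confluent's AbstractKafkaAvroDeserializer and the default setters (setAdId, and so on) that avro-tools generates for the schema in the question; copying field by field is just one way to do it:
import io.confluent.kafka.serializers.AbstractKafkaAvroDeserializer;
import io.confluent.kafka.serializers.KafkaAvroDeserializerConfig;
import org.apache.avro.generic.GenericRecord;
import org.apache.kafka.common.serialization.Deserializer;
import java.util.Map;

public class MyClassKafkaAvroDeserializer extends AbstractKafkaAvroDeserializer
        implements Deserializer<action_states_pkey> {

    @Override
    public void configure(Map<String, ?> configs, boolean isKey) {
        configure(new KafkaAvroDeserializerConfig(configs));
    }

    @Override
    public action_states_pkey deserialize(String topic, byte[] bytes) {
        // AbstractKafkaAvroDeserializer hands back a GenericRecord here,
        // which is what caused the ClassCastException in the question.
        GenericRecord record = (GenericRecord) this.deserialize(bytes);
        if (record == null) {
            return null;
        }
        // Copy the fields from action_states_pkey.avsc into the generated
        // class instead of casting.
        action_states_pkey key = new action_states_pkey();
        key.setAdId((Integer) record.get("ad_id"));
        key.setActionId((Integer) record.get("action_id"));
        key.setStateId((Integer) record.get("state_id"));
        return key;
    }

    @Override
    public void close() {}
}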
OR
Use SpecificRecord for this as stated in one of the following answers:
/**
* Extends deserializer to support ReflectData.
*
* @param <V>
* value type
*/
public abstract class ReflectKafkaAvroDeserializer<V> extends KafkaAvroDeserializer {
private Schema readerSchema;
private DecoderFactory decoderFactory = DecoderFactory.get();
protected ReflectKafkaAvroDeserializer(Class<V> type) {
readerSchema = ReflectData.get().getSchema(type);
}
@Override
protected Object deserialize(
boolean includeSchemaAndVersion,
String topic,
Boolean isKey,
byte[] payload,
Schema readerSchemaIgnored) throws SerializationException {
if (payload == null) {
return null;
}
int schemaId = -1;
try {
ByteBuffer buffer = ByteBuffer.wrap(payload);
if (buffer.get() != MAGIC_BYTE) {
throw new SerializationException("Unknown magic byte!");
}
schemaId = buffer.getInt();
Schema writerSchema = schemaRegistry.getByID(schemaId);
int start = buffer.position() + buffer.arrayOffset();
int length = buffer.limit() - 1 - idSize;
DatumReader<Object> reader = new ReflectDatumReader(writerSchema, readerSchema);
BinaryDecoder decoder = decoderFactory.binaryDecoder(buffer.array(), start, length, null);
return reader.read(null, decoder);
} catch (IOException e) {
throw new SerializationException("Error deserializing Avro message for id " + schemaId, e);
} catch (RestClientException e) {
throw new SerializationException("Error retrieving Avro schema for id " + schemaId, e);
}
}
}
The above is copied from https://stackoverflow.com/a/39617120/2534090
https://stackoverflow.com/a/42514352/2534090

Can't seem to transform KStream<A,B> to KTable<X,Y>

This is my first attempt at trying to use a KTable. I have a Kafka Stream that contains Avro serialized objects of type A,B. And this works fine. I can write a Consumer that consumes just fine or a simple KStream that simply counts records.
The B object has a field containing a country code. I'd like to supply that code to a KTable so it can count the number of records that contain a particular country code. To do so I'm trying to convert the stream into a stream of X,Y (or really: country-code, count). Eventually I look at the contents of the table and extract an array of KV pairs.
The code I have (included) always errors out with the following (see the line with 'Caused by'):
2018-07-26 13:42:48.688 [com.findology.tools.controller.TestEventGeneratorController-16d7cd06-4742-402e-a679-898b9ef78c41-StreamThread-1; AssignedStreamsTasks] ERROR -- stream-thread [com.findology.tools.controller.TestEventGeneratorController-16d7cd06-4742-402e-a679-898b9ef78c41-StreamThread-1] Failed to process stream task 0_0 due to the following error:
org.apache.kafka.streams.errors.StreamsException: Exception caught in process. taskId=0_0, processor=KSTREAM-SOURCE-0000000000, topic=com.findology.model.traffic.CpaTrackingCallback, partition=0, offset=962649
at org.apache.kafka.streams.processor.internals.StreamTask.process(StreamTask.java:240)
at org.apache.kafka.streams.processor.internals.AssignedStreamsTasks.process(AssignedStreamsTasks.java:94)
at org.apache.kafka.streams.processor.internals.TaskManager.process(TaskManager.java:411)
at org.apache.kafka.streams.processor.internals.StreamThread.processAndMaybeCommit(StreamThread.java:922)
at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:802)
at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:749)
at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:719)
Caused by: org.apache.kafka.streams.errors.StreamsException: A serializer (key: org.apache.kafka.common.serialization.ByteArraySerializer / value: org.apache.kafka.common.serialization.ByteArraySerializer) is not compatible to the actual key or value type (key type: java.lang.Integer / value type: java.lang.Integer). Change the default Serdes in StreamConfig or provide correct Serdes via method parameters.
at org.apache.kafka.streams.processor.internals.SinkNode.process(SinkNode.java:92)
at org.apache.kafka.streams.processor.internals.AbstractProcessorContext.forward(AbstractProcessorContext.java:174)
at org.apache.kafka.streams.kstream.internals.KStreamFilter$KStreamFilterProcessor.process(KStreamFilter.java:43)
at org.apache.kafka.streams.processor.internals.ProcessorNode$1.run(ProcessorNode.java:46)
at org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:211)
at org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:124)
at org.apache.kafka.streams.processor.internals.AbstractProcessorContext.forward(AbstractProcessorContext.java:174)
at org.apache.kafka.streams.kstream.internals.KStreamTransform$KStreamTransformProcessor.process(KStreamTransform.java:59)
at org.apache.kafka.streams.processor.internals.ProcessorNode$1.run(ProcessorNode.java:46)
at org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:211)
at org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:124)
at org.apache.kafka.streams.processor.internals.AbstractProcessorContext.forward(AbstractProcessorContext.java:174)
at org.apache.kafka.streams.processor.internals.SourceNode.process(SourceNode.java:80)
at org.apache.kafka.streams.processor.internals.StreamTask.process(StreamTask.java:224)
... 6 more
Caused by: java.lang.ClassCastException: java.lang.Integer cannot be cast to [B
at org.apache.kafka.common.serialization.ByteArraySerializer.serialize(ByteArraySerializer.java:21)
at org.apache.kafka.streams.processor.internals.RecordCollectorImpl.send(RecordCollectorImpl.java:146)
at org.apache.kafka.streams.processor.internals.RecordCollectorImpl.send(RecordCollectorImpl.java:94)
at org.apache.kafka.streams.processor.internals.SinkNode.process(SinkNode.java:87)
... 19 more
And here is the code I'm using. I've omitted certain classes for brevity. Note that I'm not using the Confluent KafkaAvro classes.
private synchronized void createStreamProcessor2() {
if (streams == null) {
try {
Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, getClass().getName());
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
StreamsConfig config = new StreamsConfig(props);
StreamsBuilder builder = new StreamsBuilder();
Map<String, Object> serdeProps = new HashMap<>();
serdeProps.put("schema.registry.url", schemaRegistryURL);
AvroSerde<CpaTrackingCallback> cpaTrackingCallbackAvroSerde = new AvroSerde<>(schemaRegistryURL);
cpaTrackingCallbackAvroSerde.configure(serdeProps, false);
// This is the key to telling kafka the specific Serde instance to use
// to deserialize the Avro encoded value
KStream<Long, CpaTrackingCallback> stream = builder.stream(CpaTrackingCallback.class.getName(),
Consumed.with(Serdes.Long(), cpaTrackingCallbackAvroSerde));
// provide a way to convert CpsTrackicking... info into just country codes
// (Long, CpaTrackingCallback) -> (countryCode:Integer, placeHolder:Long)
TransformerSupplier<Long, CpaTrackingCallback, KeyValue<Integer, Long>> transformer = new TransformerSupplier<Long, CpaTrackingCallback, KeyValue<Integer, Long>>() {
@Override
public Transformer<Long, CpaTrackingCallback, KeyValue<Integer, Long>> get() {
return new Transformer<Long, CpaTrackingCallback, KeyValue<Integer, Long>>() {
@Override
public void init(ProcessorContext context) {
// Not doing Punctuate so no need to store context
}
@Override
public KeyValue<Integer, Long> transform(Long key, CpaTrackingCallback value) {
return new KeyValue(value.getCountryCode(), 1);
}
@Override
public KeyValue<Integer, Long> punctuate(long timestamp) {
return null;
}
@Override
public void close() {
}
};
}
};
KTable<Integer, Long> countryCounts = stream.transform(transformer).groupByKey() //
.count(Materialized.as("country-counts"));
streams = new KafkaStreams(builder.build(), config);
Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
streams.cleanUp();
streams.start();
try {
countryCountsView = waitUntilStoreIsQueryable("country-counts", QueryableStoreTypes.keyValueStore(),
streams);
}
catch (InterruptedException e) {
log.warn("Interrupted while waiting for query store to become available", e);
}
}
catch (Exception e) {
log.error(e);
}
}
}
The bare groupByKey() method on KStream uses the default serializer/deserializer (which you haven't set). Use the method groupByKey(Serialized<K,V> serialized), as in:
.groupByKey(Serialized.with(Serdes.Integer(), Serdes.Long()))
Also note that what you do in your custom TransformerSupplier can be done with a simple KStream.map call, as in the sketch below.
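Putting both points together, a hedged sketch (dropping into createStreamProcessor2 in place of the transformer block, and assuming imports of org.apache.kafka.streams.KeyValue, org.apache.kafka.streams.kstream.Serialized, org.apache.kafka.common.utils.Bytes and org.apache.kafka.streams.state.KeyValueStore) could look like:
KTable<Integer, Long> countryCounts = stream
        // (Long, CpaTrackingCallback) -> (countryCode, 1L)
        .map((key, value) -> KeyValue.pair(value.getCountryCode(), 1L))
        // explicit serdes, so the sink no longer falls back to ByteArraySerializer
        .groupByKey(Serialized.with(Serdes.Integer(), Serdes.Long()))
        .count(Materialized.<Integer, Long, KeyValueStore<Bytes, byte[]>>as("country-counts"));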

Custom mapper for schema validation errors

I've used the Camel validator and I'm catching errors from the schema validation like:
org.xml.sax.SAXParseException: cvc-minLength-valid: Value '' with length = '0' is not facet-valid with respect to minLength '1' for type
Is there any tool that would be good for mapping these errors to prettier statements? I can always just iterate over the errors, split them, and prepare a custom mapper, but maybe there is something better than this? :)
Saxon is really good at error reporting. Its validator gives you understandable messages in the first place.
That's a SAX error message, and it appears to be quite clearly stated, but see ErrorHandler and DefaultHandler to customize it however you'd prefer.
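As a small, hedged sketch of that ErrorHandler route (the class name and the message wording are made up), you could collect the raw SAXParseExceptions and rewrite them into friendlier one-liners:
import org.xml.sax.ErrorHandler;
import org.xml.sax.SAXException;
import org.xml.sax.SAXParseException;
import java.util.ArrayList;
import java.util.List;

public class FriendlyErrorHandler implements ErrorHandler {
    private final List<String> messages = new ArrayList<>();

    @Override
    public void warning(SAXParseException e) {
        messages.add(format("warning", e));
    }

    @Override
    public void error(SAXParseException e) {
        messages.add(format("error", e));
    }

    @Override
    public void fatalError(SAXParseException e) throws SAXException {
        messages.add(format("fatal", e));
        throw e; // stop parsing on fatal errors
    }

    public List<String> getMessages() {
        return messages;
    }

    // Prefix the raw parser message with severity and location.
    private static String format(String level, SAXParseException e) {
        return level + " at line " + e.getLineNumber() + ", column "
                + e.getColumnNumber() + ": " + e.getMessage();
    }
}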
I've created validation with an XSD through the Camel validator component:
<to uri="validator:xsd/myValidator.xsd"/>
then I've used doCatch inside doTry block to catch exception:
<doCatch>
<exception>org.apache.camel.ValidationException</exception>
<log message="catch exception ${body}" loggingLevel="ERROR" />
<process ref="schemaErrorHandler"/>
</doCatch>
After that I wrote a custom Camel Processor and it works great :)
public class SchemaErrorHandler implements Processor {
private final String STATUS_CODE = "6103";
private final String SEVERITY_CODE = "2";
@Override
public void process(Exchange exchange) throws Exception {
Map<String, Object> map = exchange.getProperties();
String statusDesc = "Unknown exception";
if (map != null) {
SchemaValidationException exception = (SchemaValidationException) map.get("CamelExceptionCaught");
if (exception != null && !CollectionUtils.isEmpty(exception.getErrors())) {
StringBuffer buffer = new StringBuffer();
for (SAXParseException e : exception.getErrors()) {
statusDesc = e.getMessage();
buffer.append(statusDesc);
}
statusDesc = buffer.toString();
}
}
Fault fault = new Fault(new Message(statusDesc, (ResourceBundle) null));
fault.setDetail(ErrorUtils.createDetailSection(STATUS_CODE, statusDesc, exchange, SEVERITY_CODE));
throw fault;
}
}

Convert XML to JSON format

I have to convert the docx file format (which is OpenXML) into JSON format. I need some guidelines to do it. Thanks in advance.
You may take a look at the Json-lib Java library, which provides XML-to-JSON conversion.
String xml = "<hello><test>1.2</test><test2>123</test2></hello>";
XMLSerializer xmlSerializer = new XMLSerializer();
JSON json = xmlSerializer.read( xml );
If you need the root tag too, simply add an outer dummy tag:
String xml = "<hello><test>1.2</test><test2>123</test2></hello>";
XMLSerializer xmlSerializer = new XMLSerializer();
JSON json = xmlSerializer.read("<x>" + xml + "</x>");
There is no direct mapping between XML and JSON; XML carries with it type information (each element has a name) as well as namespacing. Therefore, unless each JSON object has type information embedded, the conversion is going to be lossy.
But that doesn't necessarily matter. What does matter is that the consumer of the JSON knows the data contract. For example, given this XML:
<books>
<book author="Jimbo Jones" title="Bar Baz">
<summary>Foo</summary>
</book>
<book title="Don't Care" author="Fake Person">
<summary>Dummy Data</summary>
</book>
</books>
You could convert it to this:
{
"books": [
{ "author": "Jimbo Jones", "title": "Bar Baz", "summary": "Foo" },
{ "author": "Fake Person", "title": "Don't Care", "summary": "Dummy Data" },
]
}
And the consumer wouldn't need to know that each object in the books collection was a book object.
Edit:
If you have an XML Schema for the XML and are using .NET, you can generate classes from the schema using xsd.exe. Then, you could parse the source XML into objects of these classes, then use a DataContractJsonSerializer to serialize the classes as JSON.
If you don't have a schema, it will be hard to get around defining your JSON format manually.
The XML class in the org.json package provides you with this functionality.
You have to call the static toJSONObject method
Converts a well-formed (but not necessarily valid) XML string into a JSONObject. Some information may be lost in this transformation because JSON is a data format and XML is a document format. XML uses elements, attributes, and content text, while JSON uses unordered collections of name/value pairs and arrays of values. JSON does not like to distinguish between elements and attributes. Sequences of similar elements are represented as JSONArrays. Content text may be placed in a "content" member. Comments, prologs, DTDs, and <![CDATA[ ]]> sections are ignored.
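A minimal example of that org.json route (assuming the org.json JAR is on the classpath):
import org.json.JSONObject;
import org.json.XML;

public class XmlToJsonExample {
    public static void main(String[] args) throws Exception {
        String xml = "<hello><test>1.2</test><test2>123</test2></hello>";
        JSONObject json = XML.toJSONObject(xml);
        // Pretty-print with an indent of 2 spaces.
        System.out.println(json.toString(2));
    }
}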
If you are dissatisfied with the various implementations, try rolling your own. Here is some code I wrote this afternoon to get you started. It works with net.sf.json and Apache commons-lang:
static public JSONObject readToJSON(InputStream stream) throws Exception {
SAXParserFactory factory = SAXParserFactory.newInstance();
factory.setNamespaceAware(true);
SAXParser parser = factory.newSAXParser();
SAXJsonParser handler = new SAXJsonParser();
parser.parse(stream, handler);
return handler.getJson();
}
And the SAXJsonParser implementation:
package xml2json;
import net.sf.json.*;
import org.apache.commons.lang.StringUtils;
import org.xml.sax.*;
import org.xml.sax.helpers.DefaultHandler;
import java.util.ArrayList;
import java.util.List;
public class SAXJsonParser extends DefaultHandler {
static final String TEXTKEY = "_text";
JSONObject result;
List<JSONObject> stack;
public SAXJsonParser(){}
public JSONObject getJson(){return result;}
public String attributeName(String name){return "#"+name;}
public void startDocument () throws SAXException {
stack = new ArrayList<JSONObject>();
stack.add(0,new JSONObject());
}
public void endDocument () throws SAXException {result = stack.remove(0);}
public void startElement (String uri, String localName,String qName, Attributes attributes) throws SAXException {
JSONObject work = new JSONObject();
for (int ix=0;ix<attributes.getLength();ix++)
work.put( attributeName( attributes.getLocalName(ix) ), attributes.getValue(ix) );
stack.add(0,work);
}
public void endElement (String uri, String localName, String qName) throws SAXException {
JSONObject pop = stack.remove(0); // examine stack
Object stashable = pop;
if (pop.containsKey(TEXTKEY)) {
String value = pop.getString(TEXTKEY).trim();
if (pop.keySet().size()==1) stashable = value; // single value
else if (StringUtils.isBlank(value)) pop.remove(TEXTKEY);
}
JSONObject parent = stack.get(0);
if (!parent.containsKey(localName)) { // add new object
parent.put( localName, stashable );
}
else { // aggregate into arrays
Object work = parent.get(localName);
if (work instanceof JSONArray) {
((JSONArray)work).add(stashable);
}
else {
parent.put(localName,new JSONArray());
parent.getJSONArray(localName).add(work);
parent.getJSONArray(localName).add(stashable);
}
}
}
public void characters (char ch[], int start, int length) throws SAXException {
JSONObject work = stack.get(0); // aggregate characters
String value = (work.containsKey(TEXTKEY) ? work.getString(TEXTKEY) : "" );
work.put(TEXTKEY, value+new String(ch,start,length) );
}
public void warning (SAXParseException e) throws SAXException {
System.out.println("warning e=" + e.getMessage());
}
public void error (SAXParseException e) throws SAXException {
System.err.println("error e=" + e.getMessage());
}
public void fatalError (SAXParseException e) throws SAXException {
System.err.println("fatalError e=" + e.getMessage());
throw e;
}
}
Converting complete docx files into JSON does not look like a good idea, because docx is a document-centric XML format and JSON is a data-centric format. XML in general is designed to be both document- and data-centric. Though it is technically possible to convert document-centric XML into JSON, handling the generated data might be overly complex. Try to focus on the data you actually need and convert only that part.
If you need to be able to manipulate your XML before it gets converted to JSON, or want fine-grained control of your representation, go with XStream. It's really easy to convert between xml-to-object, json-to-object, object-to-xml, and object-to-json. Here's an example from XStream's docs:
XML
<person>
<firstname>Joe</firstname>
<lastname>Walnes</lastname>
<phone>
<code>123</code>
<number>1234-456</number>
</phone>
<fax>
<code>123</code>
<number>9999-999</number>
</fax>
</person>
POJO (DTO)
public class Person {
private String firstname;
private String lastname;
private PhoneNumber phone;
private PhoneNumber fax;
// ... constructors and methods
}
Convert from XML to POJO:
String xml = "<person>...</person>";
XStream xstream = new XStream();
Person person = (Person)xstream.fromXML(xml);
And then from POJO to JSON:
XStream xstream = new XStream(new JettisonMappedXmlDriver());
String json = xstream.toXML(person);
Note: although the method reads toXML(), XStream will produce JSON, since the Jettison driver is used.
If you have a valid DTD file for the XML snippet, then you can easily convert XML to JSON and JSON to XML using the open-source EclipseLink JAR. A detailed sample Java project can be found here: http://www.cubicrace.com/2015/06/How-to-convert-XML-to-JSON-format.html
I have come across a tutorial, hope it helps you.
http://www.techrecite.com/xml-to-json-data-parser-converter
Use
xmlSerializer.setForceTopLevelObject(true)
to include the root element in the resulting JSON.
Your code would be like this
String xml = "<hello><test>1.2</test><test2>123</test2></hello>";
XMLSerializer xmlSerializer = new XMLSerializer();
xmlSerializer.setForceTopLevelObject(true);
JSON json = xmlSerializer.read(xml);
Docx4j
I've used docx4j before, and it's worth taking a look at.
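As a rough sketch of how docx4j could fit into this thread (the file name is made up, and converting the whole main document part to JSON via org.json is just one option; you'd normally pick out only the pieces you need):
import org.docx4j.openpackaging.packages.WordprocessingMLPackage;
import org.docx4j.openpackaging.parts.WordprocessingML.MainDocumentPart;
import org.json.JSONObject;
import org.json.XML;
import java.io.File;

public class DocxToJsonSketch {
    public static void main(String[] args) throws Exception {
        // Load the .docx package and grab the main document part (word/document.xml).
        WordprocessingMLPackage pkg = WordprocessingMLPackage.load(new File("example.docx"));
        MainDocumentPart mainPart = pkg.getMainDocumentPart();

        // Marshal the part back to an XML string, then convert it to JSON.
        String documentXml = mainPart.getXML();
        JSONObject json = XML.toJSONObject(documentXml);
        System.out.println(json.toString(2));
    }
}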
unXml
You could also check out my open-source unXml library, which is available on Maven Central.
It is lightweight, and has a simple syntax to pick out XPaths from your XML and get them returned as JSON attributes in a Jackson ObjectNode.

GSON: knowing what type of object to convert to?

I'm looking into using Google's GSON for my Android project that will request JSON from my web server. The JSON returned will be either...
1) A successful response of a known type (e.g.: class "User"):
{
"id":1,
"username":"bob",
"created_at":"2011-01-31 22:46:01",
"PhoneNumbers":[
{
"type":"home",
"number":"+1-234-567-8910"
},
{
"type":"mobile",
"number":"+1-098-765-4321"
}
]
}
2) An unsuccessful response, which will always take on the same basic structure below.
{
"error":{
"type":"Error",
"code":404,
"message":"Not Found"
}
}
I'd like GSON to convert to the correct type depending on the existence of the error key/value pair above. The most practical way I can think to do this is as follows, but I'm curious if there's a better way.
final String response = client.get("http://www.example.com/user.json?id=1");
final Gson gson = new Gson();
try {
final UserEntity user = gson.fromJson(response, UserEntity.class);
// do something with user
} catch (final JsonSyntaxException e) {
try {
final ErrorEntity error = gson.fromJson(response, ErrorEntity.class);
// do something with error
} catch (final JsonSyntaxException e2) {
// handle situation where response cannot be parsed
}
}
This is really just pseudocode though, because in the first catch block I'm not sure how to test whether the "error" key exists in the JSON response. So I guess my question is twofold:
Can I / how can I use GSON to test the existence of a key, and make a decision on how to parse based upon that?
Is this what others in a similar situation are doing with GSON, or is there a better way?
What you'd normally want to do is to have your server return an actual error code along with the JSON error response. Then you read the response as an ErrorEntity if you get an error code and as a UserEntity if you get 200. Obviously this requires a little more dealing with the details of communication with the server than just turning a URL into a String, but that's how it is.
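A rough sketch of that status-code approach with plain HttpURLConnection might look like this; UserEntity and ErrorEntity are the classes from the question, while the client class, method name, and the choice to throw on errors are made up:
import com.google.gson.Gson;
import java.io.InputStreamReader;
import java.io.Reader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class UserClient {
    private final Gson gson = new Gson();

    public UserEntity fetchUser(int id) throws Exception {
        URL url = new URL("http://www.example.com/user.json?id=" + id);
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        int status = conn.getResponseCode();

        if (status == HttpURLConnection.HTTP_OK) {
            // Success: the body is the user representation.
            try (Reader reader = new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8)) {
                return gson.fromJson(reader, UserEntity.class);
            }
        }
        // Failure: the body is the error envelope shown in the question.
        try (Reader reader = new InputStreamReader(conn.getErrorStream(), StandardCharsets.UTF_8)) {
            ErrorEntity error = gson.fromJson(reader, ErrorEntity.class);
            throw new IllegalStateException("Request failed with HTTP " + status + ": " + gson.toJson(error));
        }
    }
}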
That said, I believe another option would be to use a custom JsonDeserializer and a class that can return either a value or an error.
public class ValueOrErrorDeserializer<V> implements JsonDeserializer<ValueOrError<V>> {
public ValueOrError<V> deserialize(JsonElement json, Type typeOfT,
JsonDeserializationContext context) {
JsonObject object = json.getAsJsonObject();
JsonElement error = object.get("error");
if (error != null) {
ErrorEntity entity = context.deserialize(error, ErrorEntity.class);
return ValueOrError.<V>error(entity);
} else {
Type valueType = ((ParameterizedType) typeOfT).getActualTypeArguments()[0];
V value = (V) context.deserialize(json, valueType);
return ValueOrError.value(value);
}
}
}
You'd then be able to do something like this:
String response = ...
ValueOrError<UserEntity> valueOrError = gson.fromJson(response,
new TypeToken<ValueOrError<UserEntity>>(){}.getType());
if (valueOrError.isError()) {
ErrorEntity error = valueOrError.getError();
...
} else {
UserEntity user = valueOrError.getValue();
...
}
I haven't tried that code out, and I'd still recommend using the HTTP error code, but it gives you an example of how you can use a JsonDeserializer to decide what to do with some JSON.
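The ValueOrError holder and the type-adapter registration aren't shown above; one possible shape for them (every name beyond UserEntity, ErrorEntity and ValueOrErrorDeserializer is an assumption) would be:
public class ValueOrError<V> {
    private final V value;
    private final ErrorEntity error;

    private ValueOrError(V value, ErrorEntity error) {
        this.value = value;
        this.error = error;
    }

    public static <V> ValueOrError<V> value(V value) {
        return new ValueOrError<>(value, null);
    }

    public static <V> ValueOrError<V> error(ErrorEntity error) {
        return new ValueOrError<>(null, error);
    }

    public boolean isError() { return error != null; }
    public V getValue() { return value; }
    public ErrorEntity getError() { return error; }
}
The deserializer would also need to be registered for the parameterized type (GsonBuilder and TypeToken come from com.google.gson), along the lines of:
Gson gson = new GsonBuilder()
        .registerTypeAdapter(
                new TypeToken<ValueOrError<UserEntity>>(){}.getType(),
                new ValueOrErrorDeserializer<UserEntity>())
        .create();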
