I have a JSON file with content like this:
[
{
"Name": "A+",
"Type": "Array",
"Designed by": "Arthur Whitney"
},
{
"Name": "Ada",
"Type": "Compiled, Imperative, Procedural, Object-oriented class-based",
"Designed by": "Tucker Taft, Jean Ichbiah"
},
{
"Name": "C",
"Type": "Compiled, Curly-bracket, Imperative, Procedural",
"Designed by": "Dennis Ritchie"
},
{
"Name": "C#",
"Type": "Compiled, Curly-bracket, Iterative, Object-oriented class-based, Reflective, Procedural",
"Designed by": "Microsoft"
},
{
"Name": "Java",
"Type": "Compiled, Curly-bracket, Imperative, Object-oriented class-based, Procedural, Reflective",
"Designed by": "James Gosling, Sun Microsystems"
},
{
"Name": "JavaScript",
"Type": "Curly-bracket, Interpreted, Reflective, Procedural, Scripting, Interactive mode",
"Designed by": "Brendan Eich"
}
]
I want to write a web application where the user types a word, for example "java", into a textbox and then clicks the search button. After that, the application should search inside this JSON file and show the result on a web page.
I am new to web development. I don't know how to search through a JSON file and display the result. Can anyone help me?
If you just want to get the job done, you can use lodash and then simply do something like this:
var data = [...]; // your JSON file as an array of objects
_.find(data, { Name: "Java" }); // returns the first matching object (lodash 3 / Underscore call this _.findWhere)
Hello everyone.
My question is quite simple, I think.
My use case:
Jenkins receives a huge JSON payload from a GitLab webhook (more than 2,500 lines). The payload is so big that Jenkins is unable to parse it correctly, so I want to get rid of a specific node of more than 2,000 lines that I don't need.
Assume the sample tree from the documentation page:
{
"store": {
"book": [
{
"category": "reference",
"author": "Nigel Rees",
"title": "Sayings of the Century",
"price": 8.95
},
{
"category": "fiction",
"author": "Evelyn Waugh",
"title": "Sword of Honour",
"price": 12.99
},
{
"category": "fiction",
"author": "Herman Melville",
"title": "Moby Dick",
"isbn": "0-553-21311-3",
"price": 8.99
},
{
"category": "fiction",
"author": "J. R. R. Tolkien",
"title": "The Lord of the Rings",
"isbn": "0-395-19395-8",
"price": 22.99
}
],
"bicycle": {
"color": "red",
"price": 19.95
}
},
"expensive": 10
}
How can I get the whole tree except one node? For example, if I want to get everything but the book node:
{
"store": {
"bicycle": {
"color": "red",
"price": 19.95
}
},
"expensive": 10
}
I more or less understand the filters feature, and I assume I need to figure out a proper filter, but it seems filters are only useful for searching nodes based on some criteria. I'm not sure whether they can be used to remove elements based on filtering conditions.
Thanks so much for your help.
JSON Path is a tool for querying, not manipulation. You're not going to be able to alter the input value with it. You need another tool.
I'd suggest looking at something like https://jsonnet.org/ which is designed for template-based transformations and generation.
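If you're able to run a short Java (or Groovy) step instead, another option is to drop the node with Jackson's tree model before handing the payload on. This is only a minimal sketch; it assumes the payload fits in memory and that it has been written to a file named payload.json (both are placeholders for your setup):

import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.node.ObjectNode;

public class DropBookNode {
    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        // payload.json stands in for wherever the webhook body ends up
        ObjectNode root = (ObjectNode) mapper.readTree(new java.io.File("payload.json"));
        // remove the unwanted node in place
        ((ObjectNode) root.get("store")).remove("book");
        System.out.println(mapper.writerWithDefaultPrettyPrinter().writeValueAsString(root));
    }
}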
I'm trying to create SQL tables from a JSON file written following the OpenAPI Specification. Here is an example of an input file I must convert:
"definitions": {
"Order": {
"type": "object",
"properties": {
"id": {
"type": "integer",
"format": "int64"
},
"petId": {
"type": "integer",
"format": "int64"
},
"quantity": {
"type": "integer",
"format": "int32"
},
"shipDate": {
"type": "string",
"format": "date-time"
},
"status": {
"type": "string",
"description": "Order Status",
"enum": [
"placed",
"approved",
"delivered"
]
},
"complete": {
"type": "boolean",
"default": false
}
},
"xml": {
"name": "Order"
}
},
"Category": {
"type": "object",
"properties": {
"id": {
"type": "integer",
"format": "int64"
},
"name": {
"type": "string"
}
},
"xml": {
"name": "Category"
}
},
My aim is to create two tables named "Order" and "Category" whose columns must be the ones listed in the "properties" field. I'm using Java.
The input file is not fixed, so I used Gson to read it. I managed to get an output like this:
CREATE TABLE ORDER
COLUMNS:
id->
type: integer
format: int64
petId->
type: integer
format: int64
quantity->
type: integer
format: int32
shipDate->
type: string
format: date-time
status->
type: string
description: Order Status
Possibilities:
-placed
-approved
-delivered
complete->
type: boolean
default: false
CREATE TABLE CATEGORY
COLUMNS:
id->
type: integer
format: int64
name->
type: string
I'm stuck here, trying to convert the "type" and "format" fields into a type that can be read by PostgreSQL or MySQL. Furthermore, it is hard to work directly on the code to get a readable SQL string due to the presence of nesting. So I thought it might be a good idea to work on the output and "translate" it to SQL. Is there any class/package that could help me read a file like this? I'm trying to avoid the use of thousands of IF ELSE conditions. Thank you.
Your task involves two phases.
One is "parsing" the given JSON object and understanding its content.
The second one is "translating" the parsed content into a working SQL query.
Here your Java program should work as a kind of translation engine.
For parsing the JSON objects, many Java libraries are available.
To translate the parsed JSON into a SQL query, you can simply use basic string manipulation methods; a sketch of the type mapping follows below.
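To avoid long chains of IF ELSE conditions, one approach is a simple lookup table keyed on the type/format pair. This is only a rough sketch; the mappings shown are examples and should be adjusted to your target dialect (PostgreSQL, MySQL, ...):

import java.util.HashMap;
import java.util.Map;

public class OpenApiTypeMapper {

    // Lookup table instead of nested if/else; the key is "type:format"
    private static final Map<String, String> SQL_TYPES = new HashMap<>();
    static {
        SQL_TYPES.put("integer:int32", "INTEGER");
        SQL_TYPES.put("integer:int64", "BIGINT");
        SQL_TYPES.put("string:date-time", "TIMESTAMP");
        SQL_TYPES.put("string:", "VARCHAR(255)");
        SQL_TYPES.put("boolean:", "BOOLEAN");
    }

    public static String toSqlType(String type, String format) {
        String key = type + ":" + (format == null ? "" : format);
        // fall back to TEXT for combinations that are not mapped yet
        return SQL_TYPES.getOrDefault(key, "TEXT");
    }
}

With a method like this, each entry under "properties" can be turned into a column definition such as id BIGINT or shipDate TIMESTAMP while building the CREATE TABLE string.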
I need to serve JSON from my backend to the user. But before sending it over the wire, I need to remove some data because it's confidential: every element whose key starts with conf_.
Assume I have the following JSON source:
{
"store": {
"book": [
{
"category": "reference",
"conf_author": "Nigel Rees",
"title": "Sayings of the Century",
"conf_price": 8.95
},
{
"category": "fiction",
"conf_author": "Evelyn Waugh",
"title": "Sword of Honour",
"conf_price": 12.99
},
{
"category": "fiction",
"conf_author": "Herman Melville",
"title": "Moby Dick",
"isbn": "0-553-21311-3",
"conf_price": 8.99
},
{
"category": "fiction",
"conf_author": "J. R. R. Tolkien",
"title": "The Lord of the Rings",
"isbn": "0-395-19395-8",
"conf_price": 22.99
}
],
"bicycle": {
"color": "red",
"conf_price": 19.95
}
},
"expensive": 10
}
Since the structure of the source JSON may vary (it is not known), I need a way to identify the elements to remove by a pattern based on the key name (^conf_).
So the resulting JSON should be:
{
"store": {
"book": [
{
"category": "reference",
"title": "Sayings of the Century"
},
{
"category": "fiction",
"title": "Sword of Honour"
},
{
"category": "fiction",
"title": "Moby Dick",
"isbn": "0-553-21311-3"
},
{
"category": "fiction",
"title": "The Lord of the Rings",
"isbn": "0-395-19395-8"
}
],
"bicycle": {
"color": "red"
}
},
"expensive": 10
}
Since my source JSON will have 1M+ entries in the books array, where every entry will have 100+ fields (child objects), I'm looking for a stream/event-based approach like StAX rather than parsing the whole JSON into a JSONObject for manipulation, for performance and resource reasons.
I looked at things like Jolt, JSONPath and JsonSurfer, but these libraries haven't gotten me anywhere so far.
Can anyone provide some details on how my use case could be implemented best?
Regards!
You can use Jackson's Streaming API, which can parse huge JSON, even gigabytes in size. It can process huge files without loading them completely into memory, and it lets you pick the data you want and ignore what you don't want. A minimal sketch is shown below.
Read more: http://wiki.fasterxml.com/JacksonStreamingApi
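For your conf_ case, a rough sketch of the stream-through filtering with JsonParser and JsonGenerator could look like this (the source/target streams and the prefix check are assumptions based on your description):

import com.fasterxml.jackson.core.JsonFactory;
import com.fasterxml.jackson.core.JsonGenerator;
import com.fasterxml.jackson.core.JsonParser;
import com.fasterxml.jackson.core.JsonToken;
import java.io.File;
import java.io.OutputStream;

public class ConfFieldFilter {

    // Copies the JSON from source to target, dropping every field whose name starts with "conf_"
    public static void filter(File source, OutputStream target) throws Exception {
        JsonFactory factory = new JsonFactory();
        try (JsonParser parser = factory.createParser(source);
             JsonGenerator generator = factory.createGenerator(target)) {
            while (parser.nextToken() != null) {
                if (parser.currentToken() == JsonToken.FIELD_NAME
                        && parser.getCurrentName().startsWith("conf_")) {
                    parser.nextToken();    // move to the field's value
                    parser.skipChildren(); // skips nested objects/arrays; a no-op for scalar values
                    continue;              // neither the name nor the value is written out
                }
                generator.copyCurrentEvent(parser); // pass every other token through unchanged
            }
        }
    }
}

Because only one token is held at a time, memory usage stays flat no matter how many entries the books array contains.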
I am using Mule to transform some web service responses in my project, and currently I am using the DataWeave message transformer.
The JSON that I need to transform:
{
"odata.metadata": "http://mchwtatmsdb/Across/CrossTank/api/v1/$metadata#Translations",
"value": [
{
"SourceSentence": {
"Id": 2750901,
"Text": "Refrigerator:",
"Language": 1033
},
"TargetSentence": {
"Id": 2750902,
"Text": "Kühlschrank:",
"Language": 1031
},
"Id": 2264817,
"Similarity": 100,
"CreationDate": "2009-02-25T12:56:15",
"Creator": "41e8d49d-0de7-4a96-a220-af96d94fe4b0",
"ModificationDate": "2009-02-25T12:56:15",
"Modificator": "00000000-0000-0000-0000-000000000000",
"State": "SmartInserted",
"Note": ""
},
{
"SourceSentence": {
"Id": 2750906,
"Text": "Refrigerator*",
"Language": 1033
},
"TargetSentence": {
"Id": 2750907,
"Text": "Kühlschrank*",
"Language": 1031
},
"Id": 2264822,
"Similarity": 100,
"CreationDate": "2009-02-25T12:55:46",
"Creator": "41e8d49d-0de7-4a96-a220-af96d94fe4b0",
"ModificationDate": "2009-02-25T12:55:46",
"Modificator": "00000000-0000-0000-0000-000000000000",
"State": "SmartInserted",
"Note": ""
}
]
}
I am basically using the transformer and defining metadata corresponding to the JSON files included in the project.
So the transformer part is quite simple:
<dw:set-payload><![CDATA[%dw 1.0
%output application/json
---
{
"odata.metadata": payload."odata.metadata",
value: payload.value map ((value , indexOfValue) -> {
SourceSentence: {
Id: value.SourceSentence.Id,
Text: value.SourceSentence.Text as :string,
Language: value.SourceSentence.Language
},
TargetSentence: {
Id: value.TargetSentence.Id,
Text: value.TargetSentence.Text,
Language: value.TargetSentence.Language
},
Similarity: value.Similarity
})
}]]></dw:set-payload>
The transformation runs as expected and picks up the fields that I've set in the DataWeave transformer, but after the transformer is applied to the JSON string, the encoding somehow changes and the output doesn't show special characters correctly. For example:
{
"odata.metadata": "http://mchwtatmsdb/Across/CrossTank/api/v1/$metadata#Translations",
"value": [
{
"SourceSentence": {
"Id": 2750901,
"Text": "Refrigerator:",
"Language": 1033
},
"TargetSentence": {
"Id": 2750902,
"Text": "K252hlschrank:",
"Language": 1031
},
"Similarity": 100
},
{
"SourceSentence": {
"Id": 2750906,
"Text": "Refrigerator*",
"Language": 1033
},
"TargetSentence": {
"Id": 2750907,
"Text": "K252hlschrank*",
"Language": 1031
},
"Similarity": 100
}
]
}
"Text": "K252hlschrank*" part of the string is showing "ü" character as "252" i tried to run project both on the Windows an Linux environment. On linux, character is shown as "\u00" so i think this is somehow related OS problem. I've tried several things to fix the problem.
Tried to change project properties, set encoding to "UTF-8". It didn't work.
Tried to change run configuration, set encoding to "UTF-8". It didn't work.
Tried to give -Dfile.encoding="UTF-8" parameter into run parameters of Java, again it didn't work.
What is the source of this problem? Are the transformers directly using the operating system's encoding? Without the transformation, the main JSON file shows "ü" correctly, so there is no encoding problem there.
I solved this problem by changing my Windows language settings from Turkish to English (United Kingdom)... I don't know why it has an effect, but it did the magic.
I have a CSV file similar to this:
"name.firstName","name.givenName","name.DisplayName","phone.type","phone.value"
"john","maverick","John Maverick","mobile","123-123-123"
"jim","lasher","Jim Lasher","mobile","123-123-123"
I want to convert the 2nd and 3rd rows into JSON objects, using the first row as the header. So the result will be:
[
{
"name": {
"firstName": "john",
"givenName": "maverick",
"DisplayName": "John Maverick"
},
"phone": {
"type": "mobile",
"value": "123-123-123"
}
},
{
"name": {
"firstName": "jim",
"givenName": "lasher",
"DisplayName": "Jim Lasher"
},
"phone": {
"type": "mobile",
"value": "123-123-123"
}
}
]
Any idea how to achieve this?
Here is a Java library that may help you. http://www.jonathanhfisher.co.uk/playpen/csv2json/index.htm
Here is a JavaScript library that may or may not be useful to you. http://www.cparker15.com/code/utilities/csv-to-json/
And finally, here is a past answer that may be useful. I like the OpenCSV solution. However, instead of JAXB, you could use Jackson. Converting a CSV file to a JSON object in Java
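For reference, a rough Jackson-only sketch that builds the nested objects from the dotted headers could look like this. It assumes exactly two-level headers like name.firstName, uses a naive quote-stripping split instead of a real CSV parser (OpenCSV handles quoting and escaping properly), and the file name people.csv is just a placeholder:

import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.node.ArrayNode;
import com.fasterxml.jackson.databind.node.ObjectNode;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;

public class CsvToNestedJson {
    public static void main(String[] args) throws IOException {
        ObjectMapper mapper = new ObjectMapper();
        ArrayNode result = mapper.createArrayNode();

        List<String> lines = Files.readAllLines(Paths.get("people.csv"));
        // first row is the header, e.g. "name.firstName","phone.type", ...
        String[] headers = lines.get(0).replace("\"", "").split(",");

        for (String line : lines.subList(1, lines.size())) {
            String[] values = line.replace("\"", "").split(",");
            ObjectNode row = mapper.createObjectNode();
            for (int i = 0; i < headers.length; i++) {
                String[] path = headers[i].split("\\.");   // "name.firstName" -> ["name", "firstName"]
                row.with(path[0]).put(path[1], values[i]); // creates the nested object on first use
            }
            result.add(row);
        }
        System.out.println(mapper.writerWithDefaultPrettyPrinter().writeValueAsString(result));
    }
}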