I have a JSON file with this structure...
{"id":"1","name":"name","categories":["category1","category2","category3"],"type":"store"}
{"id":"2","name":"name","categories":["category1","category2","category3"],"type":"store"}
which doesn't have a key or commas separating each object. So when I use this code...
File input = new File("test.json");
ObjectMapper mapper = new ObjectMapper();
Map obj = mapper.readValue(input, Map.class);
the obj variable only contains the first object in the JSON file. That makes sense: the file is really a sequence of separate root-level objects, and the mapper stops after reading the first complete value.
I tried adding one by wrapping the objects like so...
{ "Key": [
{"id":"1","name":"name","categories":["category1","category2","category3"],"type":"store"},
{"id":"2","name":"name","categories":["category1","category2","category3"],"type":"store"}
] }
including adding commas to separate the objects, since the original file had none.
While this works...
I have multiple json files that I have to work with
The file sizes are fairly big, so it takes a long time to add the "Key" wrapper as I did in the example.
I'm hoping to avoid this altogether but not sure if I can. Is there a way to read the json file using the original format into a Map so I can then filter the data as needed?
There is a simple solution that doesn't require modifying the file. Read your file line by line and feed each single line to your ObjectMapper. You will get many Map instances, which you can store in a List, a JsonArray, or another map that you create in your code. Your code might look like this:
ObjectMapper mapper = new ObjectMapper();
List<Map<String, Object>> list = new ArrayList<>();
try (BufferedReader br = new BufferedReader(new FileReader(new File("test.json")))) {
    String line;
    while ((line = br.readLine()) != null) {
        // each line is a complete JSON object, so it can be parsed on its own
        Map<String, Object> obj = mapper.readValue(line, Map.class);
        list.add(obj);
    }
}
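As a side note (not part of the original answer), Jackson can also read a sequence of root-level JSON values directly, without the manual line-by-line loop. A minimal sketch, assuming a reasonably recent Jackson 2.x and the same test.json:

ObjectMapper mapper = new ObjectMapper();
List<Map<String, Object>> list = new ArrayList<>();
// readValues() keeps reading root-level values until the input is exhausted,
// so the objects need no wrapping array and no separating commas
try (MappingIterator<Map<String, Object>> it =
        mapper.readerFor(Map.class).readValues(new File("test.json"))) {
    while (it.hasNext()) {
        list.add(it.next());
    }
}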
I have a Map<String, Object> object, which is obtained from:
new org.yaml.snakeyaml.Yaml().load(someStr)
I want to dump object to a local file, when I use:
new com.fasterxml.jackson.databind.ObjectMapper().writeValue(new File(filePath), object)
I got a proper file, but if some string field in that object is too long, it gets broken across multiple lines, with line breaks added, like this:
SELECT materialid, accountid, device_type, material_type, content
FROM cpc01.material_style_##
What should I do to make ObjectMapper dump string fields onto one line instead of multiple lines?
Thanks to @flyx: SnakeYAML can handle it. My code is below:
DumperOptions options = new DumperOptions();
options.setSplitLines(false); // remove the line breaks
options.setDefaultFlowStyle(DumperOptions.FlowStyle.BLOCK); // remove quotes
options.setIndent(2);
options.setPrettyFlow(true); // remove curly brackets
Yaml yaml = new Yaml(options);
// omit some code
yaml.dump(finalYamlMap, new FileWriter(fileName));
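For completeness, here is one way the omitted part might look; this is a minimal, self-contained sketch, and the map contents and file name are hypothetical stand-ins for the real data:

Map<String, Object> finalYamlMap = new LinkedHashMap<>();
finalYamlMap.put("sql", "SELECT materialid, accountid, device_type, material_type, content FROM cpc01.material_style_##");
// try-with-resources closes the writer even if dump() fails
try (FileWriter out = new FileWriter("dump.yaml")) {
    yaml.dump(finalYamlMap, out);
}

With setSplitLines(false), the long SQL string stays on a single line in the output.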
I've successfully created a JSON file with gson but now that I want to parse that same file, the encoding gets all messed up.
This is the code I use to read the JSON file:
BufferedReader jsonFile = new BufferedReader(new FileReader("file.json"));
Map<String, List<long[]>> trafficInput = new HashMap<>();
trafficInput = gson.fromJson(jsonFile, HashMap.class);
I can't seem to figure out how to ensure the file gets parsed the right way.
For instance, this valid JSON code from the file:
{"paris":[[1485907200000,182184411,41274],[1485993600000,151646118,36697]],"london":[[1485907200000,30200160,155827]]}
...gets parsed like this:
{"paris":[[1.4859072E12,1.82184411E8,41274.0],[1.4859936E12,1.51646118E8,36697.0]],"london":[[1.4859072E12,3.020016E7,155827.0]]}
This messes up the rest of the code as the longs aren't longs anymore.
For instance, if I try to print out a value, like so:
System.out.println(trafficInput.get("paris").get(0)[0]);
... I get this error:
Exception in thread "main" java.lang.ClassCastException: java.util.ArrayList cannot be cast to [J
Any help?
This is happening because of the following line:
trafficInput = gson.fromJson(jsonFile, HashMap.class);
This line instructs Gson to deserialize the JSON into a HashMap without any type information, so Gson falls back to its default conversions: every JSON number becomes a Double and every JSON array becomes an ArrayList. That is why the println statement throws a ClassCastException: the element is an ArrayList, not a long[].
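To see that default behavior in isolation, here is a minimal standalone sketch (not from the original answer):

Gson gson = new Gson();
Map<?, ?> raw = gson.fromJson("{\"n\":1485907200000}", Map.class);
System.out.println(raw.get("n"));            // 1.4859072E12 -- a Double, not a Long
System.out.println(raw.get("n").getClass()); // class java.lang.Double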
All you need to do is specify a TypeToken when calling the fromJson method, and it will take care of the types, e.g.:
Gson gson = new Gson();
Type type = new TypeToken<Map<String, List<long[]>>>(){}.getType();
Map<String, List<long[]>> trafficInput = gson.fromJson(
        "{\"paris\":[[1485907200000,182184411,41274],[1485993600000,151646118,36697]],\"london\":[[1485907200000,30200160,155827]]}",
        type);
System.out.println(trafficInput);
System.out.println(gson.toJson(trafficInput));
The toJson call on the last line prints the numbers without scientific notation, and the long values survive the round trip. (Printing the map directly shows array references like [J@..., since long[] does not override toString.)
I am trying to write a test case where I want to stream JSON objects, separated by newlines, from a JSON file into Java. I want to read one event object at a time and deserialize it.
The json file is of the form:
{"event":[{"D49-64":0,"Bezeichnung":"A 41","D33-48":0}]}
{"event":[{"D49-64":1,"Bezeichnung":"A 41","D33-48":0}]}
Any suggestions to stream the objects in Java will be beneficial.
The blob that you have posted is not a single valid JSON object, but two individual objects.
To stream this, you would end up with something like the following:
String pathToFile = "/path/to/something.txt";
try (BufferedReader someReader = new BufferedReader(new FileReader(pathToFile))) {
    String someData;
    while ((someData = someReader.readLine()) != null) {
        // each line holds one complete JSON object
        JSONObject o = new JSONObject(someData);
        doSomethingWith(o);
    }
}
The library I generally use for JSON manipulation is org.json
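As a side note, if you prefer plain Java collections over JSONObject, reasonably recent releases of org.json can convert each parsed object; a minimal sketch:

// toMap() converts nested JSON structures into Maps and Lists
Map<String, Object> asMap = new JSONObject(someData).toMap();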
I was solving the same problem: reading data from a file that just contains a sequence of JSON objects. I am using the com.fasterxml.jackson library for JSON manipulation. While it has no direct method for exactly this, the solution is still quite simple:
// InputStream in - input stream with your data
ObjectMapper mapper = new ObjectMapper();
JsonParser parser = mapper.getFactory().createParser(in);
ObjectNode nextObject;
do {
    nextObject = mapper.readTree(parser); // returns null when the end of the stream is reached
    if (nextObject != null) {
        // process your object here
    }
} while (nextObject != null);
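If you then want each object as a plain Map rather than a tree node (as in the first question above), Jackson can convert it in place; a minimal sketch of the processing step:

// inside the loop, once nextObject is non-null:
Map<String, Object> asMap = mapper.convertValue(nextObject, Map.class);
System.out.println(asMap.get("event"));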
I have a CSV file which has a header in the first line. I want to convert it to List<Map<String, String>>, where each Map<String, String> in the list represents a record in the file. The key of the map is the header and the value is the actual value of the field.
What I have so far:
BufferedReader br = <handle to file>;
// Get the headers to build the map.
String[] headers = br.lines().limit(1).toArray(size -> new String[size]);
Stream<String> recordStream = br.lines().skip(1);
What further operations can I perform on recordStream so that I can transform it to List<Map<String, String>>?
Sample CSV file is:
header1,header2,header3 ---- Header line
field11,field12,field13 ----> need to transform to Map where entry would be like header1:field11 header2:field12 and so on.
field21,field22,field23
field31,field32,field33
Finally all these Maps need to be collected to a List.
The following will work. The header line is retrieved by calling readLine directly on the BufferedReader and by splitting around ,. Then, the rest of the file is read: each line is split around , and mapped to a Map with the corresponding header.
// assumes: import static java.util.stream.Collectors.toMap;
// assumes: import static java.util.stream.Collectors.toList;
try (BufferedReader br = new BufferedReader(...)) {
    String[] headers = br.readLine().split(",");
    List<Map<String, String>> records =
        br.lines().map(s -> s.split(","))
                  .map(t -> IntStream.range(0, t.length)
                                     .boxed()
                                     .collect(toMap(i -> headers[i], i -> t[i])))
                  .collect(toList());
    System.out.println(Arrays.toString(headers));
    System.out.println(records);
}
A very important note here is that BufferedReader.lines() does not return a fresh Stream when it is called: we must not skip 1 line after we read the header since the reader will already have advanced to the next line.
As a side note, I used a try-with-resources construct so that the BufferedReader can be properly closed.
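To make the note about lines() concrete, here is a tiny self-contained demonstration (not part of the original answer):

// lines() picks up wherever the reader currently is -- it is a view, not a fresh stream
try (BufferedReader br = new BufferedReader(new StringReader("h1,h2\na,b\nc,d"))) {
    System.out.println(br.readLine());      // h1,h2
    System.out.println(br.lines().count()); // 2 -- only the lines after the header remain
}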
I know this is a bit of an old question, but I ran into the same problem, and created a quick sample of the Commons CSV solution mentioned by Tagir Valeev:
Reader in = new FileReader("path/to/file.csv");
Iterable<CSVRecord> records = CSVFormat.RFC4180.withFirstRecordAsHeader().parse(in);
List<Map<String, String>> listOfMaps = new ArrayList<>();
for (CSVRecord record : records) {
listOfMaps.add(record.toMap());
}
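Each resulting map is keyed by the header names, so a lookup might look like this (using the header names from the earlier sample file):

for (Map<String, String> row : listOfMaps) {
    System.out.println(row.get("header1")); // field11, field21, field31
}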
The JSON example file consists of:
{
  "1st_key": "value1",
  "2nd_key": "value2",
  "object_keys": {
    "obj_1st": "value1",
    "obj_2nd": "value2",
    "obj_3rd": "value3"
  }
}
I read the JSON file into a String with this StringBuilder method, in order to add the newlines into the string itself. So the String looks exactly like the JSON file above.
public String getJsonContent(String fileName) {
StringBuilder result = new StringBuilder("");
File file = new File(fileName);
try (Scanner scanner = new Scanner(file)) {
while (scanner.hasNextLine()) {
String line = scanner.nextLine();
result.append(line).append("\n");
}
} catch (IOException e) {
e.printStackTrace();
}
return result.toString();
}
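As a side note (not in the original question), on Java 7 and later the same file-to-String step can be done in a couple of lines with java.nio, which also preserves the newlines:

// assumes: import java.nio.charset.StandardCharsets; import java.nio.file.*;
String json = new String(Files.readAllBytes(Paths.get(fileName)), StandardCharsets.UTF_8);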
Then I translate the JSON file into an Object using MongoDB API (with DBObject, BasicDBObject and util.JSON) and I call out the Object section I need to change, which is 'object_keys':
File jsonFile = new File("C:\\example.json");
String jsonString = getJsonContent(jsonFile.getAbsolutePath());
DBObject jsonObject = (DBObject)JSON.parse(jsonString);
BasicDBObject objectKeys = (BasicDBObject) jsonObject.get("object_keys");
Now I can write new values into the object using the put method like this:
objectKeys.put("obj_1st","NEW_VALUE1");
objectKeys.put("obj_2nd","NEW_VALUE2");
objectKeys.put("obj_3rd","NEW_VALUE3");
Note: the following part is not needed; check out my answer below.
After I have changed the object, I need to write it back into the JSON file, so I need to translate the object into a String. There are two methods to do this; either one works.
String newJSON = jsonObject.toString();
or
String newJSON = JSON.serialize(jsonObject);
Then I write the content back into the file using PrintWriter
PrintWriter writer = new PrintWriter("C:\\example.json");
writer.print(newJSON);
writer.close();
The problem I am facing now is that the String that gets written is on a single line with no formatting whatsoever. Somewhere along the way it lost all the newlines. So it basically looks like this:
{"1st_key": "value1","2nd_key": "value2","object_keys": {"obj_1st": "NEW_VALUE1","obj_2nd": "NEW_VALUE2","obj_3rd": "NEW_VALUE3"}}
I need to write the JSON file back in the same format as shown in the beginning, keeping all the tabulation, spaces etc.
Is this possible somehow ?
Formatting output the way you describe is usually called pretty-printing (or "beautifying") JSON; see, for example, the question "Output beautiful json". A quick Google search for that term turned up what I believe will solve your problem.
Solution
You're going to have to use a JSON parser of some sort. I personally prefer org.json and would recommend it if you are manipulating JSON data, but you may also like json-io, which is really good for JSON serialization and has no external dependencies.
With json-io, it's as simple as
String formattedJson = JsonWriter.formatJson(jsonObject.toString());
With org.json, you simply pass an int (the indent factor) to the toString method.
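For instance, a minimal sketch reusing the jsonString variable from the question:

// org.json: toString(int indentFactor) pretty-prints with that many spaces per nesting level
String formattedJson = new JSONObject(jsonString).toString(2);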
Thanks Saraiva! By Googling around with the words 'pretty printing JSON' I found a surprisingly simple solution: the Google Gson library. I downloaded the .jar and added it to my project in Eclipse.
These are the new imports I needed:
import com.google.gson.Gson;
import com.google.gson.GsonBuilder;
Since I already had the JSON Object (jsonObject) readily available from my previous code, I only needed to add two new lines:
Gson gson = new GsonBuilder().setPrettyPrinting().create();
String newJSON = gson.toJson(jsonObject);
Now when I use writer.print(newJSON); it will write the JSON in the right format, beautifully formatted and indented.
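Putting it together, the write-back step from the question can also use try-with-resources so the file handle is closed even if printing fails; a minimal sketch, assuming the same path as above:

Gson gson = new GsonBuilder().setPrettyPrinting().create();
String newJSON = gson.toJson(jsonObject);
try (PrintWriter writer = new PrintWriter("C:\\example.json")) {
    writer.print(newJSON);
}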