Hi, I'm attempting to deserializer.deserialize() this data from Google Analytics:
[[/s417, 14945, 93.17823577906019], [/s413, 5996, 72.57178438000356],
[/s417/, 3157, 25.690567351200837], [/s420, 2985, 44.12472727272727],
[/s418, 2540, 64.60275150472916], [/s416, 2504, 69.72643979057591],
[/s415, 2379, 44.69660861594867], [/s422, 2164, 57.33786505538772],
[/s421, 2053, 48.18852894317578], [/s414, 1839, 93.22588376273218],
[/s412, 1731, 54.8431860609832], [/s411, 1462, 71.26186830015314],
[/s419, 1423, 51.88551401869159], [/, 63, 11.303571428571429],
[/s420/, 22, 0.3333333333333333], [/s413/, 21, 7.947368421052632],
[/s416/, 16, 96.0], [/s421/, 15, 0.06666666666666667], [/s411/, 13,
111.66666666666667], [/s422/, 13, 0.07692307692307693], [/g150, 11, 0.09090909090909091], [/s414/, 10, 2.0], [/s418/, 10, 0.4444444444444444], [/s415/, 9, 0.2222222222222222], [/s412/, 8, 0.6666666666666666], [/s45, 6, 81.0], [/s164, 5, 45.25], [/s28, 5, 16.2], [/s39, 5, 25.2], [/s27, 4, 59.5], [/s29, 4, 26.5], [/s365, 3, 31.666666666666668], [/s506, 3, 23.333333333333332], [/s1139, 2, 30.5], [/s296, 2, 11.0], [/s311, 2, 13.5], [/s35, 2, 55.0], [/s363, 2, 15.5], [/s364, 2, 17.5], [/s419/, 2, 0.0], [/s44, 2, 85.5], [/s482, 2, 28.5], [/s49, 2, 29.5], [/s9, 2, 77.0], [/s146, 1, 13.0], [/s228, 1, 223.0], [/s229, 1, 54.0], [/s231, 1, 0.0], [/s30, 1, 83.0], [/s312, 1, 15.0], [/s313, 1, 155.0], [/s316, 1, 14.0], [/s340, 1, 22.0], [/s350, 1, 0.0], [/s362, 1, 24.0], [/s43, 1, 54.0], [/s442, 1, 87.0], [/s465,
1, 14.0], [/s468, 1, 67.0], [/s47, 1, 41.0], [/s71, 1, 16.0], [/s72,
1, 16.0], [/s87, 1, 48.0], [/s147, 0, 0.0], [/s417, 0, 0.0]]
With this:
@Immutable
private static JSONDeserializer<List<List<String>>> deserializer = new JSONDeserializer<List<List<String>>>();
It's failing silently on deserialization. The only error I'm getting is from the XHTML:
com.sun.faces.context.PartialViewContextImpl$PhaseAwareVisitCallback
visit
SEVERE: javax.el.ELException: /views/guide/edit.xhtml #257,102 value="#{GuideEditController.visitsByScene}": flexjson.JSONException:
Missing value at character 2
Any clues?
marekful had the right idea: using replaceAll("[^\d,[]\,]+", "") to remove the offending characters did the trick.
I am trying to make a simple JSON-DB in Java, since the current libraries on Maven are horrendously overcomplicated. I have this method that takes a key and a value, puts them into a JSONObject, and writes it to my database.json file.
public static void put(String path, String key, Object[] value) {
    // creates new JSONObject where data will be stored
    JSONObject jsondb = new JSONObject();
    try {
        // adds key and value to JSONObject
        jsondb.put(key, value);
    } catch (JSONException e) {
        e.printStackTrace();
    } // end try-catch
    try (PrintWriter out = new PrintWriter(new FileWriter(path, true))) {
        out.write(jsondb.toString());
        out.write(',');
    } catch (Exception e) {
        e.printStackTrace();
    } // end try-catch
} // end put()
Here is my main method where I write some data to the file
public static void main(String[] args) throws Exception {
    String path = "app.json";
    Object[] amogus = {"Hello", 1, 2, 3, 4, 5};
    Object[] amogus1 = {"Hello", 1, 2, 3, 4, 5, 6, 7, 8, 9, 0};
    JsonDB db = new JsonDB(path);
    db.put(path, "arr", amogus);
    db.put(path, "arr1", amogus1);
}
What happens is that it saves each entry in its own set of curly braces, so when I write to the file more than once, as in my main method, the file ends up looking like this:
{"arr": ["Hello", 1, 2, 3, 4, 5]}{"arr1": ["Hello", 1, 2, 3, 4, 5, 6, 7, 8, 9, 0]}
This causes VS Code to flag an error, since this isn't valid JSON. How would I make the method remove the extra curly braces and add commas so that the output above becomes valid JSON? I can't seem to find the documentation for this library (the library is org.json on Maven).
This is a syntax issue. Your JSON is not valid, because JSON syntax wants all of your data inside a single pair of curly braces, like this:
{
"arr": ["Hello", 1, 2, 3, 4, 5],
"arr1": ["Hello", 1, 2, 3, 4, 5, 6, 7, 8, 9, 0]
}
instead of this, which is not valid JSON:
{"arr": ["Hello", 1, 2, 3, 4, 5]}
{"arr1": ["Hello", 1, 2, 3, 4, 5, 6, 7, 8, 9, 0]}
Build your data in a single map/JSONObject and let your JSON library convert (serialize) that object into valid JSON.
I solved this problem by reading all of the existing content in the file, adding the JSON object that I wanted to write, and then writing it all back at once.
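For illustration, here is a rough sketch of that read-merge-write approach with org.json; the file handling and the throws clause are simplified assumptions, not the exact code used:
import java.io.File;
import java.io.FileReader;
import java.io.FileWriter;
import org.json.JSONObject;
import org.json.JSONTokener;

public static void put(String path, String key, Object[] value) throws Exception {
    File file = new File(path);
    JSONObject db;
    // Load whatever is already in the file, or start with an empty object.
    if (file.exists() && file.length() > 0) {
        try (FileReader in = new FileReader(file)) {
            db = new JSONObject(new JSONTokener(in));
        }
    } else {
        db = new JSONObject();
    }
    db.put(key, value); // add or overwrite this entry
    // Overwrite the file (no append) so it always contains a single valid JSON object.
    try (FileWriter out = new FileWriter(file, false)) {
        out.write(db.toString(2)); // pretty-print with a 2-space indent
    }
}
Because the whole object is rewritten each time, the file never ends up with back-to-back {...}{...} blocks.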
As part of my computer science IA I am creating a tool that reads the match history and details of Dota games and generates overall and per-hero stats. To do this I have accessed the Valve API, grabbed a few JSONs of matches and match history from it, and cut them down slightly so they only contain the information I need.
Below is a sample of the details of one of the matches in JSON format:
"result": {
"players": [
{
"account_id": 40884464,
"player_slot": 0,
"hero_id": 31,
"kills": 8,
"deaths": 8,
"assists": 14,
"last_hits": 72,
"denies": 0,
"gold_per_min": 304,
"xp_per_min": 412,
"level": 18,
},
{
"account_id": 70638797,
"player_slot": 1,
"hero_id": 35,
"kills": 6,
"deaths": 7,
"assists": 4,
"last_hits": 212,
"denies": 37,
"gold_per_min": 371,
"xp_per_min": 356,
"level": 17,
},
{
"account_id": 76281087,
"player_slot": 2,
"hero_id": 5,
"kills": 3,
"deaths": 13,
"assists": 10,
"last_hits": 22,
"denies": 0,
"gold_per_min": 215,
"xp_per_min": 259,
"level": 14,
},
{
"account_id": 4294967295,
"player_slot": 3,
"hero_id": 28,
"kills": 11,
"deaths": 11,
"assists": 11,
"last_hits": 166,
"denies": 18,
"gold_per_min": 413,
"xp_per_min": 485,
"level": 20,
},
{
"account_id": 81692493,
"player_slot": 4,
"hero_id": 2,
"kills": 1,
"deaths": 9,
"assists": 7,
"last_hits": 135,
"denies": 8,
"gold_per_min": 261,
"xp_per_min": 314,
"level": 16,
},
{
"account_id": 10101141,
"player_slot": 128,
"hero_id": 30,
"kills": 7,
"deaths": 8,
"assists": 25,
"last_hits": 90,
"denies": 2,
"gold_per_min": 382,
"xp_per_min": 421,
"level": 18,
},
{
"account_id": 62101519,
"player_slot": 129,
"hero_id": 7,
"kills": 6,
"deaths": 8,
"assists": 20,
"last_hits": 305,
"denies": 0,
"gold_per_min": 556,
"xp_per_min": 585,
"level": 22,
},
{
"account_id": 134700328,
"player_slot": 130,
"hero_id": 4,
"kills": 17,
"deaths": 2,
"assists": 13,
"last_hits": 335,
"denies": 16,
"gold_per_min": 729,
"xp_per_min": 724,
"level": 25,
},
{
"account_id": 35357393,
"player_slot": 131,
"hero_id": 83,
"kills": 4,
"deaths": 4,
"assists": 23,
"last_hits": 16,
"denies": 4,
"gold_per_min": 318,
"xp_per_min": 407,
"level": 18,
},
{
"account_id": 4294967295,
"player_slot": 132,
"hero_id": 101,
"kills": 13,
"deaths": 8,
"assists": 12,
"last_hits": 57,
"denies": 3,
"gold_per_min": 390,
"xp_per_min": 405,
"level": 18,
}
],
"radiant_win": false,
"duration": 2682,
"start_time": 1461781997,
"match_id": 2324299045,
"match_seq_num": 2036251155,
"cluster": 133,
"game_mode": 1,
"flags": 0,
"engine": 1,
"radiant_score": 30,
"dire_score": 48
}
Using an IntelliJ plugin I have created 3 Java classes, one for the match result, one for the details of the result, and one for the details of the players within the result, each with its fields and getters/setters:
TestMatch fields:
private TestMatchResult result;
TestMatchResult fields:
private int duration;
private int start_time;
private int cluster;
private boolean radiant_win;
private int match_seq_num;
private int engine;
private TestMatchResultPlayers[] players;
private long match_id;
private int dire_score;
private int flags;
private int game_mode;
private int radiant_score;
TestMatchResultPlayers fields:
private int kills;
private int gold_per_min;
private int last_hits;
private int account_id;
private int assists;
private int level;
private int player_slot;
private int xp_per_min;
private int hero_id;
private int denies;
private int deaths;
I have downloaded the Gson library and added it as a dependency of the IntelliJ project.
I am trying to parse the JSON into the Java classes as an object, and I would like to do that for all of the match JSONs, but I am not quite sure how to do that at the moment. All I have is:
public static void getMatch()
{
    Gson gson = new Gson();
}
Could someone who understands Gson better than I do give me a little guidance on how to parse that JSON into the class(es) as an object, for several match JSONs? Once I've done that, the rest of what I need to do is easy, since it's just a case of taking the variables, running calculations on them, and displaying the results. If it's not possible or practical, I can make a test CSV and read from that instead, as I know how to use those; I've only just come across JSON because that's what the Valve API returns requests in, so I figured I may as well learn how to use it.
Thanks!
You need to use the method Gson.fromJson().
Example:
public static void getMatch()
{
    Gson gson = new Gson();
    TestMatch tm = gson.fromJson(jsonString, TestMatch.class);
}
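If each match lives in its own file, here is a hedged sketch of how the same call could be extended to several match JSONs; the getMatches method and the java.nio file reading are illustrative, not part of the original answer:
import com.google.gson.Gson;
import java.io.IOException;
import java.io.Reader;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

public static List<TestMatch> getMatches(List<Path> matchFiles) throws IOException {
    Gson gson = new Gson();
    List<TestMatch> matches = new ArrayList<>();
    for (Path file : matchFiles) {
        try (Reader reader = Files.newBufferedReader(file)) {
            // Gson maps each JSON field onto the field with the same name in
            // TestMatch / TestMatchResult / TestMatchResultPlayers.
            matches.add(gson.fromJson(reader, TestMatch.class));
        }
    }
    return matches;
}
From there the stats are just a matter of reading each match's fields (players, duration, and so on) through the generated getters and running your calculations on them.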
There have been many MaltParser and/or NLTK related questions:
Malt Parser throwing class not found exception
How to use malt parser in python nltk
MaltParser Not Working in Python NLTK
NLTK MaltParser won't parse
Dependency parser using NLTK and MaltParser
Dependency Parsing using MaltParser and NLTK
Parsing with MaltParser engmalt
Parse raw text with MaltParser in Java
Now there's a more stable version of the MaltParser API in NLTK (https://github.com/nltk/nltk/pull/944), but there are issues when it comes to parsing multiple sentences at the same time.
Parsing one sentence at a time seems fine:
>>> _path_to_maltparser = '/home/alvas/maltparser-1.8/dist/maltparser-1.8/'
>>> _path_to_model = '/home/alvas/engmalt.linear-1.7.mco'
>>> mp = MaltParser(path_to_maltparser=_path_to_maltparser, model=_path_to_model)
>>> sent = 'I shot an elephant in my pajamas'.split()
>>> sent2 = 'Time flies like banana'.split()
>>> print(mp.parse_one(sent).tree())
(pajamas (shot I) an elephant in my)
But parsing a list of sentences doesn't return a DependencyGraph object:
>>> _path_to_maltparser = '/home/alvas/maltparser-1.8/dist/maltparser-1.8/'
>>> _path_to_model = '/home/alvas/engmalt.linear-1.7.mco'
>>> mp = MaltParser(path_to_maltparser=_path_to_maltparser, model=_path_to_model)
>>> sent = 'I shot an elephant in my pajamas'.split()
>>> sent2 = 'Time flies like banana'.split()
>>> print(mp.parse_one(sent).tree())
(pajamas (shot I) an elephant in my)
>>> print(next(mp.parse_sents([sent,sent2])))
<listiterator object at 0x7f0a2e4d3d90>
>>> print(next(next(mp.parse_sents([sent,sent2]))))
[{u'address': 0,
u'ctag': u'TOP',
u'deps': [2],
u'feats': None,
u'lemma': None,
u'rel': u'TOP',
u'tag': u'TOP',
u'word': None},
{u'address': 1,
u'ctag': u'NN',
u'deps': [],
u'feats': u'_',
u'head': 2,
u'lemma': u'_',
u'rel': u'nn',
u'tag': u'NN',
u'word': u'I'},
{u'address': 2,
u'ctag': u'NN',
u'deps': [1, 11],
u'feats': u'_',
u'head': 0,
u'lemma': u'_',
u'rel': u'null',
u'tag': u'NN',
u'word': u'shot'},
{u'address': 3,
u'ctag': u'AT',
u'deps': [],
u'feats': u'_',
u'head': 11,
u'lemma': u'_',
u'rel': u'nn',
u'tag': u'AT',
u'word': u'an'},
{u'address': 4,
u'ctag': u'NN',
u'deps': [],
u'feats': u'_',
u'head': 11,
u'lemma': u'_',
u'rel': u'nn',
u'tag': u'NN',
u'word': u'elephant'},
{u'address': 5,
u'ctag': u'NN',
u'deps': [],
u'feats': u'_',
u'head': 11,
u'lemma': u'_',
u'rel': u'nn',
u'tag': u'NN',
u'word': u'in'},
{u'address': 6,
u'ctag': u'NN',
u'deps': [],
u'feats': u'_',
u'head': 11,
u'lemma': u'_',
u'rel': u'nn',
u'tag': u'NN',
u'word': u'my'},
{u'address': 7,
u'ctag': u'NNS',
u'deps': [],
u'feats': u'_',
u'head': 11,
u'lemma': u'_',
u'rel': u'nn',
u'tag': u'NNS',
u'word': u'pajamas'},
{u'address': 8,
u'ctag': u'NN',
u'deps': [],
u'feats': u'_',
u'head': 11,
u'lemma': u'_',
u'rel': u'nn',
u'tag': u'NN',
u'word': u'Time'},
{u'address': 9,
u'ctag': u'NNS',
u'deps': [],
u'feats': u'_',
u'head': 11,
u'lemma': u'_',
u'rel': u'nn',
u'tag': u'NNS',
u'word': u'flies'},
{u'address': 10,
u'ctag': u'NN',
u'deps': [],
u'feats': u'_',
u'head': 11,
u'lemma': u'_',
u'rel': u'nn',
u'tag': u'NN',
u'word': u'like'},
{u'address': 11,
u'ctag': u'NN',
u'deps': [3, 4, 5, 6, 7, 8, 9, 10],
u'feats': u'_',
u'head': 2,
u'lemma': u'_',
u'rel': u'dep',
u'tag': u'NN',
u'word': u'banana'}]
Why is it that parse_sents() doesn't return an iterable of what parse_one() returns?
I could, however, just get lazy and do:
>>> _path_to_maltparser = '/home/alvas/maltparser-1.8/dist/maltparser-1.8/'
>>> _path_to_model = '/home/alvas/engmalt.linear-1.7.mco'
>>> mp = MaltParser(path_to_maltparser=_path_to_maltparser, model=_path_to_model)
>>> sent1 = 'I shot an elephant in my pajamas'.split()
>>> sent2 = 'Time flies like banana'.split()
>>> sentences = [sent1, sent2]
>>> for sent in sentences:
...     print(mp.parse_one(sent).tree())
But this is not the solution I'm looking for. My question is: why doesn't parse_sents() return an iterable of what parse_one() returns, and how could that be fixed in the NLTK code?
After @NikitaAstrakhantsev answered, I tried his suggestion; it outputs a parse tree now, but it seems to get confused and merges both sentences into one before parsing:
# Initialize a MaltParser object with a pre-trained model.
mp = MaltParser(path_to_maltparser=path_to_maltparser, model=path_to_model)
sent = 'I shot an elephant in my pajamas'.split()
sent2 = 'Time flies like banana'.split()
# Parse a single sentence.
print(mp.parse_one(sent).tree())
print(next(next(mp.parse_sents([sent,sent2]))).tree())
[out]:
(pajamas (shot I) an elephant in my)
(shot I (banana an elephant in my pajamas Time flies like))
From the code it seems to be doing something weird: https://github.com/nltk/nltk/blob/develop/nltk/parse/api.py#L45
Why is the parser abstract class in NLTK squashing two sentences into one before parsing? Am I calling parse_sents() incorrectly? If so, what is the correct way to call parse_sents()?
As far as I can see in your code samples, you don't call tree() in this line:
>>> print(next(next(mp.parse_sents([sent,sent2]))))
while you do call tree() in all of the parse_one() cases.
Otherwise I don't see why this could happen: the parse_one() method of ParserI isn't overridden in MaltParser, and all it does is call parse_sents() of MaltParser; see the code.
Update: the line you're pointing at isn't reached, because parse_sents() is overridden in MaltParser and is called directly.
The only guess I have now is that the Java maltparser library doesn't work correctly with an input file containing several sentences (I mean this block, where Java is run). Maybe the original MaltParser has changed the input format and sentences are no longer separated by '\n\n'.
Unfortunately, I can't run this code myself, because maltparser.org has been down for two days. I did check that the input file has the expected format (sentences separated by a blank line), so it is very unlikely that the Python wrapper merges the sentences.
I have the following question: to perform the join between the tables I set an alias, and in the projection I need a DECODE on a column of the joined table. I used that alias, but it doesn't work because the projection is pure SQL and doesn't recognize it.
How do I get the alias that Criteria actually generates for the tables? I'm using sqlGroupProjection; feel free to suggest another approach.
Criteria criteria = dao.getSessao().createCriteria(Chamado.class, "c");
criteria.createAlias("c.tramites", "t").setFetchMode("t", FetchMode.JOIN);

ProjectionList projetos = Projections.projectionList();
projetos.add(Projections.rowCount(), "qtd");

criteria.add(Restrictions.between("t.dataAbertura",
        Formata.getDataD(dataInicio, "dd/MM/yyyy"),
        Formata.getDataD(dataFim, "dd/MM/yyyy")));

projetos.add(Projections.sqlGroupProjection(
        "decode(t.cod_estado, 0, 0, 1, 1, 2, 1, 3, 2, 4, 1, 5, 3) as COD_ESTADO",
        "decode(t.cod_estado, 0, 0, 1, 1, 2, 1, 3, 2, 4, 1, 5, 3)",
        new String[]{"COD_ESTADO"},
        new Type[]{Hibernate.INTEGER}));
criteria.setProjection(projetos);

List<Relatorio> relatorios = criteria
        .setResultTransformer(Transformers.aliasToBean(Relatorio.class))
        .list();
SQL generated by criteria:
select count(*) as y0_,
decode(t.cod_estado, 0, 0, 1, 1, 2, 1, 3, 2, 4, 1, 5, 3) as COD_ESTADO
from CHAMADOS this_
inner join TRAMITES t1_ on this_.COD_CHAMADO = t1_.COD_CHAMADO
where t1_.DT_ABERTURA between ? and ?
group by decode(t.cod_estado, 0, 0, 1, 1, 2, 1, 3, 2, 4, 1, 5, 3)
I have a file that my Java application takes as input, which I read 6 bytes at a time. When I read it off the file system, everything works fine. If I build everything into a jar, the first 4868 reads work fine, but after that it starts returning the byte arrays with the values in the wrong order, and it also ends up having read more data by the end.
Here is a simplified version of my code which reproduces the problem:
InputStream inputStream = this.getClass().getResourceAsStream(filePath);
byte[] byteArray = new byte[6];
int counter = 0;
while (inputStream.read(byteArray) != -1)
{
    counter++;
    System.out.println("Read #" + counter + ": " + Arrays.toString(byteArray));
}
System.out.println("Done.");
This is the [abbreviated] output I get when reading off of the file system:
...
Read #4867: [5, 0, 57, 7, 113, -26]
Read #4868: [2, 0, 62, 7, 114, -26]
Read #4869: [2, 0, 68, 7, 115, -26]
Read #4870: [3, 0, 75, 7, 116, -26]
Read #4871: [2, 0, 83, 7, 117, -26]
...
Read #219687: [1, 0, 4, -8, 67, 33]
Read #219688: [1, 0, 2, -8, 68, 33]
Read #219689: [5, 0, 1, -8, 67, 33]
Done.
And here is what I get reading from a jar:
...
Read #4867: [5, 0, 57, 7, 113, -26]
Read #4868: [2, 0, 62, 7, 113, -26] //everything is fine up to this point
Read #4869: [7, 114, -26, 2, 0, 68]
Read #4870: [7, 115, -26, 3, 0, 75]
Read #4871: [7, 116, -26, 2, 0, 83]
...
Read #219687: [95, 33, 1, 0, 78, -8]
Read #219688: [94, 33, 1, 0, 76, -8]
Read #219689: [95, 33, 1, 0, 74, -8]
...
Read #219723: [67, 33, 1, 0, 2, -8]
Read #219724: [68, 33, 5, 0, 1, -8]
Read #219725: [67, 33, 5, 0, 1, -8]
Done.
I unzipped the jar and confirmed that the files being read are identical, so what could cause the reader to return different results?
Your reading loop is wrong.
The inputStream.read() method returns the number of bytes it actually read. You have to check this number before using the data.
When you read from a stream, the bytes don't necessarily arrive all at once. On some iterations of your loop you probably got only 4 of the expected 6 bytes, so the rest of the buffer still holds data from the previous read and the printed record looks shifted.
If you are reading integers, I'd recommend wrapping your raw InputStream in a Scanner or good old DataInputStream and reading the values directly.
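For illustration, a minimal sketch of that fix with DataInputStream.readFully(), which blocks until a whole 6-byte record has arrived; the readRecords method name is mine, and filePath stands for the same resource path as in the question:
import java.io.DataInputStream;
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;
import java.util.Arrays;

void readRecords(String filePath) throws IOException {
    InputStream raw = this.getClass().getResourceAsStream(filePath);
    try (DataInputStream in = new DataInputStream(raw)) {
        byte[] byteArray = new byte[6];
        int counter = 0;
        while (true) {
            try {
                // readFully waits for all 6 bytes (or throws EOFException),
                // so a short read from the jar's stream can't shift the record.
                in.readFully(byteArray);
            } catch (EOFException end) {
                break; // no complete 6-byte record left
            }
            counter++;
            System.out.println("Read #" + counter + ": " + Arrays.toString(byteArray));
        }
    }
    System.out.println("Done.");
}
The behaviour differs only inside the jar because getResourceAsStream() there returns a decompressing stream whose read() can legitimately hand back fewer than 6 bytes at a time, whereas a plain file read into such a small buffer usually happens to fill it completely.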