I am developing a Spring Boot API for a school management system. I have a Student class that holds a list of courses. In each course, the student has to complete a certain number of tasks to meet that course's requirements: say CS101 has 15 tasks to complete, and CS102 has 20. I need to show the student how many tasks are done so far and how many remain, but I'm not sure how to represent the completed count in the response. The following JSON is my course-list response for a specific student.
"course": [
{
"id": 1,
"name": "CS101",
"totalToComplete": 15
},
{
"id": 2,
"name": "CS102",
"totalToComplete": 20
}
]
The following response is my StudentProfileDto, which shows all of a student's information, including the courses they have taken and the tasks completed in each course.
As mentioned above, I just need the JSON response to include the number of completed tasks per course. For instance, in the UI a student should see the text "You have finished 10 tasks out of 15 in CS101". I couldn't figure out how to build that response. I can share the Java code on request; otherwise the question would get too long.
{
"id": 1,
"studentId": 20170602012, // I could've used this as a primary key :)
"noProcedures": 0, // not relevant to the question
"noPatientLogs": 0, // not relevant to the question
"courses": [ // list of courses taken by the student with id 1
{
"id": 1,
"name": "CS101",
"totalToComplete": 15
},
{
"id": 2,
"name": "CS102",
"totalToComplete": 20
}
]
}
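One common approach is to return the progress alongside each course entry in the profile response. A minimal sketch, with hypothetical names (a CourseProgressDto whose completed count would be computed in the service layer, e.g. from a repository query counting the student's finished tasks per course):

```java
// Sketch of a per-course progress DTO. Class and field names are
// illustrative, not from the original project; Spring Boot's default
// Jackson setup will serialize all public getters, including the
// derived "remaining" value.
public class CourseProgressDto {
    private final long id;
    private final String name;
    private final int totalToComplete;
    private final int completed;   // computed in the service layer

    public CourseProgressDto(long id, String name, int totalToComplete, int completed) {
        this.id = id;
        this.name = name;
        this.totalToComplete = totalToComplete;
        this.completed = completed;
    }

    public long getId() { return id; }
    public String getName() { return name; }
    public int getTotalToComplete() { return totalToComplete; }
    public int getCompleted() { return completed; }

    // Convenience for the UI text "You have finished X tasks out of Y".
    public int getRemaining() { return totalToComplete - completed; }
}
```

The service would map each Course entity plus the student's completed-task count into this DTO, so each element of "courses" carries both totalToComplete and completed, and the UI can render the progress text directly.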
I need to process a large JSON payload (~1 MB) coming from an API; a portion of it looks like this:
{
"id": "013dd2a7-fec4-4cc5-b819-f3cf16a1f820",
//more attributes
"entry_mode": "LDE",
"periods": [
{
"type": "quarter",
"id": "fe96dc03-660c-423c-84cc-e6ae535edd2d",
"number": 1,
"sequence": 1,
"scoring": {
//more attributes
},
"events": [
{
"id": "e4426708-fadc-4cae-9adc-b7f170f5d607",
"clock": "12:00",
"updated": "2013-12-22T03:41:40+00:00",
"description": "J.J. Hickson vs. DeAndre Jordan (Blake Griffin gains possession)",
"event_type": "opentip",
"attribution": {
"name": "Clippers",
"market": "Los Angeles",
"id": "583ecdfb-fb46-11e1-82cb-f4ce4684ea4c",
"team_basket": "left"
},
"location": {
"coord_x": 572,
"coord_y": 296
},
"possession": {
"name": "Clippers",
"market": "Los Angeles",
"id": "583ecdfb-fb46-11e1-82cb-f4ce4684ea4c"
}
},
//more events
]
}
]
}
This is a near-realtime API, and I only need to process the events: identify a set of event UUIDs, check the database for duplicates, and save the new events.
I could use JSONObject/JSONArray, or use regex/string parsing to fetch just the events portion. Processing time is critical since this should be near-realtime, and memory efficiency matters because multiple payloads can arrive at once.
Which approach is more efficient for this use case?
Use a proper streaming JSON parser. You know what you want to pull out of the stream and when you can stop parsing, so read the stream in small, manageable chunks and quit as soon as you know you are done.
Circa 2017, I'm not aware of any browser-native JSON streaming APIs, so you'll need a JavaScript-based streaming library. Fortunately, streaming is not a new concept, and a number of options already exist:
http://oboejs.com/
https://github.com/dominictarr/JSONStream
https://github.com/creationix/jsonparse
https://github.com/dscape/clarinet
http://danieltao.com/lazy.js/demos/json/
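Assuming a Java consumer (as the JSONObject/JSONArray option implies), here is a toy single-pass scanner that illustrates the streaming idea: read the input once, keep only the event ids, and stop as soon as the "events" array closes. It is a sketch only (it ignores string escapes); for production you would run a real streaming parser such as Jackson's JsonParser directly over the response InputStream.

```java
import java.util.ArrayList;
import java.util.List;

// Toy illustration of stream-style extraction: one forward pass,
// collecting only the top-level event ids, quitting early once the
// "events" array is closed instead of materializing the whole tree.
public class EventIdScanner {

    public static List<String> eventIds(String json) {
        List<String> ids = new ArrayList<>();
        int start = json.indexOf("\"events\"");
        if (start < 0) return ids;
        int i = json.indexOf('[', start);
        if (i < 0) return ids;

        int depth = 0;              // 1 = inside events array, 2 = inside an event object
        boolean inString = false;
        StringBuilder token = new StringBuilder();
        String lastKey = null;

        while (i < json.length()) {
            char c = json.charAt(i);
            if (inString) {
                if (c == '"') {     // string ended (escapes ignored in this sketch)
                    inString = false;
                    int j = i + 1;  // peek: a ':' next means the string was a key
                    while (j < json.length() && Character.isWhitespace(json.charAt(j))) j++;
                    if (j < json.length() && json.charAt(j) == ':') {
                        lastKey = token.toString();
                    } else {
                        // it was a value; keep it only if it is an event's own id
                        if (depth == 2 && "id".equals(lastKey)) {
                            ids.add(token.toString());
                        }
                        lastKey = null;
                    }
                } else {
                    token.append(c);
                }
            } else if (c == '"') {
                inString = true;
                token.setLength(0);
            } else if (c == '[' || c == '{') {
                depth++;
            } else if (c == ']' || c == '}') {
                depth--;
                if (depth == 0) break;  // events array closed: stop reading
            }
            i++;
        }
        return ids;
    }
}
```

The depth check is what skips the nested "attribution"/"possession" ids, and the early break is what keeps memory flat regardless of how much payload follows the events array.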
I have just started Android development, and here is my situation in detail.
Sample data in JSON format:
{
"ht_id": 1086,
"name": "Mobile Phones",
"rate": 12
},
{
"ht_id": 1111,
"name": "Primary cells and primary batteries",
"rate": 28
},
{
"ht_id": 1112,
"name": "Electric accumulators, including separators therefor, whether or not rectangular (including square)",
"rate": 28
}
I have about 250 static records in total and plan to ship them with the app. I want to check whether the input string is contained in the data and return the corresponding "rate" in real time.
Since this runs in real time, search speed is the most important criterion.
So I'm wondering how to store the data with the app (DB/CSV/text/JSON) and how to load it into memory for very fast retrieval in the Android app (e.g. HashSet).
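For ~250 records, a plain in-memory scan is already effectively instant, so the storage format barely matters; ship the JSON in assets/, parse it once at startup, and search the list. A sketch (the Record class and method names are illustrative; a HashSet/HashMap only helps for exact-key lookups, not "is the input contained in the name" matching):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Locale;

// Sketch: in-memory substring search over a small static data set.
// In the app, populate the list once at startup by parsing the JSON
// shipped in assets/ (e.g. with JSONArray or Gson).
public class RateLookup {

    public static final class Record {
        final int htId;
        final String name;   // stored lowercased for case-insensitive search
        final int rate;
        Record(int htId, String name, int rate) {
            this.htId = htId;
            this.name = name.toLowerCase(Locale.ROOT);
            this.rate = rate;
        }
    }

    private final List<Record> records = new ArrayList<>();

    public void add(int htId, String name, int rate) {
        records.add(new Record(htId, name, rate));
    }

    // Linear scan: for 250 records this takes microseconds, which is
    // fast enough for search-as-you-type without any index.
    public Integer rateFor(String query) {
        String q = query.toLowerCase(Locale.ROOT);
        for (Record r : records) {
            if (r.name.contains(q)) return r.rate;
        }
        return null;  // no match
    }
}
```

If the data set ever grows into the tens of thousands, that is the point to reach for SQLite FTS or a prefix index; below that, the scan wins on simplicity.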
Hi guys, I ran into a problem handling JSON on Android today. The JSON I get from the web service looks like this:
e.g. from https://serviice.somewebite.com/list.json I get:
{
"task_id": 5,
"profile_id": 3,
"text": "{profileName} wrote {taskName}",
"created_at": "2015-08-10",
"event": "post"
},
{
"task_id": 6,
"profile_id": 2,
"text": "{profileName} wrote {taskName}",
"created_at": "2015-10-24",
"event": "post"
},
...... (and many similar entities)
The task_id is actually an id linking to another remote JSON document. For the first entity, task_id = 5, so I should fetch the task JSON from https://serviice.somewebite.com/task/5.json, which looks like this:
{
"id": 5,
"name": "Clean house",
"description": "Clean every room in the house",
"state": "assigned"
}
Profile JSON is fetched the same way for the profiles referenced in list.json.
I have handled JSON with POJO + Retrofit + Gson before, but in those cases all the content was in one JSON document, not spread out like this.
What I want is to fetch all the needed JSON and show it in a ListView, including created_at (from list.json), task_description (from task.json), and task_name (from task.json).
Currently I can only figure out one way to handle it: first get list.json, then scan the task ids and profile ids in it, then fetch all the needed JSON via multiple asynchronous HTTP requests. That seems like a lot of boilerplate code.
Is there an existing library/framework for this scenario? I am on the Android platform using Java. Thanks a lot!
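One way to cut the boilerplate is to keep the HTTP layer (e.g. Retrofit with RxJava, flatMap-ing each list entry into its /task/{id}.json request and collecting the results) separate from a small pure merge step. The merge itself, joining each list entry to its fetched task by id, can then be a plain testable function. A sketch with illustrative class and field names:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Sketch: join list.json entries with the task detail objects fetched
// from /task/{id}.json, producing one row per entry for a ListView
// adapter. Names are illustrative, not an existing library API.
public class FeedMerger {

    public static final class ListEntry {
        public final int taskId;
        public final String createdAt;
        public ListEntry(int taskId, String createdAt) {
            this.taskId = taskId;
            this.createdAt = createdAt;
        }
    }

    public static final class Task {
        public final String name;
        public final String description;
        public Task(String name, String description) {
            this.name = name;
            this.description = description;
        }
    }

    public static final class Row {
        public final String createdAt, taskName, taskDescription;
        Row(String createdAt, String taskName, String taskDescription) {
            this.createdAt = createdAt;
            this.taskName = taskName;
            this.taskDescription = taskDescription;
        }
    }

    // tasksById: the results of the per-id requests, however they were
    // fetched (RxJava zip/flatMap, parallel Retrofit calls, etc.).
    public static List<Row> merge(List<ListEntry> entries, Map<Integer, Task> tasksById) {
        List<Row> rows = new ArrayList<>();
        for (ListEntry e : entries) {
            Task t = tasksById.get(e.taskId);
            if (t != null) rows.add(new Row(e.createdAt, t.name, t.description));
        }
        return rows;
    }
}
```

Keeping the join pure means the only Android-specific code left is issuing the requests and handing the finished rows to the adapter.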
I am saving a JSON response inside my app using SharedPreferences (jsonObject.toString()). It contains a JSONArray; when the user updates a value in any one element, I want to save the change back to SharedPreferences. Please help me with this task.
Example:
{
"locations": {
"record": [
{
"id": 8817,
"loc": "NEW YORK CITY"//update this as California and save the response
},
{
"id": 2873,
"loc": "UNITED STATES"
},
{
"id": 1501,
"loc": "NEW YORK STATE"
}
]
}
}
It seems like you're trying to use SharedPreferences for something beyond its purpose. It is meant to store primitive values such as strings, integers, or booleans for simple single-value use; I wouldn't treat a JSON array as a single primitive value.
If I were you, I would go with QuokMoon's suggestion of a local SQLite database. It gives you simple access to CRUD operations; the setup time is a bit longer, but the benefits go far beyond SharedPreferences.
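If you do stay with SharedPreferences, the update is a read-modify-write of the whole stored string. Since org.json is Android-only, the sketch below stands in for it with plain maps; the comments name the real calls you would use in the app (JSONObject, getJSONObject/getJSONArray, put, toString(), SharedPreferences.Editor.putString):

```java
import java.util.List;
import java.util.Map;

// Sketch of the read-modify-write cycle for the stored JSON. Plain
// maps stand in for JSONObject here; on Android each step maps to the
// org.json / SharedPreferences calls named in the comments.
public class LocationUpdater {

    // records: the parsed "locations.record" array, i.e. on Android:
    //   new JSONObject(prefs.getString("response", "{}"))
    //       .getJSONObject("locations").getJSONArray("record")
    public static boolean updateLoc(List<Map<String, Object>> records, int id, String newLoc) {
        for (Map<String, Object> rec : records) {
            if (Integer.valueOf(id).equals(rec.get("id"))) {
                rec.put("loc", newLoc);   // JSONObject.put("loc", newLoc)
                // afterwards, write the whole tree back:
                //   prefs.edit().putString("response", root.toString()).apply()
                return true;
            }
        }
        return false;  // no record with that id
    }
}
```

Note the whole string is rewritten on every change, which is exactly why SQLite becomes the better fit once the array grows or updates become frequent.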
I've got multiple websites, where each website has visitors that trigger events I want to track. I have a log of those events from all the websites; each event records the website-id, the event-name, and the user-id of the user who triggered it (for the sake of simplicity, let's say that's it).
The requirements:
Be able to get, per website-id and event-name, how many unique visitors triggered it.
This should also support a date range (distinct unique visitors over the range).
I was thinking of creating a collection per website-id with the following data model (as an example):
collection ev_{websiteId}:
[
    {
        _id: "error",
        dailyStats: [
            {
                _id: 20121005,  <-- (yyyyMMdd int, should be indexed!)
                hits: 5,
                users: [
                    { _id: 1, hits: 1 },  <-- users._id should be indexed!
                    { _id: 2, hits: 3 },
                    { _id: 3, hits: 1 }
                ]
            },
            {
                _id: 20121004,
                hits: 8,
                users: [
                    { _id: 1, hits: 2 },
                    { _id: 2, hits: 3 },
                    { _id: 3, hits: 3 }
                ]
            }
        ]
    },
    {
        _id: "pageViews",
        dailyStats: [
            {
                _id: 20121005,
                hits: 500,
                users: [
                    { _id: 1, hits: 100 },
                    { _id: 2, hits: 300 },
                    { _id: 3, hits: 100 }
                ]
            },
            {
                _id: 20121004,
                hits: 800,
                users: [
                    { _id: 1, hits: 200 },
                    { _id: 2, hits: 300 },
                    { _id: 3, hits: 300 }
                ]
            }
        ]
    }
]
I'm using _id to hold the event-id.
I'm using dailyStats._id to hold when it happened (an integer in yyyyMMdd format).
I'm using dailyStats.users._id to hold a user's unique-id hash.
In order to get the unique users, I should basically be able to run a distinct count (map-reduce?) over the items in the array(s) for the given date range (which I will convert to yyyyMMdd).
My questions:
Does this data model make sense to you? I'm concerned about its scalability over time (if a client has a lot of daily unique visitors, it may produce a huge document).
I was thinking of deleting dailyStats documents by _id < [date as yyyyMMdd]. This way I can keep my document sizes sane, but still, there are limits here.
Is there an easy way to run an "upsert" that will create the dailyStats entry if it doesn't exist, add the user if not already present, and increment the "hits" property for both?
What about map-reduce? How would you approach it (I need a distinct count on users._id across all subdocuments in the given date range)? Is there an easier way with the new aggregation framework?
By the way, another option for unique visitors is Redis bitmaps, but I'm not sure it's worth maintaining multiple data stores.
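To make the distinct-count question concrete, here is what it looks like computed client-side over one event document's dailyStats array (illustrative classes, plain Java). In MongoDB 2.2+ the same result can come from the aggregation framework: $match on the date range, $unwind dailyStats and dailyStats.users, then $group on the user id.

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Sketch of the distinct unique-visitor count: given the dailyStats
// array of one event document, count distinct user ids whose day
// falls inside an inclusive [from, to] yyyyMMdd range.
public class UniqueVisitors {

    public static final class Day {
        public final int yyyymmdd;          // dailyStats._id
        public final List<Integer> userIds; // dailyStats.users[]._id
        public Day(int yyyymmdd, List<Integer> userIds) {
            this.yyyymmdd = yyyymmdd;
            this.userIds = userIds;
        }
    }

    public static int uniqueUsers(List<Day> dailyStats, int from, int to) {
        Set<Integer> seen = new HashSet<>();
        for (Day d : dailyStats) {
            if (d.yyyymmdd >= from && d.yyyymmdd <= to) {
                seen.addAll(d.userIds);  // set membership deduplicates across days
            }
        }
        return seen.size();
    }
}
```

The integer yyyyMMdd encoding makes the range check a plain numeric comparison, which is also why it indexes well.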
A few comments on the architecture above.
I'm slightly worried, as you've pointed out, about scalability and how much pre-aggregation you're really doing.
Most of the Mongo instances I've worked with for metrics have cases similar to yours, but you seem to be relying heavily on updating a single document and upserting various parts of it, which is going to slow down and can cause locking.
I might suggest a different path, one that the MongoDB folks themselves suggest when you talk with them about metrics. Given the structure you already have, I'd create something along the lines of:
{
    "_id": "20121005_siteKey_page",
    "hits": 512,
    "users": [
        {
            "uid": 5,
            "hits": 512
        }
    ]
}
This way you limit your document sizes to something reasonable for quick upserts. From there you can run map-reduce jobs in batches to further extend what you're looking to see.
It also depends on your end goal: are you looking to provide realtime metrics? What granularity are you attempting to get? Redis bitmaps may be something you want to at least look at; there's a great article on the topic.
Regardless, it is a fun problem to solve :)
Hope this has helped!
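The quick upserts this scheme relies on become a single statement if "users" is stored as a subdocument keyed by uid rather than an array, since dot-notation paths can then be created on the fly. In shell syntax that would be roughly db.stats.update({_id: "20121005_siteKey_page"}, {$inc: {hits: 1, "users.5.hits": 1}}, {upsert: true}); the sketch below builds that update document with plain Java maps standing in for the driver's document type (collection name and key format are assumptions):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: build the $inc update document for one hit by one user
// against the flattened per-day document above. Plain maps stand in
// for the MongoDB driver's Document/BasicDBObject.
public class HitUpdate {

    public static Map<String, Object> incFor(int uid) {
        Map<String, Object> inc = new HashMap<>();
        inc.put("hits", 1);                    // document-level counter
        inc.put("users." + uid + ".hits", 1);  // per-user counter, created on first hit
        Map<String, Object> update = new HashMap<>();
        update.put("$inc", inc);               // {$inc: {hits: 1, "users.<uid>.hits": 1}}
        return update;
    }
}
```

With upsert enabled, a missing day document, a missing user entry, and the increments are all handled by that one write, which is exactly the "easy upsert" the question asked for.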