I am new to databases. Before I start learning MySQL and using a JDBC driver in Java to connect to my database on the server side, I wanted to get the design of my database down first.

I have two columns in the database: CRN and DEVICE_TOKEN. The CRN will be a string of five digits, and the DEVICE_TOKEN will be a string holding an iOS device token for push notifications.

Let me describe what I am trying to do. Users will send my server data from the iOS app: mainly their device token for push notifications and a CRN (course) they want to watch. There are going to be MANY device tokens requesting to watch the same CRN, and I wanted to know the most efficient way to store these in a database. One thread will loop through all of the rows in the DB and poll the website for each CRN. If the event I am looking for takes place, I want to notify every device token associated with that CRN.

Initially, I wanted one column to be the CRN and the other to be a list of DEVICE_TOKENs. I have learned, though, that this is not possible, and that each cell should hold only one value. Can someone help me figure out the most efficient way to design this database?
CRN    DEVICE_TOKEN
12345  "string_of_correct_size"
12345  "another_device_token"
Instead of making multiple requests to the website for the same CRN, it would be MUCH more efficient to poll the website ONCE per unique CRN per iteration, and then notify all of its device tokens of the change. How should I store this information? Thanks for your time.
In this type of problem, where you have a one-to-many relationship (one CRN with many device tokens), you want a separate table to store the CRNs, with a unique ID assigned to each new CRN. A second table should then be made for your device tokens, with columns for a unique ID, the CRN's ID, and the DEVICE_TOKEN.
With this schema, you can go through the rows of the CRN table, poll against each CRN, and then just do a simple JOIN with the DEVICE_TOKEN table to find all subscribed devices if a change occurs.
The most natural way to do this is to normalize the courses out into their own table, with a foreign key from the device tokens. E.g. two tables:
Courses

id  CRN
1   12345

InterestedDevices

id  course  DEVICE_TOKEN
1   1       "string_of_correct_size"
2   1       "another_device_token"
You can then find the interested devices with SQL like the following:
SELECT *
FROM Courses
JOIN InterestedDevices ON Courses.id = InterestedDevices.course
WHERE Courses.CRN = ?
This way you avoid duplicating the course information over and over.
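With either normalized schema, the polling loop only needs to hit the site once per distinct CRN. A minimal sketch in plain Java of that grouping step — the `rows` array stands in for a `SELECT CRN, DEVICE_TOKEN` result set, and the poll/notify calls are just printed placeholders:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class CrnPoller {
    // Group device tokens by CRN so each CRN is polled exactly once.
    // Each inner array stands in for one (CRN, DEVICE_TOKEN) result-set row.
    static Map<String, List<String>> groupByCrn(String[][] rows) {
        Map<String, List<String>> tokensByCrn = new LinkedHashMap<>();
        for (String[] row : rows) {
            tokensByCrn.computeIfAbsent(row[0], k -> new ArrayList<>()).add(row[1]);
        }
        return tokensByCrn;
    }

    public static void main(String[] args) {
        String[][] rows = {
            {"12345", "string_of_correct_size"},
            {"12345", "another_device_token"},
            {"67890", "third_device_token"},
        };
        for (Map.Entry<String, List<String>> e : groupByCrn(rows).entrySet()) {
            // One poll per unique CRN; if the watched event fired,
            // notify every token subscribed to that CRN.
            System.out.println("poll CRN " + e.getKey() + ", notify " + e.getValue());
        }
    }
}
```

This way the site sees one request per course per iteration, no matter how many devices watch it.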
I have two tables named College and Student.
I want to generate a unique enrollment number for each student at registration time.
Within a single college, multiple registrations are possible at the same time. For example, college ABC has many people who can register students.
My logic for generating the enrollment ID is YY_College-PK_last-five-digits-incremented:
YY_COLFK_DDDDD
At student registration time I first fire a Max query like
select Max(Enrollment_No) from student where College_Fk=101
then take the last Enrollment_No, split off the last five digits, increment by 1, and insert.
But when two students' data are submitted at the same time, there is a chance of generating the same Enrollment_No for both students.
How do I manage this problem?
On the Java side of things you could draw some inspiration from concepts such as UUIDs (see https://www.baeldung.com/java-uuid for example).
But as you are using a database, you should rather use the capabilities of the database itself; see How to generate unique id in MySQL? for some examples.
In other words: the database is your single source of truth. It offers you the ability to have IDs that are guaranteed to be unique!
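Once the database hands you a guaranteed-unique sequence number (e.g. an AUTO_INCREMENT value retrieved via JDBC's `Statement.getGeneratedKeys()`), building the YY_COLFK_DDDDD string becomes a pure formatting concern and the race condition disappears. A hypothetical helper — the method and parameter names are mine, not from the question:

```java
public class EnrollmentNo {
    // year: two-digit year (YY), collegeFk: the college's primary key (COLFK),
    // seq: a DB-assigned unique sequence number (e.g. an AUTO_INCREMENT value)
    static String format(int year, int collegeFk, long seq) {
        // Zero-pad the year to two digits and the sequence to five digits.
        return String.format("%02d_%d_%05d", year, collegeFk, seq);
    }

    public static void main(String[] args) {
        System.out.println(format(25, 101, 7)); // prints 25_101_00007
    }
}
```

Because the sequence number comes from the database, two simultaneous registrations can never produce the same enrollment string.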
If you want to add two students' data at the same time then you must be using the insert statement twice, so for each record you have to …
I have an app where anyone who downloads it can add data to the database. The data THEY added is displayed in a ListView for only them to see. Users don't have to register an account or anything. When multiple people use the app, different data gets added to the database. So my question is: what are good ways to differentiate the data, so that the person who added it only sees the data they added?
I have two ideas: either store the data's ID in SharedPreferences, so that when I select from the database I select the rows whose ID equals the one in SharedPreferences; or, when I add data to the database, attach a unique key, so I can select * where the unique key equals x.
I like the second idea more, but what would be a good unique key to use? I've thought of using
private String android_id = Secure.getString(getContext().getContentResolver(),
Secure.ANDROID_ID);
but is that a reliable solution? Are there any other unique-key solutions I could use?
Any feedback is much appreciated!
I think that the best way to handle such cases is to generate a UUID for each user.
Android itself will help you generate one.
You can generate the UUID the first time the app is launched and store it in SharedPreferences.
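Generating such an ID is a one-liner with `java.util.UUID`. A small sketch — the SharedPreferences call is shown only as a comment because it needs an Android `Context`, and the preference key name is my own:

```java
import java.util.UUID;

public class InstallId {
    // Generate a random (version 4) UUID to identify this install/user.
    static String newInstallId() {
        return UUID.randomUUID().toString();
    }

    public static void main(String[] args) {
        String id = newInstallId();
        System.out.println(id);
        // On Android, persist it on first launch, e.g.:
        // prefs.edit().putString("install_id", id).apply();
        // and thereafter read it back instead of generating a new one.
    }
}
```

Then every row the user inserts carries this ID, and `SELECT * WHERE install_id = ?` shows only their own data.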
I am developing an Android application as my final school project and I have come to a problem.
I've created a table in SQLite which stores user information such as ID, name, phone, email address, etc., and I'd like to insert the data in an ordered way, always assigning sequential IDs. In my Add_new_user_activity I have an EditText field which I want to be dynamically set to the next available ID among the existing IDs in the table, but I don't know how to handle the gaps that appear when a user between two sequential IDs is deleted.
Let's say I have these sequential records in the table:
Users 1 to 50 with their corresponding IDs.
Then I delete users 27 and 29.
The next time I add a new user, I want the EditText to know that there is a gap between IDs 26 and 28 and take 27 for the new user's ID, and to do the same for subsequent users. In this case, if I add two more users, their respective IDs would have to be 29 and 51.
Is there a way to solve it efficiently?
Thank you in advance!
What you're trying to do sounds dangerous, as that's not the intended use of AUTO_INCREMENT.
If you really want to find the lowest unused key value, don't use AUTO_INCREMENT at all, and manage your keys manually. However, this is NOT a recommended practice.
Take a step back and ask why you need to recycle key values. Does unsigned INT (or BIGINT) not provide a large enough key space?
You could add one more field to your DB, say "deleted", and whenever you delete a user just set it to 1. When you add a new user, first get the list of user IDs having 1 in the "deleted" field, in ascending order. The first value in that list is the desired ID, so just update the row with that ID. If there is no such row, insert with an incremented ID.
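If you do end up managing keys manually as described above, the gap-finding step itself is simple. A sketch in plain Java, assuming you have already fetched the sorted list of IDs currently in use (the method name is mine):

```java
import java.util.List;

public class IdGaps {
    // Given the sorted list of IDs currently in use (numbering from 1),
    // return the lowest unused ID.
    static int lowestFreeId(List<Integer> sortedUsedIds) {
        int expected = 1;
        for (int id : sortedUsedIds) {
            if (id != expected) {
                return expected; // found a gap before this ID
            }
            expected++;
        }
        return expected; // no gaps: next ID after the highest
    }
}
```

For the example in the question (IDs 1 to 50 in use, then 27 and 29 deleted) this returns 27; once 27 is reused it returns 29, and after that 51.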
In a Scala program I use JDBC to get data from a simple 20-row table in a SQL DB (Hive).
The table contains movie titles rated by users, with rows in the following format:
user_id, movie_title, rating, date.
I open a first JDBC cursor enumerating users. Next, with a second cursor, for every user I find the movie titles they rated. Then, with a third cursor, for every title the current user rated, I find the other users who also rated that title. As a result I get groups of users, where every user in a group rated at least one title in common with the user who started the group. I need to find all such groups in the dataset.
So to group users by movie I do 3 nested select requests; pseudo-code:
1) select distinct user_id
2) for each user_id:
       select distinct movie_title  // all movies that user saw
3) for each movie_title:
       select distinct user_id      // all users who saw this movie
On a local table with 20 rows these nested queries take 26 minutes! The program returns the first user_id only after a minute!
Given that the real app will have to deal with 10^6 users, is there any way to optimize the 3 nested selects in this case?
Without seeing the exact code it is difficult to assess why it is taking so long. Given you've got only 20 rows, there must be something fundamentally wrong there.
However, as general advice, I'd suggest looking back at the solution and asking whether it can be done with a single SQL query (instead of running hundreds of queries), which will let you benefit from features like indexes and save a huge amount of network traffic.
Assuming you have the following table Movies(user_id: NUMERIC, movie_title: VARCHAR(50), rating: NUMERIC, date: DATE), try running something along these lines (I haven't tested it, so it might need a tweak):
SELECT DISTINCT m1.user_id, m2.user_id
FROM Movies m1, Movies m2
WHERE m1.user_id != m2.user_id
AND m1.movie_title = m2.movie_title
Once you've got the results, you can group them in your Java/Scala code by the first user_id and load them into a Multimap-like data structure.
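That grouping step might look like this in plain Java — the `pairs` array stands in for the (m1.user_id, m2.user_id) rows the self-join returns, and the method name is my own:

```java
import java.util.Map;
import java.util.Set;
import java.util.TreeMap;
import java.util.TreeSet;

public class CoRaters {
    // Build user -> set of users who share at least one rated movie,
    // from the (m1.user_id, m2.user_id) pairs returned by the self-join.
    static Map<Integer, Set<Integer>> group(int[][] pairs) {
        Map<Integer, Set<Integer>> byUser = new TreeMap<>();
        for (int[] p : pairs) {
            byUser.computeIfAbsent(p[0], k -> new TreeSet<>()).add(p[1]);
        }
        return byUser;
    }
}
```

One round trip to the database plus one in-memory pass replaces the three levels of nested cursors.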
I have following setup
MySQL server instance1
MySQL server instance2
In both of these instances I have a single table whose records are partitioned. I have to retrieve the data from each instance and show it in JQGrid.
Here are the considerations to be made:
1) From each database instance only 1000 records need to be fetched.
2) Merge these 1000-record batches and sort them in ascending order by a default column.
3) From the merged records, again take only 1000 records to be shown in the grid.
4) The next 1000 records must not include any of the records already shown.
The major problem I am having is how to uniquely identify the last row shown from the fetched records.
I thought about doing it this way:
1) Get the rowid for each record from every connection. But the rowids from the two instances could be the same, so how would I identify which record is from which database?
2) Check the rowid and primary key combination. But if the client sets the primary key's auto-increment to the same values on all instances, we would not get a unique combination.
Am I missing something or is there any other way to do it?
I am using a JDBC connection to connect to the databases.
[SOLVED IT]
Solved the problem by writing a small function which calculates and creates a map of the number of records to be fetched from each connection on each iteration.
Sorry, I can't add the code here as it is the client's IP.
You usually uniquely identify a row with a POJO class in which you override hashCode() and equals() based on your class fields. You can put your last row into a HashMap and check against it.
A row number for the retrieved rows can be generated using RANK() with PARTITION BY. You could create a temporary (or tmp_-prefixed) table with your merged results, then drop it afterwards.
Hope this helps a bit.
If the two MySQL instances can see each other, how about creating a view that UNIONs the two query results? Then, in the Java world, you could do the query/pagination against this view.
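Whichever way the rows are fetched, the merge step can tag every row with its source instance, so that the (source, id) pair stays unique even when both databases hand out the same auto-increment values — which also gives you an unambiguous "last row shown" marker for the next page. A rough sketch in plain Java (the `Row` class and `mergeFirst` are my names; each batch is assumed already sorted by the default column):

```java
import java.util.ArrayList;
import java.util.List;

public class TwoSourceMerge {
    // A row tagged with the instance it came from, so (source, id) stays
    // unique even if both instances use the same auto-increment values.
    static class Row {
        final int source;      // 1 or 2: which MySQL instance
        final long id;         // primary key within that instance
        final String sortKey;  // value of the default sort column
        Row(int source, long id, String sortKey) {
            this.source = source;
            this.id = id;
            this.sortKey = sortKey;
        }
    }

    // Merge two batches (each already sorted ascending by sortKey)
    // and keep only the first `limit` rows for the grid page.
    static List<Row> mergeFirst(List<Row> a, List<Row> b, int limit) {
        List<Row> out = new ArrayList<>();
        int i = 0, j = 0;
        while (out.size() < limit && (i < a.size() || j < b.size())) {
            boolean takeA = j >= b.size()
                    || (i < a.size() && a.get(i).sortKey.compareTo(b.get(j).sortKey) <= 0);
            out.add(takeA ? a.get(i++) : b.get(j++));
        }
        return out;
    }
}
```

The last element of the merged page tells you, per instance, where the next fetch should resume.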