So I have an array of PlayerNames that are in a specific 'clan', which I need to save so that when I load the server it will loop through all of the entries.
Here is what I currently have
MySQL Table
I'm not sure what I would put for the 'members' column. Basically, what I want is to store the UUIDs from an array I have. They can come out as a string, but I just need to be able to store them like
members: uuid, uuid, uuid, uuid
I understand how to build the connection, the ResultSet, and the Statement; I just don't know how to make MySQL understand that I am trying to save this list of members as an array. Any help would be appreciated, and I apologize if I did something wrong.
The problem here is in your data structure. It's not normalized, whereas an RDBMS is designed to store normalized data. All create/update/delete/retrieve operations become a lot easier if the data is normalized.
To normalize your tables, you need to stop storing member IDs as a CSV string in a single column.
The Gang table should have columns like
uuid
title
kills
I assume you already have a members table and it looks something like
uuid
member_name
Then you need a new table called gang_members
gang_id
member_id
And create a unique index on those two columns.
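For example, a minimal sketch of that schema (assuming string UUIDs stored as CHAR(36); the column names follow the lists above, everything else is illustrative):

CREATE TABLE gangs (
    uuid   CHAR(36) PRIMARY KEY,
    title  VARCHAR(64) NOT NULL,
    kills  INT NOT NULL DEFAULT 0
);

CREATE TABLE members (
    uuid        CHAR(36) PRIMARY KEY,
    member_name VARCHAR(16) NOT NULL
);

-- one row per (gang, member) pair instead of a CSV column
CREATE TABLE gang_members (
    gang_id   CHAR(36) NOT NULL,
    member_id CHAR(36) NOT NULL,
    UNIQUE KEY uq_gang_member (gang_id, member_id),
    FOREIGN KEY (gang_id)   REFERENCES gangs(uuid),
    FOREIGN KEY (member_id) REFERENCES members(uuid)
);

-- loading a gang's members on server start then becomes a simple join
SELECT m.uuid, m.member_name
FROM gang_members gm
JOIN members m ON m.uuid = gm.member_id
WHERE gm.gang_id = ?;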
Also see https://stackoverflow.com/a/41215681/267540, "Is storing a delimited list in a database column really that bad?", and https://stackoverflow.com/a/41305027/267540.
I am new to stored procedures. I am using Hibernate to retrieve data from the database, but the client-server traffic is high, so I decided to move to stored procedures, doing the simple logic on the server side and returning only the needed values to the front end. Now I want to know: is there any way to store records in a list, so that I can iterate over the records in a loop, take them one by one, get a single field from a record, process it, and then return a value to the front end, the way we do it in Java with a List, getters/setters, and a generic class to hold the needed entities? I am confused by this. Please advise and guide me so that I can understand stored procedures better.
It sounds like you want to use a cursor over your query results, collect the processed rows into a temporary table, and then select the contents of that temporary table to return from your stored procedure.
You should be able to find plenty of examples online for cursors and temporary tables.
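For instance, here is a minimal MySQL sketch of that pattern; the table, columns, and the per-row "logic" are made up purely for illustration:

DELIMITER //
CREATE PROCEDURE process_orders()
BEGIN
  DECLARE done INT DEFAULT 0;
  DECLARE v_id INT;
  DECLARE v_amount DECIMAL(10,2);
  -- cursor over the source query
  DECLARE cur CURSOR FOR SELECT id, amount FROM orders;
  DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;

  -- temporary table that collects the per-row results
  DROP TEMPORARY TABLE IF EXISTS tmp_results;
  CREATE TEMPORARY TABLE tmp_results (id INT, processed_amount DECIMAL(10,2));

  OPEN cur;
  read_loop: LOOP
    FETCH cur INTO v_id, v_amount;
    IF done THEN
      LEAVE read_loop;
    END IF;
    -- example per-row "logic": apply a 10% surcharge
    INSERT INTO tmp_results VALUES (v_id, v_amount * 1.10);
  END LOOP;
  CLOSE cur;

  -- the result set returned to the caller (e.g. a Java client)
  SELECT * FROM tmp_results;
END //
DELIMITER ;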
Here is the problem that I have been stuck on for the past few days, and I am looking for guidance and approaches on how to handle it. Hints and suggestions are welcome.
So here is the problem. The database has a table "group" which has two columns: group_id and parent_group_id. group_id is the primary key of the table. All entries in this table represent groups/sub-groups. If one adds a sub-group from the front end, an entry gets inserted into the group table with an auto-generated group_id which MySQL generates. The parent_group_id corresponds to the group_id of the group under which the sub-group was added, so in essence it acts like a foreign key to the group_id column. My task here is to generate, in Java, an XML document from the data in the group table. This is where I am stuck. I know a recursive function needs to be written, but I can't figure out how to dynamically create the nodes and fill them with data from the DB at the same time. The final XML needs to be sent as JSON data to the front end.
A group can have n sub-groups and the hierarchy can go on indefinitely. For example, say Vehicle is the root node with group_id = 1. It can have Car and Bike as sub-groups, so the parent_group_id will be 1 for both Car and Bike, and their group_ids will be, say, 2 and 3 respectively.
P.S.: This is the first time I am posting here, having used this site for the past year. Please let me know if any more info is needed or whether my problem isn't clear enough.
If you split the task into two (first querying the hierarchical data, then building the XML from it), it will be more manageable.
Here are some useful links on querying hierarchical data in relational databases and specifically in MySQL:
What are the options for storing hierarchical data in a relational database?
http://mikehillyer.com/articles/managing-hierarchical-data-in-mysql/
http://www.slideshare.net/billkarwin/models-for-hierarchical-data
http://en.wikipedia.org/wiki/Common_table_expressions#Common_table_expression
As long as you have the query result properly sorted, you will be able to traverse it recursively, building the XML tree step by step.
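As a sketch of the querying half: MySQL 8.0+ can pull the whole hierarchy out in one sorted result set with a recursive CTE (on older versions, select all rows ordered by parent_group_id and do the recursion in Java instead). This assumes root groups have a NULL parent_group_id, and "group" is quoted because it is a reserved word:

WITH RECURSIVE group_tree AS (
    -- root groups, e.g. Vehicle (group_id = 1)
    SELECT group_id, parent_group_id, 0 AS depth
    FROM `group`
    WHERE parent_group_id IS NULL
  UNION ALL
    -- attach each sub-group under its parent, e.g. Car and Bike under Vehicle
    SELECT g.group_id, g.parent_group_id, gt.depth + 1
    FROM `group` g
    JOIN group_tree gt ON g.parent_group_id = gt.group_id
)
SELECT * FROM group_tree
ORDER BY depth, parent_group_id;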
I was able to solve it by using recursive functions :). I loaded all the data using the entity class and then iterated over it with recursive functions to build the tree-like structure. I didn't go the SQL way.
<h1>Abasement</h1>
<hw>A*base"ment</hw> <tt>(#)</tt>,
<tt>n.</tt>
<ety>[Cf. F. <ets>abaissement</ets>.]</ety>
<def>The act of abasing, humbling, or bringing low; the state of being abased or humbled;humiliation.</def>
I have a file with definitions of words like this, enclosed in tags. I'd like to convert this to an SQLite database with the fields Word, Pronunciation, Part_Of_Speech, Etymology and Definition. Any help is appreciated.
[My first question here!]
You can't simply 'convert' it (unless there happens to be a tool out there that will do it for you). You have to create an SQLite database first and create a table for the data. Maybe name the table 'Words', then create columns for each field, and figure out how to do a SQL INSERT to get your data into it.
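A minimal sketch of the SQLite side, using the fields you listed; parsing the tagged file into these values is separate work you would do in your own code before running the INSERTs:

CREATE TABLE Words (
    Word           TEXT,
    Pronunciation  TEXT,
    Part_Of_Speech TEXT,
    Etymology      TEXT,
    Definition     TEXT
);

-- the entry quoted above would become one row
INSERT INTO Words (Word, Pronunciation, Part_Of_Speech, Etymology, Definition)
VALUES ('Abasement',
        'A*base"ment',
        'n.',
        '[Cf. F. abaissement.]',
        'The act of abasing, humbling, or bringing low; the state of being abased or humbled; humiliation.');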
I have a requirement to implement a contact database. This contact database is special in that the user should be able to dynamically (at runtime) add the properties he/she wants to track about a contact. Some of these properties are strings, others numbers or dates. Some of the properties have predefined values, others are free-form fields, and so on. The user also wants to be able to query such a structure quickly and easily. The database needs to comfortably handle 500,000 contacts, each having around 10 properties.
This leads to a dynamic property model: a Contact class with dynamic properties.
class Contact {
    // each dynamic property maps to the collection of values stored for it
    private Map<DynamicProperty, Collection<DynamicValue>> propertiesAndValues;
    // other useful methods
}
The question is how I can store such a structure in "some database" (it does not have to be an RDBMS) so that I can easily express queries such as:
Get all contacts whose name starts with Martin and who are from a company of size 5000 or less, ordered by the time the contact was inserted into the database, returning only the first 100 results (to provide pagination), where each of these criteria corresponds to a dynamic property.
I need:
filtering - equals, partial match, and greater/less than for integers and dates; maybe aggregation as well, but that is not necessary at this point
sorting
pagination
I was considering an RDBMS, but that leads more or less to the structure below, which is quite hard to query and tends to be slow for this amount of data (an example of the kind of query it forces is sketched after the schema):
contact(id serial pk,....);
dynamic_property(dp_id serial pk, ...);
--only one of the values is not empty
dynamic_property_value(dpv_id serial pk, dynamic_property_fk int, value_integer int, date_value timestamp, text_value text);
contact_properties(pav_id serial pk, contact_id_fk int, dynamic_propert_fk int);
property_and_its_value(pav_id_fk int, dpv_id int);
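To illustrate the pain, the Martin/company-size query above would look roughly like this in that layout (property ids 1 = name and 2 = company size are hypothetical); every extra criterion adds another set of joins, which is what makes it slow at 500,000 rows:

SELECT c.id
FROM contact c
-- criterion 1: name starts with Martin
JOIN contact_properties     cp1 ON cp1.contact_id_fk = c.id AND cp1.dynamic_propert_fk = 1
JOIN property_and_its_value pv1 ON pv1.pav_id_fk = cp1.pav_id
JOIN dynamic_property_value dv1 ON dv1.dpv_id = pv1.dpv_id AND dv1.text_value LIKE 'Martin%'
-- criterion 2: company size 5000 or less
JOIN contact_properties     cp2 ON cp2.contact_id_fk = c.id AND cp2.dynamic_propert_fk = 2
JOIN property_and_its_value pv2 ON pv2.pav_id_fk = cp2.pav_id
JOIN dynamic_property_value dv2 ON dv2.dpv_id = pv2.dpv_id AND dv2.value_integer <= 5000
ORDER BY c.id   -- stand-in for "time the contact was inserted"
LIMIT 100;      -- pagination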
I am considering the following options:
Store contacts in an RDBMS and use Lucene for querying - is there anything that would help with this?
Store the dynamic properties as XML in the RDBMS and use its XPath support - unfortunately this seems to be pretty slow for 500,000 contacts
Use another database, such as MongoDB or Jackrabbit, to store this information
Which way would you go and why?
Wikipedia has a great entry on Entity-Attribute-Value modeling which is a data modeling technique for representing entities with arbitrary properties. It's typically used for clinical data, but might apply to your situation as well.
Have you considered using Lucene for your querying needs? You could probably get away with just using Lucene and store all your data in the index. Although I wouldn't recommend using Lucene as your only persistence store.
Alternatively, you could use Lucene along with a RDBMS and take advantage of something like Compass.
You could try other kinds of databases, like CouchDB, which is a document-oriented DB and is distributed.
If you want a dumb solution, you could add some 50 columns to your contacts table, like STRING_COLUMN1, STRING_COLUMN2... up to 10, DATE_COLUMN1..DATE_COLUMN10, and so on, plus another DESCRIPTION column. So if a row has a name, which is a string, then STRING_COLUMN1 stores the value of the name and the DESCRIPTION column value would be "STRING_COLUMN1-NAME". In this case querying can be a bit tricky. I know many purists laugh at this, but I have seen a similar requirement solved this way in one of the apps :)
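A rough sketch of what querying that layout looks like (table name and values are hypothetical), which shows why it gets tricky: every filter has to check both the DESCRIPTION mapping and the physical column:

SELECT *
FROM contacts
WHERE DESCRIPTION LIKE '%STRING_COLUMN1-NAME%'
  AND STRING_COLUMN1 LIKE 'Martin%';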
I have a classifieds website, with approx 30 categories of classifieds.
I am on the stage where I have to build MySQL tables and index them with SOLR.
Each row in a table has around 15 fields...
I am looking for performance!
I wonder which of these two methods works best:
1- Have one MySQL table for each category, meaning 30 tables, and then have multiple indexes in SOLR. (This would mean that if the user only wants to search one specific category, only that table/index is searched, thus gaining performance, I think. However, if the user searches ALL categories at once, then all tables/indexes would have to be searched.)
2- Have one and only one MySQL table, and only one index in SOLR.
Thanks
Assuming that all of the different types of classifieds have the same structure, I would do the following:
Store the text in a single table, along with another field for category (and other fields for whatever other information is associated with a category).
In Solr, build an index that has a text field, a category field, and a PK field. The text and category fields would be indexed but not stored, and the PK field (storing the primary key corresponding to your MySQL table) would be stored but not indexed.
Allow the user to do two kinds of searches: one with just text, and one with text and category. For the latter, the category should be an exact match. The Solr search will return a list of PKs which will allow you to then retrieve documents from MySQL.
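The MySQL half of that round trip is then just an IN query over the PKs Solr returned; a sketch with a hypothetical classifieds table:

-- 101, 245, 398 stand in for the primary keys returned by Solr
SELECT *
FROM classifieds
WHERE id IN (101, 245, 398)
ORDER BY FIELD(id, 101, 245, 398);  -- optional: preserve Solr's relevance order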
You will not see much of a performance improvement by splitting your index up into 30 indices, because Solr/Lucene is already very efficient at finding data via its inverted indices. Specifying the category name is sufficient.