I'm having trouble modeling a DB table whose keys (in a key-value sense) are configurable.
Basically, I want to read battery data that comes as key-value pairs, where the key is a ParamName and the value is that parameter's value. Also, each battery has its own IP address.
Examples of ParamName would be 'BatteryStatus', 'BatteryTemperature', 'BatteryCurrent', etc.
My Java program would read the battery information using the IP address and get the values of all required ParamNames.
Now I could have easily defined the table with each ParamName as a column:
IP | BatteryStatus | BatteryTemperature | etc.
But the problem is that the ParamNames are defined in a configuration file, and I should be able to add new ParamNames or delete existing ones without touching code or the DB. So I cannot use a fixed table structure.
If I create something like below, it will duplicate the IPs:
IP            | ParamName          | ParamValue
102.103.123.1 | BatteryStatus      | "normal"
102.103.123.1 | BatteryTemperature | 32
102.103.123.1 | BatteryCurrent     | 220
102.103.123.2 | BatteryStatus      | "normal"
102.103.123.2 | BatteryTemperature | 35
etc.
As you can see, I'm trying to store a Key-{Key-Value} structure in the DB. Any ideas how to do this effectively?
Having a triplet table as in your example is pretty close, assuming a proper index on IP and ParamName.
The first optimisation I'd do is to replace ParamName with an INT column that is either a foreign key to a separate ParamName table or resolved through a lookup hashtable in your client code.
You could do the same for IP (a linked table, not the lookup hashtable), or you could just translate the addresses to their numeric value and use that directly (see "What type should I store IP addresses for MySQL?"), which is optimal both space- and performance-wise, just not (directly) very human-readable.
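A minimal JDBC sketch of that layout, assuming MySQL; all table, column and connection names here are illustrative, not part of your setup:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class SchemaSketch {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(
                     "jdbc:mysql://localhost/batterydb", "user", "pass");
             Statement st = con.createStatement()) {

            // One row per configurable parameter name; new names are just new rows.
            st.execute("CREATE TABLE param_name ("
                     + " id INT AUTO_INCREMENT PRIMARY KEY,"
                     + " name VARCHAR(64) NOT NULL UNIQUE)");

            // One row per (battery, parameter) pair; the IP is stored numerically.
            st.execute("CREATE TABLE battery_param ("
                     + " ip INT UNSIGNED NOT NULL,"
                     + " param_id INT NOT NULL,"
                     + " value VARCHAR(255),"
                     + " PRIMARY KEY (ip, param_id),"
                     + " FOREIGN KEY (param_id) REFERENCES param_name(id))");

            // INET_ATON/INET_NTOA convert between dotted notation and the numeric form.
            st.execute("INSERT INTO battery_param (ip, param_id, value)"
                     + " SELECT INET_ATON('102.103.123.1'), id, '32'"
                     + " FROM param_name WHERE name = 'BatteryTemperature'");
        }
    }
}

Adding or removing a ParamName then only means inserting or deleting a row in param_name, driven by your configuration file; neither the code nor the table structure changes.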
I just need this last piece of the puzzle to finish my plugin. Currently I am having a problem with how to set up my MySQL table for all alt accounts that are logged into a server. I know that I either need to have a very high fixed number of columns and make the UUIDs fill the next empty cell, or add a new column for every UUID, but I just need to know the most efficient way to associate all UUIDs with a single IP (the primary key). It looks something like this at the moment:
IP |
Row #1
Row #2
etc
Don't use the IP as the primary key. A primary key must be unique, and since the same IP address occurs multiple times with different UUIDs, that makes it hard to accomplish what you need.
Try something like this:
id (PK)| ip_address | uuid | date
--------------------------------------
1 | 1.2.3.4 | as-df-gh | 12345
2 | 1.2.3.4 | df-as-gh | 12346
3 | 2.3.4.5 | as-gh-df | 12347
4 | 3.4.5.6 | as-df-gh | 12348
Whenever someone logs in, you can then add another row (or, if you don't need the login date column, first check whether a row with that IP/UUID pair already exists and skip the insert; a sketch of that follows at the end of this answer).
Now you can select all UUIDs from a certain IP address:
SELECT uuid FROM your_table WHERE ip_address = '1.2.3.4'
results in
uuid
--------
as-df-gh
df-as-gh
Or the other way around:
SELECT ip_address FROM your_table WHERE uuid = 'as-df-gh'
results in
ip_address
----------
1.2.3.4
3.4.5.6
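A minimal JDBC sketch of the insert-or-skip logic described above, assuming MySQL; the table name logins and the numeric date column are assumptions:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class LoginRecorder {

    // Adds a row only if the (ip, uuid) pair is not already present.
    static void recordLogin(Connection con, String ip, String uuid) throws Exception {
        try (PreparedStatement check = con.prepareStatement(
                "SELECT 1 FROM logins WHERE ip_address = ? AND uuid = ?")) {
            check.setString(1, ip);
            check.setString(2, uuid);
            try (ResultSet rs = check.executeQuery()) {
                if (rs.next()) {
                    return; // pair already known, skip the insert
                }
            }
        }
        try (PreparedStatement insert = con.prepareStatement(
                "INSERT INTO logins (ip_address, uuid, date) VALUES (?, ?, ?)")) {
            insert.setString(1, ip);
            insert.setString(2, uuid);
            insert.setLong(3, System.currentTimeMillis()); // drop this if you don't track dates
            insert.executeUpdate();
        }
    }
}

With a UNIQUE index on (ip_address, uuid) you could also let the database enforce this and simply use INSERT IGNORE.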
I have created a simple entity with Hibernate with a @Lob String field. Everything works fine in Java, however I am not able to check the values directly in the DB with psql or pgAdmin.
Here is the definition from DB:
=> \d+ user_feedback
Table "public.user_feedback"
Column | Type | Modifiers | Storage | Stats target | Description
--------+--------+-----------+----------+--------------+-------------
id | bigint | not null | plain | |
body | text | | extended | |
Indexes:
"user_feedback_pkey" PRIMARY KEY, btree (id)
Has OIDs: no
And here is what I get from a select:
=> select * from user_feedback;
id | body
----+-------
34 | 16512
35 | 16513
36 | 16514
(3 rows)
The actual "body" content is for all rows "normal" text, definitely not these numbers.
How to retrieve actual value of body column from psql?
This will store the content of LOB 16512 in the file out.txt:
\lo_export 16512 out.txt
Note that using @Lob is usually not recommended here (database backup issues, etc.). See store-strings-of-arbitrary-length-in-postgresql for alternatives.
Hibernate is storing the values as out-of-line objects in the pg_largeobject table, and storing the Object ID for the pg_largeobject entry in your table. See PostgreSQL manual - large objects.
It sounds like you expected inline byte array (bytea) storage instead. If so, you may want to map a byte[] field without a @Lob annotation, rather than a @Lob String. Note that this change will not be backward compatible: you'll have to export your data from the database, then drop the table and re-create it with Hibernate's new definition.
The selection of how to map your data is made by Hibernate, not PostgreSQL.
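One common alternative (not the byte[] route above, but related to the linked question) is to keep the field a String and force an inline text column. A sketch of such a mapping, assuming a standard JPA/Hibernate setup; the entity is illustrative and, like the byte[] change, not backward compatible with the existing large-object data:

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;

@Entity
public class UserFeedback {

    @Id
    private Long id;

    // Without @Lob, Hibernate maps the String to an ordinary column; with
    // columnDefinition = "text" the value is stored inline in PostgreSQL,
    // so "select body from user_feedback" shows the real content in psql.
    @Column(columnDefinition = "text")
    private String body;

    // getters and setters omitted
}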
See related:
proper hibernate annotation for byte[]
How to store image into postgres database using hibernate
I'm trying to store a list/collection of data objects in HBase. For example, a User table where the userId is the row key and the column family Contacts has a column Contacts:EmailIds, where EmailIds is a list of emails such as
{abcd@example.com, bpqrs@gmail.com, ... etc.}
How do we model this in HBase? And how do we do this in Java or Python? I've tried pickling and unpickling the data in Python, but that is a solution I do not want to use due to performance issues.
You can model it in either of the following ways:
| userid | contacts |
| test | c:email1=test@example.com; c:email2=te.st@example.com |
or
| userid | contacts |
| test | c:test@example.com=1; c:te.st@example.com=2 |
This way you can use versioning, add or remove as many email addresses as you want, use filters, and it is really easy to iterate over these KV pairs in the client code.
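A rough sketch of the first layout with the HBase Java client (1.x+ API); the table name "User", family "c" and row key "test" follow the example above, everything else is assumed:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class UserContacts {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table users = conn.getTable(TableName.valueOf("User"))) {

            // One qualifier per email address under the "c" (Contacts) family.
            Put put = new Put(Bytes.toBytes("test"));            // row key = userId
            put.addColumn(Bytes.toBytes("c"), Bytes.toBytes("email1"),
                          Bytes.toBytes("test@example.com"));
            put.addColumn(Bytes.toBytes("c"), Bytes.toBytes("email2"),
                          Bytes.toBytes("te.st@example.com"));
            users.put(put);

            // Reading them back: iterate over all qualifiers of the family.
            Result row = users.get(new Get(Bytes.toBytes("test")));
            row.getFamilyMap(Bytes.toBytes("c")).forEach((qualifier, value) ->
                    System.out.println(Bytes.toString(qualifier) + " = " + Bytes.toString(value)));
        }
    }
}

With the second layout you would put the email address itself in the qualifier, which makes existence checks and deletion of a single address cheap.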
I'm currently working on a Talend job. I need to load data from an Excel file into an Oracle 11g database.
I can't figure out how to split a field of my Excel input file within Talend and load the resulting pieces into the database.
For example I've got a field like this:
toto:12;tata:1;titi:15
And I need to load it into a table, for example grade:
| name | grade |
|------|-------|
| toto |12 |
| titi |15 |
| tata |1 |
|--------------|
Thanks in advance.
In a Talend job, you can use tFileInputExcel to read your Excel file, and then tNormalize to split your special column into individual rows with a separator of ";". After that, use tExtractDelimitedFields with a separator of ":" to split the normalized column into name and grade columns. Then you can use a tOracleOutput component to write the result to the database.
While this solution is more verbose than the Java snippet suggested by AlexR, it has the advantage that it stays within Talend's graphical programming model.
// str is the raw field, e.g. "toto:12;tata:1;titi:15"
for (String pair : str.split(";")) {
    String[] kv = pair.split(":");
    // at this point you have the separated values
    String name = kv[0];   // e.g. "toto"
    String grade = kv[1];  // e.g. "12"
    dbInsert(name, grade);
}
Now you have to implement dbInsert(). Do it either using JDBC or any higher-level tool (e.g. Hibernate, iBatis, JDO, JPA, etc.).
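A possible dbInsert() with plain JDBC, assuming the grade table from the question; the Oracle thin-driver URL, credentials and the numeric grade column are assumptions:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class GradeDao {

    // Adjust the URL and credentials to your environment.
    private static final String URL = "jdbc:oracle:thin:@//dbhost:1521/ORCL";

    static void dbInsert(String name, String grade) throws Exception {
        try (Connection con = DriverManager.getConnection(URL, "user", "password");
             PreparedStatement ps = con.prepareStatement(
                     "INSERT INTO grade (name, grade) VALUES (?, ?)")) {
            ps.setString(1, name);
            ps.setInt(2, Integer.parseInt(grade.trim())); // assumes grade is numeric
            ps.executeUpdate();
        }
    }
}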
I have a database table like this
Port Code | Country | Port Name
--------------------------------------
1234 | Australia | port1
2345 | India | Mumbai
2341 | Australia | port2
...
The table consists of around 12000 entries. I need to auto-complete as the user enters the query. The query can be a port code, a country, or a port name. For example, if the user's partial query is '12', the drop-down should display 1234 | Australia | port1. The problem I'm facing is that I query the database for each keystroke, which makes the auto-complete really slow. Is there a way to optimize this?
In SmartGWT, use a ComboBoxItem. Then override getPickListFilterCriteria of the ComboBoxItem like this:
ComboBoxItem portSelect = new ComboBoxItem("PORT_ATTRIB", "") {
    @Override
    public Criteria getPickListFilterCriteria() {
        Criteria criteria = null; // no extra filtering until the user has typed something
        if (getValue() != null && getValue() instanceof String) {
            criteria = new AdvancedCriteria(OperatorId.AND, new Criterion[]{
                    new Criterion("portValue", OperatorId.EQUALS, getDisplayValue())});
        }
        return criteria;
    }
};
Every key press will give you a criteria which you can pass to your query. The query will be something like:
SELECT * FROM port WHERE portName LIKE 'criteria%' OR portCode LIKE 'criteria%'
where criteria is the text typed so far.
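A sketch of that lookup with a PreparedStatement (prefix match on all three columns; the port table and its column names are assumed from the question):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.ArrayList;
import java.util.List;

public class PortSuggester {

    // Returns "code | country | name" strings matching the typed prefix.
    static List<String> suggest(Connection con, String prefix) throws Exception {
        String sql = "SELECT port_code, country, port_name FROM port "
                   + "WHERE port_code LIKE ? OR country LIKE ? OR port_name LIKE ?";
        List<String> suggestions = new ArrayList<>();
        try (PreparedStatement ps = con.prepareStatement(sql)) {
            String p = prefix + "%"; // prefix-only LIKE can still use indexes
            ps.setString(1, p);
            ps.setString(2, p);
            ps.setString(3, p);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    suggestions.add(rs.getString(1) + " | " + rs.getString(2)
                            + " | " + rs.getString(3));
                }
            }
        }
        return suggestions;
    }
}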
You could do this with Lucene and a RAMDirectory. You build an index on your data and implement a lookup service that checks from time to time whether changes have occurred in the database (or use any other mechanism that pushes updates from your database into the Lucene index). Use Lucene for indexing your DB, and for querying use the MultiFieldQueryParser.
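A rough sketch of that approach, assuming a Lucene 6/7-style API with an in-memory RAMDirectory; the field names are illustrative:

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.queryparser.classic.MultiFieldQueryParser;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.RAMDirectory;

public class PortIndex {
    public static void main(String[] args) throws Exception {
        Directory dir = new RAMDirectory();
        StandardAnalyzer analyzer = new StandardAnalyzer();

        // Index each DB row as one document (load the 12000 rows once, refresh periodically).
        try (IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig(analyzer))) {
            Document doc = new Document();
            doc.add(new TextField("portCode", "1234", Field.Store.YES));
            doc.add(new TextField("country", "Australia", Field.Store.YES));
            doc.add(new TextField("portName", "port1", Field.Store.YES));
            writer.addDocument(doc);
        }

        // Search all three fields with a prefix query built from the user's partial input.
        try (DirectoryReader reader = DirectoryReader.open(dir)) {
            IndexSearcher searcher = new IndexSearcher(reader);
            MultiFieldQueryParser parser = new MultiFieldQueryParser(
                    new String[]{"portCode", "country", "portName"}, analyzer);
            Query query = parser.parse("12*");
            for (ScoreDoc hit : searcher.search(query, 10).scoreDocs) {
                Document d = searcher.doc(hit.doc);
                System.out.println(d.get("portCode") + " | " + d.get("country")
                        + " | " + d.get("portName"));
            }
        }
    }
}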
Is your database indexed correctly? Lookups on indexed columns should be pretty fast - 12k rows is not a great deal for any relational DB.
Another thing I could suggest is to load the table data into an in-memory table. I've done this in MySQL a long time back: http://dev.mysql.com/doc/refman/5.0/en/memory-storage-engine.html. This helps especially if the data does not change very frequently, so a one-time load of the data into an in-memory table is quick. After that, all queries should be executed against this in-memory table, and those are amazingly fast.
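A sketch of both suggestions via JDBC, assuming MySQL and the port table/column names used above:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class PortTuning {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(
                     "jdbc:mysql://localhost/portdb", "user", "pass");
             Statement st = con.createStatement()) {

            // Indexes so the prefix lookups don't scan all 12000 rows.
            st.execute("CREATE INDEX idx_port_code ON port (port_code)");
            st.execute("CREATE INDEX idx_port_name ON port (port_name)");
            st.execute("CREATE INDEX idx_country ON port (country)");

            // One-time copy into a MEMORY table; point the autocomplete queries at port_mem.
            st.execute("CREATE TABLE port_mem ENGINE=MEMORY AS SELECT * FROM port");
        }
    }
}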