So I have the following table that I must map to Java Objects:
+---------+-----------+---------------------+---------------------+--------+
| task_id | attribute | lastModified        | activity            | row_id |
+---------+-----------+---------------------+---------------------+--------+
|       1 |         1 | 2016-08-23 21:05:09 | first activity      |      1 |
|       1 |         3 | 2016-08-23 21:08:28 | connect to db       |      2 |
|       1 |         3 | 2016-08-23 21:08:56 | create web services |      3 |
|       1 |         4 | 2016-08-23 21:08:56 | data dump           |      4 |
|       1 |         5 | 2016-08-23 21:08:56 | test cases          |      5 |
|       1 |         6 | 2016-08-23 21:08:57 | dao object          |      6 |
|       1 |         7 | 2016-08-23 21:08:57 | buy streetfood      |      7 |
|       2 |         6 | 2016-08-23 21:08:57 | drink coke          |      8 |
|       2 |         6 | 2016-08-23 21:09:00 | drink tea           |      9 |
|       2 |         1 | 2016-08-23 21:12:30 | make tea            |     10 |
|       2 |         2 | 2016-08-23 21:13:32 | charge phone        |     11 |
|       2 |         3 | 2016-08-23 21:13:32 | shower              |     12 |
|       2 |         4 | 2016-08-23 21:13:32 | sleep               |     13 |
+---------+-----------+---------------------+---------------------+--------+
Here, each Task object (identified by the task_id column) has multiple Attribute objects. These Attribute objects have the lastModified and activity fields. So far my approach has been to map each row of the table to a Row object via MyBatis and then do some Java-side processing to sort everything out. Is there a way to map this table directly via MyBatis annotations and/or XML so that the two Task objects are created, each with a list of populated Attribute objects inside?
Here is the MyBatis documentation: http://www.mybatis.org/mybatis-3/sqlmap-xml.html. Maybe you can use a MyBatis collection to solve your problem.
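For example, with annotations the nesting can be expressed with @Many. This is a minimal sketch, not a drop-in solution: the table name tasks and the Task/Attribute property names are assumptions, and depending on configuration the nested select may need to be referenced by its fully qualified statement name.

import java.util.List;
import org.apache.ibatis.annotations.Many;
import org.apache.ibatis.annotations.Param;
import org.apache.ibatis.annotations.Result;
import org.apache.ibatis.annotations.Results;
import org.apache.ibatis.annotations.Select;

public interface TaskMapper {

    // One Task per distinct task_id; the attributes list is filled in by
    // the nested select referenced via @Many.
    @Select("SELECT DISTINCT task_id FROM tasks")
    @Results({
        @Result(property = "taskId", column = "task_id", id = true),
        @Result(property = "attributes", column = "task_id",
                many = @Many(select = "selectAttributesForTask"))
    })
    List<Task> selectAllTasks();

    // Runs once per task; MyBatis passes the outer row's task_id in.
    @Select("SELECT attribute, lastModified, activity FROM tasks "
            + "WHERE task_id = #{taskId}")
    List<Attribute> selectAttributesForTask(@Param("taskId") long taskId);
}

The XML equivalent is a <resultMap> with a <collection> element; that form can also map a single joined query, which avoids the extra select per task that the nested-select style issues.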
Related
I have a requirement where I have to save employee details to a child table. The employee data remains the same for multiple parent records (the columns as well). I want to use the existing child table and insert a row if it is not present, or update it if it is already present. How can I do that using Spring Data JPA / Hibernate?
Parent Table
|id|project_id|owner_emp_id|developer_emp_id|lead_emp_id|tester_emp_id|
|:---- |:------:| -----:|:---- |:------:| -----:|
| 1 | 100 | emp_10 | emp_20 | emp_20 | emp_30 |
| 2 | 200 | emp_11 | emp_21 | emp_22 | emp_30 |
employee child table
|emp_id|first_name|last_name|email|phone|
|:---- |:------:| -----:|:---- |:------:|
| emp_10 | .. | .. | .. | .. |
| emp_20 | .. | .. | .. | .. |
| emp_30 | .. | .. | .. | .. |
| emp_11 | .. | .. | .. | .. |
| emp_21 | .. | .. | .. | .. |
| emp_22 | .. | .. | .. | .. |
In the above example, when saving the child table for the first parent record, the emp_20 data needs to be saved only once. Similarly, when inserting the second parent record, emp_30 is already present, so it should be updated or the save should be skipped. How can I do this in JPA?
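One common approach is to look the row up by its ID first and branch on the result. A minimal sketch, assuming an Employee entity keyed by emp_id and a Spring Data EmployeeRepository (both names are placeholders):

import java.util.Optional;

public class EmployeeUpsert {

    // Either refresh the existing entity or insert the incoming one.
    public static Employee insertOrUpdate(EmployeeRepository repo, Employee incoming) {
        Optional<Employee> existing = repo.findById(incoming.getEmpId());
        if (existing.isPresent()) {
            Employee e = existing.get();
            e.setFirstName(incoming.getFirstName());
            e.setLastName(incoming.getLastName());
            e.setEmail(incoming.getEmail());
            e.setPhone(incoming.getPhone());
            return repo.save(e);    // update the existing row
        }
        return repo.save(incoming); // insert a new row
    }
}

Note that with an assigned (non-generated) ID like emp_id, Spring Data JPA's save() already falls back to an EntityManager merge for entities it considers "not new", so in many setups calling save() alone gives insert-or-update behavior.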
Let's say I have a table employee (with id and name columns as follows):
|---------------------|------------------|
| id                  | name             |
|---------------------|------------------|
| 12                  | adrian           |
|---------------------|------------------|
| 5                   | anne             |
|---------------------|------------------|
| 9                   | bryce            |
|---------------------|------------------|
| 10                  | burns            |
|---------------------|------------------|
| 1                   | charles          |
|---------------------|------------------|
| 2                   | clyde            |
|---------------------|------------------|
I want to create a Page with, for example, size 3, and I need each resulting page's content to be ordered by id while the pages themselves still follow the name order.
The point is, if I use the following query:
employeeRepository.findAll(PageRequest.of(0, 3, Sort.by("name").ascending().and(Sort.by("id").ascending())));
it does not re-order the page result by id; it just keeps the name order.
I've also tried ordering first with Spring Data (OrderByName) and then by the page request:
employeeRepository.findAllByOrderByName(PageRequest.of(page, size, Sort.by("id").ascending()));
But I always get the following result:
Page 1
|---------------------|------------------|
| id                  | name             |
|---------------------|------------------|
| 12                  | adrian           |
|---------------------|------------------|
| 5                   | anne             |
|---------------------|------------------|
| 9                   | bryce            |
|---------------------|------------------|
Page 2
|---------------------|------------------|
| id                  | name             |
|---------------------|------------------|
| 10                  | burns            |
|---------------------|------------------|
| 1                   | charles          |
|---------------------|------------------|
| 2                   | clyde            |
|---------------------|------------------|
But the result I need is:
Page 1
|---------------------|------------------|
| id                  | name             |
|---------------------|------------------|
| 5                   | anne             |
|---------------------|------------------|
| 9                   | bryce            |
|---------------------|------------------|
| 12                  | adrian           |
|---------------------|------------------|
Page 2
|---------------------|------------------|
| id                  | name             |
|---------------------|------------------|
| 1                   | charles          |
|---------------------|------------------|
| 2                   | clyde            |
|---------------------|------------------|
| 10                  | burns            |
|---------------------|------------------|
Notice that each page is re-ordered by id while the pages themselves still follow the name order.
Is there a way to do that just using Page and Sort, or is a post-process of the Page result needed?
Any ideas?
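If post-processing turns out to be acceptable, a minimal sketch would be to page by name so the page boundaries stay name-ordered, then re-sort the content of the returned page by id. The Employee type, its getId accessor, and EmployeeRepository are assumptions here:

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import org.springframework.data.domain.Page;
import org.springframework.data.domain.PageImpl;
import org.springframework.data.domain.PageRequest;
import org.springframework.data.domain.Pageable;
import org.springframework.data.domain.Sort;

public class EmployeePaging {

    // Page boundaries follow the name order; each page's content is then
    // re-sorted by id before being handed back.
    public static Page<Employee> pageByNameThenId(EmployeeRepository repo,
                                                  int page, int size) {
        Pageable byName = PageRequest.of(page, size, Sort.by("name").ascending());
        Page<Employee> namePage = repo.findAll(byName);

        List<Employee> reordered = new ArrayList<>(namePage.getContent());
        reordered.sort(Comparator.comparing(Employee::getId));

        return new PageImpl<>(reordered, byName, namePage.getTotalElements());
    }
}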
I am trying to load data from a text file into a MySQL table by calling MySQL's LOAD DATA INFILE from a Java process. This file can contain data for the current date as well as for previous days. The table can also contain data for previous dates. The problem is that some of the column values in the file for previous dates might have changed, but I don't want to update all of these columns; I only want the latest values for some of them.
Example,
Table
+----+-------------+------+------+------+
| id | report_date | val1 | val2 | val3 |
+----+-------------+------+------+------+
|  1 | 2012-12-01  |   10 |    1 |    1 |
|  2 | 2012-12-02  |   20 |    2 |    2 |
|  3 | 2012-12-03  |   30 |    3 |    3 |
+----+-------------+------+------+------+
Data in the input file:
1|2012-12-01|10|1|1
2|2012-12-02|40|4|4
3|2012-12-03|40|4|4
4|2012-12-04|40|4|4
5|2012-12-05|50|5|5
Table after the load should look like:
mysql> select * from load_infile_tests;
+----+-------------+------+------+------+
| id | report_date | val1 | val2 | val3 |
+----+-------------+------+------+------+
|  1 | 2012-12-01  |   10 |    1 |    1 |
|  2 | 2012-12-02  |   40 |    4 |    2 |
|  3 | 2012-12-03  |   40 |    4 |    3 |
|  4 | 2012-12-04  |   40 |    4 |    4 |
|  5 | 2012-12-05  |   50 |    5 |    5 |
+----+-------------+------+------+------+
5 rows in set (0.00 sec)
Note that the val3 column values are not updated. I also need to do this for large files; some files can be over 300 MB, so it needs to be a scalable solution.
Thanks,
Anirudha
It would be natural to use LOAD DATA INFILE with the REPLACE option, but in that case records are deleted and re-inserted, so the old val3 values would be lost.
Try loading the data into a temporary table, then updating your table from the temporary table using INSERT ... SELECT/UPDATE or INSERT ... ON DUPLICATE KEY UPDATE statements.
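A sketch of that approach from Java, with the connection details, staging table name, and file path as placeholders, and assuming id is the table's primary key:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class StagingLoad {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:mysql://localhost/test?allowLoadLocalInfile=true",
                 "user", "password");
             Statement st = conn.createStatement()) {

            // Staging table with the same structure as the target table.
            st.execute("CREATE TEMPORARY TABLE load_infile_stage LIKE load_infile_tests");

            // Bulk-load the pipe-delimited file into the staging table.
            st.execute("LOAD DATA LOCAL INFILE '/tmp/report.txt' "
                     + "INTO TABLE load_infile_stage FIELDS TERMINATED BY '|'");

            // Upsert into the target table, refreshing val1 and val2 but
            // deliberately leaving val3 untouched on existing rows.
            st.execute("INSERT INTO load_infile_tests (id, report_date, val1, val2, val3) "
                     + "SELECT id, report_date, val1, val2, val3 FROM load_infile_stage "
                     + "ON DUPLICATE KEY UPDATE val1 = VALUES(val1), val2 = VALUES(val2)");
        }
    }
}

Because LOAD DATA INFILE still does the bulk of the work, this stays reasonably scalable for large files; only the final upsert pays the per-row comparison cost.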
I have the tables accounts and action. accounts needs to be modified according to the instructions stored in action.
In action, each row contains an account_id, an action (i=insert, u=update, d=delete, x=invalid operation) and a new value for the account.
On an insert, if the account already exists, an update should be done instead
On an update, if the account does not exist, it is created by an insert
On a delete, if the row does not exist, no action is taken
Input
accounts:
+----+-------+
| id | value |
+----+-------+
|  1 |  1000 |
|  2 |  2000 |
|  3 |  1500 |
|  4 |  6500 |
|  5 |   500 |
+----+-------+
action:
+------------+---+-----------+--------+
| account_id | o | new_value | status |
+------------+---+-----------+--------+
|          3 | u |       599 |        |
|          6 | i |      2099 |        |
|          5 | d |           |        |
|          7 | u |      1599 |        |
|          1 | i |       399 |        |
|          9 | d |           |        |
|         10 | x |           |        |
+------------+---+-----------+--------+
Output
accounts:
+----+-------+
| id | value |
+----+-------+
|  1 |   399 |
|  2 |   800 |
|  3 |   599 |
|  4 |  1400 |
|  6 | 20099 |
|  7 |  1599 |
+----+-------+
action:
+------------+---+-----------+---------------------------------------+
| account_id | o | new_value | status                                |
+------------+---+-----------+---------------------------------------+
|          3 | u |       599 | Update: Success                       |
|          6 | i |     20099 | Update: Success                       |
|          5 | d |           | Delete: Success                       |
|          7 | u |      1599 | Update: ID not found. Value inserted  |
|          1 | i |       399 | Insert: Acc exists. Updated instead   |
|          9 | d |           | Delete: ID not found                  |
|         10 | x |           | Invalid operation: No action taken    |
+------------+---+-----------+---------------------------------------+
I am experienced with Java and JDBC, but unfortunately I just don't know how to start here.
Do I need an additional table? Do I have to use triggers?
I've seen two techniques for an upsert. With the first technique, within a transaction, you test first to see whether the row exists, and use the result to determine whether to perform an insert or an update. With the second technique, you try performing an update and check the number of rows affected (JDBC gives you this). If it's zero, you do an insert; if one, you're done.
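A sketch of the second technique against the accounts table from the question (the class and method names are illustrative):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class AccountUpsert {

    public static void upsert(Connection conn, int id, int value) throws SQLException {
        try (PreparedStatement update = conn.prepareStatement(
                 "UPDATE accounts SET value = ? WHERE id = ?")) {
            update.setInt(1, value);
            update.setInt(2, id);
            // executeUpdate() returns the number of affected rows.
            if (update.executeUpdate() == 0) {
                // Zero rows updated: the account does not exist, so insert it.
                try (PreparedStatement insert = conn.prepareStatement(
                         "INSERT INTO accounts (id, value) VALUES (?, ?)")) {
                    insert.setInt(1, id);
                    insert.setInt(2, value);
                    insert.executeUpdate();
                }
            }
        }
    }
}

Both techniques can race under concurrency; running the pair inside a transaction, or using the database's native upsert (e.g. MySQL's INSERT ... ON DUPLICATE KEY UPDATE), closes that gap.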
I want to store millions of time series entries (long time, double value) with Java. (Our monitoring system is currently storing every entry in a large MySQL table, but performance is very bad.)
Are there time series databases implemented in java out there?
Check out http://opentsdb.net/, as used by StumbleUpon.
Check out http://square.github.com/cube/, as used by Square.
I hope to see additional suggestions in this thread.
The performance was bad because of a flawed database design. I am using MySQL, and the table had this layout:
+-------------+--------------------------------------+------+-----+-------------------+-----------------------------+
| Field       | Type                                 | Null | Key | Default           | Extra                       |
+-------------+--------------------------------------+------+-----+-------------------+-----------------------------+
| fk_category | smallint(6)                          | NO   | PRI | NULL              |                             |
| method      | enum('min','max','avg','sum','none') | NO   | PRI | none              |                             |
| time        | timestamp                            | NO   | PRI | CURRENT_TIMESTAMP | on update CURRENT_TIMESTAMP |
| value       | float                                | NO   |     | NULL              |                             |
| accuracy    | tinyint(1)                           | NO   |     | 0                 |                             |
+-------------+--------------------------------------+------+-----+-------------------+-----------------------------+
My fault was an inappropriate index. After adding a multi-column primary key, all my queries are lightning fast:
+-------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| Table | Non_unique | Key_name | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment | Index_comment |
+-------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| job   |          0 | PRIMARY  |            1 | fk_category | A         |          18 | NULL     | NULL   |      | BTREE      |         |               |
| job   |          0 | PRIMARY  |            2 | method      | A         |          18 | NULL     | NULL   |      | BTREE      |         |               |
| job   |          0 | PRIMARY  |            3 | time        | A         |   452509710 | NULL     | NULL   |      | BTREE      |         |               |
+-------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
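For reference, the fix amounts to a statement like the following, shown here executed over JDBC (the connection details are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class AddCompositeKey {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:mysql://localhost/monitoring", "user", "password");
             Statement st = conn.createStatement()) {
            // Composite primary key matching the index listing above.
            st.execute("ALTER TABLE job ADD PRIMARY KEY (fk_category, method, time)");
        }
    }
}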
Thanks for all your answers!
You can take a look at KDB. It's primarily used by financial companies to fetch market time series data.
What do you need to do with the data, and when?
If you are just saving the values for later, a plain text file might do nicely; you can upload it to a database later.
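A minimal sketch of that idea (the file name is a placeholder; a real collector would keep the writer open rather than reopening it per entry):

import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;

public class SeriesLog {

    // Append one "time value" pair per line to a plain text file.
    public static void append(long time, double value) throws IOException {
        try (PrintWriter out = new PrintWriter(new FileWriter("series.log", true))) {
            out.println(time + " " + value);
        }
    }
}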