I'm reading user data from a Java GUI and trying to record it in two different database tables with a single statement. I'm comfortable with the INSERT INTO statement; I just don't know how to enter data into two different tables (linked by a foreign key in one of them), using inner joins and so on.
Please, any help is welcome.
So far I've had all the columns I need in one table, but after normalising the database to 3NF I'm not sure how to insert into all of them.
You need to use two INSERT statements. In the first statement you insert data into the primary table, and in the second statement you insert into the secondary table (where the first table's reference id is used).
If you do it in the opposite order, the database will raise a constraint violation error.
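As a minimal sketch (the table and column names here are hypothetical), in MySQL the key generated by the first insert can be referenced in the second with LAST_INSERT_ID():

```sql
-- parent row first, so the foreign key target exists
INSERT INTO customers (name, email)
VALUES ('Alice', 'alice@example.com');

-- child row second, referencing the id just generated
INSERT INTO orders (customer_id, total)
VALUES (LAST_INSERT_ID(), 99.95);
```

From JDBC, the same id can instead be read off the first insert via Statement.RETURN_GENERATED_KEYS, avoiding a second round trip.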
I am using JPA and Hibernate to connect to a table and insert data into it. I have a table, say User, which has three columns: ID, Name and Address. I have an entity class for it, and to insert the data I simply use the EntityManager object and persist the data in the db, which works like a charm for me.
Now I have a scenario where I want to check whether the values that I am persisting already exist; if so, I have to log an error. Currently I do that by manually loading the rows from the table and checking whether the same values exist, which is fairly simple for the example table (User) that has only three columns. But what if I have a table with 30 columns?
Do I manually load the data based on one condition and check the other columns, or is there a better and shorter way to do that?
30 columns — is that your primary key as well? If the data you are checking for duplication is the primary key, or a unique constraint, then you can use Hibernate to fetch the object before saving and report back if it exists. If the 30 columns are not part of the key, then I would use the equals method, and as such fetch all rows. However, if there are many rows and this would be slow, then I would probably write a dedicated SQL query to check whether an object exists, i.e. a method on a UserDao:
public boolean rowExists(User user) { ... }
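A sketch of the query behind such a method, assuming the duplicate check is on a unique Name column (the column choice is hypothetical):

```sql
SELECT EXISTS (
    SELECT 1 FROM User WHERE Name = ?
) AS row_exists;
```

The DAO binds the parameter and reads back a single boolean, so no entity rows need to be loaded into memory.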
I am implementing an application-specific data import feature from one database to another.
I have a CSV file containing, say, 10000 rows. These rows need to be inserted into or updated in the database.
I am using a MySQL database and inserting from Java.
It may be the case that a couple of rows are already present in the database, which means they need to be updated; if not present, they need to be inserted.
One possible solution is to read the file line by line, check for the entry in the database, and build insert/update queries accordingly. But this process may take a long time, both to create the update/insert queries and to execute them in the database. Sometimes my CSV file may have millions of records.
Is there any other, faster way to achieve this?
I don't know how you determine "is already present", but if it's any kind of database-level constraint (probably on a primary key?) you can make use of the REPLACE INTO statement, which will insert the record unless that causes a duplicate-key error, in which case it deletes the conflicting row and inserts the new one.
It works just like INSERT basically:
REPLACE INTO table ( id, field1, field2 )
VALUES ( 1, 'value1', 'value2' )
If a row with ID 1 exists, it's replaced with these values; otherwise it's created. Note that because REPLACE deletes and re-inserts, any columns you don't list are reset to their defaults.
Given that you're using MySQL, you could use the INSERT ... ON DUPLICATE KEY UPDATE ... statement, which functions similarly to the SQL-standard MERGE statement. MySQL doc reference here and a general Wikipedia reference to SQL MERGE functionality here. The statement would look something like:
INSERT INTO MY_TABLE
(PRIMARY_KEY_COL, COL2, COL3, COL4)
VALUES
(1, 2, 3, 4)
ON DUPLICATE KEY
UPDATE COL2 = 2,
COL3 = 3,
COL4 = 4
In this example I'm assuming that PRIMARY_KEY_COL is a primary or unique key on MY_TABLE. If the INSERT statement would fail due to a duplicate value on the primary or unique key, then the UPDATE clause is executed instead. Also note (on the MySQL doc page) that there are some gotchas associated with auto-increment columns on InnoDB tables.
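For bulk loads like the CSV case, the same statement accepts many rows at once, and the VALUES() function lets each colliding row fall back to an update of its own values (table and column names as in the example above):

```sql
INSERT INTO MY_TABLE (PRIMARY_KEY_COL, COL2, COL3, COL4)
VALUES (1, 2, 3, 4),
       (5, 6, 7, 8),
       (9, 10, 11, 12)
ON DUPLICATE KEY UPDATE
    COL2 = VALUES(COL2),
    COL3 = VALUES(COL3),
    COL4 = VALUES(COL4);
```

From Java this combines well with PreparedStatement batching (addBatch/executeBatch), so a file with millions of rows can be sent in chunks rather than one statement per line.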
Share and enjoy.
Do you need to do this often or just once in a while?
I need to load CSV files from time to time into a database for analysis, and I created an SSIS data solution with a Data Flow task which loads the CSV file into a table on the SQL Server.
For more info, look at this blog:
http://blog.sqlauthority.com/2011/05/12/sql-server-import-csv-file-into-database-table-using-ssis/
Add a stored procedure in SQL for inserting. In the stored procedure, use a try-catch block to do the insert; if the insert fails, do an update. Then you can simply call this procedure from your program.
Alternatively:
UPDATE Table1 SET (...) WHERE Column1='SomeValue'
IF @@ROWCOUNT=0
INSERT INTO Table1 VALUES (...)
Is it possible to have an autoincrementing id shared among several tables? What I mean exactly: I have (let's say) five tables; one of them contains information about sales (sale_id, sold_item_id) and the other four contain info about different kinds of sold items. I want these four to share one pool of ids. How do I do that?
Edit.
I decided to go with Juxhin's solution and created an additional table. Every time I create a record in one of these 4 tables, I autoincrement a new id in that additional table, and this id goes into one of the columns of the new row.
This sounds like a use case for a sequence, and the link seems to indicate that JavaDB supports it.
So you create one common sequence for all tables:
CREATE SEQUENCE MYSEQUENCE
and then use it when inserting into your tables:
INSERT INTO TAB1(ID,....) VALUES(NEXT VALUE FOR MYSEQUENCE,...)
Each NEXT VALUE will advance the sequence and so all ids will be unique across all tables.
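For example, with two of the item tables (the table and column names are hypothetical):

```sql
INSERT INTO BOOKS (ID, TITLE) VALUES (NEXT VALUE FOR MYSEQUENCE, 'SQL Basics');
INSERT INTO GAMES (ID, NAME)  VALUES (NEXT VALUE FOR MYSEQUENCE, 'Chess');
-- the two rows get distinct ids drawn from the shared sequence
```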
If you want a new record to be inserted into all 5 tables when you insert something into one of them, then you can create a trigger for this.
It may also be helpful to create foreign keys on the id columns in the other tables (to keep the tables in sync).
Just a quick question about locking tables in a Postgres database using JDBC. I have a table to which I want to add a new record; for the primary key, I use an increasing integer value.
I want to be able to retrieve the max value of this column in Java and store it as a variable to be used as the new primary key when adding a new row.
This gives me a small problem: as this is going to be modelled as a multi-user system, what happens when 2 locations request the same max value? This will of course create a problem when both try to add the same primary key.
I realise that I should be using an EXCLUSIVE lock on the table to prevent reading or writing while getting the key and adding a new row. However, I can't seem to find any way to deal with table locking in JDBC, just standard transactions.
Pseudocode as such:
primaryKey = "SELECT MAX(id) FROM table1;"
primaryKey++;
// id may be retrieved again by a 2nd client here
"INSERT INTO table1 VALUES (primaryKey, value1, value2);"
You're absolutely right, if two locations request at around the same time, you'll run into a race condition.
The way to handle this is to create a sequence in Postgres and select its nextval as the primary key.
I don't know exactly what direction you're heading in or how you handle your data, but you could also declare the column as serial and not include it in your insert query at all. The column will then auto-increment automatically.
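A minimal sketch of both variants (table and column names hypothetical):

```sql
-- explicit sequence
CREATE SEQUENCE table1_id_seq;
INSERT INTO table1 (id, value1, value2)
VALUES (nextval('table1_id_seq'), 'a', 'b');

-- serial column: the id is generated by the database
CREATE TABLE table2 (id SERIAL PRIMARY KEY, value1 TEXT);
INSERT INTO table2 (value1) VALUES ('a') RETURNING id;
```

RETURNING hands the generated id back to the Java side in the same round trip, so no SELECT MAX and no table lock is needed; concurrent sessions can never draw the same value from a sequence.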
Sorry if my question is not specific or has been answered before. I tried looking for it and for a better way to ask it, but this is the most accurate way I can put it.
I have developed a program in Java in which I insert a new row into my database in the following way:
INSERT INTO table_name VALUES (?,?,?)
The thing is that I have this query in many parts of the program, and now I've decided to add a fourth column to my table. Do I have to update EVERY SINGLE query in the program with a new question mark? If I don't, it crashes.
What is the best way to proceed in these cases?
YES.
You need to add an extra ? (parameter placeholder) because you are using an implicit INSERT statement. That means you didn't specify the column names of the table into which the values will be inserted.
INSERT INTO table_name VALUES (?,?,?)
// the server assumes that you are inserting values for all
// columns in your table
// if you fail to supply a value for one column, an exception will be thrown
The next time you create an INSERT statement, make sure that you specify the column names in it; then, when you alter the table by adding an extra column, you won't have to update all your placeholders.
INSERT INTO table_name (Col1, col2, col3) VALUES (?,?,?)
// the server knows that you are inserting values for specific columns only
Do I have to update EVERY SINGLE query with a new question mark in the program?
Probably. What you should do, while you're updating every single one of those queries, is to encapsulate them into an object, probably using a Data Source pattern such as a Table Data Gateway or a Row Data Gateway. That way you Don't Repeat Yourself and the next time you update the table, you only have one place to update the query.
Because of the syntax you've used, you might run into some issues. I'm referring to the lack of column names: your INSERT queries will start failing as soon as you change your table structure.
If you had used the following syntax:
INSERT INTO table_name (C1, C2, C3) VALUES (?,?,?)
then, assuming your new column has a proper default value, it would've worked fine.
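For instance (the column name and default are hypothetical), if the new column is added with a default:

```sql
ALTER TABLE table_name ADD COLUMN C4 INT NOT NULL DEFAULT 0;

-- the old three-column statement keeps working;
-- C4 is filled in with its default
INSERT INTO table_name (C1, C2, C3) VALUES (?, ?, ?);
```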