If you have an INSERT with an ON DUPLICATE KEY UPDATE clause and there is a duplicate key, is there any way to get back the primary key of the row that was duplicated? Or do I have to do my own manual query? As far as I can tell, getGeneratedKeys() from the CallableStatement class will not return it, as a new insert wasn't actually performed.
EDIT
Sorry if it wasn't clear but I want to get the PRIMARY KEY of the record back.
So if I were to have the following table (excuse the syntax, I'm just typing it freehand):
CREATE TABLE some_table(
id int(11) unsigned NOT NULL AUTO_INCREMENT,
value varchar(500) NOT NULL,
count int(10) unsigned NOT NULL DEFAULT '0',
PRIMARY KEY (id),
UNIQUE KEY (value)
);
INSERT INTO some_table (value) VALUES ('test') ON DUPLICATE KEY UPDATE count = count + 1;
If I were to add 'test' as the value, a new record would be added and its id would be returned by getGeneratedKeys().
If I were to attempt to add 'test' again, the key already exists, so the count would be updated. What I want is the primary key/id of the row that was updated. Do I have to check that I get no results back from getGeneratedKeys() - as none were generated - and do another select after the fact?
ON DUPLICATE KEY UPDATE count = count + 1, id = LAST_INSERT_ID(id)
Note: This shouldn't be necessary as of MySQL 5.5.
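Either way, with the LAST_INSERT_ID(id) trick in place, getGeneratedKeys() should return the id of the affected row whether it was inserted or updated. A minimal JDBC sketch (a hypothetical helper, java.sql imports omitted, assuming an open Connection to the schema above):
// Sketch: with the LAST_INSERT_ID(id) trick above, getGeneratedKeys() returns
// the id of the row that was inserted OR of the existing row whose count was bumped.
static long upsertAndGetId(Connection conn, String value) throws SQLException {
    String sql = "INSERT INTO some_table (value) VALUES (?) "
               + "ON DUPLICATE KEY UPDATE count = count + 1, id = LAST_INSERT_ID(id)";
    try (PreparedStatement ps = conn.prepareStatement(sql, Statement.RETURN_GENERATED_KEYS)) {
        ps.setString(1, value);
        ps.executeUpdate();
        try (ResultSet keys = ps.getGeneratedKeys()) {
            keys.next();              // one row: the generated/echoed id
            return keys.getLong(1);
        }
    }
}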
Related
I have a table with a composite primary key, where the combination of sensor and subsystem should be unique. On each insert I want to delete the existing entries for the given mid in a single transaction. I am using Hibernate and Postgres. Every time I try to save, I get a duplicate key violation even though I delete first, because it all happens in the same transaction. Please suggest a convenient solution (I don't want to introduce any temp tables or anything like that).
PS: ON CONFLICT will not serve the purpose in this case, as I first want to remove all the existing entries for the mid.
CREATE TABLE public.abc (
id int8 NOT NULL DEFAULT nextval('abc_id_seq'::regclass),
mid int8 NOT NULL,
nsid int8 NOT NULL,
sensor text NOT NULL,
subsystem text NOT NULL,
mapped_by_user text NOT NULL,
creation_time timestamp NOT NULL,
modification_time timestamp NOT NULL,
obj_version int8 NOT NULL DEFAULT 0,
CONSTRAINT abc_pkey PRIMARY KEY (sensor,subsystem),
CONSTRAINT fk_abckey FOREIGN KEY (mid,nsid) REFERENCES public.def(mid,nsid)
)
abcRepository.delete(nid);
abcRepository.saveAndFlush(entity);
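One common workaround, offered here only as a sketch: flush the delete before saving, so the DELETE reaches Postgres ahead of the INSERTs even though both run in the same transaction. The repository method names below (deleteByMid, replaceForMid) are illustrative, not from the original code:
// Sketch only: force Hibernate to flush the delete before the new inserts.
@Transactional
public void replaceForMid(long mid, List<Abc> newEntries) {
    abcRepository.deleteByMid(mid);   // remove all existing entries for this mid
    abcRepository.flush();            // push the DELETE to the database now
    abcRepository.saveAll(newEntries);
}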
I have a pre-existing table, containing 'fname', 'lname', 'email', 'password' and 'ip'. But now I want an auto-increment column. However, when I enter:
ALTER TABLE users
ADD id int NOT NULL AUTO_INCREMENT
I get the following:
#1075 - Incorrect table definition; there can be only one auto column and it must be defined as a key
Any advice? :)
Try this:
ALTER TABLE `users` ADD `id` INT NOT NULL AUTO_INCREMENT PRIMARY KEY;
If the table already has a primary key and you don't care whether the auto-id is used as the PRIMARY KEY, you can just do
ALTER TABLE `myTable` ADD COLUMN `id` INT AUTO_INCREMENT UNIQUE FIRST;
I just did this and it worked a treat.
If you want to add AUTO_INCREMENT to an existing table, you need to run the following SQL command:
ALTER TABLE users ADD id int NOT NULL AUTO_INCREMENT primary key
First you have to remove the primary key of the table
ALTER TABLE nametable DROP PRIMARY KEY
and now you can add the auto-increment column:
ALTER TABLE nametable ADD id INT NOT NULL AUTO_INCREMENT PRIMARY KEY
Well, you must first drop the auto_increment and primary key you have and then add yours, as follows:
-- drop auto_increment capability
alter table `users` modify column id INT NOT NULL;
-- in one line, drop primary key and rebuild one
alter table `users` drop primary key, add primary key(id);
-- re-add the auto_increment capability; the last value is remembered
alter table `users` modify column id INT NOT NULL AUTO_INCREMENT;
If you run the following command:
ALTER TABLE users ADD id int NOT NULL AUTO_INCREMENT PRIMARY KEY;
it will show you the error:
ERROR 1060 (42S21): Duplicate column name 'id'
This is because the command tries to add a new column named id, which already exists in the table.
To modify the existing column you have to use the following command:
ALTER TABLE users MODIFY id int NOT NULL AUTO_INCREMENT PRIMARY KEY;
This should work for changing the existing column's definition.
Delete the primary key of a table if it exists:
ALTER TABLE `tableName` DROP PRIMARY KEY;
Adding an auto-increment column to a table:
ALTER TABLE `tableName` ADD `Column_name` INT PRIMARY KEY AUTO_INCREMENT;
Modify the column which we want to consider as the primary key:
alter table `tableName` modify column `Column_name` INT NOT NULL AUTO_INCREMENT PRIMARY KEY;
Just change ADD to MODIFY and it will work!
Replace
ALTER TABLE users ADD id int NOT NULL AUTO_INCREMENT
with
ALTER TABLE users MODIFY id int NOT NULL AUTO_INCREMENT;
Drop the primary index from the table:
ALTER TABLE `tableName` DROP INDEX `PRIMARY`;
Then add the id column (without a primary index). I have used a BIGINT because I am going to have lots of data, but INT(11) should work just as well:
ALTER TABLE `tableName` ADD COLUMN `id` BIGINT(11) NOT NULL FIRST;
Then modify the column with auto-increment (thanks php). It needs to be a primary key:
ALTER TABLE `tableName` MODIFY COLUMN `id` BIGINT(11) UNSIGNED PRIMARY KEY AUTO_INCREMENT;
I have just tried this on a table of mine and it appears to have worked.
ALTER TABLE users CHANGE id id INT(30) NOT NULL AUTO_INCREMENT
The integer width is just based on my default SQL settings.
Have a nice day.
ALTER TABLE users ADD id int NOT NULL AUTO_INCREMENT primary key FIRST
For PostgreSQL you have to use SERIAL instead of auto_increment.
ALTER TABLE your_table_name ADD COLUMN id SERIAL NOT NULL PRIMARY KEY
ALTER TABLE `table` ADD `id` INT NOT NULL AUTO_INCREMENT unique
Try this. No need to drop your primary key.
This SQL statement works for me:
ALTER TABLE users
CHANGE COLUMN `id` `id` INT(11) NOT NULL AUTO_INCREMENT ;
If you want to add an id with a primary key and identity:
ALTER TABLE user ADD id INT NOT NULL AUTO_INCREMENT FIRST, ADD PRIMARY KEY (id);
Check whether the table already has a primary key on a different column. If so, drop that primary key using:
ALTER TABLE Table1
DROP CONSTRAINT PK_Table1_Col1
GO
and then write your query as it is.
Proceed as follows:
Make a dump of your database first.
Remove the primary key:
ALTER TABLE yourtable DROP PRIMARY KEY
Add the new column:
ALTER TABLE yourtable ADD COLUMN Id INT NOT NULL AUTO_INCREMENT FIRST, ADD PRIMARY KEY (Id)
The table will be locked while the auto-increment values are filled in.
I am new to the Derby library. Why do I get this error when I use auto_increment in my query?
Here is my Java code:
this.conn.createStatement().execute("create table user(user_id int auto_increment, PRIMARY KEY(user_id))");
I tried this in MySQL and it works, but in Derby I get this error:
java.sql.SQLSyntaxErrorException: Syntax error: Encountered "auto_increment" at line 1
Why do I get this error?
Derby does not have auto_increment as a keyword. In Derby you need to use identity columns to implement auto-increment behaviour.
For example:
CREATE TABLE students
(
id INTEGER NOT NULL GENERATED ALWAYS AS IDENTITY (START WITH 1, INCREMENT BY 1),
name VARCHAR(24) NOT NULL,
address VARCHAR(1024),
CONSTRAINT primary_key PRIMARY KEY (id)
) ;
The above statement will create the students table with id as an auto-increment column and the primary key.
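If you also need the generated id back in Java, the standard JDBC generated-keys mechanism should work with Derby identity columns; a small sketch against the students table above (conn is an open java.sql.Connection, values are illustrative):
// Sketch: insert a row and read back the identity value Derby generated.
String sql = "INSERT INTO students (name, address) VALUES (?, ?)";
try (PreparedStatement ps = conn.prepareStatement(sql, Statement.RETURN_GENERATED_KEYS)) {
    ps.setString(1, "Alice");
    ps.setString(2, "Some Street 1");
    ps.executeUpdate();
    try (ResultSet keys = ps.getGeneratedKeys()) {
        if (keys.next()) {
            int newId = keys.getInt(1);   // the identity value Derby generated
        }
    }
}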
Hope this helps
I am working on a UI where the user clicks an Add button to add employees, but when I do it, I get an error like this:
com.mysql.jdbc.exceptions.jdbc4.MySQLIntegrityConstraintViolationException: Cannot add or update a child row: a foreign key constraint fails (`finalpayroll`.`personal_info`, CONSTRAINT `personal_info_ibfk_1`
How would I fix this? I know I am using a parent key, and its foreign key references the users table; also note that the parent table already has data, but it seems my query won't work. Why is that? I am using a foreign key with ON DELETE CASCADE and ON UPDATE CASCADE so that when I delete a row, all of the child table rows are deleted as well, and vice versa. Here is my code for the insert statement:
public void addEmployee(Personal p, Contact c, Employee e) {
    Connection conn = Jdbc.dbConn();
    Statement statement = null;
    String insert1 = "INSERT INTO personal_info (`First_Name`, `Middle_Initial`, `Last_Name`, `Date_Of_Birth`, `Marital_Status`, `Beneficiaries`) VALUES ('" + p.getFirstName() + "', '" + p.getMiddleInitial() + "'" +
            " , '" + p.getLastName() + "', '" + p.getDateOfBirth() + "', '" + p.getMaritalStatus() + "', '" + p.getBeneficiaries() + "')";
    try {
        statement = conn.createStatement();
        statement.executeUpdate(insert1);
        statement.close();
        conn.close();
        JOptionPane.showMessageDialog(null, "Employee Added!!");
    } catch (Exception ex) {
        ex.printStackTrace();
    }
}
Users table:
CREATE TABLE `users` (
`idusers` int(11) NOT NULL AUTO_INCREMENT,
`emp_id` varchar(45) DEFAULT NULL,
`emp_pass` varchar(45) DEFAULT NULL,
PRIMARY KEY (`idusers`)
) ENGINE=InnoDB AUTO_INCREMENT=3 DEFAULT CHARSET=latin1
Personal_info table:
CREATE TABLE `personal_info` (
`idpersonal_info` int(11) NOT NULL AUTO_INCREMENT,
`First_Name` varchar(45) DEFAULT NULL,
`Middle_Initial` varchar(45) DEFAULT NULL,
`Last_Name` varchar(45) DEFAULT NULL,
`Date_Of_Birth` varchar(45) DEFAULT NULL,
`Marital_Status` varchar(45) DEFAULT NULL,
`Beneficiaries` varchar(45) DEFAULT NULL,
PRIMARY KEY (`idpersonal_info`),
CONSTRAINT `personal_info_ibfk_1`
FOREIGN KEY (`idpersonal_info`)
REFERENCES `users` (`idusers`) ON DELETE CASCADE ON UPDATE CASCADE
) ENGINE=InnoDB AUTO_INCREMENT=4 DEFAULT CHARSET=latin1
You are trying to insert a record with 6 fields: First_Name, Middle_Initial, Last_Name, Date_Of_Birth, Marital_Status and Beneficiaries. None of these fields is a candidate foreign key to the id of the users table you mentioned, so I think there is a default value for that foreign key column, and that default value is missing in the users table.
Needless to say, you shouldn't have a default value for a foreign key in any table.
I am adding this information in response to your comments and the update to your question:
A foreign key is a link between a child table and a parent table, the personal_info and users tables in your case respectively. The child table's foreign key column must reference a key value in the parent table, which means that for every value in the child table's FK column there must be a matching value in the parent table's linked column.
Now, in your case, when you try to insert a new personal_info record, MySQL assigns an idpersonal_info to it, since you defined it as auto-increment. But since there is a link to the users table, MySQL looks for the newly assigned idpersonal_info in the users table's idusers column. And as you are getting this exception, you surely don't have that value in the users table.
You can change your table structure as follows:
CREATE TABLE `personal_info` (
`idpersonal_info` int(11) NOT NULL AUTO_INCREMENT,
`user_id` int(11) NOT NULL,
... OTHER FIELD DEFINITIONS,
PRIMARY KEY (`idpersonal_info`),
CONSTRAINT `user_id_fk_1` FOREIGN KEY (`user_id`) REFERENCES `users` (`idusers`) ON DELETE CASCADE ON UPDATE CASCADE
) ENGINE=InnoDB
And your query will need to include the user_id field as well, so it will be something like this:
INSERT INTO personal_info
(`user_id`, `First_Name`, `Middle_Initial`, `Last_Name`, `Date_Of_Birth`, `Marital_Status`, `Beneficiaries`)
VALUES ( .... SET YOUR VALUES HERE. DON'T FORGET TO SET A VALID USER_ID
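On the Java side the insert then has to supply that user_id too. Here is a sketch using a PreparedStatement (which also avoids building the SQL by string concatenation), assuming the restructured table above and a userId that already exists in users.idusers:
// Sketch: insert with an explicit, existing user_id (FK to users.idusers).
String sql = "INSERT INTO personal_info "
           + "(user_id, First_Name, Middle_Initial, Last_Name, Date_Of_Birth, Marital_Status, Beneficiaries) "
           + "VALUES (?, ?, ?, ?, ?, ?, ?)";
try (Connection conn = Jdbc.dbConn();
     PreparedStatement ps = conn.prepareStatement(sql)) {
    ps.setInt(1, userId);                 // must already exist in users.idusers
    ps.setString(2, p.getFirstName());
    ps.setString(3, p.getMiddleInitial());
    ps.setString(4, p.getLastName());
    ps.setString(5, p.getDateOfBirth());
    ps.setString(6, p.getMaritalStatus());
    ps.setString(7, p.getBeneficiaries());
    ps.executeUpdate();
}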
It looks like your personal_info table has a column that points to a column in another table (a foreign key) and is required (not nullable), and your insert isn't giving it a value. So one option would be to make that column nullable.
Or it could be the other way around, as @Konstantin Naryshkin is saying.
What the error means is that you are trying to insert, into a column with a foreign key, a value that is not present in the referenced table.
I assume that there is a user column that we are not seeing. Since you are not explicitly setting the value, I assume that it is getting a default. The default value is not in the parent table.
I have two tables: one has 14 million rows and the other has 1.5 million rows.
So I wonder how I could transfer this data into other tables to be normalized?
And how do I convert one type to another? For example, I have a field called 'year' whose type is varchar, but I want it to be an integer instead; how do I do that?
I thought about doing this with JDBC in a while loop from Java, but I don't think that would be efficient.
-- 1.5 million rows
CREATE TABLE dbo.directorsmovies
(
movieid INT NULL,
directorid INT NULL,
dname VARCHAR (500) NULL,
addition VARCHAR (1000) NULL
)
-- 14 million rows
CREATE TABLE dbo.movies
(
movieid VARCHAR (20) NULL,
title VARCHAR (400) NULL,
mvyear VARCHAR (100) NULL,
actorid VARCHAR (20) NULL,
actorname VARCHAR (250) NULL,
sex CHAR (1) NULL,
as_character VARCHAR (1500) NULL,
languages VARCHAR (1500) NULL,
genres VARCHAR (100) NULL
)
And these are my new tables:
DROP TABLE actor
CREATE TABLE actor (
id INT PRIMARY KEY IDENTITY,
name VARCHAR(200) NOT NULL,
sex VARCHAR(1) NOT NULL
)
DROP TABLE actor_character
CREATE TABLE actor_character(
id INT PRIMARY KEY IDENTITY,
character VARCHAR(100)
)
DROP TABLE director
CREATE TABLE director(
id INT PRIMARY KEY IDENTITY,
name VARCHAR(200) NOT NULL,
addition VARCHAR(150)
)
DROP TABLE movie
CREATE TABLE movie(
id INT PRIMARY KEY IDENTITY,
title VARCHAR(200) NOT NULL,
year INT
)
DROP TABLE language
CREATE TABLE language(
id INT PRIMARY KEY IDENTITY,
language VARCHAR (100) NOT NULL
)
DROP TABLE genre
CREATE TABLE genre(
id INT PRIMARY KEY IDENTITY,
genre VARCHAR(100) NOT NULL
)
DROP TABLE director_movie
CREATE TABLE director_movie(
idDirector INT,
idMovie INT,
CONSTRAINT fk_director_movie_1 FOREIGN KEY (idDirector) REFERENCES director(id),
CONSTRAINT fk_director_movie_2 FOREIGN KEY (idMovie) REFERENCES movie(id),
CONSTRAINT pk_director_movie PRIMARY KEY(idDirector,idMovie)
)
DROP TABLE genre_movie
CREATE TABLE genre_movie(
idGenre INT,
idMovie INT,
CONSTRAINT fk_genre_movie_1 FOREIGN KEY (idMovie) REFERENCES movie(id),
CONSTRAINT fk_genre_movie_2 FOREIGN KEY (idGenre) REFERENCES genre(id),
CONSTRAINT pk_genre_movie PRIMARY KEY (idMovie, idGenre)
)
DROP TABLE language_movie
CREATE TABLE language_movie(
idLanguage INT,
idMovie INT,
CONSTRAINT fk_language_movie_1 FOREIGN KEY (idLanguage) REFERENCES language(id),
CONSTRAINT fk_language_movie_2 FOREIGN KEY (idMovie) REFERENCES movie(id),
CONSTRAINT pk_language_movie PRIMARY KEY (idLanguage, idMovie)
)
DROP TABLE movie_actor
CREATE TABLE movie_actor(
idMovie INT,
idActor INT,
CONSTRAINT fk_movie_actor_1 FOREIGN KEY (idMovie) REFERENCES movie(id),
CONSTRAINT fk_movie_actor_2 FOREIGN KEY (idActor) REFERENCES actor(id),
CONSTRAINT pk_movie_actor PRIMARY KEY (idMovie,idActor)
)
UPDATE:
I'm using SQL Server 2008.
Sorry guys, I forgot to mention that these are different databases: the non-normalized one is called disciplinedb and my normalized one is called imdb.
Best regards,
Valter Henrique.
If both tables are in the same database, then the most efficient transfer is to do it all within the database, preferably by sending a SQL statement to be executed there.
Any movement of data from the DB server to somewhere else and then back to the DB server is to be avoided unless there is a reason it can only be transformed off-server. If the destination is a different server, then this is much less of an issue.
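For example, the movie table could be filled with one set-based statement that also handles the varchar-to-int conversion. A rough sketch sent over JDBC, assuming both databases live on the same SQL Server instance and that every mvyear value is a plain integer string (otherwise the CAST will fail; SQL Server 2008 has no TRY_CAST):
try (Connection conn = DriverManager.getConnection(
         "jdbc:sqlserver://localhost;databaseName=imdb;integratedSecurity=true");
     Statement stmt = conn.createStatement()) {
    // One server-side statement: the 14 million source rows never travel to the client.
    int rows = stmt.executeUpdate(
        "INSERT INTO imdb.dbo.movie (title, [year]) " +
        "SELECT DISTINCT m.title, CAST(m.mvyear AS INT) " +
        "FROM disciplinedb.dbo.movies m");
    System.out.println(rows + " movies copied");
}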
Though my tables were dwarfs compared to yours, I got over this kind of problem once with stored procedures. For MySQL, below is a simplified (and untested) essence of my script, but something similar should work with all major SQL databases.
First you should just add a new integer year column (int_year in the example) and then iterate over all rows using the procedure below:
DROP PROCEDURE IF EXISTS move_data;
CREATE PROCEDURE move_data()
BEGIN
DECLARE done INT DEFAULT 0;
DECLARE orig_id INT DEFAULT 0;
DECLARE orig_year VARCHAR(100) DEFAULT "";
DECLARE cur1 CURSOR FOR SELECT id, year FROM table1;
DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;
OPEN cur1;
PREPARE stmt FROM "UPDATE table1 SET int_year = ? WHERE id = ?";
read_loop: LOOP
FETCH cur1 INTO orig_id, orig_year;
IF done THEN
LEAVE read_loop;
END IF;
SET @year = orig_year;
SET @id = orig_id;
EXECUTE stmt USING @year, @id;
END LOOP;
CLOSE cur1;
END;
And to start the procedure, just CALL move_data().
The above SQL has two major ideas to speed it up:
Use CURSORS to iterate over a large table
Use PREPARED statement to quickly execute pre-known commands
PS: in my case this sped things up from ages to seconds, though in your case it may still take a considerable amount of time, so it would probably be best to execute it from the command line rather than from some web interface (e.g. phpMyAdmin).
I just recently did this for ~150 GB of data. I used a pair of MERGE statements for each table: the first said "if it's not in the destination table, copy it there", and the second said "if it's in the destination table, delete it from the source". I put both in a while loop and only did 10,000 rows in each operation at a time. Keeping it on the server (and not transferring it through a client) is going to be a huge boon for performance. Give it a shot!
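A rough sketch of that kind of loop driven from JDBC, using plain INSERT/DELETE batches instead of MERGE for brevity; src_table, dst_table, key_col and payload are purely illustrative placeholders, not the schema above:
// Copy 10,000 rows at a time, then delete the rows that have arrived,
// and stop once a full pass moves nothing. All work stays on the server.
String copySql =
    "INSERT INTO dst_table (key_col, payload) " +
    "SELECT TOP (10000) s.key_col, s.payload FROM src_table s " +
    "WHERE NOT EXISTS (SELECT 1 FROM dst_table d WHERE d.key_col = s.key_col)";
String deleteSql =
    "DELETE TOP (10000) FROM src_table " +
    "WHERE EXISTS (SELECT 1 FROM dst_table d WHERE d.key_col = src_table.key_col)";
try (Connection conn = DriverManager.getConnection(jdbcUrl);   // jdbcUrl: your SQL Server connection string
     Statement stmt = conn.createStatement()) {
    int moved;
    do {
        // executeUpdate returns the number of rows each batch touched.
        moved = stmt.executeUpdate(copySql) + stmt.executeUpdate(deleteSql);
    } while (moved > 0);
}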