I am using SQLite with Java JDBC, and I created the following table:
private final String sqlTableNode = "CREATE TABLE "+this.TABLE_NODE+
" ( "+this.NODE_TABLE_ID_COL+" INTEGER, " +
this.NODE_TABLE_NODE_ID_COL+" TEXT NOT NULL, " +
this.NODE_TABLE_LAT_COL+" TEXT NOT NULL, " +
this.NODE_TABLE_LNG_COL+" TEXT NOT NULL, " +
"PRIMARY KEY ("+this.NODE_TABLE_LAT_COL+", "+this.NODE_TABLE_LNG_COL+") );";
My question is: how can I make the first column, "this.NODE_TABLE_PK_COL", auto-incremented while it is not a primary key?
Unfortunately, it's not possible. The SQLite documentation on Autoincrement says at the bottom of the page:
Because AUTOINCREMENT keyword changes the behavior of the ROWID selection algorithm, AUTOINCREMENT is not allowed on WITHOUT ROWID tables or on any table column other than INTEGER PRIMARY KEY. Any attempt to use AUTOINCREMENT on a WITHOUT ROWID table or on a column other than the INTEGER PRIMARY KEY column results in an error.
As the documentation shows, AUTOINCREMENT works only on an INTEGER PRIMARY KEY column.
@rmaik:
No, you can't. You can find the documentation here: https://www.sqlite.org/autoinc.html
It clearly states:
AUTOINCREMENT is not allowed on WITHOUT ROWID tables or on any table column other than INTEGER PRIMARY KEY. Any attempt to use AUTOINCREMENT on a WITHOUT ROWID table or on a column other than the INTEGER PRIMARY KEY column results in an error.
If I'm understanding your requirement correctly, what you can do in this scenario is create the primary key on
this.NODE_TABLE_ID_COL.
Since it is an INTEGER column, it fits the criteria for a primary key.
And add a UNIQUE constraint on the latitude/longitude pair, so the table keeps the uniqueness its old primary key enforced:
UNIQUE ("+this.NODE_TABLE_LAT_COL+", "+this.NODE_TABLE_LNG_COL+")
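A sketch of how the CREATE TABLE string from the question might then look (same field names as in the question; note that in SQLite an INTEGER PRIMARY KEY column auto-assigns values even without the AUTOINCREMENT keyword):
private final String sqlTableNode = "CREATE TABLE "+this.TABLE_NODE+
" ( "+this.NODE_TABLE_ID_COL+" INTEGER PRIMARY KEY AUTOINCREMENT, " +
this.NODE_TABLE_NODE_ID_COL+" TEXT NOT NULL, " +
this.NODE_TABLE_LAT_COL+" TEXT NOT NULL, " +
this.NODE_TABLE_LNG_COL+" TEXT NOT NULL, " +
"UNIQUE ("+this.NODE_TABLE_LAT_COL+", "+this.NODE_TABLE_LNG_COL+") );";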
Related
I have a pre-existing table, containing 'fname', 'lname', 'email', 'password' and 'ip'. But now I want an auto-increment column. However, when I enter:
ALTER TABLE users
ADD id int NOT NULL AUTO_INCREMENT
I get the following:
#1075 - Incorrect table definition; there can be only one auto column and it must be defined as a key
Any advice?:)
Try this (the auto-increment column must be defined as a key, so declare it as the primary key):
ALTER TABLE `users` ADD `id` INT NOT NULL AUTO_INCREMENT PRIMARY KEY;
If the table already has a primary key, use the UNIQUE variant shown in the next answer.
If you don't care whether the auto-id is used as PRIMARY KEY, you can just do
ALTER TABLE `myTable` ADD COLUMN `id` INT AUTO_INCREMENT UNIQUE FIRST;
I just did this and it worked a treat.
If you want to add AUTO_INCREMENT to an existing table, you need to run the following SQL command:
ALTER TABLE users ADD id int NOT NULL AUTO_INCREMENT primary key
First you have to remove the primary key of the table:
ALTER TABLE nametable DROP PRIMARY KEY
and now you can add the auto-increment:
ALTER TABLE nametable ADD id INT NOT NULL AUTO_INCREMENT PRIMARY KEY
Well, you must first drop the auto_increment and primary key you have and then add yours, as follows:
-- drop auto_increment capability
alter table `users` modify column id INT NOT NULL;
-- in one line, drop primary key and rebuild one
alter table `users` drop primary key, add primary key(id);
-- re-add the auto_increment capability; the last value is remembered
alter table `users` modify column id INT NOT NULL AUTO_INCREMENT;
If you run the following command:
ALTER TABLE users ADD id int NOT NULL AUTO_INCREMENT PRIMARY KEY;
you will get this error:
ERROR 1060 (42S21): Duplicate column name 'id'
That is because this command tries to add a new column named id, but the table already has one.
To modify the existing column you have to use the following command:
ALTER TABLE users MODIFY id int NOT NULL AUTO_INCREMENT PRIMARY KEY;
This should work for changing the existing column's constraints.
Delete the primary key of a table if it exists:
ALTER TABLE `tableName` DROP PRIMARY KEY;
Add an auto-increment column to the table:
ALTER TABLE `tableName` ADD `Column_name` INT PRIMARY KEY AUTO_INCREMENT;
Or modify the existing column that you want to use as the primary key:
alter table `tableName` modify column `Column_name` INT NOT NULL AUTO_INCREMENT PRIMARY KEY;
Just change ADD to MODIFY and it will work!
Replace
ALTER TABLE users ADD id int NOT NULL AUTO_INCREMENT
with
ALTER TABLE users MODIFY id int NOT NULL AUTO_INCREMENT;
Drop the primary index from the table:
ALTER TABLE `tableName` DROP INDEX `PRIMARY`;
Then add the id column (without a primary index). I have used a big int because I am going to have lots of data but INT(11) should work just as well:
ALTER TABLE `tableName` ADD COLUMN `id` BIGINT(11) NOT NULL FIRST;
Then modify the column to add auto-increment. It needs to be a primary key:
ALTER TABLE `tableName` MODIFY COLUMN `id` BIGINT(11) UNSIGNED PRIMARY KEY AUTO_INCREMENT;
I have just tried this on a table of mine and it appears to have worked.
ALTER TABLE users CHANGE id id INT(30) NOT NULL AUTO_INCREMENT;
The integer display width (30) is just based on my default SQL settings.
Have a nice day.
ALTER TABLE users ADD id int NOT NULL AUTO_INCREMENT primary key FIRST
For PostgreSQL, you have to use SERIAL instead of AUTO_INCREMENT:
ALTER TABLE your_table_name ADD COLUMN id SERIAL NOT NULL PRIMARY KEY
ALTER TABLE `table` ADD `id` INT NOT NULL AUTO_INCREMENT unique
Try this. No need to drop your primary key.
This SQL statement works for me:
ALTER TABLE users
CHANGE COLUMN `id` `id` INT(11) NOT NULL AUTO_INCREMENT ;
If you want to add an id with a primary key and identity:
ALTER TABLE user ADD id INT NOT NULL AUTO_INCREMENT FIRST , ADD PRIMARY KEY (id);
Check whether a primary key already exists on a different column. If so, drop that primary key using:
ALTER TABLE Table1
DROP CONSTRAINT PK_Table1_Col1
GO
and then write your query as before.
Proceed as follows:
Make a dump of your database first.
Remove the primary key:
ALTER TABLE yourtable DROP PRIMARY KEY
Add the new column:
ALTER TABLE yourtable ADD COLUMN Id INT NOT NULL AUTO_INCREMENT FIRST, ADD PRIMARY KEY (Id)
The table will be locked and the auto-increment values populated.
I am using Apache DBUtils
Long rowId = queryRunner.insert(sql, new ScalarHandler<Long>(), params);
My table schema is:
CREATE TABLE abc
(
userid bigint,
api_key text,
key_id integer NOT NULL DEFAULT nextval('api_keys_key_id_seq'::regclass),
CONSTRAINT api_keys_pkey PRIMARY KEY (key_id),
CONSTRAINT userid_fkey FOREIGN KEY (userid)
REFERENCES public.users (userid) MATCH SIMPLE
ON UPDATE NO ACTION ON DELETE NO ACTION
)
The problem is that rowId comes from the userid column, whereas the primary key of the table is key_id, and I want the id returned by the insert query to come from the key_id column.
The solution I followed was to recreate the table with key_id as the first column. That way key_id was returned in response to the insert query. There may be an option to configure this, but I was unable to find it.
I know recreating a table just for this is not a good approach, so if anyone has found a better answer, please do share.
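One workaround that avoids recreating the table is to do that single insert with plain JDBC and tell the driver explicitly which generated column to return; the column-name overload of prepareStatement is standard JDBC, and the PostgreSQL driver supports it by appending a RETURNING clause. A sketch, using the table and column names from the schema above:
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Sketch: ask the driver to return the key_id column for the inserted row,
// instead of whichever column it would report by default.
static long insertApiKey(Connection conn, long userId, String apiKey) throws SQLException {
    String sql = "INSERT INTO abc (userid, api_key) VALUES (?, ?)";
    try (PreparedStatement ps = conn.prepareStatement(sql, new String[] { "key_id" })) {
        ps.setLong(1, userId);
        ps.setString(2, apiKey);
        ps.executeUpdate();
        try (ResultSet keys = ps.getGeneratedKeys()) {
            keys.next();
            return keys.getLong("key_id"); // the generated key_id, not userid
        }
    }
}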
If you have an insert with an ON DUPLICATE KEY UPDATE clause, and there is a duplicate key, is there any way to get back the primary key that was duplicated? Or do I have to do my own manual query? As far as I can tell, getGeneratedKeys() (on the Statement interface) will not return it, as a new insert wasn't actually done.
EDIT
Sorry if it wasn't clear but I want to get the PRIMARY KEY of the record back.
So if I had the following table (excuse the syntax, just typing it freehand):
CREATE TABLE some_table(
id int(11) unsigned NOT NULL AUTO_INCREMENT,
value varchar(500)NOT NULL,
count int(10) unsigned NOT NULL DEFAULT '0',
PRIMARY KEY (id),
UNIQUE KEY (value)
);
INSERT INTO some_table (value) VALUES ('test') ON DUPLICATE KEY UPDATE count = count + 1;
If I were to add 'test' as the value, a new record would be added and the id would be returned by getGeneratedKeys();
If I were to attempt to add 'test' again, the key already exists, so the count would be updated. What I want is the primary key/id of the row that was updated. Do I have to check that I get no results back from getGeneratedKeys() - as none were generated - and do another select after the fact?
ON DUPLICATE KEY UPDATE count = count + 1, id = LAST_INSERT_ID(id)
Note: This shouldn't be necessary as of MySQL 5.5.
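With that extra assignment in place, the existing row's id comes back through the normal generated-keys mechanism. A minimal JDBC sketch against the some_table schema from the question (conn is assumed to be an open Connection to the MySQL database):
import java.sql.*;

// Sketch: upsert a value and read back the id of the inserted OR updated row.
static long upsertValue(Connection conn, String value) throws SQLException {
    String sql = "INSERT INTO some_table (value) VALUES (?) "
            + "ON DUPLICATE KEY UPDATE count = count + 1, id = LAST_INSERT_ID(id)";
    try (PreparedStatement ps = conn.prepareStatement(sql, Statement.RETURN_GENERATED_KEYS)) {
        ps.setString(1, value);
        ps.executeUpdate();
        try (ResultSet keys = ps.getGeneratedKeys()) {
            keys.next();
            return keys.getLong(1); // id of the new row, or of the row whose count was bumped
        }
    }
}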
I am using Hibernate Envers to audit some entities. I manually created the associated audit tables. However, I am having trouble determining what an audit table's primary key should be. For example, consider a fictional table designed to store customers:
CREATE TABLE CUSTOMER
(
CUSTOMER_ID INTEGER,
CUSTOMER_NAME VARCHAR(100),
PRIMARY KEY (CUSTOMER_ID)
)
And you create the audit table:
CREATE TABLE CUSTOMER_REVISION
(
REVISION_ID INTEGER,
REVISION_TYPE_ID INTEGER,
CUSTOMER_ID INTEGER,
CUSTOMER_NAME VARCHAR(100),
PRIMARY KEY (???)
)
Here were the options I considered:
Primary key: REVISION_ID
This cannot be the primary key because multiple entities of the same class may be modified during the same revision.
Primary key: (REVISION_ID, CUSTOMER_ID)
This seems more likely, but I'm not sure if Envers will insert multiple records per customer per revision.
Primary key: (REVISION_ID, REVISION_TYPE_ID, CUSTOMER_ID)
This seems like overkill, but it may be possible that Envers will insert different types of records (add, modify or delete) per customer per revision.
Primary key: A new column
Perhaps the primary key must simply be another column containing a synthetic primary key.
What is the true primary key of an audit table managed by Hibernate Envers?
Judging by the examples in the documentation, it appears that the primary key in my example would be (REVISION_ID, CUSTOMER_ID). Here is the example in the documentation:
create table Address (
id integer generated by default as identity (start with 1),
flatNumber integer,
houseNumber integer,
streetName varchar(255),
primary key (id)
);
create table Address_AUD (
id integer not null,
REV integer not null,
flatNumber integer,
houseNumber integer,
streetName varchar(255),
REVTYPE tinyint,
primary key (id, REV)
);
The primary key of the audit table is the combination of the original entity id (id) and the revision number (REV).
As the official documentation states, there can be at most one historic entry for a given entity instance at a given revision, which simply means that the combination of those two columns is unique.
I have two tables: the first one has 14 million rows and the second one has 1.5 million rows.
So I wonder how I could transfer this data to other tables so that it is normalized.
And how do I convert one type to another? For example, I have a field called 'year', but its type is varchar and I want it to be an integer instead; how do I do that?
I thought about doing this with JDBC in a while loop from Java, but I don't think that would be efficient.
// 1.5 million rows
CREATE TABLE dbo.directorsmovies
(
movieid INT NULL,
directorid INT NULL,
dname VARCHAR (500) NULL,
addition VARCHAR (1000) NULL
)
// 14 million rows
CREATE TABLE dbo.movies
(
movieid VARCHAR (20) NULL,
title VARCHAR (400) NULL,
mvyear VARCHAR (100) NULL,
actorid VARCHAR (20) NULL,
actorname VARCHAR (250) NULL,
sex CHAR (1) NULL,
as_character VARCHAR (1500) NULL,
languages VARCHAR (1500) NULL,
genres VARCHAR (100) NULL
)
And these are my new tables:
DROP TABLE actor
CREATE TABLE actor (
id INT PRIMARY KEY IDENTITY,
name VARCHAR(200) NOT NULL,
sex VARCHAR(1) NOT NULL
)
DROP TABLE actor_character
CREATE TABLE actor_character(
id INT PRIMARY KEY IDENTITY,
character VARCHAR(100)
)
DROP TABLE director
CREATE TABLE director(
id INT PRIMARY KEY IDENTITY,
name VARCHAR(200) NOT NULL,
addition VARCHAR(150)
)
DROP TABLE movie
CREATE TABLE movie(
id INT PRIMARY KEY IDENTITY,
title VARCHAR(200) NOT NULL,
year INT
)
DROP TABLE language
CREATE TABLE language(
id INT PRIMARY KEY IDENTITY,
language VARCHAR (100) NOT NULL
)
DROP TABLE genre
CREATE TABLE genre(
id INT PRIMARY KEY IDENTITY,
genre VARCHAR(100) NOT NULL
)
DROP TABLE director_movie
CREATE TABLE director_movie(
idDirector INT,
idMovie INT,
CONSTRAINT fk_director_movie_1 FOREIGN KEY (idDirector) REFERENCES director(id),
CONSTRAINT fk_director_movie_2 FOREIGN KEY (idMovie) REFERENCES movie(id),
CONSTRAINT pk_director_movie PRIMARY KEY(idDirector,idMovie)
)
DROP TABLE genre_movie
CREATE TABLE genre_movie(
idGenre INT,
idMovie INT,
CONSTRAINT fk_genre_movie_1 FOREIGN KEY (idMovie) REFERENCES movie(id),
CONSTRAINT fk_genre_movie_2 FOREIGN KEY (idGenre) REFERENCES genre(id),
CONSTRAINT pk_genre_movie PRIMARY KEY (idMovie, idGenre)
)
DROP TABLE language_movie
CREATE TABLE language_movie(
idLanguage INT,
idMovie INT,
CONSTRAINT fk_language_movie_1 FOREIGN KEY (idLanguage) REFERENCES language(id),
CONSTRAINT fk_language_movie_2 FOREIGN KEY (idMovie) REFERENCES movie(id),
CONSTRAINT pk_language_movie PRIMARY KEY (idLanguage, idMovie)
)
DROP TABLE movie_actor
CREATE TABLE movie_actor(
idMovie INT,
idActor INT,
CONSTRAINT fk_movie_actor_1 FOREIGN KEY (idMovie) REFERENCES movie(id),
CONSTRAINT fk_movie_actor_2 FOREIGN KEY (idActor) REFERENCES actor(id),
CONSTRAINT pk_movie_actor PRIMARY KEY (idMovie,idActor)
)
UPDATE:
I'm using SQL Server 2008.
Sorry guys, I forgot to mention that these are two different databases:
the non-normalized one is called disciplinedb and my normalized one is called imdb.
Best regards,
Valter Henrique.
If both tables are in the same database, then the most efficient transfer is to do it all within the database, preferably by sending a SQL statement to be executed there.
Any movement of data from the d/b server to somewhere else and then back to the d/b server is to be avoided unless there is a reason it can only be transformed off-server. If the destination is a different server, then this is much less of an issue.
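In this case, for example, a single server-side INSERT ... SELECT sent over JDBC can populate the movie table and convert the year column in one pass. A sketch, assuming disciplinedb and imdb live on the same SQL Server 2008 instance and the login can read and write both (the JDBC URL and credentials are placeholders; the ISNUMERIC guard is one simple way to skip non-numeric year values):
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

// Sketch: one set-based statement executed entirely on the server, so none of
// the 14 million rows travel through the JDBC client.
public class MovieTransfer {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:sqlserver://localhost;databaseName=imdb"; // placeholder
        try (Connection conn = DriverManager.getConnection(url, "user", "password");
             Statement stmt = conn.createStatement()) {
            int rows = stmt.executeUpdate(
                "INSERT INTO imdb.dbo.movie (title, [year]) " +
                "SELECT DISTINCT title, " +
                "       CASE WHEN ISNUMERIC(mvyear) = 1 THEN CAST(mvyear AS INT) ELSE NULL END " +
                "FROM disciplinedb.dbo.movies " +
                "WHERE title IS NOT NULL");
            System.out.println(rows + " movie rows transferred");
        }
    }
}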
Though my tables were dwarfs compared to yours, I once got around this kind of problem with stored procedures. For MySQL, below is a simplified (and untested) essence of my script, but something similar should work with all major SQL databases.
First you should just add a new integer year column (int_year in the example) and then iterate over all rows using the procedure below:
DROP PROCEDURE IF EXISTS move_data;
CREATE PROCEDURE move_data()
BEGIN
DECLARE done INT DEFAULT 0;
DECLARE orig_id INT DEFAULT 0;
DECLARE orig_year VARCHAR(100) DEFAULT "";
DECLARE cur1 CURSOR FOR SELECT id, year FROM table1;
DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;
OPEN cur1;
PREPARE stmt FROM "UPDATE table1 SET int_year = ? WHERE id = ?";
read_loop: LOOP
FETCH cur1 INTO orig_id, orig_year;
IF done THEN
LEAVE read_loop;
END IF;
SET @year = orig_year;
SET @id = orig_id;
EXECUTE stmt USING @year, @id;
END LOOP;
CLOSE cur1;
END;
And to start the procedure, just CALL move_data().
The above SQL has two major ideas to speed it up:
Use CURSORS to iterate over a large table
Use PREPARED statement to quickly execute pre-known commands
PS: in my case this sped things up from ages to seconds, though in your case it may still take a considerable amount of time. So it would probably be best to execute it from the command line, not from a web interface (e.g. phpMyAdmin).
I just recently did this for ~150 GB of data. I used a pair of MERGE statements for each table. The first merge statement said "if it's not in the destination table, copy it there", and the second said "if it's in the destination table, delete it from the source". I put both in a while loop and only did 10000 rows in each operation at a time. Keeping it on the server (and not transferring it through a client) is going to be a huge boon for performance. Give it a shot!
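A rough sketch of that loop over JDBC, for one table (the MERGE ... TOP syntax is SQL Server specific; dbo.movies_src, dbo.movies_dest and the column list are placeholders to adapt to your schema):
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

// Sketch: move rows in batches of 10000 until neither statement affects any rows.
static void moveInBatches(Connection conn) throws SQLException {
    try (Statement stmt = conn.createStatement()) {
        int copied, deleted;
        do {
            // "if it's not in the destination table, copy it there"
            copied = stmt.executeUpdate(
                "MERGE TOP (10000) INTO dbo.movies_dest AS d " +
                "USING dbo.movies_src AS s ON d.movieid = s.movieid " +
                "WHEN NOT MATCHED BY TARGET THEN INSERT (movieid, title) VALUES (s.movieid, s.title);");
            // "if it's in the destination table, delete it from the source"
            deleted = stmt.executeUpdate(
                "MERGE TOP (10000) INTO dbo.movies_src AS s " +
                "USING dbo.movies_dest AS d ON s.movieid = d.movieid " +
                "WHEN MATCHED THEN DELETE;");
        } while (copied > 0 || deleted > 0);
    }
}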