sql-server jdbc4 drivers don't list all columns? - java

I have a (SQL Server) column which has a System Type but no Data Type (there is actually more than one such column). Using the Microsoft JDBC4 drivers and listing all the columns of this table via metadata, the column with the missing Data Type doesn't show up:
ResultSet rs = connection.getMetaData().getColumns(null, "schema", "tableName", null);
while (rs.next()) {
    String column = rs.getString("COLUMN_NAME");
    System.out.println("column:" + column); // This will not print the column
}
If you use the ODBC drivers, for example from C#, the column with the missing Data Type does show up. And if this column is part of a constraint, it will show up when using the JDBC API for listing all constraints:
ResultSet rs = connection.getMetaData().getIndexInfo(null,
        "schema",
        "tableName",
        true /* list unique */,
        true);
while (rs.next()) {
    String column = rs.getString("COLUMN_NAME");
    System.out.println("column:" + column); // This will print the column, if it is a unique constraint.
}
This behavior is the same regardless of whether you use the JDBC4 drivers or the jTDS drivers.
So my question is: is this a bug, or is it something that I'm missing? And is it possible to list the metadata in another way to get all the columns for a table?
To create a table which doesn't display a Data Type, you need a user who doesn't have read access to the user-defined data types. This is how you can reproduce it (I found this code here):
CREATE TYPE TEST_TYPE2 FROM [int] NOT NULL
GO
CREATE TABLE Customer
(
    CustomerID INT IDENTITY(1,1) NOT NULL,
    LastName VARCHAR(100) NOT NULL,
    FirstName VARCHAR(100) NOT NULL,
    ZIP TEST_TYPE2 NOT NULL
)
GO
-- Create Table without UDDT
CREATE TABLE Customer_Orig
(
    CustomerID INT IDENTITY(1,1) NOT NULL,
    LastName VARCHAR(100) NOT NULL,
    FirstName VARCHAR(100) NOT NULL,
    ZIP INT NOT NULL
)
GO
-- Create User with db_datareader access to the database
USE [master]
GO
CREATE LOGIN testUser WITH PASSWORD=N'THE_PASSWORD',
    DEFAULT_DATABASE=[master],
    CHECK_EXPIRATION=OFF, CHECK_POLICY=OFF
GO
USE TEST_DATABASE
GO
CREATE USER testUser FOR LOGIN testUser
GO
EXEC sp_addrolemember N'db_datareader', N'testUser'
GO
I understand why it doesn't show up, but why does it work for the ODBC drivers?
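One alternative worth trying (my own sketch, not from the original post): instead of going through getColumns(), select zero rows from the table and read the column list from the ResultSetMetaData. That metadata comes from the query result itself rather than the catalog views, so it may not be subject to the same permission-based filtering; whether the hidden column actually shows up this way would need to be verified against the setup above.
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.SQLException;

public class ColumnLister {
    // Lists a table's columns by selecting zero rows and reading the
    // ResultSetMetaData of the (empty) result.
    static void printColumns(Connection connection, String schema, String table)
            throws SQLException {
        // The names are concatenated for brevity; in real code, validate them
        // first (e.g. against getTables()) to avoid SQL injection.
        String sql = "SELECT * FROM " + schema + "." + table + " WHERE 1 = 0";
        try (PreparedStatement ps = connection.prepareStatement(sql);
             ResultSet rs = ps.executeQuery()) {
            ResultSetMetaData md = rs.getMetaData();
            for (int i = 1; i <= md.getColumnCount(); i++) {
                System.out.println("column:" + md.getColumnName(i));
            }
        }
    }
}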

Related

Create a MySQL index on an already existing table with a lot of data

I have a Spring Boot service with a MySQL database (AWS RDS).
There is a specific table that contains around 2 million rows, and some queries on it drive the CPU of the database instance up.
I noticed that there isn't an index on the column being used, so I would like to try adding one.
The questions are:
Can I add this index (without any problems) to a table that already contains a lot of rows? I'm using Flyway to manage the db migrations.
The specific column contains strings; are there other index configurations that would be better for this scenario?
Some additional info:
MySQL version is 5.7.33;
The table, at the moment, does not have any other relationships;
The table is very simple and is reported below:
CREATE TABLE IF NOT EXISTS info(
    field_1 varchar(36) NOT NULL,
    field_2 text DEFAULT NULL,
    my_key varchar(36) DEFAULT NULL,
    field_3 varchar(255) DEFAULT NULL,
    field_4 varchar(10) DEFAULT NULL,
    field_5 varchar(10) DEFAULT NULL,
    field_6 varchar(10) DEFAULT NULL,
    field_7 varchar(36) NOT NULL,
    creation_date datetime DEFAULT NULL,
    modification_date datetime DEFAULT NULL,
    PRIMARY KEY (field_1)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
The table currently contains around 2 million rows;
The query is something like:
SELECT * FROM info WHERE my_key = "xxxx"
and it will be executed a lot of times
The idea is to create this index:
CREATE INDEX my_key ON info (my_key);
With more recent versions of MySQL you can create an index without locking the table:
The table remains available for read and write operations while the index is being created. The CREATE INDEX statement only finishes after all transactions that are accessing the table are completed, so that the initial state of the index reflects the most recent contents of the table.
Obviously, creating an index is extra work for the database, so if your database is already struggling, try to create the index during a period of lower activity on the db.
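With Flyway this can be a plain CREATE INDEX statement in a versioned migration file. As a sketch of the same statement over raw JDBC (connection details and the index name are made up), with the online-DDL options spelled out so that MySQL fails fast instead of taking a lock if the in-place path is ever unavailable:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class CreateIndexOnline {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/mydb", "user", "password");
             Statement st = conn.createStatement()) {
            // On MySQL 5.7 InnoDB, adding a secondary index is an online
            // operation; LOCK=NONE makes the statement raise an error rather
            // than silently block writes if that isn't possible.
            st.execute("CREATE INDEX idx_info_my_key ON info (my_key) "
                    + "ALGORITHM=INPLACE LOCK=NONE");
        }
    }
}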

SQLite JDBC - How to populate new Database

I was given two databases, and am supposed to create a table in a new database that stores information about the given databases.
So far I have created a table in a new database and defined its attributes, but I am stuck on how to populate this table.
The attributes are things like 'original_db', 'original_field', etc., but I don't know how to access this information. Especially since I would need to connect JDBC to 3 databases (the new one and the 2 old ones) at the same time. Is this even possible?
I am new to working with databases and SQLite, so sorry if this is a stupid problem.
I would be so grateful for any advice!
What I can grasp from your question is that you don't know how to insert data into the table?
You could do this in your schema.sql when creating the db, e.g. the schema would look something like:
DROP TABLE IF EXISTS users;
CREATE TABLE users(id integer primary key, username varchar(30), password varchar(30), email varchar(50), receive_email blob, gender varchar(6));
DROP TABLE IF EXISTS lists;
CREATE TABLE lists(id integer primary key, user_id integer, list_id integer, name varchar(50), creation_date datetime, importance integer, completed blob);
DROP TABLE IF EXISTS list_item;
CREATE TABLE list_item(id integer primary key, list_id integer, name varchar(50), creation_date datetime, completed blob);
then you could make a data.sql file or something with
INSERT INTO users VALUES(NULL, 'name', ...);
and then run in the terminal
sqlite3 database.db < data.sql
or you could start a sqlite3 interactive session in the same working directory as your db and manually type in the commands,
e.g. type
sqlite3 databasename.db
and then enter commands from there
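On the other part of the question: holding three JDBC connections open at the same time is perfectly possible. A minimal sketch (the file names and the catalog table/columns are made up) that reads table names from the two given databases via DatabaseMetaData and stores them in the new one:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class PopulateCatalog {
    public static void main(String[] args) throws SQLException {
        // Three independent connections; JDBC has no problem with this.
        try (Connection oldDb1 = DriverManager.getConnection("jdbc:sqlite:old1.db");
             Connection oldDb2 = DriverManager.getConnection("jdbc:sqlite:old2.db");
             Connection newDb = DriverManager.getConnection("jdbc:sqlite:new.db");
             PreparedStatement insert = newDb.prepareStatement(
                     "INSERT INTO catalog(original_db, original_table) VALUES (?, ?)")) {
            copyTableNames(oldDb1, "old1.db", insert);
            copyTableNames(oldDb2, "old2.db", insert);
        }
    }

    // Reads the table names of one source database via its metadata and
    // inserts one catalog row per table into the new database.
    static void copyTableNames(Connection src, String dbName,
            PreparedStatement insert) throws SQLException {
        try (ResultSet rs = src.getMetaData().getTables(
                null, null, "%", new String[] { "TABLE" })) {
            while (rs.next()) {
                insert.setString(1, dbName);
                insert.setString(2, rs.getString("TABLE_NAME"));
                insert.executeUpdate();
            }
        }
    }
}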

SQLite in-memory database slows ResultSet next

I'm working with latitude and longitude as an index and compiled the R*Tree module into the SQLite database to increase performance. Further, I loaded the tables into an in-memory database. The surprising result was that the ResultSet next method slows down to 25-30 ms per call, instead of 1 ms when the data comes from the hard disk. Normally I expect about 250 entries in the result set.
First, a connection to the in-memory database is set up:
Class.forName("org.sqlite.JDBC");
Connection connection = DriverManager.getConnection("jdbc:sqlite::memory:");
Then the tables are copied from the hard disk into memory:
Statement s = connection.createStatement();
s.execute("ATTACH 'myDB.db' AS fs");
s.executeUpdate("CREATE VIRTUAL TABLE coordinates USING rtree(id, min_longitude, max_longitude, min_latitude, max_latitude)");
s.executeUpdate("INSERT INTO coordinates (id, min_longitude, max_longitude, min_latitude, max_latitude) "
        + "SELECT id, min_longitude, max_longitude, min_latitude, max_latitude FROM fs.coordinates");
s.executeUpdate("CREATE TABLE locations AS SELECT * FROM fs.locations");
s.execute("DETACH DATABASE fs");
The last step is to query the database and copy the result into an object.
final String sql = "SELECT * FROM locations, coordinates WHERE (locations.id = coordinates.id) AND ((min_latitude >= ? AND max_latitude <= ?) AND (min_longitude >= ? AND max_longitude <= ?))";
PreparedStatement ps = connection.prepareStatement(sql);
// calculate bounding rect and fill in the statement parameters
ResultSet rs = ps.executeQuery();
while (rs.next()) {
    // copy the stuff here
}
I tried a few things, such as making the connection read-only, using TYPE_FORWARD_ONLY and CONCUR_READ_ONLY, and increasing the fetch size, but rs.next() stays unimpressed.
Does anybody have an idea what's happening in memory and why rs.next() is so slow? How could I speed up the query?
The system uses the sqlite-jdbc 3.7.2 driver from org.xerial and Tomcat 6.
Update: Added the database schema
CREATE VIRTUAL TABLE coordinates USING rtree(
    id,
    min_latitude,
    max_latitude,
    min_longitude,
    max_longitude);
CREATE TABLE locations(
    id INTEGER UNIQUE NOT NULL,
    number_of_locations INTEGER DEFAULT 1,
    city_en TEXT DEFAULT NULL,
    zip TEXT DEFAULT NULL,
    street TEXT DEFAULT NULL,
    number TEXT DEFAULT NULL,
    loc_email TEXT DEFAULT NULL,
    loc_phone TEXT DEFAULT NULL,
    loc_fax TEXT DEFAULT NULL,
    loc_url TEXT DEFAULT NULL);
In the original database, the query first looks up coordinates with the R-tree index, and then looks up matching locations with the index (implied by UNIQUE) on the id column, as shown by this EXPLAIN QUERY PLAN output:
0|0|1|SCAN TABLE coordinates VIRTUAL TABLE INDEX 2:DaBbDcBd
0|1|0|SEARCH TABLE locations USING INDEX sqlite_autoindex_locations_1 (id=?)
In the in-memory database, the table schema has changed because:
A table created using CREATE TABLE AS has no PRIMARY KEY and no constraints of any kind.
This particular table now looks as follows:
CREATE TABLE locations(
    id INT,
    number_of_locations INT,
    city_en TEXT,
    zip TEXT,
    street TEXT,
    number TEXT,
    loc_email TEXT,
    loc_phone TEXT,
    loc_fax TEXT,
    loc_url TEXT
);
It is no longer possible to look up locations by ID efficiently, so the database does full table scans:
0|0|0|SCAN TABLE locations
0|1|1|SCAN TABLE coordinates VIRTUAL TABLE INDEX 1:
When copying databases, you should always use all the original CREATE TABLE/INDEX statements.
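Applied to the snippets in the question (reusing the connection set up there), a sketch of the corrected copy step: issue the full CREATE TABLE before an INSERT ... SELECT instead of using CREATE TABLE AS, so the implicit UNIQUE index on id survives the copy.
Statement s = connection.createStatement();
s.execute("ATTACH 'myDB.db' AS fs");
// Recreate locations with its original DDL, so the UNIQUE constraint
// (and thus the index on id) exists in the in-memory copy too.
s.executeUpdate("CREATE TABLE locations("
        + "id INTEGER UNIQUE NOT NULL, "
        + "number_of_locations INTEGER DEFAULT 1, "
        + "city_en TEXT DEFAULT NULL, "
        + "zip TEXT DEFAULT NULL, "
        + "street TEXT DEFAULT NULL, "
        + "number TEXT DEFAULT NULL, "
        + "loc_email TEXT DEFAULT NULL, "
        + "loc_phone TEXT DEFAULT NULL, "
        + "loc_fax TEXT DEFAULT NULL, "
        + "loc_url TEXT DEFAULT NULL)");
// INSERT ... SELECT only copies rows; unlike CREATE TABLE AS it does not
// define the table, so the constraints declared above are preserved.
s.executeUpdate("INSERT INTO locations SELECT * FROM fs.locations");
s.execute("DETACH DATABASE fs");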

How to programmatically transfer a lot of data between tables?

I have two tables: the first one has 14 million rows and the second one has 1.5 million.
So I wonder how I could transfer this data to the new tables to be normalized?
And how do I convert one type to another? For example, I have a field called 'year' whose type is varchar, but I want it to be an integer instead; how do I do that?
I thought about doing this with JDBC in a while loop from Java, but I think this is not efficient.
-- 1.5 million rows
CREATE TABLE dbo.directorsmovies
(
    movieid INT NULL,
    directorid INT NULL,
    dname VARCHAR (500) NULL,
    addition VARCHAR (1000) NULL
)
-- 14 million rows
CREATE TABLE dbo.movies
(
    movieid VARCHAR (20) NULL,
    title VARCHAR (400) NULL,
    mvyear VARCHAR (100) NULL,
    actorid VARCHAR (20) NULL,
    actorname VARCHAR (250) NULL,
    sex CHAR (1) NULL,
    as_character VARCHAR (1500) NULL,
    languages VARCHAR (1500) NULL,
    genres VARCHAR (100) NULL
)
And these are my new tables:
DROP TABLE actor
CREATE TABLE actor (
    id INT PRIMARY KEY IDENTITY,
    name VARCHAR(200) NOT NULL,
    sex VARCHAR(1) NOT NULL
)
DROP TABLE actor_character
CREATE TABLE actor_character(
    id INT PRIMARY KEY IDENTITY,
    character VARCHAR(100)
)
DROP TABLE director
CREATE TABLE director(
    id INT PRIMARY KEY IDENTITY,
    name VARCHAR(200) NOT NULL,
    addition VARCHAR(150)
)
DROP TABLE movie
CREATE TABLE movie(
    id INT PRIMARY KEY IDENTITY,
    title VARCHAR(200) NOT NULL,
    year INT
)
DROP TABLE language
CREATE TABLE language(
    id INT PRIMARY KEY IDENTITY,
    language VARCHAR (100) NOT NULL
)
DROP TABLE genre
CREATE TABLE genre(
    id INT PRIMARY KEY IDENTITY,
    genre VARCHAR(100) NOT NULL
)
DROP TABLE director_movie
CREATE TABLE director_movie(
    idDirector INT,
    idMovie INT,
    CONSTRAINT fk_director_movie_1 FOREIGN KEY (idDirector) REFERENCES director(id),
    CONSTRAINT fk_director_movie_2 FOREIGN KEY (idMovie) REFERENCES movie(id),
    CONSTRAINT pk_director_movie PRIMARY KEY(idDirector, idMovie)
)
DROP TABLE genre_movie
CREATE TABLE genre_movie(
    idGenre INT,
    idMovie INT,
    CONSTRAINT fk_genre_movie_1 FOREIGN KEY (idMovie) REFERENCES movie(id),
    CONSTRAINT fk_genre_movie_2 FOREIGN KEY (idGenre) REFERENCES genre(id),
    CONSTRAINT pk_genre_movie PRIMARY KEY (idMovie, idGenre)
)
DROP TABLE language_movie
CREATE TABLE language_movie(
    idLanguage INT,
    idMovie INT,
    CONSTRAINT fk_language_movie_1 FOREIGN KEY (idLanguage) REFERENCES language(id),
    CONSTRAINT fk_language_movie_2 FOREIGN KEY (idMovie) REFERENCES movie(id),
    CONSTRAINT pk_language_movie PRIMARY KEY (idLanguage, idMovie)
)
DROP TABLE movie_actor
CREATE TABLE movie_actor(
    idMovie INT,
    idActor INT,
    CONSTRAINT fk_movie_actor_1 FOREIGN KEY (idMovie) REFERENCES movie(id),
    CONSTRAINT fk_movie_actor_2 FOREIGN KEY (idActor) REFERENCES actor(id),
    CONSTRAINT pk_movie_actor PRIMARY KEY (idMovie, idActor)
)
UPDATE:
I'm using SQL Server 2008.
Sorry guys, I forgot to mention that they are different databases: the non-normalized one is called disciplinedb and my normalized one is called imdb.
Best regards,
Valter Henrique.
If both tables are in the same database, then the most efficient transfer is to do it all within the database, preferably by sending a single SQL statement to be executed there.
Any movement of data from the d/b server to somewhere else and then back to the d/b server is to be avoided unless there is a reason it can only be transformed off-server. If the destination is a different server, then this is much less of an issue.
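A sketch of what that can look like here (my own example, assuming disciplinedb and imdb live on the same SQL Server instance so three-part names work; the digits-only check is a simplification, since TRY_CAST only exists from SQL Server 2012 on):
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class ServerSideTransfer {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:sqlserver://localhost;databaseName=imdb", "user", "password");
             Statement st = conn.createStatement()) {
            // One statement, executed entirely on the server: copy distinct
            // movies and convert the varchar year to INT on the way. Only
            // all-digit values are cast; anything else becomes NULL.
            int rows = st.executeUpdate(
                    "INSERT INTO imdb.dbo.movie (title, [year]) "
                  + "SELECT DISTINCT title, "
                  + "       CASE WHEN mvyear NOT LIKE '%[^0-9]%' AND mvyear <> '' "
                  + "            THEN CAST(mvyear AS INT) END "
                  + "FROM disciplinedb.dbo.movies WHERE title IS NOT NULL");
            System.out.println("Inserted " + rows + " rows");
        }
    }
}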
Though my tables were dwarfs compared to yours, I got around this kind of problem once with stored procedures. For MySQL, below is a simplified (and untested) essence of my script, but something similar should work with all major SQL databases.
First you would just add a new integer year column (int_year in the example) and then iterate over all rows using the procedure below:
DROP PROCEDURE IF EXISTS move_data;
CREATE PROCEDURE move_data()
BEGIN
    DECLARE done INT DEFAULT 0;
    DECLARE orig_id INT DEFAULT 0;
    DECLARE orig_year VARCHAR(100) DEFAULT "";
    DECLARE cur1 CURSOR FOR SELECT id, year FROM table1;
    DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;
    OPEN cur1;
    PREPARE stmt FROM "UPDATE table1 SET int_year = ? WHERE id = ?";
    read_loop: LOOP
        FETCH cur1 INTO orig_id, orig_year;
        IF done THEN
            LEAVE read_loop;
        END IF;
        SET @year = orig_year;
        SET @id = orig_id;
        EXECUTE stmt USING @year, @id;
    END LOOP;
    CLOSE cur1;
END;
And to start the procedure, just CALL move_data().
The above SQL has two main ideas to speed it up:
Use CURSORS to iterate over a large table
Use a PREPARED statement to quickly execute a pre-parsed command
PS. In my case this sped things up from ages to seconds, though in your case it may still take a considerable amount of time. So it would probably be best to execute it from the command line, not some web interface (e.g. phpMyAdmin).
I just recently did this for ~150 GB of data. I used a pair of merge statements for each table. The first merge statement said "if it's not in the destination table, copy it there" and the second said "if it's in the destination table, delete it from the source". I put both in a while loop and only did 10000 rows in each operation at a time. Keeping it on the server (and not transferring it through a client) is going to be a huge boon for performance. Give it a shot!
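That answer used MERGE; a sketch of the same batching idea with plain INSERT/DELETE pairs (all table and column names here are made up for illustration):
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class BatchedTransfer {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:sqlserver://localhost;databaseName=imdb", "user", "password");
             Statement st = conn.createStatement()) {
            int inserted, deleted;
            do {
                // Copy at most 10000 rows that are not in the destination yet.
                inserted = st.executeUpdate(
                        "INSERT INTO dbo.target (id, payload) "
                      + "SELECT TOP 10000 s.id, s.payload FROM dbo.source s "
                      + "WHERE NOT EXISTS (SELECT 1 FROM dbo.target t WHERE t.id = s.id)");
                // Then remove up to 10000 already-copied rows from the source.
                deleted = st.executeUpdate(
                        "DELETE TOP (10000) FROM dbo.source "
                      + "WHERE EXISTS (SELECT 1 FROM dbo.target t WHERE t.id = dbo.source.id)");
            } while (inserted > 0 || deleted > 0);
        }
    }
}
Since each statement commits on its own under autocommit, the batches stay small and the transaction log doesn't balloon the way one giant statement would.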

How can I detect a SQL table's existence in Java?

How can I detect if a certain table exists in a given SQL database in Java?
You can use DatabaseMetaData.getTables() to get information about existing tables.
This method works transparently and is independent of the database engine. I think it queries information schema tables behind the scenes.
Edit:
Here is an example that prints all existing table names.
DatabaseMetaData md = connection.getMetaData();
ResultSet rs = md.getTables(null, null, "%", null);
while (rs.next()) {
    System.out.println(rs.getString(3));
}
Use java.sql.DatabaseMetaData.getTables(null, null, YOUR_TABLE, null). If the table exists, you will get a ResultSet with one record.
See DatabaseMetaData.getTables
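A small self-contained helper along those lines (class and method names are mine; note that, depending on the database, the name may have to match the stored case, e.g. all caps for Derby as mentioned further down):
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;

public class TableChecker {
    // Returns true if a table with the given name exists, using only the
    // vendor-independent JDBC metadata API.
    static boolean tableExists(Connection connection, String tableName)
            throws SQLException {
        try (ResultSet rs = connection.getMetaData().getTables(
                null, null, tableName, new String[] { "TABLE" })) {
            return rs.next();
        }
    }
}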
For ALL ANSI-compliant databases:
(MySQL, SQL Server 2005/2008, Oracle, PostgreSQL, SQLite, maybe others)
select 1 from information_schema.tables where table_name = #tableName
This is not a language-specific, but a database-specific problem. You'd query the metadata in the database for the existence of that particular object.
In SQL Server for instance:
SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[table]')
AND type in (N'U')
Write a query against the table/view that lists the tables (this differs by DB vendor), and call that from Java.
Googling information_schema.tables will help a lot.
Depending on the DB, you can do (MySQL)
SHOW TABLES
or (Oracle)
SELECT * FROM user_objects WHERE object_type = 'TABLE'
or another thing for SQL Server. Cycle through the results for MySQL or further filter on the Oracle one.
Why not just see if it is in sysobjects (for SQL Server)?
SELECT [name] FROM [sysobjects] WHERE type = 'U' AND [name] = 'TableName'
There is a JDBC feature, database vendor independent: see java.sql.DatabaseMetaData#getTables() (http://java.sun.com/javase/6/docs/api/java/sql/DatabaseMetaData.html#getTables(java.lang.String, java.lang.String, java.lang.String, java.lang.String[])).
You can get the DatabaseMetaData instance by calling java.sql.Connection#getMetaData()
This is what worked for me with jdbc:derby:
// Create Staff table if it does not exist yet
String tableName = "STAFF";
boolean exists = conn.getMetaData().getTables(null, null, tableName, null).next();
if (!exists) {
    Statement s = conn.createStatement();
    s.execute("create table staff(lastname varchar(30), firstname varchar(30), position varchar(20), salary double, age int)");
    System.out.println("Created table " + tableName);
}
Note that tableName has to be all caps.
For MS Access:
Select Count(*) From MSysObjects
Where type=1 And name='your_table_name_here'
