Select fetch one result returning null - java

When I use this statement, it does not return any results:
Result<Record> result = (Result<Record>) jooq
.select()
.from("Employees")
.where(DSL.cast("FirstName", MySQLDataType.BINARY)
.eq(DSL.cast(firstName, MySQLDataType.BINARY)))
.fetchOne();
I want to select only one Result.
Structure:
--
-- Table structure for table `Employees`
--
CREATE TABLE IF NOT EXISTS `Employees` (
`id` int(11) NOT NULL,
`FirstName` varchar(100) NOT NULL,
`LastName` varchar(150) NOT NULL,
`Age` tinyint(4) NOT NULL DEFAULT '0'
) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=6 ;
The record I want to select is Jack; it exists, but the query returns null.
Example:
SELECT * FROM Employees WHERE FirstName = 'Jack';

This:
// Assuming this static import
import static org.jooq.impl.DSL.*;
cast("FirstName", MySQLDataType.BINARY)
Will generate the following SQL
-- With a bind variable:
CAST(? AS BINARY)
-- If you're inlining bind variables:
CAST('FirstName' AS BINARY)
So, this is not referring to your `FirstName` column, but to the 'FirstName' string value. What you really wanted to do is this:
cast(field(name("FirstName")), MySQLDataType.BINARY);
Which will generate
CAST(`FirstName` AS BINARY)
A general note on case sensitivity
If you're using backticks around table / column names in your DDL, you should also be aware of case sensitivity for your object names in jOOQ. Ideally, you'll use the DSL.name() method as shown above to create case-sensitive names. This also applies to your Employees table, which is currently added to your SQL statement case-insensitively:
from("Employees") // Generates a "plain SQL", case-insensitive table Employees
I recommend you write this instead:
from(name("Employees")) // Generates a case-sensitive table identifier `Employees`
The reason the plain-SQL version happens to work at all is that MySQL table names are not case-sensitive on Windows by default.
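Putting both fixes together, the whole query might look like the sketch below. It assumes jooq is a DSLContext and firstName is the search value, as in the question; also note that fetchOne() returns a single Record, so the cast to Result<Record> isn't needed:
// Assuming this static import (other imports omitted for brevity)
import static org.jooq.impl.DSL.*;
Record result = jooq
    .select()
    .from(name("Employees"))                                  // case-sensitive table identifier
    .where(cast(field(name("FirstName")), MySQLDataType.BINARY)
        .eq(cast(val(firstName), MySQLDataType.BINARY)))      // column and bind value, both cast to BINARY
    .fetchOne();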
Manual references
I suggest reading the jOOQ manual sections about "plain SQL" and "identifiers" to help clarify things:
http://www.jooq.org/doc/latest/manual/sql-building/plain-sql
http://www.jooq.org/doc/latest/manual/sql-building/names

Related

Creating a table in mysql with java named by the user?

The question is: is there a way to create a new table named by the user from a text field? I know it's a huge injection risk, but I really need new tables, and this will only be used offline. I tried
String newtable = jTextField1.getText();
PreparedStatement create = conn.prepareStatement("CREATE TABLE IF NOT EXISTS '"+newtable+"'(ID INTEGER NOT NULL AUTO_INCREMENT, IDapol INTEGER, ΗΜΕΡΟΜΗΝΙΑ DATE, ΕΣΟΔΑ DOUBLE, PRIMARY KEY(ID), CONSTRAINT IDapol FOREIGN KEY(IDapol) REFERENCES apol(IDapol)");
but I get an error saying: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near "1718"(ID INTEGER NOT NULL AUTO_INCREMENT, IDapol, INTEGER, ΗΜΕΡΟΜΗΝΙΑ' at line 1
1718 is the value of my textField1.
Any help I could use? Thanks
As per here: https://dev.mysql.com/doc/refman/5.7/en/identifiers.html , "Identifiers may begin with a digit but unless quoted may not consist solely of digits."
Also, currently your code is wide open for an SQL injection attack.
Table names are never quoted with single quotes ('). Either use backticks (`) or no quoting at all:
"CREATE TABLE IF NOT EXISTS " + newtable + " (...)
Also, since your table name consists solely of digits, it must be quoted!
CREATE TABLE IF NOT EXISTS `1718`
(ID INTEGER NOT NULL AUTO_INCREMENT,
IDapol INTEGER,
ΗΜΕΡΟΜΗΝΙΑ DATE,
ΕΣΟΔΑ DOUBLE,
PRIMARY KEY(ID),
CONSTRAINT IDapol
FOREIGN KEY(IDapol) REFERENCES apol(IDapol));
Give the table name in backticks (`), as in the example above.
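Since a table name can't be passed as a bind parameter, one way to reduce the injection risk is to validate the user input yourself before quoting it with backticks. A rough sketch (the validation pattern is illustrative, not from the question):
String newtable = jTextField1.getText().trim();
// Table names cannot be bound as ? parameters, so validate the name
// (here: letters, digits and underscores only) before quoting it in backticks.
if (!newtable.matches("[A-Za-z0-9_]+")) {
    throw new IllegalArgumentException("Invalid table name: " + newtable);
}
String sql = "CREATE TABLE IF NOT EXISTS `" + newtable + "` ("
        + "ID INTEGER NOT NULL AUTO_INCREMENT, "
        + "IDapol INTEGER, "
        + "ΗΜΕΡΟΜΗΝΙΑ DATE, "
        + "ΕΣΟΔΑ DOUBLE, "
        + "PRIMARY KEY (ID), "
        + "CONSTRAINT IDapol FOREIGN KEY (IDapol) REFERENCES apol (IDapol))";
try (Statement create = conn.createStatement()) {
    create.executeUpdate(sql);
}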

How to insert binary data within an insert query into PostgreSQL using JDBC

I have a table structure like below:
CREATE TABLE PUBLIC.STAFF(
STAFF_ID INT NOT NULL,
FIRST_NAME VARCHAR(45) NOT NULL,
LAST_NAME VARCHAR(45) NOT NULL,
ADDRESS_ID SMALLINT NOT NULL,
PICTURE BYTEA,
EMAIL VARCHAR(50),
STORE_ID INT NOT NULL,
ACTIVE BOOLEAN NOT NULL,
USERNAME VARCHAR(16) NOT NULL,
PASSWORD VARCHAR(40),
LAST_UPDATE TIMESTAMP NOT NULL
);
and I have a couple of insert queries in a script file or stored in an ArrayList, like:
INSERT INTO STAFF(STAFF_ID, FIRST_NAME, LAST_NAME, ADDRESS_ID, PICTURE, EMAIL, STORE_ID, ACTIVE, USERNAME, PASSWORD, LAST_UPDATE)
VALUES (
1,
'Mike',
'Hillyer',
3,
X'89504e470d0a1a0a0000000d4948445200000079000000750802000000e55ad965000000097048597300000ec300000ec301c76fa8640000200049444154789c4cbb7794246779ffbbf78f7b7ebe466177677772ce3d9d667aa67ba62776ce39545557ce3974ee9eb049ab95563922104142580830d10203061bb049064cb031d916c160100284505aedee4cdd3f16b8b7ce73de53f5d61f75cee7bcf53ccff7fb561dbb7ce9ad972fbdf5aecb6fbd74e7a3b75f7ef4ce7bde72e9ae375fbaffcd676ebff7e29d658c864812c0e90acec0040d123aad8a284f950906205810672b140d900226b218c713028f0a5c8',
'Mike.Hillyer@sakilastaff.com',
1,
TRUE,
'Mike',
'8cb2237d0679ca88db6464eac60da96345513964',
TIMESTAMP '2006-02-15 04:57:16.0'
);
When I try to insert the data into Postgres using a JDBC program, I get the following error:
ERROR: column "picture" is of type bytea but expression is of type bit
LINE 2: (1, 'Mike', 'Hillyer', 3, X'89504e470d0a1a0a0000000d49484452..
HINT: You will need to rewrite or cast the expression.
********** Error **********
ERROR: column "picture" is of type bytea but expression is of type bit
How to solve this issue using Java?
Try this:
E'\\x89504e470d0a1a0a0000000d4948445200000079000000750802000000e55ad965000000097048597300000ec300000ec301c76fa8640000200049444154789c4cbb7794246779ffbbf78f7b7ebe466177677772ce3d9d667aa67ba62776ce39545557ce3974ee9eb049ab95563922104142580830d10203061bb049064cb031d916c160100284505aedee4cdd3f16b8b7ce73de53f5d61f75cee7bcf53ccff7fb561dbb7ce9ad972fbdf5aecb6fbd74e7a3b75f7ef4ce7bde72e9ae375fbaffcd676ebff7e29d658c864812c0e90acec0040d123aad8a284f950906205810672b140d900226b218c713028f0a5c8'
Yes, when accessing from Java it is recommended to use a PreparedStatement. An example can be found here.
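A minimal JDBC sketch along those lines (assuming conn is an open java.sql.Connection and pictureBytes holds the image bytes; the PostgreSQL driver maps setBytes to bytea):
String sql = "INSERT INTO STAFF (STAFF_ID, FIRST_NAME, LAST_NAME, ADDRESS_ID, PICTURE, "
        + "EMAIL, STORE_ID, ACTIVE, USERNAME, PASSWORD, LAST_UPDATE) "
        + "VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)";
try (PreparedStatement ps = conn.prepareStatement(sql)) {
    ps.setInt(1, 1);
    ps.setString(2, "Mike");
    ps.setString(3, "Hillyer");
    ps.setInt(4, 3);
    ps.setBytes(5, pictureBytes);              // bound as bytea, no literal encoding needed
    ps.setString(6, "Mike.Hillyer@sakilastaff.com");
    ps.setInt(7, 1);
    ps.setBoolean(8, true);
    ps.setString(9, "Mike");
    ps.setString(10, "8cb2237d0679ca88db6464eac60da96345513964");
    ps.setTimestamp(11, java.sql.Timestamp.valueOf("2006-02-15 04:57:16.0"));
    ps.executeUpdate();
}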
But sometimes you need to do it in plain SQL. In that case you should use slightly different notation: just use \x at the beginning of the string, as follows (no need for any special prefix, but note that the x is lowercase):
'\x89504e470d0a1a0a0000000d4948445200000079000000750802000000e55ad965000000097048597300000ec300000ec301c76fa8640000200049444154789c4cbb7794246779ffbbf78f7b7ebe466177677772ce3d9d667aa67ba62776ce39545557ce3974ee9eb049ab95563922104142580830d10203061bb049064cb031d916c160100284505aedee4cdd3f16b8b7ce73de53f5d61f75cee7bcf53ccff7fb561dbb7ce9ad972fbdf5aecb6fbd74e7a3b75f7ef4ce7bde72e9ae375fbaffcd676ebff7e29d658c864812c0e90acec0040d123aad8a284f950906205810672b140d900226b218c713028f0a5c8'
Note that older versions of PostgreSQL only supported the escape format, which encodes arbitrary bytes in octal rather than hex. Also note that the escape format needs a ::bytea cast at the end.
Details can be found here (substitute version in the url to whichever you are using).
P.S. Obviously way too late for the original question, but this is where an online search lands. So, documenting it here.

Is it too much overhead if ResultSet could be used to getBinaryStream() but wasn't?

In mysql I have table
CREATE TABLE `articles_attachments` (
`id` INT UNSIGNED NOT NULL AUTO_INCREMENT ,
`name` VARCHAR(200) NOT NULL ,
`size` BIGINT NOT NULL ,
`article_id` BIGINT NOT NULL ,
`contents` LONGBLOB NOT NULL ,
PRIMARY KEY (`id`) ,
UNIQUE INDEX `id_UNIQUE` (`id` ASC) ,
UNIQUE INDEX `unique_file` (`article_id` ASC, `name` ASC),
INDEX `fk_article` (`article_id` ASC)
) ENGINE = InnoDB DEFAULT CHARACTER SET = utf8 COLLATE = utf8_general_ci;
In application code I often need to just list attachments without fetching their contents. So when I retrieve rows from that table, I don't want resources wasted serving the "contents" field.
The tricky part is that I use a custom library which does "SELECT * FROM articles_attachments", so the query returns all fields.
What I can easily do is override the RowMapper (from Spring JDBC) and simply not map the "contents" field (i.e. not call ResultSet.getBinaryStream).
Question: will that help avoid the resource waste? I don't want 100 streams to be opened when I retrieve 100 rows from the attachments table.
I did a couple of tests and it turns out the answer is: yes, you waste resources (specifically bandwidth) if the result set contains BLOB-type fields, even if you never call ResultSet.getBinaryStream.
I tested it with:
MySQL 5.6.20 & MariaDB 10.0.13
mysql-connector-java-5.1.19-bin.jar
HikariCP-java6-2.2.5.jar
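So to actually avoid the waste, the "contents" column has to be left out of the SELECT itself; skipping it in the RowMapper is not enough. A sketch of a listing query with Spring's JdbcTemplate (Attachment is a hypothetical DTO, not from the question):
List<Attachment> attachments = jdbcTemplate.query(
    "SELECT id, name, size, article_id FROM articles_attachments WHERE article_id = ?",
    (rs, rowNum) -> new Attachment(        // hypothetical DTO holding the metadata only
        rs.getLong("id"),
        rs.getString("name"),
        rs.getLong("size"),
        rs.getLong("article_id")),
    articleId);
// The LONGBLOB "contents" is never part of the result set, so it is never transferred.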

Running h2 in MODE=MySQL doesn't support MySQL dumps

I'm trying to embed H2 to test my MySQL application (integration test).
I added com.h2database:h2:1.3.170 via Maven and ran the following code:
public class InMemoryTest {
    @Test
    public void test() throws Exception {
        Class.forName("org.h2.Driver");
        Connection conn = DriverManager.getConnection(
            "jdbc:h2:mem:test;MODE=MySQL;IGNORECASE=TRUE;INIT=RUNSCRIPT FROM 'src/test/resources/test.sql'");
    }
}
which gives me the following Exception:
Syntax error in SQL statement "
CREATE TABLE IF NOT EXISTS ""usr_avatar"" (
""usr_avatar_id"" INT(11) NOT NULL AUTO_INCREMENT,
""usr_avatar_user_id"" INT(11) NOT NULL,
""usr_avatar_img"" BLOB NOT NULL,
PRIMARY KEY (""usr_avatar_id""),
UNIQUE KEY ""usr_avatar_id_UNIQUE"" (""usr_avatar_id""),
UNIQUE KEY ""usr_avatar_user_id_UNIQUE"" (""usr_avatar_user_id""),
KEY ""usr_user_id"" (""usr_avatar_user_id""),
KEY ""fk_user_id"" (""usr_avatar_user_id"")
) AUTO_INCREMENT[*]=1 ";
Apparently, the "AUTO_INCREMENT" causes this?
Since this is valid MySQL (I exported the dump from my real database using MySQL Workbench), I'm a bit confused since h2 claims to support MySQL?
Here are a few lines from the .sql:
DROP TABLE IF EXISTS `usr_avatar`;
CREATE TABLE IF NOT EXISTS "usr_avatar" (
"usr_avatar_id" int(11) NOT NULL AUTO_INCREMENT,
"usr_avatar_user_id" int(11) NOT NULL,
"usr_avatar_img" blob NOT NULL,
PRIMARY KEY ("usr_avatar_id"),
UNIQUE KEY "usr_avatar_id_UNIQUE" ("usr_avatar_id"),
UNIQUE KEY "usr_avatar_user_id_UNIQUE" ("usr_avatar_user_id"),
KEY "usr_user_id" ("usr_avatar_user_id"),
KEY "fk_user_id" ("usr_avatar_user_id")
) AUTO_INCREMENT=1 ;
DROP TABLE IF EXISTS `usr_restriction`;
CREATE TABLE IF NOT EXISTS "usr_restriction" (
"usr_restriction_id" int(11) NOT NULL AUTO_INCREMENT,
"usr_restriction_user_id" int(11) DEFAULT NULL,
"usr_restriction_ip" varchar(39) DEFAULT NULL,
"usr_restriction_valid_from" date NOT NULL,
"usr_restriction_valid_to" date DEFAULT NULL,
PRIMARY KEY ("usr_restriction_id"),
UNIQUE KEY "usr_restriction_id_UNIQUE" ("usr_restriction_id"),
KEY "user_id" ("usr_restriction_user_id"),
KEY "usr_user_id" ("usr_restriction_user_id")
) AUTO_INCREMENT=1 ;
What are my options? Should I export the dump with different software and force it to be plain SQL? Which software could do that? Or am I doing something wrong?
The problem is that H2 doesn't support AUTO_INCREMENT=1, which you have specified in the SQL statement. Try removing it. I don't think it's necessary for MySQL either.
The source SQL exported from MySQL has double quotes surrounding its identifiers. The first DROP statement also has back-ticks (`). But when H2 reports the error, it shows the identifiers surrounded by doubled double quotes. I think this is the problem.
Try a couple of things. First, take the back-tick in the DROP statement and convert it to single quotes. If that doesn't work, convert all of the double-quotes to single-quotes. If that doesn't work, remove all of the quotes.
I think H2 is trying to create tables with the double-quotes as a part of the actual table names/column names and this is causing it to bomb.
H2 doesn't support AUTO_INCREMENT=1.
Use this instead:
ALTER TABLE table_name ALTER COLUMN id RESTART WITH 1;
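If editing the dump by hand is impractical, another option is to strip the unsupported table option before handing the script to H2, for example with a small java.nio preprocessing step (the file paths here are illustrative):
// Sketch: remove MySQL-only "AUTO_INCREMENT=<n>" table options so H2 can run the dump.
Path source = Paths.get("src/test/resources/test.sql");
Path cleaned = Paths.get("target/test-classes/test-h2.sql");
String sql = new String(Files.readAllBytes(source), StandardCharsets.UTF_8);
sql = sql.replaceAll("AUTO_INCREMENT\\s*=\\s*\\d+", "");
Files.write(cleaned, sql.getBytes(StandardCharsets.UTF_8));
// Then point INIT=RUNSCRIPT FROM '...' at the cleaned file instead of the original dump.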

How to programmatically transfer a lot of data between tables?

I have two tables: the first has 14 million rows and the second has 1.5 million rows.
So I wonder how I could transfer this data into other tables to normalize it?
And how do I convert one type to another? For example, I have a field called 'year' whose type is varchar, but I want it to be an integer instead; how do I do that?
I thought about doing this with JDBC in a while loop from Java, but I don't think that would be efficient.
-- 1.5 million rows
CREATE TABLE dbo.directorsmovies
(
movieid INT NULL,
directorid INT NULL,
dname VARCHAR (500) NULL,
addition VARCHAR (1000) NULL
)
-- 14 million rows
CREATE TABLE dbo.movies
(
movieid VARCHAR (20) NULL,
title VARCHAR (400) NULL,
mvyear VARCHAR (100) NULL,
actorid VARCHAR (20) NULL,
actorname VARCHAR (250) NULL,
sex CHAR (1) NULL,
as_character VARCHAR (1500) NULL,
languages VARCHAR (1500) NULL,
genres VARCHAR (100) NULL
)
And these are my new tables:
DROP TABLE actor
CREATE TABLE actor (
id INT PRIMARY KEY IDENTITY,
name VARCHAR(200) NOT NULL,
sex VARCHAR(1) NOT NULL
)
DROP TABLE actor_character
CREATE TABLE actor_character(
id INT PRIMARY KEY IDENTITY,
character VARCHAR(100)
)
DROP TABLE director
CREATE TABLE director(
id INT PRIMARY KEY IDENTITY,
name VARCHAR(200) NOT NULL,
addition VARCHAR(150)
)
DROP TABLE movie
CREATE TABLE movie(
id INT PRIMARY KEY IDENTITY,
title VARCHAR(200) NOT NULL,
year INT
)
DROP TABLE language
CREATE TABLE language(
id INT PRIMARY KEY IDENTITY,
language VARCHAR (100) NOT NULL
)
DROP TABLE genre
CREATE TABLE genre(
id INT PRIMARY KEY IDENTITY,
genre VARCHAR(100) NOT NULL
)
DROP TABLE director_movie
CREATE TABLE director_movie(
idDirector INT,
idMovie INT,
CONSTRAINT fk_director_movie_1 FOREIGN KEY (idDirector) REFERENCES director(id),
CONSTRAINT fk_director_movie_2 FOREIGN KEY (idMovie) REFERENCES movie(id),
CONSTRAINT pk_director_movie PRIMARY KEY(idDirector,idMovie)
)
DROP TABLE genre_movie
CREATE TABLE genre_movie(
idGenre INT,
idMovie INT,
CONSTRAINT fk_genre_movie_1 FOREIGN KEY (idMovie) REFERENCES movie(id),
CONSTRAINT fk_genre_movie_2 FOREIGN KEY (idGenre) REFERENCES genre(id),
CONSTRAINT pk_genre_movie PRIMARY KEY (idMovie, idGenre)
)
DROP TABLE language_movie
CREATE TABLE language_movie(
idLanguage INT,
idMovie INT,
CONSTRAINT fk_language_movie_1 FOREIGN KEY (idLanguage) REFERENCES language(id),
CONSTRAINT fk_language_movie_2 FOREIGN KEY (idMovie) REFERENCES movie(id),
CONSTRAINT pk_language_movie PRIMARY KEY (idLanguage, idMovie)
)
DROP TABLE movie_actor
CREATE TABLE movie_actor(
idMovie INT,
idActor INT,
CONSTRAINT fk_movie_actor_1 FOREIGN KEY (idMovie) REFERENCES movie(id),
CONSTRAINT fk_movie_actor_2 FOREIGN KEY (idActor) REFERENCES actor(id),
CONSTRAINT pk_movie_actor PRIMARY KEY (idMovie,idActor)
)
UPDATE:
I'm using SQL Server 2008.
Sorry guys, I forgot to mention that these are different databases: the non-normalized one is called disciplinedb and my normalized one is called imdb.
Best regards,
Valter Henrique.
If both tables are in the same database, then the most efficient transfer is to do it all within the database, preferably by sending a SQL statement to be executed there.
Any movement of data from the d/b server to somewhere else and then back to the d/b server is to be avoided unless there is a reason it can only be transformed off-server. If the destination is a different server, then this is much less of an issue.
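For example, on SQL Server 2008 the copy and the varchar-to-int conversion of the year can be done in a single statement that is merely triggered from Java. A sketch only: it assumes the connection is opened against the normalized imdb database, uses a three-part name for the source in disciplinedb, and treats non-numeric years as NULL:
// Let the server do the copy and the type conversion in one statement.
String sql =
    "INSERT INTO movie (title, year) "
  + "SELECT DISTINCT title, "
  + "       CASE WHEN ISNUMERIC(mvyear) = 1 THEN CAST(mvyear AS INT) ELSE NULL END "
  + "FROM disciplinedb.dbo.movies";
try (Statement stmt = conn.createStatement()) {
    int copied = stmt.executeUpdate(sql);
    System.out.println("Copied " + copied + " movie rows");
}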
Though my tables were dwarfs compared to yours, I got over this kind of problem once with stored procedures. For MySQL, below is a simplified (and untested) essence of my script, but something similar should work with all major SQL databases.
First you should just add a new integer year column (int_year in the example) and then iterate over all rows using the procedure below:
DROP PROCEDURE IF EXISTS move_data;
CREATE PROCEDURE move_data()
BEGIN
  DECLARE done INT DEFAULT 0;
  DECLARE orig_id INT DEFAULT 0;
  DECLARE orig_year VARCHAR(100) DEFAULT '';
  DECLARE cur1 CURSOR FOR SELECT id, year FROM table1;
  DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;
  OPEN cur1;
  PREPARE stmt FROM 'UPDATE table1 SET int_year = ? WHERE id = ?';
  read_loop: LOOP
    FETCH cur1 INTO orig_id, orig_year;
    IF done THEN
      LEAVE read_loop;
    END IF;
    SET @year = orig_year;
    SET @id = orig_id;
    EXECUTE stmt USING @year, @id;
  END LOOP;
  CLOSE cur1;
END;
And to start the procedure, just CALL move_data().
The above SQL has two major ideas to speed it up:
Use CURSORS to iterate over a large table
Use PREPARED statement to quickly execute pre-known commands
P.S. In my case this sped things up from ages to seconds, though in your case it may still take a considerable amount of time. So it would probably be best to execute it from the command line, not from a web interface (e.g. phpMyAdmin).
I just recently did this for ~150 GB of data. I used a pair of merge statements for each table. The first merge statement said "if it's not in the destination table, copy it there", and the second said "if it's in the destination table, delete it from the source". I put both in a while loop and only did 10000 rows in each operation at a time. Keeping it on the server (and not transferring it through a client) is going to be a huge boon for performance. Give it a shot!
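A rough Java sketch of that batched, server-side approach (table and column names are placeholders, not from the answer above):
// Copy rows in fixed-size batches entirely on the server until nothing is left.
String copyBatch =
    "INSERT INTO destination (id, col1) "
  + "SELECT TOP (10000) s.id, s.col1 FROM source s "
  + "WHERE NOT EXISTS (SELECT 1 FROM destination d WHERE d.id = s.id)";
try (Statement stmt = conn.createStatement()) {
    int copied;
    do {
        copied = stmt.executeUpdate(copyBatch);   // each round moves at most 10000 rows
    } while (copied > 0);
}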
