Currently I'm trying to import an SQL INSERT dump from a PostgreSQL database into my Derby development/testing database using Eclipse's Data Tools SQL scratchpad. The export created a lot of data that looked like the following:
CREATE TABLE mytable ( testfield BLOB );
INSERT INTO mytable ( testfield ) VALUES ('\x0123456789ABCDEF');
Executing it in Eclipse's SQL Scratchpad results in (translated from German):
Columns of type 'BLOB' shall not contain values of type 'CHAR'.
The problem seems to be that the PostgreSQL admin tool exported BLOB data in a format like '\x0123456789ABCDEF', which is not recognized by Derby (Embedded).
Changing this to X'0123456789ABCDEF' or simply '0123456789ABCDEF' did not work either.
The only thing that worked was CAST (X'0123456789ABCDEF' AS BLOB), but I'm not yet sure whether this results in the correct binary data when read back in Java, and whether X'0123456789ABCDEF' is 100% portable.
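A minimal JDBC check of the read-back, assuming the mytable definition above, could look like this (a sketch; the connection URL is a placeholder):
import java.sql.*;

public class BlobRoundTripCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder embedded-Derby URL; adjust to the actual test database
        try (Connection con = DriverManager.getConnection("jdbc:derby:testdb;create=true");
             Statement st = con.createStatement()) {
            st.executeUpdate("CREATE TABLE mytable ( testfield BLOB )");
            st.executeUpdate("INSERT INTO mytable ( testfield ) VALUES (CAST (X'0123456789ABCDEF' AS BLOB))");
            try (ResultSet rs = st.executeQuery("SELECT testfield FROM mytable")) {
                rs.next();
                byte[] data = rs.getBytes(1);
                // Should print: 01 23 45 67 89 AB CD EF
                for (byte b : data) {
                    System.out.printf("%02X ", b);
                }
            }
        }
    }
}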
CAST (...whatever... AS BLOB) doesn't work in Java DB / Apache Derby!
One must use the built-in system procedure
SYSCS_UTIL.SYSCS_IMPORT_DATA_LOBS_FROM_EXTFILE. I do not think there is any other way. For instance:
CALL SYSCS_UTIL.SYSCS_IMPORT_DATA_LOBS_FROM_EXTFILE (
'MYSCHEMA', 'MYTABLE',
'MY_KEY, MY_VARCHAR, MY_INT, MY_BLOB_DATA',
'1,3,4,2',
'c:\tmp\import.txt', ',' , '"' , 'UTF-8',
0);
where the referenced "import.txt" file will be CSV-like (as specified by the ',' and '"' arguments above) and contains as its 2nd field (I scrambled the CSV field order versus the DB column order on purpose to illustrate) a file name holding the binary data in the proper form for the BLOB. For instance, "import.txt" is like:
"A001","c:\tmp\blobimport.dat.0.55/","TEST",123
where the supplied BLOB data file name follows the convention "filepath.offset.length/"
Actually, you can first export your table with
CALL SYSCS_UTIL.SYSCS_EXPORT_TABLE_LOBS_TO_EXTFILE(
'MYSCHEMA', 'MYTABLE', 'c:\tmp\export.txt', ',' ,'"',
'UTF-8', 'c:\tmp\blobexport.dat');
to generate sample files with the syntax to reuse on import.
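These system procedures can also be called from Java through JDBC. A sketch of the export call above (connection URL and paths are placeholders):
import java.sql.*;

public class ExportWithLobs {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection("jdbc:derby:testdb"); // placeholder URL
             CallableStatement cs = con.prepareCall(
                 "{call SYSCS_UTIL.SYSCS_EXPORT_TABLE_LOBS_TO_EXTFILE(?, ?, ?, ?, ?, ?, ?)}")) {
            cs.setString(1, "MYSCHEMA");
            cs.setString(2, "MYTABLE");
            cs.setString(3, "c:\\tmp\\export.txt");    // main export file
            cs.setString(4, ",");                       // column delimiter
            cs.setString(5, "\"");                      // character delimiter
            cs.setString(6, "UTF-8");                   // code set
            cs.setString(7, "c:\\tmp\\blobexport.dat"); // external file for the LOB data
            cs.execute();
        }
    }
}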
Related
I'm trying to export the GeoTools HSQL 2 database and load it back into HSQL 1 for a legacy system that needs the older database format. The tables include characters like the degree symbol. However, it's coming out as the escape sequence \u0080 rather than the encoded character. I need to either fix that or have the HSQL 1 import convert the escaped characters back into the correct encoding.
e.g.
cp modules/plugin/epsg-hsql/src/main/resources/org/geotools/referencing/factory/epsg/EPSG.zip /tmp
cd /tmp
unzip EPSG.zip
java -jar hsqldb-2.4.1.jar
# For the file, put jdbc:hsqldb:file:/tmp/EPSG
SELECT 'epsg-dump'
And in the results I see things like this \u00b5:
INSERT INTO EPSG_ALIAS VALUES(389,'epsg_unitofmeasure',9109,7302,'\u00b5rad','')
Looking into hsqldb, I'm not sure how to control the encoding of the data being written, assuming that this is the correct place to look:
https://github.com/ryenus/hsqldb/blob/master/src/org/hsqldb/scriptio/ScriptWriterText.java
You can use the following procedure:
In the source database, create TEXT tables with exactly the same columns as the original tables. Use CREATE TEXT TABLE thecopyname (LIKE thesourcename) for each table.
Use SET TABLE thecopyname SOURCE 'thecopyname.csv;encoding=UTF-8' for each of the copy tables.
INSERT INTO each thecopyname table with SELECT * FROM thesourcename.
Use SET TABLE thecopyname SOURCE OFF for each thecopyname.
You will now have several thecopyname.csv files (each with its own name) with UTF-8 encoding.
Use the reverse procedure on the target database: you need to explicitly create the TEXT tables, then use SET TABLE thecopyname SOURCE 'thecopyname.csv;encoding=UTF-8'. A JDBC sketch of the source-side steps follows below.
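If you drive this from Java, the source-side steps might look roughly like this (a sketch; thecopyname and thesourcename are the placeholders from above, and the connection URL is the one from the question):
import java.sql.*;

public class CopyToTextTables {
    public static void main(String[] args) throws Exception {
        // URL from the question; SA with an empty password is HSQLDB's default
        try (Connection con = DriverManager.getConnection("jdbc:hsqldb:file:/tmp/EPSG", "SA", "");
             Statement st = con.createStatement()) {
            st.execute("CREATE TEXT TABLE thecopyname (LIKE thesourcename)"); // one per table
            st.execute("SET TABLE thecopyname SOURCE 'thecopyname.csv;encoding=UTF-8'");
            st.execute("INSERT INTO thecopyname SELECT * FROM thesourcename");
            st.execute("SET TABLE thecopyname SOURCE OFF");
        }
    }
}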
The encoding looks like Unicode (one to four hex digits).
Try this in bash (quick & dirty):
echo -ne "$(< dump.sql)" > dump_utf8.sql
I use the following command to import data from a .csv file into a MySQL database table like so:
String loadQuery = "LOAD DATA LOCAL INFILE '" + file + "' INTO TABLE source_data_android_cell FIELDS TERMINATED BY ',' " + "ENCLOSED BY '\"'"
+ " LINES TERMINATED BY '\n' " + "IGNORE 1 LINES(.....) " + "SET test_date = STR_TO_DATE(@var1, '%d/%m/%Y %k:%i')";
However, one of the columns in the source file contains some really screwy data: viva Y31L.RastaMod䋢_Version. The program refuses to import the data into MySQL and keeps throwing this error:
java.sql.SQLException: Invalid utf8 character string: 'viva
Y31L.RastaMod'
I searched for this but can't really understand what exactly the error is, other than that the input format of this string "viva Y31L.RastaMod䋢_Version" was wrong and didn't fit the utf8 format used in the MySQL database.
However, I already ran SET NAMES UTF8MB4 in my MySQL DB, since it was suggested in other questions that UTF8MB4 is more flexible in accepting weird characters.
I explored this further by manually inserting that weird data into the MySQL database table in the Command Prompt, which worked fine. In fact, the table displayed almost the full entry: viva Y31L.RastaMod?ã¢_Version. But if I run my program from the IDE, the file gets rejected.
Would appreciate any explanations.
A second, minor question related to the process of importing the CSV file into MySQL:
I noticed that I couldn't import a copy of the same file into the MySQL database; the errors said the data was a duplicate. Is that because MySQL rejects duplicate column data? But when I changed all the data in one column of the copied file, leaving the rest the same, it was imported correctly. Why is that?
I don't think this immediate error has to do with the destination of the data not being able to cope with UTF-8 characters, but rather with the way you are using LOAD DATA. You can try specifying the character set which should be used when loading the data. Consider the following LOAD DATA command, which is what you had originally, but slightly modified:
LOAD DATA LOCAL INFILE 'path/to/file' INTO TABLE source_data_android_cell
CHARACTER SET utf8
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES(.....)
SET test_date = STR_TO_DATE(@var1, '%d/%m/%Y %k:%i')
This being said, you should also make sure that the target table uses a character set which supports the data you are trying to load into it.
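Applied to the Java code from the question, this might look as follows (a sketch; the elided (.....) column list is left as in the original, and utf8 could be replaced by utf8mb4 if that is what the table uses):
// Sketch: the question's loadQuery with an explicit CHARACTER SET clause.
String loadQuery = "LOAD DATA LOCAL INFILE '" + file + "' INTO TABLE source_data_android_cell"
        + " CHARACTER SET utf8"
        + " FIELDS TERMINATED BY ',' ENCLOSED BY '\"'"
        + " LINES TERMINATED BY '\n' IGNORE 1 LINES(.....)"
        + " SET test_date = STR_TO_DATE(@var1, '%d/%m/%Y %k:%i')";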
I have Workspace/Schema EDUCATION in Oracle XE.
In my Java code I want to execute queries like SELECT * FROM Table instead of SELECT * FROM EDUCATION.Table.
When I write the query without EDUCATION, I get the error: table or view does not exist.
I tried to set the default schema to % (screenshot), but it did not help.
How to avoid writing Workspace/Schema name?
If I understand correctly, you want to access tables in other schemas without using the schema name.
One simple way to do this uses synonyms. In the schema you are connected to:
create synonym table for education.table;
Then you can use table where you would use education.table.
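From JDBC, that might look like the following sketch (EMPLOYEES is a hypothetical table name, since TABLE itself is a reserved word and the real names aren't given; URL and credentials are placeholders):
import java.sql.*;

public class SynonymExample {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details for Oracle XE
        try (Connection con = DriverManager.getConnection(
                 "jdbc:oracle:thin:@//localhost:1521/XE", "user", "password");
             Statement st = con.createStatement()) {
            st.execute("CREATE SYNONYM employees FOR education.employees"); // run once
            try (ResultSet rs = st.executeQuery("SELECT * FROM employees")) {
                while (rs.next()) {
                    // rows are accessible without the EDUCATION. prefix
                }
            }
        }
    }
}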
I am transferring data from one table (MySQL) to another (Postgres) using iBatis.
Table1(String Name, INTEGER Age, DATE dob, String Note)
The code I am using to INSERT data through iBatis is:
<insert id="insertData" parameterType="TransferBean" >
INSERT
INTO ${base.tableName}(${base.columns})
VALUES(
${base.columnValueStr}
)
</insert>
where columns is the list of column names, and columnValueStr is the list of values being passed in comma-separated format.
The base query created by iBatis and passed to Postgres is:
INSERT INTO TABLE2(Name,Age,Dob,Note) VALUES("Ayush",NULL,"Sample")
However Postgres is throwing the following error:
column \"Age\" is of type smallint but expression is of type character varying\n Hint: You will need to rewrite or cast the expression.
My guess is that Postgres is reading NULL as 'null'. I have tried passing 0 (which works) and '' (which does not work), chosen based on the column name, but that's not a generic and graceful solution.
Filtering according to the type of the column is not possible.
I need to know whether there is a workaround, at the query or even the Java level, that would pass the NULL as a proper null.
P.S. I have tried inserting null values into the smallint column from the Postgres IDE, and that works fine.
You need to check whether a variable is null and replace it with the text NULL, as the null might otherwise be rendered as blank.
Make sure it is
INSERT INTO TABLE2(Name,Age,Dob,Note) VALUES('Ayush',NULL,'Sample','')
and not the following (which is not valid SQL):
INSERT INTO TABLE2(Name,Age,Dob,Note) VALUES('Ayush',,'Sample',)
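At the Java level, one rough way to build columnValueStr so that a Java null comes through as the SQL literal NULL (a sketch; the quoting is naive and for illustration only):
public class SqlLiterals {
    // Render a Java value as a SQL literal; null becomes the keyword NULL
    static String toSqlLiteral(Object value) {
        if (value == null) return "NULL";
        if (value instanceof Number) return value.toString();
        return "'" + value.toString().replace("'", "''") + "'";
    }

    public static void main(String[] args) {
        String columnValueStr = String.join(",",
                toSqlLiteral("Ayush"), toSqlLiteral(null),
                toSqlLiteral("01-01-2020"), toSqlLiteral(""));
        System.out.println(columnValueStr); // 'Ayush',NULL,'01-01-2020',''
    }
}
Where feasible, letting iBatis bind parameters instead of splicing ${} strings into the statement would avoid the issue altogether.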
Since you are transferring data from MySQL to PostgreSQL, you could give www.topnew.net/sidu/ a try, which might make your task easier. Simply export from one database and import into the other, as SIDU supports both MySQL and PostgreSQL.
I am using an Apache Derby database and basing my database interactions on EntityManager, and I don't want to use the JDBC classes to build a query to change my tables' names (I just need to add a prefix for each new user of the application, while keeping the same table structure), such as:
//em stands for EntityManager object
Query tableNamesQuery= em.createNamedQuery("RENAME TABLE SCHEMA.EMP_ACT TO EMPLOYEE_ACT");
em.executeUpdate();
// ... rest of the function's work
// The command works from the database command prompt but I don't know how to use it in a program
// Or, as far as I know, you can't change system table data, but here's the code
Query tableNamesQuery= em.createNamedQuery("UPDATE SYS.SYSTABLES SET TABLENAME='NEW_TABLE_NAME' WHERE TABLETYPE='T'");
em.executeUpdate();
// ... rest of the function's work
My questions are:
Is this syntax correct?
Will it work?
Is there any other alternative?
Should I just use SYS.SYSTABLES, find all the tables that have 'T' as table type, and alter their names there? Will that change the access name?
I think you're looking for the RENAME TABLE statement: http://db.apache.org/derby/docs/10.10/ref/rrefsqljrenametablestatement.html
Don't just issue UPDATE statements against the system catalogs; you will corrupt your database.
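From the EntityManager, the statement can be issued as a native query rather than a named query (a sketch; executeUpdate() is called on the Query object, and a resource-local transaction is assumed):
// em is the EntityManager from the question; RENAME TABLE is Derby DDL,
// so it goes through a native query inside an active transaction.
em.getTransaction().begin();
em.createNativeQuery("RENAME TABLE SCHEMA.EMP_ACT TO EMPLOYEE_ACT").executeUpdate();
em.getTransaction().commit();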