I was working on this code and something weird happened.
I do a bulk insert from an external file, but the result comes out fragmented, or maybe corrupted.
cnx=factoryInstace.getConnection();
pstmt = cnx.prepareStatement("DELETE FROM TEMPCELULAR");
pstmt.executeUpdate();
pstmt = cnx.prepareStatement("EXECUTE BLOCK AS BEGIN if (exists(select 1 from rdb$relations where rdb$relation_name = 'EXT_TAB')) then execute statement 'DROP TABLE EXT_TAB;'; END");
pstmt.executeUpdate();
pstmt = cnx.prepareStatement("CREATE TABLE EXT_TAB EXTERNAL '"+txtarchivoProcesar.getText()+"'(CELULAR varchar(11))");
pstmt.executeUpdate();
pstmt = cnx.prepareStatement("INSERT INTO TEMPCELULAR (CELULAR)SELECT CELULAR FROM EXT_TAB");
pstmt.executeUpdate();
pstmt = cnx.prepareStatement("SELECT CELULAR FROM TEMPCELULAR");
ResultSet rs=pstmt.executeQuery();
while(rs.next()){
System.out.println("::"+rs.getString(1));
}
And now, all of a sudden, the rows in my table look like this:
::c#gmail.com
::abc2#gmail.
::m
abc3#gma
::.com
abc4#
::ail.com
ab
::#gmail.com
::bc6#gmail.c
::abc7#gmai
::com
abc8#g
::il.com
abc
::gmail.com
::c10#gmail.c
::
The blank spaces between results were not added by me; this is the output exactly as it appears.
Source file for external table:
abc#gmail.com
abc2#gmail.com
abc3#gmail.com
abc4#gmail.com
abc5#gmail.com
abc6#gmail.com
abc7#gmail.com
abc8#gmail.com
abc9#gmail.com
abc10#gmail.com
sneciosup#hotmail.com
What's wrong with my code?
I haven't seen results this wacky in years.
The database is created on the user's PC on the first run, so this happens in production every time I run the program.
Any help will be appreciated.
The external table file in Firebird is not just a plain-text file; it is a fixed-width format with special requirements for its content and layout. See the InterBase 6.0 Data Definition Guide, pages 107-111 (available for download from http://www.firebirdsql.org/en/reference-manuals/ ) or The Firebird Book by Helen Borrie, pages 281-287.
The problems I see right now are:
you declare the column in the external table as VARCHAR(11), while the shortest email address in your file is 13 characters and the longest is 21 characters, so Firebird will never be able to read a full email address from the file
you don't specify a separate column for your line breaks, so the line breaks simply become part of the data that is read
you have declared the column as VARCHAR, which requires the records to have a very specific format: the first two bytes declare the actual data length, followed by a string of that length (and even then it only reads up to the declared length of the column). Either make sure you follow the requirements for VARCHAR columns, or simply use the CHAR datatype and pad the column with spaces up to the declared length, as sketched below.
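For illustration only, here is a rough sketch (not from the original answer) of writing the source file in that fixed-width layout, assuming a CHAR(25) column for the address plus a CHAR(2) column for the line break; the file path and width are placeholders:
int width = 25; // must match the CHAR length declared for the external table column
java.util.List<String> emails = java.util.Arrays.asList("abc#gmail.com", "abc2#gmail.com");
try (java.io.Writer w = new java.io.FileWriter("C:\\data\\celular.txt")) {
    for (String email : emails) {
        w.write(String.format("%-" + width + "s", email)); // pad with spaces up to the declared width
        w.write("\r\n"); // the line break occupies its own column, so it never shifts the data
    }
}
The matching definition would then be along the lines of CREATE TABLE EXT_TAB EXTERNAL '...' (CELULAR CHAR(25), CRLF CHAR(2)), and the INSERT ... SELECT would copy only the CELULAR column.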
I am not 100% sure, but your default database character set may also be involved in how the data is read.
Can anyone suggest how to insert two lines in a single cell? That is, I need to enter the first line, and the second line should start on a new line within the same column cell. Suppose I have a column defined as varchar(100); I need to store the string "core java, j2EE will service request" as
core java
j2EE will service request
in a single column.
When I retrieve it from the database and display it on a JSP page, it should appear on two lines.
I am also trying to retrieve Japanese content from the database and display it using JSP. Could the Japanese content (which is in UTF-8) be causing a problem such that the <br /> tag is not parsed as a line break? It comes out as the literal string <br /> when I display it on screen.
A newline character ('\n') is still a character, so there's no problem inserting it:
Connection conn = ...;
PreparedStatement ps = conn.prepareStatement("INSERT INTO my_table VALUES (?)");
ps.setString(1, "j2EE will service request\nin a single column.");
ps.executeUpdate();
Notes:
Different platforms have different line separators, so depending on how exactly this data is going to be consumed, using System.getProperty("line.separator") may be more appropriate.
For clarity's sake, again, this code omits resource management (e.g., closing the statement) and error-handling code.
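For instance, a small variation of the snippet above using the platform separator (the values are just illustrative):
String sep = System.getProperty("line.separator");
ps.setString(1, "core java" + sep + "j2EE will service request");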
I am using JDBC to get data out of a FileMaker Server v12.
For some unknown reason FileMaker allows you to have spaces in your table names. I am unable to select from these tables because I just get a syntax error.
I have written an application in Java to get the data out. Does anyone have any idea how I can select the data from a table with a space in its name?
EDIT (from OP's comments):
This is the Java part:
String selectSQL = "SELECT "+this.getImportableColumnsString()+" FROM "+this.getTableName();
PreparedStatement preparedStatement = this.connection.prepareStatement(selectSQL);
ResultSet rs = preparedStatement.executeQuery();
As mentioned in a comment to the question, if the FileMaker table name contains spaces then it must be enclosed in double-quotes in the SQL statement, e.g.,
String selectSQL = "SELECT * FROM \"table name\"";
My first thought is that you could put the table name within ' characters like: SELECT * FROM 'my table'. Does this not work?
Otherwise I suggest you contact the Filemaker Server support on this page:
http://help.filemaker.com/app/ask
It is likely that they have had this question before and know how to build the query.
//Flipbed
I'm pretty sure that in the docs for ODBC and JDBC support, FileMaker says that tables may have to comply with naming conventions stricter than what FileMaker itself allows. It is easy to change table names in FileMaker if you have admin access to the database you are sourcing, so why not just replace the spaces in the table names with underscores?
Related to this question: "Fix" String encoding in Java
My project encoding is UTF-8.
I need to make a query to a DB that uses a particular varchar encoding (apparently EUC-KR).
I take the input as UTF-8, and I want to make the DB query with the EUC-KR encoded version of that string.
First of all, I can select and display the encoded strings using the following:
ResultSet rs = stmt.executeQuery("SELECT name FROM mytable");
while(rs.next())
System.out.println(new String(rs.getBytes(1), "EUC-KR"));
I want to do something like:
PreparedStatement ps = conn.prepareStatement("SELECT * FROM MYTABLE WHERE NAME=?");
ps.setString(1,input);
ResultSet rs = ps.executeQuery();
Which obviously won't work, because my Java program is not using the same encoding as the DB. So, I've tried replacing the middle line with each of the following, to no avail:
ps.setString(1,new String(input.getBytes("EUC-KR")));
ps.setString(1,new String(input.getBytes("EUC-KR"), "EUC-KR"));
ps.setString(1,new String(input.getBytes("UTF-8"), "EUC-KR"));
ps.setString(1,new String(input.getBytes("EUC-KR"), "UTF-8"));
I am using Oracle 10g 10.1.0
More details of my attempts follow:
What does seem to work is saving the name from the first query into a string without any other manipulation, and passing that back as a parameter. It matches itself.
That is,
ResultSet rs = stmt.executeQuery("SELECT name FROM mytable");
rs.next();
String myString = rs.getString(1);
PreparedStatement ps = conn.prepareStatement("SELECT * FROM mytable WHERE name=?");
ps.setString(1, myString);
rs = ps.executeQuery();
... will result in the one correct entry in rs. Great, so now I just need to convert my input to whatever format that thing seems to be in.
However, nothing I have tried seems to match the "correct" string when I read its bytes using
byte[] mybytearray = myString.getBytes();
for(byte b : mybytearray)
System.out.print(b+" ");
In other words, I can turn °í»ê into 고산 but I can't seem to turn 고산 into °í»ê.
The byte array given by
rs.getBytes(1)
is different from the byte array given by any of the following:
rs.getString(1).getBytes()
rs.getString(1).getBytes("UTF8")
rs.getString(1).getBytes("EUC-KR")
Unhappiness: it turns out that for my DB, NLS_CHARACTERSET = US7ASCII, which means that what I'm trying to do is unsupported. Thanks for playing, everyone :(
You can't accomplish anything with a String constructor. String is always UTF-16 inside. Converting UTF-16 chars to EUC-KR and back again won't help you.
Putting invalid Unicode into String values in the hopes that they will then be converted to EUC-KR is a really bad idea.
What you are doing is supposed to 'just work'. The oracle driver is supposed to talk to the server, find out the desired charset, and go from there.
What, however, is the database charset? If someone is storing EUC-KR without having set the charset to EUC-KR, you are more or less up a creek.
What you need to do is tell your JDBC driver which charset to use to communicate with the server. You haven't mentioned whether you are using Thin or OCI; the answer might be different.
Judging from http://download.oracle.com/docs/cd/E14072_01/appdev.112/e13995/oracle/jdbc/OracleDriver.html, you might want to try turning on defaultNChar.
In general, it's the job of the jdbc driver to transcode String to what the Oracle server wants. You may need tnsnames.ora options if you are using 'OCI'.
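As a sketch only (connection details are placeholders), the defaultNChar property mentioned above can be passed as a connection property when opening the connection:
java.util.Properties props = new java.util.Properties();
props.setProperty("user", "scott");
props.setProperty("password", "tiger");
props.setProperty("defaultNChar", "true"); // ask the driver to bind String parameters as national character set data
Connection conn = DriverManager.getConnection("jdbc:oracle:thin:@host:1521:sid", props);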
Edit:
The OP reports that the nls_charset of the database is US7ASCII. That means that all JDBC drivers will think it is their job to convert Unicode String values to ASCII, so Korean characters will be reduced to ? at best. Officially, then, you are up a creek.
There are some possible tricks to try. One is the very dangerous trick of
new String(string.getBytes("EUC-KR"), "ascii")
that will try to make a string of Unicode chars that just so happens to have the values of EUC-KR in their low bytes. My belief is that this will corrupt data, but you could experiment.
Or, perhaps, ps.setBytes(n, string.getBytes("EUC-KR")), but I myself do not know if Oracle defines the conversion of bytes to chars as a binary copy. It might. Or, perhaps, adding a stored proc that takes a blob as an argument.
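Purely as an experiment (the answer above explicitly does not guarantee this works), the setBytes idea would look something like this, using the table and column from the question:
PreparedStatement ps = conn.prepareStatement("SELECT * FROM MYTABLE WHERE NAME = ?");
ps.setBytes(1, input.getBytes("EUC-KR")); // the driver may or may not pass these bytes through unchanged
ResultSet rs = ps.executeQuery();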
Really, what's called for here is to repair the database to use an nls_charset of UTF-8 or EUC-KR, but that's a whole other job.
Have you looked at the correct name for the charset? Maybe you should be using UTF8 and EUC_KR:
http://download.oracle.com/javase/1.4.2/docs/guide/intl/encoding.doc.html
Hopefully this is not a stupid answer, but have you made sure that charsets.jar is in your classpath? It is NOT there by default; see this page for more:
The charsets.jar file is an optional feature of the JRE. To install it, you must choose the "custom installation" and select the "Support for additional locales" feature.
How can I get an Oracle XMLElement into JDBC?
java.sql.Statement st = connection.createStatement(); // works
java.sql.ResultSet rs = st.executeQuery("SELECT XMLElement(\"name\") FROM dual");
rs.next();
rs.getString(1); // returns null, why?
oracle.sql.OPAQUE opaque = (oracle.sql.OPAQUE) rs.getObject(1); // this works, but wtf is OPAQUE?
Basically, I want to read a String like <name> </name> or whatever XML-formatted output. But I always fail to cast the output to anything reasonable. Only the weird oracle.sql.OPAQUE works, and I totally don't know what to do with it. Even toString() is not overridden!
Any ideas? How do I read Oracle's XMLElement (XMLType)? I am using Oracle 10.0.2.
You can't.
Oracle's JDBC driver does not support the JDBC XML type properly.
The only thing you can do is convert the XML as part of the query:
SELECT to_clob(XMLElement("name")) from dual
Then you can retrieve the XML using getString()
Alternatively, you can use XMLElement("name").getClobVal(); again, this is part of your query, and the result can be accessed as a String from within your Java class.
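A quick sketch of that workaround in Java, reusing the Statement from the question:
java.sql.ResultSet rs = st.executeQuery("SELECT to_clob(XMLElement(\"name\")) FROM dual");
if (rs.next()) {
    System.out.println(rs.getString(1)); // e.g. <name></name>
}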
ORA-1652: unable to extend temp segment by 128 in tablespace temp is a totally different error, nothing to do with XMLElement.
It just means you have to set your temp file to autoextend or give it a bigger size:
ALTER TABLESPACE TEMP ADD TEMPFILE '/u01/app/oracle/product/10.2.0/db_1/oradata/oracle/temp01.dbf' SIZE 10M AUTOEXTEND ON
ALTER DATABASE TEMPFILE '/u01/app/oracle/product/10.2.0/db_1/oradata/oracle/temp01.dbf' RESIZE 200M
I'm busy with a piece of code to get all the column names of a table from an Oracle database. The code I came up with looks like this:
DriverManager.registerDriver (new oracle.jdbc.driver.OracleDriver());
Connection conn = DriverManager.getConnection(
"jdbc:oracle:thin:#<server>:1521:<sid>", <username>, <password>);
DatabaseMetaData meta = conn.getMetaData();
ResultSet columns = meta.getColumns(null, null, "EMPLOYEES", null);
int i = 1;
while (columns.next())
{
System.out.printf("%d: %s (%d)\n", i++, columns.getString("COLUMN_NAME"),
columns.getInt("ORDINAL_POSITION"));
}
When I ran this code, to my surprise, too many columns were returned. A closer look revealed that the ResultSet contained a duplicate set of all the columns, i.e. every column was returned twice. Here's the output I got:
1: ID (1)
2: NAME (2)
3: CITY (3)
4: ID (1)
5: NAME (2)
6: CITY (3)
When I look at the table using Oracle SQL Developer it shows that the table only has three columns (ID, NAME, CITY). I've tried this code against several different tables in my database and some work just fine, while others exhibit this weird behaviour.
Could there be a bug in the Oracle JDBC driver? Or am I doing something wrong here?
Update: Thanks to Kenster I now have an alternative way to retrieve the column names. You can get them from a ResultSet, like this:
DriverManager.registerDriver (new oracle.jdbc.driver.OracleDriver());
Connection conn = DriverManager.getConnection("jdbc:oracle:thin:@<server>:1521:<sid>", <username>, <password>);
Statement st = conn.createStatement();
ResultSet rset = st.executeQuery("SELECT * FROM \"EMPLOYEES\"");
ResultSetMetaData md = rset.getMetaData();
for (int i=1; i<=md.getColumnCount(); i++)
{
System.out.println(md.getColumnLabel(i));
}
This seems to work just fine and no duplicates are returned! And for those who wonder: according to this blog you should use getColumnLabel() instead of getColumnName().
In Oracle, Connection.getMetaData() returns metadata for the entire database, not just the schema you happen to be connected to. So when you supply null as the first two arguments to meta.getColumns(), you're not filtering the results for just your schema.
You need to supply the name of the Oracle schema to one of the first two parameters of meta.getColumns(), probably the second one, e.g.
meta.getColumns(null, "myuser", "EMPLOYEES", null);
It's a bit irritating having to do this, but that's the way the Oracle folks chose to implement their JDBC driver.
This doesn't directly answer your question, but another approach is to execute the query:
select * from tablename where 1 = 0
This will return a ResultSet, even though it doesn't select any rows. The result set metadata will match the table that you selected from. Depending on what you're doing, this can be more convenient. tablename can be anything that you can select on--you don't have to get the case correct or worry about what schema it's in.
In the update to your question I noticed that you missed one key part of Kenster's answer: he specified a WHERE clause of 'where 1 = 0', which you don't have. This is important because if you leave it off, Oracle will try to return the ENTIRE table, and if you don't pull all of the records over, Oracle will hold on to them, waiting for you to page through them. Adding that WHERE clause still gives you the metadata, but without any of the overhead.
Also, I personally use 'where rownum < 1', since Oracle knows immediately that all rownums are past that, and I'm not sure whether it's smart enough not to test each record against '1 = 0'.
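For example, the query from the update above could simply become the following (a sketch; EMPLOYEES is just the example table):
ResultSet rset = st.executeQuery("SELECT * FROM \"EMPLOYEES\" WHERE ROWNUM < 1");
ResultSetMetaData md = rset.getMetaData(); // same column metadata, but no rows are fetched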
In addition to skaffman's answer -
use the following query in Oracle:
select sys_context( 'userenv', 'current_schema' ) from dual;
to access your current schema name if you are restricted to do so in Java.
This is the behavior mandated by the JDBC API: passing null as the first and second parameters to getColumns means that neither the catalog name nor the schema name is used to narrow the search.
Link to the documentation. It is true that some other JDBC drivers behave differently by default (e.g. MySQL's Connector/J restricts results to the current catalog), but this is not standard, and it is documented as such.