Writing big XML in Sybase and reading it? - java

I am inserting a very big XML document into a Sybase column of type 'text'.
I am writing it using setString in a PreparedStatement and reading it back using getString.
But when I select it using getString I don't get the complete XML.
What can I do to read/write the complete XML?

Doesn't Sybase provide support for the CLOB data type (which would be more suitable for storing large XML)? In the PreparedStatement, you will need to use setClob() instead of setString().
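A minimal sketch of the CLOB route, assuming a JDBC 4.0 driver; the table docs(payload) is made up, and whether jConnect maps a Sybase text column onto the CLOB API depends on the driver version:

    import java.sql.Clob;
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    // Write the XML through the CLOB API instead of setString().
    void writeXml(Connection con, String xml) throws Exception {
        Clob clob = con.createClob();   // JDBC 4.0
        clob.setString(1, xml);         // CLOB positions are 1-based
        try (PreparedStatement ps = con.prepareStatement(
                "INSERT INTO docs (payload) VALUES (?)")) {
            ps.setClob(1, clob);
            ps.executeUpdate();
        } finally {
            clob.free();
        }
    }

    // Read the full value back through getClob() rather than getString().
    String readXml(Connection con) throws Exception {
        try (PreparedStatement ps = con.prepareStatement(
                "SELECT payload FROM docs");
             ResultSet rs = ps.executeQuery()) {
            if (!rs.next()) return null;
            Clob clob = rs.getClob(1);
            return clob.getSubString(1L, (int) clob.length());
        }
    }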

Sybase ASE 15 has a bug when writing text columns longer than 8192 bytes: if your string (XML) contains a character that is invalid in your Sybase database's character set after position 8192, Sybase writes only the first 8192 characters of your text and still reports that the operation was successful.

Related

Performance is slow with Hibernate and MS SQL Server

I'm using Hibernate and the database is SQL Server.
SQL Server differentiates its data types that support Unicode from the ones that only support ASCII. For example, the character data types that support Unicode are nchar, nvarchar and longnvarchar, whereas their ASCII counterparts are char, varchar and longvarchar respectively. By default, all of Microsoft's JDBC drivers send strings to SQL Server in Unicode format, irrespective of whether the data type of the corresponding column defined in SQL Server supports Unicode or not. Where the column data types support Unicode, everything is smooth. But where they do not, serious performance issues arise, especially during data fetches: SQL Server tries to convert the non-Unicode data types in the table to Unicode before doing the comparison. Moreover, if an index exists on the non-Unicode column, it is ignored. This ultimately leads to a whole-table scan during data fetch, slowing down search queries drastically.
The solution we used: we found a property called sendStringParametersAsUnicode which gets rid of this Unicode conversion. It defaults to 'true', which makes the JDBC driver send every string to the database in Unicode format, so we switched it off.
My question: now we can no longer send data in Unicode. If, in the future, one varchar column (only one column, not all varchar columns) is changed to nvarchar, we would then need to send strings to that column in Unicode format.
Please suggest how to handle this scenario.
Thanks.
You need to specify the property sendStringParametersAsUnicode=false in the connection URL:
jdbc:sqlserver://localhost:1433;databaseName=mydb;sendStringParametersAsUnicode=false
Unicode is the native string representation for communication with SQL Server; if you are converting to MBCS (multibyte character sets), then you are doing two conversions for every string. I suggest that if you are concerned with performance, use all Unicode instead of all MBCS.
ref: http://social.msdn.microsoft.com/Forums/en/sqldataaccess/thread/249c629f-b8f2-4a8a-91e8-aad0d83919ca
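For the follow-up question (one future nvarchar column while the driver-wide flag stays off), one option to consider is JDBC 4's setNString(), which sends that single parameter as Unicode regardless of the connection setting. A hedged sketch; the table and column names below are invented:

    import java.sql.Connection;
    import java.sql.PreparedStatement;

    // With sendStringParametersAsUnicode=false in the URL, setString()
    // parameters go over the wire as non-Unicode. For the one nvarchar
    // column, force Unicode per parameter with setNString().
    void insertMixed(Connection con, String code, String title) throws Exception {
        try (PreparedStatement ps = con.prepareStatement(
                "INSERT INTO items (code, title) VALUES (?, ?)")) {
            ps.setString(1, code);     // varchar column: non-Unicode
            ps.setNString(2, title);   // nvarchar column: Unicode
            ps.executeUpdate();
        }
    }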

What is the range of the MS SQL XML argument?

I am using MSSQL with the J2EE Spring framework.
When inserting data into a table, I am using bulk insert with an XML argument in MSSQL.
Can anyone say how much data we can pass this way?
I would like to know the size limit of the XML argument.
T.Saravanan
On the SQL Server side, it is 2 GB:
The stored representation of xml data type instances cannot exceed 2 gigabytes (GB) in size
"Stored" means after some processing for efficiency
SQL Server internally represents XML in an efficient binary representation that uses UTF-16 encoding. User-provided encoding is not preserved, but is considered during the parse process.
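For reference, a small sketch of passing a large XML document from JDBC using the java.sql.SQLXML type; the stored procedure name is hypothetical, and the 2 GB cap quoted above applies to the stored value on the server:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLXML;

    void bulkInsertFromXml(Connection con, String xmlPayload) throws Exception {
        SQLXML xml = con.createSQLXML();   // JDBC 4.0
        xml.setString(xmlPayload);
        try (PreparedStatement ps = con.prepareStatement(
                "EXEC dbo.InsertRowsFromXml ?")) {   // hypothetical proc
            ps.setSQLXML(1, xml);
            ps.executeUpdate();
        } finally {
            xml.free();
        }
    }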

Truncating strings

I'm working with third-party user data that may or may not fit into our database. The data needs to be truncated if it is too long.
We are using iBATIS with Connector/J. If the data is too long, a SQL exception is thrown. I have two choices: either truncate the strings in Java or truncate the strings in SQL using substring.
I don't like truncating the strings in SQL, because that means encoding table structure in our iBATIS XML; SQL, on the other hand, knows about our database collation (which isn't consistent and would be expensive to make consistent) and can truncate strings in a multibyte-safe manner.
Is there a way to have Connector/J just insert this SQL directly, and if not, which route would people recommend?
According to the MySQL documentation, it's possible for inserting data that exceeds the column length to be treated as a warning:
Inserting a string into a string column (CHAR, VARCHAR, TEXT, or BLOB) that exceeds the column's maximum length. The value is truncated to the column's maximum length.
One of the Connector/J properties is jdbcCompliantTruncation. This is its description:
This sets whether Connector/J should throw java.sql.DataTruncation exceptions when data is truncated. This is required by the JDBC specification when connected to a server that supports warnings (MySQL 4.1.0 and newer). This property has no effect if the server sql-mode includes STRICT_TRANS_TABLES. Note that if STRICT_TRANS_TABLES is not set, it will be set as a result of using this connection string option.
If I understand correctly, setting this property to false means the exception is not thrown and the truncated data is inserted. This solution doesn't require you to truncate the data in program code or in SQL statements; it delegates the truncation to the database.
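A minimal sketch of that delegation (placeholder host, database and credentials; per the caveat above, the server sql-mode must not include STRICT_TRANS_TABLES for the silent truncation to happen):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    Connection con = DriverManager.getConnection(
        "jdbc:mysql://localhost:3306/mydb?jdbcCompliantTruncation=false",
        "user", "password");
    try (PreparedStatement ps = con.prepareStatement(
            "INSERT INTO users (name) VALUES (?)")) {
        // If 'name' is VARCHAR(10), this value is silently cut to
        // 10 characters by the server, raising a warning instead of a
        // java.sql.DataTruncation exception.
        ps.setString(1, "a value longer than ten characters");
        ps.executeUpdate();
    }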

JDBC, MySQL: getting bits into a BIT(M!=1) column

I'm new to using JDBC + MySQL.
I have several 1/0 values which I want to stick into a database with a PreparedStatement. The destination column is a BIT(M!=1). I'm unclear on which of the setXXX methods to use. I can find the references for what data comes out as easily enough, but how it goes in is eluding me.
The values effectively live as an ordered collection of booleans in the objects used by the application. Also, I'll occasionally be importing data from flat text files with 1/0 characters.
To set a BIT(M) column in MySQL
For M==1
setBoolean(int parameterIndex, boolean x)
From the javadoc:
Sets the designated parameter to the given Java boolean value. The driver converts this to an SQL BIT value when it sends it to the database.
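For example (table and column names invented):

    PreparedStatement ps = con.prepareStatement(
        "INSERT INTO flags (flag) VALUES (?)");   // flag BIT(1)
    ps.setBoolean(1, true);
    ps.executeUpdate();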
For M>1
Support for BIT(M) where M != 1 is problematic with JDBC, as BIT(M) is only required for "full" SQL-92 and only a few databases support it.
Check here: Mapping SQL and Java Types, 8.3.3 BIT
The following works for me with MySQL (at least with MySQL 5.0.45, Java 1.6 and MySQL Connector/J 5.0.8)
...
PreparedStatement insert = con.prepareStatement(
    "INSERT INTO bittable (bitcolumn) VALUES (b?)"
);
insert.setString(1, "111000");
...
This uses the special b'110101010' syntax of MySQL to set the value for BIT columns.
You can use get/setObject with a byte array (byte[]). 8 bits are packed into each byte with the least significant bit being in the last array element.
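A sketch of that packing, assuming a BIT(12) column named bitcolumn; whether setObject() with a byte[] maps to BIT depends on the driver, so treat this as illustrative:

    // 12 bits -> 2 bytes; the low-order bits live in the last element.
    // Value 1010 1100 0011 packs to { 0x0A, 0xC3 }.
    PreparedStatement ps = con.prepareStatement(
        "INSERT INTO bittable (bitcolumn) VALUES (?)");
    ps.setObject(1, new byte[] { 0x0A, (byte) 0xC3 });
    ps.executeUpdate();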

CallableStatement setString - Unsupported character(s)?

I have a Java application decoding a UTF-8 encoded String received over the wire and saving it to a varchar column in my database (SQL Server 2000). I am saving the record using JDBC's CallableStatement (calling the setString method to set the parameter for this column).
The problem I'm seeing is that a particular String has been written that contains ASCII value 0 (NUL). This suggests to me that SQL server cannot represent a particular Unicode character and the JDBC driver has decided to substitute in ASCII value 0, although I may be wrong.
Has anyone else encountered this problem?
Is there a mechanism I can use to cause the CallableStatement call to fail in this situation?
Ideally I would like to guarantee that data has been saved exactly as specified, or else "fail fast".
My database character set is Latin1_General_AS_CS.
Thanks in advance.
You need to be using 'NVARCHAR' type in the database.
Just a WAG, but would using .setBytes(String parameterName, byte[] x) do the trick? The byte array would come from myString.getBytes(). You might want to try using different character sets with getBytes(), too.
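Building on that idea, a hedged sketch of the "fail fast" behaviour: encode in Java with a CharsetEncoder set to REPORT (rather than replace) unmappable characters, then pass the bytes with setBytes(). The charset here (Cp1252) and the procedure/parameter names are assumptions:

    import java.nio.ByteBuffer;
    import java.nio.CharBuffer;
    import java.nio.charset.CharacterCodingException;
    import java.nio.charset.Charset;
    import java.nio.charset.CharsetEncoder;
    import java.nio.charset.CodingErrorAction;

    // Throws CharacterCodingException instead of silently substituting
    // a replacement character (the NUL seen above).
    byte[] encodeStrict(String s) throws CharacterCodingException {
        CharsetEncoder enc = Charset.forName("Cp1252").newEncoder()
            .onMalformedInput(CodingErrorAction.REPORT)
            .onUnmappableCharacter(CodingErrorAction.REPORT);
        ByteBuffer out = enc.encode(CharBuffer.wrap(s));
        byte[] bytes = new byte[out.remaining()];
        out.get(bytes);
        return bytes;
    }

    // Usage with the CallableStatement (hypothetical proc/parameter):
    // CallableStatement cs = con.prepareCall("{call save_record(?)}");
    // cs.setBytes("record_text", encodeStrict(value));
    // cs.execute();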
