I have a simple entity bean with a @Lob annotation. If I delete this annotation, I get no errors on JBoss AS 6.0.0.Final and MySQL 5. But if I annotate the field with @Lob (because mt contains about 100 to 5,000 characters in my case), I get errors in my testing environment when I persist the entity.
without @Lob: mt is mapped to VARCHAR
with @Lob: mt is mapped to LONGTEXT (this is what I want, but I get errors)
This is my entity:
@Entity
@Table(name = "Description")
public class Description implements Serializable
{
    public static final long serialVersionUID = 1;

    @Id @GeneratedValue(strategy = GenerationType.IDENTITY)
    private long id;

    @Lob
    private String mt;
} // ... getter/setter
The errors are:
...
Caused by: org.hibernate.exception.GenericJDBCException: could not insert
[my.Description]
...
Caused by: java.sql.SQLException: Connection is not associated with a managed
connection.org.jboss.resource.adapter.jdbc.jdk6.WrappedConnectionJDK6@3e4dd
...
I really don't know why I get this (reproducible) error. The environment seems to be OK: many other tests pass, and everything even works without the @Lob annotation.
This question is related to JPA: how do I persist a String into a database field, type MYSQL Text, where the usage of @Lob for JPA/MySQL is the accepted answer.
Update 1 The error above is OS-specific. On a Windows 7 machine I have no problems with @Lob; on OS X Lion I always get the error. I will try to update MySQL and the driver.
Update 2 The workaround proposed by Kimi, @Column(columnDefinition = "longtext"), works fine, even on OS X. In both cases MySQL creates the same column: LONGTEXT.
Update 3 I updated MySQL to mysql-5.5.17-osx10.6-x86_64 and the connector to mysql-connector-java-5.1.18. Still the same error.
There is no need to annotate a String property with @Lob. Just set the column length with @Column(length = 10000).
EDIT:
Moreover, you can always set the length to your database-specific maximum value, and you can define the column type with the columnDefinition attribute to whatever suits your needs.
For example, if you're using MySQL 5.0.3 or later, the maximum amount of data that can be stored in each data type is as follows:
VARCHAR: 65,535 bytes (~64Kb, 21,844 UTF-8 encoded characters)
TEXT: 65,535 bytes (~64Kb, 21,844 UTF-8 encoded characters)
MEDIUMTEXT: 16,777,215 bytes (~16Mb, ~5.5 million UTF-8 encoded characters)
LONGTEXT: 4,294,967,295 bytes (~4GB, ~1.4 billion UTF-8 encoded characters).
As I understand it, @Lob just sets the column type and length depending on the underlying database.
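For illustration, here is a minimal sketch of both approaches, reusing the entity from the question (which exact MySQL type the provider picks for a given length is dialect-dependent, so verify it against your generated schema):

import java.io.Serializable;
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;

@Entity
public class Description implements Serializable
{
    public static final long serialVersionUID = 1;

    @Id @GeneratedValue(strategy = GenerationType.IDENTITY)
    private long id;

    // Option 1: set an explicit length and let the dialect choose the type
    @Column(length = 10000)
    private String mt;

    // Option 2 (the workaround from Update 2): pin the MySQL type yourself
    // @Column(columnDefinition = "longtext")
    // private String mt;
}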
Another way that worked for me is to update the launch configuration of JBoss with
-Dhibernate.jdbc.use_streams_for_binary=true
I'm running JBoss 6 with MySQL 5. (Note that hibernate.jdbc.use_streams_for_binary is documented as a system-level Hibernate property, which is why it goes on the JVM command line rather than in persistence.xml.)
Related
While this mapping works in MySQL when saving objects:
@Id
private String id;
on Oracle it throws ORA-01465: invalid hex number when I save my object.
This is how I create the id: UUID.randomUUID().toString()
My app must support both MySQL 5 and Oracle 12, so I can only add MySQL- or Oracle-specific adapters/extensions that can easily be switched off when moving from one DB to the other. I cannot change the JPA entity code if that would bind it to a specific database; it must work on both databases.
What can I do so that switching from MySQL to Oracle doesn't break the application?
Just remove the '-' characters from UUID.randomUUID().toString().
For example,
UUID.randomUUID().toString().replaceAll("-","")
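In the entity this could look like the following sketch (the column length is my assumption; a dash-free UUID is 32 hex characters, which Oracle also accepts for a RAW-backed column, where the dashes most likely caused the ORA-01465 in the first place):

import java.util.UUID;

// ...inside the entity:
@Id
@Column(length = 32) // 32 hex chars once the dashes are stripped
private String id = UUID.randomUUID().toString().replaceAll("-", "");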
I'm using Spring Boot with Hibernate, JPA and PostgreSQL. I want to convert database large objects into plain text content. Previously I defined the long text in my JPA entity as a @Lob:
@Lob
String description;
I then discovered that @Lob columns often cause problems, and decided to change them to:
@Type(type="org.hibernate.type.StringClobType")
String description;
This is represented in the database as a text type. Unfortunately, the reference numbers (oids) of the previous large objects are now stored in my rows instead of the actual content. For example:
id | description
---------------------
1 | 463784 <- This is a reference to the object rather than the content
instead of:
id | description
---------------------
1 | Once upon a time, in a galaxy...
My question is: now that we have thousands of rows of data in the database, how do I write a function or perform a query to replace each large-object id with the actual text content stored in that large object?
Special thanks to @BohuslavBurghardt for pointing me to this answer. For your convenience:
UPDATE table_name SET column_name = lo_get(cast(column_name as bigint))
I needed some additional conversion:
UPDATE table_name SET text_no_lob = convert_from(lo_get(text::oid), 'UTF8');
I had the same problem with Spring, Postgres and JPA (Hibernate). I had a payload field like the one below:
@NotBlank
@Column(name = "PAYLOAD")
private String payload;
I wanted to change the data type to text to support large data, so I used @Lob and got the same error. To resolve it, I first changed the field in my entity like this:
@NotBlank
@Column(name = "PAYLOAD")
@Lob
@Type(type = "org.hibernate.type.TextType")
private String payload;
And because the existing data in this column was a scalar (the large-object oid number), I converted it to plain text with the command below in Postgres:
UPDATE MYTABLE SET PAYLOAD = lo_get(cast(PAYLOAD as bigint))
Thanks a lot @Sipder.
I'm trying to save a CLOB into the database and read it back, but I'm getting an SQLException:
Caused by: java.sql.SQLException: Lob read/write functions called while another read/write is in progress: getBytes()
at oracle.jdbc.driver.T4CConnection.getBytes(T4CConnection.java:2427)
at oracle.sql.BLOB.getBytes(BLOB.java:348)
at oracle.jdbc.driver.OracleBlobInputStream.needBytes(OracleBlobInputStream.java:181)
I figure the problem occurs when I try to read the CLOB while it's still being saved.
If the CLOB is small it works fine, but when the CLOB is a little bigger it fails.
Sorry about my English, and thanks.
EDIT:
The annotation is:
@Lob
@Column(nullable = false)
private String body;
The save method
emailRepository.save(email);
Setting the Hibernate property
<property name="hibernate.temp.use_jdbc_metadata_defaults" value="false"/>
solved the problem for me.
Setting the lobCreator of the SessionFactory to NonContextualLobCreator is probably a better solution (not tried yet).
However, I'm not sure what causes this error.
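If you want to try the NonContextualLobCreator route, here is a hedged sketch; the property name below is the Hibernate 5+ switch for non-contextual LOB creation (verify it exists in your version), and the persistence-unit name is an assumption:

import java.util.HashMap;
import java.util.Map;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

// Disabling contextual LOB creation makes Hibernate fall back to
// NonContextualLobCreator instead of creating LOBs via the JDBC Connection.
Map<String, Object> props = new HashMap<>();
props.put("hibernate.jdbc.lob.non_contextual_creation", "true");
EntityManagerFactory emf =
        Persistence.createEntityManagerFactory("myUnit", props);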
I encountered a similar problem in one of my projects; setting
updatable = false
fixed the issue for me.
Example:
@Lob
@Column(name = "CONTENT", updatable = false)
Blob content;
Hibernate somehow tries to re-save the content, even when it was not changed.
I'm getting the following exception while updating a table in Hibernate:
ORA-24816: Expanded non LONG bind data supplied after actual LONG or LOB column
I have extracted the SQL query as well; it looks like:
Update table_name set columnName (LOB)=value, column2 (String with 4000)=value where id=?;
Entity class:
@Entity
class Test {
    @Lob
    private String errorText;

    @Column(length = 4000)
    private String text;
}
Please help me figure out what is wrong here.
Thanks,
Ravi Kumar
Running oerr ora 24816 to get the details on the error yields:
$ oerr ora 24816
24816, ... "Expanded non LONG bind data supplied after actual LONG or LOB column"
// *Cause: A Bind value of length potentially > 4000 bytes follows binding for
// LOB or LONG.
// *Action: Re-order the binds so that the LONG bind or LOB binds are all
// at the end of the bind list.
So another solution that uses only 1 query would be to move your LOB/LONG binds after all your non-LOB/LONG binds. This may or may not be possible with Hibernate. Perhaps something more like:
update T set column2 (String with 4000)=:1, columnName (LOB)=:3 where id=:2;
This DML limitation appears to have been around since at least Oracle 8i.
References:
http://openacs.org/forums/message-view?message_id=595742
https://community.oracle.com/thread/417560
I realise that this thread is quite old, but I thought I'd share my own experience with the same error message here for future reference.
I had the exact same symptoms (i.e. ORA-24816) for a couple of days. I was side-tracked for a while by various threads suggesting the problem was related to the order of parameter binding; in my case it was not. I also struggled to reproduce the error: it only occurred after deploying to an application server, and I could not reproduce it in integration tests.
However, I took a look at the code where I was binding the parameter and found:
preparedStatement.setString(index, someStringValue);
I replaced this with:
preparedStatement.setClob(index, new StringReader(someStringValue));
This did the trick for me.
This thread from back in 2009 was quite useful.
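As a self-contained illustration of that change (the table and column names here are assumptions, not taken from the original code):

import java.io.StringReader;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public final class ClobBindExample
{
    // Binding the large String as a CLOB instead of a plain VARCHAR
    // avoids the oversized bind that triggers errors like ORA-24816.
    static void updateBody(Connection conn, long id, String body) throws SQLException
    {
        String sql = "UPDATE email SET body = ? WHERE id = ?";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setClob(1, new StringReader(body));
            ps.setLong(2, id);
            ps.executeUpdate();
        }
    }
}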
I found the issue.
1. When Hibernate updates data in the DB and the entity has both a 4000-character column and a LOB-type column, Hibernate throws this exception.
I solved the issue by writing two update queries (sketched below):
1. First I saved the entity using update().
2. Then I ran another update query for the LOB column alone.
Thanks,
ravi
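A hedged sketch of that two-step workaround using JPQL against the Test entity from the question (assuming Test has an id field; the EntityManager wiring and variable names are my own):

import javax.persistence.EntityManager;

void updateInTwoSteps(EntityManager em, long id, String text, String errorText)
{
    // Step 1: update the plain VARCHAR2 column on its own.
    em.createQuery("UPDATE Test t SET t.text = :text WHERE t.id = :id")
      .setParameter("text", text)
      .setParameter("id", id)
      .executeUpdate();

    // Step 2: update the LOB column separately, so no long non-LOB bind
    // follows a LOB bind within a single statement.
    em.createQuery("UPDATE Test t SET t.errorText = :err WHERE t.id = :id")
      .setParameter("err", errorText)
      .setParameter("id", id)
      .executeUpdate();
}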
I also encountered the same error with an Oracle DB and found that the Hibernate team fixed it here.
In my case we were already using Hibernate 4.3.7 but hadn't marked the field as @Lob in the entity.
Reproducing Steps
Have fields with VARCHAR2 and CLOB data types. Make sure your column names are in this alphabetical order: clob_field, varchar_two_field1, varchar_two_field2.
Now update clob_field with < 2000 bytes and varchar_two_field1 with 4000 bytes of data.
This should end with the error ORA-24816: Expanded non LONG bind data supplied after actual LONG or LOB column.
Solution
Make sure you are on Hibernate 4.1.8 or later (the fix is also in 4.3.0.Beta1 and later).
Annotate your CLOB/BLOB field in the respective entity as:
import javax.persistence.Lob;
...
@Lob
@Column(name = "description")
private String description;
....
If you want to see the difference after making the above changes, enable SQL statement logging by setting "hibernate.show_sql" to "true" in persistence.xml.
I came across this issue today while trying to insert data into a table. To avoid this error, just keep all the fields with a LOB data type at the end of the insert statement.
For example, Table1 has 8 fields (Field1, Field2, ... Field8, etc.), of which Field1 and Field2 are of CLOB data types and the rest are VARCHAR2 data types. Then, while inserting the data, make sure you keep the Field1 and Field2 values at the end, like below:
INSERT INTO TABLE1 ( Field3,Field4,Field5,Field6,Field7,Field8,Field1,Field2)
Values ('a','b','c','d','e','f','htgybyvvbshbhabjh','cbsdbvsb')
Place your LOB bindings last and see if that solves the issue.
I have an Image class with a byte[] holding the actual image data. I can upload and insert the image just fine in my webapp. But when I read the image back through JPA and attempt to display it, the length of my byte[] is always either 2x-1 or 2x-2, where x is the length of the bytea field in Postgres 9. The browser consequently refuses to display the image, saying it's corrupted. I could use some help figuring out why I'm getting (about) twice what I expect. Here's the mapping of my Image class. I'm using EclipseLink with JPA 2 hitting Postgres 9 on a Mac.
When I select from the database with
select *, length(bytes) from image;
I get a length of 9765. At a breakpoint in my controller, the byte[] length is 19529, which is one byte shy of twice what's in the database.
@Entity
@Table( name = "image" )
@SequenceGenerator( name = "IMAGE_SEQ_GEN", sequenceName = "IMAGE_SEQUENCE" )
public class Image
    extends DataObjectAbstract<Long>
{
    @Id
    @GeneratedValue( strategy = GenerationType.SEQUENCE, generator = "IMAGE_SEQ_GEN" )
    private Long key;

    @Column( name="content_type" )
    private String contentType;

    @Lob
    @Basic( optional=false )
    @Column( name="bytes" )
    private byte[] bytes;

    // constructor and getters and setters
}
pgAdmin shows me the following for the image table:
CREATE TABLE image
(
"key" bigint NOT NULL,
bytes bytea,
content_type character varying(255),
"version" integer,
CONSTRAINT image_pkey PRIMARY KEY (key)
)
WITH (
OIDS=FALSE
);
The "bytea_output = escape" is just a workaround, Postgres 8.0 changed the bytea encoding to hex.
Use a current JDBC driver, 9.0-dev800 or later (9.0 Build 801 is up-to-date currently), and the problem will be solved.
In PostgreSQL 9, byte[] (bytea) data is sent to the client using hex encoding.
If this is the reason for the error, you have to find an update for your JPA provider or JDBC driver.
Alternatively, you may change the DB server config, but the former is the better option.
Supplementary answer for GlassFish 3.x users (principles may apply to other app servers)
You may be inadvertently using an old PostgreSQL JDBC driver. You can test this by injecting a DataSource somewhere (e.g. into an EJB) and executing the following on it:
System.out.println(ds.getConnection().getMetaData().getDriverVersion());
In my case it was 8.3, which was unexpected since I was deploying with the 9.1 drivers.
To find out where this was coming from:
System.out.println(Class.forName("org.postgresql.Driver").getProtectionDomain().getCodeSource().getLocation());
As it turned out, the old driver was in the lib directory of my GlassFish domain. I'm not sure how it got there (GlassFish certainly doesn't ship that way), so I just removed it and the problem went away.
Try looking at the data you're getting. It may give you a clue as to what's happening.
Check whether you have an old postgresql jar. I faced the same problem and found both an 8.3 postgresql jar and a 9.1 postgresql jar in my lib. After removing the 8.3 jar, byte[] worked fine.