I'm using Spring Boot with Hibernate, JPA and PostgreSQL, and I want to convert database large objects into plain text content. Previously I defined long text in my JPA entity with @Lob:
@Lob
String description;
I then discovered that @Lob columns often cause problems, and decided to change them to:
@Type(type="org.hibernate.type.StringClobType")
String description;
This is represented in the database as a text type. Unfortunately, the reference numbers (oids) of the previous large objects are now stored in my rows instead of the actual content. For example:
id | description
---------------------
1 | 463784 <- This is a reference to the object rather than the content
instead of:
id | description
---------------------
1 | Once upon a time, in a galaxy...
My question: now that we have thousands of rows of data in the database, how do I write a function or perform a query to replace each large object id with the actual text content stored in that large object?
Special thanks to @BohuslavBurghardt for pointing me to this answer. For your convenience:
UPDATE table_name SET column_name = lo_get(cast(column_name as bigint))
I needed some additional conversion:
UPDATE table_name SET text_no_lob = convert_from(lo_get(text::oid), 'UTF8');
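If you prefer to drive the migration from Java, here is a rough JDBC sketch (my assumptions: the placeholder table/column names used above, and PostgreSQL 9.4+ for lo_get). It also unlinks the old large objects afterwards so their storage is reclaimed, which the bare UPDATE leaves behind; alternatively, run the vacuumlo utility:
import java.sql.*;
import java.util.ArrayList;
import java.util.List;

public class LobToTextMigration {
    public static void migrate(Connection conn) throws SQLException {
        conn.setAutoCommit(false);
        // Remember the old oids before the UPDATE overwrites them
        List<Long> oids = new ArrayList<>();
        try (Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SELECT description::oid FROM table_name")) {
            while (rs.next()) {
                oids.add(rs.getLong(1));
            }
        }
        // Replace each oid reference with the decoded large-object content
        try (Statement st = conn.createStatement()) {
            st.executeUpdate("UPDATE table_name SET description = " +
                    "convert_from(lo_get(description::oid), 'UTF8')");
        }
        // Free the now-orphaned large objects
        try (PreparedStatement ps = conn.prepareStatement("SELECT lo_unlink(?)")) {
            for (Long oid : oids) {
                ps.setLong(1, oid);
                ps.executeQuery();
            }
        }
        conn.commit();
    }
}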
I had the same problem with Spring, Postgres and JPA (Hibernate). I had a payload field like the one below:
@NotBlank
@Column(name = "PAYLOAD")
private String payload;
I wanted to change the data type to text to support large data, so I used @Lob and got the same error. To resolve it, I first changed the field in my entity like this:
@NotBlank
@Column(name = "PAYLOAD")
@Lob
@Type(type = "org.hibernate.type.TextType")
private String payload;
And because the data in this column was a scalar (the large-object number), I converted it to normal text with the following command in Postgres:
UPDATE MYTABLE SET PAYLOAD = lo_get(cast(PAYLOAD as bigint))
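If some rows might already contain plain text, the unconditional cast will fail on them. Here is a hedged variant (the numeric regex guard is my assumption) that only rewrites rows still holding an oid-like value:
// 'conn' is an open JDBC connection to the same database (assumption)
try (Statement st = conn.createStatement()) {
    st.executeUpdate(
        "UPDATE mytable SET payload = convert_from(lo_get(payload::oid), 'UTF8') " +
        "WHERE payload ~ '^[0-9]+$'");   // only rows that still look like an oid
}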
Thanks a lot @Sipder.
So I had an AVRO file and no prior experience with that file type, so I read its contents and saved them to a text file. Now I am trying to parse each line and add it to a MySQL table. I know how to connect to a MySQL database using Java and will basically execute a query that inserts the data from each line.
The part I am having trouble with is parsing the data. This is what each line looks like (and every value is a 'String'):
{"content": "HTML", "GLOBALEVENTID": "331284989", "SQLDATE": "20140111", "MonthYear": "201401", "Year": "2014"}
There are more columns than this, but I shortened it. Also, the "content" field is actually the HTML of a web page, so it can contain a lot of arbitrary characters, which I think could be an issue when parsing. My question: how do I parse out the values of each column into an array (content, GLOBALEVENTID, etc.) so I can then add them to a MySQL table that already has these columns defined? Anything that can point me in the right direction is appreciated!
There are two approaches to solve this problem, depending on what you are trying to achieve:
Case 1) If this is just a one-time load
Answer: For a one-time load, reading the AVRO file, parsing it into a text file and then seeding the data into MySQL through RDBMS APIs is too much work.
Instead, I would suggest using the MySQL Import Utility.
If you go to the Schema Browser and right-click on the table name, you will find an "Import..." option.
The options are self-explanatory. One-time loads are usually done with a CSV or XLS file. You can modify your existing program to convert the AVRO file into a CSV file and use that file to import the data into the MySQL table, as sketched below.
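For instance, here is a rough sketch of that conversion step (assuming Jackson and Apache Commons CSV on the classpath; the file names are placeholders). The CSV library handles quoting, which matters here because the content column holds raw HTML:
import com.fasterxml.jackson.core.type.TypeReference;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.commons.csv.CSVFormat;
import org.apache.commons.csv.CSVPrinter;

import java.io.BufferedReader;
import java.io.FileWriter;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Map;

public class JsonLinesToCsv {
    public static void main(String[] args) throws IOException {
        ObjectMapper mapper = new ObjectMapper();
        try (BufferedReader in = Files.newBufferedReader(Paths.get("events.txt"));
             CSVPrinter out = new CSVPrinter(new FileWriter("events.csv"),
                     CSVFormat.DEFAULT.withHeader("content", "GLOBALEVENTID",
                             "SQLDATE", "MonthYear", "Year"))) {
            String line;
            while ((line = in.readLine()) != null) {
                // Each line is one JSON object; Jackson parses it, Commons CSV quotes it
                Map<String, String> row = mapper.readValue(line,
                        new TypeReference<Map<String, String>>() {});
                out.printRecord(row.get("content"), row.get("GLOBALEVENTID"),
                        row.get("SQLDATE"), row.get("MonthYear"), row.get("Year"));
            }
        }
    }
}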
Case 2) If the AVRO file is to be read by a program, and this will be done multiple times in the future.
In this case, you can use one of the many JSON libraries (e.g. Jackson or GSON) to parse each line into a Java POJO. Make sure the object representation is an ORM (e.g. JPA/Hibernate) entity.
For example:
JSON: {"content": "HTML", "GLOBALEVENTID": "331284989", "SQLDATE": "20140111", "MonthYear": "201401", "Year": "2014"}
Class File:
@Entity
@Table(name = "CONTENT")
class Content {
    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "SOME_SEQUENCE")
    private Long id;
    @Column(name = "DATA")
    private String data;
    @Column(name = "GLOBALEVENTID")
    private String globalEventId;
    @Column(name = "DATE")
    @Temporal(TemporalType.TIMESTAMP)
    private Date date; // @Temporal requires java.util.Date/Calendar, so String won't work here
    ....
    ....
}
Once the data has been parsed into the ORM entity, saving it to the database is straightforward. Depending on your setup, you may use entityManager.persist/merge, or a Spring Data repository's save/saveAll.
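For case 2, a minimal parsing sketch (assuming Jackson, a Spring Data repository named contentRepository, and events.txt as the one-JSON-object-per-line file; the entity's fields would need @JsonProperty annotations to bridge the JSON keys to the Java names):
ObjectMapper mapper = new ObjectMapper()
        .configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false);
try (BufferedReader in = Files.newBufferedReader(Paths.get("events.txt"))) {
    String line;
    while ((line = in.readLine()) != null) {
        // e.g. @JsonProperty("content") on the data field, and so on
        Content c = mapper.readValue(line, Content.class);
        contentRepository.save(c); // Spring Data JPA repository assumed
    }
}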
How can I generate a raw PostgreSQL INSERT INTO script from a data collection based on the following Java entity class? The issue is how to write the content of the byte array as an INSERT INTO value ('content of the byte[]') in the SQL script.
That means the data cannot be fed in via the Java application; the requirement calls for a raw SQL script to populate the existing database in the production environment. Thanks.
Entity class
@Entity
@Table(name="image_table")
public class ImageData implements Serializable {
    @Id
    @GeneratedValue
    @Column(name = "id")
    private Integer id;
    @Column(name = "content")
    private byte[] content;
}
Format of the raw SQL script that needs to be generated:
INSERT INTO image_table (id, content) VALUES ('1', '<content of the byte[]>');
INSERT INTO image_table (id, content) VALUES ('2', '<content of the byte[]>');
To answer your question literally: You can write a SQL INSERT script for an integer and a blob value, but that would be rather horrible with hex escaped strings for bytes and you could easily run into problems with long statements for larger blobs; PostgreSQL itself has no practical limit, but most regular text editors do.
As of PostgreSQL 9.0 you can use the hex format (documentation here). Basically, a hex value in text representation looks like E'\x3FB5419C' etc. So you output from your method E'\x, then the byte[] as a hex string, then the closing '. Writing the hex string from the byte[] content can be done with org.apache.commons.codec.binary.Hex.encodeHexString(content) or use this SO answer for a plain Java solution. Depending on your production environment you may have to escape the backslash \\ and fiddle with the newline character. I suggest you give this a try with a small blob to see what works for you.
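As a hedged sketch of the script-generation route (my own illustration, not tested in your environment): instead of the E'\x...' literal you can emit decode(..., 'hex'), which is equivalent for bytea and sidesteps the backslash-escaping issues entirely:
// Emits one INSERT line per image row; decode(..., 'hex') turns the hex
// string back into bytea on the PostgreSQL side.
static String toInsertStatement(int id, byte[] content) {
    StringBuilder hex = new StringBuilder(content.length * 2);
    for (byte b : content) {
        hex.append(String.format("%02x", b));
    }
    return "INSERT INTO image_table (id, content) VALUES ("
            + id + ", decode('" + hex + "', 'hex'));";
}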
A better approach is direct insertion. Assuming that you are using the PostgreSQL JDBC driver, the documentation has a worked out example. Given that you have a byte[] class member, you should use the setBytes() method instead of setBinaryStream() (which expects an InputStream instance):
PreparedStatement ps = conn.prepareStatement("INSERT INTO image_table (id, content) VALUES (?, ?)");
ps.setInt(1, 1);          // setInt, not setInteger, is the JDBC method
ps.setBytes(2, content);
ps.executeUpdate();
ps.setInt(1, 2);          // parameter 2 keeps its previous value until changed or cleared
ps.executeUpdate();
ps.close();
You have to place the @Lob annotation on your content field. The final result would be something like this:
import javax.persistence.Lob;
.
.
.
@Lob
@Column(name = "content")
private byte[] content;
I'm getting the following exception while updating a table in Hibernate:
ORA-24816: Expanded non LONG bind data supplied after actual LONG or LOB column
I have extracted the SQL query as well; it looks like:
Update table_name set columnName (LOB)=value, column2 (String with 4000)=value where id=?;
Entity class
class Test{
@Lob
private String errorText;
@Column(length = 4000)
private String text;
}
Please help me figure out what is wrong here.
Thanks
Ravi Kumar
Running oerr ora 24816 to get the details on the error yields:
$ oerr ora 24816
24816, ... "Expanded non LONG bind data supplied after actual LONG or LOB column"
// *Cause: A Bind value of length potentially > 4000 bytes follows binding for
// LOB or LONG.
// *Action: Re-order the binds so that the LONG bind or LOB binds are all
// at the end of the bind list.
So another solution that uses only 1 query would be to move your LOB/LONG binds after all your non-LOB/LONG binds. This may or may not be possible with Hibernate. Perhaps something more like:
update T set column2 (String with 4000)=:1, columnName (LOB)=:3 where id=:2;
This DML limitation appears to have been around since at least Oracle 8i.
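Translated to plain JDBC, the reordering looks roughly like this (column and variable names are placeholders; the point is that the potentially >4000-byte string bind precedes the CLOB bind, and the small id bind after it is not an "expanded" bind, so it is harmless):
PreparedStatement ps = conn.prepareStatement(
        "update test_table set text = ?, error_text = ? where id = ?");
ps.setString(1, shortText);                              // ordinary bind first
ps.setClob(2, new java.io.StringReader(longErrorText));  // LOB bind after it
ps.setLong(3, id);
ps.executeUpdate();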
References:
http://openacs.org/forums/message-view?message_id=595742
https://community.oracle.com/thread/417560
I do realise that this thread is quite old, but I thought I'd share my own experience with the same error message here for future reference.
I have had the exact same symptoms (i.e. ORA-24816) for a couple of days. I was a bit side-tracked by various threads I came across suggesting that this was related to the order of parameter binding. In my case that was not the problem. I also struggled to reproduce the error: it only occurred after deploying to an application server; I could not reproduce it through integration tests.
However, I took a look at the code where I was binding the parameter and found:
preparedStatement.setString(index, someStringValue);
I replaced this with:
preparedStatement.setClob(index, new StringReader(someStringValue));
This did the trick for me.
This thread from back in 2009 was quite useful.
I found the issue.
When Hibernate updates data in the DB and the entity has both a 4000-character column and a LOB-type column, Hibernate throws this exception.
I solved it by writing two update queries (sketched below):
1. First I saved the entity using update().
2. Then I wrote another update query for the LOB column.
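A rough sketch of that two-step workaround with a Hibernate Session (the id accessor, session handling and entity name are assumptions, since the original snippet only shows the two columns):
String lobValue = entity.getErrorText();
entity.setErrorText(null);          // keep the LOB out of the first UPDATE
session.update(entity);             // 1. update the non-LOB columns
session.createQuery("update Test set errorText = :err where id = :id")
       .setParameter("err", lobValue)
       .setParameter("id", entity.getId())
       .executeUpdate();            // 2. update the LOB column on its own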
Thanks
ravi
I also encountered the same error on Oracle DB and found that the Hibernate guys fixed it here.
In my case we were already using Hibernate 4.3.7 but didn't mark the field as @Lob in the entity.
Reproducing Steps
Have fields with VARCHAR2 and CLOB data types. Make sure the column names are in this alphabetical order: clob_field, varchar_two_field1, varchar_two_field2.
Now update clob_field with fewer than 2000 bytes and varchar_two_field1 with 4000 bytes.
This should end with the error ORA-24816: Expanded non LONG bind data supplied after actual LONG or LOB column.
Solution
Make sure you have Hibernate 4.1.8, < 4.3.0.Beta1.
Annotate the CLOB/BLOB field in the respective entity:
import javax.persistence.Lob;
...
@Lob
@Column(name = "description")
private String description;
....
If you want to see the difference after making the above changes, enable SQL statement logging by setting "hibernate.show_sql" to "true" in persistence.xml.
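As a side note, if the persistence unit is bootstrapped programmatically rather than from persistence.xml, the same switch can be passed as a property (the unit name is a placeholder):
// javax.persistence.Persistence, programmatic bootstrap (names are assumptions)
Map<String, String> props = new HashMap<>();
props.put("hibernate.show_sql", "true");   // log every generated SQL statement
EntityManagerFactory emf = Persistence.createEntityManagerFactory("my-unit", props);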
I came across this issue today while trying to insert data into a table. To avoid the error, just keep all fields with a LOB data type at the end of the insert statement.
For example, Table1 has 8 fields (Field1, Field2, ... Field8), of which Field1 and Field2 are CLOB data types and the rest are VARCHAR2. While inserting the data, make sure you keep the Field1 and Field2 values at the end, like below:
INSERT INTO TABLE1 ( Field3,Field4,Field5,Field6,Field7,Field8,Field1,Field2)
Values ('a','b','c','d','e','f','htgybyvvbshbhabjh','cbsdbvsb')
Place your LOB bindings last and see if that solves the issue.
I have a simple entity bean with a @Lob annotation. If I delete this annotation, I get no errors on JBoss AS 6.0.0.Final and MySQL 5. But if I annotate the field with @Lob (because mt contains about 100 to 5000 characters in my case), I get errors in my testing environment when I persist the entity.
without @Lob: mt is mapped to VARCHAR
with @Lob: mt is mapped to LONGTEXT (this is what I want, but I get errors)
This is my entity:
@Entity
@Table(name = "Description")
public class Description implements Serializable
{
    public static final long serialVersionUID = 1;
    @Id @GeneratedValue(strategy = GenerationType.IDENTITY)
    private long id;
    @Lob
    private String mt;
} // ... getter/setter
The errors are here:
...
Caused by: org.hibernate.exception.GenericJDBCException: could not insert
[my.Description]
...
Caused by: java.sql.SQLException: Connection is not associated with a managed
connection.org.jboss.resource.adapter.jdbc.jdk6.WrappedConnectionJDK6#3e4dd
...
I really don't know why I get this (reproducible) error. The environment seems to be fine, many other tests pass, and it even works without the @Lob annotation.
This question is related to JPA: how do I persist a String into a database field, type MySQL Text, where using @Lob for JPA/MySQL is the accepted answer.
Update 1: The error above is OS-specific. On a Windows 7 machine I have no problems with @Lob; on OS X Lion I always get the error. I will try to update MySQL and the driver.
Update 2: The proposed workaround by Kimi, @Column(columnDefinition = "longtext"), works fine, even on OS X. In both cases MySQL creates the same column: LONGTEXT.
Update 3: I updated MySQL to mysql-5.5.17-osx10.6-x86_64 and the connector to mysql-connector-java-5.1.18. Still the same error.
There is no need to annotate a String property with @Lob. Just set the column length with @Column(length = 10000).
EDIT:
Moreover, you can always set the length to your database-specific maximum value, or define the column type explicitly with the columnDefinition setting, whichever suits your needs.
For example, if you're using MySQL 5.0.3 or later, the maximum amount of data that can be stored in each data type is as follows:
VARCHAR: 65,535 bytes (~64Kb, 21,844 UTF-8 encoded characters)
TEXT: 65,535 bytes (~64Kb, 21,844 UTF-8 encoded characters)
MEDIUMTEXT: 16,777,215 bytes (~16Mb, ~5.5 million UTF-8 encoded characters)
LONGTEXT: 4,294,967,295 bytes (~4GB, ~1.4 billion UTF-8 encoded characters).
As I understand it, @Lob just sets the column type and length depending on the underlying database.
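Putting the two variants from this thread side by side (a sketch; pick one of the two, since the columnDefinition form is the one that Update 2 of the question reports working on OS X as well):
// Option 1: let Hibernate choose a type large enough for the length
@Column(length = 10000)
private String mt;

// Option 2: pin the MySQL column type explicitly
@Column(columnDefinition = "LONGTEXT")
private String mt;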
Another way that worked for me is to update the JBoss launch configuration with:
-Dhibernate.jdbc.use_streams_for_binary=true
I'm running jBoss6 with MySQL5
I've got an entity Case that has an id CaseId (unfortunately a string, due to compatibility with a legacy system). This id is a foreign key in the table Document, and each Case can have many documents (one-to-many). I've put the following in my Case entity:
@Id
@Column(name = "CaseId", length = 20, nullable = false)
private String caseId;

@OneToMany(fetch = FetchType.EAGER)
@JoinColumns({
    @JoinColumn(name = "caseId", referencedColumnName = "CaseId")
})
private Set<Document> documents;
The table for Document contains "CaseId varchar(20) not null". Right now, in the database, all cases have six documents. Yet when I do myCase.documents().size, I only ever get a single document. What should I do to get all the documents?
Cheers
Nik
The mapping looks correct. But it would be interesting to see:
the Document entity (and its equals/hashCode; see the note below)
the SQL performed (see this previous answer to activate SQL logging)
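Regarding the first point: a classic reason for a Set collapsing to a single element is an equals/hashCode in Document based only on caseId, since all six documents here share the same caseId and would then compare equal. A sketch of an identifier-based implementation (field names are assumptions, as the Document entity isn't shown):
@Override
public boolean equals(Object o) {
    if (this == o) return true;
    if (!(o instanceof Document)) return false;
    Document other = (Document) o;
    // compare on Document's own primary key, not on caseId
    return id != null && id.equals(other.id);
}

@Override
public int hashCode() {
    return getClass().hashCode(); // stable even before the id is generated
}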