I have a Java class persisting a string (>4k) into a CLOB field of a database table.
If the string is less than 4k, it works.
I have the field annotated with @Lob. Initially I was getting an exception because batching is not supported for streams, so I set the batch size to 0 in the Hibernate config, which now gives this exception:
Caused by: java.sql.SQLException: ORA-01460: unimplemented or unreasonable conversion requested
at oracle.jdbc.dbaccess.DBError.throwSqlException(DBError.java:134)
at oracle.jdbc.ttc7.TTIoer.processError(TTIoer.java:289)
at oracle.jdbc.ttc7.Oall7.receive(Oall7.java:582)
at oracle.jdbc.ttc7.TTC7Protocol.doOall7(TTC7Protocol.java:1986)
at oracle.jdbc.ttc7.TTC7Protocol.parseExecuteFetch(TTC7Protocol.java:1144)
at oracle.jdbc.driver.OracleStatement.executeNonQuery(OracleStatement.java:2152)
at oracle.jdbc.driver.OracleStatement.doExecuteOther(OracleStatement.java:2035)
at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:2876)
at oracle.jdbc.driver.OraclePreparedStatement.executeUpdate(OraclePreparedStatement.java:609)
at org.hibernate.jdbc.NonBatchingBatcher.addToBatch(NonBatchingBatcher.java:23)
at org.hibernate.persister.entity.AbstractEntityPersister.insert(AbstractEntityPersister.java:2062)
... 36 more
I only get this issue when running the code from Grails; when I use the same code from a pure Java application, I don't get it. Both applications have the same Hibernate config (except that in Grails I need to set the batch size to 0). Could the issue be the difference in Hibernate versions, which as far as I can see is 3.2.6ga in Grails versus 3.2.5ga in the Java application? The Oracle driver is the same in both cases.
Any answers welcome.
Try also annotating the field with @Column(length = Integer.MAX_VALUE). This Hibernate bug report mentions that it helped on Derby.
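As a sketch, the combined mapping could look like the following (the entity and field names are hypothetical; the relevant parts are @Lob plus the enlarged @Column length):

```java
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Lob;

@Entity
public class Document {

    @Id
    private Long id;

    // @Lob maps the String to a CLOB; the explicit length hints to
    // Hibernate and the driver that values larger than 4k are expected.
    @Lob
    @Column(length = Integer.MAX_VALUE)
    private String content;
}
```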
I am using Cassandra 2.0.10 and the Hector API.
I have tried:
public static void createCounterColumnFamily(Keyspace keyspace, String ccfName) {
    Mutator<String> mutator = HFactory.createMutator(keyspace, StringSerializer.get());
    mutator.addCounter("salary", ccfName, HFactory.createCounterColumn("salary", 10L));
    mutator.execute();
}
But, I'm getting this exception :
Exception in thread "main" me.prettyprint.hector.api.exceptions.HInvalidRequestException: InvalidRequestException(why:unconfigured columnfamily counter_column_family_1)
at me.prettyprint.cassandra.service.ExceptionsTranslatorImpl.translate(ExceptionsTranslatorImpl.java:45)
at me.prettyprint.cassandra.connection.HConnectionManager.operateWithFailover(HConnectionManager.java:265)
at me.prettyprint.cassandra.model.ExecutingKeyspace.doExecuteOperation(ExecutingKeyspace.java:113)
at me.prettyprint.cassandra.model.MutatorImpl.execute(MutatorImpl.java:243)
at com.cassandra.practice.CounterColumnFamily.createCounterColumnFamily(CounterColumnFamily.java:18)
at com.cassandra.practice.Bootstrapper.main(Bootstrapper.java:33)
Caused by: InvalidRequestException(why:unconfigured columnfamily counter_column_family_1)
at org.apache.cassandra.thrift.Cassandra$batch_mutate_result.read(Cassandra.java:20833)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
at org.apache.cassandra.thrift.Cassandra$Client.recv_batch_mutate(Cassandra.java:964)
at org.apache.cassandra.thrift.Cassandra$Client.batch_mutate(Cassandra.java:950)
at me.prettyprint.cassandra.model.MutatorImpl$3.execute(MutatorImpl.java:246)
at me.prettyprint.cassandra.model.MutatorImpl$3.execute(MutatorImpl.java:243)
at me.prettyprint.cassandra.service.Operation.executeAndSetResult(Operation.java:104)
at me.prettyprint.cassandra.connection.HConnectionManager.operateWithFailover(HConnectionManager.java:258)
... 4 more
Am I missing something?
You need to explicitly create both the column family and the keyspace before you can insert data.
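With Hector this can be done programmatically before writing. The sketch below is an assumption-laden example (cluster name, host/port, keyspace name, and replication settings are placeholders); the important detail for counters is the CounterColumnType default validation class:

```java
import java.util.Arrays;

import me.prettyprint.cassandra.service.ThriftKsDef;
import me.prettyprint.hector.api.Cluster;
import me.prettyprint.hector.api.ddl.ColumnFamilyDefinition;
import me.prettyprint.hector.api.ddl.KeyspaceDefinition;
import me.prettyprint.hector.api.factory.HFactory;

public class SchemaSetup {
    public static void main(String[] args) {
        // Placeholder cluster name and address
        Cluster cluster = HFactory.getOrCreateCluster("TestCluster", "localhost:9160");

        // Definition for the counter column family: counters require
        // CounterColumnType as the default validation class
        ColumnFamilyDefinition cfDef = HFactory.createColumnFamilyDefinition(
                "myKeyspace", "counter_column_family_1");
        cfDef.setDefaultValidationClass("CounterColumnType");

        // Simple strategy with replication factor 1 (placeholder settings)
        KeyspaceDefinition ksDef = HFactory.createKeyspaceDefinition(
                "myKeyspace", ThriftKsDef.DEF_STRATEGY_CLASS, 1, Arrays.asList(cfDef));

        // true = block until the schema change has propagated
        cluster.addKeyspace(ksDef, true);
    }
}
```

It requires a running Cassandra node, so it is a sketch rather than something verified against your setup.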
That said, the Hector API was deprecated more than five years ago, and the Cassandra version you are using is just as far behind. More significantly, the Thrift API has been deprecated for years and has already been removed from trunk for the upcoming 4.0 release.
Please switch to Cassandra 3.11.4, available here:
http://cassandra.apache.org/download/
And to CQL and its Java driver, available here:
https://github.com/datastax/java-driver/tree/3.x/manual
Out of curiosity, how did you come across the versions you are currently using?
When accessing MySQL through JDBC, the following exception was thrown by the JDBC connector (5.1.39).
Value '\u000248$2ef8cd3c-e4d7-4ad5-8d60-504f6e7db07a\u00132016-11-21
17:26:37\u00132016-11-21
17:26:37\u0010ABCDEFGH\n2016-08-01\n2016-08-16\u0007SOMETHING\u00012\u00041481\u00011\u00042016\b50016387\u000b01026940427\u0012company
XYZ???\u00012$17b9f783-a7c2-4d49-bbc1-8ad73479a0b6\u00132016-11-13
13:31:26\u00132016-11-21
17:44:00\u00011\u00041481\u001bXXXXXXXXXXX\u000bcompanya\u000b00662850544\u000eabcd#email.com\bregular\u000248$57eff2d9-35e0-415a-81e4-04797192133f\u00132016-11-13
13:35:35\u00132016-11-22
14:40:03\u00072361.93\u000248\u0003EUR\u00011\bSTATUS?\n2016-12-31\n2017-03-09?\u00011\u000283\u00185828d21111000070071715f2\u000248\u000b0.001937241\u000b0.037620570\u000b0.120000000\u000b0.052392000\u000b1.000000000\u00010\u00010\u00010\u00010\u00010\u000b0.037620570\u000b0.001414463\u000b0.004799110\u00011\u00012\u00011\u000248\u000e348.743925612\f19.186074388\u000b0.012392000\u000b0.001574005\u000b0.004008749\u00010\u0000\u00130000-00-00
00:00:00\u00130000-00-00
00:00:00\u00010\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000'
can not be represented as java.sql.Timestamp
It looks like the JDBC driver cannot correctly determine the end of the strings in the result row. Our tables are in latin1.
Is there anything that should be done at the connection level to prevent these issues?
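For what it's worth, the control characters in the printed value look like one-byte length prefixes in front of each field ("\u000248" would be length 2 followed by "48"), which matches the suspicion that the driver lost track of where each string ends. A minimal, hypothetical sketch of that framing:

```java
import java.util.ArrayList;
import java.util.List;

public class LengthPrefixedDecoder {
    // Hypothetical framing: each field is preceded by a one-char length,
    // which would explain control characters like \u0002 and \u0013 above.
    public static List<String> decode(String raw) {
        List<String> fields = new ArrayList<>();
        int i = 0;
        while (i < raw.length()) {
            int len = raw.charAt(i++);           // one-char length prefix
            fields.add(raw.substring(i, i + len));
            i += len;
        }
        return fields;
    }

    public static void main(String[] args) {
        // "\u000248" = length 2 + "48", "\u0003EUR" = length 3 + "EUR"
        System.out.println(decode("\u000248\u0003EUR")); // prints [48, EUR]
    }
}
```

If one prefix is read with the wrong length, every field after it shifts, which is consistent with the garbage in the exception message.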
The issue was solved by upgrading MySQL from 5.7.11 to 5.7.26. Lesson learned: do not underestimate the importance of minor releases.
When using OpenJPA to execute a select statement against an in-memory Apache Derby database, I encounter this error:
javax.ejb.EJBException: The bean encountered a non-application exception; nested exception is:
<openjpa-2.1.2-SNAPSHOT-r422266:1636464 fatal general error> org.apache.openjpa.persistence.PersistenceException: Syntax error: Encountered "optimize" at line 1, column 80. {SELECT t0.VERSION, t0.SOMEOTHER_COLUMN FROM MYTABLE t0 WHERE t0.MYTABLE_CODE = ? optimize for 1 row} [code=20000, state=42X01] FailedObject: UDA [org.apache.openjpa.util.StringId] [java.lang.String]
The OpenJPA client is embedded in an IBM WebSphere thin client: com.ibm.ws.jpa.thinclient-8.5.5.5.jar
Apparently OpenJPA adds the 'optimize for 1 row' part because it thinks it is dealing with DB2. How is this possible, and is there a way to turn this feature off explicitly?
I did find some explanation of the 'optimize for 1 row' suffix:
https://www.ibm.com/developerworks/community/blogs/22586cb0-8817-4d2c-ae74-0ddcc2a409bc/entry/optimize_for_1_row1?lang=en
As for why OpenJPA thinks it is dealing with DB2: with the information provided, I'm not sure.
Fortunately, you can override this with the following property in your persistence.xml:
<property name="openjpa.jdbc.DBDictionary" value="derby"/>
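In context, the property goes in the properties section of persistence.xml; a sketch, where the persistence-unit name is a placeholder:

```xml
<persistence-unit name="myUnit">
  <properties>
    <!-- Force the Derby dictionary instead of relying on auto-detection -->
    <property name="openjpa.jdbc.DBDictionary" value="derby"/>
  </properties>
</persistence-unit>
```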
Solved it. The application has Derby configured, but it uses a data access service which in turn had DB2 specified as the DB dictionary.
I am maintaining existing code and therefore could not find that setting right away. Thank you both for pointing me in the right direction.
When I start Tomcat on Windows, I receive the following exception:
java.sql.SQLException: Unknown type '246' in column 10 of 12 in binary-encoded result set.
at com.mysql.jdbc.MysqlIO.extractNativeEncodedColumn(MysqlIO.java:3710)
at com.mysql.jdbc.MysqlIO.unpackBinaryResultSetRow(MysqlIO.java:3620)
at com.mysql.jdbc.MysqlIO.nextRow(MysqlIO.java:1282)
at com.mysql.jdbc.MysqlIO.readSingleRowSet(MysqlIO.java:2198)
at com.mysql.jdbc.MysqlIO.getResultSet(MysqlIO.java:413)
at com.mysql.jdbc.MysqlIO.readResultsForQueryOrUpdate(MysqlIO.java:1899)
at com.mysql.jdbc.MysqlIO.readAllResults(MysqlIO.java:1347)
at com.mysql.jdbc.ServerPreparedStatement.serverExecute(ServerPreparedStatement.java:1393)
at com.mysql.jdbc.ServerPreparedStatement.executeInternal(ServerPreparedStatement.java:958)
at com.mysql.jdbc.PreparedStatement.executeQuery(PreparedStatement.java:1705)
I am using the mysql-connector-java-5.1.21 JDBC driver.
I have deployed my app in Tomcat as a war. I use both normal and prepared statements.
This may be a known bug: https://bugs.mysql.com/bug.php?id=14609
According to that bug tracker, an attempt to fix it was made for versions 5.0.1 and 3.1.13 of the JDBC driver (not on server side), and it might not be a full fix.
Also see MySQLi - Server returned unknown type 246
This happens when your application expects one data type but receives another from the database, e.g. you expect a double but the database gives you an int.
Check the type your code expects against the actual column type.
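One way to check is to ask the driver what it actually sees for each column; the sketch below uses placeholder connection details and a placeholder table name. (Protocol type 246 corresponds to MySQL's NEWDECIMAL, so a likely culprit is a DECIMAL column being read as some other type.)

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.Statement;

public class ColumnTypeCheck {
    public static void main(String[] args) throws Exception {
        // URL, credentials, and table name are placeholders
        try (Connection con = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/mydb", "user", "pass");
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery("SELECT * FROM my_table LIMIT 1")) {
            ResultSetMetaData md = rs.getMetaData();
            for (int i = 1; i <= md.getColumnCount(); i++) {
                // Compare the reported type with what the code expects;
                // a column read as int/double may actually be DECIMAL.
                System.out.printf("%d: %s -> %s%n", i,
                        md.getColumnName(i), md.getColumnTypeName(i));
            }
        }
    }
}
```

It needs a live MySQL instance, so treat it as a diagnostic sketch rather than a fix.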
I am trying to create a Hibernate mapping for an Oracle database. The database is quite old, from before Oracle 8, but is now on 10. Hibernate reverse engineering balks at a LONG RAW column; this data type is deprecated and should be converted to BLOB.
But this is not my database. If the customer refuses to convert, what would a Hibernate mapping look like?
Try mapping it to byte[].
If you then get java.sql.SQLException: Stream has already been closed, try setting useFetchSizeWithLongColumn=true in the connection properties for the Oracle JDBC driver. See the OracleDriver API documentation.
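A sketch of such a mapping, with hypothetical entity and column names (a plain byte[] property is typically bound as raw bytes, without forcing a BLOB conversion on the database side):

```java
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;

@Entity
public class LegacyRecord {

    @Id
    private Long id;

    // Hypothetical mapping for the LONG RAW column: Hibernate reads and
    // writes the value as a byte array.
    @Column(name = "LEGACY_DATA")
    private byte[] data;
}
```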