I am getting the exception below when trying to insert a batch of rows into an existing table:
ORA-00942: table or view does not exist
I can confirm that the table exists in the DB and that I can insert data into it using Oracle SQL Developer. But when I try to insert rows using a PreparedStatement in Java, it throws a "table or view does not exist" error.
The stack trace of the error is below:
java.sql.SQLException: ORA-00942: table or view does not exist
at oracle.jdbc.dbaccess.DBError.throwSqlException(DBError.java:134)
at oracle.jdbc.ttc7.TTIoer.processError(TTIoer.java:289)
at oracle.jdbc.ttc7.Oall7.receive(Oall7.java:573)
at oracle.jdbc.ttc7.TTC7Protocol.doOall7(TTC7Protocol.java:1889)
at oracle.jdbc.ttc7.TTC7Protocol.parseExecuteFetch(TTC7Protocol.java:1093)
at oracle.jdbc.driver.OracleStatement.executeNonQuery(OracleStatement.java:2047)
at oracle.jdbc.driver.OracleStatement.doExecuteOther(OracleStatement.java:1940)
at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:2709)
at oracle.jdbc.driver.OraclePreparedStatement.executeUpdate(OraclePreparedStatement.java:589)
at quotecopy.DbConnection.insertIntoDestinationDb(DbConnection.java:591)
at quotecopy.QuoteCopier.main(QuoteCopier.java:72)
Can anyone suggest reasons for this error?
Update: issue solved
There was no problem with my database connection properties or with my table or view name. The solution to the problem was very strange. One of the columns that I was trying to insert was of CLOB type. As I had had a lot of trouble handling CLOB data in an Oracle DB before, I tried replacing the CLOB setter with a temporary String setter, and the same code executed without any problems and all the rows were correctly inserted. That is,
preparedStatement.setClob(columnIndex, clob)
was replaced with
preparedStatement.setString(columnIndex, "String")
Why was a "table or view does not exist" error thrown for a problem inserting CLOB data? Could anyone explain?
Thanks a lot for your answers and comments.
Oracle will also report this error if the table exists, but you don't have any privileges on it. So if you are sure that the table is there, check the grants.
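If you want to check this from the same JDBC connection, here is a minimal sketch (assuming the usual java.sql imports, an open Connection named connection, and a placeholder table name; ALL_TAB_PRIVS is Oracle's data dictionary view):
PreparedStatement ps = connection.prepareStatement(
        "SELECT privilege FROM all_tab_privs WHERE table_name = ?");
ps.setString(1, "MYTABLE"); // the dictionary stores unquoted names in upper case
ResultSet rs = ps.executeQuery();
while (rs.next()) {
    System.out.println(rs.getString(1)); // INSERT should be among the rows printed
}
rs.close();
ps.close();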
There seems to be some issue with setCLOB() that causes an ORA-00942 under some circumstances when the target table does exist and is correctly privileged. I'm having this exact issue now; I can make the ORA-00942 go away simply by not binding the CLOB into the same table.
I've tried setClob() with a java.sql.Clob and setCLOB() with an oracle.jdbc.CLOB, but with the same result.
As you say, if you bind as a string the problem goes away, but this then limits your data size to 4K.
From testing, it seems to be triggered when a transaction is open on the session prior to binding the CLOB. I'll feed back when I've solved this... checking Oracle support.
@unbeli is right: not having appropriate grants on a table will result in this error. For what it's worth, I recently experienced this. I had the exact problem that you described: I could execute insert statements through SQL Developer, but they would fail when using Hibernate. I finally realized that my code was doing more than the obvious insert; it was also inserting into other tables that did not have appropriate grants. Adjusting the grant privileges solved this for me.
Note: I don't have the reputation to comment, otherwise this would have been a comment.
We experienced this issue on a BLOB column. Just in case anyone else lands on this question when encountering this error, here is how we resolved the issue:
We started out with this:
preparedStatement.setBlob(parameterIndex, resultSet.getBlob(columnName)); break;
We resolved the issue by changing that line to this:
java.sql.Blob blob = resultSet.getBlob(columnName);
if (blob != null) {
    // bind the BLOB contents as a binary stream instead of as a Blob object
    java.io.InputStream blobData = blob.getBinaryStream();
    preparedStatement.setBinaryStream(parameterIndex, blobData);
} else {
    preparedStatement.setBinaryStream(parameterIndex, null);
}
I found out how to solve this problem without using JDBC's setString() method, which limits the data to 4K.
What you need to do is use preparedStatement.setClob(int parameterIndex, Reader reader). At least this is what worked for me. I thought the Oracle driver converted the data to a character stream on insert, but it seems it does not, or something specific was causing the error.
Using a character stream seems to work for me. I am reading tables from one DB and writing to another using JDBC, and I was getting the "table not found" error just like the one mentioned above. So this is how I solved the problem:
case Types.CLOB: // using a switch statement over all column types; this is for CLOB columns
    Clob clobData = resultSet.getClob(columnIndex); // from the source DB
    if (clobData != null) {
        // bind as a character stream rather than as a Clob object
        preparedStatement.setClob(columnIndex, clobData.getCharacterStream());
    } else {
        preparedStatement.setClob(columnIndex, clobData);
    }
    clobData = null;
    return;
All good now.
Is your script providing the schema name, or do you rely on the user logged into the database to select the default schema?
It might be that you do not name the schema and that you perform your batch with a system user instead of the schema user, resulting in the wrong execution context for a script that would work fine if executed by the user that has the target schema set as its default schema. Your best option is to include the schema name in the insert statements:
INSERT INTO myschema.mytable (mycolumns) VALUES ('myvalue')
Update: do you try to bind the table name as a bound value in your prepared statement? That won't work.
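For example, a minimal JDBC sketch (schema, table and column names are placeholders; note that only column values can be bound as ? parameters, never identifiers such as the table name):
PreparedStatement ps = connection.prepareStatement(
        "INSERT INTO myschema.mytable (mycolumn) VALUES (?)");
ps.setString(1, "myvalue"); // the value can be bound; the table name cannot
ps.executeUpdate();
ps.close();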
It works for me:
Clob clob1;
while (rs.next()) {
    sta.setString(1, rs.getString("FIELD_1"));
    clob1 = rs.getClob("CLOB1");
    if (clob1 != null) {
        sta.setClob(2, clob1.getCharacterStream());
    } else {
        sta.setClob(2, clob1);
    }
    clob1 = null;
    sta.setString(3, rs.getString("FIELD_3"));
    sta.executeUpdate(); // execute the insert for each source row
}
Is it possible that you are doing an INSERT for the VARCHAR columns but an INSERT followed by an UPDATE for the CLOB?
If so, you'll need to grant UPDATE permission on the table in addition to INSERT.
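For example (schema, table and user names are placeholders), the table owner or a DBA would run something like:
GRANT INSERT, UPDATE ON myschema.mytable TO app_user;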
See https://stackoverflow.com/a/64352414/1089967
Here I got the solution for the question. The problem is in GlassFish, if you are using it: when you create the JNDI name, make sure the pool name is correct, that is, it matches the name of the connection pool that you created.
Due to legacy code issues I need to calculate a unique index manually and can't use auto_increment when inserting a new row into the database.
The problem is that multiple inserts from multiple clients (different machines) can occur simultaneously. Therefore I need to lock the row with the highest id against reads by other transactions while the current transaction is active. Alternatively I could lock the whole table against any reads. Time is not an issue in this case because writes/reads are very rare (< 1 op per second).
I tried setting the isolation level to 8 (SERIALIZABLE), but then MySQL throws a deadlock exception. Interestingly, the SELECT to determine the next ID is still executed, which contradicts my understanding of SERIALIZABLE.
Setting the LockMode of the SELECT to PESSIMISTIC_READ doesn't seem to help either.
public void insert(T entity) {
EntityManager em = factory.createEntityManager();
try {
EntityTransaction transaction = em.getTransaction();
try {
transaction.begin();
int id = 0;
TypedQuery<MasterDataComplete> query = em.createQuery(
"SELECT m FROM MasterDataComplete m ORDER BY m.id DESC", MasterDataComplete.class);
query.setMaxResults(1);
query.setLockMode(LockModeType.PESSIMISTIC_READ);
List<MasterDataComplete> results = query.getResultList();
if (!results.isEmpty()) {
MasterDataComplete singleResult = results.get(0);
id = singleResult.getId() + 1;
}
entity.setId(id);
em.persist(entity);
transaction.commit();
} finally {
if (transaction.isActive()) {
transaction.rollback();
}
}
} finally {
em.close();
}
}
Some words about the application: it is standalone Java, runs on multiple clients which connect to the same DB server, and it should work with multiple DB servers (Sybase Anywhere, Oracle, MySQL, ...).
Currently the only idea I have left is to just do the insert and catch the exception that occurs when the ID is already in use, then try again; a rough sketch follows. This works because I can assume that the column is set to primary key/unique.
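A sketch of that retry approach, following the structure of the insert() method above (nextId() is a hypothetical helper that runs the same max-id query; the exact exception wrapping the constraint violation can vary by JPA provider):
public void insertWithRetry(T entity) {
    while (true) {
        EntityManager em = factory.createEntityManager();
        EntityTransaction transaction = em.getTransaction();
        try {
            transaction.begin();
            entity.setId(nextId(em)); // hypothetical helper: max(id) + 1, as above
            em.persist(entity);
            transaction.commit();     // a duplicate id fails here on the unique constraint
            return;
        } catch (PersistenceException e) {
            if (transaction.isActive()) {
                transaction.rollback();
            }
            // another client used the same id first; loop and try the next one
        } finally {
            em.close();
        }
    }
}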
The problem is that with PESSIMISTIC_READ you are only blocking others' UPDATEs on the row with the highest ID. If you want to block others' SELECTs, you need to use PESSIMISTIC_WRITE.
I know it seems strange, since you're not going to UPDATE that row, but if you want the others to block while you execute your SELECT, you have to lie and say: "Hey all, I read this row and will UPDATE it", so that they are not allowed to read that row, since the DB engine thinks you will modify it before the commit.
SERIALIZABLE itself, according to the documentation, converts all plain SELECT statements to SELECT ... LOCK IN SHARE MODE, so it does no more than what you're already doing explicitly.
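In the code from the question, that is a one-line change:
// block concurrent readers of the max-id row, not just concurrent writers
query.setLockMode(LockModeType.PESSIMISTIC_WRITE);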
This question is related to my other question.
I am building a Spring web application which reads data from the DB using Hibernate. My app will not be aware of any changes (updates/inserts) made to the DB. Is there a way to use the query cache in such a scenario?
I configured the query cache, and it does not invalidate the cache when I update the DB from a different app. I think that is the expected behavior.
I need the queries to be cached and invalidated when there is an update in the DB. How can I achieve this?
I am not sure whether there is any automatic way of refreshing the cache, but I solved this problem in my last project: expose a method like the one below and give admins access to it. Once any modification is done to the DB externally, call this method to refresh your cache.
public void refreshCache()
{
    try {
        // evict every mapped entity from the second-level cache
        Map<String, ClassMetadata> classesMetadata = sessionFactory.getAllClassMetadata();
        for (String entityName : classesMetadata.keySet()) {
            sessionFactory.evictEntity(entityName);
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
}
Well, if you are using Oracle, the following command will give you the last updated unique SCN on the table:
select max(ora_rowscn) from TableName;
Output:
10772982279880
You can further convert this to a timestamp if you want:
select scn_to_timestamp(10772982279880) from dual
But I don't think you need to convert it into a time; just cache the rowscn alone and periodically check the table, and if there is a change you can evict the cache regions.
Please note that this is supported in Oracle 10g and later.
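A minimal JDBC sketch of that polling idea (assuming an open connection, a stored lastScn field, and Hibernate's Cache API from 4.x onwards; the table name is a placeholder):
Statement st = connection.createStatement();
ResultSet rs = st.executeQuery("select max(ora_rowscn) from TableName");
rs.next();
long scn = rs.getLong(1);
if (scn != lastScn) {
    sessionFactory.getCache().evictQueryRegions(); // drop cached query results
    lastScn = scn;
}
rs.close();
st.close();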
I see there are two ways to create an update query in Hibernate. First you can go with the standard approach, where we have HQL like:
Query q = session.createQuery("update" + LogsBean.class.getName() + " LogsBean " + "set LogsBean.jobId= :jobId where LogsBean.jobId= :oldValue ");
q.setLong("jobId", jobId);
q.setLong("oldValue", 0);
return q.executeUpdate();
or we can run:
getHibernateTemplate().saveOrUpdate(jobId);
Now I am getting java.lang.IllegalArgumentException: node to traverse cannot be null! when running the first query, and I am not sure how to provide the condition in the getHibernateTemplate example. I want to update the jobIds in the logs table whose value matches 0, so I want to run something like:
Update logs set jobId = 23 where jobId = 0
That is the simple SQL query I am trying to run, but I want to run it via Hibernate. I have tried a couple of ways but it is not working. Any suggestions?
Update:
As noted by Jeff, the issue was the missing space after "update", so that is resolved, but the values are still not updated. I have set show_sql to true for Hibernate to check what could be causing the issue; I will run the query generated by Hibernate directly against the DB and see whether the records get updated.
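For reference, this is the first query from the question with the missing space added; it fixed the "node to traverse cannot be null" parse error, although the rows were still not updating at this point:
Query q = session.createQuery("update " + LogsBean.class.getName()
        + " LogsBean set LogsBean.jobId = :jobId where LogsBean.jobId = :oldValue");
q.setLong("jobId", jobId);
q.setLong("oldValue", 0);
return q.executeUpdate();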
Just a few things that might help you to resolve this:
What does .executeUpdate() return? 0 (as in it did not update any rows)?
Does it throw a HibernateException that you are silently catching or rethrowing?
Which FlushMode do you have configured?
Does the update reach the DB? You could switch on the query log for your DB server.
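For the first two checks, a small sketch (assuming the Query q from the question and org.hibernate.HibernateException):
try {
    int updated = q.executeUpdate();
    System.out.println("rows updated: " + updated); // 0 means the WHERE clause matched nothing
} catch (HibernateException e) {
    e.printStackTrace(); // make sure this is not swallowed silently
    throw e;
}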
I am new to JPA and have been facing this issue for the past two days. Whenever I try to update my object in the database, the merge query executes twice and the data is not updated in the database. Can anyone tell me where I have made a mistake?
Here is the snippet:
Employee emp = em.find(Employee.class, empid);
if (emp != null) {
    emp.setDescription("Success");
    emp.setDob(new Timestamp(new Date().getTime()));
    etxn = em.getTransaction();
    etxn.begin();
    em.merge(emp);
    System.out.println(em.merge(emp)); // this line calls merge a second time
    etxn.commit();
}
That's because you are calling the merge method twice.
Since you are using the same EntityManager and JPA transactions, you do not even need to call merge.
Perhaps enable logging and include the log. Also include the code for your class.
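A minimal sketch of the same update without any merge call, relying on emp staying managed by the EntityManager that loaded it, as the answer above suggests:
Employee emp = em.find(Employee.class, empid);
if (emp != null) {
    etxn = em.getTransaction();
    etxn.begin();
    emp.setDescription("Success");
    emp.setDob(new Timestamp(new Date().getTime()));
    // emp is managed, so the changes are flushed automatically on commit
    etxn.commit();
}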