Error SQL7008 while updating a DB2 for iSeries table - java

I have a Java web application using Hibernate and DB2 for iSeries, and during an update of a table I get the following error:
Error SQL7008 while updating a DB2 for iSeries table

From some googling on this error message, I noticed that it happens when you run an insert/update against a non-journalled table inside a transaction. The explanation is given here.
This occurs because the table you are trying to update is not being journalled, and your update is being run within a transaction.
Generally, you should always commit your transactions (and roll back if an exception occurs). Usually I never set auto-commit to true, but in this case I would like to understand whether it's truly needed, as mentioned in the link above. Can you set auto-commit to true on your connection to see if the error goes away?
<property name="hibernate.connection.autocommit" value="true"/>
Also, this link has some tutorials on transaction management with Hibernate.
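For completeness, a minimal sketch of explicit transaction handling with Hibernate; sessionFactory and myEntity are assumed to exist in your code:
Session session = sessionFactory.openSession();
Transaction tx = null;
try {
    tx = session.beginTransaction();
    session.update(myEntity); // the statement that fails with SQL7008 when the table is not journalled
    tx.commit();              // commit on success
} catch (RuntimeException e) {
    if (tx != null) {
        tx.rollback();        // roll back on failure
    }
    throw e;
} finally {
    session.close();
}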

I found the answer to my question. As CoolBeans mentioned, this occurs because the table I was trying to update is not being journalled. Adding the table to a journal took care of my problem; here are the steps.
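In case that link goes away: the usual sequence on the IBM i side is to create a journal receiver, create a journal attached to it, and then start journalling the physical file. A sketch in CL, where the library and object names are placeholders:
CRTJRNRCV JRNRCV(MYLIB/MYJRNRCV)                /* create a journal receiver */
CRTJRN JRN(MYLIB/MYJRN) JRNRCV(MYLIB/MYJRNRCV)  /* create a journal attached to it */
STRJRNPF FILE(MYLIB/MYTABLE) JRN(MYLIB/MYJRN)   /* start journalling the table */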

Related

Envers audit table does not exist when hibernate (via create-drop) tries to delete it

I am using Java, Spring, Hibernate and Envers, among other things. For testing purposes I use an H2 database, and for the test configuration I set spring.jpa.hibernate.ddl-auto=create-drop. Some of my entities are audited (using Envers).
Now, when I run my tests I get something like:
2020-04-24 12:05:30.109 DEBUG [org.hibernate.SQL] [Test worker]: alter table nmc.testtable_aud drop constraint FKcmwq41oxs0yus7mgufns1njbd
Hibernate: alter table nmc.testtable_aud drop constraint FKcmwq41oxs0yus7mgufns1njbd
2020-04-24 12:05:30.112 WARN [org.hibernate.tool.schema.internal.ExceptionHandlerLoggedImpl] [Test worker]: GenerationTarget encountered exception accepting command : Error executing DDL "alter table nmc.testtable_aud drop constraint FKcmwq41oxs0yus7mgufns1njbd" via JDBC Statement
org.hibernate.tool.schema.spi.CommandAcceptanceException: Error executing DDL "alter table nmc.testtable_aud drop constraint FKcmwq41oxs0yus7mgufns1njbd" via JDBC Statement
...
Caused by: org.h2.jdbc.JdbcSQLException: Table "testtable_aud" not found; SQL statement:
When I use spring.jpa.hibernate.ddl-auto=update the described error does not occur, but that causes issues further down the line, so it is not an option for me.
While the tests run green on my local developer instance, they fail on the central testing machine. Also, this is really annoying, since I do not want it to throw an error when there is nothing wrong.
It seems to me that this is a really basic issue. I have searched for some time now and tried different things, but I cannot seem to resolve this conflict. From what I read, Envers/Hibernate integration works well, so I cannot believe that this is a common problem, which makes me think that I configured something incorrectly. I just expect that Hibernate should only drop tables which already exist, or at least produce the appropriate SQL.
Any help and/or pointer in the right direction would be greatly appreciated.
Thanks.

hibernate.hbm2ddl.auto create-drop with production Database

I need clarification on the points below:
Is it mandatory to set the hibernate.hbm2ddl.auto property if we are using Hibernate?
Will hibernate.hbm2ddl.auto create-drop affect anything in the production DB?
I am using Spring's LocalSessionFactoryBuilder to build the Hibernate session. The queries all executed fine using @Transactional, but after query execution I'm getting an invalid data access SQL grammar exception. I am assuming Hibernate is trying to update something in the DB and failing.
That's why I'm asking for help on the hbm2ddl.auto property.
No. The default is fine: it does nothing with the database.
Well, it will drop your database (you'll lose everything) when the session factory is closed, and recreate the schema when it restarts. So you really, really don't want that in production.
If you want help on your exception, you should show your code, show the stack trace of the exception, and show the relevant information (what the relevant tables look like, for example). I doubt hbm2ddl.auto has anything to do with that exception. Usually, when an exception occurs, it's a problem with your code, not a problem with hibernate. That's true of any framework/library.
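If you do set the property explicitly, validate is the usual production-safe value: it only checks the mappings against the schema and changes nothing. In the same style as the property shown earlier:
<property name="hibernate.hbm2ddl.auto" value="validate"/>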

Why is iBATIS giving stale results, even with caching disabled?

I have a web application which I've been slowly migrating from iBATIS 2 to JPA with Spring Data.
For the most part, things have been going well, with me just migrating the DAO for one domain object at a time. However, an issue that's been brought to my attention recently is that stale result lists are being shown in some parts of the site.
For example, I have a "ticket" section, which shows a list of open tickets, and lets you view specific tickets on separate pages. When I create a new ticket, I can view that ticket on its specific page correctly. However, the open tickets list doesn't seem to show this new ticket until some time later.
Things I've tried to rule out:
I see this issue even on a system with MySQL's query cache disabled
I see this issue even when I set cacheModelsEnabled="false" in the iBATIS config.
I see this issue even when I completely remove the <cacheModel> element and cacheModel="x" attributes from my sqlMap file.
As soon as I restart the application, I see the up-to-date results.
When I execute the query iBATIS should be running here in a MySQL client, I do see the new ticket which is missing from iBATIS' results.
When I mocked up a simple ticket list using Spring MVC and Spring Data JPA, I do see the new ticket.
I've also tried to rule out some sort of weird transaction state with iBATIS, but it doesn't seem that any transaction is being used here at all.
What am I missing? Is there anything else I should try to figure this out? Or should I just prioritize replacing the iBATIS layer completely with Spring Data JPA, which seems to be immune to this problem?
UPDATE
I've now gone through a lot of my recent changes with git bisect, and I've narrowed it down to a change that introduced Spring's org.springframework.orm.jpa.support.OpenEntityManagerInViewFilter.
So, it would seem that some transaction is living longer than it should. I'll add more logging to see if I can confirm this, and then look for a way to avoid using that filter.
If you are doing select, insert, select in the same SqlSession, then the SqlSession cache is causing this issue. You will need to clear the cache manually after the insert: sqlSession.clearCache().
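A sketch of that suggestion, using the MyBatis 3 API the answer names (SqlSession); the mapper statement ids and the Ticket class are hypothetical:
try (SqlSession session = sqlSessionFactory.openSession()) {
    session.insert("TicketMapper.insertTicket", newTicket);
    session.commit();
    session.clearCache(); // drop the session-local cache so the next select hits the DB
    List<Ticket> openTickets = session.selectList("TicketMapper.selectOpenTickets");
}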
So, it seems a combination of things ended up happening here:
Most of my code was not explicitly using transactions.
I had at some point changed to use Tomcat's JDBC Connection Pool, which does not reset autocommit by default when a connection is returned to the pool. I expect that my older DBCP-based stuff did this implicitly, though.
The introduction of OpenEntityManagerInViewFilter may have caused a SET autocommit=0 to be called at some point, with no corresponding SET autocommit=1 later, if nothing had changed.
By chance, or perhaps by design, the code that inserted a new record into the database and then immediately retrieved and showed it seemed to get a different Connection than the code that showed my list of records.
The default MySQL transaction isolation level of REPEATABLE-READ meant that my listings were showing the old results.
The fix I've found, which seems to work in my testing so far, is to add these defaultAutoCommit and jdbcInterceptors attributes to my connection pool config:
<Resource name="jdbc/DB" auth="Container" type="javax.sql.DataSource"
factory="org.apache.tomcat.jdbc.pool.DataSourceFactory"
...
defaultAutoCommit="true" jdbcInterceptors="ConnectionState;StatementFinalizer" />
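As I understand it, the ConnectionState interceptor caches the configured autocommit, read-only and isolation settings and re-applies them when a connection is borrowed, so a connection left at autocommit=0 by one request should no longer leak that state into the next.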

How to update tables from back end that it reflects in retrieved entities

I'm developing a Spring MVC web application using Windows 7, Eclipse Juno, EclipseLink JPA as the ORM and GlassFish as the application server, with Oracle 11g. While working with EclipseLink I noticed that when I update a table manually by executing an update PL/SQL query, it doesn't have any effect on entities already retrieved by EclipseLink until I restart the server, even though I disabled the EclipseLink cache by having <shared-cache-mode>NONE</shared-cache-mode> in persistence.xml and using EntityManager.clear(), EntityManager.close() and @Cacheable(false).
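For reference, the cache-disabling configuration looks roughly like this; the persistence-unit name is a placeholder, and eclipselink.cache.shared.default is an additional EclipseLink property with the same intent:
<persistence-unit name="myPU">
  <shared-cache-mode>NONE</shared-cache-mode>
  <properties>
    <property name="eclipselink.cache.shared.default" value="false"/>
  </properties>
</persistence-unit>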
Then I noticed that when I update tables using the Oracle SQL Developer table designer, it works fine and the entities show updated information. So I checked the SQL Developer log to see what query it uses to update rows, and saw that it uses ORA_ROWSCN and ROWID in the where clause. After that, I used exactly the same where clause as SQL Developer to update the tables, but the entities still showed old information.
I'm wondering what factors are involved here such that EclipseLink is not fetching real-time data from the database, yet after updating a table with the SQL Developer designer it does show real-time data. It seems that modifying table data with the SQL Developer table designer also marks the table as changed using some database feature, and EclipseLink reads that mark before hitting the table.
Also, for more clarification: does anyone know what steps EclipseLink goes through before it decides to hit the database when asked to execute a TypedQuery? I'm curious where it stores cached entities, since the cache only resets when I restart the computer; I tried restarting GlassFish, killing the Java process and logging off the current user, but none of them worked. Why is EclipseLink still caching entities when I configured it not to use any caching? Is it possible to completely turn off the cache in EclipseLink?

Hibernate database name change gives MySQLSyntaxErrorException: Table doesn't exist

I used to have a database called database and everything was working well using Hibernate and its models.
I removed <property name="hibernate.hbm2ddl.auto"> to avoid update or create, as it's a production server and we want to do it manually.
We recently switched to database2, so we updated the Hibernate configuration file and all the Hibernate XML models.
`<class name="com.api.models.database.MmApplications" table="mm_applications" catalog="database2">`
but it keeps looking for database even though we migrated the database, the models and the connection.
Caused by: com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Table 'database.mm_applications' doesn't exist
Can someone help me?
UPDATE
Hibernate is connecting to the right database (database2), but the queries are being prefixed with database, making them hit database instead of database2. When I try to force the default_schema, my queries become:
`... from database.database2.mm_applications ....`
Any idea?
My database is specified in the hibernate.connection.url property. Have you changed that also? An example would be: jdbc:mysql://localhost/mydatabase
Also, instead of removing hibernate.hbm2ddl.auto, perhaps you should set its value to validate. That way Hibernate will ensure that the data model matches the database schema.
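A sketch of both suggestions together, with host and database name as placeholders:
<property name="hibernate.connection.url" value="jdbc:mysql://localhost/database2"/>
<property name="hibernate.hbm2ddl.auto" value="validate"/>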
I found the problem. It was another application deployed on the same Tomcat server, also using Hibernate, with another database (database), conflicting with the new application ...
There is still something weird: whichever database it connects to, Hibernate will use the catalog specified in the Hibernate models, constructing the query as catalog.table_name.
Hope this helps someone someday.
