Log how many items have been impacted in a Spring/Hibernate transaction - java

I have this method that is called from another service:
@Transactional(propagation = Propagation.REQUIRES_NEW, rollbackFor = Exception.class)
public void execute(String sql) {
    Query query = entityManager.createNativeQuery(sql);
    query.executeUpdate();
}
Basically the client loads multiple SQL files and runs each file in a new transaction, so that one file does not impact the execution of the others.
For example, here is an SQL file that cleans up some data:
begin;
delete from table t where t.created_at < current_date - interval '2 month';
commit;
What I'm trying to do is log the outcome of each transaction. For example, here I want to display how many records were deleted. How can I do that from Spring? I know that you can log something more specific with:
logging.level.org.springframework.transaction=TRACE
but I still cannot see any outcome. That only reveals the SQL that will run and when the transaction started/ended.
A second solution was to check the result of:
int count = query.executeUpdate();
but count is 0, even though the SQL gets executed and deletes hundreds of rows.
Thanks in advance for the suggestions!

The problem, as @XtremeBaumer correctly pointed out, is your script. If you just run executeUpdate with a delete statement, it will return the number of affected rows.
But that is not what you are doing. You are executing a code block delimited by begin and commit. There might be a way for such a code block to return a value, but that would need to be coded into the block itself and is probably highly database specific.
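A minimal sketch of that idea, assuming the files are reduced to plain statements (no begin/commit wrapper, since Spring already demarcates the transaction) so the affected-row count can be logged; the class name and logger are illustrative:

import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import javax.persistence.Query;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

@Service
public class SqlFileExecutor {

    private static final Logger log = LoggerFactory.getLogger(SqlFileExecutor.class);

    @PersistenceContext
    private EntityManager entityManager;

    @Transactional(propagation = Propagation.REQUIRES_NEW, rollbackFor = Exception.class)
    public void execute(String sql) {
        // For a plain DELETE/UPDATE statement, executeUpdate() returns the affected-row count.
        Query query = entityManager.createNativeQuery(sql);
        int affectedRows = query.executeUpdate();
        log.info("Statement affected {} row(s): {}", affectedRows, sql);
    }
}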

When can NoResultException be thrown

I have inherited a system with a rather weird bug that occurs maybe once every 6 months, where the application suddenly loses track of database data.
The system has redundancy, with 2 servers that are scheduled to run the same function at the same time. They both get the same input data to the function and they both talk to the same Postgres database, yet the behaviour is different on the two machines.
The function being executed calls the database and checks whether there is a row with the id supplied by the input parameter; if there is, it executes A(), otherwise B().
The problem is that one server executes A() and the other B(). I have searched everywhere and there is no code writing to this table or deleting from it, so within all reason they should both execute the same code.
This is the code that is fetching from the database:
@PersistenceContext(unitName = "backend-persistence")
private EntityManager em;

public Optional<OfferEntity> getOfferFromOfferId(final long offerId, final String countryAlias, final String langauageAlias) {
    CriteriaBuilder cb = em.getCriteriaBuilder();
    CriteriaQuery<OfferEntity> cq = cb.createQuery(OfferEntity.class);
    Root<OfferEntity> from = cq.from(OfferEntity.class);
    cq.select(from);
    cq.where(cb.and(cb.equal(from.get(OfferEntity_.offerId), offerId),
            cb.equal(from.get(OfferEntity_.country), countryAlias),
            cb.equal(from.get(OfferEntity_.language), langauageAlias)));
    try {
        return Optional.of(em.createQuery(cq).getSingleResult());
    } catch (NoResultException nre) {
        return Optional.empty();
    }
}
And I am getting an empty optional from one of the servers but not the other.
So I guess, as a tl;dr: am I misunderstanding NoResultException, and in what concrete situations can it be thrown besides when there are no rows matching the query?
You can only use getSingleResult() when you are sure that you will get exactly one result. In all other cases you have to use getResultList().
From the API doc of javax.persistence.Query getSingleResult():
java.lang.Object getSingleResult()
Execute a SELECT query that returns a single untyped result.
Returns:
the result
Throws:
NoResultException - if there is no result
NonUniqueResultException - if more than one result
IllegalStateException - if called for a Java Persistence query language UPDATE or DELETE statement
QueryTimeoutException - if the query execution exceeds the query timeout value set and only the statement is rolled back
TransactionRequiredException - if a lock mode has been set and there is no transaction
PessimisticLockException - if pessimistic locking fails and the transaction is rolled back
LockTimeoutException - if pessimistic locking fails and only the statement is rolled back
PersistenceException - if the query execution exceeds the query timeout value set and the transaction is rolled back
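If you would rather not drive the control flow with the exception, here is a small sketch of the getResultList() alternative, reusing the criteria query cq and the EntityManager em from the question:

// Returns an empty Optional when no row matches, without relying on NoResultException.
List<OfferEntity> results = em.createQuery(cq).getResultList();
if (results.size() > 1) {
    // More than one match would otherwise surface as NonUniqueResultException.
    throw new IllegalStateException("Expected at most one offer, found " + results.size());
}
return results.stream().findFirst();

This does not explain why the two servers disagree, but it makes the "no matching row" case explicit instead of exception-driven.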

parameterized insert/update in sql

I am trying to insert into a db that I have, and I'd like to do so through parameters. I am connecting to a Postgres db using Java.
I can connect to the db just fine. I know that because I have various operations that are already working, where I can see and update existing rows in my db. I am having trouble with INSERT.
I have the following:
private String _update_rentals = "INSERT into rentals (cid, mid) values (?,?)";
private PreparedStatement _update_rentals_statement;
private String _update_movie_status = "UPDATE Movie SET checkedout = true WHERE mid = ?";
private PreparedStatement _update_movie_status_statement;
And I initialize them:
_update_movie_status_statement = _customer_db.prepareStatement(_update_movie_status);
_update_rentals_statement = _customer_db.prepareStatement(_update_rentals);
And
while (movieAvail.next()) {
    System.out.println(movieAvail.getBoolean(1));
    if (movieAvail.getBoolean(1) == false) {
        // Do checkout
        _update_rentals_statement.clearParameters();
        _update_rentals_statement.setInt(1, cid);
        _update_rentals_statement.setInt(2, mid);
        _update_rentals_statement.executeQuery();

        _update_movie_status_statement.clearParameters();
        _update_movie_status_statement.setInt(1, mid);
        _update_movie_status_statement.executeQuery();
        System.out.println("Enjoy your movie!");
    }
}
Both of the executeQuery() calls fail. For some reason I am getting the following error with both:
Exception in thread "main" org.postgresql.util.PSQLException: No results were returned by the query.
I looked at other posts, and I believe I am following the syntax for both insert/update correctly, so maybe I am overlooking some aspect of this.
This is all part of a larger code base, so I did not want to include the methods these pieces of code live in, but these are the isolated pieces that play a part here.
In general, when you execute a query, you expect to retrieve some kind of information from the database. This is usually the case when you are executing SELECT queries. However, with INSERT and UPDATE statements, you are not querying the database, you are simply executing an update or inserting new rows. In the documentation of PreparedStatement you can see in which cases an exception is thrown when you try to call executeQuery:
Throws: SQLException - if a database access error occurs; this method is called on a closed PreparedStatement or the SQL statement does not return a ResultSet object
So in your case the problem is that your statements do not return a ResultSet. You should use execute or executeUpdate instead. The former simply executes the update, while the latter does the same, but also returns the number of affected rows.
I think the main issue is that you are calling executeQuery(), which expects a result to be returned, but Insert/Update are not queries and don't return a result. Try just calling execute().
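A sketch of either fix, here using executeUpdate() so the affected-row counts are also available; the println is only a sanity check:

// executeUpdate() returns the number of affected rows for INSERT/UPDATE/DELETE.
int rentalRows = _update_rentals_statement.executeUpdate();
int movieRows = _update_movie_status_statement.executeUpdate();
System.out.println("Inserted " + rentalRows + " rental(s), updated " + movieRows + " movie row(s).");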

Hibernate: Issues with running update query

I see there are two ways to create an update query in Hibernate. First, you can go with the standard approach, where we have HQL like:
Query q = session.createQuery("update" + LogsBean.class.getName() + " LogsBean " + "set LogsBean.jobId= :jobId where LogsBean.jobId= :oldValue ");
q.setLong("jobId", jobId);
q.setLong("oldValue", 0);
return q.executeUpdate();
or we can run:
getHibernateTemplate().saveOrUpdate(jobId);
Now I am getting java.lang.IllegalArgumentException: node to traverse cannot be null! when running the first query, and I am not sure how to provide a condition in the getHibernateTemplate example. I want to update the jobIds in the log table whose value matches 0, so I want to run something like:
Update logs set jobId = 23 where jobId = 0
Above is the simple SQL query that I am trying to run, but I want to run it via Hibernate. I have tried a couple of ways but it is not working; any suggestions?
Update:
As noted by Jeff, the exception was caused by the missing space after update, so that part is resolved, but the values are still not updated. I have set show_sql to true for Hibernate and am checking what could be the cause of the issue; I will run the query generated by Hibernate against the db and see if the records get updated.
Just a few things that might help you to resolve this:
- What does executeUpdate() return, 0 (as in it did not update any rows)?
- Does it throw a HibernateException that you are silently catching or rethrowing?
- Which FlushMode do you have configured?
- Does the update get to the DB? You could switch on the query log for your DB server.
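For reference, a sketch of the HQL update with the missing space fixed; the class and parameter names are taken from the question:

// Note the space after "update"; without it the HQL parser fails with
// "node to traverse cannot be null!".
Query q = session.createQuery(
        "update " + LogsBean.class.getName()
        + " set jobId = :jobId where jobId = :oldValue");
q.setLong("jobId", jobId);
q.setLong("oldValue", 0L);
return q.executeUpdate(); // number of rows changed by the statement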

Hibernate Batch Processing Using Native SQL

I have an application using Hibernate. One of its modules calls native SQL (a stored procedure) in a batch process. Roughly, every time it writes a file it updates a field in the database. Right now I am not sure how many files will need to be written, as that depends on the number of transactions per day, so it could be anything from zero to a million.
If I use this code snippet in a loop, will I have any problems?
@Transactional
public void test()
{
    // The for loop represents a list of records that needs to be processed.
    for (int i = 0; i < 1000000; i++)
    {
        // Process the records and write the information into a file.
        ...
        // Update a field(s) in the database using a stored procedure based on the processed information.
        updateField(String.valueOf(i));
    }
}

@Transactional(propagation = Propagation.MANDATORY)
public void updateField(String value)
{
    Session session = getSession();
    SQLQuery sqlQuery = session.createSQLQuery("exec spUpdate :value");
    sqlQuery.setParameter("value", value);
    sqlQuery.executeUpdate();
}
Will I need any other configurations for my data source and transaction manager?
Will I need to set hibernate.jdbc.batch_size and hibernate.cache.use_second_level_cache?
Will I need to use session flush and clear for this? The samples in the Hibernate tutorial use POJOs and not native SQL, so I am not sure whether that is also applicable here.
Please note that another part of the application is already using Hibernate, so as much as possible I would like to stick with Hibernate.
Thank you for your time. If possible, a code snippet would be really useful for me.
Application Work Flow
1) Query Database for the transaction information. (Transaction date, Type of account, currency, etc..)
2) For each account process transaction information. (Discounts, Current Balance, etc..)
3) Write the transaction information and processed information to a file.
4) Update a database field based on the process information
5) Go back to step 2 while there are still accounts (assuming no exceptions are thrown).
The code snippet will open and close the session for each iteration, which is definitely not a good practice.
Is it possible to have a job which checks how many new files were added to the folder?
The job could run, say, every 15/25 minutes, check which files were changed or added in the last 15/25 minutes, and update the database in a batch.
Something like that will lower the number of opened/closed sessions and connections. It should be much faster than this.
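A minimal sketch of batching the updates inside one transaction, assuming hibernate.jdbc.batch_size is configured (for example to 50) and reusing the stored procedure call from the question; the method name, batch size, and the values parameter are illustrative. Flushing and clearing matter most when managed entities pile up in the session, but the periodic flush/clear keeps the first-level cache small either way:

@Transactional
public void processAccounts(List<String> values)
{
    Session session = getSession();
    int batchSize = 50; // ideally matches hibernate.jdbc.batch_size

    for (int i = 0; i < values.size(); i++)
    {
        // Write the file for this record here, then update the DB field.
        SQLQuery sqlQuery = session.createSQLQuery("exec spUpdate :value");
        sqlQuery.setParameter("value", values.get(i));
        sqlQuery.executeUpdate();

        if (i > 0 && i % batchSize == 0)
        {
            // Push pending work to the DB and release memory held by the first-level cache.
            session.flush();
            session.clear();
        }
    }
}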

TopLink bug: empty result for valid SQL with a non-empty result

How is this possible?
We are executing EJBQL on TopLink (the DB is Oracle) and query.getResultList() is empty.
But!
When I switched the log level to FINE and got the SQL query that TopLink generates, I tried to execute this query on the database and (miracle!) I got a non-empty result!
What could be the reason and how can it be treated?
Thanks in advance!
P.S. No exceptions.
UPDATE:
Query log:
SELECT DISTINCT t0.ID, t0.REG_NUM, t0.REG_DATE, t0.OBJ_NAME, t1.CAD_NUM, t1.CAD_NUM_EGRO, t2.ID, t2.DICT_TYPE, t2.ARCHIVE_DATE, t2.IS_DEFAULT, t2.IS_ACTUAL, t2.NAME, t0.INVENTORY_NUM FROM CODE_NAME_TREE_DICTIONARY t3, DEFAULTABLE_DICTIONARY t2, IMMOVABLE_PROP t1, ABSTRACT_PROPERTY t0 WHERE ((t3.ID IN (SELECT DISTINCT t4.ID FROM CODE_NAME_TREE_DICTIONARY t5, CODE_NAME_TREE_DICTIONARY t4, type_property_parents t6 WHERE (((t5.ID = ?) AND (t4.DICT_TYPE = ?)) AND ((t6.type_property_id = t4.ID) AND (t5.ID = t6.parent_id)))) AND ((t1.ID = t0.ID) AND (t0.PROP_TYPE_DISCR = ?))) AND ((t3.ID = t0.PROP_TYPE) AND ((t2.ID (+) = t1.STATUS_ID) AND (t2.DICT_TYPE = ?)))) ORDER BY t0.REG_NUM ASC
bind => [4537, R, R, realty_status]|#]
This query returns 100k rows, but TopLink believes it does not...
With the log level at FINE, can you verify that you are connecting to the same database? How simple is your test case; can you verify that it is this exact JPQL that is being translated to that SQL?
VPD (http://download.oracle.com/docs/cd/B28359_01/network.111/b28531/vpd.htm)? Policies?
Is something of this flavor defined on the schema? These features transparently add dynamic where clauses to the statement that is executed in the database session, so the query results depend on the state of the session in this case.
When reformatting the query the following conditions seemed strange:
AND t2.ID (+) = t1.STATUS_ID
AND t2.DICT_TYPE = ?
The (+) indicates an outer join of t2 (DEFAULTABLE_DICTIONARY), but this table seems to be non-optional since it has to have a non-null DICT_TYPE for the second condition.
On closer inspection, the bind parameters also seem to be off; the fields are, in order:
CODE_NAME_TREE_DICTIONARY.ID
CODE_NAME_TREE_DICTIONARY.DICT_TYPE
ABSTRACT_PROPERTY.PROP_TYPE_DISCR
DEFAULTABLE_DICTIONARY.DICT_TYPE
With the given parameters (4537, R, R, realty_status), the first DICT_TYPE would be 'R' while the second is the string "realty_status" which seems inconsistent.
Transactions? Oracle never gives you a "dirty read", which is database speak for access to uncommitted data. If you send data on one connection, you cannot access it on any other connection until it is committed. If you try the query later by hand, the data has been committed and you get the expected result.
This situation can arise if you are updating the data in more than one connection, and the data manipulation is not set to "auto commit". JPA defaults to auto-commit, but flushing at transaction boundaries can give you a cleaner design.
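To illustrate only the visibility point, a tiny sketch with a resource-local EntityManager; emf and newOffer are assumed names:

EntityManager writer = emf.createEntityManager();
writer.getTransaction().begin();
writer.persist(newOffer);          // not yet visible to other connections
writer.getTransaction().commit();  // only now can other connections see the row
writer.close();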
I can't tell exactly, but I am a little surprised that the string parameters are not quoted. Is it possible that interactively there are some automatic conversions, but over this connection, instead of the string 'R', it was converted to the INT ASCII value for R?
I found the reason!
The reason is Oracle! I've tried the same code on Postgres and it worked!
I don't know why, but in some magic cases Oracle ignores query parameters and the query returns an empty result.
