JPA Query yields different results on different developers' machines - java

When I run our Java program, I get a strange error that none of my teammates get, because a JPA query returns a different result on my machine. This happens even when a fellow developer and I check out the exact same git commit, use the same DB content, build it, and debug it together.
The query basically checks whether a certain entity already exists in the database. It explicitly excludes entries with the same id, so the entity doesn't get compared to itself. That exclusion appears not to work on my machine. Here's a dummy version of the code:
import com.mysema.query.BooleanBuilder;
import com.mysema.query.jpa.impl.JPAQuery;

public class ExampleClass {

    @Autowired
    protected ClassThatExtendsJpaRepository customerDAO;

    public void checkUniqueness(Customer inputCustomer) throws NotUniqueException {
        // Or-condition
        BooleanBuilder condition = new BooleanBuilder();
        condition.or(QCustomer.customer.registerNr.eq(inputCustomer.getRegisterNr()));
        condition.or(QCustomer.customer.registerNr.eq(inputCustomer.getRegisterNr()));

        JPAQuery query = new JPAQuery().from(QCustomer.customer).where(condition);

        // This should exclude customers with the same id, but it has no effect
        query.where(QCustomer.customer.id.ne(inputCustomer.getId()));

        Customer existingCustomer = customerDAO.findOne(query);
        // Result: finds the customer with the same id! Should find nothing!
        if (existingCustomer != null) {
            throw new NotUniqueException();
        }
    }
}
As you can see, besides the check that the existing entity doesn't have the same id as the one we're comparing against, there are other conditions joined by an OR. For simplicity, I've used the same condition twice in this example. As written, the query finds the entity with the same id in the database and throws the NotUniqueException even though it shouldn't. Remove one of the condition.or calls, however, and it works. This leads me to suspect that the brackets are being placed wrong in the generated query.
According to the debugger, the query is
Select customer
from Customer customer
where (customer.registerNr = ?1 or customer.registerNr = ?1) and customer.id <> ?2
(And for the record: when I run this directly on my DB it works correctly, finding nothing.)
But I suspect it's actually being run as if the brackets were placed differently:
Select customer
from Customer customer
where customer.registerNr = ?1 or (customer.registerNr = ?1 and customer.id <> ?2)
Either way, this still doesn't explain why this occurs, and why it's only on my machine.
We don't want to change the code, since this bug only occurs on my machine and doesn't stop me from working, so I'm hoping that once I find the cause I can fix it some other way.
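(For completeness: if we were willing to touch the code, one way to make the grouping explicit, so the OR block couldn't leak past the AND, would be a sketch like the following, using the same Querydsl 3.x types. This is hypothetical; we haven't actually applied it.)
// Hypothetical sketch only (not applied) - needs com.mysema.query.types.expr.BooleanExpression
BooleanExpression sameRegisterNr = QCustomer.customer.registerNr.eq(inputCustomer.getRegisterNr())
        .or(QCustomer.customer.registerNr.eq(inputCustomer.getRegisterNr()));

JPAQuery query = new JPAQuery()
        .from(QCustomer.customer)
        .where(sameRegisterNr.and(QCustomer.customer.id.ne(inputCustomer.getId())));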
Versions:
Spring-data-jpa: 1.9.6.RELEASE
com.mysema.querydsl: 3.7.4
Hibernate: 4.2.21.Final
OJDBC: 19.17.0.0
Database: Oracle Database 21c Express Edition Release 21.0.0.0.0 - Production
Java: 8

The solution was: I downgraded Oracle to version 18, like all the other developers.
Yes, that's right: apparently the same query can give different results on Oracle 21.
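(In case anyone else hits this: a quick way to see which Oracle release each machine is actually talking to is a query like the following.)
-- Shows the exact Oracle release the session is connected to
SELECT banner FROM v$version;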

Related

EntityManager.createNativeQuery returning list of objects instead of list of BigDecimal when using Pagination

I am trying to use Pagination with EntityManager.createNativeQuery(). Below is the skeleton code that I am using:
var query = em.createNativeQuery("select distinct id from ... group by ... having ...");

List<BigDecimal> results = query
        .setMaxResults(pageSize)
        .setFirstResult(pageNumber * pageSize)
        .getResultList();
When pageNumber is 0 (the first page), I get the expected List of BigDecimals.
But as soon as pageNumber > 0 (for example, the second page), I get a List of Objects, and each object in this list seems to contain two BigDecimals: the first contains the value from the DB, and the second seems to be the position of the row.
And obviously I then get this exception:
java.lang.ClassCastException: class [Ljava.lang.Object; cannot be cast to class java.math.BigDecimal
Can someone please explain this discrepancy, and how this can be fixed to always return a List of BigDecimals? Thank you.
Update-1: I have created a sample project to reproduce this issue. I was able to reproduce it only with an Oracle database. With an H2 database it worked fine, and I consistently got a list of BigDecimals irrespective of the page number.
Update-2 : I have also created a sample project with H2 where it works without this issue.
The problem that you are running into is that your OracleDialect adds a column to its selected ResultSet. It wraps the query that you are running as discussed in SternK's answer.
If you were using the Hibernate SessionFactory and the Session interfaces, then the function that you would be looking for would be the "addScalar" method. Unfortunately, there doesn't seem to be an implementation in pure JPA (see the question asked here: Does JPA have an equivalent to Hibernate SQLQuery.addScalar()?).
I would expect your current implementation to work just fine in DB2, H2, HSQL, Postgres, MySQL (and a few other DB engines). However, in Oracle it adds a row-number column to the ResultSet, which means that Hibernate gets 2 columns back. Hibernate does not do any query parsing in this case; it simply maps the ResultSet into your List. Since it gets 2 values, it converts them into an Object[] rather than a BigDecimal.
As a caveat, relying on the JDBC driver to provide the expected data type is a bit dangerous, since Hibernate asks the JDBC driver which data type it suggests. In this case it suggests a BigDecimal, but under certain conditions certain implementations would be allowed to suggest a Double or some other type.
You have a couple options then.
You can modify your Oracle dialect (as SternK suggests). This takes advantage of an alternative Oracle paging implementation.
If you are not opposed to having Hibernate-specific aspects in your JPA implementation, then you can take advantage of additional Hibernate functions that are not offered in the JPA standard. (See the following code.)
List<BigDecimal> results = entitymanager.createNativeQuery("select distinct id from ... group by ... having ...")
        .unwrap(org.hibernate.query.NativeQuery.class)
        .addScalar("id", BigDecimalType.INSTANCE)
        .getResultList();
System.out.println(results);
This has the advantage of explicitly telling Hibernate that you are only interested in the "id" column of your ResultSet, and that Hibernate needs to convert the returned value to a BigDecimal, should the JDBC driver decide that a different type would be more appropriate as a default.
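If neither of those is an option, a last-resort stopgap (my own sketch, not part of either approach above, assuming the same query variable and paging values as in the question) is to accept whichever shape comes back and unwrap it:
// Stopgap sketch: page 0 rows arrive as BigDecimal, later pages as Object[] whose
// first element is the original id column.
List<?> raw = query
        .setMaxResults(pageSize)
        .setFirstResult(pageNumber * pageSize)
        .getResultList();

List<BigDecimal> ids = raw.stream()
        .map(row -> row instanceof Object[] ? (BigDecimal) ((Object[]) row)[0] : (BigDecimal) row)
        .collect(java.util.stream.Collectors.toList());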
The root cause of your problem is the way pagination is implemented in your Hibernate Oracle dialect.
There are two cases:
When we have setFirstResult(0), the following SQL is generated:
-- setMaxResults(5).setFirstResult(0)
select * from (
select test_id from TST_MY_TEST -- this is your initial query
)
where rownum <= 5;
As you can see, this query returns exactly the same column list as your initial query, so you do not have a problem in this case.
When we set setFirstResult to a non-zero value, the following SQL is generated:
-- setMaxResults(5).setFirstResult(2)
select * from (
select row_.*, rownum rownum_
from (
select test_id from TST_MY_TEST -- this is your initial query
) row_
where rownum <= 5
)
where rownum_ > 2
As you can see, this query returns the column list with an additional rownum_ column, and that is why casting this result set to BigDecimal fails.
Solution
If you use Oracle 12c R1 (12.1) or higher, you can override this behavior in your dialect by using the new row-limiting clause, like this:
import org.hibernate.dialect.Oracle12cDialect;
import org.hibernate.dialect.pagination.AbstractLimitHandler;
import org.hibernate.dialect.pagination.LimitHandler;
import org.hibernate.dialect.pagination.LimitHelper;
import org.hibernate.engine.spi.RowSelection;

public class MyOracleDialect extends Oracle12cDialect
{
    private static final AbstractLimitHandler LIMIT_HANDLER = new AbstractLimitHandler() {
        @Override
        public String processSql(String sql, RowSelection selection) {
            final boolean hasOffset = LimitHelper.hasFirstRow(selection);
            final StringBuilder pagingSelect = new StringBuilder(sql.length() + 50);
            pagingSelect.append(sql);
            /*
             * See the documentation https://docs.oracle.com/database/121/SQLRF/statements_10002.htm#BABHFGAA
             * (Restrictions on the row_limiting_clause):
             * You cannot specify this clause with the for_update_clause.
             */
            if (hasOffset) {
                pagingSelect.append(" OFFSET ? ROWS");
            }
            pagingSelect.append(" FETCH NEXT ? ROWS ONLY");
            return pagingSelect.toString();
        }

        @Override
        public boolean supportsLimit() {
            return true;
        }
    };

    public MyOracleDialect()
    {
    }

    @Override
    public LimitHandler getLimitHandler() {
        return LIMIT_HANDLER;
    }
}
and then use it.
<property name="hibernate.dialect">com.me.MyOracleDialect</property>
For my test data set, with the following query:
NativeQuery query = session.createNativeQuery(
"select test_id from TST_MY_TEST"
).setMaxResults(5).setFirstResult(2);
List<BigDecimal> results = query.getResultList();
I got:
Hibernate:
/* dynamic native SQL query */
select test_id from TST_MY_TEST
OFFSET ? ROWS FETCH NEXT ? ROWS ONLY
val = 3
val = 4
val = 5
val = 6
val = 7
P.S. See also HHH-12087
P.P.S. I simplified my implementation of the AbstractLimitHandler by removing the check for the presence of a FOR UPDATE clause. I don't think that check would do us any good in this case anyway.
For example, for the following case:
NativeQuery query = session.createNativeQuery(
"select test_id from TST_MY_TEST FOR UPDATE OF test_id"
).setMaxResults(5).setFirstResult(2);
Hibernate (with Oracle12cDialect) will generate the following SQL:
/* dynamic native SQL query */
select * from (
select
row_.*,
rownum rownum_
from (
select test_id from TST_MY_TEST -- initial sql without FOR UPDATE clause
) row_
where rownum <= 5
)
where rownum_ > 2
FOR UPDATE OF test_id -- moved for_update_clause
As you can see, Hibernate tries to fix the query by moving FOR UPDATE to the end of the query. But we will still get:
ORA-02014: cannot select FOR UPDATE from view with DISTINCT, GROUP BY, etc.
I've simulated your query and everything works fine. I used @DataJpaTest to instantiate the entityManager for me, an H2 in-memory database, and JUnit 5 to run the test. See below:
@Test
public void shouldGetListOfSalaryPaginated() {
    // given
    Person alex = new Person("alex");
    alex.setSalary(BigDecimal.valueOf(3305.33));
    Person john = new Person("john");
    john.setSalary(BigDecimal.valueOf(33054.10));
    Person ana = new Person("ana");
    ana.setSalary(BigDecimal.valueOf(1223));

    entityManager.persist(alex);
    entityManager.persist(john);
    entityManager.persist(ana);
    entityManager.flush();
    entityManager.clear();

    // when
    List<BigDecimal> found = entityManager.createNativeQuery("SELECT salary FROM person")
            .setMaxResults(2)
            .setFirstResult(2*1)
            .getResultList();

    // then
    Assertions.assertEquals(found.size(), 1);
    Assertions.assertEquals(found.get(0).longValue(), 1223L);
}
I suggest that you review your native query. It's preferable to use the Criteria API and leave native queries for extreme cases such as very complex statements; a sketch follows.
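For the simple projection in my test above, the Criteria route would look roughly like this (a sketch, assuming a Person entity with a BigDecimal salary field and a plain EntityManager):
import javax.persistence.criteria.CriteriaBuilder;
import javax.persistence.criteria.CriteriaQuery;
import javax.persistence.criteria.Root;

// Sketch: same projection and paging as the native query, expressed via the Criteria API
CriteriaBuilder cb = entityManager.getCriteriaBuilder();
CriteriaQuery<BigDecimal> cq = cb.createQuery(BigDecimal.class);
Root<Person> person = cq.from(Person.class);
cq.select(person.<BigDecimal>get("salary")).distinct(true);

List<BigDecimal> salaries = entityManager.createQuery(cq)
        .setFirstResult(2)  // pageNumber * pageSize
        .setMaxResults(2)   // pageSize
        .getResultList();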
Update
After the author posted the project, I could reproduce the problem, and it is related to the Oracle dialect. For some reason the query that runs for the second page is select * from ( select row_.*, rownum rownum_ from ( SELECT c.SHOP_ID FROM CUSTOMER c ) row_ where rownum <= ?) where rownum_ > ?, and that's what causes the bug: it selects two columns instead of one, the undesired one being rownum_. Other dialects don't have this problem.
I suggest you try other Oracle dialect versions, and if none of them work, my final tip is to do the pagination yourself, as sketched below.
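By doing it yourself I mean putting the row-limiting clause into the native SQL directly, so the dialect never wraps the query. A sketch, assuming Oracle 12c or later and the CUSTOMER/SHOP_ID query from the posted project:
// Sketch: pagination written into the SQL itself, so no rownum_ column is ever added.
// ORDER BY added because paging without an order is non-deterministic.
List<BigDecimal> ids = em.createNativeQuery(
        "SELECT c.SHOP_ID FROM CUSTOMER c "
        + "ORDER BY c.SHOP_ID "
        + "OFFSET ?1 ROWS FETCH NEXT ?2 ROWS ONLY")
        .setParameter(1, pageNumber * pageSize)
        .setParameter(2, pageSize)
        .getResultList();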
After a lot of trials with different versions of different Spring libraries, I was finally able to figure out the issue. In one of my attempts the issue disappeared as soon as I updated the spring-data-commons library from v2.1.5.RELEASE to v2.1.6.RELEASE. I looked up the changelog of that release, and this bug, which is related to this bug in spring-data-commons, is the root cause of the issue. Upgrading the spring-data-commons library fixed it.

Hibernate and Postgres positional parameters mismatch

I am currently maintaining a legacy application which uses some old technologies: Hibernate 3.2, Spring 2.5 and the like.
I've been fighting an exception for the last few days. I've managed to isolate it in a simple example:
private void test(String username) {
    String sql = "from org.ojade.aas.authentication.model.User u " +
                 "where u.aasPrincipalName = ?";
    Session session = sessionFactory.getCurrentSession();
    QueryImpl query = (QueryImpl) session.createQuery(sql);
    query.setParameter(0, username);
    List list = query.list();
    log.debug("list = {}", list);
}
The execution of this code throws an exception when query.list() is executed.
Caused by: org.postgresql.util.PSQLException: No value specified for parameter 2.
at org.postgresql.core.v3.SimpleParameterList.checkAllParametersSet(SimpleParameterList.java:102)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:166)
at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:389)
at org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:330)
at org.postgresql.jdbc2.AbstractJdbc2Statement.executeQuery(AbstractJdbc2Statement.java:240)
at org.apache.commons.dbcp.DelegatingPreparedStatement.executeQuery(DelegatingPreparedStatement.java:92)
at org.hibernate.jdbc.AbstractBatcher.getResultSet(AbstractBatcher.java:187)
at org.hibernate.loader.Loader.getResultSet(Loader.java:1791)
at org.hibernate.loader.Loader.doQuery(Loader.java:674)
at org.hibernate.loader.Loader.doQueryAndInitializeNonLazyCollections(Loader.java:236)
at org.hibernate.loader.Loader.doList(Loader.java:2217)
... 120 more
There is no parameter 2. If I change the query to use named parameters it works, but I can't change the original code (it is part of a compiled library).
I've set Hibernate to show me the SQL. It is executing this:
SELECT
user0_.ID_PRINCIPAL AS ID1_3_,
user0_.VERSION AS VERSION3_,
user0_.NO_PRINCIPAL AS NO4_3_,
user0_.IN_BLOQUEO AS IN5_3_,
user0_.IN_ACTIVO AS IN6_3_
FROM JAAS_PRINCIPAL user0_
WHERE user0_.TT_DISCRIMINANTE = 'USER' AND user0_.NO_PRINCIPAL =?
Which seems ok.
Any idea what the problem may be?
This old bug, reported against the postgresql-8.1-405.jdbc3.jar driver, says that if you have /* */ comments containing a ? in the query, the parser won't understand that it's just part of the comment rather than a placeholder. Apparently other characters such as dollar signs also caused problems.
As noted in the original bug report, this can be solved with named parameters, by fixing the comments (in this case the comments were generated by Hibernate, so turning off Hibernate comments with the property hibernate.use_sql_comments=false fixes it), or by upgrading the driver to one that handles it more gracefully (no idea in which version it was fixed, though).
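Since the calling code lives in a compiled library, the comments route is the practical one here. In a Spring LocalSessionFactoryBean setup that is one extra Hibernate property; a sketch, with bean and property names assumed from a typical Spring 2.5 + Hibernate 3 configuration:
<!-- Sketch only: turning off Hibernate's SQL comments keeps '?' out of the /* */ comments
     that this driver trips over. Existing dataSource, mappings, etc. stay unchanged. -->
<bean id="sessionFactory"
      class="org.springframework.orm.hibernate3.LocalSessionFactoryBean">
    <property name="hibernateProperties">
        <props>
            <prop key="hibernate.use_sql_comments">false</prop>
        </props>
    </property>
</bean>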
Quite a catch to come across this in 2017! :)

Spring JDBCTemplate Always Throws QueryTimeOutException

This is something that I've been scratching my head with - especially since it's infuriating to deal with.
Consider the following code:
String query = "UPDATE ORDERS SET VOLUME=?,CONTRACT_ID=?,PROJECT_ID=?,WORKSITE_ID=?,DROPZONE_ID=?,DESCRIPTION_ID=?,MANAGER_ID=?,DELIVERY_DATE=?,REVISION=REVISION+1) WHERE ID=?";
jdbcTemplate.update(query, orderEntity.getVolume(), orderEntity.getContractNo(), orderEntity.getProjectID(), orderEntity.getWorksiteID(), orderEntity.getDropzoneID(), orderEntity.getDescriptionID(), orderEntity.getManagerID(), orderEntity.getDeliveryDate(), id);
We can see that the SQL query is incorrect and will therefore cause some SQL error, but one might easily have missed that. Spring (for me) throws a QueryTimeoutException in response to this. I'm sort of okay with that, but it's not helpful.
Now let's try
String query = "INSERT INTO ORDERS(ID,REISION,CONTRACT_ID,PROJECT_ID,WORKSITE_ID,DROPZONE_ID,DESCRIPTION_ID,MANAGER_ID,VOLUME,DELIVERY_DATE) VALUES(?,?,?,?,?,?,?,?,?,?)";
jdbcTemplate.update(query, id, revision, etc);
Another spelling mistake that's easily missed (REVISION is misspelled as REISION), and Spring again throws a QueryTimeoutException. This means that when I get that exception I don't actually know what it is. Is it a syntax error? Is it a misspelled column? Is it the (much harder to notice) fact that a foreign key constraint is being violated?
While debugging, this is quite possibly the most infuriating thing ever - all I know is that my query failed to run. How can I get something useful? Is there something I've not added to my pom.xml file?
EDIT:
Here's a nicer example. I have a DESCRIPTIONS table, with an ID, REVISION and TEXT column. All of those are marked as not being nullable.
DescriptionEntity descriptionEntity = new DescriptionEntity("newDesc", 1, null);
String query = "INSERT INTO DESCRIPTIONS (ID,REVISION,TEXT) VALUES(?,?,?)";
jdbcTemplate.update(query, descriptionEntity.getID(), 1, descriptionEntity.getText());
That also throws a query timeout exception, whereas running the query directly in MySQL gives me ERROR 1048 (23000): Column 'TEXT' cannot be null.
This is, to put it politely, a bit of a pain.
It's not a spelling mistake in the first example; you left out the opening paren. I would say this isn't an issue with Spring or JDBC, but rather that your DB is trying to process the SQL, waiting for more input or something, and never returning.
In the second one, I am not sure what you are talking about, since I don't know the table design. I have to assume what you mean is that ID is not unique? Again, I wouldn't blame Spring or JDBC; maybe the driver, but most likely the database server.
Keep in mind that in a lot of cases the way SQL is handled in a client UI is not the same as how it gets handled through JDBC. For instance, in SQL Server the SQL is set as a string, the passed-in parameters are set as variables, and sp_executesql() is used to run it. I discovered that when I had a report that ran perfectly fine through the SQL Server Management Studio client but blew up when I ran it live, because the query plan optimizer took a different path due to the differences in how the SQL was run.
This is quite possibly the most stupid error I've ever come across: the issue was in how Maven resolved all the dependencies.
The Spring Security dependency was declared before the JDBC dependency. That made Spring Security pull down org.springframework:spring-tx:jar:3.0.7.RELEASE:compile, which satisfied the dependency for JDBC. Moving the JDBC dependency up meant JDBC pulled down org.springframework:spring-tx:jar:3.2.2.RELEASE:compile instead.
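For anyone hitting the same thing: a way to make the build independent of declaration order is to pin spring-tx explicitly. A sketch of the dependencyManagement entry (version as resolved above, rest of the pom omitted):
<!-- Sketch: pin spring-tx so Maven's "nearest wins" resolution cannot
     silently hand JDBC the 3.0.7 jar, regardless of declaration order. -->
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-tx</artifactId>
            <version>3.2.2.RELEASE</version>
        </dependency>
    </dependencies>
</dependencyManagement>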

OptimisticLockException with Ebean/Play

I have a Play 2.1.3 Java app using Ebean. I am getting the OptimisticLockException below.
[OptimisticLockException: Data has changed. updated [0] rows sql[update person
set name=? where id=? and email=? and name=? and password is null and created=?
and deleted is null] bind[null]]
I understand that it is trying to tell me the record has changed between when I read it and when I tried to write it. But the only change is happening in this method.
public void updateFromForm(Map<String, String[]> form) throws Exception {
    this.name = form.get("name")[0];
    String password = form.get("password")[0];
    if (password != null && password.length() != 0) {
        String hash = Password.getSaltedHash(password);
        this.password = hash;
    }
    this.update();
}
Am I doing this wrong? I saw similar logic in zentasks. Also, should I be able to see the values for the bind variables?
UPDATE: I am calling updateFromForm() from inside a controller:
@RequiresAuthentication(clientName = "FormClient")
public static Result updateProfile() throws Exception {
    final CommonProfile profile = getUserProfile();
    String email = getEmail(profile);
    Person p = Person.find.where().eq("email", email).findList().get(0);
    Map<String, String[]> form = request().body().asFormUrlEncoded();
    if (p == null) {
        Person.createFromForm(form);
    } else {
        p.updateFromForm(form);
    }
    return ok("HI");
}
I have an alternative approach to this, where I add the annotation
@EntityConcurrencyMode(ConcurrencyMode.NONE)
to the Entity class.
This disables the optimistic locking concurrent modification check meaning the SQL becomes
update person set name=? where id=?
This is even more optimistic since it simply overwrites any intermediate changes.
A little bit late, but in your case the @Version annotation should be the solution. We use it mostly with java.util.Date, so it can also be used to determine the date of the last record update; in a Play model that's just:
@Version
public java.util.Date version;
In that case the update statement will use the id and version fields only, which is especially useful with large models:
update person set name='Bob'
where id=1 and version='2014-03-03 22:07:35';
Note: you don't need to (and shouldn't) update this field manually at each save; Ebean does it itself. The version value changes ONLY when data was actually updated (so calling obj.update() when nothing has changed doesn't update the version field).
Mystery solved.
First, a public service announcement: "OptimisticLockException" is a big bucket. If you are trying to track one of these down, be open to the idea that it could really be anything.
I figured out my problem by dumping SQL to the log and finding this:
update person set name='Bob'
where id=1 and email='jj@test.com'
and name='Robert' and password is null
and created=2013-12-01 and deleted is null
So I guess what happens when you do an update is that it builds a WHERE clause with all the known properties and their values as they were originally read.
That means, if any other part of your code or another process changes something behind your back, this query will fail. I wrongly assumed that the problem was that somehow .setName('Bob') had changed the name in the DB or some object cache.
What was really happening is that the WHERE clause included only a date, while my database column holds a full timestamp with date, time, and timezone.
For now, I fixed it by just commenting out the timestamp in the model until I can figure out if/how Ebean can handle this data type.
I had the same problem. After hours of searching I found the reason: an inconsistency between the parameter type in the database (in my case a string) and the object I created and tried to save (a java.util.Date). After changing the database column to hold a datetime, the problem was solved.

JPA: querying FK

I'm using EclipseLink (JPA 2.0) under NetBeans 7.0 with JDK 7. To add more: this is a Java SE application.
I have these tables, Employee and Record, where the relation is Employee (1) --- (*) Record.
More about the structure of Record: RecordID (PK), EmployeeID (FK), Status, etc.
I want to query the Record table directly (not via the Employee -> Record collection) for the records that have a relation to a given employee.
I tried using this query, but it always throws an exception:
Exception Description: Error compiling the query [SELECT r FROM Record r WHERE
r.employeeid = :employeeid], unknown state or association field
[employeeid] of class [Record].
From the information given it's not completely clear, but I believe you need to reference the id inside the Employee object.
eg. the correct query is probably:
SELECT r FROM Record r WHERE r.employee.id = :employeeid
(notice the extra dot in employee.id)
If this doesn't work, please provide us with some actual code of your Java classes.
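For the r.employee.id path to resolve at all, the Record entity needs the FK mapped as an association rather than a plain column. A minimal sketch (annotation and column names guessed from your description):
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.JoinColumn;
import javax.persistence.ManyToOne;

@Entity
public class Record {

    @Id
    @GeneratedValue
    private Long recordId;

    // Mapping the FK column as an association is what lets JPQL navigate r.employee.id
    @ManyToOne
    @JoinColumn(name = "EMPLOYEEID")
    private Employee employee;

    private String status;

    // getters and setters omitted
}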
