I want to build the SELECT field list and the WHERE clause dynamically, with OR conditions, using the squiggle-sql API.
Please use more than two fields in the example.
Select field1, field2, field3, field4, ...
from t1,t2,t3
where t1.field1 = t2.field1 and t1.field1 = t3.field1
where t1.field=? OR t2.field3=? OR t3.field2=?
Please suggest.
I have just discovered Squiggle. It seems to be very similar to jOOQ (of which I am the developer). In jOOQ, you could write the following (I'm sure Squiggle offers similar functionality):
List<Field<?>> fields = new ArrayList<Field<?>>();
fields.add(field1);
fields.add(field2);
// ... add more fields here
Condition condition = T1.field.equal(...);
condition = condition.or(T2.field3.equal(...));
condition = condition.or(T3.field2.equal(...));
// ... connect more conditions here
DSL.using(configuration)
   .select(fields)
   .from(t1, t2, t3)
   .where(t1.field1.equal(t2.field1))
   .and(t2.field1.equal(t3.field1))
   .and(condition);
For more information, see http://www.jooq.org
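As an aside, if the number of OR terms is itself data-driven, the condition used above can be folded together in a loop before it is passed to the query. A minimal sketch with placeholder table/field names (T1.FIELD and searchValues are not from the question; org.jooq.Condition and org.jooq.impl.DSL imports assumed):
// Fold a dynamic number of OR terms into a single Condition.
Condition condition = DSL.falseCondition();     // neutral element for OR
for (String value : searchValues) {             // searchValues: your dynamic inputs
    condition = condition.or(T1.FIELD.eq(value));
}
// Then pass it on exactly as above: .where(...).and(...).and(condition)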
Two WHERE clauses in the same SELECT will produce an error. Do you mean this?
Select field1, field2, field3, field4, ...
from t1,t2,t3
where t1.field1 = t2.field1 and t1.field1 = t3.field1
AND (t1.field=? OR t2.field3=? OR t3.field2=?)
Using jOOQ, I am trying to fetch from a table by id first; if no matches are found, then fetch by handle instead.
And I want all fields of the returned rows, not just one.
Field<?> firstMatch = DSL.select(Tables.MY_TABLE.fields())
    .from(Tables.MY_TABLE)
    .where(Tables.MY_TABLE.ID.eq(id))
    .asField(); // This is wrong, because it supports only one field, but above we selected Tables.MY_TABLE.fields(), which is plural.
Field<?> secondMatch = DSL.select(Tables.MY_TABLE.fields())
    .from(Tables.MY_TABLE)
    .where(Tables.MY_TABLE.HANDLE.eq(handle))
    .asField(); // Same as above.
dslContext.select(DSL.coalesce(firstMatch, secondMatch))
.fetchInto(MyClass.class);
Due to the mistake mentioned above in the code, the following error occurs:
Can only use single-column ResultProviderQuery as a field
I am wondering how to make firstMatch and secondMatch two lists of fields, instead of two fields?
I tried
Field<?>[] secondMatch = DSL.select(Tables.MY_TABLE.fields())
    .from(Tables.MY_TABLE)
    .where(Tables.MY_TABLE.HANDLE.eq(handle))
    .fields();
but the following error occurred on the line containing DSL.coalesce:
Type interface org.jooq.Field is not supported in dialect DEFAULT
Thanks in advance!
This sounds much more like something you'd do with a simple OR?
dslContext.selectFrom(MY_TABLE)
          .where(MY_TABLE.ID.eq(id))
          // The ne(id) part might not be required...
          .or(MY_TABLE.ID.ne(id).and(MY_TABLE.HANDLE.eq(handle)))
          .fetchInto(MyClass.class);
If the two result sets should be completely exclusive, then you can do this:
dslContext.selectFrom(MY_TABLE)
.where(MY_TABLE.ID.eq(id))
.or(MY_TABLE.HANDLE.eq(handle).and(notExists(
selectFrom(MY_TABLE).where(MY_TABLE.ID.eq(id))
)))
.fetchInto(MyClass.class);
If on your database product, a query using OR doesn't perform well, you can write an equivalent query with UNION ALL, which might perform better.
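For example, the exclusive variant above could be expressed with UNION ALL roughly like this (a sketch, not tested against your schema; static import of DSL.selectFrom assumed as in the snippets above):
// Prefer the id match; otherwise fall back to the handle match.
dslContext.selectFrom(MY_TABLE)
          .where(MY_TABLE.ID.eq(id))
          .unionAll(
              selectFrom(MY_TABLE)
                  .where(MY_TABLE.HANDLE.eq(handle))
                  .andNotExists(selectFrom(MY_TABLE).where(MY_TABLE.ID.eq(id))))
          .fetchInto(MyClass.class);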
I am trying to use Pagination with EntityManager.createNativeQuery(). Below is the skeleton code that I am using:
var query = em.createNativeQuery("select distinct id from ... group by ... having ...");
List<BigDecimal> results = query
.setMaxResults(pageSize)
.setFirstResult(pageNumber * pageSize)
.getResultList();
When pageNumber is 0 (the first page), I get the expected List of BigDecimals.
But as soon as pageNumber > 0 (for example, the second page), I get a List of Objects, and each object in this list seems to contain two BigDecimals: the first contains the value from the DB, and the second seems to be the position of the row.
and consequently I get this exception:
java.lang.ClassCastException: class [Ljava.lang.Object; cannot be cast to class java.math.BigDecimal
Can someone please explain this discrepancy, and how it can be fixed to always return a List of BigDecimals? Thank you.
Update-1: I have created a sample project to reproduce this issue. I was able to reproduce it only with an Oracle database. With an H2 database, it worked fine, and I consistently got a list of BigDecimals irrespective of the page number.
Update-2: I have also created a sample project with H2 where it works without this issue.
The problem that you are running into is that your OracleDialect adds a column to the ResultSet it selects. It wraps the query that you are running, as discussed in SternK's answer.
If you were using the Hibernate SessionFactory and Session interfaces, the function you would be looking for would be the addScalar method. Unfortunately, there doesn't seem to be an equivalent in pure JPA (see the question asked here: Does JPA have an equivalent to Hibernate SQLQuery.addScalar()?).
I would expect your current implementation to work just fine in DB2, H2, HSQL, Postgres, MySQL (and a few other DB engines). However, in Oracle, the dialect adds a row-number column to the ResultSet, which means that Hibernate gets 2 columns back. Hibernate does not do any query parsing in this case; it simply maps the ResultSet into your List. Since it gets 2 values per row, it converts them into an Object[] rather than a BigDecimal.
As a caveat, relying on the JDBC driver to provide the expected data type is a bit dangerous, since Hibernate will ask the JDBC driver which data type it suggests. In this case, it suggests a BigDecimal, but under certain conditions certain implementations would be allowed to return a Double or some other type.
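To make the difference concrete, this is roughly what the calling code observes in the two cases (a sketch; the query text is the placeholder from the question, java.math.BigDecimal and java.util imports are assumed, and the defensive cast is only an illustration, not the recommended fix):
// Page 0: each row is a single BigDecimal.
// Page > 0 on Oracle: each row is an Object[] of {id, rownum_}, so a blind
// cast to BigDecimal fails with the ClassCastException from the question.
List<?> rows = em.createNativeQuery("select distinct id from ... group by ... having ...")
                 .setMaxResults(pageSize)
                 .setFirstResult(pageNumber * pageSize)
                 .getResultList();

List<BigDecimal> ids = new ArrayList<>();
for (Object row : rows) {
    ids.add(row instanceof Object[]
            ? (BigDecimal) ((Object[]) row)[0]   // keep only the first column
            : (BigDecimal) row);
}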
You have a couple of options then.
You can modify your Oracle dialect (as SternK suggests). This takes advantage of an alternate Oracle paging implementation.
If you are not opposed to having Hibernate-specific aspects in your JPA implementation, then you can take advantage of additional Hibernate functions that are not offered in the JPA standard. (See the following code...)
List<BigDecimal> results = entitymanager.createNativeQuery("select distinct id from ... group by ... having ...")
.unwrap(org.hibernate.query.NativeQuery.class)
.addScalar("id", BigDecimalType.INSTANCE)
.getResultList();
System.out.println(results);
This has the advantage of explicitly telling Hibernate that you are only interested in the "id" column of your ResultSet, and that Hibernate needs to explicitly convert the returned object to a BigDecimal, should the JDBC driver decide that a different type would be more appropriate as a default.
The root cause of your problem is the way pagination is implemented in your Hibernate Oracle dialect.
There are two cases:
When we have setFirstResult(0), the following SQL will be generated:
-- setMaxResults(5).setFirstResult(0)
select * from (
select test_id from TST_MY_TEST -- this is your initial query
)
where rownum <= 5;
As you can see, this query returns exactly the same column list as your initial query, and therefore you do not have a problem in this case.
When we set setFirstResult to a non-zero value, the following SQL will be generated:
-- setMaxResults(5).setFirstResult(2)
select * from (
select row_.*, rownum rownum_
from (
select test_id from TST_MY_TEST -- this is your initial query
) row_
where rownum <= 5
)
where rownum_ > 2
As you can see, this query returns the column list with an additional rownum_ column, and therefore you do have the problem of casting this result set to BigDecimal.
Solution
If you use Oracle 12c R1 (12.1) or higher, you can override this behavior in your dialect using the new row limiting clause, like this:
import org.hibernate.dialect.Oracle12cDialect;
import org.hibernate.dialect.pagination.AbstractLimitHandler;
import org.hibernate.dialect.pagination.LimitHandler;
import org.hibernate.dialect.pagination.LimitHelper;
import org.hibernate.engine.spi.RowSelection;

public class MyOracleDialect extends Oracle12cDialect
{
    private static final AbstractLimitHandler LIMIT_HANDLER = new AbstractLimitHandler() {
        @Override
        public String processSql(String sql, RowSelection selection) {
            final boolean hasOffset = LimitHelper.hasFirstRow(selection);
            final StringBuilder pagingSelect = new StringBuilder(sql.length() + 50);
            pagingSelect.append(sql);
            /*
             see the documentation https://docs.oracle.com/database/121/SQLRF/statements_10002.htm#BABHFGAA
             (Restrictions on the row_limiting_clause)
             You cannot specify this clause with the for_update_clause.
            */
            if (hasOffset) {
                pagingSelect.append(" OFFSET ? ROWS");
            }
            pagingSelect.append(" FETCH NEXT ? ROWS ONLY");
            return pagingSelect.toString();
        }

        @Override
        public boolean supportsLimit() {
            return true;
        }
    };

    public MyOracleDialect()
    {
    }

    @Override
    public LimitHandler getLimitHandler() {
        return LIMIT_HANDLER;
    }
}
and then use it.
<property name="hibernate.dialect">com.me.MyOracleDialect</property>
For my test data set and the following query:
NativeQuery query = session.createNativeQuery(
"select test_id from TST_MY_TEST"
).setMaxResults(5).setFirstResult(2);
List<BigDecimal> results = query.getResultList();
I got:
Hibernate:
/* dynamic native SQL query */
select test_id from TST_MY_TEST
OFFSET ? ROWS FETCH NEXT ? ROWS ONLY
val = 3
val = 4
val = 5
val = 6
val = 7
P.S. See also HHH-12087
P.P.S. I simplified my implementation of the AbstractLimitHandler by removing the check for the presence of a FOR UPDATE clause. I don't think that check would do us any good in this case.
For example, for the following case:
NativeQuery query = session.createNativeQuery(
"select test_id from TST_MY_TEST FOR UPDATE OF test_id"
).setMaxResults(5).setFirstResult(2);
Hibernate (with Oracle12cDialect) will generate the following SQL:
/* dynamic native SQL query */
select * from (
select
row_.*,
rownum rownum_
from (
select test_id from TST_MY_TEST -- initial sql without FOR UPDATE clause
) row_
where rownum <= 5
)
where rownum_ > 2
FOR UPDATE OF test_id -- moved for_update_clause
As you can see, Hibernate tries to fix the query by moving FOR UPDATE to the end of the query. But we will still get:
ORA-02014: cannot select FOR UPDATE from view with DISTINCT, GROUP BY, etc.
I've simulated your query and everything works fine. I've used DataJpaTest to instantiate the entityManager for me, an H2 in-memory database, and JUnit 5 to run the test. See below:
@Test
public void shouldGetListOfSalaryPaginated() {
    // given
    Person alex = new Person("alex");
    alex.setSalary(BigDecimal.valueOf(3305.33));
    Person john = new Person("john");
    john.setSalary(BigDecimal.valueOf(33054.10));
    Person ana = new Person("ana");
    ana.setSalary(BigDecimal.valueOf(1223));
    entityManager.persist(alex);
    entityManager.persist(john);
    entityManager.persist(ana);
    entityManager.flush();
    entityManager.clear();

    // when
    List<BigDecimal> found = entityManager.createNativeQuery("SELECT salary FROM person")
            .setMaxResults(2)
            .setFirstResult(2 * 1)
            .getResultList();

    // then
    Assertions.assertEquals(found.size(), 1);
    Assertions.assertEquals(found.get(0).longValue(), 1223L);
}
I suggest that you review your native query. It's preferable to use the Criteria API instead and leave native queries for extreme cases like very complex queries.
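For reference, the same paginated salary query could be written with the Criteria API roughly like this (a sketch based on the Person entity from the test above; it assumes entityManager is a plain JPA EntityManager and that the attribute is named salary):
// imports assumed: javax.persistence.criteria.CriteriaBuilder, CriteriaQuery, Root
CriteriaBuilder cb = entityManager.getCriteriaBuilder();
CriteriaQuery<BigDecimal> cq = cb.createQuery(BigDecimal.class);
Root<Person> person = cq.from(Person.class);
cq.select(person.<BigDecimal>get("salary"));

List<BigDecimal> salaries = entityManager.createQuery(cq)
        .setFirstResult(2)   // same paging as the native query above
        .setMaxResults(2)
        .getResultList();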
Update
After the author posted the project, I could reproduce the problem, and it is related to the Oracle dialect. For an unknown reason, the query that runs for the second call is select * from ( select row_.*, rownum rownum_ from ( SELECT c.SHOP_ID FROM CUSTOMER c ) row_ where rownum <= ?) where rownum_ > ?, and that is what generates the bug: it queries two columns instead of only one, the undesired one being this rownum. For other dialects there is no such problem.
I suggest you try other Oracle dialect versions and, if none of them work, my final tip is to do the pagination yourself.
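"Doing the pagination yourself" could mean writing the limits into the native SQL instead of relying on the dialect's wrapping, so no rownum_ column is ever added. A sketch, assuming Oracle 12c or newer for the row-limiting clause and Hibernate accepting ?1-style positional parameters in native queries (the query text is the placeholder from the question):
// Pagination baked into the SQL itself, bypassing setFirstResult()/setMaxResults().
List<BigDecimal> results = em.createNativeQuery(
        "select distinct id from ... group by ... having ... "
      + "offset ?1 rows fetch next ?2 rows only")
    .setParameter(1, pageNumber * pageSize)
    .setParameter(2, pageSize)
    .getResultList();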
After a lot of trials with different versions of different Spring libraries, I was finally able to figure out the issue. In one of my attempts, the issue disappeared as soon as I updated the spring-data-commons library from v2.1.5.RELEASE to v2.1.6.RELEASE. I looked up the changelog of that release, and a bug fixed there, which is related to an underlying bug in spring-data-commons, is the root cause of this issue. I was able to fix the issue by upgrading the spring-data-commons library.
I have a SQL query like:
select X from "myTable" where (cond1 AND cond2) OR (cond3 AND cond4)...
How many (cond AND cond) groups can I have in my WHERE clause? I get a StackOverflowError with my 24578 conditions.
final List update = xService.getCountUpdate(couple);
couple is a list built this way (this is done for every line of my file):
dcIssn = new ArrayList<String>();
dcIssn.add(0, row.getCell(dc).getStringCellValue());
dcIssn.add(1, row.getCell(issn).getStringCellValue());
couple.add(dcIssn);
I solved my problem.
The problem was not my query, just my StringBuilder, which was too short for the query.
Thank you for helping me =)
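For reference, the kind of dynamic WHERE clause described here can be assembled roughly like this (a sketch; couple is the List of [dc, issn] pairs from the question, dc and issn are assumed column names, and the capacity hint merely avoids repeated resizing, since a StringBuilder grows automatically):
// Builds "(dc = ? AND issn = ?) OR (dc = ? AND issn = ?) OR ..." for every pair
// in couple, collecting the bind values alongside the SQL text.
List<Object> params = new ArrayList<>();
StringBuilder sql = new StringBuilder(couple.size() * 32)
        .append("select X from \"myTable\" where ");
for (int i = 0; i < couple.size(); i++) {
    List<String> pair = couple.get(i);
    if (i > 0) {
        sql.append(" OR ");
    }
    sql.append("(dc = ? AND issn = ?)");
    params.add(pair.get(0));
    params.add(pair.get(1));
}
// sql.toString() and params are then handed to whatever executes the query.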
There is no limit to the number of predicates that can be included in a search condition. For more information about search conditions and predicates, see:
https://technet.microsoft.com/en-us/library/ms189575(v=sql.105).aspx
Note: This may be a simple question, but I can't find a short way to make it clear. So, sorry for the long question.
In a project I am responsible for, I use Spring 2.5 and Hibernate 3.3.2 as middleware, with an Oracle database. As the database is shared with many other projects, some queries are really very complicated, and I can't express them with Hibernate's facilities (HQL, Criteria, etc.). So I feel more comfortable with JdbcTemplate's queryForX() methods, for example:
String sql = "select * from myTable";
jdbc.queryForList(sql);
Of course there are usually WHERE conditions, and therefore parameters:
jdbc.queryForList(sql, new Object[]{obj1, obj2, obj3 /* and many more arguments... */})
In this case, I must write question marks "?" for my parameters, so my SQL query string turns out somewhat messy and hard to read; something like this:
select t1.col1, t2.col2, t1.col, --...some cols ,
sum(nvl(some_col1,?)-nvl(other_col2,?)) over (partition by col1,col2,col3,col4) sum_of_cols
from weird_table t1, another_table t2
where t1.col20=? and sum_of_cols>? and t1.col3=t2.col3 --and many ?'s...
and not exists (
select ? from boring_table t3 where -- many ?'s
)
--group by and order by order by etc
So now, which question mark is for which parameter? It is obvious, but hard to read. There are other solutions with bound (named) parameters, like:
select * from a_table t where t.col1= :col1 and t.col2= :col2 -- and many more ":param"s
For this type of query, if it were Hibernate, we could write:
Query q = hibernateTemplate.createQuery();
q.setString("col1","a value");
q.setInteger("col2", 3);
I think it is more readable and easy to understand which value is what. I know I can do this with SQLQuery:
SQLQuery sq = hibernateTemplate.createSQLQuery();
/* same as above setInteger() etc. */
But sq.list() gives me a list without column names, so I get a basic array which is difficult to use:
[[1,2,"a"],[1,2,"b"], ...]
But with queryForList() I get a better one:
[{COL1=1,COL2=2,COL3="a"},{COL1=1,COL2=2,COL3="b"},...]
So if I use queryForList(), I must write a very messy params Object[];
or I use SQLQuery, and then I get my list without column-name maps.
Is there a simple solution that gives a mapped list and uses more readable parameter setting (like query.setX())?
Well, you can use NamedParameterJdbcTemplate to do just that.
Here's a sample:
String query = "INSERT INTO FORUMS (FORUM_ID, FORUM_NAME, FORUM_DESC)
VALUES (:forumId,:forumName,:forumDesc)";
Map namedParameters = new HashMap();
namedParameters.put("forumId", Integer.valueOf(forum.getForumId()));
namedParameters.put("forumName", forum.getForumName());
namedParameters.put("forumDesc", forum.getForumDesc());
namedParameterJdbcTemplate.update(query, namedParameters);
You can check the complete example with source code at the link below:
Spring NamedParameterJdbcTemplate Tutorial
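Since the question specifically asks for a mapped result list, note that NamedParameterJdbcTemplate also offers queryForList(sql, paramMap), which returns a List<Map<String, Object>> keyed by column name, just like JdbcTemplate.queryForList(). A small sketch reusing the column names from the question:
// Named parameters plus a column-name-keyed result list.
String sql = "select t.col1, t.col2, t.col3 from a_table t "
           + "where t.col1 = :col1 and t.col2 = :col2";
Map<String, Object> params = new HashMap<String, Object>();
params.put("col1", "a value");
params.put("col2", 3);
// Each row comes back as a Map like {COL1=..., COL2=..., COL3=...}
List<Map<String, Object>> rows = namedParameterJdbcTemplate.queryForList(sql, params);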
How is it possible?
We are executing EJBQL on TopLink (the DB is Oracle) and query.getResultList() is empty.
But!
When I switched the log level to FINE and got the SQL query that TopLink generates, I tried to execute this query on the database and (miracle!) I got a non-empty result!
What could be the reason, and how can it be fixed?
Thanks in advance!
P.S. No exceptions.
UPDATE:
Query log:
SELECT DISTINCT t0.ID, t0.REG_NUM, t0.REG_DATE, t0.OBJ_NAME, t1.CAD_NUM, t1.CAD_NUM_EGRO, t2.ID, t2.DICT_TYPE, t2.ARCHIVE_DATE, t2.IS_DEFAULT, t2.IS_ACTUAL, t2.NAME, t0.INVENTORY_NUM FROM CODE_NAME_TREE_DICTIONARY t3, DEFAULTABLE_DICTIONARY t2, IMMOVABLE_PROP t1, ABSTRACT_PROPERTY t0 WHERE ((t3.ID IN (SELECT DISTINCT t4.ID FROM CODE_NAME_TREE_DICTIONARY t5, CODE_NAME_TREE_DICTIONARY t4, type_property_parents t6 WHERE (((t5.ID = ?) AND (t4.DICT_TYPE = ?)) AND ((t6.type_property_id = t4.ID) AND (t5.ID = t6.parent_id)))) AND ((t1.ID = t0.ID) AND (t0.PROP_TYPE_DISCR = ?))) AND ((t3.ID = t0.PROP_TYPE) AND ((t2.ID (+) = t1.STATUS_ID) AND (t2.DICT_TYPE = ?)))) ORDER BY t0.REG_NUM ASC
bind => [4537, R, R, realty_status]|#]
This query returns 100k rows, but TopLink believes that it is empty...
With the log level at FINE, can you verify that you are connecting to the same database? How simple is your test case; can you verify that it is this exact JPQL that is being translated to that SQL?
VPD (http://download.oracle.com/docs/cd/B28359_01/network.111/b28531/vpd.htm)? Policies?
Is something of this flavor defined on the schema? These features transparently add dynamic where clauses to the statement that is executed in the database session, so the query results depend on the state of the session in this case.
When reformatting the query, the following conditions seemed strange:
AND t2.ID (+) = t1.STATUS_ID
AND t2.DICT_TYPE = ?
The (+) indicates an outer join of t2 (DEFAULTABLE_DICTIONARY), but this table seems to be non-optional since it has to have a non-null DICT_TYPE for the second condition.
On closer inspection, the bind parameters also seem to be off; the fields are, in order:
CODE_NAME_TREE_DICTIONARY.ID
CODE_NAME_TREE_DICTIONARY.DICT_TYPE
ABSTRACT_PROPERTY.PROP_TYPE_DISCR
DEFAULTABLE_DICTIONARY.DICT_TYPE
With the given parameters (4537, R, R, realty_status), the first DICT_TYPE would be 'R', while the second is the string "realty_status", which seems inconsistent.
Transactions? Oracle never gives you a "dirty read", which is database speak for access to uncommitted data. If you send data on one connection, you cannot access it on any other connection until it is committed. If you try the query later by hand, the data has been committed and you get the expected result.
This situation can arise if you are updating the data in more than one connection, and the data manipulation is not set to "auto commit". JPA defaults to auto-commit, but flushing at transaction boundaries can give you a cleaner design.
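As an illustration of that commit boundary (a sketch with a made-up MyEntity and an EntityManagerFactory emf, not taken from the question, using a resource-local transaction):
// Data written on one connection is invisible to other connections
// until the writing transaction commits.
EntityManager writer = emf.createEntityManager();
writer.getTransaction().begin();
writer.persist(new MyEntity());      // pending: visible only to this transaction
writer.getTransaction().commit();    // now other connections can see the row
writer.close();

// A query on a different EntityManager (hence a different connection)
// only finds the row after the commit above.
EntityManager reader = emf.createEntityManager();
List<MyEntity> rows = reader
        .createQuery("SELECT e FROM MyEntity e", MyEntity.class)
        .getResultList();
reader.close();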
I can't tell exactly, but I am a little surprised that the string parameters are not quoted. Is it possible that interactively there are some automatic conversions, but over this connection, instead of the string 'R', it was converted to the integer ASCII code for R?
I found the reason!
The reason is Oracle! I tried the same code on Postgres and it worked!
I don't know why, but in some magic cases Oracle ignores query parameters and the query returns an empty result.