Rounding issue when dividing a COUNT value by another value using jooq - java

When using SQL, I can run a simple query such as the query below with no issue, it returns an answer to 4 decimal places:
SELECT(COUNT(ID)/7) FROM myTable;
If the count above returns a value of 12, the returned select value is 1.7143 in workbench.
My issue occurs when I use jooq to do this calculation:
dsl.select(count(MYTABLE.ID).divide(7)).from(MYTABLE).fetch();
The above code returns me a value of 1, whereas I want a value of 1.7143.
I have similar jOOQ code that uses SUM instead of COUNT, and it returns values to 4 decimal places, but I cannot find a way to get the above code to do the same.
I have tried using .round() but had no success.
Has anyone else had a similar problem and knows of a solution?

There are two issues here, depending on what RDBMS you're using:
1. The type of the whole projected expression
The type of the whole division expression depends on the type of the left hand side (dividend), which is SQLDataType.INTEGER. So, irrespective of whether your RDBMS returns a decimal or floating point number, jOOQ will use JDBC's ResultSet.getInt() method to fetch the value, where you will be losing precision. So, the first step is to make sure jOOQ will fetch the desired data type. There are several ways to do this:
Use a cast on the COUNT(*) expression: count(MYTABLE.ID).cast(SQLDataType.DOUBLE).divide(7)
Use a cast on the entire expression:
count(MYTABLE.ID).divide(7).cast(SQLDataType.DOUBLE)
Use data type coercion on either expression: expr.coerce(SQLDataType.DOUBLE)
Casts have an effect on the generated SQL. Data type coercions do not.
2. How your RDBMS handles data types
In most RDBMS, count(*) expressions produce an integer type, and your division's right hand side (divisor) is also an integer, so the best resulting data type is, in fact, an integer type. I suspect you should pass a double or BigDecimal type as your divisor instead.
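The truncation is the same behaviour plain Java integer division has; a minimal illustration (the numbers are taken from the question) of why the fetched value becomes 1:

```java
public class IntDivisionDemo {
    public static void main(String[] args) {
        int count = 12;                  // what COUNT(ID) comes back as
        System.out.println(count / 7);   // int / int truncates the fraction: 1
        System.out.println(count / 7.0); // promoting one operand to double keeps it: ~1.7143
    }
}
```

The same promotion is what the cast or coercion to SQLDataType.DOUBLE achieves on the SQL side.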
Solution
The ideal solution would then be to combine the above two:
dsl.select(count(MYTABLE.ID).cast(SQLDataType.DOUBLE).divide(7.0))
.from(MYTABLE)
.fetch();

Related

How to compare two timestamp columns in a Gremlin Java query

I have a table containing two dates (both are timestamps), and I would like to select only the records having one greater than the other. Something like:
has('date_one', P.gt('date_two'))
Is there a way to do something like that in Gremlin traversals? I would have thought it would be obvious, but it seems I was wrong. Instead, the attempt fails with:
Caused by: org.postgresql.util.PSQLException: ERROR: operator does not exist: timestamp with time zone > character varying
Hint: No operator matches the given name and argument type(s). You might need to add explicit type casts.
So the second argument is not interpreted as the value of the column 'date_two'.
Based on the first answer, the full request becomes:
g.V().hasLabel('File').where(or (
__.out('has_Ref1').hasNot('date_one'),
__.out('has_Ref1').as('s1', 's2').where('s1', gt('s2')).by('date_two').by('date_one')))
.as('file').out('has_Ref1').as('ref1').out('has_Content').as('data').select('file','ref1','data')
But in this case: A where()-traversal must have at least a start or end label (i.e. variable): [OrStep([[VertexStep(OUT,[has_Ref1],vertex), NotStep([PropertiesStep([date_one],value)])], [VertexStep(OUT,[has_Ref1],vertex)#[s1, s2], WherePredicateStep(s1,gt(s2),[value(date_two), value(date_one)])]])]
I guess the second argument of the or clause must be a boolean. Then if I try to add '.hasNext()', I've got the following exception:
g.V().hasLabel('File').where(or (
__.out('has_Ref1').hasNot('date_one'),
__.out('has_Ref1').as('s1', 's2').where('s1', gt('s2')).by('date_two').by('date_one').hasNext()))
.as('file').out('has_Ref1').as('ref1').out('has_Content').as('data').select('file','ref1','data')
groovy.lang.MissingMethodException: No signature of method: static org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.__.or() is applicable for argument types: (org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.DefaultGraphTraversal...) values: [[VertexStep(OUT,[has_Ref1],vertex), NotStep([PropertiesStep([date_one],value)])], ...]
The has() step doesn't quite work that way: the P predicate deals with a constant value supplied to it and doesn't treat that value as a property key. You would want to use the where() step in this case, I think, so that you can take advantage of traversal induced values:
gremlin> g.addV('person').property('d1',1).property('d2',2).
......1> addV('person').property('d1',2).property('d2',1).iterate()
gremlin> g.V().as('x','y').where('x', gt('y')).by('d1').by('d2').elementMap()
==>[id:3,label:person,d1:2,d2:1]
Why don't you ask Postgres whether date_one is bigger than date_two:
SELECT '2019-11-20'::timestamp > '2019-11-01'::timestamp;
?column?
----------
t
(1 row)
Time: 10.246 ms
This gives you a boolean value.
I am assuming you are talking about columns in a single table.
I have mostly worked with graph databases through Gremlin, but I think the approach still fits.
First, do whatever you need to get to the records of concern:
g.V().has(...).has()...
then just project the same data twice by using project directly or using as & select
...as('named1', 'named2').select('named1')
it is creating two copies of same data in the flow and then you can compare them on different basis like
...where('named1', gt('named2')).by('value1').by('value2')
In plain English, the above means: bring me only those selected elements where value1 of named1 is greater than value2 of named2.
If you give only one by(), it will compare the same property on both sides.
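Putting those steps together for the original question, a sketch (untested; it assumes date_one and date_two are properties of the has_Ref1 neighbours, as in the question):

```groovy
// Traverse to the vertices of concern, alias them twice,
// then compare one property against the other via by() modulators
g.V().hasLabel('File').
  out('has_Ref1').as('named1', 'named2').
  where('named1', gt('named2')).by('date_one').by('date_two')
```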

Compare date if not null

I need to find all the records with create date > X. X is a sql.Timestamp and might be null, in which case I want to just return all the records. So I tried (createdAfter is a Timestamp):
SELECT *
FROM sample AS s
WHERE s.isActive
AND (:createdAfter ISNULL OR s.insert_time > :createdAfter)
But all I'm getting is
org.postgresql.util.PSQLException: ERROR: could not determine data type of parameter $1
However, if I'll do the same query where I'm checking for an arbitrary int to be null:
SELECT *
FROM trades
WHERE (:sInt ISNULL OR trades.insert_time > :createdAfter )
Then it works. What's wrong?
There is no simple solution if you want to stick with native queries like that. The null value is converted to a bytea value. See for example this and this.
That value is quite hard to cast or compare to a timestamp value.
The problem is not so much the first comparison; that could be handled with COALESCE, like:
COALESCE(:createdAfter) ISNULL
because no actual values are compared there, so the data type does not matter. But the comparison
sometimestamp::timestamp > null::bytea (casts just to show the actual types so not working)
would need more logic behind it, maybe a procedure with exception handling or similar; not sure.
So if JPQL or Criteria queries are not possible for you, there are only bad options:
construct the query by string concatenation or similar (NOT recommended! and not sure it really works)
use PreparedStatement queries, more code & effort
also if using Hibernate, using session api like in this answer
You can try using the pg_typeof function, which returns a text string, and using a CASE statement to force which comparisons are made (otherwise there's no guarantee that postgres will short-circuit the OR in the correct order). You can then force the correct conversion by converting to text and then back to timestamp, which is inelegant but should be effective.
SELECT *
FROM sample AS s
WHERE s.isActive
AND
CASE WHEN pg_typeof( :createdAfter ) = 'bytea' THEN TRUE
WHEN s.insert_time > ( ( :createdAfter )::text)::timestamp THEN TRUE
ELSE FALSE
END
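If you can tolerate two statements instead of one, another workaround is to branch in Java so an untyped NULL parameter never reaches the driver at all. A minimal sketch (table and column names taken from the question; the helper method is illustrative):

```java
import java.sql.Timestamp;

public class SampleQueries {
    // Choose the SQL in Java instead of binding a NULL parameter:
    // Postgres then never has to infer the parameter's type.
    static String buildQuery(Timestamp createdAfter) {
        String base = "SELECT * FROM sample AS s WHERE s.isActive";
        return createdAfter == null
                ? base
                : base + " AND s.insert_time > ?";
    }
}
```

In the non-null branch, bind the parameter with PreparedStatement.setTimestamp() as usual.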

Postgres data type cast

My database is Postgres 8. I need to cast one data type to another: one of my columns is a varchar and I need to cast it to int in a SELECT statement.
Currently, I get the string value and cast it into int in Java.
Is there any way to do it? Sample code would be highly appreciated.
cast(varchar_col AS int) -- SQL standard
or
varchar_col::int -- Postgres syntax shorthand
These syntax variants are valid (almost) anywhere. The second may require nesting parentheses in special situations:
PostgreSQL: Create index on length of all table fields
And the first may be required where only functional notation is allowed by syntax restrictions:
PostgreSQL - CAST vs :: operator on LATERAL table function
There are two more variants:
int4(varchar_col) -- only works for some type names
int '123' -- must be an untyped, quoted string literal
Note how I wrote int4(varchar_col). That's the internal type name and there is also a function defined for it. Wouldn't work as integer() or int().
Note also that the last form does not work for array types. int[] '{1,2,3}' has to be '{1,2,3}'::int[] or cast('{1,2,3}' AS int[]).
Details in the manual here and here.
To be valid for integer, the string must be comprised of an optional leading sign (+/-) followed by digits only. Leading / trailing white space is ignored.
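Since the question mentions currently doing the conversion in Java, that same validity rule maps to plain Java roughly like this (the helper name is mine):

```java
public class PgIntParse {
    // Mirrors Postgres's rule for casting text to integer:
    // optional leading +/- sign, digits only, surrounding whitespace ignored.
    static int parseLikePostgres(String s) {
        return Integer.parseInt(s.trim());
    }
}
```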

resultSet.getInt() return value on a string containing two ints

I need to parse the value of a database column that generally contains integers, based on the ResultSet generated from a JDBC call. However, one particular row of the column has two integers in it (i.e., "48, 103"). What will be the return value of resultSet.getInt() on that column?
It will throw an exception.
I think you are taking the wrong approach here. The getXXX() method is supposed to match the data type of the column. Is the data type of the column VARCHAR? In that case you should use getString() to get the data and then parse it with String.split(",") if the comma exists (you can use String.indexOf() to verify whether the comma is there or not).
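That split-and-parse suggestion could look like this (the class and helper names are illustrative):

```java
import java.util.Arrays;

public class MixedColumn {
    // Parse a VARCHAR value such as "48, 103" into its integers,
    // trimming the whitespace around each piece before parsing.
    static int[] parseInts(String raw) {
        return Arrays.stream(raw.split(","))
                .map(String::trim)
                .mapToInt(Integer::parseInt)
                .toArray();
    }
}
```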
You'll almost certainly get a SQLException (or possibly a NumberFormatException). The actual interface just says that the result set will return "the value of the designated column... as an int". The exact details will be implementation-specific, but I doubt you'll get anything sensible from a value of "48, 103".
(Personally I think it's an error if the driver lets you call getInt on that column in any case, even for "sensible" values. A string is not an int, even if it's a string representation of an int, and the conversion should be done manually by the developer.)
I'd expect it to throw an exception. If it does give you a value, it won't be what you want. I'd get the values as strings and parse them, splitting on commas and trimming spaces.
I believe it's a NumberFormatException.

How to perform a Restrictions.like on an integer field

I have an integer field in the DB (Postgresql) and my hibernate mapping file that I want to use in a like operation (e.g. Restrictions.like(Bean.fieldname,'123')).
The database does not support LIKE on integers without explicit type casting, e.g. select * from table where text(myint) like '1%'. Ideally, I'd like to keep the DB field type and the Hibernate property type as integers, and not have to load all the rows from the DB just to iterate through them in Java code.
cheers :)
If the value really is a number, I'd just restrict it to a range - e.g. greater than or equal to 100 and less than 200. I wouldn't have thought you'd really want "all numbers starting with 1" - that suggests that 1 and 10000 are similar, whereas 1 and 2 are totally different. The information in a number should almost always relate to its magnitude, not the digits from its decimal representation.
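To see why "all numbers starting with 1" is an odd predicate, note that it is the union of the ranges [1,1], [10,19], [100,199], and so on. A small sketch (helper name is mine; positive inputs assumed):

```java
public class LeadingDigit {
    // "Decimal representation starts with 1" for positive ints:
    // strip trailing digits until only the leading one remains.
    static boolean startsWithOne(int n) {
        while (n >= 10) n /= 10;
        return n == 1;
    }
}
```

So 1 and 10000 match while 2 does not, which is rarely the grouping you actually want; range restrictions express the magnitude-based intent directly.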
Why do you need a LIKE? It's a very strange comparison; that's also why there is no such operator for integers.
You could cast the value in the database to text/varchar, but you will kill performance unless you create a special index as well.
Restrictions.sqlRestriction("CAST({alias}.myint AS CHAR) like ?", "%1%", Hibernate.STRING)