Postgres data type cast - java

My database is Postgres 8. I need to cast one data type to another: one of my columns is varchar and I need to cast it to int with Postgres in a SELECT statement.
Currently I get the string value and convert it to int in Java.
Is there any way to do it? Sample code would be highly appreciated.

cast(varchar_col AS int) -- SQL standard
or
varchar_col::int -- Postgres syntax shorthand
These syntax variants are valid (almost) anywhere. The second may require nesting parentheses in special situations:
PostgreSQL: Create index on length of all table fields
And the first may be required where only functional notation is allowed by syntax restrictions:
PostgreSQL - CAST vs :: operator on LATERAL table function
There are two more variants:
int4(varchar_col) -- only works for some type names
int '123' -- must be an untyped, quoted string literal
Note how I wrote int4(varchar_col). That's the internal type name and there is also a function defined for it. Wouldn't work as integer() or int().
Note also that the last form does not work for array types. int[] '{1,2,3}' has to be '{1,2,3}'::int[] or cast('{1,2,3}' AS int[]).
Details in the manual here and here.
To be valid for integer, the string must consist of an optional leading sign (+/-) followed by digits only. Leading/trailing white space is ignored.
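If the goal is simply to avoid the manual conversion in Java, the cast can live inside the SELECT, so JDBC already reports the column as an integer. A minimal sketch, assuming a table my_table with a varchar_col column and an open JDBC Connection (the table, column and key names are assumptions):
import java.sql.*;

// Minimal sketch: table name, column names and the lookup key are assumptions.
static int readAsInt(Connection conn, long id) throws SQLException {
    String sql = "SELECT varchar_col::int AS int_val FROM my_table WHERE id = ?";
    try (PreparedStatement ps = conn.prepareStatement(sql)) {
        ps.setLong(1, id);
        try (ResultSet rs = ps.executeQuery()) {
            rs.next();
            return rs.getInt("int_val"); // the cast happened in SQL, no parsing needed in Java
        }
    }
}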

Related

How to compare two timestamp columns in a Gremlin Java query

Is there a way to do something like that in Gremlin traversals?
I would have thought it would be obvious, but it seems I was wrong.
I have a table containing two dates (both are timestamps), and I would like to select only the records where one is greater than the other. Something like:
has('date_one', P.gt('date_two'))
Caused by: org.postgresql.util.PSQLException: ERROR: operator does not exist: timestamp with time zone > character varying
Hint: No operator matches the given name and argument type(s). You might need to add explicit type casts.
So the second argument is not translated into the value of the column 'date_two'.
Based on the first answer, the full query becomes:
g.V().hasLabel('File').where(or (
__.out('has_Ref1').hasNot('date_one'),
__.out('has_Ref1').as('s1', 's2').where('s1', gt('s2')).by('date_two').by('date_one')))
.as('file').out('has_Ref1').as('ref1').out('has_Content').as('data').select('file','ref1','data')
But in this case: A where()-traversal must have at least a start or end label (i.e. variable): [OrStep([[VertexStep(OUT,[has_Ref1],vertex), NotStep([PropertiesStep([date_one],value)])], [VertexStep(OUT,[has_Ref1],vertex)#[s1, s2], WherePredicateStep(s1,gt(s2),[value(date_two), value(date_one)])]])]
I guess the second argument of the or clause must be a boolean. Then, if I try to add '.hasNext()', I get the following exception:
g.V().hasLabel('File').where(or (
__.out('has_Ref1').hasNot('date_one'),
__.out('has_Ref1').as('s1', 's2').where('s1', gt('s2')).by('date_two').by('date_one').hasNext()))
.as('file').out('has_Ref1').as('ref1').out('has_Content').as('data').select('file','ref1','data')
groovy.lang.MissingMethodException: No signature of method: static org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.__.or() is applicable for argument types: (org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.DefaultGraphTraversal...) values: [[VertexStep(OUT,[has_Ref1],vertex), NotStep([PropertiesStep([date_one],value)])], ...]
The has() step doesn't quite work that way as the P deals with a constant value supplied to it and doesn't treat that value as a property key. You would want to use where() step in this case I think so that you could take advantage of traversal induced values:
gremlin> g.addV('person').property('d1',1).property('d2',2).
......1> addV('person').property('d1',2).property('d2',1).iterate()
gremlin> g.V().as('x','y').where('x', gt('y')).by('d1').by('d2').elementMap()
==>[id:3,label:person,d1:2,d2:1]
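The same pattern can be written from Java against the TinkerPop API; the following is a sketch using an in-memory TinkerGraph with the same d1/d2 property keys as the console example above (the graph setup and property names are assumptions):
import org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.GraphTraversalSource;
import org.apache.tinkerpop.gremlin.tinkergraph.structure.TinkerGraph;
import static org.apache.tinkerpop.gremlin.process.traversal.P.gt;

GraphTraversalSource g = TinkerGraph.open().traversal();
g.addV("person").property("d1", 1).property("d2", 2).
  addV("person").property("d1", 2).property("d2", 1).iterate();

// Label the vertex twice, then compare two of its own properties with where(...).by(...).by(...)
g.V().as("x", "y").
  where("x", gt("y")).by("d1").by("d2").
  elementMap().
  forEachRemaining(System.out::println); // prints only the vertex with d1 > d2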
Why don't you ask Postgres whether date_one is bigger than date_two?
SELECT '2019-11-20'::timestamp > '2019-11-01'::timestamp;
?column?
----------
t
(1 row)
Time: 10.246 ms
This gives you a boolean value.
I am assuming you are talking about columns in a single table.
I have mostly worked with graph DBs through Gremlin, but I think the same approach still fits here.
First, traverse to the data of concern:
g.V().has(...).has()...
Then project the same data twice, either with project() directly or with as() and select():
...as('named1', 'named2').select('named1')
This creates two copies of the same data in the flow, which you can then compare on different properties, like:
...where('named1', gt('named2')).by('value1').by('value2')
In plain English, the part above means: bring back only those results where value1 of named1 is greater than value2 of named2.
If you give only one by(), it will compare the same property on both labels.

ORA-01465: invalid hex number or data mismatch error using hibernate jpa and oracle in coalesce

In Java Spring Boot, when I use the COALESCE function for a search query against an Oracle backend, null values are not handled properly. Using JPQL it gives me either RAW -- ORA-01465: invalid hex number, or a data mismatch error like "expected binary, got integer".
Please consult the documentation of COALESCE
The usage is
COALESCE (expr1, expr2, ..., exprn)
and not nested as in your example
COALESCE (expr1, COALESCE (expr2,expr3))
Check the data type of the bind parameters and referenced database columns.
It seems that some of them are not VARCHAR (possibly numeric), which conflicts with the value 'a'.
If you want to handle all expressions in the COALESCE as character strings (which I deduce from the construction COALESCE(t.cId, 'a')), you must explicitly convert the non-strings using TO_CHAR.
Basically, you need all expressions in the COALESCE to be of the same data type, or at least convertible to the data type defined by the first parameter.
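To make that concrete, here is a hedged JPQL sketch of converting the numeric column before it enters the COALESCE (entity and field names are assumptions; TO_CHAR is reached through JPA's FUNCTION() since it is not a standard JPQL function):
// Every expression inside COALESCE is now character data, so Oracle no longer
// mixes a numeric column with the string literal 'a'.
String jpql = "SELECT t FROM MyEntity t "
            + "WHERE COALESCE(FUNCTION('TO_CHAR', t.cId), :fallback) = :value";

List<MyEntity> result = entityManager.createQuery(jpql, MyEntity.class)
        .setParameter("fallback", "a")
        .setParameter("value", searchValue)
        .getResultList();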

Rounding issue when dividing a COUNT value by another value using jooq

When using SQL, I can run a simple query such as the one below with no issue; it returns an answer to 4 decimal places:
SELECT(COUNT(ID)/7) FROM myTable;
If the count above returns a value of 12, the returned select value is 1.7143 in workbench.
My issue occurs when I use jooq to do this calculation:
dsl.select(count(MYTABLE.ID).divide(7)).from(MYTABLE).fetch();
The above code returns me a value of 1, whereas I want a value of 1.7143.
I have similar lines of jooq code which use SUM as opposed to COUNT and they return a value to 4 decimal places but I cannot find a way to get the above code to return the value to 4 decimal places.
I have tried using .round but had no success.
Has anyone else had a similar problem and knows of a solution?
There are two issues here, depending on what RDBMS you're using:
1. The type of the whole projected expression
The type of the whole division expression depends on the type of the left hand side (dividend), which is SQLDataType.INTEGER. So, irrespective of whether your RDBMS returns a decimal or floating point number, jOOQ will use JDBC's ResultSet.getInt() method to fetch the value, where you will be losing precision. So, the first step is to make sure jOOQ will fetch the desired data type. There are several ways to do this:
Use a cast on the COUNT(*) expression: count(MYTABLE.ID).cast(SQLDataType.DOUBLE).divide(7)
Use a cast on the entire expression: count(MYTABLE.ID).divide(7).cast(SQLDataType.DOUBLE)
Use data type coercion on either expression: expr.coerce(SQLDataType.DOUBLE)
Casts have an effect on the generated SQL. Data type coercions do not.
2. How your RDBMS handles data types
In most RDBMS, count(*) expressions produce an integer type, and your division's right hand side (divisor) is also an integer, so the best resulting data type is, in fact, an integer type. I suspect you should pass a double or BigDecimal type as your divisor instead.
Solution
The ideal solution would then be to combine the above two:
dsl.select(count(MYTABLE.ID).cast(SQLDataType.DOUBLE).divide(7.0))
.from(MYTABLE)
.fetch();
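If you also want the value back as a floating point number on the Java side, a sketch like the following should work (MYTABLE and dsl are taken from the question; the "ratio" alias is an assumption):
// The cast types the expression as Field<Double>, so jOOQ fetches it as a double and no precision is lost.
Field<Double> ratio = count(MYTABLE.ID).cast(SQLDataType.DOUBLE).divide(7.0).as("ratio");

Double value = dsl.select(ratio)
                  .from(MYTABLE)
                  .fetchOne(ratio); // e.g. 1.7142857... when COUNT(ID) is 12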

resultSet.getInt() return value on a string containing two ints

I need to parse the value of a database column that generally contains integers, based on the ResultSet generated from a JDBC call. However, one particular row of that column has two integers in it (i.e., "48, 103"). What will be the return value of resultSet.getInt() on that column?
It will throw an exception.
I think you are taking the wrong approach here. The getXXX() method is supposed to match the data type of the column. Is the data type of the column listed as VARCHAR? If that is the case, you should use getString() to get the data and then parse it with String.split(",") if the comma exists (you can use String.indexOf() to verify whether the comma is there or not).
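A quick sketch of that approach (the column name is an assumption):
// Read the VARCHAR as a string first, then parse the individual ints.
String raw = resultSet.getString("my_column");
if (raw != null) {
    for (String part : raw.split(",")) {
        int value = Integer.parseInt(part.trim()); // "48, 103" -> 48 and 103
        System.out.println(value);
    }
}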
You'll almost certainly get a SQLException (or possibly a NumberFormatException). The actual interface just says that the result set will return "the value of the designated column... as an int". The exact details will be implementation-specific, but I doubt you'll get anything sensible from a value of "48, 103".
(Personally I think it's an error if the driver lets you call getInt on that column in any case, even for "sensible" values. A string is not an int, even if it's a string representation of an int, and the conversion should be done manually by the developer.)
I'd expect it to throw an exception. If it does give you a value, it won't be what you want. I'd get the values as strings and parse them, splitting on commas and trimming spaces.
I believe it's a NumberFormatException.

In a JDBC ResultSet, what should happen when getLong() or getShort() is called on an int result column?

Say that I have a JDBC ResultSet and I call the getLong() or getShort() method.
For which of the following SQL types {SMALLINT, INT, BIGINT} should I get a long, and for which types should I get an error?
In other words, if I have an INT and I want a SMALLINT (a short), would I get it, or would I get an error? Similarly, if I have an INT and want a BIGINT (a long), would I get it, or would I get an error?
The Javadocs (listed below) say nothing.
public long getLong(int columnIndex)
throws SQLException
Retrieves the value of the designated column in the current row of this ResultSet object as a long in the Java programming language.
Parameters:
columnIndex - the first column is 1, the second is 2, ...
Returns:
the column value; if the value is SQL NULL, the value returned is 0
Throws:
SQLException - if a database access error occurs
From the Retrieving Values from Result Sets section of the Java tutorials:
JDBC allows a lot of latitude as far as which getXXX methods you can use to retrieve the different SQL types. For example, the method getInt can be used to retrieve any of the numeric or character types. The data it retrieves will be converted to an int; that is, if the SQL type is VARCHAR, JDBC will attempt to parse an integer out of the VARCHAR. The method getInt is recommended for retrieving only SQL INTEGER types, however, and it cannot be used for the SQL types BINARY, VARBINARY, LONGVARBINARY, DATE, TIME, or TIMESTAMP.
I'm interpreting that to mean that the data will be coerced. It should work just fine if it's an upcast, but I'd expect potential loss of precision (naturally) if, for example, you're reading a LONG value using getInt(). I'd expect an Exception to be thrown if you try to read text using getInt().
It'll cast it to a long and it should be fine.
You'll get an error if you're trying to get a long from a string containing "Bob", or some other field that can't be easily converted to a long.
The spec doesn't say anything about this behavior. This will totally depend on the implementation of the drivers.
With the MySQL Connector, you can get almost anything that looks like a number, as long as it's in a valid numeric format and within the range of long. Null/false are also returned as 0.
This is implementation dependent. The spec says that the ResultSet implementation may support such a conversion, and you can check by calling DatabaseMetaData.supportsConvert(int fromType, int toType) (Section 15.2.3.1 of the 4.0 implementer spec).
Best is to not rely on the behavior but rather check the ResultSetMetaData for the correct type.
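A short defensive sketch along those lines (the column index is an assumption):
// Check the reported SQL type before relying on a widening getLong() read.
ResultSetMetaData meta = resultSet.getMetaData();
int sqlType = meta.getColumnType(1); // constant from java.sql.Types

if (sqlType == Types.SMALLINT || sqlType == Types.INTEGER || sqlType == Types.BIGINT) {
    long value = resultSet.getLong(1); // upcasting from SMALLINT/INT loses nothing
    System.out.println(value);
}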
