Working with the DataStax Cassandra QueryBuilder, is there any way to insert now() into a timestamp column?
The current implementation does not have a dateOf or toUnixTimestamp function, and the now() function itself returns a timeuuid, which is incompatible with timestamp.
This may depend on the version of the driver...
For driver 3.x, there is a generic fcall method that allows you to call any function, something like this (I didn't check, but you get the idea):
.fcall("toTimestamp", now())
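In context, a minimal sketch, assuming a keyspace ks with an events table that has a timestamp column created_at (eventId is a placeholder; untested):

import static com.datastax.driver.core.querybuilder.QueryBuilder.fcall;
import static com.datastax.driver.core.querybuilder.QueryBuilder.insertInto;
import static com.datastax.driver.core.querybuilder.QueryBuilder.now;
import com.datastax.driver.core.querybuilder.Insert;

// Wrap now() (a timeuuid) in toTimestamp() so it fits the timestamp column.
Insert insert = insertInto("ks", "events")
    .value("id", eventId)
    .value("created_at", fcall("toTimestamp", now()));
session.execute(insert);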
For driver 4.x, there is a similar function call. You even have the possibility to use raw code snippets.
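A corresponding 4.x sketch with the same assumed table (untested):

import static com.datastax.oss.driver.api.querybuilder.QueryBuilder.function;
import static com.datastax.oss.driver.api.querybuilder.QueryBuilder.insertInto;
import static com.datastax.oss.driver.api.querybuilder.QueryBuilder.literal;
import static com.datastax.oss.driver.api.querybuilder.QueryBuilder.now;
import com.datastax.oss.driver.api.core.cql.SimpleStatement;

// Same idea: pass toTimestamp(now()) as a Term.
SimpleStatement insert = insertInto("ks", "events")
    .value("id", literal(eventId))
    .value("created_at", function("toTimestamp", now()))
    .build();
session.execute(insert);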
If you are using Hibernate 5.2 or higher, the Query::list() method has been deprecated in favor of Query::getResultList().
Now, what is the difference between these two methods?
If anyone knows, please explain with examples.
The documentation of Hibernate 3.2 says that Query#list() returns the query results as a List:
Return the query results as a List. If the query contains multiple results per row, the results are returned in an instance of Object[].
As you can read in the newer documentation of Hibernate 5.2, the same-named class's method Query#getResultList is the overriding implementation of the javax interface's method TypedQuery#getResultList:
Execute a SELECT query and return the query results as a typed List.
This method is a replacement for the one from previous versions.
The idea is to implement the Java EE interface (most of the javax library) and keep the naming consistent.
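As requested in the question, a minimal example, assuming a mapped Person entity and an open Session; in 5.2 both calls return the same results:

// Older, Hibernate-specific style:
List<Person> oldWay = session.createQuery("from Person", Person.class).list();

// JPA-aligned replacement:
List<Person> newWay = session.createQuery("from Person", Person.class).getResultList();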
I'm using Apache Derby as an in-memory mock database for unit testing some code that works with MySQL using jOOQ.
The production database uses enums for certain fields (this is a given and out of scope of this question - I know enums are bad but I can't change this part now), so jOOQ generates code to handle the enums.
Unfortunately, Derby does not support enums and when I try to create the database in Derby (from jOOQ SQL generator), I get errors.
My solution was to use user-defined types that mimic the enum by wrapping the relevant jOOQ-generated enum Java class. So, for example, if I have an enum field kind in the table stuffs, the jOOQ SQL generator creates Derby table creation SQL that refers to stuffs_kind.
To support this I created the class my.project.tests.StuffsKindDerbyEnum that wraps the jOOQ-generated enum type my.project.model.StuffsKind. I then run the following SQL through Derby, before running the jOOQ database creation SQL:
CREATE TYPE stuffs_kind EXTERNAL NAME 'my.project.tests.StuffsKindDerbyEnum' LANGUAGE JAVA
When I then use jOOQ to insert new records, jOOQ generates SQL that looks somewhat like this:
insert into "schema"."stuffs" ("text", "kind")
values (cast(? as varchar(32672)), cast(? as stuffs_kind))
It binds a string value to the kind argument (as expected), and it works for MySQL, but with Derby I get an exception:
java.sql.SQLDataException: An attempt was made to get a data value of type
'"APP"."STUFFS_KIND"' from a data value of type 'VARCHAR'
After looking at all kinds of ways to solve this problem (including trying to treat the enums as simple VARCHARs), and before I give up on being able to test my jOOQ-using code, is there a way to get Derby to "cast" a varchar into a user-defined type? If I could plug in some Java code to handle the conversion, it would not be a problem, as I could simply call StuffsKind.valueOf(value) to convert a string to the correct enum type. But after perusing the (very minimal) Derby documentation, I can't figure out whether this is even supposed to be possible.
Any ideas are welcome!
Implementing a dialect-sensitive custom data type binding
The proper way forward here would be to use a dialect-sensitive, custom data type binding:
https://www.jooq.org/doc/latest/manual/sql-building/queryparts/custom-bindings
The binding could then implement the bind variable SQL generation, e.g., as follows:
@Override
public void sql(BindingSQLContext<StuffsKindDerbyEnum> ctx) throws SQLException {
    if (ctx.family() == MYSQL)
        ctx.render().visit(DSL.val(ctx.convert(converter()).value()));
    else if (ctx.family() == DERBY)
        ctx.render()
           .sql("cast(")
           .visit(DSL.val(ctx.convert(converter()).value()))
           .sql(" as varchar(255))");
    else
        throw new UnsupportedOperationException("Dialect not supported: " + ctx.family());
}
You'd obviously also have to implement the other methods that tell jOOQ how to bind your variable to a JDBC PreparedStatement, or how to fetch it from a ResultSet (see the sketch below).
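A minimal sketch of those two methods from jOOQ's Binding SPI, assuming the enum is persisted by its name (untested):

@Override
public void set(BindingSetStatementContext<StuffsKindDerbyEnum> ctx) throws SQLException {
    // Bind the enum as its string literal (null-safe).
    StuffsKindDerbyEnum value = ctx.value();
    ctx.statement().setString(ctx.index(), value == null ? null : value.name());
}

@Override
public void get(BindingGetResultSetContext<StuffsKindDerbyEnum> ctx) throws SQLException {
    // Read the string back and map it to the enum (null-safe).
    String literal = ctx.resultSet().getString(ctx.index());
    ctx.value(literal == null ? null : StuffsKindDerbyEnum.valueOf(literal));
}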
Avoiding the MySQL enum
Another, simpler way forward might be to avoid the vendor-specific feature and just use VARCHAR in both databases. You can still map that VARCHAR to a Java enum type using a jOOQ Converter (sketched below) that will work the same way in both databases.
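A minimal Converter sketch, assuming the StuffsKind enum constants match the stored strings (untested):

import org.jooq.Converter;

public class StuffsKindConverter implements Converter<String, StuffsKind> {

    @Override
    public StuffsKind from(String databaseObject) {
        return databaseObject == null ? null : StuffsKind.valueOf(databaseObject);
    }

    @Override
    public String to(StuffsKind userObject) {
        return userObject == null ? null : userObject.name();
    }

    @Override
    public Class<String> fromType() { return String.class; }

    @Override
    public Class<StuffsKind> toType() { return StuffsKind.class; }
}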
Simplify testing by avoiding Derby
A much simpler way forward is to test your application directly on MySQL, e.g. in a Docker container. There are a lot of differences between database vendors and their features, and at some point, working around those differences just to get slightly faster tests doesn't seem reasonable.
The exception is, of course, if you have to support both Derby and MySQL in production, in case of which the data type binding is again the best solution.
I need to use the MongoDB Java driver, since I need to use the driver within Matlab.
At the moment I have the following problem: I get my BSON object from the database, and now I need to convert the BSON tree into a Matlab structure. My problem is that the BSONObject or BasicBSONObject class does not have a function to retrieve the type of a particular BSON object (ARRAY, OBJECTID, ...). There is a class named BSON in the Java driver that defines the type values I need, but I do not know how to find out what type my current BSON object is.
The C++ driver and also the C# driver have a function that returns the type of a particular BSON element, but where is it in the Java driver?
Any advice is welcome. I'm not perfect in Java; maybe that's why I did not find it...?
Why not get the object and call getClass() on it? myBSON.get("myKey").getClass() seems just as easy as calling some myBSON.getTypeOf("myKey") method that does not exist and would also be redundant in the API.
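For example, a minimal sketch (myBSON and the key name are taken from the question):

Object value = myBSON.get("myKey");
if (value instanceof org.bson.types.ObjectId) {
    // handle OBJECTID
} else if (value instanceof java.util.List) {
    // BSON arrays come back as lists
} else if (value instanceof String) {
    // handle STRING
}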
Typically I use BSON<->Java POJO mapping libraries like Morphia or Spring-Data-Mongo. These libraries have converters that can convert to and from mongo objects to type-safe objects.
Additionally, I think the Mongo 3.x driver is supposed to have better support for this.
According to the java docs of PreparedStatement.setNull: "Note: You must specify the parameter's SQL type". What is the reason that the method requires the SQL type of the column?
I noticed that passing java.sql.Types.VARCHAR also works for non-varchar columns. Are there scenarios in which VARCHAR won't be suitable (certain column types or certain DB providers)?
Thanks.
According to the java docs of PreparedStatement.setNull: "Note: You must specify the parameter's SQL type". What is the reason that the method requires the SQL type of the column?
For maximum compatibility: as per the specification, there are some databases which don't allow an untyped NULL to be sent to the underlying data source.
I noticed that passing java.sql.Types.VARCHAR also works for non-varchar columns. Are there scenarios in which VARCHAR won't be suitable (certain column types or certain DB providers)?
I don't think that sort of behaviour is really part of the specification, or if it is, then I'm sure there is some sort of implicit coercion going on. In any case, relying on behaviour which might break when the underlying datastore changes is not recommended. Why not just specify the correct type?
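For illustration, a minimal sketch (the table and columns are made up):

PreparedStatement ps = connection.prepareStatement(
    "UPDATE people SET birth_date = ? WHERE id = ?");
ps.setNull(1, java.sql.Types.DATE);  // type the NULL as DATE rather than VARCHAR
ps.setLong(2, 42L);
ps.executeUpdate();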
JDBC drivers appear to be moving away from setNull. See Add support for setObject(<arg>, null).
My list of databases supporting the more logical behaviour is:
Oracle
MySQL
Sybase
MS SQL Server
HSQL
My list of databases NOT supporting this logical behaviour is:
Derby: Queries with guarded null Parameter fail
PostgreSQL: Cannot pass null in Parameter in Query for ISNULL (suggested solution)
When it comes to Oracle, it would be very unwise to use varchar2 against other datatypes. This might fool the optimizer and you could get a bad execution plan. For instance, when filtering on a DATE column with a TIMESTAMP bind, Oracle could end up reading all your rows, converting every date to a timestamp, and only then filtering out the wanted rows.
If you have an index on your date column, it could get even worse (if Oracle chose to use it), doing single reads on your Oracle blocks.
--Lasse
I was playing around with the DatabaseMetaData class to see how it works. The javadoc comments seem to state one thing, while the code does something different. I know it is an interface, so it is really up to the vendor that supplied the JDBC driver to implement it correctly. But I was wondering if I am missing something?
I am using this with a version of Oracle 10g. Basically the comment implies that it will return the following 10 columns in the result set:
TABLE_CAT
TABLE_SCHEM
TABLE_NAME
TABLE_TYPE
REMARKS
TYPE_CAT
TYPE_SCHEM
TYPE_NAME
SELF_REFERENCING_COL_NAME
REF_GENERATION
In reality I only get 5 columns in the result set:
TABLE_CAT
TABLE_SCHEM
TABLE_NAME
TABLE_TYPE
REMARKS
So what gives? Am I misreading the javadocs, or is this pretty much par for the course with JDBC drivers? For instance, if I swapped out Oracle for MySQL (of course getting the appropriate driver), would I get a different number of columns?
The JDBC driver for Oracle 10g that you are using is just fulfilling an older spec. Here is a JavaDoc to which it conforms. You have to know the JDBC version of your JDBC drivers to work with them effectively when you do more than the absolute basics.
JDBC is a spec. Some features are required to conform to the spec; others are optional.
I don't know the complete spec, but this must be one such optional feature: Oracle has chosen not to return all the columns declared in the interface. Other vendors like MySQL may choose to do so.
You'll have to try it and see.
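One way to check is to list what the driver actually returns, rather than assuming the full column list from the javadoc (a minimal sketch; the schema name is made up):

import java.sql.*;

DatabaseMetaData meta = connection.getMetaData();
try (ResultSet tables = meta.getTables(null, "MYSCHEMA", "%", new String[] {"TABLE"})) {
    ResultSetMetaData rsmd = tables.getMetaData();
    for (int i = 1; i <= rsmd.getColumnCount(); i++) {
        System.out.println(rsmd.getColumnName(i));  // prints only the columns this driver provides
    }
}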
Are the missing columns crucial to your app's operation? It seems like a trivial reason to switch database vendors.