How can I abstract the schema with jOOQ?

I tried to follow the instructions for mapping a schema in jOOQ.
First, I start with a qualified name and table:
Name myTableName = DSL.name("schema", "myTable");
Table<Record> myTable = DSL.table(myTableName);
Then I build a context with schema mapping:
Configuration configuration = new DefaultConfiguration();
configuration.set(SQLDialect.HSQLDB);

Settings settings = new Settings()
    .withRenderNameStyle(RenderNameStyle.QUOTED)
    .withRenderSchema(true)
    .withRenderMapping(new RenderMapping()
        .withSchemata(new MappedSchema()
            .withInput("schema")
            .withOutput("PUBLIC")));

configuration.set(settings);
return DSL.using(configuration);
Then I build an SQL string to create the table:
context.createTable(myTable)....getSQL();
But it fails to map the schema:
invalid schema name: schema in statement [create table "schema"."myTable"(
...
What exactly am I doing wrong here?
For the bigger picture, I'm trying to write SQL that is portable across different dialects, but each of the environments I have to build for uses a different schema. I am trying to abstract a general schema in Java that I can then use jOOQ to map depending on the target environment.

This is a known issue: https://github.com/jOOQ/jOOQ/issues/5344
As of jOOQ 3.9.5, schema mapping and table mapping are not applied to plain SQL tables and custom named tables. While no mapping will ever be applied to plain SQL strings, the latter (custom named tables) is fixed as of jOOQ 3.10.
There are two workarounds:
You can perform the mapping manually
You're in full control of table reference construction and can map the table explicitly as such:
Name myTableName = DSL.name(schema(), "myTable");
And then:
public String schema() {
    if (something)
        return "schema";
    else
        return "PUBLIC";
}
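With the schema resolved up front, the rest of the statement can stay dialect-agnostic. A minimal sketch of how the pieces might fit together (the "id" column is an illustrative assumption, not from the question):

// Sketch: schema() is the helper above; the example column is made up.
Table<Record> myTable = DSL.table(DSL.name(schema(), "myTable"));
String sql = DSL.using(SQLDialect.HSQLDB)
        .createTable(myTable)
        .column("id", SQLDataType.INTEGER)
        .getSQL();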
Use CustomTable
A lesser-known feature is CustomTable, which can be used instead of generated tables if you're not using jOOQ's code generator. It is a bit more effort than a plain SQL table or a named table, but if you can abstract the construction of the table, it might be worthwhile, because CustomTable lets you supply the schema programmatically. An example:
public class BookRecord extends CustomRecord<BookRecord> {
    protected BookRecord() {
        super(BookTable.BOOK);
    }
}

public class BookTable extends CustomTable<BookRecord> {
    public static final BookTable BOOK = new BookTable();

    public static final TableField<BookRecord, Short> ID =
        createField("ID", SQLDataType.SMALLINT, BOOK);
    public static final TableField<BookRecord, String> TITLE =
        createField("TITLE", SQLDataType.VARCHAR, BOOK);

    protected BookTable() {
        super("BOOK", DSL.schema(DSL.name("schema")));
    }

    @Override
    public Class<? extends BookRecord> getRecordType() {
        return BookRecord.class;
    }
}
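Because you control the CustomTable constructor, the schema passed to super() can be computed per environment instead of being hard-coded. A minimal sketch, assuming a hypothetical resolveSchemaName() helper (not jOOQ API):

// Sketch only: resolveSchemaName() is a made-up helper for illustration.
protected BookTable() {
    super("BOOK", DSL.schema(DSL.name(resolveSchemaName())));
}

private static String resolveSchemaName() {
    // e.g. -Dapp.schema=PUBLIC on the command line; the default is an assumption
    return System.getProperty("app.schema", "PUBLIC");
}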

Related

Apache Ignite - SQL Support for HashMaps?

Hi there, I'd like to use a POJO in Apache Ignite which has a HashMap attribute so I can work with dynamic models at runtime. Storing and saving such objects works fine.
However, I'm wondering whether there is a way to access the keys/values of such a HashMap through a SQL query? If this is not supported, are there any other ways to work with dynamic objects in Apache Ignite?
POJO Class with dynamic attributes
@Data
public class Item {
    private static final AtomicLong ID_GEN = new AtomicLong();

    @QuerySqlField(index = true)
    private Long id;

    @QuerySqlField
    public Map<String, Serializable> attributes = new HashMap<String, Serializable>();

    public Item(Long id, String code, String name) {
        this.id = id;
    }

    public Item() {
        // code and name are not used here; nulls keep the original constructor shape
        this(ID_GEN.incrementAndGet(), null, null);
    }

    public void setAttribute(String name, Serializable value) {
        attributes.put(name, value);
    }

    public Serializable getAttribute(String name) {
        return attributes.get(name);
    }
}
Example query feature illustrated:
SqlFieldsQuery query = new SqlFieldsQuery("SELECT * FROM Item WHERE attributes('Price') > 100");
SQL in Ignite is not just syntactic sugar: it requires a schema for your models to be defined before you can run SQL queries, and this won't work for a collection. Therefore you need to normalize the data just as you would with a regular DB, or rework the model's structure somehow to avoid a JOIN.
Apache Ignite has no support for destructuring/collections in its SQL, so you can't peek inside HashMap via SQL.
However, you may define your own SQL functions, so you can implement e.g. SELECT hashmap_get(ATTRIBUTES, 'subkey') FROM ITEM WHERE ID = ?
But you can't have indexes on function applications, so the usefulness is limited.
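A minimal sketch of what such a function might look like, assuming the ATTRIBUTES column is handed to the function as the deserialized map (the class name and function name are made up for illustration):

// Hypothetical helper class; register it on the cache so Ignite's SQL engine can see it.
public class HashMapFunctions {
    @QuerySqlFunction
    public static Object hashmapGet(Map<String, ?> map, String key) {
        return map == null ? null : map.get(key);
    }
}

// Registration on the cache configuration, after which a query like this becomes possible:
// cacheCfg.setSqlFunctionClasses(HashMapFunctions.class);
// SELECT hashmapGet(ATTRIBUTES, 'Price') FROM Item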

Unable to create records using custom generator strategy for getter names

I'm on jOOQ 3.13.1 and dropwizard 2.0.7. To make jOOQ and dropwizard work together, I am using droptools (https://droptools.bendb.com/jooq/). I am using a custom generator strategy to maintain camel case for my setters and getters. The generated names come out as expected.
The record objects have data for their respective columns. However, I keep getting errors from my database saying that I am trying to set "null" on a non-null column.
I see this issue only when I am trying to create a new record. Updating records works just fine.
ERROR [2021-03-18 14:58:05,363] com.bendb.dropwizard.jooq.jersey.LoggingDataAccessExceptionMapper: Error handling a request: 9f65c0316d996ebb
! org.postgresql.util.PSQLException: ERROR: null value in column "createdAt" of relation "failures" violates not-null constraint
! Detail: Failing row contains (265, null, Some callback, User account not found, null, null, null).
If I print the record, it looks like this:
+------+------+--------------+--------------------------------------------------+------------------------------+------------------------------+------+
| id|userId|action |error |createdAt |updatedAt |status|
+------+------+--------------+--------------------------------------------------+------------------------------+------------------------------+------+
|{null}|{null}|*Some callback|*User account not found|*2021-03-18 14:58:05,363|*2021-03-18 14:58:05,363|{null}|
+------+------+--------------+--------------------------------------------------+------------------------------+------------------------------+------+
My getter names are:
"getId", "getUserId", "getAction", "getError", "getCreatedAt", "getUpdatedAt", "getStatus".
For columns that are in lowercase, I see no issues. The issue is for places where the column names are in camelCase.
The class looks something like:
public class FailureDao {
    private final DSLContext database;

    public FailureDao(DSLContext database) {
        this.database = database;
    }

    public void storeFailure(FailureRecord failure) {
        database.newRecord(FAILURES, failure).store();
    }
}
For code generation, I am following the documentation here https://www.jooq.org/doc/3.13/manual/code-generation/codegen-generatorstrategy/
My generator class looks something like:
public class AsInDatabaseStrategy extends DefaultGeneratorStrategy {
    @Override
    public String getJavaIdentifier(Definition definition) {
        return definition.getOutputName().toUpperCase();
    }

    @Override
    public String getJavaSetterName(Definition definition, Mode mode) {
        return "set" + StringUtils.toUC(definition.getOutputName());
    }

    @Override
    public String getJavaGetterName(Definition definition, Mode mode) {
        return "get" + StringUtils.toUC(definition.getOutputName());
    }
}
I found the issue. It turns out it was explained at https://groups.google.com/g/jooq-user/c/1iy0EdWe_T8/m/YN9PEsIF4lcJ. My workaround was to use a jOOQ-generated POJO: to create a new record, instead of passing an object of the Record class, I now pass an object of the POJO class.
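A minimal sketch of that workaround, assuming the code generator also produced a Failure POJO alongside FailureRecord (the class name is an assumption based on the question):

public void storeFailure(Failure failure) {
    // Passing the generated POJO instead of a Record: newRecord() copies the
    // POJO's getter values into a fresh record, so the INSERT includes them.
    database.newRecord(FAILURES, failure).store();
}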

How to use Spring jdbc templates (jdbcTemplate or namedParameterJDBCTem) to retrieve values from database

A few days into Spring now, integrating Spring JDBC into my web application. I was able to perform CRUD operations on my DB successfully, and I'm impressed with the boilerplate code reduction. But I am failing to use the query*() methods provided by NamedParameterJdbcTemplate. Most of the examples on the internet show the usage of either RowMapper or ResultSetExtractor. Though both uses are fine, they force me to create classes which have to implement these interfaces. I have to create a bean for every type of data I am loading from the DB (or maybe I am mistaken).
Problem arises in code section where I have used something like this:
String query="select username, password from usertable where username=?"
ps=conn.prepareStatement(query);
ps.setString(username);
rs=ps.executeQuery();
if(rs.next()){
String username=rs.getString("username");
String password=rs.getString("password")
//Performs operation on them
}
As these values are not stored in any bean and are used directly, I am not able to integrate jdbcTemplate in these kinds of situations.
Another situation arises when I am extracting only part of the properties present in a bean from my database.
Example:
public class MangaBean {
    private String author;
    private String title;
    private String isbn;
    private String releaseDate;
    private String rating;
    // getters and setters
}
Mapper:
public class MangaBeanMapper implements RowMapper<MangaBean> {
    @Override
    public MangaBean mapRow(ResultSet rs, int arg1) throws SQLException {
        MangaBean mb = new MangaBean();
        mb.setAuthor(rs.getString("author"));
        mb.setTitle(rs.getString("title"));
        mb.setIsbn(rs.getString("isbn"));
        mb.setReleaseDate(rs.getString("releaseDate"));
        mb.setRating(rs.getString("rating"));
        return mb;
    }
}
The above arrangement runs fine like this:
String query="select * from manga_data where isbn=:isbn"
Map<String, String> paramMap=new HashMap<String, String>();
paramMap.put("isbn", someBean.getIsbn());
return template.query(query, paramMap, new MangaBeanMapper());
However, if I only want to retrieve two or three values from my db, I cannot use the above pattern, as it generates a BadSqlGrammarException: releaseDate does not exist in ResultSet. Example:
String query="select title, author where isbn=:isbn"
Map<String, String> paramMap=new HashMap<String, String>();
paramMap.put("isbn", someBean.getIsbn());
return template.query(query, paramMap, new MangaBeanMapper());
Template is an instance of NamedParameterJdbcTemplate. Please advise me on solutions for these situations.
The other answers are sensible: you should create a DTO bean, or use the BeanPropertyRowMapper.
But if you want more control than the BeanPropertyRowMapper gives you (or reflection makes it too slow), you can use the queryForList method, which returns a list of Maps (one per row) with the returned columns as keys; queryForMap is the single-row variant. Because calling get() on a Map with a key that is not there simply returns null instead of throwing an exception, you can use the same code to populate your object irrespective of which columns you selected.
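A rough sketch of that approach with NamedParameterJdbcTemplate (column and bean names taken from the question):

String query = "select title, author from manga_data where isbn = :isbn";
Map<String, Object> params = Collections.singletonMap("isbn", someBean.getIsbn());

List<Map<String, Object>> rows = template.queryForList(query, params);
for (Map<String, Object> row : rows) {
    MangaBean mb = new MangaBean();
    mb.setTitle((String) row.get("title"));
    mb.setAuthor((String) row.get("author"));
    // row.get("releaseDate") is simply null here, because the column was not selected
}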
You don't even need to write your own RowMapper, just use the BeanPropertyRowMapper that Spring provides. The way it works is that it matches the returned column names to the properties of your bean. Your query has columns that match your bean exactly; if it didn't, you would use an AS alias in your select, as follows...
-- This query matches a property named matchingName in the bean
select my_column_that_doesnt_match as matching_name from mytable;
The BeanPropertyRowMapper should work with both queries you listed.
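For example, something along these lines should work with the partial select too (a sketch, reusing the names from the question):

String query = "select title, author from manga_data where isbn = :isbn";
Map<String, String> paramMap = Collections.singletonMap("isbn", someBean.getIsbn());

// Unselected properties (isbn, releaseDate, rating) are simply left null on the bean.
List<MangaBean> beans = template.query(query, paramMap,
        new BeanPropertyRowMapper<>(MangaBean.class));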
Typically, yes: for most queries you would create a bean or object to transform the result into. I would suggest that in most cases, that's what you want to do.
However, you can create a RowMapper that maps a result set to a map instead of a bean, like this. The downside would be losing the type management of beans, and you'd be relying on your JDBC driver to return the correct type for each column.
As @NimChimpskey has just posted, it's best to create a tiny bean object, but if you really don't want to do that, this is another option.
class SimpleRowMapper implements RowMapper<Map<String, Object>> {
    String[] columns;

    SimpleRowMapper(String[] columns) {
        this.columns = columns;
    }

    @Override
    public Map<String, Object> mapRow(ResultSet resultSet, int i) throws SQLException {
        Map<String, Object> rowAsMap = new HashMap<String, Object>();
        for (String column : columns) {
            rowAsMap.put(column, resultSet.getObject(column));
        }
        return rowAsMap;
    }
}
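Used like this, for instance (a sketch based on the question's query):

List<Map<String, Object>> rows = template.query(
        "select title, author from manga_data where isbn = :isbn",
        paramMap,
        new SimpleRowMapper(new String[] { "title", "author" }));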
In your first example I would just create a DTO bean/value object to store them. There is a reason it's a commonly implemented pattern: it takes minutes to code and provides many long-term benefits.
In your second example, create a second implementation of RowMapper where you don't set the fields, or supply a null/substitute value to MangaBean where necessary:
@Override
public MangaBean mapRow(ResultSet rs, int arg1) throws SQLException {
    MangaBean mb = new MangaBean();
    mb.setAuthor(rs.getString("author"));
    mb.setTitle(rs.getString("title"));
    /* mb.setIsbn("unknown"); */
    mb.setReleaseDate("unknown");
    mb.setRating(null);
    return mb;
}

Hibernate Envers How to record additional audit data such as table name being audited

I have implemented a solution with Hibernate Envers.
I am extending RevisionListener by creating my own class to store the system username:
import org.hibernate.envers.RevisionListener;

public class CustomRevisionListener implements RevisionListener {
    public void newRevision(Object revisionEntity) {
        CustomRevisionEntity revision = (CustomRevisionEntity) revisionEntity;
        revision.setUsername(System.getProperty("user.name")); // for testing
    }
}
This does the job, but what I want to do is make a more comprehensive record that would include the table name being audited.
Does anyone know how I could do this? I cannot find any documentation relating to recording the table name.
See example 15.2 in the Envers docs for how to get the modified entity class(es). Then slightly change that code to obtain the table name from the entity class (this assumes you use JPA/Hibernate annotations on your entity classes):
public class CustomEntityTrackingRevisionListener
        implements EntityTrackingRevisionListener {

    @Override
    public void entityChanged(Class entityClass, String entityName,
                              Serializable entityId, RevisionType revisionType,
                              Object revisionEntity) {
        // either javax.persistence.Table or org.hibernate.annotations.Table
        Table tableAnnotation = entityClass.getAnnotation(Table.class);
        if (tableAnnotation != null) {
            String tableName = tableAnnotation.name();
            ((CustomTrackingRevisionEntity) revisionEntity).addTable(tableName);
        }
    }
}
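The revision entity the listener casts to is not shown above; a minimal sketch of what it might look like (the @ElementCollection mapping and the addTable() helper are assumptions for illustration, not Envers API):

@Entity
@RevisionEntity(CustomEntityTrackingRevisionListener.class)
public class CustomTrackingRevisionEntity extends DefaultRevisionEntity {

    @ElementCollection
    private Set<String> modifiedTables = new HashSet<String>();

    public void addTable(String tableName) {
        modifiedTables.add(tableName);
    }

    public Set<String> getModifiedTables() {
        return modifiedTables;
    }
}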
I don't know if Envers can track the table name of the audited records out of the box, but I know it can track the entity name instead. This can be enabled in different ways: you can extend DefaultTrackingModifiedEntitiesRevisionEntity, or set the org.hibernate.envers.track_entities_changed_in_revision parameter to true.
See the Envers doc: Tracking entity names modified during revisions.
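For the configuration-parameter route, a sketch of enabling it when bootstrapping the persistence unit (the unit name is a placeholder):

Map<String, Object> props = new HashMap<String, Object>();
props.put("org.hibernate.envers.track_entities_changed_in_revision", "true");

// "my-unit" is a placeholder persistence unit name
EntityManagerFactory emf = Persistence.createEntityManagerFactory("my-unit", props);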

JPA: How do I specify the table name corresponding to a class at runtime?

(note: I'm quite familiar with Java, but not with Hibernate or JPA - yet :) )
I want to write an application which talks to a DB2/400 database through JPA, and I have gotten to the point where I can get all entries in the table and list them to System.out (I used MyEclipse to reverse engineer). I understand that the @Table annotation results in the name being statically compiled into the class, but I need to be able to work with a table whose name and schema are provided at runtime (their definitions are the same, but we have many of them).
Apparently this is not SO easy to do, and I'd appreciate a hint.
I have currently chosen Hibernate as the JPA provider, as it can handle that these database tables are not journalled.
So, the question is, how can I at runtime tell the Hibernate implementation of JPA that class A corresponds to database table B?
(edit: an overridden tableName() in the Hibernate NamingStrategy may allow me to work around this intrinsic limitation, but I still would prefer a vendor agnostic JPA solution)
You need to use the XML version of the configuration rather than the annotations. That way you can dynamically generate the XML at runtime.
Or maybe something like Dynamic JPA would interest you?
I think it's necessary to further clarify the issues with this problem.
The first question is: are the set of tables where an entity can be stored known? By this I mean you aren't dynamically creating tables at runtime and wanting to associate entities with them. This scenario calls for, say, three tables to be known at compile-time. If that is the case you can possibly use JPA inheritance. The OpenJPA documentation details the table per class inheritance strategy.
The advantage of this method is that it is pure JPA. It comes with limitations however, being that the tables have to be known and you can't easily change which table a given object is stored in (if that's a requirement for you), just like objects in OO systems don't generally change class or type.
If you want this to be truly dynamic and to move entities between tables (essentially) then I'm not sure JPA is the right tool for you. An awful lot of magic goes into making JPA work including load-time weaving (instrumentation) and usually one or more levels of caching. What's more the entity manager needs to record changes and handle updates of managed objects. There is no easy facility that I know of to instruct the entity manager that a given entity should be stored in one table or another.
Such a move operation would implicitly require a delete from one table and insertion into another. If there are child entities this gets more difficult. Not impossible mind you but it's such an unusual corner case I'm not sure anyone would ever bother.
A lower-level SQL/JDBC framework such as Ibatis may be a better bet as it will give you the control that you want.
I've also given thought to dynamically changing or assigning annotations at runtime. While I'm not yet sure if that's even possible, even if it is I'm not sure it'd necessarily help. I can't imagine an entity manager or the caching not getting hopelessly confused by that kind of thing happening.
The other possibility I thought of was dynamically creating subclasses at runtime (as anonymous subclasses) but that still has the annotation problem and again I'm not sure how you add that to an existing persistence unit.
It might help if you provided some more detail on what you're doing and why. Whatever it is though, I'm leaning towards thinking you need to rethink what you're doing or how you're doing it or you need to pick a different persistence technology.
You may be able to specify the table name at load time via a custom ClassLoader that re-writes the #Table annotation on classes as they are loaded. At the moment, I am not 100% sure how you would ensure Hibernate is loading its classes via this ClassLoader.
Classes are re-written using the ASM bytecode framework.
Warning: These classes are experimental.
public class TableClassLoader extends ClassLoader {
    private final Map<String, String> tablesByClassName;

    public TableClassLoader(Map<String, String> tablesByClassName) {
        super();
        this.tablesByClassName = tablesByClassName;
    }

    public TableClassLoader(Map<String, String> tablesByClassName, ClassLoader parent) {
        super(parent);
        this.tablesByClassName = tablesByClassName;
    }

    @Override
    public Class<?> loadClass(String name) throws ClassNotFoundException {
        if (tablesByClassName.containsKey(name)) {
            String table = tablesByClassName.get(name);
            return loadCustomizedClass(name, table);
        } else {
            return super.loadClass(name);
        }
    }

    public Class<?> loadCustomizedClass(String className, String table) throws ClassNotFoundException {
        try {
            String resourceName = getResourceName(className);
            InputStream inputStream = super.getResourceAsStream(resourceName);
            ClassReader classReader = new ClassReader(inputStream);
            ClassWriter classWriter = new ClassWriter(0);
            classReader.accept(new TableClassVisitor(classWriter, table), 0);
            byte[] classByteArray = classWriter.toByteArray();
            return super.defineClass(className, classByteArray, 0, classByteArray.length);
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    private String getResourceName(String className) {
        Type type = Type.getObjectType(className);
        String internalName = type.getInternalName();
        return internalName.replaceAll("\\.", "/") + ".class";
    }
}
The TableClassLoader relies on the TableClassVisitor to catch the visitAnnotation method calls:
public class TableClassVisitor extends ClassAdapter {
    private static final String tableDesc = Type.getDescriptor(Table.class);

    private final String table;

    public TableClassVisitor(ClassVisitor visitor, String table) {
        super(visitor);
        this.table = table;
    }

    @Override
    public AnnotationVisitor visitAnnotation(String desc, boolean visible) {
        AnnotationVisitor annotationVisitor;
        if (desc.equals(tableDesc)) {
            annotationVisitor = new TableAnnotationVisitor(super.visitAnnotation(desc, visible), table);
        } else {
            annotationVisitor = super.visitAnnotation(desc, visible);
        }
        return annotationVisitor;
    }
}
The TableAnnotationVisitor is ultimately responsible for changing the name field of the #Table annotation:
public class TableAnnotationVisitor extends AnnotationAdapter {
    public final String table;

    public TableAnnotationVisitor(AnnotationVisitor visitor, String table) {
        super(visitor);
        this.table = table;
    }

    @Override
    public void visit(String name, Object value) {
        if (name.equals("name")) {
            super.visit(name, table);
        } else {
            super.visit(name, value);
        }
    }
}
Because I didn't happen to find an AnnotationAdapter class in ASM's library, here is one I made myself:
public class AnnotationAdapter implements AnnotationVisitor {
    private final AnnotationVisitor visitor;

    public AnnotationAdapter(AnnotationVisitor visitor) {
        this.visitor = visitor;
    }

    @Override
    public void visit(String name, Object value) {
        visitor.visit(name, value);
    }

    @Override
    public AnnotationVisitor visitAnnotation(String name, String desc) {
        return visitor.visitAnnotation(name, desc);
    }

    @Override
    public AnnotationVisitor visitArray(String name) {
        return visitor.visitArray(name);
    }

    @Override
    public void visitEnd() {
        visitor.visitEnd();
    }

    @Override
    public void visitEnum(String name, String desc, String value) {
        visitor.visitEnum(name, desc, value);
    }
}
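A sketch of how the class loader might be wired up; as noted above, getting Hibernate to actually load entities through it is the unsolved part, so treat this purely as illustration (the class and table names are made up):

Map<String, String> tablesByClassName = new HashMap<String, String>();
tablesByClassName.put("com.example.MyEntity", "MY_TABLE_FOR_THIS_RUN"); // hypothetical names

ClassLoader loader = new TableClassLoader(tablesByClassName,
        Thread.currentThread().getContextClassLoader());

// One possible hook: make it the context class loader before the persistence
// provider bootstraps, in the hope that entity classes are resolved through it.
Thread.currentThread().setContextClassLoader(loader);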
It sounds to me like what you're after is overriding the JPA annotations with an orm.xml.
This will allow you to specify the annotations but then override them only where they change. I've done the same to override the schema in the @Table annotation, as it changes between my environments.
Using this approach you can also override the table name on individual entities.
[Updating this answer as it's not well documented and someone else may find it useful]
Here's my orm.xml file (note that I am only overriding the schema and leaving the other JPA & Hibernate annotations alone; however, changing the table here is entirely possible. Also note that I am annotating the field, not the getter).
<?xml version="1.0" encoding="UTF-8"?>
<entity-mappings
xmlns="http://java.sun.com/xml/ns/persistence/orm"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://java.sun.com/xml/ns/persistence/orm orm_2_0.xsd"
version="1.0">
<package>models.jpa.eglobal</package>
<entity class="MyEntityOne" access="FIELD">
<table name="ENTITY_ONE" schema="MY_SCHEMA"/>
</entity>
<entity class="MyEntityTwo" access="FIELD">
<table name="ENTITY_TWO" schema="MY_SCHEMA"/>
</entity>
</entity-mappings>
As an alternative to the XML configuration, you may want to dynamically generate the Java class with the annotation using your preferred bytecode manipulation framework.
If you don't mind binding yourself to Hibernate, you could use some of the methods described at https://www.hibernate.org/171.html. You may find yourself using quite a few Hibernate annotations depending on the complexity of your data, as they go above and beyond the JPA spec, so it may be a small price to pay.
