We are currently upgrading to Hibernate 5.4.29.
We have some custom UserTypes that map, for example, bigint to a custom type (a class extending UserType).
metadata.applyBasicType(new MyUserType(), new String[] { MyUserType.class.getName()});
If I create a native query with the new version e.g.
hibernate.createNativeQuery("select * from tablexy");
and one of the columns contains bigint values, Hibernate tries to map that column to the custom UserType MyUserType.
This seems to happen in the class JdbcResultMetadata:
// Get the contributed Hibernate Type first
Set<String> hibernateTypeNames = factory.getMetamodel()
        .getTypeConfiguration()
        .getJdbcToHibernateTypeContributionMap()
        .get( columnType );

// If the user has not supplied any JDBC Type to Hibernate Type mapping, use the Dialect-based mapping
if ( hibernateTypeNames != null && !hibernateTypeNames.isEmpty() ) {
    if ( hibernateTypeNames.size() > 1 ) {
        throw new HibernateException(
                String.format(
                        "There are multiple Hibernate types: [%s] registered for the [%d] JDBC type code",
                        String.join( ", ", hibernateTypeNames ),
                        columnType
                ) );
    }
    else {
        hibernateTypeName = hibernateTypeNames.iterator().next();
    }
}
else {
    hibernateTypeName = factory.getDialect().getHibernateTypeName(
            columnType,
            length,
            precision,
            scale
    );
}

return factory.getTypeResolver().heuristicType( hibernateTypeName );
If we run the native query without adding a scalar, we get:
org.hibernate.HibernateException: There are multiple Hibernate types: [MyUserType1, MyUserType2] registered for the [-5] JDBC type code
This can be prevented by calling addScalar() for every column, which is quite cumbersome. The old Hibernate version did not map to the UserTypes unless we explicitly added e.g. a ResultTransformer.
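For illustration, the per-column workaround looks roughly like this (the column names and Hibernate types are hypothetical):

// Declaring every scalar explicitly bypasses the contributed UserType lookup.
hibernate.createNativeQuery("select * from tablexy")
        .addScalar("id", LongType.INSTANCE)      // org.hibernate.type.LongType
        .addScalar("name", StringType.INSTANCE)  // org.hibernate.type.StringType
        .getResultList();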
Do we have to call addScalar() for every column if we don't want the results mapped to the UserTypes, or is there some other way to prevent this? (Basically, I want a native query whose results are not mapped to the UserTypes, without adding addScalar() for every column.)
You could create a custom org.hibernate.boot.spi.MetadataContributor or org.hibernate.integrator.spi.Integrator (registered through the service loader mechanism) and remove the type mappings you don't want by altering the map returned by metadataCollector.getBootstrapContext().getTypeConfiguration().getJdbcToHibernateTypeContributionMap() or sessionFactory.getMetamodel().getTypeConfiguration().getJdbcToHibernateTypeContributionMap().
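A minimal sketch of the Integrator variant, assuming the map returned by getJdbcToHibernateTypeContributionMap() is mutable at that point (the class name is made up; register it in META-INF/services/org.hibernate.integrator.spi.Integrator):

import java.sql.Types;

import org.hibernate.boot.Metadata;
import org.hibernate.engine.spi.SessionFactoryImplementor;
import org.hibernate.integrator.spi.Integrator;
import org.hibernate.service.spi.SessionFactoryServiceRegistry;

public class StripTypeContributionsIntegrator implements Integrator {

    @Override
    public void integrate(Metadata metadata, SessionFactoryImplementor sessionFactory,
                          SessionFactoryServiceRegistry serviceRegistry) {
        // Drop the contributed BIGINT (-5) mapping so JdbcResultMetadata falls
        // back to the Dialect-based type resolution for native query results.
        sessionFactory.getMetamodel()
                .getTypeConfiguration()
                .getJdbcToHibernateTypeContributionMap()
                .remove(Types.BIGINT);
    }

    @Override
    public void disintegrate(SessionFactoryImplementor sessionFactory,
                             SessionFactoryServiceRegistry serviceRegistry) {
        // nothing to undo
    }
}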
I actually think this might be a bug, but I'm not sure at all.
Related
We use https://github.com/etiennestuder/gradle-jooq-plugin for generating jOOQ classes on demand from our database DDL (a .sql file with CREATE TABLE statements). However, we noticed peculiar behavior when overriding strategy.name in jooq.configurations.main.generationTool.generator.
The database schema
We have some field names like _is_tombstone, _revision etc. in our tables. These are "metadata" columns, which differ from the normal "data" columns (data coming from another system vs. metadata logging event details about why the change was triggered).
Other columns like id, name etc. do not have any special prefix.
The generator
Here is our current generator strategy, inspired by https://www.jooq.org/doc/latest/manual/code-generation/codegen-generatorstrategy/
public class CustomNameGeneratorStrategy extends DefaultGeneratorStrategy {

    @Override
    public String getJavaMemberName( Definition definition, Mode mode ) {
        String memberName = super.getJavaMemberName( definition, mode );
        // Converts e.g. _IsTombstone to _isTombstone
        if ( memberName.startsWith( "_" ) ) {
            memberName = "" + memberName.charAt( 0 ) + Character.toLowerCase( memberName.charAt( 1 ) ) + memberName.substring( 2 );
        }
        return memberName;
    }

    @Override
    public String getJavaGetterName( Definition definition, Mode mode ) {
        String methodName = super.getJavaGetterName( definition, mode );
        methodName = methodName.replace( "_", "" );
        // isTombstone() seems more natural than getIsTombstone()
        methodName = methodName.replace( "getIs", "is" );
        return methodName;
    }

    @Override
    public String getJavaSetterName( Definition definition, Mode mode ) {
        String methodName = super.getJavaSetterName( definition, mode );
        return methodName.replace( "_", "" );
    }
}
The problem
With the above in place, the code below causes some very nasty bugs: certain fields are not copied from the POJO to the Record instance. Only id, name etc. work. All fields with underscores in their names are excluded from the copying; they end up with null values in the target Record instance.
This, in turn, makes the DB insertion fail, since certain mandatory fields end up without values.
import static some.package.Tables.GROUPS;
import some.package.tables.records.GroupRecord;
import some.package.tables.pojos.TSDBGroup;
// ...
TSDBGroup tsdbGroup = createTsdbGroup(...);
// The GroupRecord here becomes a "partial copy".
GroupRecord groupRecord = create.newRecord(GROUPS, tsdbGroup);
groupRecord.store();
Why does this happen?
The problem turned out to be that jOOQ expects one of the following to hold true, unless you annotate your getters/setters with JPA-style annotations:
Method names for field setters use the standard name (setFoo for a DB column named foo or FOO), or
Field names use the standard name (foo for a DB column named foo or FOO).
Break both of these rules, and you get to keep the pieces. :-)
More specifically, this is caused by the following code in jOOQ:
// No annotations are present
else {
    members = getMatchingMembers(configuration, type, field.getName(), true);
    method = getMatchingGetter(configuration, type, field.getName(), true);
}

// Use only the first applicable method or member
if (method != null)
    Tools.setValue(record, field, method.invoke(source));
else if (members.size() > 0)
    setValue(record, source, members.get(0), field);
Workaround
The simple workaround is to remove the getJavaMemberName override from the custom GeneratorStrategy. jOOQ is then able to populate the fields as expected when creating Records from POJOs. The strategy then reduces to the sketch below.
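A sketch of the trimmed strategy (imports for DefaultGeneratorStrategy, Definition and Mode are omitted as in the original, since their package paths vary between jOOQ versions):

// Member names keep jOOQ's defaults, so the reflective POJO-to-Record copy
// in newRecord(...) can match them; only getter/setter names are customized.
public class CustomNameGeneratorStrategy extends DefaultGeneratorStrategy {

    @Override
    public String getJavaGetterName( Definition definition, Mode mode ) {
        String methodName = super.getJavaGetterName( definition, mode );
        methodName = methodName.replace( "_", "" );
        // isTombstone() seems more natural than getIsTombstone()
        return methodName.replace( "getIs", "is" );
    }

    @Override
    public String getJavaSetterName( Definition definition, Mode mode ) {
        return super.getJavaSetterName( definition, mode ).replace( "_", "" );
    }
}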
I need to invoke a stored procedure using JPA. The stored procedure operates on multiple tables and returns some of the columns from these tables.
I tried @Procedure, but that doesn't seem to work: the stored procedure is never found.
Directly calling the procedure using a native query was successful, but with this approach I need to map the returned result to a List of an object.
My implementation in the repository looks like this:
@Query(value = "EXECUTE dbs.multitable_Test :inputObj", nativeQuery = true)
List<sp> multitable_Test(@Param("inputObj") String inputObj);
The result returned from the stored procedure needs to be mapped to the sp class.
How can this be achieved when the single result set contains columns from multiple tables?
I already tried the AttributeConverter approach from
this link, but I am still getting the exception below:
org.springframework.core.convert.ConverterNotFoundException: No converter found capable of converting from type
Any help with this is appreciated.
Firstly, this is not really the use case for a procedure. A procedure is meant to modify data on the database without any return value; for that you could use:
@Procedure(procedureName = "procedure_name")
void procedure(); // notice void
You should rather use a function, created with the CREATE FUNCTION syntax. A function can modify data and return a result.
Secondly, if you want to map the result to some class, I see two solutions (using the EntityManager):
Using a ResultTransformer:
entityManager.createNativeQuery(
        "select * from function_name(:parameter)"
)
.setParameter("parameter", parameter)
.unwrap(org.hibernate.query.NativeQuery.class)
.setResultTransformer(new ResultTransformer() {
    @Override
    public Object transformTuple(Object[] tuple, String[] aliases) {
        return new Sp( (Long) tuple[0] );
    }

    @Override
    public List transformList(List collection) {
        return collection;
    }
})
.getResultList();
Note that ResultTransformer is deprecated, but it is so powerful that it will not be removed until there is a sensible replacement; see the note from a Hibernate developer.
Using a ResultSetMapping. Place the proper annotation on an entity:
@SqlResultSetMapping(
    name = "sp_mapping",
    classes = @ConstructorResult(
        targetClass = Sp.class,
        columns = {
            @ColumnResult(name = "attribute", type = Long.class)
        })
)
And invoke the function, passing the mapping name as a parameter:
entityManager.createNativeQuery(
        "select * " +
        "from function_name(:parameter);",
        "sp_mapping"
)
.setParameter("parameter", parameter)
.getResultList();
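For the @ConstructorResult mapping to work, the Sp class needs a constructor whose parameters match the declared columns in order and type; a minimal sketch:

public class Sp {

    private final Long attribute;

    // Matches @ColumnResult(name = "attribute", type = Long.class)
    public Sp(Long attribute) {
        this.attribute = attribute;
    }

    public Long getAttribute() {
        return attribute;
    }
}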
How can I get only the modified fields from an audited entity?
When I use
AuditQuery query = getAuditReader().createQuery().forEntitiesAtRevision(MyEntity.class, revisionNumber);
List results = query.getResultList();
I get all fields, but I want only the modified ones.
Without Modified Flags Feature
If you are not using the Modified Flags feature on the @Audited annotation, the only way to determine that an audited property changed from revision X to revision Y is to fetch both revisions and compare the actual field values between the two object instances yourself, as sketched below.
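A sketch of that brute-force comparison, assuming getName() is one of the audited properties:

// Load the entity at both revisions via AuditReaderFactory and diff by hand.
AuditReader reader = AuditReaderFactory.get( entityManager );
MyEntity atX = reader.find( MyEntity.class, myEntityId, revisionX );
MyEntity atY = reader.find( MyEntity.class, myEntityId, revisionY );
if ( !Objects.equals( atX.getName(), atY.getName() ) ) {
    // the "name" property changed between revision X and revision Y
}
// ...repeat for every audited property of interest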
With Modified Flags Feature
Assuming you are using the Modified Flags feature on the @Audited annotation, presently the only way is to fetch the revision numbers for a given entity instance and then, using those revisions and prior knowledge of the audited columns, use the Envers Query API to ask whether a property changed at that revision.
Obviously this approach is not ideal, as it requires the user code to know in advance which fields are audited in order to get the desired result.
List<Number> revisions = reader.getRevisions( MyEntity.class, myEntityId );
for ( Number revisionNumber : revisions ) {
    for ( String propertyName : propertyNamesToCheckList ) {
        final Long hits = (Long) reader.createQuery()
                .forRevisionsOfEntity( MyEntity.class, false, false )
                .add( AuditEntity.id().eq( myEntityId ) )
                .add( AuditEntity.revisionNumber().eq( revisionNumber ) )
                .add( AuditEntity.propertyName( propertyName ).hasChanged() )
                .addProjection( AuditEntity.id().count() )
                .getSingleResult();
        if ( hits == 1 ) {
            // propertyName changed at revisionNumber
        }
        else {
            // propertyName didn't change at revisionNumber
        }
    }
}
Modified Flags Property Changes Queries
In Hibernate Envers 6.0, we are introducing a new query that combines forRevisionsOfEntity with the modified-flags query mechanism to obtain not only the revised instances for a given entity class and primary key, but also the list of fields that were modified at each revision.
The following pseudo code gives an example of the future API:
List results = reader.createQuery()
        .forRevisionsOfEntityWithChanges( MyEntity.class, false )
        .add( AuditEntity.id().eq( entityId ) )
        .getResultList();

Object previousEntity = null;
for ( Object row : results ) {
    Object[] rowArray = (Object[]) row;
    final MyEntity entity = (MyEntity) rowArray[0];
    final RevisionType revisionType = (RevisionType) rowArray[2];
    final Set<String> propertiesChanged = (Set<String>) rowArray[3];
    for ( String propertyName : propertiesChanged ) {
        // using the property name here you know
        // 1. that the property changed in this revision (no compare needed)
        // 2. that you can get old/new values easily from previousEntity and entity
    }
}
This feature may be expanded upon or changed, as it is considered experimental, but it is something users have asked for, and we at least intend to deliver a first pass at this functionality based on modified flags.
We haven't decided if or how we'd support this for the non-modified-flags case at the moment, so again the only choice there presently is a brute-force bean comparison.
For more details on this feature see HHH-8058.
I'm using JDBC with createStruct() to call a stored procedure on an Oracle database that accepts a custom type as a parameter. The stored procedure inserts the custom type's fields into a table, and when I SELECT from the table later I see that all the fields I tried to insert are NULL.
The custom type looks like this:
type record_rec as object (owner_id varchar2 (7),
target_id VARCHAR2 (8),
IP VARCHAR2 (15),
PREFIX varchar2 (7),
port varchar2 (4),
description VARCHAR2 (35),
cost_id varchar2(10))
The stored procedure looks like this:
package body "PKG_RECORDS"
IS
procedure P_ADD_RECORD (p_target_id in out VARCHAR2,
p_record_rec in record_rec)
is
l_target_id targets.target_id%TYPE;
BEGIN
Insert into targets (target_id,
owner_id,
IP,
description,
prefix,
start_date,
end_date,
cost_id,
port,
server_name,
server_code)
values (f_sequence ('TARGETS'),
p_record_rec.owner_id,
p_record_rec.ip,
p_record_rec.description,
p_record_rec.prefix,
sysdate,
to_date ('01-JAN-2050'),
p_record_rec.cost_id,
p_record_rec.port,
'test-server',
'51')
returning target_id
into p_target_id;
END;
END PKG_RECORDS;
My Java code looks something like this:
try (Connection con = m_dataSource.getConnection()) {
    ArrayList<String> ids = new ArrayList<>();
    CallableStatement call = con.prepareCall("{call PKG_RECORDS.P_ADD_RECORD(?,?)}");
    for (Record r : records) {
        call.registerOutParameter("p_target_id", Types.VARCHAR);
        call.setObject("p_record_rec",
                con.createStruct("SCHEME_ADM.RECORD_REC", new Object[] {
                        r.getTarget_id(),
                        null, // will be populated by SP
                        r.getIp(),
                        r.getPrefix(),
                        r.getPort(),
                        r.getDescription(),
                        r.getCost_id()
                }), Types.STRUCT);
        call.execute();
        ids.add(call.getString("p_target_id"));
    }
    return new QueryRunner().query(con,
            "SELECT * from TARGETS_V WHERE TARGET_ID IN (" +
                    ids.stream().map(s -> "?").collect(Collectors.joining(",")) +
                    ")",
            new BeanListHandler<Record>(Record.class),
            ids.toArray(new Object[] {})
    ).stream()
     .collect(Collectors.toList());
} catch (SQLException e) {
    throw new DataAccessException(e.getMessage());
}
Notes:
* That last part uses Apache Commons DbUtils - I love their bean stream operations.
* The connection comes from a C3P0 connection pool - could that be related?
* Just to make it clear: it's not that the bean processor populates null values into the Record bean fields - if I use a SQL explorer to load the table (or view) directly, I can see that the fields in the database are indeed set to NULL.
There are no SQLExceptions when the process runs, nor any other indication that something is wrong.
Any ideas what to check?
[Update]
After reading up on Oracle Objects and SQLData mappings, I rewrote the code to use SQLData.
The Record class now implements SQLData, and its writeSQL() method looks like this:
@Override
public void writeSQL(SQLOutput stream) throws SQLException {
    stream.writeString(owner_id);
    stream.writeString(target_id);
    stream.writeString(Objects.isNull(ip) ? "0" : ip); // weird, but as specified
    stream.writeString(prefix);
    stream.writeString(String.valueOf(port));
    stream.writeString(description);
    stream.writeString(cost_id);
}
Then at the start of the calling code, I've added:
con.getTypeMap().put("SCHEME_ADM.RECORD_REC", Record.class);
And instead of using createStruct(), the setObject() call now looks simply like this:
call.setObject("p_record_rec", r, Types.STRUCT);
But the result is the same: no errors, and all the passed values are read as NULL. I've traced through the writeSQL() implementation and I can see that it is called and that all values are passed correctly into the Oracle code. I've also tried Types.JAVA_OBJECT in the setObject() call, and got an error: Invalid column type.
[Update 2]
Bordering on insane helplessness, I've implemented the OracleData pattern:
public class Record implements SQLData, OracleData, OracleDataFactory {
    ...

    @Override
    public Object toJDBCObject(Connection conn) throws SQLException {
        return conn.createStruct(getSQLTypeName(), new Object[] {
                Objects.isNull(owner_id) ? "" : owner_id,
                Objects.isNull(record_id) ? "" : record_id,
                Objects.isNull(ip) ? "0" : ip,
                Objects.isNull(prefix) ? "" : prefix,
                String.valueOf(port),
                Objects.isNull(description) ? "" : description,
                Objects.isNull(cost_id) ? "" : cost_id
        });
    }

    @Override
    public OracleData create(Object jdbcValue, int sqltype) throws SQLException {
        if (Objects.isNull(jdbcValue)) return null;
        LinkedList<Object> attr = new LinkedList<>(Arrays.asList(((OracleStruct) jdbcValue).getAttributes()));
        Record r = new Record();
        r.setOwner_id(attr.removeFirst().toString());
        r.setRecord_id(attr.removeFirst().toString());
        r.setIp(attr.removeFirst().toString());
        r.setPrefix(attr.removeFirst().toString());
        r.setPort(Integer.parseInt(attr.removeFirst().toString()));
        r.setDescription(attr.removeFirst().toString());
        r.setCost_id(attr.removeFirst().toString());
        return r;
    }

    public static OracleDataFactory getOracleDataFactory() {
        return new Record();
    }
}
Calling code:
...
// unwrap the Oracle statement from C3P0 (standard JDBC 4 API)
OracleCallableStatement ops = call.unwrap(OracleCallableStatement.class);
// I'm not sure why I even need to do this - it looks exactly like
// the standard JDBC code
for (Record r : records) {
    ops.registerOutParameter(1, Types.VARCHAR);
    ops.setObject(2, r);
    ops.execute();
    ids.add(ops.getString(1));
}
...
And again, the same result: no errors, a record is created in the table, and all the provided values are null. I've traced through the code; the toJDBCObject() method is called correctly and passes the values correctly into createStruct().
Found the problem. Annoyingly, it's about character encoding.
If, in the toJDBCObject() implementation, I run getAttributes() on the created struct, the resulting Object[] array has all fields set to "???". This looks like a character set transcoding failure (though even for that it's weird: three question marks for every field regardless of value length, including empty string values).
According to Oracle's JDBC developer guide, "Globalization Support":
The basic Java Archive (JAR) file, ojdbc7.jar, contains all the necessary classes to provide complete globalization support for:
Oracle character sets for CHAR, VARCHAR, LONGVARCHAR, or CLOB data that is not being retrieved or inserted as a data member of an Oracle object or collection type.
CHAR or VARCHAR data members of object and collection for the character sets US7ASCII, WE8DEC, WE8ISO8859P1, WE8MSWIN1252, and UTF8.
To use any other character sets in CHAR or VARCHAR data members of objects or collections, you must include orai18n.jar in the CLASSPATH environment variable:
ORACLE_HOME/jlib/orai18n.jar
And my setup was using the character set "WE8ISO8859P9" (I have no idea why, what it means, or even whether it is selected by the client or the server - I just dumped the STRUCT object created by the OracleData API implementation and it was there somewhere).
So when Oracle says it does not "provide complete globalization support", they mean "all character fields will be silently converted to NULL". Hmpph.
Anyway, adding orai18n.jar to the CLASSPATH indeed fixed the problem, and now records are added correctly to the database.
I am currently learning the Java Spring Framework, and I am having difficulty understanding why the following query fails to return any results from the database.
I am ultimately trying to create a where method in my OffersDAO class that allows me to query a specific field for a specific value.
public List<Offer> where(String field, String value) {
    MapSqlParameterSource params = new MapSqlParameterSource();
    params.addValue("field", field);
    params.addValue("value", value);
    String sql = "select * from offers where :field = :value";
    return jdbc.query(sql, params, new RowMapper<Offer>() {
        public Offer mapRow(ResultSet rs, int arg1) throws SQLException {
            Offer offer = new Offer();
            offer.setId(rs.getInt("id"));
            offer.setName(rs.getString("name"));
            offer.setText(rs.getString("text"));
            offer.setEmail(rs.getString("email"));
            return offer;
        }
    });
}
I am able to successfully query the database for results when I specify the field explicitly, as follows:
String sql = "select * from offers where name = :value";
Obviously there is something wrong with specifying the field name dynamically. My guess is that the field parameter is being bound as a MySQL string (quoted with ''), when MySQL actually expects a column name in place of the :field placeholder.
My questions are as follows:
Is there a way to accomplish what I am attempting above using the NamedParameterJdbcTemplate class?
If not, by what means can I accomplish it?
Thank you
Edit: No exceptions are thrown. When I attempt to supply the column name dynamically, an empty result set is returned.
You can't specify the field name in a parameter - only the field value. Since you know the DB schema when you're writing the code, this shouldn't be much of a problem.
What about including all possible fields in the filter, but restricting their usage by the field-name parameter? Like this:
select * from offers where
('name'=:field and name = :value)
OR
('field2'=:field and field2 = :value)
OR
('field3'=:field and field3 = :value)
I don't know how you can implement variable column names with Spring, but I can suggest the following principle.
Keep your query as a template:
String sql = "select * from offers where ##field = :value";
Before each execution, replace the ##field placeholder with the column you want.
And then you are done. A sketch follows.
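A minimal sketch of that idea, combined with a column whitelist so the spliced-in name cannot be abused for SQL injection (the allowed set and the BeanPropertyRowMapper usage are assumptions, not part of the original code; imports from java.util and org.springframework.jdbc are assumed):

// Whitelist of columns that may be queried dynamically (hypothetical).
private static final Set<String> ALLOWED_FIELDS =
        new HashSet<>(Arrays.asList("id", "name", "text", "email"));

public List<Offer> where(String field, String value) {
    if (!ALLOWED_FIELDS.contains(field)) {
        throw new IllegalArgumentException("Unknown column: " + field);
    }
    // Only the validated column name is spliced into the SQL text;
    // the value is still bound as a named parameter.
    String sql = "select * from offers where ##field = :value"
            .replace("##field", field);
    MapSqlParameterSource params = new MapSqlParameterSource("value", value);
    return jdbc.query(sql, params, new BeanPropertyRowMapper<>(Offer.class));
}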