Morphia 1.2.1 upgrade - FieldEndImpl is now private

I'm struggling to upgrade from Morphia 1.0.1 to 1.2.1. With 1.0.1 we had to override Morphia's equal() and the other comparison calls to throw an exception if the value being queried was null. Doing that closed a security hole: if such a call ran without an exception, the first record in the database with a null value would be selected.
To do this, we overrode morphia.createDatastore() in our Guice module to return a custom datastore. That datastore returned a custom Query object, which in turn returned a custom FieldEnd when Query.field() was called. This FieldEnd did the exception checking.
That worked; however, our NotAllowingNullsFieldEnd class extends FieldEndImpl, which is no longer public in 1.2.1, so I have a problem.
We need a way to stop Queries from accepting null as a valid argument in the 1.2.1 world.
One solution would be to move NotAllowingNullsFieldEnd into the same package as FieldEndImpl (org.mongodb.morphia.query), but that seems really hacky.
I'm not a Morphia expert, and I'm actually fairly new to Java, so any expert input would be welcome.
Just FYI, this was implemented before my time, so I don't have much to add about the in-depth reasons this path was chosen; I've just been asked to do the upgrade.
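For context, the guard looked roughly like this. This is a minimal sketch against Morphia's public FieldEnd interface (our real class extended FieldEndImpl directly, and it wrapped every comparison method, not just equal()):

import org.mongodb.morphia.query.FieldEnd;

// Declared abstract so the sketch compiles without spelling out every
// FieldEnd method; a concrete subclass would delegate the rest the same way.
public abstract class NotAllowingNullsFieldEnd<T> implements FieldEnd<T> {
    private final FieldEnd<T> delegate;

    protected NotAllowingNullsFieldEnd(FieldEnd<T> delegate) {
        this.delegate = delegate;
    }

    @Override
    public T equal(Object value) {
        if (value == null) {
            throw new IllegalArgumentException(
                    "null is not a valid query value: it would match the first record whose field is null");
        }
        return delegate.equal(value);
    }
}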

As of Morphia 1.3.2, the class FieldEndImpl is public again.

JDBI failure case

I am looking at some examples of using the JDBI library for Java database access.
One such example is as follows:
List<String> names = mJdbi.withHandle(handle ->
        handle.createQuery("select name from test_table")
              .mapTo(String.class)
              .list());
I am confused about what happens when this call fails. For example, what if there is no table called test_table? What should I expect the outcome of this code to be in that case?
Well, what should you expect if any call fails in Java? Maybe an Exception?
Be happy that Jdbi unburdens you from dealing with SQLException directly. Here is what you will be facing: https://jdbi.org/apidocs/org/jdbi/v3/core/JdbiException.html (or any subclass thereof; in your case probably StatementException).
On a side note: it takes less than five minutes to set up a project with an in-memory database and try it out...
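For instance, a minimal sketch (assuming Jdbi 3 and the H2 driver on the classpath; test_table deliberately does not exist here, so the query fails):

import org.jdbi.v3.core.Jdbi;
import org.jdbi.v3.core.statement.StatementException;

public class JdbiFailureDemo {
    public static void main(String[] args) {
        // In-memory H2 database; no test_table exists in it.
        Jdbi jdbi = Jdbi.create("jdbc:h2:mem:demo");
        try {
            jdbi.withHandle(handle ->
                    handle.createQuery("select name from test_table")
                          .mapTo(String.class)
                          .list());
        } catch (StatementException e) {
            // The underlying SQLException is preserved as the cause.
            System.out.println("Query failed: " + e.getMessage());
        }
    }
}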

Is it possible to change the value of an annotation using ByteBuddy?

I am trying to develop a tool that needs to work with annotations.
One important feature is to target an element annotated with an annotation and change its value, i.e.
// from this
@Annotation(value = "foo")
class SomeClass {}
// to this
@Annotation(value = "bar")
class SomeClass {}
I made an attempt in which I first remove the annotation with an AsmVisitorWrapper and then re-add it with the modified value.
Sadly this does not seem to work.
I used the byte-buddy-maven-plugin to add this transformation. The error happens during the transform goal. I tracked the generic error down to a NullPointerException: the ASM ClassVisitor seems to run after the annotateType() step and tries to apply some visit step to the newly attached annotation value. I think the NullPointerException is caused by the visitor, because to remove an annotation you have to return null.
I made a test repository on GitHub where I pushed my attempt, hoping it helps to show what I need to achieve: https://github.com/Fed03/bytebuddy-switch-annotation-test
Thanks
This is indeed a bug in Byte Buddy; it is now fixed on master and will be part of version 1.10.2. The problem is that you are removing an annotation that you are also adding, and this scenario was not considered.
However, even with this fix, your problem is not solved despite a green build: you would need a better discriminator to tell Byte Buddy which annotation you are removing. I would recommend transforming the annotation rather than removing it only to add it again. Any matcher that discriminates which of the two annotations needs to be removed has to partially implement such change discovery already, which is why it should not be much more difficult to implement the transformer in the first place.
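A minimal sketch of that transform-in-place idea, using an AsmVisitorWrapper with Byte Buddy's repackaged ASM (the annotation descriptor and the "foo"/"bar" values are hypothetical placeholders):

import net.bytebuddy.asm.AsmVisitorWrapper;
import net.bytebuddy.description.field.FieldDescription;
import net.bytebuddy.description.field.FieldList;
import net.bytebuddy.description.method.MethodList;
import net.bytebuddy.description.type.TypeDescription;
import net.bytebuddy.implementation.Implementation;
import net.bytebuddy.jar.asm.AnnotationVisitor;
import net.bytebuddy.jar.asm.ClassVisitor;
import net.bytebuddy.jar.asm.Opcodes;
import net.bytebuddy.pool.TypePool;

// Rewrites @Annotation(value = "foo") to @Annotation(value = "bar") in place,
// instead of removing the annotation and re-adding it.
public class AnnotationValueRewriter extends AsmVisitorWrapper.AbstractBase {
    @Override
    public ClassVisitor wrap(TypeDescription instrumentedType,
                             ClassVisitor classVisitor,
                             Implementation.Context implementationContext,
                             TypePool typePool,
                             FieldList<FieldDescription.InDefinedShape> fields,
                             MethodList<?> methods,
                             int writerFlags,
                             int readerFlags) {
        return new ClassVisitor(Opcodes.ASM7, classVisitor) {
            @Override
            public AnnotationVisitor visitAnnotation(String descriptor, boolean visible) {
                AnnotationVisitor av = super.visitAnnotation(descriptor, visible);
                if (!"Lcom/example/Annotation;".equals(descriptor)) { // hypothetical descriptor
                    return av;
                }
                return new AnnotationVisitor(Opcodes.ASM7, av) {
                    @Override
                    public void visit(String name, Object value) {
                        // Swap the existing value rather than dropping the annotation.
                        super.visit(name, "foo".equals(value) ? "bar" : value);
                    }
                };
            }
        };
    }
}

The wrapper plugs into any DynamicType.Builder via visit(new AnnotationValueRewriter()), e.g. from the maven plugin's Plugin implementation.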

Apache Camel - Spring DSL - Pass String argument to bean method

On Camel 2.10.1, the following worked:
<camel:bean ref="profilingBean" method="addProfilingContext('TEST')"/>
The method in question takes a String parameter.
Migrating to 2.10.6, this no longer works: Camel tries to resolve TEST as a class. I have tried wrapping it with ${}, trying exotic combinations of &quot;, etc.
The only solution I have found is to put the value in a header using the constant language and then pass the header using simple. Obviously, this isn't very clean...
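For reference, that workaround looks roughly like this (route context omitted; header name is made up):

<camel:setHeader headerName="profilingContext">
    <camel:constant>TEST</camel:constant>
</camel:setHeader>
<camel:bean ref="profilingBean" method="addProfilingContext(${header.profilingContext})"/>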
Do you have any ideas how to do this?
Cheers
The behavior/bug still exists in Camel 2.16 and also in the latest 2.18.2.
For every string constant that is passed to a bean via the Spring DSL, a java.lang.ClassNotFoundException is thrown.
It becomes visible by setting the logger for org.apache.camel.util.ObjectHelper to TRACE.
This Camel behavior also has a serious negative performance impact, because java.lang.ClassLoader.loadClass is synchronized for a given class name.
I wrote a little demo to show this: https://github.com/groundhog2k/camel-dsl-bean-bug
Your solution with the header is fine. The bug you are talking about should be fixed in 2.10.7, 2.11.1, etc.

Re-serializing JBPM process variables directly via MySQL

I'm working with an application that uses JBPM 3.1 and MySQL. The core problem is that there are process instances whose variables contain an older version of an external, non-JBPM Serializable class. When the main application is upgraded, these process instances cause JBPM to throw an exception, since the SUID of that class has changed in the main application.
I believe I have a method for fixing the deserialization process using the technique described in the following:
How to deserialize an object persisted in a db now when the object has different serialVersionUID
However, my problem is figuring out where in MySQL JBPM stores process instance variables, so I can write a program that iterates over all the variables of all instances and reserializes them, so that the offending class has the new SUID and JBPM can operate on the processes.
From my initial look at the JBPM tables, it appears that JBPM_BYTEARRAY and/or JBPM_BYTEBLOCK may be the tables to operate on. However, I'm unsure how to proceed. I'm guessing each process variable is stored in a wrapping container class. Is that class org.jbpm.context.exe.VariableInstance? Or is it something else?
I figure that if I have the proper jar files on the classpath, and I know which class JBPM uses to store process variables in MySQL, I can deserialize that class (which will fix the SUID problem with the embedded problem class) and reserialize it back. Since the JBPM documentation mentions converters, I'm unsure whether I have to replicate the conversion process JBPM performs when deserializing, or whether standard Java deserialization is enough.
Some analysis of JBPM indicates that binary data may be split across multiple records. This may not be necessary for MySQL itself, but the JBPM code is written to support multiple RDBMSs, and some have limits on the size of binary records.
Since the question earned me a tumbleweed badge, I was not going to get a usable MySQL-based answer within the deadline I had to meet, so I reconsidered the core problem and the context it occurs in, and came up with a solution that avoids the need for direct MySQL operations.
The main application in question already has some custom modifications to JBPM, so the solution I implemented alters the JBPM source that deserializes process instance variables. This avoids having to deal with the JBPM logic that extracts the serialized binary data from the RDBMS.
In the class org.jbpm.context.exe.converter.SerializableToByteArrayConverter, I modified the code to use a custom ObjectInputStream class that returns the latest SUID of a class. The technique of simply replacing the descriptor with the latest version of the class, as described in the post referenced in the question, does not work if the new class includes new fields: doing so causes an end-of-data exception, since the base deserialization code tries to read the "new" fields from the old serialized form of the class.
Therefore, I needed to replace just the SUID while keeping all other parts of the descriptor the same. Since the JDK does not make ObjectStreamClass extensible, I created a subclass of ObjectInputStream that returns the new SUID based on the calling pattern the Java library executes against ObjectInputStream when deserializing data.
The pattern: when reading the header of a serialized object, readUTF() is called (to obtain the class name), followed by a readLong() call. So if this calling sequence occurs, and readUTF() returned the name of a class whose SUID I want to change, I return the newer SUID from the readLong() call.
The custom code reads a configuration file that specifies class names and the old SUIDs that should be mapped to the latest SUIDs of those classes. This allows mapping other classes in the future without modifying the custom code.
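A minimal sketch of that stream (class and field names here are illustrative, not the actual JBPM patch; the map would be loaded from the configuration file):

import java.io.IOException;
import java.io.InputStream;
import java.io.ObjectInputStream;
import java.util.Map;

public class SuidMappingObjectInputStream extends ObjectInputStream {
    private final Map<String, Long> newSuidByClassName;
    private String lastClassName;

    public SuidMappingObjectInputStream(InputStream in, Map<String, Long> newSuidByClassName)
            throws IOException {
        super(in);
        this.newSuidByClassName = newSuidByClassName;
    }

    @Override
    public String readUTF() throws IOException {
        // Remember the most recently read class name from the descriptor header.
        lastClassName = super.readUTF();
        return lastClassName;
    }

    @Override
    public long readLong() throws IOException {
        long original = super.readLong();
        // Only remap the readLong() that immediately follows a readUTF()
        // of a class name we were told to remap.
        Long mapped = newSuidByClassName.get(lastClassName);
        lastClassName = null;
        return mapped != null ? mapped : original;
    }
}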
Note that this approach is applicable to general deserialization, wherever one needs to map old SUIDs to the latest SUIDs of specific classes while leaving the rest of the serialized class descriptor alone, to avoid end-of-data problems when the newer class definition declares fields not present in the older one.
Do you know whether you made changes that break the contract, or did you just add new fields? If it is simply adding new fields, then just define the prior serialVersionUID in the new class. Otherwise, you will have to read all the variables that have different serialVersionUIDs and save them under the new class, because you are the only person who knows how to convert them.
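That is, pin the evolved class to the old value (class name and number are hypothetical):

public class ExternalThing implements java.io.Serializable {
    // Keep the SUID of the old class version so existing serialized
    // instances still deserialize (4711L stands in for the real old value).
    private static final long serialVersionUID = 4711L;

    private String name;    // existing field
    private int addedLater; // new field: old data simply leaves it at 0
}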

Why has Hibernate switched to use LONG over CLOB?

It looks like Hibernate started using the LONG data type in version 3.5.5 (we upgraded from 3.2.7) instead of CLOB for properties of type="text".
This is causing problems, as LONG is an old, outdated Oracle data type (see http://www.orafaq.com/wiki/LONG) that shouldn't be used, and a table can't have more than one LONG column.
Does anyone know why this has been changed?
I have tried setting the Oracle SetBigStringTryClob property to true (as suggested in Hibernate > CLOB > Oracle :(), but that does not affect the data type mapping, only data transfer internals, which are irrelevant in my case.
One possible fix is to override org.hibernate.dialect.Oracle9iDialect:
import java.sql.Types;

import org.hibernate.dialect.Oracle9iDialect;

public class Oracle9iDialectFix extends Oracle9iDialect {
    public Oracle9iDialectFix() {
        super();
        registerColumnType(Types.LONGVARCHAR, "clob");
        registerColumnType(Types.LONGNVARCHAR, "clob");
    }
}
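The overriding dialect is then declared in the Hibernate configuration, e.g. (assuming the class above lives in com.example):

hibernate.dialect=com.example.Oracle9iDialectFix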
However, this is a last resort: overriding this class is a step closer to forking Hibernate, which I would rather avoid.
Can anybody explain why this was done?
Should this be raised as a bug?
[UPDATE]: I have created https://hibernate.atlassian.net/browse/HHH-5569, let's see what happens.
It looks like the resolution to this issue is to use materialized_clob; at least that's what Gail Badner says on HHH-5569.
This doesn't help me at all (and I left a comment to that effect), but it might be helpful for someone else here. Anyway, the bug was rejected, and there is little I can do about it but use the overridden dialect :(
Can anybody explain why this was done? Should this be raised as a bug?
This has been done for HHH-3892 - Improve support for mapping SQL LONGVARCHAR and CLOB to Java String, SQL LONGVARBINARY and BLOB to Java byte[] (update of the documentation is tracked by HHH-4878).
And according to the same issue, the old behavior was wrong.
(NOTE: currently, org.hibernate.type.TextType incorrectly maps "text" to java.sql.Types.CLOB; this will be fixed by this issue and updated in database dialects)
You can always raise an issue, but in short, my understanding is that you should use type="clob" if you want the property mapped to a CLOB.
PS: Providing your own Dialect and declaring it in your Hibernate configuration (which has nothing to do with a fork) is IMHO not a long-term solution.
I cannot answer your question about why, but for Hibernate 6, it seems they're considering switching back to using CLOB
