How to make BlazeDS ignore properties? - java

I have a Java class with one field that has a getter and setter, plus a second getter/setter pair that accesses the same field in another way:
public class NullAbleId {
    private static final int NULL_ID = -1;
    private int internalId;

    public int getInternalId() {
        return internalId;
    }

    public void setInternalId(int internalId) {
        this.internalId = internalId;
    }

    public Integer getId() {
        if (this.internalId == NULL_ID) {
            return null;
        } else {
            return Integer.valueOf(internalId);
        }
    }

    public void setId(Integer id) {
        if (id == null) {
            this.internalId = NULL_ID;
        } else {
            this.internalId = id.intValue();
        }
    }
}
(The reason for this construction is that I want a way to handle nullable Integers.)
On the Flash/Flex client side, I have a class with two properties: id and internalId (the id property is only for testing; in the end it should return the internalId value).
BlazeDS seems to transfer both values, id and internalId, because both have a complete getter/setter pair. I want BlazeDS not to transfer id; only internalId should be transferred. But I have no idea how to configure that.

All the rules for BlazeDS serialization are here:
http://livedocs.adobe.com/blazeds/1/blazeds_devguide/help.html?content=serialize_data_3.html
Here is a quote: "Fields that are static, transient, or nonpublic, as well as bean properties that are nonpublic or static, are excluded."
So if you can make your id property fit those criteria, it will be excluded. Another option would be to create a custom serializer that explicitly omits your id property.
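For example, a minimal sketch of the first option, assuming the quoted rule: dropping public from the id accessors makes that bean property non-public, which should exclude it from AMF serialization while keeping it callable from Java code in the same package:

public class NullAbleId {
    private static final int NULL_ID = -1;
    private int internalId;

    // Public bean property: transferred by BlazeDS.
    public int getInternalId() { return internalId; }
    public void setInternalId(int internalId) { this.internalId = internalId; }

    // Package-private bean property: excluded per the rule quoted above.
    Integer getId() { return internalId == NULL_ID ? null : Integer.valueOf(internalId); }
    void setId(Integer id) { this.internalId = (id == null) ? NULL_ID : id.intValue(); }
}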
All the best,
~harris

Besides the transient/marshaller approaches, you can implement the Externalizable interface and write your own custom serialization.
See the serialization rules linked above.
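A minimal sketch of the Externalizable route, assuming BlazeDS's standard handling of java.io.Externalizable (only what writeExternal emits crosses the wire):

import java.io.Externalizable;
import java.io.IOException;
import java.io.ObjectInput;
import java.io.ObjectOutput;

public class NullAbleId implements Externalizable {
    private static final int NULL_ID = -1;
    private int internalId;

    // Only internalId is written, so the derived id property is never transferred.
    @Override
    public void writeExternal(ObjectOutput out) throws IOException {
        out.writeInt(internalId);
    }

    @Override
    public void readExternal(ObjectInput in) throws IOException {
        internalId = in.readInt();
    }
}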

It may be a little bit old, but it could help some: there is a nice ticket about excluding properties from Java to Flex via BlazeDS.
EDIT: a better solution is to use the @AmfIgnore annotation (or @AmfIgnoreField if your serialization works directly on the fields) from spring-flex-core.jar (I've used 1.5.2.RELEASE).
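A sketch of that annotation approach, assuming spring-flex-core 1.5.x is on the classpath; annotating both accessors ignores the property in both serialization directions:

import org.springframework.flex.core.io.AmfIgnore;

public class NullAbleId {
    private static final int NULL_ID = -1;
    private int internalId;

    @AmfIgnore
    public Integer getId() { return internalId == NULL_ID ? null : Integer.valueOf(internalId); }

    @AmfIgnore
    public void setId(Integer id) { this.internalId = (id == null) ? NULL_ID : id.intValue(); }
}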

Related

Order YAML file entries according to its java equivalent class (SnakeYaml)

I am using SnakeYaml to both load and dump data in Java. For this I have created a custom class with fields; say the class looks something like this:
public class Person {
    private String name;
    private String lastName;
    private String address;

    public Person() {
        // Do nothing
    }

    // Getters and setters initialized for all the fields
}
Now, what I would like is that when I write a Person object to a file with SnakeYaml, the fields appear in the order in which they are defined in the class.
e.g.
name: Patrick
lastName: Star
address: Bikini Bottom
The problem is that for more advanced examples, this ordering is not achieved. Currently I am writing/dumping to a YAML file like the following:
Constructor struct = new Constructor(YamlIteratorModel.class);
Yaml yaml = new Yaml(struct);
try {
    String path = "Some/File/Path/yamlfile.yaml";
    FileWriter writer = new FileWriter(path);
    yaml.dump(iteratorModel, writer);
} catch (IOException e) {
    // Do something
}
I have also tried creating a class which extends Representer and is passed to the Yaml constructor in a similar manner. This one is taken from another post, and it doesn't do the job for me, as it only sorts the properties in an order I am not entirely sure of (can't find the link right now, but I will update if I find it again):
public class ConfigurationModelRepresenter extends Representer {
    /**
     * Create object without specified dumper object
     */
    public ConfigurationModelRepresenter() {
        super();
    }

    /**
     * Create object with dumper options
     *
     * @param options
     */
    public ConfigurationModelRepresenter(DumperOptions options) {
        super(options);
    }

    /** {@inheritDoc} */
    @Override
    protected Set<Property> getProperties(Class<? extends Object> type) {
        Set<Property> propertySet;
        if (typeDefinitions.containsKey(type)) {
            propertySet = typeDefinitions.get(type).getProperties();
        } else {
            propertySet = getPropertyUtils().getProperties(type);
        }
        List<Property> propsList = new ArrayList<>(propertySet);
        Collections.sort(propsList, new BeanPropertyComparator());
        return new LinkedHashSet<>(propsList);
    }

    class BeanPropertyComparator implements Comparator<Property> {
        @Override
        public int compare(Property p1, Property p2) {
            if (p1.getType().getCanonicalName().contains("util")
                    && !p2.getType().getCanonicalName().contains("util")) {
                return 1;
            } else if (p2.getName().endsWith("Name") || p2.getName().equalsIgnoreCase("name")) {
                return 1;
            } else {
                return -1;
            }
        }
    }
}
SUMMARY: How do I maintain the ordering when dumping an object to a YAML file (using SnakeYaml), i.e. the order in which the fields are defined in the custom class?
See this question, which discusses that you cannot get the line number of a declared field via reflection.
Together with the fact that reflection gives you a class's fields in no particular order, it is obvious that it is not possible to observe the order of declared fields in a class at runtime, and it follows that you cannot order the keys in your YAML output according to their position in the source, because you cannot know that order.
The remedy is to transport the knowledge of the order to the runtime. Some possible ways to do this might be:
Annotate each field with a weight that defines the position of the resulting YAML key (ugly because you need annotations on the fields).
Autogenerate code by parsing the class's definition, discovering the order from there, and writing it to some autogenerated source file whose code is then used to order the properties in your Representer (this solution, while avoiding bloat in the original class, is very complex and elaborate).
Hard-code the field order in the Representer (a sketch follows after this list). That's basically the previous solution, but without autogeneration. Error-prone because the Representer must be adjusted each time the class is changed.
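For illustration, a minimal sketch of that last option, assuming the Person class from the question (the class name and the hard-coded ORDER list are illustrative and must mirror the source):

import java.util.*;
import org.yaml.snakeyaml.introspector.Property;
import org.yaml.snakeyaml.representer.Representer;

public class PersonRepresenter extends Representer {
    // Fragile by design: must be kept in sync with Person's field order.
    private static final List<String> ORDER = Arrays.asList("name", "lastName", "address");

    @Override
    protected Set<Property> getProperties(Class<? extends Object> type) {
        List<Property> props = new ArrayList<>(super.getProperties(type));
        // Sort by position in ORDER; unknown property names go last.
        props.sort(Comparator.comparingInt(p -> {
            int idx = ORDER.indexOf(p.getName());
            return idx < 0 ? Integer.MAX_VALUE : idx;
        }));
        return new LinkedHashSet<>(props);
    }
}

It would be used as new Yaml(new PersonRepresenter()).dump(person, writer).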
I recommend against using any of those solutions. The YAML spec specifically says that key order must not convey content information, and if the order is important to you, you are already violating the YAML spec and should switch to a format that better serves your needs.

Dealing with changed ENUM definitions - database

Introduction
The lead architect went and changed the enum definition in a Spring Boot project.
From:
public enum ProcessState {
    C("COMPLETE"), P("PARTIAL");
    private final String value;
    ProcessState(String value) { this.value = value; }
}
To:
public enum ProcessState {
    COMPLETE("COMPLETE"), PARTIAL("PARTIAL");
    private final String value;
    ProcessState(String value) { this.value = value; }
}
What is the proper way to deal with this? Some other Java Spring Boot applications are now breaking. Would there be a way to tell the Jackson deserializer to perform some kind of conversion in these situations?
My Current Work-Around
What I did was to run two update statements on the oracle database:
UPDATE store set PAYLOAD = REPLACE(PAYLOAD, '"processState":"P"','"processState":"PARTIAL"') where PAYLOAD like '%"processState":"P"%';
UPDATE store set PAYLOAD = REPLACE(PAYLOAD, '"processState":"C"','"processState":"COMPLETE"') where PAYLOAD like '%"processState":"C"%';
Question
So are there other ways? Could I do it by adding some deserialization/conversion code somewhere for these specific cases? Is there a more elegant way than running a replace SQL statement?
Could I do some kind of hack in a specific Java sub-package and say "use this enum instead of that enum", or use one of the two, but without affecting the rest of the code?
The error:
java.lang.IllegalArgumentException: No enum constant
Ideally we store the value of the enum rather than the enum constant's name. So you should save enum values like COMPLETE and PARTIAL.
For JSON serialization and deserialization, use @JsonValue:

@JsonValue
public String toValue() {
    return value;
}
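For context, a sketch of the full enum that answer implies (the field and accessor names are assumptions):

import com.fasterxml.jackson.annotation.JsonValue;

public enum ProcessState {
    COMPLETE("COMPLETE"), PARTIAL("PARTIAL");

    private final String value;

    ProcessState(String value) {
        this.value = value;
    }

    // Jackson writes "COMPLETE"/"PARTIAL" instead of the constant name.
    @JsonValue
    public String toValue() {
        return value;
    }
}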
One additional solution to the others posted:
@JsonCreator
public static ProcessState factory(String inputValue) {
    if (inputValue.length() == 1) {
        for (ProcessState type : ProcessState.values()) {
            if (inputValue.equals(type.getValue().substring(0, inputValue.length()))) {
                return type;
            }
        }
    }
    return ProcessState.valueOf(inputValue);
}
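A quick usage sketch (Jackson's ObjectMapper assumed; readValue declares checked exceptions that real code must handle or declare): with that factory in place, both the old one-letter payloads and the new full names deserialize to the same constant:

ObjectMapper mapper = new ObjectMapper();
ProcessState fromOld = mapper.readValue("\"P\"", ProcessState.class);       // PARTIAL
ProcessState fromNew = mapper.readValue("\"PARTIAL\"", ProcessState.class); // PARTIAL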
Implement a JPA converter like this:
@Converter(autoApply = true)
public class ProcessStateConverter
        implements AttributeConverter<ProcessState, String> {

    private final ImmutableBiMap<ProcessState, String> map =
            ImmutableBiMap.<ProcessState, String>builder()
                    .put(COMPLETE, "C")
                    .put(PARTIAL, "P")
                    .build();

    @Override
    public String convertToDatabaseColumn(ProcessState attribute) {
        return Optional.ofNullable(map.get(attribute))
                .orElseThrow(() -> new RuntimeException("Unknown ProcessState: " + attribute));
    }

    @Override
    public ProcessState convertToEntityAttribute(String dbData) {
        return Optional.ofNullable(map.inverse().get(dbData))
                .orElseThrow(() -> new RuntimeException("Unknown String: " + dbData));
    }
}
Remember to treat your enum like a simple column and not as @Enumerated, i.e.:

@Entity
public class MyEntity {
    @Column // no @Enumerated
    private ProcessState processState;
    // ...
}
The drawback is that you need to maintain the converter each time something changes, so it's better to create a unit test that checks everything is correctly mapped; a sketch of such a test follows below.
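A possible version of that test (JUnit 4 assumed), round-tripping every constant through the converter:

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class ProcessStateConverterTest {
    private final ProcessStateConverter converter = new ProcessStateConverter();

    @Test
    public void roundTripsEveryConstant() {
        for (ProcessState state : ProcessState.values()) {
            String dbValue = converter.convertToDatabaseColumn(state);
            assertEquals(state, converter.convertToEntityAttribute(dbValue));
        }
    }
}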

How can I "patch" a JPA entity?

Let's pretend a RESTful service receives a PATCH request to update one or more fields of an entity that might have tens of fields.
@Entity
public class SomeEntity {
    @Id
    @GeneratedValue
    private Long id;
    // many other fields
}
One dirty way to patch the corresponding entity is to write something like this:
SomeEntity patch = deserialize(json);
SomeEntity existing = findById(patch.getId());
if (existing != null) {
    if (patch.getField1() != null) {
        existing.setField1(patch.getField1());
    }
    if (patch.getField2() != null) {
        existing.setField2(patch.getField2());
    }
    if (patch.getField3() != null) {
        existing.setField3(patch.getField3());
    }
}
But this is insane! And if I want to patch one-to-many and other associations of the entity, the insanity could even become hazardous!
Is there a sane and elegant way to achieve this task?
Modify the getters of SomeEntity to apply a check: if a value is blank or null, just return the corresponding value from the existing entity object.
class SomeEntity {
    transient SomeEntity existing;
    private String name;

    public String getName() {
        if ((name != null && name.length() > 0) || existing == null) {
            return name;
        }
        return existing.getName();
    }
}
You can send an array containing the names of the fields you are going to patch. Then, on the server side, set each field on the entity by reflection or any field-mapping mechanism (sketched below). I have already implemented that and it works, though my best advice is this:
Don't publish an endpoint to perform a "generic" PATCH (modification), but one that performs a specific operation. For instance, if you want to modify an employee's address, publish an endpoint like:
PUT /employees/3/move
that expects a JSON with the new address {"address" : "new address"}.
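A minimal sketch of the reflection variant mentioned above, assuming the client sends a map from field names to new values (PatchUtil and applyPatch are illustrative helpers, not library methods):

import java.lang.reflect.Field;
import java.util.Map;

public final class PatchUtil {
    public static <T> void applyPatch(T entity, Map<String, Object> changes)
            throws ReflectiveOperationException {
        for (Map.Entry<String, Object> change : changes.entrySet()) {
            // Note: getDeclaredField does not search superclasses.
            Field field = entity.getClass().getDeclaredField(change.getKey());
            field.setAccessible(true); // also reach private fields
            field.set(entity, change.getValue());
        }
    }
}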
Instead of reinventing the wheel by writing the logic yourself, why don't you use a mapping library like Dozer? You want to use the 'map-null' mapping property: http://dozer.sourceforge.net/documentation/exclude.html
EDIT: I am not sure whether or not it would be possible to map a class onto itself. You could use an intermediary DTO, though.
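A hedged sketch of the Dozer route (Dozer 5.x API assumed; the mapping file name is illustrative). With map-null="false" declared in the mapping, null fields in the patch are skipped instead of overwriting existing values:

DozerBeanMapper mapper = new DozerBeanMapper(); // org.dozer.DozerBeanMapper
mapper.setMappingFiles(Collections.singletonList("patch-mapping.xml")); // mapping declares map-null="false"
mapper.map(patch, existing); // copies only non-null fields onto the managed entity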

JPA: Map invalid database values to enums

In my data model I have many entities whose attributes are mapped to enumerations like this:
@Enumerated(EnumType.STRING)
private MySpecialEnum enumValue;
MySpecialEnum defines some fixed values. The mapping works fine, and if the database holds a NULL value for a column, I get null in the enumValue attribute too.
The problem is that my backend module (which I have no influence on) uses spaces in CHAR columns to indicate that no value is set. So I get an IllegalArgumentException instead of a null value.
So my question is: is there a JPA event where I can change the value read from the database before it is mapped to the enum attribute?
For write access there is @PrePersist, where I can change null values to spaces. I know there is the @PostLoad event, but that is handled after mapping.
By the way: I am using OpenJPA as shipped with WebSphere Application Server.
You could map the enum-typed field as @Transient (it will not be persisted) and map another field directly as String, synchronizing them in @PostLoad:
@Transient
private MyEnum fieldProxy;

private String fieldDB;

@PostLoad
public void postLoad() {
    if (" ".equals(fieldDB))
        fieldProxy = null;
    else
        fieldProxy = MyEnum.valueOf(fieldDB);
}
Use get/setFieldProxy() in your Java code.
As for synchronizing the other way, I'd do it in a setter, not in @PreUpdate, as changes to @Transient fields probably do not mark the entity as modified and the update operation might not be triggered (I'm not sure of this):
public void setFieldProxy(MyEnum value) {
    fieldProxy = value;
    if (fieldProxy == null)
        fieldDB = " ";
    else
        fieldDB = value.name();
}
OpenJPA offers @Externalizer and @Factory to handle "special" database values.
See this: http://ci.apache.org/projects/openjpa/2.0.x/manual/manual.html#ref_guide_pc_extern_values
You might end up with something like this (not tested):
#Factory("MyClass.mySpecialEnumFactory")
private MySpecialEnum special;
...
public static MySpecialEnum mySpecialEnumFactory(String external) {
if(StringUtils.isBlank(external) return null; // or why not MySpecialEnum.NONE;
return MySpecialEnum.valueOf(external);
}

JPA ID Generation Strategy

I defined a generator for a JPA class:
<sequence-generator name="MY_SEQ" allocation-size="-1"
sequence-name="MY_SEQ"
initial-value="100000000" />
There are cases where I already have an ID for an entity, but when I insert the entity, the ID gets generated by the generator anyway.
Is it possible to define a generator that will only generate an ID when one does not exist?
I am using Hibernate as a JPA Provider.
Thank you
I couldn't find a way to do this in JPA, so I used Hibernate EJB3 event listeners. I overrode saveWithGeneratedId to use reflection to check the entity for an @Id annotation and then to check that field for a value. If it has a value, I call saveWithRequestedId instead; otherwise I let Hibernate generate the ID. This worked well because I can still use the sequence that is set up for Hibernate if I need an ID. The reflection might add overhead, so I might change it a little. I was thinking of having a getId() or getPK() method in all entities so I don't have to search for which field is the @Id.
Before I used reflection, I tried calling session.getIdentifier(entity) to check, but I was getting TransientObjectException("The instance was not associated with this session"). I couldn't figure out how to get the entity into the session without saving it first, so I gave up. Below is the listener code I wrote.
public class MergeListener extends org.hibernate.ejb.event.EJB3MergeEventListener {

    @Override
    protected Serializable saveWithGeneratedId(Object entity, String entityName, Object anything,
            EventSource source, boolean requiresImmediateIdAccess) {
        Integer id = null;
        Field[] declaredFields = entity.getClass().getDeclaredFields();
        for (Field field : declaredFields) {
            Id annotation = field.getAnnotation(javax.persistence.Id.class);
            if (annotation != null) {
                try {
                    // Call the conventional getter for the @Id field, e.g. getId().
                    Method method = entity.getClass().getMethod("get"
                            + field.getName().substring(0, 1).toUpperCase()
                            + field.getName().substring(1));
                    Object invoke = method.invoke(entity);
                    id = (Integer) invoke;
                } catch (Exception ex) {
                    // Something failed (method not found, etc.); keep going anyway.
                }
                break;
            }
        }
        if (id == null || id == 0) {
            return super.saveWithGeneratedId(entity, entityName, anything, source,
                    requiresImmediateIdAccess);
        } else {
            return super.saveWithRequestedId(entity, id, entityName, anything, source);
        }
    }
}
I then had to add the listener to my persistence.xml
<property name="hibernate.ejb.event.merge" value="my.package.MergeListener"/>
It's not a good idea: sequences are used for surrogate keys, which are meaningless in the business sense but assure you there won't be duplicates, and thus no errors at insert time.
