How to implement domain strong typing using Java annotations

Problem
Working with anemic domain objects that have multiple foreign keys stored as Longs, I'm trying to protect against transposing the values with some kind of strongly typed domain types.
For example given the following product review class:
public class Review {
    private Long id;
    private Long productId;
    private int rating;
    private String title;
    private String body;
    private Long createdById;
    private Date createdAt;
    //...
}
I'd like to prevent myself from accidentally assigning the wrong foreign key to any of the Longs:
ReviewDto dto = // some DTO that was parsed from JSON for example
Review review = new Review();
review.productId = dto.getCreatedById(); // see transpose error here as typical type checking doesn't catch it
Solution(s)
Inheritance
An obvious solution is to implement a simple class hierarchy for domain types.
public class LongId {
    private Long id;
    //...
}
//...
public class ReviewId extends LongId {...}
public class ProductId extends LongId {...}
//...
public class Review {
    private ReviewId id;
    private ProductId productId;
    //...
}
//...
ReviewDto dto = // some DTO that was parsed from JSON for example
Review review = new Review();
review.productId = dto.getCreatedById(); // compile error as types don't match
The downside to this solution is that the actual value is wrapped inside the ID type, which makes marshalling it to/from JSON and in/out of the database laborious and requires me to write lots of custom serializers.
Generics
Another solution I've seen is to use generics, but it adds verbose syntax and still requires writing custom serializers just to get at a simple type.
public class LongId<T> {
    private Long id;
    //...
}
//...
public interface ReviewId {}
public interface ProductId {}
//...
public class Review {
    private LongId<ReviewId> id;
    private LongId<ProductId> productId;
    //...
}
//...
ReviewDto dto = // some DTO that was parsed from JSON for example
Review review = new Review();
review.productId = dto.getCreatedById(); // compile error as types don't match
Annotations?
Has anyone pulled this off with Java annotations? What was involved? The documentation landscape for Java annotations is sparse once you get past the hello-world examples. What I have found gives only a passing reference that the system is pluggable and that I'd have to write my own Maven plug-in to do the type checking. I'm very interested in this as a possible solution because I wouldn't expect to have to write custom serializers or other boilerplate: the types remain plain Java reference types that are well supported by most JSON and database libraries.
@Documented
@Retention(RetentionPolicy.RUNTIME)
@Target({ ElementType.FIELD, ElementType.LOCAL_VARIABLE, ElementType.PARAMETER })
@TypeQualifier(applicableTo = Long.class)
public @interface ReviewId {}
//...
public class Review {
    private @ReviewId Long id;
    private @ProductId Long productId;
    //...
}
//...
ReviewDto dto = // some DTO that was parsed from JSON for example
Review review = new Review();
review.productId = dto.getCreatedById(); // **magic** happens here so that both maven and IDE catch this at compile time

The Checker Framework is an annotation-based approach, as you requested.
The Checker Framework permits you to specify domain properties with type annotations, and it enforces those properties at compile time.
It is used at companies such as Amazon, Google, Uber, and many others.
You can use a pre-built type system, or you can build your own, which can be as little as four lines of code. You do not need to write a Maven plug-in.
For more information, see the Checker Framework manual.
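To make the "build your own" option concrete, here is a rough sketch of custom qualifiers for the Checker Framework's Subtyping Checker. The qualifier names, the UnknownId top qualifier, and the exact meta-annotations and compiler flags are assumptions to verify against the manual for your version:

import java.lang.annotation.*;
import org.checkerframework.framework.qual.DefaultQualifierInHierarchy;
import org.checkerframework.framework.qual.SubtypeOf;

// Hypothetical top of the qualifier hierarchy; unannotated Longs default to this.
@Documented
@Retention(RetentionPolicy.RUNTIME)
@Target({ ElementType.TYPE_USE, ElementType.TYPE_PARAMETER })
@DefaultQualifierInHierarchy
@SubtypeOf({})
@interface UnknownId {}

@Documented
@Retention(RetentionPolicy.RUNTIME)
@Target({ ElementType.TYPE_USE, ElementType.TYPE_PARAMETER })
@SubtypeOf(UnknownId.class)
@interface ProductId {}

@Documented
@Retention(RetentionPolicy.RUNTIME)
@Target({ ElementType.TYPE_USE, ElementType.TYPE_PARAMETER })
@SubtypeOf(UnknownId.class)
@interface CreatedById {}

Fields keep their plain Long type (private @ProductId Long productId;), so JSON and database libraries see ordinary reference types, and compiling with something like javac -processor org.checkerframework.common.subtyping.SubtypingChecker -Aquals=<your qualifier classes> turns review.productId = dto.getCreatedById() into a compile-time error, because @CreatedById is not a subtype of @ProductId.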

Related

One way mapping in Dozer using custom converter

Please note: while I would accept an XML-based solution if that's truly the only way to accomplish what I'm looking for, I would greatly prefer a solution using Dozer's Java API.
I am new to Dozer and am trying to figure out how to use its API. It seems to default to field-level mappings (if the field names match) and to allow for custom mappers and converters in the event that field-level mapping (based on field name) is either not possible or not logical for your application needs.
I have a situation where my app will take a DTO, say, ReportedIssue (an issue reported by a user and sent to my application over HTTP), and an Issue entity (a data entity that will be persisted to a MySQL DB).
Here are my two objects:
@Data
public class ReportedIssue {
    private String typeRefId;
    private String reporterRefId;
    private String info;
}

@Entity
@Table(name = "issues")
@Data
public class Issue {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    @Column(name = "issue_ref_id")
    private String refId;

    @Column(name = "issue_tracking_number")
    private String trackingNumber;

    @OneToOne(fetch = FetchType.EAGER, cascade = {CascadeType.PERSIST, CascadeType.MERGE})
    @JoinColumn(name = "issue_type_id", referencedColumnName = "issue_type_id")
    private IssueType type;

    @Column(name = "issue_reported_on")
    private Date reportedOn;

    @OneToOne(fetch = FetchType.EAGER, cascade = {CascadeType.PERSIST, CascadeType.MERGE})
    @JoinColumn(name = "issue_reporter_id", referencedColumnName = "account_id")
    private Account reporter;

    @Column(name = "issue_info")
    private String info;
}
So in the application frontend, a user can report an issue. The frontend sends a JSON version of a ReportedIssue to the backend, where that JSON is deserialized into a ReportedIssue DTO bean. Then I need Dozer to convert my ReportedIssue into an Issue entity that I can then easily save to my MySQL DB.
Here is my best attempt:
public class ReportedIssueConverter extends DozerConverter<ReportedIssue, Issue> {
    private AuthService authService;

    public ReportedIssueConverter(AuthService authService, Class<ReportedIssue> prototypeA, Class<Issue> prototypeB) {
        super(prototypeA, prototypeB);
        this.authService = authService;
    }

    public ReportedIssueConverter(Class<ReportedIssue> prototypeA, Class<Issue> prototypeB) {
        super(prototypeA, prototypeB);
    }

    @Override
    public Issue convertTo(ReportedIssue source, Issue destination) {
        Issue issue = new Issue();
        issue.setRefId(UUID.randomUUID().toString());
        issue.setType(IssueUtils.determineType(source));
        issue.setReportedOn(DateTimeUtils.nowInUTC());
        issue.setReporter(authService.currentUser());
        issue.setInfo(destination.getInfo());
        return issue;
    }

    @Override
    public ReportedIssue convertFrom(Issue source, ReportedIssue destination) {
        throw new UnsupportedOperationException("we currently don't map from issues to reported issues");
    }
}
Several concerns here. For one, is such a custom converter even necessary? Or is there a "better" (more standards compliant or using generally-accepted Dozer practices) way to use the Dozer API to perform this conversion? But mainly, this DozerConverter seems to be intended for bi-directional mapping use cases. Whereas, in my application, I will never have an Issue instance and need to map it back to a ReportedIssue DTO instance. So I only need one-way mapping from ReportedIssue --> Issue. Am I using Dozer correctly by throwing an UnsupportedOperationException or is there another interface or API trick I can use to only leverage the one-way mapping I need?
It could actually be done without a custom converter by adding custom getter methods to your DTO class corresponding to the fields in Issue. Dozer maps each field in the destination class by trying to invoke the getter method of the corresponding name on the source class.
public class ReportedIssue {
    // fields.......

    public String getRefId() {
        return UUID.randomUUID().toString();
    }

    public IssueType getType() {
        return IssueUtils.determineType(this);
    }

    // similarly create getters for other required fields.
}
But for the reporter field in Issue, you need an AuthService object. I would suggest writing a static method as below:
public static Issue getIssue(AuthService auth, ReportedIssue dto) {
    Issue issue = // map using Dozer
    issue.setReporter(auth.currentUser());
    return issue;
}
Gauntham's answer will work. Another option:
Implement a com.github.dozermapper.core.BeanFactory
Your custom BeanFactory can handle
Issue issue = new Issue();
issue.setRefId(UUID.randomUUID().toString());
issue.setReportedOn(DateTimeUtils.nowInUTC());
issue.setReporter(authService.currentUser());
Then depending on your preferences, this could also go into the bean factory
issue.setType(IssueUtils.determineType(source));
Or you could handle that separately in the mapping. Something would need to know how to call IssueUtils, so that is either 1) a custom converter or 2) a change to the DTO or entity to expose that functionality through a getter or setter.
Finally, this line would be handled in the Dozer Java API mapping
issue.setInfo(destination.getInfo());
Personally, I like Dozer's com.github.dozermapper.core.loader.api.BeanMappingBuilder where you can explicitly tell it how to map 2 beans, specify the bean factory to use and the custom converter for a specific field.
mapping(ReportedIssue.class, Issue.class, oneWay(), wildcard(true), beanFactory(IssueBeanFactory.class.getName()))
    .fields("this", "type", customConverter(IssueTypeConverter.class));
oneWay(), wildcard(boolean), and beanFactory(String) are found in Dozer's TypeMappingOptions, and customConverter(Class) is found in Dozer's FieldsMappingOptions.
oneWay() makes the mapping work only in the direction specified in the BeanMappingBuilder.
wildcard(true) tells Dozer to automatically map matching fields (this is default behavior).
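To make that concrete, here is a rough sketch of wiring such a BeanMappingBuilder into a Mapper with the Dozer 6.x API; IssueBeanFactory and IssueTypeConverter are the hypothetical classes named above, and the option class names may differ slightly between Dozer versions:

import com.github.dozermapper.core.DozerBeanMapperBuilder;
import com.github.dozermapper.core.Mapper;
import com.github.dozermapper.core.loader.api.BeanMappingBuilder;

import static com.github.dozermapper.core.loader.api.FieldsMappingOptions.customConverter;
import static com.github.dozermapper.core.loader.api.TypeMappingOptions.*;

public class IssueMappingConfig {

    public static Mapper buildMapper() {
        BeanMappingBuilder builder = new BeanMappingBuilder() {
            @Override
            protected void configure() {
                // one-way, wildcard mapping that delegates bean creation to the factory
                // and converts the whole source ("this") into the type field
                mapping(ReportedIssue.class, Issue.class,
                        oneWay(), wildcard(true),
                        beanFactory(IssueBeanFactory.class.getName()))
                    .fields("this", "type", customConverter(IssueTypeConverter.class));
            }
        };
        return DozerBeanMapperBuilder.create().withMappingBuilder(builder).build();
    }
}

// usage:
// Issue issue = IssueMappingConfig.buildMapper().map(reportedIssue, Issue.class);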

Is it possible to use a MongoRepository with not-fixed document structure? [duplicate]

MongoDB is a schemaless document database, but with Spring Data it's necessary to define an entity class and a repository class, like the following:
Entity class:
@Document(collection = "users")
public class User implements UserDetails {
    @Id private String userId;
    @NotNull @Indexed(unique = true) private String username;
    @NotNull private String password;
    @NotNull private String name;
    @NotNull private String email;
}
Repository class:
public interface UserRepository extends MongoRepository<User, String> {
    User findByUsername(String username);
}
Is there any way to use a map instead of a class in Spring Data MongoDB, so that the server can accept any dynamic JSON data and store it in BSON without any predefined class?
First, a few insightful links about schemaless data:
what does “schemaless” even mean anyway?
“schemaless” doesn't mean “schemafree”
Second... one may wonder if Spring, or Java, is the right solution for your problem - why not a more dynamic tool, such as Ruby, Python or the Mongo shell?
That being said, let's focus on the technical issue.
If your goal is only to store random data, you could basically just define your own controller and use the MongoDB Java Driver directly.
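For illustration, a minimal sketch of that approach with the synchronous MongoDB Java driver; the connection string, database and collection names are placeholders, and a real application would reuse a single MongoClient rather than opening one per call:

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import org.bson.Document;

public class RawJsonStore {

    // Store whatever JSON the controller received, with no predefined class.
    public void save(String jsonFromRequest) {
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoCollection<Document> users = client.getDatabase("mydb").getCollection("users");
            Document document = Document.parse(jsonFromRequest); // any JSON shape becomes a BSON document
            users.insertOne(document);
        }
    }
}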
If you really insist on having no predefined schema for your domain object class, use this:
@Document(collection = "users")
public class User implements UserDetails {
    @Id
    private String id;
    private Map<String, Object> schemalessData;
    // getters/setters omitted
}
Basically it gives you a container in which you can put whatever you want, but watch out for serialization/deserialization issues (this may become tricky if you have ObjectIds and DBRefs in your nested documents). Also, updating data may become nasty if your data hierarchy becomes too complex.
Still, at some point, you'll realize your data indeed has a schema that can be pinpointed and put into well-defined POJOs.
Update
A late update, since people still happen to read this post in 2020: the Jackson annotations JsonAnyGetter and JsonAnySetter let you hide the root of the schemaless-data container, so your unknown fields can be sent as top-level fields in your payload. They will still be stored nested in your MongoDB document, but will appear as top-level fields when the resource is requested through Spring.
@Document(collection = "users")
public class User implements UserDetails {
    @Id
    private String id;

    // add all other expected fields (getters/setters omitted)
    private String foo;
    private String bar;

    // a container for all unexpected fields
    private Map<String, Object> schemalessData;

    @JsonAnySetter
    public void add(String key, Object value) {
        if (null == schemalessData) {
            schemalessData = new HashMap<>();
        }
        schemalessData.put(key, value);
    }

    @JsonAnyGetter
    public Map<String, Object> get() {
        return schemalessData;
    }

    // getters/setters omitted
}

Is there a way (e.g. an Eclipse plugin) to automatically generate a DTO from an Entity (JPA)?

I would like a straightforward DTO generation tool that would either
Generate it on the fly (e.g. with cglib - create the class and DTO object at runtime)
Or be an Eclipse plugin that takes the Entity and generates a DTO (the user specifies which parts of the object graph to include; for the parts not included, foreign keys are included instead of related entities, etc.)
E.g. take something like this
@Entity
@Table(name="my_entity")
public class MyEntity {
    @Id @GeneratedValue(strategy=GenerationType.AUTO)
    private Long id;
    private String name;
    @ManyToOne
    private RelatedEntity related;

    public RelatedEntity getRelated(){
        return related;
    }
    ...
And generate something like this:
@Entity
@Table(name="my_entity")
public class MyEntity implements MyEntityDTO {
    @Id @GeneratedValue(strategy=GenerationType.AUTO)
    private Long id;
    private String name;
    @ManyToOne
    private RelatedEntity related;

    // overrides the MyEntityDTO interface; it's allowed to narrow the return type
    public RelatedEntity getRelated(){
        return related;
    }
    ...
    // implements the respective MyEntityDTO methods
    public Long getRelatedId(){ return related.getId(); }
And DTO interface(s):
public interface MyEntityDTO {
    public Long getId();
    public String getName();
    public Long getRelatedId();
    public RelatedEntityDTO getRelated(); // RelatedEntity implements RelatedEntityDTO
    ...
}
public interface RelatedEntityDTO {
    ...
}
If we don't want to include children in the graph, we remove them from the DTO interface:
public interface MyEntityDTO {
    public Long getId();
    public String getName();
    public Long getRelatedId();
    ...
}
I'm sure there is some Eclipse plugin for it, and if not, I challenge someone to write one, or to explain why what I want is not helpful (and provide an alternative suggestion).
Hibernate Tools can probably do this: http://hibernate.org/subprojects/tools.html
Telosys Tools can generate both: JPA entities and DTOs.
Have a look at this tutorial: https://sites.google.com/site/telosystutorial/springmvc-jpa-springdatajpa
It generates a full Spring MVC CRUD application with JPA.
Architecture: https://sites.google.com/site/telosystutorial/springmvc-jpa-springdatajpa/presentation/architecture
The Entity/DTO mapper is also generated (it uses "org.modelmapper").
The templates are customizable.
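As a rough illustration of what such a generated mapper boils down to (MyEntityDto here is a hypothetical concrete class with property names matching the entity, not the interface above):

import org.modelmapper.ModelMapper;

public class MyEntityDtoMapper {

    private final ModelMapper modelMapper = new ModelMapper();

    // Copies matching properties (id, name, ...) from the entity onto a new DTO instance.
    public MyEntityDto toDto(MyEntity entity) {
        return modelMapper.map(entity, MyEntityDto.class);
    }
}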
Try looking at:
https://github.com/nikelin/spring-data-generation-kit
But it's only suitable for you if your project is built with Maven.

Storing Objects in columns using Hibernate JPA

Is it possible to store something like the following using only one table? Right now, what Hibernate will do is create two tables, one for families and one for people. I would like the familyMembers object to be serialized into a column in the database.
@Entity(name = "family")
class Family{
    private final List<Person> familyMembers;
}

class Person{
    String firstName, lastName;
    int age;
}
This is a horrible design and I'm really not recommending it (you should just create another table), but it is possible.
First, you'll need a byte[] attribute to hold a serialized version of the list of persons, which will be stored in a BLOB in the database. So annotate its getter with @Lob (I would make the getter and setter private so as not to expose them). Then, expose a "fake" getter and setter that return or set a List<Person> from the byte[]. I'm using SerializationUtils from Commons Lang in the sample below (provide your own helper class if you don't want to import this library) to serialize/deserialize on the fly to/from the byte[]. Don't forget to mark the "fake" getter with @Transient or Hibernate will try to create a field for it (and fail because it won't be able to determine the type of the List).
@Entity(name = "family")
class Family implements Serializable {
    // ...
    private byte[] familyMembersAsByteArray;

    public Family() {}

    @Lob
    @Column(name = "members", length = Integer.MAX_VALUE - 1)
    private byte[] getFamilyMembersAsByteArray() { // not exposed
        return familyMembersAsByteArray;
    }

    private void setFamilyMembersAsByteArray(byte[] familyMembersAsByteArray) { // not exposed
        this.familyMembersAsByteArray = familyMembersAsByteArray;
    }

    @Transient
    public List<Person> getFamilyMembers() {
        return (List<Person>) SerializationUtils.deserialize(familyMembersAsByteArray);
    }

    public void setFamilyMembers(List<Person> familyMembers) {
        this.familyMembersAsByteArray = SerializationUtils.serialize((Serializable) familyMembers);
    }
}
Don't forget to make the Person class Serializable and to add a real serialVersionUID (I'm just showing a default here):
public class Person implements Serializable {
    private static final long serialVersionUID = 1L;
    // ...
    private String firstName, lastName;
    private int age;
}
But, let me insist, this is a horrible design and it will be very fragile (changing Person might require you to "migrate" the content of the BLOB to avoid deserialization issues, and this will become painful). You should really reconsider this idea and use another table for the Person instead (otherwise I don't see why you're using a database).
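For comparison, here is a sketch of the separate-table alternative using JPA 2.0's @ElementCollection; the table and column names are made up:

import java.util.ArrayList;
import java.util.List;
import javax.persistence.*;

@Entity(name = "family")
public class Family {
    @Id
    @GeneratedValue
    private Long id;

    // Persons get their own table, joined back to the family row.
    @ElementCollection
    @CollectionTable(name = "family_members", joinColumns = @JoinColumn(name = "family_id"))
    private List<Person> familyMembers = new ArrayList<>();

    // getters/setters omitted
}

@Embeddable
class Person {
    private String firstName, lastName;
    private int age;
}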
@Type(type = "serializable")
private List<Person> familyMembers;
If you can't use Hibernate annotations, try this:
@Lob
private Serializable familyMembers;

public List<Person> getFamilyMembers(){
    return (List) familyMembers;
}

public void setFamilyMembers(List<Person> family){
    familyMembers = family;
}
Annotate the property with @Column and define the type to be ArrayList, not just List. And make Person implement Serializable.
But you should do this only if your motives are very clear, because this is the correct solution in some very rare cases. As Pascal noted, if you ever have to change Person you'll have headaches.
You can create a pseudo-property (getter and setter) which accepts/returns the serialized form, and annotate the familyMembers field with @Transient. This would also require annotating the getters, not the fields, for all other properties.

Splitting one DB association into several collections in Hibernate

I am trying to model the following situation: there is a cash transfer (I mean a car that carries money) that has required amounts of each currency, and also actual amounts for each currency. It seems pointless to me to create two separate classes, one for the required amount and another for the actual amount. So the implementation would look like this:
@Entity
public class CashTransferCurrencyAmount {
    // id, version and so on
    @Column(length = 3)
    private String currencyCode;

    @Basic
    private BigDecimal amount;

    @ManyToOne
    private CashTransfer cashTransfer;
}

@Entity
public class CashTransfer {
    // id, version and so on
    @OneToMany(mappedBy = "cashTransfer")
    private Set<CashTransferCurrencyAmount> requiredCurrencyAmountSet = new HashSet<CashTransferCurrencyAmount>();

    @OneToMany(mappedBy = "cashTransfer")
    private Set<CashTransferCurrencyAmount> actualCurrencyAmountSet = new HashSet<CashTransferCurrencyAmount>();
}
But how is a CashTransferCurrencyAmount instance to know to which collection it belongs? I have two ideas:
1 - add a discriminator field to CashTransferCurrencyAmount:
public enum RequestType {
    ACTUAL,
    REQUIRED
}

@Basic
@Enumerated(EnumType.STRING)
private RequestType requestType;
and add @Where annotations to the collections in CashTransfer, as sketched below after option 2. This is preferable for me.
2 - create two join tables, one for mapping requested amounts and one for mapping actual amounts. I dislike this one as I don't want too many tables in my DB.
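For reference, option 1 might look roughly like this, assuming Hibernate's @Where annotation and a request_type column produced for the @Enumerated field above (the actual column name depends on your naming strategy):

import java.util.HashSet;
import java.util.Set;
import javax.persistence.Entity;
import javax.persistence.OneToMany;
import org.hibernate.annotations.Where;

@Entity
public class CashTransfer {
    // id, version and so on

    @OneToMany(mappedBy = "cashTransfer")
    @Where(clause = "request_type = 'REQUIRED'")
    private Set<CashTransferCurrencyAmount> requiredCurrencyAmountSet = new HashSet<>();

    @OneToMany(mappedBy = "cashTransfer")
    @Where(clause = "request_type = 'ACTUAL'")
    private Set<CashTransferCurrencyAmount> actualCurrencyAmountSet = new HashSet<>();
}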
Are there any other ways to achieve this? Is this approach correct?
And please don't tell me to put both requested and actual amounts in one entity. The real case is more complicated; each CashTransferCurrencyAmount has its own collections, so it can't be solved that way.
EDIT
As for the requests for the complete story - there used to be two values in CashTransferCurrencyAmount, required (I think it should be 'requested') and actual, but now each amount has its own collection describing how that amount is split into denominations. So I need a collection of amounts, each one having a collection of denominations. The types of CurrencyAmount and CurrencyDenomination seem to be the same for the requested ones and for the actual ones.
Since you want CashTransferCurrencyAmount instance to know which collection it belongs to, I assume you want to have some logic based on that. The way I would model your situation would be using inheritance.
You're saying "it seems to me pointless to create two separate classes"; I would, however, try to convince you that you should. You could use the "Single Table" inheritance type, so that you don't introduce additional tables in your DB, which is what you're trying to accomplish.
My shot would look something like:
@Entity
@Inheritance(strategy = InheritanceType.SINGLE_TABLE)
@DiscriminatorColumn(name = "request_type", discriminatorType = DiscriminatorType.STRING)
public abstract class CashTransferCurrencyAmount {
    // id, version and so on
    @Column(length = 3)
    private String currencyCode;

    @Basic
    private BigDecimal amount;

    @ManyToOne
    private CashTransfer cashTransfer;
}

@Entity
@DiscriminatorValue("REQUIRED")
public class CashTransferCurrencyAmountRequired extends CashTransferCurrencyAmount {
    // required amount specific stuff here
}

@Entity
@DiscriminatorValue("ACTUAL")
public class CashTransferCurrencyAmountActual extends CashTransferCurrencyAmount {
    // actual amount specific stuff here
}

@Entity
public class CashTransfer {
    // id, version and so on
    @OneToMany(mappedBy = "cashTransfer")
    private Set<CashTransferCurrencyAmountRequired> requiredCurrencyAmountSet = new HashSet<>();

    @OneToMany(mappedBy = "cashTransfer")
    private Set<CashTransferCurrencyAmountActual> actualCurrencyAmountSet = new HashSet<>();
}
