Exposing JPA Entities to the Presentation Layer

It seems almost trivial to use JPA entities in the presentation layer.
XHTML:
<h:outputText value="#{catalog.item.name}:"/>
Controller:
@SessionScoped
@ManagedBean
public class Catalog {
@EJB
private Item item = null;
public Item getItem(){
...
}
....
}
JPA entity:
@Entity
class Item {
@Id
private Integer identifier;
@Column
private String name;
//getters and setters
}
Are there any restrictions for medium/large systems? Does it scale? Are there any gotchas?

JSF and Java EE experts such as Bauke Scholtz and Adam Bien recommend using Entities in the presentation layer as opposed to making some typically useless intermediary object.
I'm going to quote them below, but note that they sometimes use the term "DTO" (Data Transfer Object) to describe the intermediary object that some designs introduce between the Entities and presentation layer.
Adam Bien writes:
On the other hand considering a dedicated DTO layer as an investment, rarely pays off and often lead to over-engineered and bloated architectures. At the beginning the DTOs will be identical to your domain layer - without any impedance mismatch. If you are 'lucky', you will get few differences over the time. Especially with lightweight platforms like Java EE 6, the introduction of a DTO is a bottom-up, rather than top-down approach. ~ How evil are Data Transfer Objects
(Note that the above quoted article also suggests when it is appropriate to use an intermediary object: "It is perfectly valid to introduce a dedicated DTO to adapt an incompatible domain layer...")
Bien's description makes perfect sense to me. Having worked on projects where intermediary objects identical to the Entities were introduced from day one because "it's good design because it has low coupling", I can say it was a huge, comical waste of time. It's easy to waste time converting DTOs to Entities and back, and it takes good team discipline to make sure developers treat the DTOs and Entities according to some project policy, e.g. on which objects you perform validations, business logic, etc.
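To make that duplication concrete, here is a minimal sketch (the DTO and mapper names are invented for illustration) of the mirror-image class and mapping boilerplate such projects accumulate:
// Hypothetical DTO that mirrors the Item entity field for field.
public class ItemDto {
    private Integer identifier;
    private String name;
    public Integer getIdentifier() { return identifier; }
    public void setIdentifier(Integer identifier) { this.identifier = identifier; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}
// Hypothetical mapper; every change to the entity must be repeated here.
public final class ItemMapper {
    private ItemMapper() {}
    public static ItemDto toDto(Item item) {
        ItemDto dto = new ItemDto();
        dto.setIdentifier(item.getIdentifier());
        dto.setName(item.getName());
        return dto;
    }
}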
Bauke Scholtz writes:
However, for the average webapplication you don't need DTO's. You're already using JPA entities. You can just go ahead with using them in your JSF bean/view. This question already indicate that you don't need DTOs at all. You are not blocked by some specific business restrictions. You should not then search for design patterns so that you can apply it on your project. You should rather search for real problems in form of overcomplicated/unmaintainable code so that you can ask/find a suitable design pattern for it. ~ How to use DTO in JSF + Spring + Hibernate

JPA entities are plain old Java objects with decoupled states (DETACHED, MANAGED), so there is no problem using them in the presentation layer.
In most applications it makes no sense to copy the fields of a JPA entity into an additional presentation object that provides the same state.
You could use JPA entities in combination with interfaces, so that you're able to introduce additional transfer objects only when they are really needed, i.e. when no existing JPA entity matches the requirements of the target view.
For relations typed with interfaces, the targetEntity attribute of @OneToMany, @OneToOne or @ManyToOne is needed (e.g. @OneToMany(targetEntity = SomeJPAEntity.class)).
Here is some example code for an Item entity used in both the persistence and the presentation layer of a JavaFX application:
// Service definition for obtaining IItem objects.
public interface IItemService {
IItem getItemById(Integer id);
IItemWithAdditionalState getItemWithAdditionalStateById(Integer id);
}
// Definition of the item.
public interface IItem {
StringProperty nameProperty();
ObservableList<ISubItem> subItems();
List<ISubItem> getSubItems();
}
// Definition of the item with additional state.
public interface IItemWithAdditionalState extends IItem {
String getAdditionalState();
}
// Represents a sub item used in both the persistence and the presentation layer.
@Table(name = SubItem.TABLE_NAME)
@Entity
public class SubItem extends AbstractEntity implements ISubItem, Serializable {
public static final String TABLE_NAME = "SUB_ITEM";
}
// Represents an item used in both the persistence and the presentation layer.
@Access(AccessType.PROPERTY)
@Table(name = Item.TABLE_NAME)
@Entity
public class Item extends AbstractEntity implements IItem, Serializable {
public static final String TABLE_NAME = "ITEM";
private StringProperty nameProperty;
private String _name; // Shadow field for lazy creation of the JavaFX property.
private ObservableList<ISubItem> subItems = FXCollections.observableArrayList();
public StringProperty nameProperty() {
if (null == nameProperty) {
// Seed the property with the shadow value so no state is lost.
nameProperty = new SimpleStringProperty(this, "name", _name);
_name = null;
}
return nameProperty;
}
public String getName() {
return null == nameProperty ? _name : nameProperty.get();
}
public void setName(String name) {
if (null == nameProperty) {
_name = name;
} else {
nameProperty.set(name);
}
}
@Override
public ObservableList<ISubItem> subItems() {
return subItems;
}
@JoinColumn(name = "ID")
@OneToMany(targetEntity = SubItem.class)
@Override
public List<ISubItem> getSubItems() {
return subItems;
}
public void setSubItems(List<ISubItem> subItems) {
this.subItems.setAll(subItems);
}
}
// Newly added presentation transfer object matching the requirements of a special view.
public class ItemWithAdditionalState extends Item implements IItemWithAdditionalState {
private String additionalState;
@Override
public String getAdditionalState() {
return additionalState;
}
}

From a technical point of view, nothing stops you from querying your entities in your data layer and allowing them to flow all the way through the business tier and into your presentation layer. In fact, it's a plausible solution for very small scale applications or proofs of concept work.
The problem with that approach is that you now have a tightly coupled relationship between the domain entity classes and your views. Any change to your domain model immediately impacts your views, and any change in view requirements can immediately impact your domain model. This tight coupling isn't desirable in any complex application.

Related

Domain Driven Design: how to implement this type of rule

I am trying to practice DDD, but I have a doubt about the type of rule I wrote below:
The UserAggregator class, under the domain layer:
public class UserAggregator{
private UUID userId;
private String userName;
private String country;
//getter
boolean isUserNamePatternCorrect(){
return this.userName.startsWith("TOTO");
}
}
The repository interfaces, under the domain layer:
public interface UserRepository { //User table
public UserAggregator findUser(UUID id);
public void deleteUser(UserAggregator userToArchive);
}
public interface ArchiveUserRepository { //ArchiveUser table
public void archiveUser(UUID id);
}
UserRepository and ArchiveUserRepository are implemented in the infrastructure layer using Spring Data.
The service, under the infrastructure layer:
public class MyService {
@Autowired
UserRepository userRepo;
@Autowired
ArchiveUserRepository archiveUserRepo;
public void updateUserName(UUID userId){
UserAggregator userAgg = userRepo.findUser(userId);
if(Objects.nonNull(userAgg)){
boolean isCorrect = userAgg.isUserNamePatternCorrect();
if(!isCorrect){//-----------------------------1
userRepo.deleteUser(userAgg);//-----------2
archiveUserRepo.archiveUser(userId);//----3
}
}
//....
}
}
My management rule says: if the userName does not match the pattern, then archive the user into the archive table and remove it from the user table.
As you can see, the rule (from //1 to //3) is written in the service class and not in my aggregate!
Is this correct? Or how should I manage this?
First of all, it's not "Aggregator", it's "Aggregate", and second, don't actually put the "Aggregate" suffix in the name: it doesn't bring any value and goes against a business-driven language (the Ubiquitous Language).
my management rule says, if the userName does not match the pattern then archive the user into archive table and remove it from user table
Well, I think the problem is that this is not a business rule at all; it's a technical specification. Why would business experts care about database tables? DDD is helpful for modeling and describing rules in an infrastructure-agnostic way, so it won't be very helpful for modeling technical specifications.
There are a few ways to retain the technical specification while eliminating the infrastructure pollution though; for instance, the repository could make the storage decision based on an archived state:
class User {
boolean archived;
String userName;
void changeUserName(String userName) {
this.userName = userName;
if (!isUserNamePatternCorrect()) {
this.archived = true;
}
}
}
class UserRepository {
void save(User user) {
if (user.archived) ...
}
}
If you need to be more technically explicit, then you could perhaps capture the logic in a domain service and use a similar approach to what you have, or perhaps even dispatch a domain event such as UsernameChanged and have it handled by a UserArchivingPolicy.
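A minimal sketch of that event-based variant (the event and policy classes are invented for illustration; the event-dispatching wiring is left out):
// Domain event raised by the aggregate when the username changes.
public class UsernameChanged {
    public final UUID userId;
    public UsernameChanged(UUID userId) {
        this.userId = userId;
    }
}
// Policy that reacts to the event; how it is subscribed is an
// infrastructure concern (in-process bus, messaging, etc.).
public class UserArchivingPolicy {
    private final UserRepository userRepo;
    private final ArchiveUserRepository archiveUserRepo;
    public UserArchivingPolicy(UserRepository userRepo, ArchiveUserRepository archiveUserRepo) {
        this.userRepo = userRepo;
        this.archiveUserRepo = archiveUserRepo;
    }
    public void on(UsernameChanged event) {
        UserAggregator user = userRepo.findUser(event.userId);
        if (user != null && !user.isUserNamePatternCorrect()) {
            archiveUserRepo.archiveUser(event.userId);
            userRepo.deleteUser(user);
        }
    }
}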
As it seems to be an infrastructure rule about how to behave in that particular case of having an incorrect username, I don't see any problem with your code.
The responsibilities belong to the repositories IMHO, so I think that this approach is more than correct.

Axon: Create and Save another Aggregate in Saga after creation of an Aggregate

Update: The issue seems to be the id that I'm using twice, or in other words, the id from the Product entity that I want to reuse for the ProductInventory entity. As soon as I generate a new id for the ProductInventory entity, it seems to work fine. But I want to have the same id for both, since they're the same product.
I have 2 Services:
ProductManagementService (saves a Product entity with product details)
1.) For saving the Product entity, I implemented an EventHandler that listens to ProductCreatedEvent and saves the product to a MySQL database.
ProductInventoryService (saves a ProductInventory entity with stock quantities of the product for a certain productId defined in ProductManagementService)
2.) For saving the ProductInventory entity, I also implemented an EventHandler that listens to ProductInventoryCreatedEvent and saves the product inventory to a MySQL database.
What I want to do:
When a new Product is created in ProductManagementService, I want to create a ProductInventory entity in ProductInventoryService directly afterwards and save it to my MySQL table. The new ProductInventory entity shall have the same id as the Product entity.
To accomplish that, I created a Saga, which listens to a ProductCreatedEvent and sends a new CreateProductInventoryCommand. As soon as the CreateProductInventoryCommand triggers a ProductInventoryCreatedEvent, the EventHandler described in 2.) should catch it. Except it doesn't.
The only thing that gets saved is the Product entity, so in summary:
1.) works, 2.) doesn't. A ProductInventory aggregate does get created, but it doesn't get saved, since the saving process that is connected to an EventHandler isn't triggered.
I also get an Exception, though the application doesn't crash: Command 'com.myApplication.apicore.command.CreateProductInventoryCommand' resulted in org.axonframework.commandhandling.CommandExecutionException(OUT_OF_RANGE: [AXONIQ-2000] Invalid sequence number 0 for aggregate 3cd71e21-3720-403b-9182-130d61760117, expected 1)
My Saga:
@Saga
@ProcessingGroup("ProductCreationSaga")
public class ProductCreationSaga {
@Autowired
private transient CommandGateway commandGateway;
@StartSaga
@SagaEventHandler(associationProperty = "productId")
public void handle(ProductCreatedEvent event) {
System.out.println("ProductCreationSaga, SagaEventHandler, ProductCreatedEvent");
String productInventoryId = event.productId;
SagaLifecycle.associateWith("productInventoryId", productInventoryId);
//takes ID from product entity and sets all 3 stock attributes to zero
commandGateway.send(new CreateProductInventoryCommand(productInventoryId, 0, 0, 0));
}
@SagaEventHandler(associationProperty = "productInventoryId")
public void handle(ProductInventoryCreatedEvent event) {
System.out.println("ProductCreationSaga, SagaEventHandler, ProductInventoryCreatedEvent");
SagaLifecycle.end();
}
}
The EventHandler that works as intended and saves a Product Entity:
@Component
public class ProductPersistenceService {
@Autowired
private ProductEntityRepository productRepository;
//works as intended
@EventHandler
void on(ProductCreatedEvent event) {
System.out.println("ProductPersistenceService, EventHandler, ProductCreatedEvent");
ProductEntity entity = new ProductEntity(event.productId, event.productName, event.productDescription, event.productPrice);
productRepository.save(entity);
}
@EventHandler
void on(ProductNameChangedEvent event) {
System.out.println("ProductPersistenceService, EventHandler, ProductNameChangedEvent");
ProductEntity existingEntity = productRepository.findById(event.productId).get();
ProductEntity entity = new ProductEntity(event.productId, event.productName, existingEntity.getProductDescription(), existingEntity.getProductPrice());
productRepository.save(entity);
}
}
The EventHandler that should save a ProductInventory Entity, but doesn't:
@Component
public class ProductInventoryPersistenceService {
@Autowired
private ProductInventoryEntityRepository productInventoryRepository;
//doesn't work
@EventHandler
void on(ProductInventoryCreatedEvent event) {
System.out.println("ProductInventoryPersistenceService, EventHandler, ProductInventoryCreatedEvent");
ProductInventoryEntity entity = new ProductInventoryEntity(event.productInventoryId, event.physicalStock, event.reservedStock, event.availableStock);
System.out.println(entity.toString());
productInventoryRepository.save(entity);
}
}
Product-Aggregate:
@Aggregate
public class Product {
@AggregateIdentifier
private String productId;
private String productName;
private String productDescription;
private double productPrice;
public Product() {
}
@CommandHandler
public Product(CreateProductCommand command) {
System.out.println("Product, CommandHandler, CreateProductCommand");
AggregateLifecycle.apply(new ProductCreatedEvent(command.productId, command.productName, command.productDescription, command.productPrice));
}
@EventSourcingHandler
protected void on(ProductCreatedEvent event) {
System.out.println("Product, EventSourcingHandler, ProductCreatedEvent");
this.productId = event.productId;
this.productName = event.productName;
this.productDescription = event.productDescription;
this.productPrice = event.productPrice;
}
}
ProductInventory-Aggregate:
@Aggregate
public class ProductInventory {
@AggregateIdentifier
private String productInventoryId;
private int physicalStock;
private int reservedStock;
private int availableStock;
public ProductInventory() {
}
@CommandHandler
public ProductInventory(CreateProductInventoryCommand command) {
System.out.println("ProductInventory, CommandHandler, CreateProductInventoryCommand");
AggregateLifecycle.apply(new ProductInventoryCreatedEvent(command.productInventoryId, command.physicalStock, command.reservedStock, command.availableStock));
}
@EventSourcingHandler
protected void on(ProductInventoryCreatedEvent event) {
System.out.println("ProductInventory, EventSourcingHandler, ProductInventoryCreatedEvent");
this.productInventoryId = event.productInventoryId;
this.physicalStock = event.physicalStock;
this.reservedStock = event.reservedStock;
this.availableStock = event.availableStock;
}
}
What you are noticing right now is the uniqueness requirement of the [aggregate identifier, sequence number] pair within a given Event Store. This requirement is in place to safeguard you from potential concurrent access to the same aggregate instance, as all events for the same aggregate need to have a unique overall sequence number. This number is furthermore used to identify the order in which events need to be handled to guarantee the aggregate is recreated in the same order consistently.
So you might think this would end in a "sorry, there is no solution in place", but that is luckily not the case. There are roughly three things you can do in this setup:
Live with the fact that both aggregates will have unique identifiers.
Use distinct bounded contexts for both applications.
Change the way aggregate identifiers are written.
Option 1 is arguably the most pragmatic and is used by the majority. You have however noted that reuse of the identifier is necessary, so I am assuming you have already disregarded this as an option entirely. Regardless, I would try to revisit that decision, as defaulting to a fresh UUID for each new entity you create can save you from trouble in the future.
Option 2 reflects the Bounded Context notion pulled in by DDD. Letting the Product aggregate and ProductInventory aggregate reside in distinct contexts means you will have distinct event stores for both. Thus the uniqueness constraint would be kept, as no single store contains both aggregates' event streams. Whether this approach is feasible, however, depends on whether both aggregates actually belong to the same context. If they don't, you could for example use Axon Server's multi-context support to create two distinct applications.
Option 3 requires a little bit of insight into what Axon does. When it stores an event, it will invoke the toString() method on the @AggregateIdentifier annotated field within the aggregate. As your @AggregateIdentifier annotated field is a String, you are given the identifier as is. What you could do is use typed identifiers, for which the toString() method doesn't return only the identifier but appends the aggregate type to it. Doing so makes the stored aggregateIdentifier unique, whereas from the usage perspective it still seems like you are reusing the identifier.
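A minimal sketch of such a typed identifier (this class is an assumption for illustration, not a type Axon provides):
// Typed identifier: the raw UUID can be shared between Product and
// ProductInventory, while the stored string stays unique per aggregate type.
public class ProductInventoryId {
    private final String identifier;
    public ProductInventoryId(String identifier) {
        this.identifier = identifier;
    }
    @Override
    public String toString() {
        return "ProductInventory-" + identifier;
    }
}
The @AggregateIdentifier field (and the @TargetAggregateIdentifier on the commands) would then use this type instead of the raw String.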
Which of the three options suits your solution best is hard to deduce from my perspective. What I did do is order them from most to least reasonable as I see it.
Hoping this will help you further, @Jan!

Axon - Should the unique Id of my Aggregate be the same as that of my Entity?

I must be doing something wrong here.
I have a very simple Axon application that has two simple functions: create a person & change the name of the person.
So I have Person Entity:
@Entity
@Data
@NoArgsConstructor
public class Person {
@Id
@GeneratedValue(strategy = GenerationType.AUTO)
private Long id;
private String name;
}
And my PersonAggregate:
@Aggregate
@Data
@NoArgsConstructor
public class PersonAggregate {
@AggregateIdentifier
private UUID id;
private String name;
@CommandHandler
public PersonAggregate(CreatePersonCommand command) {
apply(new PersonCreatedEvent(
command.getId(),
command.getName()
));
}
@EventSourcingHandler
public void on(PersonCreatedEvent event) {
this.id = event.getId();
this.name = event.getName();
}
@CommandHandler
public void handle(ChangeNameCommand command) {
apply(
new NameChangedEvent(
command.getId(),
command.getName()
)
);
}
@EventSourcingHandler
public void on(NameChangedEvent event) {
this.name = event.getName();
}
}
And this is my ChangeNameCommand:
@Data
@NoArgsConstructor
@AllArgsConstructor
public class ChangeNameCommand {
@TargetAggregateIdentifier
private UUID id;
private String name;
}
And this is my NameChangedEvent:
@Value
public class NameChangedEvent {
private final UUID id;
private final String name;
}
And Repository:
@Repository
public interface PersonCommandRepository extends JpaRepository<Person, Long> {
}
The problem is that I am not sure how to structure my event handler:
@EventHandler
public void changeName(NameChangedEvent event) {
Optional<Person> opt = null; //this.personCommandRepository.findById(event.getId());
if (opt.isPresent()) {
Person person = opt.get();
person.setName(event.getName());
this.personCommandRepository.save(person); //Doesn't this defeat the purpose of Event Sourcing? A single point of truth?
}
}
And also, in my controller, I have quite a big problem, described in the comment:
@RequestMapping(value = "/rename", method = RequestMethod.POST)
public ResponseEntity<?> changeName(@RequestParam("id") Long id, @RequestParam("name") String name){
//1. Which Id do we pass in here? Aggregate or Entity?
//this.commandGateway.send(new ChangeNameCommand(id,name));
return new ResponseEntity<String>("Renamed", HttpStatus.OK);
}
I feel you are still mixing the ideas of the Command Model and the Query Model with one another.
The command model, which typically is the aggregate (or several), only handles requests of intent to change some state. These requests of intent, i.e. command messages, are thus the sole operations targeted towards your aggregate/command model. Vice versa, if you have the requirement to change something on a Person, that means you dispatch a command marking that intent towards the PersonAggregate.
The query model on the other hand is solely tasked with answering the questions targeted towards your application. If there is somehow a need to know the state of a Person, that means you dispatch a query towards a component capable of returning the Person entity you have shared.
Going back to "the problems" you described:
"Doesn't this defeat the purpose of Event Sourcing? A single point of truth?": Event Sourcing is a concept specific towards rehydrating the command model from the events itself has published. Thus, employing Event Sourcing just means you are making certain not only your query models are created through events, but also your command models. Ergo, you are enforcing a single source of truth by doing this. Using the events to update query models thus does not defeat the purpose of ES.
"Which Id do we pass in here? Aggregate or Entity?": When you are in doubt, try to figure out what you want to achieve. Are you requesting information? Then it's a query targeted towards your query model. Is the intent to change some state to a model? Then you dispatch a command towards the command model. In this exact scenario, you want to change the name of a Person. Hence, you dispatch a command towards the Aggregate/Command Model.
On AxonIQ's webpage there is an Architectural Concepts section describing all the main principles. There's also one about CQRS which might help.
It might help to read up on these; pretty sure a lot of it will answer a lot of the questions you have. :-)
Hope this helps!

Domain Driven Design Utility Classes and Pass Through Repositories In An Entity

The question relates to the injection of repositories and utility classes in an entity.
Background
The repository isn't directly used by the entity, but instead is passed through to a strategy which needs to enforce certain business rules.
The utility classes in question help with the validation of the model and image handling functionality. The validation utility applies JSR 303 validation to the model which checks that the entity is valid when created (custom JSR 303 annotations are used for certain business rules too). The image utility is called once the Order is saved (post persist) and uploads the image related to the order. This is done post persist as the ID of the Order in question is needed.
Question
All these dependencies injected into the entity don't feel right. The dilemma is whether to move them all (or some, and if so, which ones?) out of the domain object and elsewhere (and if so, what would be the right place?), or whether this is a case of analysis paralysis.
For example, the strategy ensures certain business rules are met. Should it be taken out because it needs the repository? I mean, it needs to run whenever the entity performs that update, so do I really want to lose that encapsulation?
public class OrderFactory {
// This could be autowired
private IOrderRepository orderRepository;
private ValidationUtils validationUtils;
private ImageUtils imageUtils;
public OrderFactory( IOrderRepository orderRepository, ValidationUtils validationUtils, ImageUtils imageUtils ) {
this.orderRepository = orderRepository;
this.validationUtils = validationUtils;
this.imageUtils = imageUtils;
}
public Order createOrderFromSpecialOrder( SpecialOrderDTO dto ) {
Order order = (dto.hasNoOrderId()) ? new Order() : orderRepository.findOrderById(dto.getOrderId());
order.updateFromSpecialOrder( orderRepository, validationUtils, imageUtils, dto.getSpecialOrderAttributes(), dto.getImage());
return order;
}
}
public class Order {
private IOrderRepository orderRepository;
private ValidationUtils validationUtils;
private ImageUtils imageUtils;
private byte[] image;
private Long id; // Assigned by the persistence provider.
protected Order() {}
protected void updateFromSpecialOrder( IOrderRepository orderRepository, ValidationUtils validationUtils, ImageUtils imageUtils, ISpecialOrderAttributes attrs, byte[] image ) {
this.orderRepository = orderRepository;
this.validationUtils = validationUtils;
this.imageUtils = imageUtils;
// This uses the orderRepository to do some voodoo based on the SpecialOrderAttributes
SpecialOrderStrategy specialOrderStrategy = new SpecialOrderStrategy( orderRepository );
specialOrderStrategy.handleAttributes( attrs );
this.image = image;
validationUtils.validate( this );
}
@PostPersist
public void postPersist() {
if( imageUtils != null ) imageUtils.handleImageUpload( image, id, ImageType.Order );
}
}
What you are looking for is a domain service that plays a coordinating role and depends on the aforementioned services/repositories. It's a classic mistake to try to stuff too much responsibility into an aggregate due to a twisted view of what invariants really are. Post-processing might benefit from a domain event to kick it off (either in or out of process; you've given me too little to go on here). You also might want to assign identifiers from the outside, or allocate one as part of the domain service's method execution. As for the validation, ditch the attributes and hand-roll the checks (if they really are about the entity). YMMV.
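As a rough sketch of that coordinating domain service (it reuses the question's types; the service name and the save(...)/getId() calls are assumptions about them):
public class SpecialOrderService {
    private final IOrderRepository orderRepository;
    private final ValidationUtils validationUtils;
    private final ImageUtils imageUtils;
    public SpecialOrderService(IOrderRepository orderRepository, ValidationUtils validationUtils, ImageUtils imageUtils) {
        this.orderRepository = orderRepository;
        this.validationUtils = validationUtils;
        this.imageUtils = imageUtils;
    }
    public Order updateFromSpecialOrder(SpecialOrderDTO dto) {
        Order order = dto.hasNoOrderId() ? new Order() : orderRepository.findOrderById(dto.getOrderId());
        // The strategy keeps encapsulating the business rule; only the
        // coordination and infrastructure concerns live in this service.
        new SpecialOrderStrategy(orderRepository).handleAttributes(dto.getSpecialOrderAttributes());
        validationUtils.validate(order);
        orderRepository.save(order);
        imageUtils.handleImageUpload(dto.getImage(), order.getId(), ImageType.Order);
        return order;
    }
}
This keeps the entity itself free of injected services; the post-persist image upload becomes an explicit step after saving.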

JPA: How do I specify the table name corresponding to a class at runtime?

(note: I'm quite familiar with Java, but not with Hibernate or JPA - yet :) )
I want to write an application which talks to a DB2/400 database through JPA, and I now have it working so that I can get all entries in a table and list them to System.out (I used MyEclipse to reverse engineer). I understand that the @Table annotation results in the name being statically compiled with the class, but I need to be able to work with a table whose name and schema are provided at runtime (their definitions are the same, but we have many of them).
Apparently this is not SO easy to do, and I'd appreciate a hint.
I have currently chosen Hibernate as the JPA provider, as it can handle that these database tables are not journalled.
So the question is: how can I, at runtime, tell the Hibernate implementation of JPA that class A corresponds to database table B?
(edit: an overridden tableName() in the Hibernate NamingStrategy may allow me to work around this intrinsic limitation, but I would still prefer a vendor-agnostic JPA solution)
You need to use the XML version of the configuration rather than the annotations. That way you can dynamically generate the XML at runtime.
Or maybe something like Dynamic JPA would interest you?
I think it's necessary to further clarify the issues with this problem.
The first question is: is the set of tables where an entity can be stored known? By this I mean you aren't dynamically creating tables at runtime and wanting to associate entities with them. This scenario calls for, say, three tables known at compile time. If that is the case, you can possibly use JPA inheritance. The OpenJPA documentation details the table-per-class inheritance strategy.
The advantage of this method is that it is pure JPA; a sketch follows below. It comes with limitations however: the tables have to be known, and you can't easily change which table a given object is stored in (if that's a requirement for you), just like objects in OO systems don't generally change class or type.
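A sketch of that table-per-class mapping with the tables known at compile time (the entity names are invented for illustration):
// One abstract root entity; each concrete subclass maps the same
// columns onto its own table via the TABLE_PER_CLASS strategy.
@Entity
@Inheritance(strategy = InheritanceType.TABLE_PER_CLASS)
public abstract class LedgerEntry {
    @Id
    private Long id; // assigned externally; generated-id support varies for TABLE_PER_CLASS
    private String payload;
}
@Entity
@Table(name = "LEDGER_2023")
public class Ledger2023Entry extends LedgerEntry {}
@Entity
@Table(name = "LEDGER_2024")
public class Ledger2024Entry extends LedgerEntry {}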
If you want this to be truly dynamic and to move entities between tables (essentially), then I'm not sure JPA is the right tool for you. An awful lot of magic goes into making JPA work, including load-time weaving (instrumentation) and usually one or more levels of caching. What's more, the entity manager needs to record changes and handle updates of managed objects. There is no easy facility that I know of to instruct the entity manager that a given entity should be stored in one table or another.
Such a move operation would implicitly require a delete from one table and insertion into another. If there are child entities this gets more difficult. Not impossible mind you but it's such an unusual corner case I'm not sure anyone would ever bother.
A lower-level SQL/JDBC framework such as iBATIS may be a better bet, as it will give you the control that you want.
I've also given thought to dynamically changing or assigning annotations at runtime. While I'm not yet sure whether that's even possible, even if it is, I'm not sure it would necessarily help. I can't imagine an entity manager or the caching not getting hopelessly confused by that kind of thing happening.
The other possibility I thought of was dynamically creating subclasses at runtime (as anonymous subclasses), but that still has the annotation problem, and again I'm not sure how you would add them to an existing persistence unit.
It might help if you provided some more detail on what you're doing and why. Whatever it is though, I'm leaning towards thinking you need to rethink what you're doing or how you're doing it or you need to pick a different persistence technology.
You may be able to specify the table name at load time via a custom ClassLoader that rewrites the @Table annotation on classes as they are loaded. At the moment, I am not 100% sure how you would ensure Hibernate loads its classes via this ClassLoader.
Classes are re-written using the ASM bytecode framework.
Warning: These classes are experimental.
public class TableClassLoader extends ClassLoader {
private final Map<String, String> tablesByClassName;
public TableClassLoader(Map<String, String> tablesByClassName) {
super();
this.tablesByClassName = tablesByClassName;
}
public TableClassLoader(Map<String, String> tablesByClassName, ClassLoader parent) {
super(parent);
this.tablesByClassName = tablesByClassName;
}
@Override
public Class<?> loadClass(String name) throws ClassNotFoundException {
if (tablesByClassName.containsKey(name)) {
String table = tablesByClassName.get(name);
return loadCustomizedClass(name, table);
} else {
return super.loadClass(name);
}
}
public Class<?> loadCustomizedClass(String className, String table) throws ClassNotFoundException {
try {
String resourceName = getResourceName(className);
InputStream inputStream = super.getResourceAsStream(resourceName);
ClassReader classReader = new ClassReader(inputStream);
ClassWriter classWriter = new ClassWriter(0);
classReader.accept(new TableClassVisitor(classWriter, table), 0);
byte[] classByteArray = classWriter.toByteArray();
return super.defineClass(className, classByteArray, 0, classByteArray.length);
} catch (IOException e) {
throw new RuntimeException(e);
}
}
private String getResourceName(String className) {
Type type = Type.getObjectType(className);
String internalName = type.getInternalName();
return internalName.replaceAll("\\.", "/") + ".class";
}
}
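A hypothetical usage sketch (per the warning above; in particular, forcing Hibernate through this loader via the context ClassLoader is an unverified assumption):
// Map fully qualified class names to the table each class should use.
Map<String, String> tables = new HashMap<String, String>();
tables.put("com.example.model.Invoice", "INVOICE_2024"); // hypothetical names
TableClassLoader loader = new TableClassLoader(tables, Thread.currentThread().getContextClassLoader());
// Swap the loader in before building the EntityManagerFactory / SessionFactory,
// since Hibernate commonly resolves classes through the context ClassLoader.
Thread.currentThread().setContextClassLoader(loader);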
The TableClassLoader relies on the TableClassVisitor to catch the visitAnnotation method calls:
public class TableClassVisitor extends ClassAdapter {
private static final String tableDesc = Type.getDescriptor(Table.class);
private final String table;
public TableClassVisitor(ClassVisitor visitor, String table) {
super(visitor);
this.table = table;
}
@Override
public AnnotationVisitor visitAnnotation(String desc, boolean visible) {
AnnotationVisitor annotationVisitor;
if (desc.equals(tableDesc)) {
annotationVisitor = new TableAnnotationVisitor(super.visitAnnotation(desc, visible), table);
} else {
annotationVisitor = super.visitAnnotation(desc, visible);
}
return annotationVisitor;
}
}
The TableAnnotationVisitor is ultimately responsible for changing the name field of the @Table annotation:
public class TableAnnotationVisitor extends AnnotationAdapter {
public final String table;
public TableAnnotationVisitor(AnnotationVisitor visitor, String table) {
super(visitor);
this.table = table;
}
@Override
public void visit(String name, Object value) {
if (name.equals("name")) {
super.visit(name, table);
} else {
super.visit(name, value);
}
}
}
Because I didn't happen to find an AnnotationAdapter class in ASM's library, here is one I made myself:
public class AnnotationAdapter implements AnnotationVisitor {
private final AnnotationVisitor visitor;
public AnnotationAdapter(AnnotationVisitor visitor) {
this.visitor = visitor;
}
@Override
public void visit(String name, Object value) {
visitor.visit(name, value);
}
@Override
public AnnotationVisitor visitAnnotation(String name, String desc) {
return visitor.visitAnnotation(name, desc);
}
@Override
public AnnotationVisitor visitArray(String name) {
return visitor.visitArray(name);
}
@Override
public void visitEnd() {
visitor.visitEnd();
}
@Override
public void visitEnum(String name, String desc, String value) {
visitor.visitEnum(name, desc, value);
}
}
It sounds to me like what you're after is overriding the JPA annotations with an orm.xml.
This will allow you to specify the annotations but then override them only where they change. I've done the same to override the schema in the @Table annotation, as it changes between my environments.
Using this approach you can also override the table name on individual entities.
[Updating this answer as it's not well documented and someone else may find it useful]
Here's my orm.xml file (note that I am only overriding the schema and leaving the other JPA & Hibernate annotations alone; however, changing the table here is totally possible. Also note that I am annotating the field, not the getter):
<?xml version="1.0" encoding="UTF-8"?>
<entity-mappings
xmlns="http://java.sun.com/xml/ns/persistence/orm"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://java.sun.com/xml/ns/persistence/orm orm_2_0.xsd"
version="1.0">
<package>models.jpa.eglobal</package>
<entity class="MyEntityOne" access="FIELD">
<table name="ENTITY_ONE" schema="MY_SCHEMA"/>
</entity>
<entity class="MyEntityTwo" access="FIELD">
<table name="ENTITY_TWO" schema="MY_SCHEMA"/>
</entity>
</entity-mappings>
As an alternative to the XML configuration, you may want to dynamically generate the Java class with the annotation using your preferred bytecode manipulation framework.
If you don't mind binding yourself to Hibernate, you could use some of the methods described at https://www.hibernate.org/171.html. You may find yourself using quite a few Hibernate annotations depending on the complexity of your data, as they go above and beyond the JPA spec, so it may be a small price to pay.
