I'm currently working on a new project with the following requirement:
Several database schemas hold the same tables with identical structure (in short: one entity for multiple schemas).
Is it possible to switch between those schemas in code? What I want to achieve is:
The user selects schema B and updates some entities in it. After that he does an insert in schema A, and so on. I know I could do this with plain JDBC by providing the schema in the statements, but I'd like to avoid that if I can.
Maybe some other Java ORM can do this? I'm only familiar with JPA / Hibernate.
Regards
You can use separate SessionFactorys or EntityManagerFactorys, one for each schema.
Since you said that the user selects schema A or B, you can use something like this:
public enum Schema {
    A, B
}

public class EntityDaoImpl {

    // Create and populate the map at DAO creation time (Spring etc.).
    private Map<Schema, SessionFactory> sessionFactoryBySchema = ...;

    private Session getSession(Schema schema) {
        SessionFactory sessionFactory = sessionFactoryBySchema.get(schema);
        return sessionFactory.getCurrentSession(); // ... or whatever
    }

    public void saveEntity(Schema schema, Entity entity) {
        getSession(schema).save(entity);
    }
}
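For completeness, here is one way the map could be populated. This is only a sketch: it assumes plain Hibernate bootstrapping via hibernate.cfg.xml, that each factory can select its schema through hibernate.default_schema, and that the schema names match the enum constants; adjust it to your actual configuration style (Spring beans, persistence.xml, etc.).

import java.util.EnumMap;
import java.util.Map;
import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

// Sketch: build one SessionFactory per schema, keyed by the enum.
public static Map<Schema, SessionFactory> buildSessionFactories() {
    Map<Schema, SessionFactory> factories = new EnumMap<>(Schema.class);
    for (Schema schema : Schema.values()) {
        Configuration cfg = new Configuration().configure(); // reads hibernate.cfg.xml
        // Assumption: schema names are the lower-cased enum names.
        cfg.setProperty("hibernate.default_schema", schema.name().toLowerCase());
        factories.put(schema, cfg.buildSessionFactory());
    }
    return factories;
}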
We are doing a data migration from one database to another using Hibernate and Spring Batch. The example below is slightly disguised.
For this, we are using the standard processing pipeline:
return jobBuilderFactory.get("migrateAll")
        .incrementer(new RunIdIncrementer())
        .listener(listener)
        .flow(DConfiguration.migrateD())
        .end()
        .build();
and migrateD consists of three steps:
@Bean(name = "migrateDsStep")
public Step migrateDs() {
    return stepBuilderFactory.get("migrateDs")
            .<org.h2.D, org.mssql.D>chunk(100)
            .reader(dReader())
            .processor(dItemProcessor)
            .writer(dWriter())
            .listener(chunkLogger)
            .build();
}
Now assume that this table has a many-to-many relationship to another table. How can I persist that? I basically have one JPA entity class per entity and fill them in the processor, which does the actual migration from the old database objects to the new ones.
@Component
@Import({mssqldConfiguration.class, H2dConfiguration.class})
public class ClassificationItemProcessor implements ItemProcessor<org.h2.D, org.mssql.D> {

    public ClassificationItemProcessor() {
        super();
    }

    public org.mssql.D process(org.h2.D a) throws Exception {
        org.mssql.D di = new org.mssql.D();
        di.setA(a.getA());
        di.setB(a.getB());
        // Fetching the related objects would be possible e.g. via a repository,
        // but this does not work:
        // Set<E> es = eRepository.findById(a.getEs());
        // di.setEs(es);
        ...
        // How to model a m:n?
        return di;
    }
}
So I could basically fetch the related objects via another database call (a repository) and add them to di. But when I do that, I either run into LazyInitializationExceptions or, when it does succeed, the data in the intermediate (join) tables sometimes has not been filled in.
What is the best practice for modelling this?
This is not a Spring Batch issue, it is rather a Hibernate mapping issue. As far as Spring Batch is concerned, your input items are of type org.h2.D and your output items are of type org.mssql.D. It is up to you to define what an item is and how to "enrich" it in your item processor.
You need to make sure that the items received by the writer are completely "filled in", meaning that you have already set any other entities on them (be it a single entity or a set of entities, such as di.setEs(es) in your example). If this leads to lazy initialization exceptions, you need to change your model to be eagerly initialized instead, because Spring Batch cannot help at that level.
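To make that concrete, here is a sketch of an enriched processor. It is not the poster's actual code: eRepository.findAllWithDetailsByIdIn(...) stands for a hypothetical Spring Data method backed by a fetch-join query, so the related set is fully loaded before the item leaves the processor.

public org.mssql.D process(org.h2.D a) throws Exception {
    org.mssql.D di = new org.mssql.D();
    di.setA(a.getA());
    di.setB(a.getB());
    // Hypothetical repository method backed by a fetch join, e.g.
    // @Query("select e from E e join fetch e.details where e.id in :ids"),
    // so the collection is initialized eagerly and survives detachment.
    Set<E> es = eRepository.findAllWithDetailsByIdIn(a.getEIds());
    di.setEs(es); // the many-to-many side is complete before the writer runs
    return di;
}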
I use Spring Data and Hibernate. I have an entity (TestEntity). I made a custom Hibernate type that deserializes one String field into two columns.
If I persist an entity and then change it, everything works fine and Hibernate sends an update query (my type does its work, and the update query to the DB "splits" my old column into two new ones).
But my goal is to run this kind of migration for every existing record. I can't use an ordinary DB migration because there is some logic in my custom type.
I want to make something like this:
// here I load all my entities
List<TestEntity> entities = entityRepository.findAll();
for (TestEntity entity : entities) {
    // This does nothing: when Hibernate merges the unchanged entity,
    // it sees that nothing changed and sends no update query.
    entityRepository.save(entity);
}
But I want Hibernate to send the update query even though nothing has changed. Moreover, I want this behaviour in one place only (for example, a controller created just to execute this DB update). What is the solution to my problem? Is there an approach to solving it?
I don't understand why you need this, but you have to detach the entity from the session for it to work.
As far as I understand, you need the EntityManager:
@PersistenceContext
private EntityManager entityManager;
...
List<TestEntity> entities = entityRepository.findAll();
for (TestEntity entity : entities) {
    entityManager.detach(entity);
    entityRepository.save(entity); // or entityManager.unwrap(Session.class).saveOrUpdate(entity);
}
See Spring JpaRepository - Detach and Attach entity
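If you want the whole thing in the single place you mentioned, a sketch of a transactional service wrapping that loop could look like this (the service name and TestEntityRepository are assumptions, not from your code):

@Service
public class ForcedUpdateService {

    @PersistenceContext
    private EntityManager entityManager;

    @Autowired
    private TestEntityRepository entityRepository; // assumed Spring Data repository

    @Transactional
    public void rewriteAllRows() {
        for (TestEntity entity : entityRepository.findAll()) {
            entityManager.detach(entity);
            // Saving a detached entity merges it back, which issues the UPDATE
            // and lets the custom type split the old column into the new ones.
            entityRepository.save(entity);
        }
    }
}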
So I'm trying, for the first time in a not-so-complex project, to implement Domain-Driven Design by separating all my code into application, domain, infrastructure and interfaces packages.
I also went with the full separation of the JPA entities from the domain models that hold my business logic as rich models, and used the Builder pattern to instantiate them. This approach has given me a headache, and I can't figure out whether I'm doing it all wrong when using JPA + ORM and Spring Data with DDD.
Process explanation
The application is a REST API consumer (without any user interaction) that processes a fairly big amount of data resources daily through scheduled tasks and stores or updates them in MySQL. I'm using RestTemplate to fetch the JSON responses and convert them into domain objects, and from there I apply any business logic within the domain itself, e.g. validation, events, etc.
From what I have read, the aggregate root object should have an identity for its whole lifecycle, and it should be unique. I have used the id of the REST API object because it is already something I use to identify and track it in my business domain. I have also created a property for the technical id, so that when I convert entities to domain objects they hold a reference for the update process.
When I need to persist the domain objects to the data source (MySQL) for the first time, I convert them into entity objects and persist them using the save() method. So far so good.
Now, when I need to update those records in the data source, I first fetch them from the data source as a List of Employees and convert the entity objects to domain objects, and then I fetch the list of Employees from the REST API as domain models. At this point I have two lists of the same domain type, List<Employee>. I iterate over them using streams, and whenever two objects are not equal() I add the Employee that needs updating to a third list. By then I have already passed the technical id to the domain objects in the third list, so Hibernate can identify and update the records that already exist.
All fairly simple stuff, up until I use the saveAll() method to update the records.
Questions
I always see Hibernate using INSERT instead of updating the list of records. So, if I'm correct, the Hibernate session is not recognising the objects I'm throwing into it, because I detached them when I converted them to domain objects?
Does anyone have a better idea how I can implement this differently, or fix this problem?
Or should I stop using this two-object approach and go back to using them as rich entity models?
Simple classes to explain it with code
EmployeeDO.java
@Entity
@Table(name = "employees")
public class EmployeeDO implements Serializable {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    private String name;

    public EmployeeDO() {}

    ...omitted getters/setters
}
Employee.java
public class Employee {

    private Long persistId;
    private Long employeeId;
    private String name;

    private Employee() {}

    ...omitted getters and Builder
}
EmployeeConverter.java
public class EmployeeConverter {

    public static EmployeeDO serialize(Employee employee) {
        EmployeeDO target = new EmployeeDO();
        if (employee.getPersistId() != null) {
            target.setId(employee.getPersistId());
        }
        target.setName(employee.getName());
        return target;
    }

    public static Employee deserialize(EmployeeDO employee) {
        return new Employee.Builder(employee.getEmployeeId())
                .withPersistId(employee.getId()) // <-- technical id setter
                .withName(employee.getName())
                .build();
    }
}
EmployeeRepository.java
@Component
public class EmployeeRepositoryImpl implements EmployeeRepository {

    @Autowired
    EmployeeJpaRepository db;

    @Override
    public List<Employee> findAll() {
        return db.findAll().stream()
                .map(EmployeeConverter::deserialize)
                .collect(Collectors.toList());
    }

    @Override
    public void saveAll(List<Employee> employees) {
        db.saveAll(employees.stream()
                .map(EmployeeConverter::serialize)
                .collect(Collectors.toList()));
    }
}
EmployeeJpaRepository.java
@Repository
public interface EmployeeJpaRepository extends JpaRepository<EmployeeDO, Long> {
}
I use the same approach on my project: two different models for the domain and the persistence.
First, I would suggest that you don't use the converter approach but the Memento pattern instead. Your domain entity exports a memento object and can be restored from one. Yes, the domain gains two functions that aren't related to the domain itself (they exist just to satisfy a non-functional requirement), but, on the other hand, you avoid exposing functions, getters and constructors that the domain business logic never uses.
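A minimal sketch of that idea, with illustrative names (the record-based memento assumes a recent Java version):

public class Employee {

    private Long persistId;
    private Long employeeId;
    private String name;

    private Employee() {}

    // The memento: a dumb, immutable snapshot of the entity's state,
    // used only by the persistence layer.
    public record Memento(Long persistId, Long employeeId, String name) {}

    public Memento toMemento() {
        return new Memento(persistId, employeeId, name);
    }

    public static Employee fromMemento(Memento memento) {
        Employee e = new Employee();
        e.persistId = memento.persistId();
        e.employeeId = memento.employeeId();
        e.name = memento.name();
        return e;
    }
}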
For the persistence part, I don't use JPA, exactly for this reason: you have to write a lot of code to reload, update and persist the entities correctly. I write the SQL directly: I can write and test it quickly, and once it works I'm sure it does what I want. With the memento object I have exactly what I need for the insert/update query, and I spare myself a lot of JPA headaches around handling complex table structures.
Anyway, if you want to use JPA, the only solution is to:
load the persistence entities and transform them into domain entities
update the domain entities according to the changes that you have to do in your domain
save the domain entities, which means:
reload the persistence entities
update them, or create new ones where needed, with the changes that come from the updated domain entities
save the persistence entities
I've tried a mixed solution, where the domain entities are extended by the persistence ones (a bit complex to do). A lot of care has to be taken to prevent the domain model from adapting to JPA restrictions that come from the persistence model.
Here is an interesting read about splitting the two models.
Finally, my suggestion is to consider how complex the domain is and use the simplest solution for the problem:
Is it big, with a lot of complex behaviour? Is it expected to grow into something big? Use two models, domain and persistence, and manage the persistence directly with SQL; it avoids a lot of chaos in the read/update/save phases.
Is it simple? Then, first of all, should you even use the DDD approach? If the answer is really yes, I would let the JPA annotations slip inside the domain. Yes, it's not pure DDD, but we live in the real world, and the time needed to do something simple the pure way should not be orders of magnitude bigger than the time needed to do it with some compromises. And, on the other hand, you can put all this mapping into an XML file in the infrastructure layer, avoiding cluttering the domain with it, as is done in the Spring DDD sample here.
When you want to update an existing object, you first have to load it through entityManager.find() and apply your changes to that loaded object, or use entityManager.merge(), since you are working with detached entities.
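In code, the two options look roughly like this (a sketch reusing the classes from the question; both must run inside a transaction):

// Option 1: load the managed entity and mutate it; dirty checking
// issues the UPDATE at commit time.
EmployeeDO managed = entityManager.find(EmployeeDO.class, employee.getPersistId());
managed.setName(employee.getName());

// Option 2: merge the detached object; Hibernate selects the existing row
// and copies the state over, producing an UPDATE instead of an INSERT.
EmployeeDO detached = EmployeeConverter.serialize(employee);
entityManager.merge(detached);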
Anyway, modelling rich domain models based on JPA is the perfect use case for Blaze-Persistence Entity Views.
Blaze-Persistence is a query builder on top of JPA which supports many of the advanced DBMS features on top of the JPA model. I created Entity Views on top of it to allow easy mapping between JPA models and custom interface-defined models, something like Spring Data Projections on steroids. The idea is that you define your target structure the way you like and map attributes (getters) via JPQL expressions to the entity model. Since the attribute name is used as the default mapping, you mostly don't need explicit mappings, as 80% of the use cases are DTOs that are a subset of the entity model.
The interesting point here is that entity views can also be updatable and support automatic translation back to the entity/DB model.
A mapping for your model could look as simple as the following:
@EntityView(EmployeeDO.class)
@UpdatableEntityView
interface Employee {
    @IdMapping("persistId")
    Long getId();
    Long getEmployeeId();
    String getName();
    void setName(String name);
}
Querying is a matter of applying the entity view to a query, the simplest being just a query by id.
Employee dto = entityViewManager.find(entityManager, Employee.class, id);
The Spring Data integration allows you to use it almost like Spring Data Projections (https://persistence.blazebit.com/documentation/entity-view/manual/en_US/index.html#spring-data-features), and it can also be saved back. Here is a sample repository:
@Repository
interface EmployeeRepository {
    Employee findOne(Long id);
    void save(Employee e);
}
It will only fetch the mappings that you tell it to fetch and also only update the state that you make updatable through setters.
With the Jackson integration you can deserialize your payload onto a loaded entity view, or you can avoid loading altogether and use the Spring MVC integration to capture just the state that was transferred and flush that. It could look like the following:
@RequestMapping(path = "/employee/{id}", method = RequestMethod.PUT, consumes = MediaType.APPLICATION_JSON_VALUE)
public ResponseEntity<String> updateEmp(@EntityViewId("id") @RequestBody Employee emp) {
    employeeRepository.save(emp);
    return ResponseEntity.ok(emp.getId().toString());
}
Here you can see an example project: https://github.com/Blazebit/blaze-persistence/tree/master/examples/spring-data-webmvc
Imagine you have four MySQL database schemas across two environments:
foo (the prod db),
bar (the in-progress restructuring of the foo db),
foo_beta (the test db),
and bar_beta (the test db for new structures).
Further, imagine you have a Spring Boot app with Hibernate annotations on the entities, like so:
@Table(name="customer", schema="bar")
public class Customer { ... }

@Table(name="customer", schema="foo")
public class LegacyCustomer { ... }
When developing locally it's no problem. You mimic the production database table names in your local environment. But then you try to demo functionality before it goes live and want to upload it to the server. You start another instance of the app on another port and realize this copy needs to point to "foo_beta" and "bar_beta", not "foo" and "bar"! What to do!
Were you using only one schema in your app, you could've left off the schema altogether and specified hibernate.default_schema, but... you're using two. So that's out.
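For reference, that single-schema escape hatch would have been a single property per environment, e.g. in a Spring Boot application.properties (the profile name here is made up):

# application-beta.properties (hypothetical per-environment profile)
spring.jpa.properties.hibernate.default_schema=foo_beta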
Spring EL, e.g. @Table(name="customer", schema="${myApp.schemaName}"), isn't an option either (the request for it was even closed with some snooty "no-one needs this" comments). So if dynamically defining schemas is absurd, what does one do? Other than, you know, not getting into this ridiculous scenario in the first place.
I have fixed this kind of problem by adding support for my own schema annotation to Hibernate. It is not very hard to implement by extending LocalSessionFactoryBean (or AnnotationSessionFactoryBean for Hibernate 3). The annotation looks like this:
@Target(TYPE)
@Retention(RUNTIME)
public @interface Schema {
    String alias() default "";
    String group() default "";
}
Example of use:
@Entity
@Table
@Schema(alias = "em", group = "ref")
public class SomePersistent {
}
And a schema name for every combination of alias and group is specified in the Spring configuration.
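To illustrate the resolution step that the extended factory performs, a simplified sketch (the alias/group-to-schema map would be filled from the Spring configuration; all names here are illustrative, not the answerer's actual code):

// Illustrative only: look up the physical schema for an entity class from
// its @Schema annotation and a map populated from Spring configuration,
// e.g. "em:ref" -> "foo_beta" in the test environment.
private String resolveSchema(Class<?> entityClass, Map<String, String> schemaByAliasAndGroup) {
    Schema schema = entityClass.getAnnotation(Schema.class);
    if (schema == null) {
        return null; // no annotation: let hibernate.default_schema apply
    }
    return schemaByAliasAndGroup.get(schema.alias() + ":" + schema.group());
}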
You can try it with an interceptor:
public class CustomInterceptor extends EmptyInterceptor {
    @Override
    public String onPrepareStatement(String sql) {
        String preparedStatement = super.onPrepareStatement(sql);
        preparedStatement = preparedStatement.replaceAll("schema", "Schema1");
        return preparedStatement;
    }
}
Add this interceptor to the session object like this:
Session session = sessionFactory.withOptions().interceptor(new CustomInterceptor()).openSession();
So whenever onPrepareStatement is executed, this block of code is called and the schema name is changed from "schema" to "Schema1".
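Note that onPrepareStatement is deprecated in newer Hibernate versions (and gone in 6.x); the supported hook for this kind of SQL rewriting is a StatementInspector, registered via the hibernate.session_factory.statement_inspector setting. A sketch of the same rewrite:

import org.hibernate.resource.jdbc.spi.StatementInspector;

// Same rewrite as the interceptor above, using the non-deprecated hook.
public class SchemaRewriteInspector implements StatementInspector {
    @Override
    public String inspect(String sql) {
        return sql.replaceAll("schema", "Schema1");
    }
}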
You can override the settings you declare in the annotations using an orm.xml file. Configure Maven (or whatever you use to produce your deployable build artifacts) to generate that override file for the test environment.
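For example, an orm.xml for the beta environment could remap both schemas without touching the annotated classes (the package names here are made up):

<entity-mappings xmlns="http://xmlns.jcp.org/xml/ns/persistence/orm" version="2.1">
    <entity class="com.example.Customer">
        <table name="customer" schema="bar_beta"/>
    </entity>
    <entity class="com.example.LegacyCustomer">
        <table name="customer" schema="foo_beta"/>
    </entity>
</entity-mappings>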
Let's suppose I have two tables in my database and I have to write a join query over them. I mapped one of those tables as an entity class in my MVC project, but there is no entity mapping for the other table.
So when I run HQL, will that join work?
If it doesn't, and a mapping is necessary, should I specify the constraints (primary/foreign key) between those entities?
My application only reads data from the tables, so I don't want to waste much time writing entity classes. Is there an easy approach using Hibernate?
About your question: HQL only works with mapped entities, but it can return non-mapped objects with a ResultTransformer, which is not your case. You can create a minimal definition of the unwanted entity with just the relationships and properties needed by your HQL.
Another way to solve it is to write a plain SQL query and return only the mapped entity with session.createSQLQuery(yourQuerySQL).addEntity(YourMappedEntity.class).
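For example (a sketch; the table and column names are invented):

// Native SQL join across both tables; only the mapped entity is hydrated.
List<YourMappedEntity> result = session
        .createSQLQuery("select m.* from mapped_table m "
                + "join unmapped_table u on u.mapped_id = m.id "
                + "where u.some_column = :value")
        .addEntity(YourMappedEntity.class)
        .setParameter("value", value)
        .list();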
Hibernate only knows about what is in the session factory. If you have not mapped some entity, Hibernate will never know about it, so there is no question of writing HQL that involves that entity.
Alternatively, you can get a connection from the session and then execute custom SQL rather than HQL.
To use plain SQL you can do something like:
getSession().doWork(new Work() {
    @Override
    public void execute(Connection connection) throws SQLException {
        // run your custom SQL with plain JDBC here
    }
});