Save List changes with Hibernate - java

I have an object named Token. It has an id, a name, and a value. After saving some data to the DB, I load the tokens into a web page:
 name | value | operation
------+-------+-----------
 tkn1 |  10   |     ×
 tkn2 |  20   |     ×
The × sign lets me delete a token from the server-side collection.
Now I have added token tkn3 with value 30 and deleted tkn2, so the table becomes:
 name | value | operation
------+-------+-----------
 tkn1 |  10   |     ×
 tkn3 |  30   |     ×
With these changes to the collection, how can I reflect them in the database? How do I determine which records were deleted and which were added?
I have tried two solutions:
1. I compared, in the business logic layer, the old data with the new data, found the differences, and sent two lists to the database layer: the first containing the added tokens and the second containing the ids of the tokens to be deleted.
2. I added a flag named status to the object. When I add a token, the flag is NEW; when I delete one, I just set the flag to DELETE. In the DB layer I iterate over the collection object by object and check the flag: if NEW, insert the record; if DELETE, delete it; if SAVED (no changes), do nothing.
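The first approach above (computing the delta in the business layer) can be sketched in plain Java. This is only an illustration of the diff step; for brevity it compares token names alone, not values:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class TokenDiff {

    // Compare the old collection with the edited one and return which
    // token names must be inserted and which must be deleted.
    public static Map<String, Set<String>> diff(Set<String> oldNames, Set<String> newNames) {
        Set<String> added = new HashSet<>(newNames);
        added.removeAll(oldNames);          // present now, absent before -> INSERT
        Set<String> removed = new HashSet<>(oldNames);
        removed.removeAll(newNames);        // present before, absent now -> DELETE
        Map<String, Set<String>> result = new HashMap<>();
        result.put("added", added);
        result.put("removed", removed);
        return result;
    }
}
```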
My questions:
Is this a good way to do this task?
Is there a pattern to accomplish this task?
Can Hibernate help me do that?

• Is this a good way to do this task?
NO.
• Is there a pattern to accomplish this task?
YES.
• Can Hibernate help me do that?
Hibernate solves exactly this situation with the cascade attribute on the List property. Refer to:
http://docs.jboss.org/hibernate/orm/3.3/reference/en/html/collections.html
http://www.mkyong.com/hibernate/hibernate-cascade-example-save-update-delete-and-delete-orphan/

The entity below should solve your problem.
@Entity
public class MyEntity {

    private static enum Status {
        NEW,
        PERSISTENT,
        REMOVED
    }

    @Id
    private Long id;

    private String name;

    private int value;

    @Transient
    private Status uiStatus = Status.NEW;

    public Long getId() {
        return this.id;
    }

    public String getName() {
        return this.name;
    }

    public Status getUiStatus() {
        return this.uiStatus;
    }

    public int getValue() {
        return this.value;
    }

    @PostLoad
    public void onLoad() {
        this.uiStatus = Status.PERSISTENT;
    }

    public void setId(Long id) {
        this.id = id;
    }

    public void setName(String name) {
        this.name = name;
    }

    public void setUiStatus(Status uiStatus) {
        this.uiStatus = uiStatus;
    }

    public void setValue(int value) {
        this.value = value;
    }
}
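What the entity above does not show is the parent side that owns the List. A minimal sketch of that side, assuming a hypothetical parent class and field names not present in the question: cascade makes saves propagate to new list elements, and orphanRemoval deletes rows whose element was removed from the list, which is exactly the add/remove bookkeeping the question asks about.

```java
import java.util.ArrayList;
import java.util.List;
import javax.persistence.CascadeType;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.OneToMany;

@Entity
public class TokenHolder {

    @Id
    @GeneratedValue
    private Long id;

    // cascade persists new list elements together with the parent;
    // orphanRemoval deletes the row of any element removed from the list
    @OneToMany(cascade = CascadeType.ALL, orphanRemoval = true)
    private List<MyEntity> tokens = new ArrayList<>();

    public List<MyEntity> getTokens() {
        return tokens;
    }
}
```

With this mapping, the web layer only mutates the in-memory list and saves the parent; Hibernate works out the inserts and deletes.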

What is good practice to create pojo as having Class fields or simple fields

What is the good practice when creating a POJO: class-typed fields or simple fields? I am creating my POJO like this:
I am creating pojo like this.
public class StatusDTO {
private String id;
private int totalNodes;
private int totalServlets;
private boolean status;
public String getId() {
return id;
}
public void setId(String id) {
this.id = id;
}
public int getTotalNodes() {
return totalNodes;
}
public void setTotalNodes(int totalNodes) {
this.totalNodes = totalNodes;
}
public int getTotalServlets() {
return totalServlets;
}
public void setTotalServlets(int totalServlets) {
this.totalServlets = totalServlets;
}
public boolean isStatus() {
return status;
}
public void setStatus(boolean status) {
this.status = status;
}
}
Someone recommended that I do it like this instead:
public class StatusDTO {
private String id;
private boolean status;
private Total total;
public Total getTotal() {
return total;
}
public void setTotal(Total total) {
this.total = total;
}
public String getId() {
return id;
}
public void setId(String id) {
this.id = id;
}
public boolean isStatus() {
return status;
}
public void setStatus(boolean status) {
this.status = status;
}
public static class Total {
private int nodes;
private int servlets;
public int getNodes() {
return nodes;
}
public void setNodes(int nodes) {
this.nodes = nodes;
}
public int getServlets() {
return servlets;
}
public void setServlets(int servlets) {
this.servlets = servlets;
}
}
}
What difference does it make? Which of the two is good practice?
I am using this class to hold DB info and send it to a WebSocket (STOMP) client.
The answer, as always in such questions, is: It depends.
Simple classes like the first one have the advantage of being simpler and smaller. The advantage of the second approach is that if your class gets extended, maybe now, maybe later, a separate Total class can make that easier.
Good object-oriented programming, and Java is strongly OO, almost always requires you to put everything into its own class.
As a rule of thumb, I create a separate class if:
there is some functionality attached to the fields
you have more than two, maybe three, fields related to each other (e.g. connectionHost, connectionPort)
it's just a model class (e.g. Customer, Article)
I can use the field in multiple other classes
Of course there are more but those are some of the most important ones (comment if you think there is another important one I forgot to mention).
Well, one important thing in a good Java application is separation of concerns. For example, in an airport application, a service that returns the last flight of a customer should not require as a parameter an object carrying the first name, last name, social security number, marital status, gender, or any other customer information that is (or should be) completely useless for retrieving that flight. So you end up with an object Customer (with all customer information) and another object CustomerId (with only the bits necessary to get the flights).
Another example: in an online shop application, a service that calculates the total price of the basket should not require all the information about every article in it (photos, description, specifications, ...), only the prices and discounts, which should be enclosed in another object.
Here you have to decide whether the concerns of your Total object (you need a better name) can be taken separately from the concerns of your StatusDTO object, such that a method could require only the Total object without the associated StatusDTO. If they can be separated, you should have separate objects; if they can't, it's unnecessary.

Java - Event Sourcing - Event Sequence

So, while developing an app, I have to use event sourcing to track all changes to the model. The app itself is built with the Spring framework. The problem I encountered: for example, user A sends a command to delete an entity, and it takes 1 second to complete. User B, at the same time, sends a request to modify, say, the entity's name, and it takes 2 seconds. So my program finishes deleting the entity (persisting an event that says the entity is deleted), and after that another event is persisted for the same entity saying we just modified its name. But no actions are allowed on deleted entities. Boom, we just broke the app logic.

It seems to me that I have to put the methods that write to the database in synchronized blocks, but is there any other way to handle this issue? Like, I dunno, queuing events? The application is not huge and not a lot of requests are expected, so users can wait for their request's turn in the queue (of course I could return a 202 HTTP status code, but like I said, requests are not resource-heavy and there won't be a lot of them, so that's unnecessary). So what is the best approach to use here?
EDIT: Added code to illustrate the problem. Is using synchronized a good practice in this case, or are there other choices?
@RestController
@RequestMapping("/api/test")
public class TestController {

    @Autowired
    private TestCommandService testCommandService;

    @RequestMapping(value = "/update", method = RequestMethod.POST)
    @ResponseStatus(HttpStatus.OK)
    public void update(TestUpdateCommand command) {
        testCommandService.update(command);
    }

    @RequestMapping(value = "/delete", method = RequestMethod.POST)
    @ResponseStatus(HttpStatus.OK)
    public void delete(Long id) {
        testCommandService.delete(id);
    }
}
public class TestUpdateCommand {
private Long id;
private String name;
public TestUpdateCommand() {
}
public Long getId() {
return id;
}
public void setId(Long id) {
this.id = id;
}
public String getName() {
return name;
}
public void setName(String name) {
this.name = name;
}
}
public interface TestCommandService {
    void delete(Long id);
    void update(TestUpdateCommand command);
}
@Service
public class TestCommandServiceImpl implements TestCommandService {

    @Autowired
    TestEventRepository testEventRepository;

    @Override
    @Transactional
    public void delete(Long id) {
        synchronized (TestEvent.class) {
            // do logic, check if data is valid from the domain point of view.
            // Logic is also in the synchronized block.
            DeleteTestEvent event = new DeleteTestEvent();
            event.setId(id);
            testEventRepository.save(event);
        }
    }

    @Override
    @Transactional
    public void update(TestUpdateCommand command) {
        synchronized (TestEvent.class) {
            // do logic, check if data is valid from the domain point of view.
            // Logic is also in the synchronized block.
            UpdateTestEvent event = new UpdateTestEvent();
            event.setId(command.getId());
            event.setName(command.getName());
            testEventRepository.save(event);
        }
    }
}
@Entity
public abstract class TestEvent {

    @Id
    private Long id;

    public TestEvent() {
    }

    public TestEvent(Long id) {
        this.id = id;
    }

    public Long getId() {
        return id;
    }

    public void setId(Long id) {
        this.id = id;
    }
}
@Entity
public class DeleteTestEvent extends TestEvent {
}

@Entity
public class UpdateTestEvent extends TestEvent {

    private String name;

    public UpdateTestEvent() {
    }

    public UpdateTestEvent(String name) {
        this.name = name;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }
}
public interface TestEventRepository extends JpaRepository<TestEvent, Long>{
}
Make sure you read Don't Delete -- Just Don't by Udi Dahan.
I have to put the methods that write to the database in synchronized blocks, but is there any other way to handle this issue?
Yes, but you have to be careful about identifying what the issue is...
In the simple version: as you have discovered, allowing multiple sources of "truth" can introduce a conflict. Synchronized blocks are one answer, but scaling synchronization is challenging.
Another approach is to use a "compare and swap approach" -- each of your writers loads the "current" copy of the state, calculates changes, and then swaps the new state for the "current" state. Imagine two writers, one trying to change state:A to state:B, and one trying to change state:A to state:C. If the first save wins the race, then the second save fails, because (A->C) isn't a legal write when the current state is B. The second writer needs to start over.
(If you are familiar with "conditional PUT" from HTTP, this is the same idea).
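The compare-and-swap idea can be sketched without any framework. The class below is a hypothetical in-memory stand-in for the store; a real event store would compare an expected stream version number rather than the state object itself, but the race-handling logic is the same:

```java
import java.util.concurrent.atomic.AtomicReference;

public class CasStore {

    public enum State { ACTIVE, RENAMED, DELETED }

    private final AtomicReference<State> current = new AtomicReference<>(State.ACTIVE);

    // The writer states which version of the truth its decision was based on.
    // If another writer won the race, the swap fails and the caller must
    // reload the current state and re-validate its command (which may no
    // longer be legal, e.g. renaming a deleted entity).
    public boolean tryTransition(State expected, State next) {
        return current.compareAndSet(expected, next);
    }

    public State current() {
        return current.get();
    }
}
```

Here the "delete" and "rename" writers both read ACTIVE; whichever swaps first wins, and the loser's stale write is rejected instead of silently corrupting the stream.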
At a more advanced level, the requirement that the behavior of your system depends on the order that messages arrive is suspicious: see Udi Dahan's Race Conditions Don't Exist. Why is it wrong to change something after deleting it?
You might be interested in Martin Kleppmann's work on conflict resolution for eventual consistency. He specifically discusses examples where one writer edits an element that another writer deletes.

Relationship Handling: Hibernate vs JDBC

Imagine I have a MySQL database with the 2 tables patient and medicine. I have displayed their columns below.
Patient
idPatient (int) (primary key)
first_name (varchar)
last_name (varchar)
Medicine
idMedicine (int) (primary key)
idPatient (int) (foreign key)
drug_name (varchar)
Please note that the Medicine table has a foreign key to the Patient table.
Now, if I use pure JDBC, I will do the following to create a bean for the Medicine and Patient tables
PatientBean class
public class PatientBean
{
private int idPatient;
private String first_name;
private String last_name;
public void setIdPatient(int idPatient)
{
this.idPatient = idPatient;
}
public int getIdPatient()
{
return idPatient;
}
public void setFirstName(String first_name)
{
this.first_name = first_name;
}
public String getFirstName()
{
return first_name;
}
public void setLastName(String last_name)
{
this.last_name = last_name;
}
public String getLastName()
{
return last_name;
}
}
`MedicineBean` class
public class MedicineBean
{
private int idMedicine;
private int idPatient;
private String drug_name;
public void setIdMedicine(int idMedicine)
{
this.idMedicine = idMedicine;
}
public int getIdMedicine()
{
return idMedicine;
}
public void setIdPatient(int idPatient)
{
this.idPatient = idPatient;
}
public int getIdPatient()
{
return idPatient;
}
public void setDrugName(String drug_name)
{
this.drug_name = drug_name;
}
public String getDrugName()
{
return drug_name;
}
}
However if I reverse engineer my database for hibernate using a tool like NetBeans which will generate the POJO files, mapping etc for Hibernate, I can expect something like below.
PatientBean class
public class PatientBean
{
private int idPatient;
private String first_name;
private String last_name;
private MedicineBean medicineBean;
public void setIdPatient(int idPatient)
{
this.idPatient = idPatient;
}
public int getIdPatient()
{
return idPatient;
}
public void setFirstName(String first_name)
{
this.first_name = first_name;
}
public String getFirstName()
{
return first_name;
}
public void setLastName(String last_name)
{
this.last_name = last_name;
}
public String getLastName()
{
return last_name;
}
public void setMedicineBean(MedicineBean medicineBean)
{
this.medicineBean = medicineBean;
}
public MedicineBean getMedicineBean()
{
return medicineBean;
}
}
MedicineBean class
public class MedicineBean
{
private int idMedicine;
private int idPatient;
private String drug_name;
private Set<PatientBean> patients = new HashSet<PatientBean>(0);
public void setIdMedicine(int idMedicine)
{
this.idMedicine = idMedicine;
}
public int getIdMedicine()
{
return idMedicine;
}
public void setIdPatient(int idPatient)
{
this.idPatient = idPatient;
}
public int getIdPatient()
{
return idPatient;
}
public void setDrugName(String drug_name)
{
this.drug_name = drug_name;
}
public String getDrugName()
{
return drug_name;
}
public void setPatients(Set<PatientBean>patients)
{
this.patients = patients;
}
public Set<PatientBean> getPatients()
{
return patients;
}
}
Not only this, Hibernate will also map the relationship type (one-to-one, one-to-many, many-to-one) inside the XML files. In JDBC we don't care about that at all; foreign keys are just columns treated the same way as any other.
So my question is: why this difference? I believe most of the operations Hibernate does are useless and just burn CPU. For example, retrieving the list of patients when we call a getAllMedicines() method: in 99% of cases we just need the medicines, not the list of patients, and if we do need that, we can make a join and get it!
So what is the reason behind this? Or should we maintain the same behavior in JDBC too?
I don't think you lose full control with Hibernate, as you fear.
The main difference is that Hibernate adds an extra layer between your code and JDBC. This layer can be really thin: you can drop down to JDBC from within Hibernate at any time, so you are not losing any control.
The harder part is understanding how Hibernate works, so that you can use its higher-level API and know how Hibernate will translate it to JDBC. This is a somewhat complex task, because ORM mapping is a complex subject. Reading the reference documentation several times, to know exactly what Hibernate can do and what it recommends doing and not doing, is a good starting point. The rest comes from experience using Hibernate.
For your example, you say Hibernate maps the relationship, but that is not the case: your reverse-engineering tool did it. You are free not to map a relationship and to map just the foreign key as a basic type instead (like a Long if the id is a number).
As for the loading of stuff: if you wish to always have a @OneToMany loaded, just annotate it with FetchType.EAGER. @*ToMany associations are lazy by default (to avoid loading too much data), while @*ToOne associations are EAGER by default.
This can be configured at the entity level, making it the default behavior for queries, and overridden for each individual query.
You see? You are not losing control; you just need to understand how the Hibernate API translates to JDBC.
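As an illustration of that per-query override, here is a sketch using the question's own bean names (the mappedBy field and query are assumptions, not taken from the question): the annotation sets the default fetch plan, and a JPQL `join fetch` overrides it for one query.

```java
// Default at the entity level: the collection is loaded eagerly
// with every PatientBean.
@OneToMany(mappedBy = "patient", fetch = FetchType.EAGER)
private Set<MedicineBean> medicines;

// Per-query override: even a LAZY mapping can be fetched in a single
// SQL join for this one query, leaving the default untouched elsewhere.
// session.createQuery(
//     "select p from PatientBean p left join fetch p.medicines"
//     + " where p.idPatient = :id",
//     PatientBean.class);
```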
Apart from bugs, which are fixed when raised to the Hibernate team, the performance impact of Hibernate is not that big. And in performance-critical parts of the application, you always have the choice to resort to JDBC, where the Hibernate overhead is zero.
What do you gain from using Hibernate? From my experience, refactoring the entity model / database model is much easier, because you change the Hibernate mapping and all the queries generated by Hibernate change automatically too. You only have to update the custom queries (SQL / HQL / Criteria) that you've hand-written.
From my experience (10 years using Hibernate) on databases of several hundred tables (some with more than 10B rows) and several terabytes, I would not want to go back to plain JDBC. That doesn't mean I don't use JDBC when it is the perfect tool, but it is only about 1 or 2% of the ORM code I write.
Hope that helps.
EDIT: and if you are using Hibernate with Spring, have a look at spring-jdbc, which adds a nice layer around JDBC. There you hardly need to read the docs: you recognize directly how it translates to JDBC, and it brings a lot of utility that reduces the boilerplate of using JDBC directly (exception handling that closes the ResultSet and PreparedStatement, transformation of a ResultSet into a List of DTOs, etc.).
Of course Hibernate and spring-jdbc can be used in the same application. They just have to be configured to use the same transaction layer, and care must be taken when they are used in the same transaction.

How to create a one-to-many relationship with JDBI SQL object API?

I'm creating a simple REST application with dropwizard using JDBI. The next step is to integrate a new resource that has a one-to-many relationship with another one. Until now I couldn't figure out how to create a method in my DAO that retrieves a single object that holds a list of objects from another table.
The POJO representations would be something like this:
User POJO:
public class User {
private int id;
private String name;
public User(int id, String name) {
this.id = id;
this.name = name;
}
public int getId() {
return id;
}
public void setId(int id) {
this.id = id;
}
public String getName() {
return name;
}
public void setName(String name) {
this.name = name;
}
}
Account POJO:
public class Account {
private int id;
private String name;
private List<User> users;
public Account(int id, String name, List<User> users) {
this.id = id;
this.name = name;
this.users = users;
}
public int getId() {
return id;
}
public void setId(int id) {
this.id = id;
}
public String getName() {
return name;
}
public void setName(String name) {
this.name = name;
}
public List<User> getUsers() {
return users;
}
public void setUsers(List<User> users) {
this.users = users;
}
}
The DAO should look something like this
public interface AccountDAO {

    @Mapper(AccountMapper.class)
    @SqlQuery("SELECT Account.id, Account.name, User.name as u_name FROM Account LEFT JOIN User ON User.accountId = Account.id WHERE Account.id = :id")
    public Account getAccountById(@Bind("id") int id);
}
But when the method has a single object as its return value (Account instead of List<Account>), there seems to be no way to access more than one row of the result set in the mapper class. The closest solution I could find is described at https://groups.google.com/d/msg/jdbi/4e4EP-gVwEQ/02CRStgYGtgJ, but that one also only returns a Set with a single object, which does not seem very elegant (and can't be properly used by the resource classes).
There seems to be a way using a Folder2 in the fluent API. But I don't know how to integrate that properly with dropwizard and I'd rather stick to JDBI's SQL object API as recommended in the dropwizard documentation.
Is there really no way to get a one-to-many mapping using the SQL object API in JDBI? That is such a basic use case for a database that I think I must be missing something.
All help is greatly appreciated,
Tilman
OK, after a lot of searching, I see two ways of dealing with this:
The first option is to retrieve an object for each column and merge it in the Java code at the resource (i.e. do the join in the code instead of having it done by the database).
This would result in something like
@GET
@Path("/{accountId}")
public Response getAccount(@PathParam("accountId") Integer accountId) {
    Account account = accountDao.getAccount(accountId);
    account.setUsers(userDao.getUsersForAccount(accountId));
    return Response.ok(account).build();
}
This is feasible for smaller join operations but seems not very elegant to me, as this is something the database is supposed to do. However, I decided to take this path as my application is rather small and I did not want to write a lot of mapper code.
The second option is to write a mapper, that retrieves the result of the join query and maps it to the object like this:
public class AccountMapper implements ResultSetMapper<Account> {
private Account account;
// this mapping method will get called for every row in the result set
public Account map(int index, ResultSet rs, StatementContext ctx) throws SQLException {
// for the first row of the result set, we create the wrapper object
if (index == 0) {
account = new Account(rs.getInt("id"), rs.getString("name"), new LinkedList<User>());
}
// ...and with every line we add one of the joined users
User user = new User(rs.getInt("u_id"), rs.getString("u_name"));
if (user.getId() > 0) {
account.getUsers().add(user);
}
return account;
}
}
The DAO interface will then have a method like this:
public interface AccountDAO {

    @Mapper(AccountMapper.class)
    @SqlQuery("SELECT Account.id, Account.name, User.id as u_id, User.name as u_name FROM Account LEFT JOIN User ON User.accountId = Account.id WHERE Account.id = :id")
    public List<Account> getAccountById(@Bind("id") int id);
}
Note: Your abstract DAO class will quietly compile if you use a non-collection return type, e.g. public Account getAccountById(...);. However, your mapper will only receive a result set with a single row even if the SQL query would have found multiple rows, which your mapper will happily turn into a single account with a single user. JDBI seems to impose a LIMIT 1 for SELECT queries that have a non-collection return type. It is possible to put concrete methods in your DAO if you declare it as an abstract class, so one option is to wrap up the logic with a public/protected method pair, like so:
public abstract class AccountDAO {

    @Mapper(AccountMapper.class)
    @SqlQuery("SELECT Account.id, Account.name, User.id as u_id, User.name as u_name FROM Account LEFT JOIN User ON User.accountId = Account.id WHERE Account.id = :id")
    protected abstract List<Account> _getAccountById(@Bind("id") int id);

    public Account getAccountById(int id) {
        List<Account> accountList = _getAccountById(id);
        if (accountList == null || accountList.size() < 1) {
            // log it or report an error if needed
            return null;
        }
        // the mapper gave a reference to the same value for every entry in the list
        return accountList.get(accountList.size() - 1);
    }
}
This still seems a little cumbersome and low-level to me, as there are usually a lot of joins when working with relational data. I would love to see a better way, or to have JDBI support an abstraction for this in the SQL object API.
In JDBI v3, you can use @UseRowReducer to achieve this. The row reducer is called on every row of the joined result, which you can "accumulate" into a single object. A simple implementation in your case would look like:
public class AccountUserReducer implements LinkedHashMapRowReducer<Integer, Account> {

    @Override
    public void accumulate(final Map<Integer, Account> map, final RowView rowView) {
        final Account account = map.computeIfAbsent(rowView.getColumn("a_id", Integer.class),
                id -> rowView.getRow(Account.class));
        if (rowView.getColumn("u_id", Integer.class) != null) {
            account.addUser(rowView.getRow(User.class));
        }
    }
}
You can now apply this reducer on a query that returns the join:
@RegisterBeanMapper(value = Account.class, prefix = "a")
@RegisterBeanMapper(value = User.class, prefix = "u")
@SqlQuery("SELECT a.id a_id, a.name a_name, u.id u_id, u.name u_name FROM " +
        "Account a LEFT JOIN User u ON u.accountId = a.id WHERE " +
        "a.id = :id")
@UseRowReducer(AccountUserReducer.class)
Account getAccount(@Bind("id") int id);
Note that your User and Account row/bean mappers can remain unchanged; they simply know how to map an individual row of the user and account tables respectively. Your Account class will need a method addUser() that is called each time the row reducer is called.
I have a small library which is very useful for maintaining one-to-many and one-to-one relationships. It also provides more features for the default mappers.
https://github.com/Manikandan-K/jdbi-folder
There's an old Google Groups post where Brian McCallister (one of the JDBI authors) does this by mapping each joined row to an interim object, then folding the rows into the target object.
See the discussion here. There's test code here.
Personally this seems a little unsatisfying, since it means writing an extra interim object and mapper. Still, I think this answer should be included for completeness!

Auto Increment like with String values in hibernate

I want to know if there is a more appropriate way to generate auto-ids with string values. My first idea: create an auto-increment id column (call it auto_id); before saving a new entity, query the DB for the latest id, add 1, and assign my generated column the value stringValue + (id + 1). I'm concerned about how this affects performance, since saving the entity then needs two DB accesses: a fetch and a save. So, as asked above, is there a more appropriate way to handle this scenario?
Also, sorry for my English, guys. If you want me to clarify anything about my question, kindly ask. Thanks in advance.
Here's my code for the AttributeModel with Hibernate annotations:
@Component
@Entity
@Table(name = "attribute_info")
public class AttributeModel {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    @Column(name = "attr_id", nullable = false, unique = true)
    private int id;

    @Column(name = "attr_name")
    private String name;

    @Column(name = "attr_desc")
    private String desc;

    @Column(name = "attr_active")
    private int active;

    @Column(name = "attr_abbr")
    private String abbr;

    @OneToOne(cascade = CascadeType.ALL, fetch = FetchType.EAGER)
    @JoinColumn(name = "stats_id", referencedColumnName = "stats_id")
    private BaseStatisticModel baseStats;

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public String getDesc() {
        return desc;
    }

    public void setDesc(String desc) {
        this.desc = desc;
    }

    public int getActive() {
        return active;
    }

    public void setActive(int active) {
        this.active = active;
    }

    public String getAbbr() {
        return abbr;
    }

    public void setAbbr(String abbr) {
        this.abbr = abbr;
    }

    public BaseStatisticModel getBaseStats() {
        return baseStats;
    }

    public void setBaseStats(BaseStatisticModel baseStats) {
        this.baseStats = baseStats;
    }

    public int getId() {
        return id;
    }

    public void setId(int id) {
        this.id = id;
    }
}
I can only say "Don't do it". How is a String ID like "str10001" better than 10001? It can't be an optimization as strings take more memory and more time. So I guess you need to pass it to some String-expecting method later.
If so, then pass "str" + id instead. Constructing the string on the fly surely won't saturate your server.
If not, then let us know what you actually need, rather than the way you think you could achieve it.
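The "construct it on the fly" suggestion is trivial to sketch; the class and method names here are made up for illustration:

```java
public class StringIds {

    // Derive the display form from the numeric surrogate key when needed,
    // instead of persisting "str10001" in the table.
    public static String displayId(long id) {
        return "str" + id;
    }
}
```

The database keeps a plain auto-increment integer, and no second round trip is needed to compute the string.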
I'm pretty sure Hibernate can't do it. It couldn't when I checked some time ago, and it makes little sense (in any case, it's not a feature crowds would request).
