Below is our entity class:
@Entity(defaultKeyspace = CASSANDRA_KEYSPACE)
@CqlName(CASSANDRA_TABLE)
public static class Scientist implements Serializable {

    @CqlName("person_name")
    public String name;

    @Computed("writetime(person_name)")
    @CqlName("name_ts")
    public Long nameTs;

    @CqlName("person_id")
    @PartitionKey
    public Integer id;

    public Scientist() {}

    public Scientist(int id, String name) {
        this.id = id;
        this.name = name;
    }

    public Integer getId() {
        return id;
    }

    public void setId(Integer id) {
        this.id = id;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    @Override
    public String toString() {
        return id + ":" + name;
    }

    @Override
    public boolean equals(@Nullable Object o) {
        if (this == o) {
            return true;
        }
        if (o == null || getClass() != o.getClass()) {
            return false;
        }
        Scientist scientist = (Scientist) o;
        return id.equals(scientist.id) && Objects.equal(name, scientist.name);
    }

    @Override
    public int hashCode() {
        return Objects.hashCode(name, id);
    }
}

@Dao
public interface ScientistDao {

    @GetEntity
    MappedAsyncPagingIterable<Scientist> map(AsyncResultSet resultSet);

    @Delete
    CompletionStage<Void> deleteAsync(Scientist entity);

    @Insert
    CompletionStage<Void> saveAsync(Scientist entity);
}
The problem faced is: when the computed fields (in the above case, writetime(person_name)) are not selected as part of the query, the mapping fails.
In the 3.x driver, mapped fields that were not present in the ResultSet were ignored. link
In the 4.x driver, for each entity field, the database table or UDT must contain a column with the corresponding name. link
Please suggest a possible solution/workaround where this computed field can be part of the query on a need basis, and the mapping happens successfully without throwing an IllegalArgumentException.
Edit:
scientist table schema
CREATE TABLE beam_ks.scientist (person_id int PRIMARY KEY, person_name text);
Below is the query tried:
select person_id,writetime(person_name) as name_ts from beam_ks.scientist where person_id=10
Mapping of the result set with @GetEntity fails with the error below:
Caused by: java.lang.IllegalArgumentException: person_name is not a column in this row
at com.datastax.oss.driver.internal.core.cql.DefaultRow.firstIndexOf(DefaultRow.java:110)
at com.datastax.oss.driver.api.core.data.GettableByName.get(GettableByName.java:144)
at org.apache.beam.sdk.io.cassandra.CassandraIOTest_ScientistHelper__MapperGenerated.get(CassandraIOTest_ScientistHelper__MapperGenerated.java:89)
get method in CassandraIOTest_ScientistHelper__MapperGenerated:
@Override
public CassandraIOTest.Scientist get(GettableByName source) {
    CassandraIOTest.Scientist returnValue = new CassandraIOTest.Scientist();
    Integer propertyValue = source.get("person_id", Integer.class);
    returnValue.setId(propertyValue);
    String propertyValue1 = source.get("person_name", String.class);
    returnValue.setName(propertyValue1);
    return returnValue;
}
Also, the documentation does not specify whether to add getter and setter methods for computed values, so they were removed from the entity class.
When using @GetEntity methods, it is your responsibility to provide a result set object that is 100% compatible with the entity definition.
Here your Scientist entity contains two regular fields: person_id (integer) and person_name (text). Therefore your result set must contain (at least) two columns with these names and types.
But you said you provided the following query: select person_id,writetime(person_name) as name_ts from beam_ks.scientist where person_id=10.
This query does not contain the required columns. You should change your query to the one below, or something similar:
select person_id, person_name from beam_ks.scientist where person_id=10
Note that @GetEntity methods do not recognize computed values, only regular ones. It is not necessary to include writetime(person_name) as name_ts; it won't be mapped anyway.
Related
public class Foo {

    private long id;
    private String name;
    private boolean isBar;

    public long getId() {
        return id;
    }

    public void setId(long id) {
        this.id = id;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public boolean isBar() {
        return isBar;
    }

    public void setBar(boolean isBar) {
        this.isBar = isBar;
    }
}
@Component
public class FooDAO {

    private final JdbcTemplate jdbcTemplate;

    private FooDAO(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    public List<Foo> findAll() {
        return jdbcTemplate.query("SELECT * FROM foo", new BeanPropertyRowMapper<>(Foo.class));
    }
}
When I set up a custom FooRowMapper and manually call setBar(rs.getBoolean("is_bar")), Foo.isBar is properly set to true when the DB value is 1, but not when using the BeanPropertyRowMapper instead of a custom row mapper.
According to this, BeanPropertyRowMapper should properly convert 1 to true, so why isn't it in my case?
p.s. I already figured out why but thought I'd post it in case it's helpful to anybody. I'm sure it won't take long for someone else to figure it out and post the answer.
I knew this:
Column values are mapped based on matching the column name as obtained from result set meta-data to public setters for the corresponding properties. The names are matched either directly or by transforming a name separating the parts with underscores to the same name using "camel" case.
But got thrown off because my Foo.isBar property had the correct camel case equivalent of my db field name (is_bar), however, my public setter name was incorrect as setBar; the setter should be setIsBar.
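That naming rule can be checked directly with the JDK's own JavaBeans introspection, which is what Spring's property matching builds on. Below is a minimal sketch (the nested Foo is a hypothetical class with the same accessor shape as above, and PropertyNameDemo is an illustrative name):

```java
import java.beans.IntrospectionException;
import java.beans.Introspector;
import java.beans.PropertyDescriptor;

public class PropertyNameDemo {

    // Same accessor shape as the Foo above: boolean getter "isBar", setter "setBar".
    public static class Foo {
        private boolean isBar;
        public boolean isBar() { return isBar; }
        public void setBar(boolean isBar) { this.isBar = isBar; }
    }

    // Returns the JavaBeans property name exposed by the class's writable property.
    public static String writablePropertyName(Class<?> cls) {
        try {
            for (PropertyDescriptor pd : Introspector.getBeanInfo(cls).getPropertyDescriptors()) {
                if (pd.getWriteMethod() != null) {
                    return pd.getName();
                }
            }
        } catch (IntrospectionException e) {
            throw new IllegalStateException(e);
        }
        return null;
    }

    public static void main(String[] args) {
        // The pair isBar()/setBar(boolean) defines a property named "bar", not "isBar",
        // so a column is_bar (camel-cased to "isBar") never matches it.
        System.out.println(writablePropertyName(Foo.class)); // prints "bar"
    }
}
```

This is why renaming the setter to setIsBar (property "isBar") makes the is_bar column match.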
After googling I was also thrown off by others wanting to use BeanPropertyRowMapper to convert database values of Y/N to boolean values.
And I also assumed BeanPropertyRowMapper was actually setting the value to false even though it wasn't and the false value simply remained as the default boolean primitive value.
Another solution, if for whatever reason setBar instead of setIsBar were actually desired, would be to use a field alias in the SQL select statement, as the docs say:
To facilitate mapping between columns and fields that don't have matching names, try using column aliases in the SQL statement like "select fname as first_name from customer".
I'm trying to implement a tree structure in JPA, that I want mapped to an H2 database using EclipseLink. The nodes of the tree are possibly subclasses of the base node class. What is happening is that EL is creating a brain-dead link table for the children as follows:
[EL Fine]: sql: 2015-04-10 13:26:08.266--ServerSession(667346055)--Connection(873610597)--CREATE TABLE ORGANIZATIONNODE_ORGANIZATIONNODE (OrganizationNode_IDSTRING VARCHAR NOT NULL, children_IDSTRING VARCHAR NOT NULL, Device_IDSTRING VARCHAR NOT NULL, monitors_IDSTRING VARCHAR NOT NULL, PRIMARY KEY (OrganizationNode_IDSTRING, children_IDSTRING, Device_IDSTRING, monitors_IDSTRING))
OrganizationNode is the proper superclass of Device. Both of these are #Entity, OrganizationNode extends AbstractEntity, which is a #MappedSuperclass where the #Id is defined (it is a string). Even stranger, while there is a Monitor class that is not in the tree structure, the only place "monitors" plural occurs is as a field of Device... what??
Now, it's fine to use a table like that to implement a tree structure, but I don't expect a compound primary key with separate instances of the Id field for each subclass! That's got to break - because some children are not Device, and therefore do not have a "Device_IDSTRING", and sure enough:
Exception [EclipseLink-4002] (Eclipse Persistence Services - 2.5.0.v20130507-3faac2b): org.eclipse.persistence.exceptions.DatabaseException|Internal Exception: org.h2.jdbc.JdbcSQLException: NULL not allowed for column "DEVICE_IDSTRING"; SQL statement:|INSERT INTO ORGANIZATIONNODE_ORGANIZATIONNODE (children_IDSTRING, OrganizationNode_IDSTRING) VALUES (?, ?) [23502-186]|Error Code: 23502|Call: INSERT INTO ORGANIZATIONNODE_ORGANIZATIONNODE (children_IDSTRING, OrganizationNode_IDSTRING) VALUES (?, ?)|?bind => [2 parameters bound]|Query: DataModifyQuery(name="children" sql="INSERT INTO ORGANIZATIONNODE_ORGANIZATIONNODE (children_IDSTRING, OrganizationNode_IDSTRING) VALUES (?, ?)")
This seems like truly bizarre behavior. I've tried every combination of mapping annotations I could possibly think of to fix it. Any ideas?
Classes follow.
AbstractEntity.java:
@MappedSuperclass
public abstract class AbstractEntity {

    // @Converter(name="uuidConverter", converterClass=UUIDConverter.class)
    transient UUID id = null;

    @Id String idString;

    static long sequence = 1;
    static long GREGORIAN_EPOCH_OFFSET = 12219292800000L;

    public AbstractEntity() {
        ThreadContext tctx = ThreadContext.getThreadContext();
        long msb = tctx.getNodeID();
        long lsb = (System.currentTimeMillis() + GREGORIAN_EPOCH_OFFSET) * 1000 + ((sequence++) % 1000);
        lsb = (lsb & 0xCFFFFFFFFFFFFFFFL) | (0x8000000000000000L);
        msb = (msb & 0xFFFFFFFFFFFF0FFFL) | (0x0000000000001000L);
        id = new UUID(msb, lsb);
        idString = id.toString();
    }

    @Id
    public UUID getUUID() {
        return id;
    }

    public String getIdString() {
        return idString;
    }

    public void setIdString(String idString) {
        this.idString = idString;
        this.id = UUID.fromString(idString);
    }

    void setUUID(UUID id) {
        this.id = id;
        this.idString = id.toString();
    }

    @Override
    public String toString() {
        return "[" + this.getClass().getCanonicalName() + " " + this.getUUID() + "]";
    }
}
OrganizationNode.java:
@Entity
public class OrganizationNode extends AbstractEntity {

    @ManyToOne(cascade = CascadeType.ALL)
    NodeType nodeType;

    @Column(nullable = true)
    String name;

    @OneToMany(cascade = CascadeType.ALL)
    Set<OrganizationNode> children;

    public OrganizationNode() {}

    public OrganizationNode(NodeType nt, String name) {
        this.nodeType = nt;
        this.name = name;
        children = new HashSet<>();
    }

    public void setNodeType(NodeType nt) {
        nodeType = nt;
    }

    public NodeType getNodeType() {
        return nodeType;
    }

    public String getName() {
        if ((name == null) || (name.equals(""))) return null;
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public Set<OrganizationNode> getChildren() {
        return children;
    }

    public void setChildren(Set<OrganizationNode> children) {
        this.children = children;
    }

    public void addNode(OrganizationNode node) {
        children.add(node);
    }

    public void removeNode(OrganizationNode node) {
        children.remove(node);
    }
}
Device.java:
@Entity
public class Device extends OrganizationNode {

    Set<Monitor> monitors;

    public Device() {
        super();
    }

    public Device(NodeType nt, String name) {
        super(nt, name);
        monitors = new HashSet<>();
    }

    public Set<Monitor> getMonitors() {
        return monitors;
    }

    public void setMonitors(Set<Monitor> monitors) {
        this.monitors = monitors;
    }

    public void addMonitor(Monitor monitor) {
        monitors.add(monitor);
    }
}
You need to decide which inheritance strategy you want to use.
The default one is typically "single table" inheritance, so all the subclasses are represented in one table with merged columns:
@Inheritance
@Entity
public class OrganizationNode extends AbstractEntity {
...
}
and that is what you saw in the SQL.
You can have joined, multiple-table inheritance, where each subclass has its own table joined with the parent table:
@Inheritance(strategy=InheritanceType.JOINED)
Finally, the last option is table-per-class inheritance, where there is no "inheritance" tree reflected in the table structure, and each class has its own full table with all the columns from the class and its superclasses:
@Inheritance(strategy=InheritanceType.TABLE_PER_CLASS)
The last one is the least efficient.
You can have only one strategy, which you define at the top of the inheritance hierarchy (OrganizationNode); it cannot be changed in subclasses.
The default single table inheritance is typically the most efficient, unless there are really a lot of columns which are not shared between the classes.
You should probably also explicitly declare the column which will be used to determine the actual class type, @DiscriminatorColumn(name="NODE_TYPE"), and for each entity define the value: @DiscriminatorValue("TYPE1")
I'm creating a simple REST application with dropwizard using JDBI. The next step is to integrate a new resource that has a one-to-many relationship with another one. Until now I couldn't figure out how to create a method in my DAO that retrieves a single object that holds a list of objects from another table.
The POJO representations would be something like this:
User POJO:
public class User {
private int id;
private String name;
public User(int id, String name) {
this.id = id;
this.name = name;
}
public int getId() {
return id;
}
public void setId(int id) {
this.id = id;
}
public String getName() {
return name;
}
public void setName(String name) {
this.name = name;
}
}
Account POJO:
public class Account {
private int id;
private String name;
private List<User> users;
public Account(int id, String name, List<User> users) {
this.id = id;
this.name = name;
this.users = users;
}
public int getId() {
return id;
}
public void setId(int id) {
this.id = id;
}
public String getName() {
return name;
}
public void setName(String name) {
this.name = name;
}
public List<User> getUsers() {
return users;
}
public void setUsers(List<User> users) {
this.users = users;
}
}
The DAO should look something like this
public interface AccountDAO {

    @Mapper(AccountMapper.class)
    @SqlQuery("SELECT Account.id, Account.name, User.name as u_name FROM Account LEFT JOIN User ON User.accountId = Account.id WHERE Account.id = :id")
    public Account getAccountById(@Bind("id") int id);
}
But when the method has a single object as return value (Account instead of List<Account>), there seems to be no way to access more than one row of the resultSet in the Mapper class. The only solution that comes close is described at https://groups.google.com/d/msg/jdbi/4e4EP-gVwEQ/02CRStgYGtgJ, but that one also only returns a Set with a single object, which does not seem very elegant (and can't be properly used by the resource classes).
There seems to be a way using a Folder2 in the fluent API. But I don't know how to integrate that properly with dropwizard and I'd rather stick to JDBI's SQL object API as recommended in the dropwizard documentation.
Is there really no way to get a one-to-many mapping using the SQL object API in JDBI? That is such a basic use case for a database that I think I must be missing something.
All help is greatly appreciated,
Tilman
OK, after a lot of searching, I see two ways dealing with this:
The first option is to retrieve the objects with separate queries and merge them in the Java code at the resource (i.e. do the join in the code instead of having the database do it).
This would result in something like
@GET
@Path("/{accountId}")
public Response getAccount(@PathParam("accountId") Integer accountId) {
    Account account = accountDao.getAccount(accountId);
    account.setUsers(userDao.getUsersForAccount(accountId));
    return Response.ok(account).build();
}
This is feasible for smaller join operations but seems not very elegant to me, as this is something the database is supposed to do. However, I decided to take this path as my application is rather small and I did not want to write a lot of mapper code.
The second option is to write a mapper, that retrieves the result of the join query and maps it to the object like this:
public class AccountMapper implements ResultSetMapper<Account> {

    private Account account;

    // This mapping method will get called for every row in the result set.
    public Account map(int index, ResultSet rs, StatementContext ctx) throws SQLException {
        // For the first row of the result set, we create the wrapper object...
        if (index == 0) {
            account = new Account(rs.getInt("id"), rs.getString("name"), new LinkedList<User>());
        }
        // ...and with every row we add one of the joined users.
        User user = new User(rs.getInt("u_id"), rs.getString("u_name"));
        if (user.getId() > 0) {
            account.getUsers().add(user);
        }
        return account;
    }
}
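The fold this mapper performs (create the wrapper on the first row, then append one user per row) can be sketched without any JDBI dependency. The Row class below is a hypothetical stand-in for one flat row of the joined result set; all names here are illustrative:

```java
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

public class JoinFoldDemo {

    // Hypothetical flat row produced by the LEFT JOIN query.
    static final class Row {
        final int accountId; final String accountName;
        final int userId; final String userName;
        Row(int accountId, String accountName, int userId, String userName) {
            this.accountId = accountId; this.accountName = accountName;
            this.userId = userId; this.userName = userName;
        }
    }

    static final class User {
        final int id; final String name;
        User(int id, String name) { this.id = id; this.name = name; }
    }

    static final class Account {
        final int id; final String name;
        final List<User> users = new LinkedList<>();
        Account(int id, String name) { this.id = id; this.name = name; }
    }

    // Fold the joined rows into a single Account: wrapper from the first row,
    // then one user appended per row (a LEFT JOIN row may carry no user).
    static Account fold(List<Row> rows) {
        Account account = null;
        for (Row r : rows) {
            if (account == null) {
                account = new Account(r.accountId, r.accountName);
            }
            if (r.userId > 0) {
                account.users.add(new User(r.userId, r.userName));
            }
        }
        return account;
    }

    public static void main(String[] args) {
        List<Row> rows = new ArrayList<>();
        rows.add(new Row(1, "acme", 7, "alice"));
        rows.add(new Row(1, "acme", 8, "bob"));
        Account a = fold(rows);
        System.out.println(a.name + ": " + a.users.size()); // prints "acme: 2"
    }
}
```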
The DAO interface will then have a method like this:
public interface AccountDAO {

    @Mapper(AccountMapper.class)
    @SqlQuery("SELECT Account.id, Account.name, User.id as u_id, User.name as u_name FROM Account LEFT JOIN User ON User.accountId = Account.id WHERE Account.id = :id")
    public List<Account> getAccountById(@Bind("id") int id);
}
Note: Your abstract DAO class will quietly compile if you use a non-collection return type, e.g. public Account getAccountById(...);. However, your mapper will only receive a result set with a single row even if the SQL query would have found multiple rows, which your mapper will happily turn into a single account with a single user. JDBI seems to impose a LIMIT 1 for SELECT queries that have a non-collection return type. It is possible to put concrete methods in your DAO if you declare it as an abstract class, so one option is to wrap up the logic with a public/protected method pair, like so:
public abstract class AccountDAO {

    @Mapper(AccountMapper.class)
    @SqlQuery("SELECT Account.id, Account.name, User.id as u_id, User.name as u_name FROM Account LEFT JOIN User ON User.accountId = Account.id WHERE Account.id = :id")
    protected abstract List<Account> _getAccountById(@Bind("id") int id);

    public Account getAccountById(int id) {
        List<Account> accountList = _getAccountById(id);
        if (accountList == null || accountList.size() < 1) {
            // Log it or report an error if needed
            return null;
        }
        // The mapper will have returned a reference to the same value for every entry in the list
        return accountList.get(accountList.size() - 1);
    }
}
This still seems a little cumbersome and low-level to me, as there are usually a lot of joins in working with relational data. I would love to see a better way or having JDBI supporting an abstract operation for this with the SQL object API.
In JDBI v3, you can use #UseRowReducer to achieve this. The row reducer is called on every row of the joined result which you can "accumulate" into a single object. A simple implementation in your case would look like:
public class AccountUserReducer implements LinkedHashMapRowReducer<Integer, Account> {

    @Override
    public void accumulate(final Map<Integer, Account> map, final RowView rowView) {
        final Account account = map.computeIfAbsent(rowView.getColumn("a_id", Integer.class),
                id -> rowView.getRow(Account.class));
        if (rowView.getColumn("u_id", Integer.class) != null) {
            account.addUser(rowView.getRow(User.class));
        }
    }
}
You can now apply this reducer on a query that returns the join:
@RegisterBeanMapper(value = Account.class, prefix = "a")
@RegisterBeanMapper(value = User.class, prefix = "u")
@SqlQuery("SELECT a.id a_id, a.name a_name, u.id u_id, u.name u_name FROM " +
        "Account a LEFT JOIN User u ON u.accountId = a.id WHERE " +
        "a.id = :id")
@UseRowReducer(AccountUserReducer.class)
Account getAccount(@Bind("id") int id);
Note that your User and Account row/bean mappers can remain unchanged; they simply know how to map an individual row of the user and account tables respectively. Your Account class will need a method addUser() that is called each time the row reducer is called.
I have a small library which will be very useful for maintaining one-to-many and one-to-one relationships.
It also provides more features for default mappers.
https://github.com/Manikandan-K/jdbi-folder
There's an old Google Groups post where Brian McCallister (one of the JDBI authors) does this by mapping each joined row to an interim object, then folding the rows into the target object.
See the discussion here. There's test code here.
Personally this seems a little unsatisfying since it means writing an extra DBO object and mapper for the interim structure. Still I think this answer should be included for completeness!
I have an object named Token. It has an id, a name, and a value. After saving some data to the DB, I have loaded them into a web page:
 _____________________________________________
|____name____|____value____|____operation____|
|    tkn1    |     10      |        ×        |
|    tkn2    |     20      |        ×        |
The × sign enables me to delete a token from the server collection.
Now I have added a token tkn3 with value 30 and deleted tkn2, so the table would be:
 _____________________________________________
|____name____|____value____|____operation____|
|    tkn1    |     10      |        ×        |
|    tkn3    |     30      |        ×        |
With these changes to the collection, how can I reflect them in the database? How do I determine which records were deleted and which were added?
I have applied two solutions:
1. I compared, in the business logic layer, the old data with the new data and found the differences between them; then I sent two lists to the database: the first containing the added tokens, and the second containing the ids of the tokens to be deleted.
2. I added a flag named status to the object. When I add, the flag is NEW; when I delete, I just set the flag to DELETE. In the DB layer I iterate over the collection object by object and check the flag: if NEW, I add the record; if DELETE, I delete it; and if SAVED (no changes), I do nothing to it.
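As a rough illustration of the second approach, here is a minimal sketch assuming a hypothetical in-memory Token class with a status flag (all names are illustrative, not from the original code):

```java
import java.util.ArrayList;
import java.util.List;

public class TokenSync {

    enum Status { NEW, SAVED, DELETED }

    // Hypothetical in-memory token carrying its UI-side change flag.
    static class Token {
        final String name;
        final int value;
        Status status;
        Token(String name, int value, Status status) {
            this.name = name;
            this.value = value;
            this.status = status;
        }
    }

    // Partition the collection by flag so the DB layer knows what to insert/delete.
    static List<Token> withStatus(List<Token> tokens, Status wanted) {
        List<Token> out = new ArrayList<>();
        for (Token t : tokens) {
            if (t.status == wanted) out.add(t);
        }
        return out;
    }

    public static void main(String[] args) {
        List<Token> tokens = new ArrayList<>();
        tokens.add(new Token("tkn1", 10, Status.SAVED));   // unchanged
        tokens.add(new Token("tkn2", 20, Status.DELETED)); // user removed it
        tokens.add(new Token("tkn3", 30, Status.NEW));     // user added it
        System.out.println("insert: " + withStatus(tokens, Status.NEW).size()
                + ", delete: " + withStatus(tokens, Status.DELETED).size());
        // prints "insert: 1, delete: 1"
    }
}
```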
My questions:
Is this a good way to do this task?
Is there a pattern to accomplish this task?
Can Hibernate help me do that?
• Is this a good way to do this task?
NO
• Is there a pattern to accomplish this task?
YES
• Can Hibernate help me do that?
Hibernate provides a solution for such situations using the cascade attribute on the List property.
Refer
http://docs.jboss.org/hibernate/orm/3.3/reference/en/html/collections.html
http://www.mkyong.com/hibernate/hibernate-cascade-example-save-update-delete-and-delete-orphan/
The entity below should solve your problem.
@Entity
public class MyEntity {

    private static enum Status {
        NEW,
        PERSISTENT,
        REMOVED
    }

    @Id
    private Long id;

    private String name;

    private int value;

    @Transient
    private Status uiStatus = Status.NEW;

    public Long getId() {
        return this.id;
    }

    public String getName() {
        return this.name;
    }

    public Status getUiStatus() {
        return this.uiStatus;
    }

    public int getValue() {
        return this.value;
    }

    @PostLoad
    public void onLoad() {
        this.uiStatus = Status.PERSISTENT;
    }

    public void setId(Long id) {
        this.id = id;
    }

    public void setName(String name) {
        this.name = name;
    }

    public void setUiStatus(Status uiStatus) {
        this.uiStatus = uiStatus;
    }

    public void setValue(int value) {
        this.value = value;
    }
}
I have an existing database of a film rental system. Each film has a rating attribute. In SQL they used a constraint to limit the allowed values of this attribute.
CONSTRAINT film_rating_check CHECK
((((((((rating)::text = ''::text) OR
((rating)::text = 'G'::text)) OR
((rating)::text = 'PG'::text)) OR
((rating)::text = 'PG-13'::text)) OR
((rating)::text = 'R'::text)) OR
((rating)::text = 'NC-17'::text)))
I think it would be nice to use a Java enum to map the constraint into the object world. But it's not possible to simply take the allowed values because of the special char in "PG-13" and "NC-17". So I implemented the following enum:
public enum Rating {
    UNRATED(""),
    G("G"),
    PG("PG"),
    PG13("PG-13"),
    R("R"),
    NC17("NC-17");

    private String rating;

    private Rating(String rating) {
        this.rating = rating;
    }

    @Override
    public String toString() {
        return rating;
    }
}
@Entity
public class Film {
    ..
    @Enumerated(EnumType.STRING)
    private Rating rating;
    ..
With the toString() method the direction enum -> String works fine, but String -> enum does not work. I get the following exception:
[TopLink Warning]: 2008.12.09
01:30:57.434--ServerSession(4729123)--Exception [TOPLINK-116] (Oracle
TopLink Essentials - 2.0.1 (Build b09d-fcs (12/06/2007))):
oracle.toplink.essentials.exceptions.DescriptorException Exception
Description: No conversion value provided for the value [NC-17] in
field [FILM.RATING]. Mapping:
oracle.toplink.essentials.mappings.DirectToFieldMapping[rating-->FILM.RATING]
Descriptor: RelationalDescriptor(de.fhw.nsdb.entities.Film -->
[DatabaseTable(FILM)])
cheers
timo
Have you tried storing the ordinal value instead? Storing the string value works fine if you don't have a separate String associated with the value:
@Enumerated(EnumType.ORDINAL)
You have a problem here, and that is the limited capabilities of JPA when it comes to handling enums. With enums you have two choices:
Store them as a number equalling Enum.ordinal(), which is a terrible idea (imho); or
Store them as a string equalling Enum.name(). Note: not toString(), as you might expect, especially since the default behaviour for Enum.toString() is to return name().
Personally I think the best option is (2).
Now you have a problem in that you're defining values that don't represent valid instance names in Java (namely, using a hyphen). So your choices are:
Change your data;
Persist String fields and implicitly convert them to or from enums in your objects; or
Use non-standard extensions like TypeConverters.
I would do them in that order (first to last) as an order of preference.
Someone suggested Oracle TopLink's converter, but you're probably using TopLink Essentials, the reference JPA 1.0 implementation, which is a subset of the commercial Oracle TopLink product.
As another suggestion, I'd strongly recommend switching to EclipseLink. It is a far more complete implementation than TopLink Essentials, and EclipseLink will be the reference implementation of JPA 2.0 when released (expected by JavaOne mid next year).
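The name() vs. toString() distinction is easy to verify in plain Java. This sketch mirrors the Rating enum from the question, reduced to a single constant for brevity:

```java
public class EnumNameDemo {

    // Mirrors the Rating enum above: constant name PG13, display string "PG-13".
    enum Rating {
        PG13("PG-13");

        private final String label;

        Rating(String label) { this.label = label; }

        @Override
        public String toString() { return label; }
    }

    public static void main(String[] args) {
        // EnumType.STRING persists name(), which ignores any toString() override.
        System.out.println(Rating.PG13.name());     // prints "PG13"
        System.out.println(Rating.PG13.toString()); // prints "PG-13"
        // Reading "PG-13" back then fails, because valueOf matches names only.
    }
}
```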
Sounds like you need to add support for a custom type:
Extending OracleAS TopLink to Support Custom Type Conversions
public enum Rating {
    UNRATED(""),
    G("G"),
    PG("PG"),
    PG13("PG-13"),
    R("R"),
    NC17("NC-17");

    private String rating;

    private static Map<String, Rating> ratings = new HashMap<String, Rating>();

    static {
        for (Rating r : EnumSet.allOf(Rating.class)) {
            ratings.put(r.toString(), r);
        }
    }

    private static Rating getRating(String rating) {
        return ratings.get(rating);
    }

    private Rating(String rating) {
        this.rating = rating;
    }

    @Override
    public String toString() {
        return rating;
    }
}
I don't know how to do the mappings in the annotated TopLink side of things however.
I don't know the internals of TopLink, but my educated guess is the following: it uses the Rating.valueOf(String s) method to map in the other direction. It is not possible to override valueOf(), so you must stick to the naming conventions of Java to allow a correct valueOf method.
public enum Rating {
    UNRATED,
    G,
    PG,
    PG_13,
    R,
    NC_17;

    public String getRating() {
        return name().replace("_", "-");
    }
}
getRating() produces the "human-readable" rating. Note that the "-" character is not allowed in an enum identifier.
Of course you will have to store the values in the DB as NC_17.
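With that naming convention the mapping can round-trip in both directions, since valueOf() works on the underscore names. A small sketch (the fromRating helper is illustrative, not part of the original answer):

```java
public class RatingRoundTrip {

    // Underscore in the constant name, hyphen in the human-readable form.
    enum Rating {
        G, PG, PG_13, R, NC_17;

        String getRating() {
            return name().replace("_", "-");
        }

        // Inverse mapping: turn "NC-17" back into NC_17 via valueOf().
        static Rating fromRating(String s) {
            return valueOf(s.replace("-", "_"));
        }
    }

    public static void main(String[] args) {
        System.out.println(Rating.NC_17.getRating());   // prints "NC-17"
        System.out.println(Rating.fromRating("PG-13")); // prints "PG_13"
    }
}
```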
The problem, I think, is that JPA was never conceived with the idea in mind that we could have a complex preexisting schema already in place.
I think there are two main shortcomings resulting from this, specific to enums:
The limitation of using name() and ordinal(): why not just mark a getter with @Id, the way we do with @Entity?
Enums usually have a representation in the database to allow association with all sorts of metadata, including a proper name, a descriptive name, maybe something with localization, etc. We need the ease of use of an enum combined with the flexibility of an entity.
Help my cause and vote on JPA_SPEC-47
Using your existing enum Rating, you can use an AttributeConverter.
@Converter(autoApply = true)
public class RatingConverter implements AttributeConverter<Rating, String> {

    @Override
    public String convertToDatabaseColumn(Rating rating) {
        if (rating == null) {
            return null;
        }
        return rating.toString();
    }

    @Override
    public Rating convertToEntityAttribute(String code) {
        if (code == null) {
            return null;
        }
        return Stream.of(Rating.values())
                .filter(c -> c.toString().equals(code))
                .findFirst()
                .orElseThrow(IllegalArgumentException::new);
    }
}
In JPA 2.0, a way to persist an enum using neither name() nor ordinal() is to wrap the enum in an @Embeddable class.
Assume we have the following enum, with a code value intended to be stored in the database :
public enum ECourseType {
    PACS004("pacs.004"), PACS008("pacs.008");

    private String code;

    ECourseType(String code) {
        this.code = code;
    }

    public String getCode() {
        return code;
    }
}
Please note that the code values could not be used as names for the enum since they contain dots. This remark justifies the workaround we are providing.
We can build an immutable class (as a value object) wrapping the code value of the enum with a static method from() to build it from the enum, like this :
@Embeddable
public class CourseType {

    private static Map<String, ECourseType> codeToEnumCache =
            Arrays.stream(ECourseType.values())
                  .collect(Collectors.toMap(e -> e.getCode(), e -> e));

    private String value;

    private CourseType() {}

    public static CourseType from(ECourseType en) {
        CourseType toReturn = new CourseType();
        toReturn.value = en.getCode();
        return toReturn;
    }

    public ECourseType getEnum() {
        return codeToEnumCache.get(value);
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (o == null || getClass() != o.getClass()) return false;
        CourseType that = (CourseType) o;
        return Objects.equals(value, that.value);
    }

    @Override
    public int hashCode() {
        return Objects.hash(value);
    }
}
Writing proper equals() and hashCode() implementations is important to ensure the "value object" aim of this class.
If needed, an equivalence method between CourseType and ECourseType may be added (but not mixed with equals()):
public boolean isEquiv(ECourseType eCourseType) {
    return Objects.equals(eCourseType, getEnum());
}
This class can now be embedded in an entity class:
@Entity
public class Course {

    @Id
    @GeneratedValue
    @Column(name = "COU_ID")
    private Long pk;

    @Basic
    @Column(name = "COURSE_NAME")
    private String name;

    @Embedded
    @AttributeOverrides({
        @AttributeOverride(name = "value", column = @Column(name = "COURSE_TYPE")),
    })
    private CourseType type;

    public void setType(CourseType type) {
        this.type = type;
    }

    public void setType(ECourseType type) {
        this.type = CourseType.from(type);
    }
}
Please note that the setter setType(ECourseType type) has been added for convenience. A similar getter could be added to get the type as ECourseType.
Using this modeling, hibernate generates (for H2 db) the following SQL table :
CREATE TABLE "PUBLIC"."COU_COURSE"
(
COU_ID bigint PRIMARY KEY NOT NULL,
COURSE_NAME varchar(255),
COURSE_TYPE varchar(255)
)
;
The "code" values of the enum will be stored in the COURSE_TYPE column.
And the Course entities can be searched with a query as simple as this:
public List<Course> findByType(CourseType type) {
    manager.clear();
    Query query = manager.createQuery("from Course c where c.type = :type");
    query.setParameter("type", type);
    return (List<Course>) query.getResultList();
}
Conclusion:
This shows how to persist an enum using neither the name nor the ordinal, while ensuring a clean modelling of the entity relying on it.
This can be particularly useful for legacy databases where the values stored in the DB are not compliant with the Java syntax of enum names and ordinals.
It also allows refactoring the enum names without having to change the values in the DB.
What about this:
public String getRating() {
    return rating.toString();
}

public void setRating(String rating) {
    // parse the rating string into the Rating enum;
    // JPA will use this setter to set the value when reading data from the DB
}

@Transient
public Rating getRatingValue() {
    return rating;
}

@Transient
public void setRatingValue(Rating rating) {
    this.rating = rating;
}
With this you use the rating as a String both in your DB and entity, but use the enum for everything else.
Use this annotation:
@Column(columnDefinition = "ENUM('User', 'Admin')")
Enum
public enum ParentalControlLevelsEnum {
    U("U"), PG("PG"), _12("12"), _15("15"), _18("18");

    private final String value;

    ParentalControlLevelsEnum(final String value) {
        this.value = value;
    }

    public String getValue() {
        return value;
    }

    public static ParentalControlLevelsEnum fromString(final String value) {
        for (ParentalControlLevelsEnum level : ParentalControlLevelsEnum.values()) {
            if (level.getValue().equalsIgnoreCase(value)) {
                return level;
            }
        }
        return null;
    }
}
compare -> Enum
public class RatingComparator implements Comparator<ParentalControlLevelsEnum> {
    public int compare(final ParentalControlLevelsEnum o1, final ParentalControlLevelsEnum o2) {
        return Integer.compare(o1.ordinal(), o2.ordinal());
    }
}
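Worth noting: enums already implement Comparable using declaration (ordinal) order, so for sorting by level a custom comparator is not strictly required. A small sketch using the same level names as the enum above (LevelSortDemo and its helper are illustrative):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class LevelSortDemo {

    // Declaration order defines ordinal order, so U < PG < _12 < _15 < _18.
    enum Level { U, PG, _12, _15, _18 }

    // Sort a copy using the enum's natural (ordinal) ordering.
    static List<Level> sorted(List<Level> in) {
        List<Level> out = new ArrayList<>(in);
        Collections.sort(out);
        return out;
    }

    public static void main(String[] args) {
        List<Level> levels = Arrays.asList(Level._15, Level.U, Level.PG);
        System.out.println(sorted(levels)); // prints "[U, PG, _15]"
    }
}
```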
Resolved!
Where I found the answer: http://programming.itags.org/development-tools/65254/
Briefly, the conversion looks for the name of the enum, not the value of the 'rating' attribute.
In your case: if you have the value "NC-17" in the db, you would need to have in your enum:
enum Rating {
(...)
NC-17 ( "NC-17" );
(...)