Enum saving in Hibernate - java

I am trying to save an enum field to the database, but I am having a problem mapping the value to the field. The code I have is as follows:
public enum InvoiceStatus {
    PAID,
    UNPAID;
}
and I am using this enum in one of my application classes as follows:
public class Invoice {
    @Enumerated(EnumType.ORDINAL)
    @Column(name = "INVOICE_STATUS", nullable = false, unique = false)
    private InvoiceStatus invoiceStatus;
}
Finally, I let the app user select the invoice status in the view (JSP) using a drop-down menu.
But I am not sure how to map the value received from the drop-down selection to the invoice status field.
I tried mapping the value received to short as follows, but it won't compile:
invoice.setInvoiceStatus(Short.parseShort(request.getParameter("inbStatus")));
Can someone please tell me how to map the data received from the view to the enum field?

Enum ordinal values are zero-based indexes. In your case:
PAID = 0
UNPAID = 1
So the following code will return PAID:
int invoiceStatus = 0;
invoice.setInvoiceStatus(InvoiceStatus.values()[invoiceStatus]);
And the following code will return UNPAID:
int invoiceStatus = 1;
invoice.setInvoiceStatus(InvoiceStatus.values()[invoiceStatus]);
That means you should be able to do it this way:
short invoiceStatus = Short.parseShort(request.getParameter("inbStatus"));
invoice.setInvoiceStatus(InvoiceStatus.values()[invoiceStatus]);
But only if inbStatus is 0 or 1. You should always validate user input for null and invalid values, for example along the lines of the sketch below.
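A minimal sketch of that validation, assuming the request parameter name inbStatus from the question; the helper name parseInvoiceStatus is made up for illustration:
static InvoiceStatus parseInvoiceStatus(String rawValue) {
    // Hypothetical helper, not part of the original answer: parse the drop-down
    // value defensively and fall back to UNPAID on missing or invalid input.
    if (rawValue == null) {
        return InvoiceStatus.UNPAID; // or throw, depending on your requirements
    }
    try {
        int ordinal = Integer.parseInt(rawValue.trim());
        InvoiceStatus[] values = InvoiceStatus.values();
        if (ordinal >= 0 && ordinal < values.length) {
            return values[ordinal];
        }
    } catch (NumberFormatException ignored) {
        // fall through to the default below
    }
    return InvoiceStatus.UNPAID;
}
It would then be called like invoice.setInvoiceStatus(parseInvoiceStatus(request.getParameter("inbStatus")));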

I see that you are using
@Enumerated(EnumType.ORDINAL)
However, after a while this can become quite difficult to troubleshoot as your enum grows. Another issue with the ordinal mapping is that you could refactor your code and change the order of the enum values, and after that you could be in trouble, mainly if it is a shared codebase and someone decides to clean up the code and "group the relevant enum constants together". If you use:
@Enumerated(EnumType.STRING)
the enum name will be inserted into the database directly (therefore you need a varchar column). If you want to present a more user-friendly version of your enum, you could have something like:
import java.util.HashMap;
import java.util.Map;

public enum InvoiceStatus {
    PAID(0, "Paid"), UNPAID(1, "Unpaid"), FAILED(2, "Failed"), PENDING(3, "Pending");

    private final int st;
    private final String uiLabel;

    private InvoiceStatus(int st, String uiLabel) {
        this.st = st;
        this.uiLabel = uiLabel;
    }

    public String getUiLabel() {
        return uiLabel;
    }

    private static final Map<String, InvoiceStatus> uiLabelMap = new HashMap<String, InvoiceStatus>();

    static {
        for (InvoiceStatus status : values()) {
            uiLabelMap.put(status.getUiLabel(), status);
        }
    }

    /** Returns the appropriate enum based on the String representation used in UI forms. */
    public static InvoiceStatus fromUiLabel(String uiLabel) {
        return uiLabelMap.get(uiLabel); // plus some tweaks (null check or whatever)
    }

    //
    // Same logic for the ORDINAL if you are keen to use it
    //
}
This could also be a solution for your problem; however, I would really not use the ORDINAL-based mapping. But that is just personal feeling.

Hazelcast not working correctly with SqlPredicate and Index on optional field

We are storing complex objects in Hazelcast maps and need the possibility to search for objects not only based on the key but also on the content of these complex objects. In order to not take too large a performance hit, we are using indices on those search terms.
We are also using spring-data-hazelcast, which provides repositories that allow us to use findByAbcXyz() type semantic queries. For some of the more complex queries we are using the @Query annotation (which spring-data-hazelcast internally translates to SqlPredicates).
We have now encountered an issue where under certain situations these #Query based search methods did not return any values, even if we could verify that the searched objects did in fact exist in the map.
I have managed to reproduce this issue with core hazelcast (i.e. without the use of spring-data-hazelcast).
Here is our object structure:
BetriebspunktKey.java
public class BetriebspunktKey implements Serializable {

    private Integer uicLand;
    private Integer nummer;

    public BetriebspunktKey(final Integer uicLand, final Integer nummer) {
        this.uicLand = uicLand;
        this.nummer = nummer;
    }

    public Integer getUicLand() {
        return uicLand;
    }

    public Integer getNummer() {
        return nummer;
    }
}
Betriebspunkt.java
public class Betriebspunkt implements Serializable {

    private BetriebspunktKey key;
    private List<BetriebspunktVersion> versionen;

    public Betriebspunkt(final BetriebspunktKey key, final List<BetriebspunktVersion> versionen) {
        this.key = key;
        this.versionen = versionen;
    }

    public BetriebspunktKey getKey() {
        return key;
    }

    public List<BetriebspunktVersion> getVersionen() {
        return versionen;
    }
}
BetriebspunktVersion.java
public class BetriebspunktVersion implements Serializable {

    private List<BetriebspunktKey> zusatzbetriebspunkte;

    public BetriebspunktVersion(final List<BetriebspunktKey> zusatzbetriebspunkte) {
        this.zusatzbetriebspunkte = zusatzbetriebspunkte;
    }

    public List<BetriebspunktKey> getZusatzbetriebspunkte() {
        return zusatzbetriebspunkte;
    }
}
In my main file, I am now setting up hazelcast:
Config config = new Config();
final MapConfig mapConfig = config.getMapConfig("points");
mapConfig.addMapIndexConfig(new MapIndexConfig("versionen[any].zusatzbetriebspunkte[any].nummer", false));
HazelcastInstance instance = Hazelcast.newHazelcastInstance(config);
IMap<BetriebspunktKey, Betriebspunkt> map = instance.getMap("points");
I am also preparing my search criteria for later on:
Predicate equalPredicate = Predicates.equal("versionen[any].zusatzbetriebspunkte[any].nummer", 53090);
Predicate sqlPredicate = new SqlPredicate("versionen[any].zusatzbetriebspunkte[any].nummer=53090");
Next, I am creating two objects, one with the "full depth" of information, the other does not contain any "zusatzbetriebspunkte":
final Betriebspunkt abc = new Betriebspunkt(
new BetriebspunktKey(80, 166),
Collections.singletonList(new BetriebspunktVersion(
Collections.singletonList(new BetriebspunktKey(80, 53090))
))
);
final Betriebspunkt def = new Betriebspunkt(
new BetriebspunktKey(83, 141),
Collections.singletonList(new BetriebspunktVersion(
Collections.emptyList()
))
);
Here is where things become interesting. If I first insert the "full" object into the map, the search works using both the EqualPredicate and the SqlPredicate:
map.put(abc.getKey(), abc);
map.put(def.getKey(), def);
Collection<Betriebspunkt> equalResults = map.values(equalPredicate);
Collection<Betriebspunkt> sqlResults = map.values(sqlPredicate);
assertEquals(1, equalResults.size()); // contains "abc"
assertEquals(1, sqlResults.size()); // contains "abc"
However, if I insert the objects into my map in reverse order (i.e. first the "partial" object and then the "full" one), only the EqualPredicate works correctly, the SqlPredicate returns an empty list, no matter what the content of the map or the search criteria.
map.put(def.getKey(), def);
map.put(abc.getKey(), abc);
Collection<Betriebspunkt> equalResults = map.values(equalPredicate);
Collection<Betriebspunkt> sqlResults = map.values(sqlPredicate);
assertEquals(1, equalResults.size()); // contains "abc"
assertEquals(1, sqlResults.size()); // --> this fails, it returns an empty list
What is the reason for this behaviour? It looks like a bug in the hazelcast code.
The reason for failing
After a lot of debugging, I have found the reason for this issue. The reason can indeed be found in the Hazelcast code.
When putting a value into a hazelcast map, DefaultRecordStore.putInternal is called. At the end of this method DefaultRecordStore.saveIndex is called, which finds the corresponding indexes and then calls Indexes.saveEntryIndex. This method iterates over each index and calls InternalIndex.saveEntryIndex (or rather its implementation, IndexImpl.saveEntryIndex). The interesting part of that method is the following lines:
if (this.converter == null || this.converter == TypeConverters.NULL_CONVERTER) {
this.converter = entry.getConverter(this.attributeName);
}
Apparently each index stores a converter class when the first element is put into the map. Looking at QueryableEntry.getConverter explains what happens:
TypeConverter getConverter(String attributeName) {
Object attribute = this.getAttributeValue(attributeName);
if (attribute == null) {
return TypeConverters.NULL_CONVERTER;
} else {
AttributeType attributeType = this.extractAttributeType(attributeName, attribute);
return attributeType == null ? TypeConverters.IDENTITY_CONVERTER : attributeType.getConverter();
}
}
When first inserting the "full" object, extractAttributeType() will follow the "path" of our index definition "versionen[any].zusatzbetriebspunkte[any].nummer" and find out that nummer is an integer type, accordingly a TypeConverters.IntegerConverter will be returned and stored.
When first inserting the "partial" object, "zusatzbetriebspunkte[any]" is emtpy, and there is no way for extractAttributeType to find out what type nummer hast, it therefore returns null which means that TypeConverters.IdentityConverter is used.
Also, whenever a "full" element is inserted an entry is written into the index map using nummer as key, i.e. the index-map is of type Map.
So much for writing to the map. Let's now look at how data is read from the map. When calling map.values(predicate) we will eventually get to QueryRunner.runUsingGlobalIndexSafely which contains a line:
Collection<QueryableEntry> entries = indexes.query(predicate);
This will in turn, after some boilerplate code, call
Set<QueryableEntry> result = indexAwarePredicate.filter(queryContext);
For both of our predicates we will eventually get to IndexImpl.getRecords() which looks as follows:
public Set<QueryableEntry> getRecords(Comparable attributeValue) {
long timestamp = this.stats.makeTimestamp();
if (this.converter == null) {
this.stats.onIndexHit(timestamp, 0L);
return new SingleResultSet((Map)null);
} else {
Set<QueryableEntry> result = this.indexStore.getRecords(this.convert(attributeValue));
this.stats.onIndexHit(timestamp, (long)result.size());
return result;
}
}
The crucial call is this.convert(attributeValue) where attributeValue is the value of the predicate.
If we compare our two predicates, we can see that the EqualPredicate has two members:
attributeName = "versionen[any].zusatzbetriebspunkte[any].nummer"
value = {Integer} 53090
The SqlPredicate contains the initial string (which we passed to its constructor), but at construction time this string was also parsed and mapped to an internal EqualPredicate (which is eventually used and passed to getRecords() above when the predicate is evaluated):
sql = "versionen[any].zusatzbetriebspunkte[any].nummer=53090"
predicate = {EqualPredicate}
attributeName = "versionen[any].zusatzbetriebspunkte[any].nummer"
value = {String} "53090"
And this explains why the manually created EqualPredicate works in both cases: Its value is an integer. When passed to the converter, it does not matter whether it is the IntegerConverter or the IdentityConverter, as both will return the integer which can then be used as key in the index-map (which uses an integer as key).
With the SqlPredicate however, the value is a String. If this is passed to the IntegerConverter, it is converted to its corresponding integer value and accessing the index-map works. If it is passed to the IdentityConverter, the string is returned by the conversion and trying to access the index-map with a string will never find any results.
A possible solution
How can we solve this issue? I see several possibilities:
insert a "fully built" dummy value into our map during startup to ensure the converter is correctly initialised. While this works, it is ugly and not maintenance friendly
avoid using SqlPredicate and use the integer-based EqualPredicate. This is not an option when working with spring-data-hazelcast, as it always converts @Query based searches to SqlPredicates. We could of course use hazelcast directly and circumvent the spring-data wrapper, but while that would work, it means having two ways of accessing hazelcast, which is also not very maintainable
use hazelcast's ValueExtractor class. This is the elegant solution that works both natively and using spring-data-hazelcast. I will outline what that looks like:
First we need to implement a value extractor which returns all zusatzbetriebspunkte of our Betriebspunkt in a form suitable for us
public class BetriebspunktExtractor extends ValueExtractor<Betriebspunkt, String> implements Serializable {

    @Override
    public void extract(final Betriebspunkt betriebspunkt, final String argument, final ValueCollector valueCollector) {
        betriebspunkt.getVersionen().stream()
                .map(BetriebspunktVersion::getZusatzbetriebspunkte)
                .flatMap(List::stream)
                .map(zbp -> zbp.getUicLand() + "_" + zbp.getNummer())
                .forEach(valueCollector::addObject);
    }
}
You'll notice that I am not only returning the nummer field but also including the uicLand field; this is something we really wanted but couldn't get working using the "...[any]..." notation. We could of course return only the nummer if we wanted exactly the same behaviour as outlined above.
Now we need to modify our hazelcast configuration slightly:
Config config = new Config();
final MapConfig mapConfig = config.getMapConfig("points");
//mapConfig.addMapIndexConfig(new MapIndexConfig("versionen[any].zusatzbetriebspunkte[any].nummer", false));
mapConfig.addMapIndexConfig(new MapIndexConfig("zusatzbetriebspunkt", false));
mapConfig.addMapAttributeConfig(new MapAttributeConfig("zusatzbetriebspunkt", BetriebspunktExtractor.class.getName()));
You'll notice that the "long" index definition using the "...[any]..." notation is no longer needed.
Now we can use this "pseudo attribute" to query our values and it doesn't matter in which order the objects have been added to the map:
Predicate keyPredicate = Predicates.equal("zusatzbetriebspunkt", "80_53090");
Collection<Betriebspunkt> keyResults = map.values(keyPredicate);
assertEquals(1, keyResults.size()); // always contains "abc"
And in our spring-data-hazelcast repository we can now do this:
@Query("zusatzbetriebspunkt=%d_%d")
List<StammdatenBetriebspunkt> findByZusatzbetriebspunkt(Integer uicLand, Integer nummer);
If you do not need to use spring-data-hazelcast, instead of returning a string to the ValueCollector, you could return the BetriebspunktKey directly and then use it in the predicate as well. That would be the cleanest solution:
public class BetriebspunktExtractor extends ValueExtractor<Betriebspunkt, String> implements Serializable {

    @Override
    public void extract(final Betriebspunkt betriebspunkt, final String argument, final ValueCollector valueCollector) {
        betriebspunkt.getVersionen().stream()
                .map(BetriebspunktVersion::getZusatzbetriebspunkte)
                .flatMap(List::stream)
                //.map(zbp -> zbp.getUicLand() + "_" + zbp.getNummer())
                .forEach(valueCollector::addObject);
    }
}
and then
Predicate keyPredicate = Predicates.equal("zusatzbetriebspunkt", new BetriebspunktKey(80, 53090));
However, for this to work, BetriebspunktKey needs to implement Comparable and must also provide its own equals and hashCode methods.
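For completeness, a minimal sketch of what that could look like; this is my assumption based on the fields shown above, not code from the project:
import java.io.Serializable;
import java.util.Objects;

public class BetriebspunktKey implements Serializable, Comparable<BetriebspunktKey> {

    private Integer uicLand;
    private Integer nummer;

    // constructor and getters as shown earlier

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof BetriebspunktKey)) return false;
        BetriebspunktKey other = (BetriebspunktKey) o;
        return Objects.equals(uicLand, other.uicLand) && Objects.equals(nummer, other.nummer);
    }

    @Override
    public int hashCode() {
        return Objects.hash(uicLand, nummer);
    }

    @Override
    public int compareTo(BetriebspunktKey other) {
        int byLand = uicLand.compareTo(other.uicLand);
        return byLand != 0 ? byLand : nummer.compareTo(other.nummer);
    }
}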

How to Iterate Datatable with type List<Class> in Cucumber

I have the below feature file with a Given annotation:
Given user have below credentials
|user |password |
|cucumber1 |cucumber |
|cucumber2 |cucumber |
And I created the below data model:
public class DataModel {
    public String user;
    public String password;
}
I'm trying to fetch the data into the Cucumber step definition as below:
public class StepDefinition {

    @Given("^user have below credentials$")
    public void user_have_below_credintials(List<DataModel> dm) {
        // an iterator or foreach is required to fetch row/column data from dm
    }
}
Please help me with how I can iterate over the object 'dm' to get the row and column values.
// The old way
for (int i = 0; i < dm.size(); i++) {
DataModel aDataModel = dm.get(i);
String username = aDataModel.user;
String password = aDataModel.password;
}
// A better way if java5+
for (DataModel aDataModel : dm) {
String username = aDataModel.user;
String password = aDataModel.password;
}
// another way if java8+
dm.forEach(aDataModel -> {
String username = aDataModel.user;
String password = aDataModel.password;
});
Note that the variables won't be available outside the loop with the way I wrote it. Just a demonstration of iterating and accessing the properties of each DataModel in your list.
A thing to keep in mind is that you're describing your list of DataModel objects as a data table. But it's not a table; it's simply a collection of values contained in an object, of which you have a list. You may be displaying it, or choosing to conceptualize it, as a data table in your head, but the model your code describes isn't that, which means you aren't going to iterate through it quite like a table. Once you access a "row", the "columns" have no defined order; you may access them in any order you want to the same effect.
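If you really do want row/column style access, one option (my addition, not part of the original answer) is to take Cucumber's DataTable as the step argument instead of a typed list, roughly like this:
// Sketch assuming a recent Cucumber version where the table type is
// io.cucumber.datatable.DataTable (older versions use cucumber.api.DataTable);
// Map is java.util.Map.
@Given("^user have below credentials$")
public void user_have_below_credentials(DataTable table) {
    // each map is one row, keyed by the header cells "user" and "password"
    for (Map<String, String> row : table.asMaps(String.class, String.class)) {
        String username = row.get("user");
        String password = row.get("password");
    }
}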

Java class: limit instance variable to one of several possible values, depending on other instance variables

I am sorry for the vague question. I am not sure what I'm looking for here.
I have a Java class, let's call it Bar. In that class is an instance variable, let's call it foo. foo is a String.
foo cannot just have any value. There is a long list of strings, and foo must be one of them.
Then, for each of those strings in the list I would like the possibility to set some extra conditions as to whether that specific foo can belong in that specific type of Bar (depending on other instance variables in that same Bar).
What approach should I take here? Obviously, I could put the list of strings in a static class somewhere and upon calling setFoo(String s) check whether s is in that list. But that would not allow me to check for extra conditions - or I would need to put all that logic for every value of foo in the same method, which would get ugly quickly.
Is the solution to make several hundred classes for every possible value of foo and insert in each the respective (often trivial) logic to determine what types of Bar it fits? That doesn't sound right either.
What approach should I take here?
Here's a more concrete example, to make it more clear what I am looking for. Say there is a Furniture class, with a variable material, which can be lots of things, anything from mahogany to plywood. But there is another variable, upholstery, and you can make furniture containing cotton of plywood but not oak; satin furniture of oak but not walnut; other types of fabric go well with any material; et cetera.
I wouldn't suggest creating multiple classes/templates for such a big use case. This is very opinion-based, but I'll take a shot at answering as best as I can.
In such a case, where your options can be numerous and you want to keep a maintainable code base, the best solution is to separate the values from the logic. I recommend that you store your foo values in a database. At the same time, keep your client code as clean and small as possible, so that it doesn't need to filter through the data to figure out which data is valid. You want to minimize your code's dependency on data. Think of it this way: tomorrow you might need to add a new material to your material list. Do you want to modify all your code for that? Or do you want to just add it to your database and have everything magically work? Obviously the latter is the better option. Here is an example of how to design such a system. Of course, this can vary based on your use case or variables, but it is a good guideline. The basic rule of thumb is: your code should have as little dependency on data as possible.
Let's say you want to create a Bar which has to have a certain foo. In this case, I would create a BARS table which contains all the possible Bars. Example:
ID NAME FOO
1 Door 1,4,10
I will also create a FOOS table which contains the details of each foo. For example:
ID NAME PROPERTY1 PROPERTY2 ...
1 Oak Brown Soft
When you create a Bar:
Bar door = new Bar(Bar.DOOR);
in the constructor you would go to the BARS table and query the foos. Then you would query the FOOS table, load all the materials, and assign them to the field inside your new object.
This way, whenever you create a Bar, the material can be changed and loaded from the DB without changing any code. You can add as many types of Bar as you want and change material properties as you go. Your client code, however, doesn't change much.
You might ask why we create a separate FOOS table and refer to its ids in the BARS table. This way, you can modify the properties of each foo as much as you want. You can also share foos between Bars and vice versa, and you only need to change the db once; cross-referencing becomes a breeze. I hope this example explains the idea clearly. A rough sketch of what such a constructor could look like follows below.
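A rough, purely illustrative sketch of such a constructor; the table and column names and the plain JDBC access are my assumptions, not part of the original answer:
public Bar(DataSource dataSource, int barId) throws SQLException {
    // Hypothetical sketch: read the comma-separated foo ids from the BARS row,
    // then load each foo's name from the FOOS table. Uses javax.sql.DataSource
    // and java.sql.* only, to keep the example free of any particular ORM.
    try (Connection con = dataSource.getConnection()) {
        String fooIds;
        try (PreparedStatement ps = con.prepareStatement("SELECT foo FROM bars WHERE id = ?")) {
            ps.setInt(1, barId);
            try (ResultSet rs = ps.executeQuery()) {
                rs.next();
                fooIds = rs.getString("foo"); // e.g. "1,4,10"
            }
        }
        for (String fooId : fooIds.split(",")) {
            try (PreparedStatement ps = con.prepareStatement("SELECT name FROM foos WHERE id = ?")) {
                ps.setInt(1, Integer.parseInt(fooId.trim()));
                try (ResultSet rs = ps.executeQuery()) {
                    if (rs.next()) {
                        this.allowedFoos.add(rs.getString("name")); // allowedFoos being a List<String> field on Bar
                    }
                }
            }
        }
    }
}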
You say:
Is the solution to make several hundred classes for every possible
value of foo and insert in each the respective (often trivial) logic
to determine what types of Bar it fits? That doesn't sound right
either.
Why not have separate classes for each type of Foo? Unless you need to define new types of Foo without changing the code, you can model them as plain Java classes. You can go with enums as well, but that does not really give you any advantage, since you still need to update the enum when adding a new type of Foo.
In any case, here is a type-safe approach that guarantees compile-time checking of your rules:
public static interface Material {}
public static interface Upholstery {}

public static class Oak implements Material {}
public static class Plywood implements Material {}
public static class Cotton implements Upholstery {}
public static class Satin implements Upholstery {}

public static class Furniture<M extends Material, U extends Upholstery> {
    private M material = null;
    private U upholstery = null;

    public Furniture(M material, U upholstery) {
        this.material = material;
        this.upholstery = upholstery;
    }

    public M getMaterial() {
        return material;
    }

    public U getUpholstery() {
        return upholstery;
    }
}

public static Furniture<Plywood, Cotton> cottonFurnitureWithPlywood(Plywood plywood, Cotton cotton) {
    return new Furniture<>(plywood, cotton);
}

public static Furniture<Oak, Satin> satinFurnitureWithOak(Oak oak, Satin satin) {
    return new Furniture<>(oak, satin);
}
It depends on what you really want to achieve. Creating objects and passing them around will not magically solve your domain-specific problems.
If you cannot think of any real behavior to add to your objects (except the validation), then it might make more sense to just store your data and read them into memory whenever you want. Even treat rules as data.
Here is an example:
public class Furniture {
    String name;
    Material material;
    Upholstery upholstery;
    // getters, setters, other behavior

    public Furniture(String name, Material m, Upholstery u) {
        // Read rule files from memory or disk and do all the checks.
        // Do not instantiate if validation does not pass.
        this.name = name;
        material = m;
        upholstery = u;
    }
}
To specify rules, you will then create three plain-text files (e.g. using CSV format). File 1 will contain valid values for material, file 2 will contain valid values for upholstery, and file 3 will have a matrix format like the following:
upholstery\material plywood mahogany oak
cotton 1 0 1
satin 0 1 0
To check whether a material goes with an upholstery or not, just check the corresponding row and column; a small sketch of that check follows below.
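A minimal sketch of loading such a matrix file and checking a combination; the exact file format handling here is my assumption for illustration:
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Reads a whitespace-separated matrix like the one above (first row lists the
// materials, first column the upholsteries, cells are 1/0) and answers whether
// a combination is allowed.
public class CompatibilityMatrix {

    private final Set<String> allowed = new HashSet<>();

    public CompatibilityMatrix(Path file) throws IOException {
        List<String> lines = Files.readAllLines(file);
        String[] materials = lines.get(0).trim().split("\\s+"); // header row
        for (String line : lines.subList(1, lines.size())) {
            String[] cells = line.trim().split("\\s+");
            String upholstery = cells[0];
            for (int i = 1; i < cells.length; i++) {
                if ("1".equals(cells[i])) {
                    allowed.add(materials[i] + ":" + upholstery);
                }
            }
        }
    }

    public boolean isValid(String material, String upholstery) {
        return allowed.contains(material + ":" + upholstery);
    }
}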
Alternatively, if you have lots of data, you can opt for a database system along with an ORM. Rule tables can then be join tables and come with the extra nice features a DBMS may provide (like easy checking for duplicate values). The validation table could look something like:
MaterialID UpholsteryID Compatibility_Score
plywood cotton 1
oak satin 0
The advantage of using this approach is that you quickly get a working application and you can decide what to do as you add new behavior to your application. And even if it gets way more complex in the future (new rules, new data types, etc) you can use something like the repository pattern to keep your data and business logic decoupled.
Notes about Enums:
Although the solution suggested by @Igwe Kalu solves the specific case described in the question, it is not scalable. What if you want to find what material goes with a given upholstery (the reverse case)? You would need to create another enum, which does not add anything meaningful to the program, or add complex logic to your application.
This is a more detailed description of the idea I threw out there in the comment:
Keep Furniture a POJO, i.e., just hold the data, no behavior or rules implemented in it.
Implement the rules in separate classes, something along the lines of:
interface FurnitureRule {
    void validate(Furniture furniture) throws FurnitureRuleException;
}

class ValidMaterialRule implements FurnitureRule {
    // this you can load in whatever way suitable in your architecture -
    // from enums, DB, an XML file, a JSON file, or inject via Spring, etc.
    private Set<String> validMaterialNames;

    @Override
    public void validate(Furniture furniture) throws FurnitureRuleException {
        if (!validMaterialNames.contains(furniture.getMaterial()))
            throw new FurnitureRuleException("Invalid material " + furniture.getMaterial());
    }
}

class UpholsteryRule implements FurnitureRule {
    // Again, however suitable to implement/configure this
    private Map<String, Set<String>> validMaterialsPerUpholstery;

    @Override
    public void validate(Furniture furniture) throws FurnitureRuleException {
        Set<String> validMaterialNames = validMaterialsPerUpholstery.get(furniture.getUpholstery());
        if (validMaterialNames != null && !validMaterialNames.contains(furniture.getMaterial()))
            throw new FurnitureRuleException("Invalid material " + furniture.getMaterial() + " for upholstery " + furniture.getUpholstery());
    }
}

// and more complex rules if you need to
// and more complex rules if you need to
Then have some service along the lines of FurnitureManager. It's the "gatekeeper" for all Furniture creation/updates:
class FurnitureManager {
    // configure these via e.g. Spring.
    private List<FurnitureRule> rules;

    public void updateFurniture(Furniture furniture) throws FurnitureRuleException {
        for (FurnitureRule rule : rules) {
            rule.validate(furniture);
        }
        // proceed to persist `furniture` in the database or whatever else you do with a valid piece of furniture.
    }
}
material should be of type Enum.
public enum Material {
MAHOGANY,
TEAK,
OAK,
...
}
Furthermore, you can have a validator for Furniture that contains the logic for which combinations of material and upholstery make sense, and then call that validator in every method that can change the material or upholstery variable (typically only your setters).
public class Furniture {
    private Material material;
    private Upholstery upholstery; // Could also be String depending on your needs of course

    public void setMaterial(Material material) {
        if (FurnitureValidator.isValidCombination(material, this.upholstery)) {
            this.material = material;
        }
    }

    ...

    private static class FurnitureValidator {
        private static boolean isValidCombination(Material material, Upholstery upholstery) {
            switch (material) {
                case MAHOGANY: return upholstery != Upholstery.COTTON;
                // and so on
                default: return true;
            }
        }
    }
}
We are often oblivious of the power inherent in enum types. The Java™ Tutorials clearly state that "you should use enum types any time you need to represent a fixed set of constants."
How do you make the best of enums in resolving the challenge you presented? Here goes:
import java.util.Arrays;

public enum Material {
    MAHOGANY( "satin", "velvet" ),
    PLYWOOD( "leather" ),
    // possibly many other materials and their matching fabrics...
    OAK( "some other fabric - 0" ),
    WALNUT( "some other fabric - 0", "some other fabric - 1" );

    private final String[] listOfSuitingFabrics;

    Material( String... fabrics ) {
        this.listOfSuitingFabrics = fabrics;
    }

    String[] getListOfSuitingFabrics() {
        return Arrays.copyOf( listOfSuitingFabrics, listOfSuitingFabrics.length );
    }

    public String toString() {
        return name().substring( 0, 1 ) + name().substring( 1 ).toLowerCase();
    }
}
Let's test it:
import java.util.Arrays;

public class TestMaterial {
    public static void main( String[] args ) {
        for ( Material material : Material.values() ) {
            System.out.println( material.toString() + " goes well with " + Arrays.toString( material.getListOfSuitingFabrics() ) );
        }
    }
}
Probably the approach I'd use (because it involves the least amount of code and it's reasonably fast) is to "flatten" the hierarchical logic into a one-dimensional Set of allowed value combinations. Then when setting one of the fields, validate that the proposed new combination is valid. I'd probably just use a Set of concatenated Strings for simplicity. For the example you give above, something like this:
class Furniture {
    private String wood;
    private String upholstery;

    /**
     * Set of all acceptable values, with each combination as a String.
     * Example value: "plywood:cotton"
     */
    private static final Set<String> allowed = new HashSet<>();

    /**
     * Load allowed values in initializer.
     *
     * TODO: load allowed values from DB or config file
     * instead of hard-wiring.
     */
    static {
        allowed.add("plywood:cotton");
        ...
    }

    public void setWood(String wood) {
        if (!allowed.contains(wood + ":" + this.upholstery)) {
            throw new IllegalArgumentException("bad combination of materials!");
        }
        this.wood = wood;
    }

    public void setUpholstery(String upholstery) {
        if (!allowed.contains(this.wood + ":" + upholstery)) {
            throw new IllegalArgumentException("bad combination of materials!");
        }
        this.upholstery = upholstery;
    }

    public void setMaterials(String wood, String upholstery) {
        if (!allowed.contains(wood + ":" + upholstery)) {
            throw new IllegalArgumentException("bad combination of materials!");
        }
        this.wood = wood;
        this.upholstery = upholstery;
    }

    // getters
    ...
}
The disadvantage of this approach compared to other answers is that there is no compile-time type checking. For example, if you try to set the wood to plywoo instead of plywood you won’t know about your error until runtime. In practice this disadvantage is negligible since presumably the options will be chosen by a user through a UI (or through some other means), so you won’t know what they are until runtime anyway. Plus the big advantage is that the code will never have to be changed so long as you’re willing to maintain a list of allowed combinations externally. As someone with 30 years of development experience, take my word for it that this approach is far more maintainable.
With the above code, you'll need to use setMaterials before using setWood or setUpholstery, since the other field will still be null and therefore not part of an allowed combination. You can initialize the class's fields with default materials to avoid this if you want, for example with a constructor along the lines of the sketch below.
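A small sketch of that default initialization; the specific default combination is my assumption for illustration:
public Furniture() {
    // Hypothetical default constructor: start from a combination that is known to be
    // in the allowed set above, so the individual setters can be used right away.
    this.wood = "plywood";
    this.upholstery = "cotton";
}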

SpringData/Hibernate: Sort enum by its field

I have a small problem with sorting enums by a custom rule with Spring Data/Hibernate.
I have an enum called, let's say, DeviceState, which has its own priority field.
public enum DeviceState {
    ON(4), OFF(3), UPDATE(2), CRITICAL(0), WARNING(1);

    private final int priority;

    private DeviceState(int priority) {
        this.priority = priority;
    }

    public int getPriority() {
        return priority;
    }
}
The enum is used by the Device entity:
@Entity
public class Device implements Serializable {

    // .... other fields

    @Enumerated(EnumType.STRING)
    private DeviceState state;

    // ....
}
I use Spring Data repositories to get entities from the database; I have used the findAll(specification, pageable) method from org.springframework.data.repository.Repository.
Page<Device> pagedDevices = deviceRepository.findAll(DeviceSpecifications.findByFilter(filter), pageable);
And if my request contains sort=state,asc it will return the Devices sorted by state alphabetically, but I would like to use the priority field as the sort criterion instead.
What I have tried:
changing @Enumerated(EnumType.STRING) to @Enumerated(EnumType.ORDINAL) - but this cannot happen: this is a production system with existing data, there are several devices which are used by an external system, and my project needs the String values.
Control sort order of Hibernate EnumType.STRING properties - but I cannot add any new column, since I already have production data and all enum values are persisted as Strings.
I have also tried some "workarounds" with wrapping the Page result - but these are just workarounds, not pure solutions :)
Any clue, help, or idea?
Please keep in mind that this is production working system and I cannot do any "fireworks".
order by case
when state = 'CRITICAL' then 0
when state = 'WARNING' then 1
when state = 'UPDATE' then 2
when state = 'OFF' then 3
when state = 'ON' then 4
end
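One way to wire that ordering into a Spring Data repository is a native query. This is only a sketch under the assumption that the table is named device, the id type is Long, and a dedicated repository method is acceptable; it is not part of the original answer:
import java.util.List;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Query;

public interface DeviceRepository extends JpaRepository<Device, Long> {

    // Native query so the CASE expression above can be used verbatim;
    // the table and column names are assumptions based on the entity shown.
    @Query(value = "select * from device order by case state "
                 + "when 'CRITICAL' then 0 "
                 + "when 'WARNING'  then 1 "
                 + "when 'UPDATE'   then 2 "
                 + "when 'OFF'      then 3 "
                 + "when 'ON'       then 4 end",
           nativeQuery = true)
    List<Device> findAllOrderedByStatePriority();
}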

Persist HashMap in ORMLite

I'm using ORMLite in my Android app. I need to persist this class, which has a HashMap. What is a good way of persisting it? It's my first time trying to persist a HashMap, and also my first time with ORMLite, so any advice would be greatly appreciated!
*Edit*
If that makes any difference, the Exercise class is simply a String (that also works as the id in the database), and the Set class has an int id (which is also the id in the database), an int weight and int reps.
@DatabaseTable
public class Workout {

    @DatabaseField(generatedId = true)
    int id;

    @DatabaseField(canBeNull = false)
    Date created;

    /*
     * The hashmap needs to be persisted somehow
     */
    HashMap<Exercise, ArrayList<Set>> workoutMap;

    public Workout() {
    }

    public Workout(HashMap<Exercise, ArrayList<Set>> workoutMap, Date created) {
        this.workoutMap = workoutMap;
        this.created = created;
    }

    public void addExercise(Exercise e, ArrayList<Set> setList) {
        workoutMap.put(e, setList);
    }

    ...
}
Wow. Persisting a HashMap whose value is a List of Sets. Impressive.
So in ORMLite you can persist any Serializable field. Here's the documentation about the type and how you have to configure it:
http://ormlite.com/docs/serializable
So your field would look something like:
@DatabaseField(dataType = DataType.SERIALIZABLE)
Map<Exercise, List<Set>> workoutMap;
Please note that if the map is at all large then this will most likely not be very performant. Also, your Exercise class (and the List and Set classes) need to implement Serializable.
If you need to search this map, you might consider storing the values of the Set in another table, in which case you might want to take a look at how ORMLite persists "foreign objects". A rough sketch of that alternative follows below.
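Purely as an illustration of that alternative, and not from the original answer, the per-set rows could look roughly like this (class and field names are my assumptions):
// Hypothetical table: one row per set, pointing back to its Workout and Exercise
// via ORMLite foreign fields instead of serializing the whole map.
@DatabaseTable
public class WorkoutSet {

    @DatabaseField(generatedId = true)
    int id;

    @DatabaseField(foreign = true, canBeNull = false)
    Workout workout;

    @DatabaseField(foreign = true, canBeNull = false)
    Exercise exercise;

    @DatabaseField
    int weight;

    @DatabaseField
    int reps;
}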
