I am designing a program that allows you to create an object with traits and then add it to a database. For example, a rental property like so:
public class Property
{
PropertyType type;
int bedrooms;
int bathrooms;
double squareFootage;
boolean furnished;
}
Then, you or other users can search the database for objects based on those traits. But here are the restrictions:
All properties have one of each trait defined (you can't leave one trait blank)
You may search for properties by any one trait, any combination of traits, or no traits (to see all). AND you can specify a multiplicity for each trait. For example, you can specify a HOUSE with 2, 3 or 4 bedrooms and 2 or 3 bathrooms, thereby putting no restrictions on square footage or furnishing.
This poses a problem: a trait may or may not appear in the search criteria, and when it does, it may have multiple accepted values. Here is my current solution to hold the search criteria:
public class SearchCriteria
{
ArrayList<PropertyType> type;
ArrayList<Integer> bedrooms;
ArrayList<Integer> bathrooms;
ArrayList<Double> squareFootage;
ArrayList<Boolean> furnished;
}
The problem is that when I want to add another trait to Property, I have to add it to both these classes (and probably more in database controller etc) and add additional functions for it in each. What is a design pattern I can utilize to make this code more modular and abstract?
Essentially, a good answer would be a solution that allows the addition or removal of traits by only changing one class/file.
Simply using an interface Trait with an overridden function getTrait() wouldn't work because the return types aren't the same across all traits.
EDIT: I have to implement a SearchCriteria class because this program is run on a client/server connection, so SearchCriteria will be serialized and sent over a socket, not sent directly to the database.
If you only have a handful of traits, and they're fundamental to your business model, it's totally reasonable to have to change more than one class when you add a new trait or want to change the type of behavior of one of those traits.
However, if you're trying to come up with a model that can handle dynamically adding different types of traits to your object, you may consider not encoding the traits as class properties at all. Rather, have your model contain a list of Traits, where each Trait knows its TraitType. Each TraitType has a specific shape for its data, as well as a specific shape for its Criteria. This would enable you to define your model in a file or database somewhere, and change it on demand, and only have to change the code when you identify a new TraitType. But it would also be an enormous amount of work, and would only be worthwhile if your business needs require a high degree of configurability.
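To make that idea a bit more concrete, here is a minimal sketch of the list-of-Traits approach. The names (TraitType, accepted, matches) are made up for illustration, and trait values are held as plain Object for brevity; a real implementation would give each TraitType its own data and criteria shape, and could load the types from a file or database rather than an enum.
import java.util.List;
import java.util.Map;

// Each trait is identified by a TraitType; adding a trait means adding a constant here
// (or a row in a database, if the types are loaded dynamically).
enum TraitType { PROPERTY_TYPE, BEDROOMS, BATHROOMS, SQUARE_FOOTAGE, FURNISHED }

// A property is just a value per trait instead of hard-coded fields.
class Property {
    Map<TraitType, Object> traits;
}

// Search criteria: for each constrained trait, the list of accepted values.
// Traits that are absent from the map are unconstrained.
class SearchCriteria {
    Map<TraitType, List<Object>> accepted;

    boolean matches(Property property) {
        for (Map.Entry<TraitType, List<Object>> entry : accepted.entrySet()) {
            if (!entry.getValue().contains(property.traits.get(entry.getKey()))) {
                return false; // the property's value is not among the accepted ones
            }
        }
        return true; // all constrained traits matched; unconstrained ones are ignored
    }
}
With this shape, adding or removing a trait only touches the TraitType definition (and whatever populates it), and a criteria map like this can still be serialized and sent over the socket.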
In the following I give an abstract description of my problem, since providing the full context and specific code would only complicate it.
I have a class Specification. A file reader accesses specific nested specifications from a file and builds a Specification instance accordingly. The file may contain incomplete specifications, i.e. some values are to be bound later (for example, from user input).
class Specification{
Criteria spec1;
AnotherCriteria spec2;
...
}
Now consider the interface:
interface Criteria {
    void criteriaMethod1();
    void criteriaMethod2();
}
And consider the implementing class:
class ConcreteCriteria1 implements Criteria {
    Object someDataMember;

    @Override
    public void criteriaMethod1() {
        // some implementation goes here
    }

    @Override
    public void criteriaMethod2() {
        // some implementation goes here
    }
}
Now let's consider a class responsible for performing some specific tasks depending on the Criteria passed in:
class WorkExecutor {
    void doSomeWork(Criteria criteria) { /* ... */ }
}
I have a case in which the criteria passed to doSomeWork is not yet known, for example while waiting for some input from the user. This unknown state requires some additional attributes, for example a maximum length in characters, an identifier, a waiting time, etc. To model that, let's say we have:
interface RequiredInput {
    void requiredInputMethod1();
}
Problem:
I want an 'intelligent' way (e.g. some design pattern) to make both Specification (in its spec1 data member) and doSomeWork accept RequiredInput instead of Criteria when the state is unknown as described. So they should accept both cases: either Criteria or RequiredInput.
Possible ways around:
Make ConcreteCriteria1 implement RequiredInput in addition to Criteria, i.e. have a single concrete class implementing both interfaces. But then my concrete class becomes an ugly XOR between two sets of members that have nothing to do with each other: it needs a method to figure out which set is actually filled (to avoid exceptions, since the other set will be null), and it complicates access via the implemented interfaces.
Overload doSomeWork, so that:
class WorkExecutor {
    void doSomeWork(Criteria criteria) { /* ... */ }
    void doSomeWork(RequiredInput input) { /* ... */ }
}
That solves the behavioral part, but not the structural part: how do I convince Specification that its spec1 data member can hold a RequiredInput instead of a Criteria?
Your Criteria doesn't have different data in different states; it has whatever data members it needs for its calculations, and they might simply be unset while it is waiting for user input. There are various options to ensure it receives user input before proceeding to the main calculations. For example:
criteriaMethod1() etc. can request user input interactively if needed (presumably the first time they are called). No RequiredInput objects are used. This is likely to be messy because it places user-interface code in a business-logic component like Criteria.
criteriaMethod1() etc. can return an error if called in an incomplete state; their caller is then expected to ask them what input they require (via some Criteria method returning RequiredInput objects). The caller uses the RequiredInput objects to obtain the information, passes it to the Criteria, and then criteriaMethod1() etc. can be called successfully.
even better, Criteria objects are created only when all needed data are available, so they are never in an invalid waiting-for-input state. For the case of combining an incomplete specification file with user input, this probably involves specific classes to represent the content of the configuration file, plus a module that, given the configuration file, asks the user for the missing information and creates the Criteria. The rest of the program, the part that uses Criteria, only ever sees complete ones. RequiredInput objects might be useful as an intermediate representation of what the configuration file leaves unspecified; a rough sketch of this option follows.
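A rough sketch of that last option, under assumed names: CriteriaAssembler, the missingKeys helper, and the Map-based constructor of ConcreteCriteria1 are all hypothetical.
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Criteria are built only once every value is bound, so the rest of the
// program never sees an incomplete one.
class CriteriaAssembler {

    // Key/value pairs read from the (possibly incomplete) specification file.
    private final Map<String, String> fromFile;

    CriteriaAssembler(Map<String, String> fromFile) {
        this.fromFile = new HashMap<>(fromFile);
    }

    // What the configuration file left unbound, so the UI layer can ask the user.
    List<String> missingKeys(List<String> requiredKeys) {
        List<String> missing = new ArrayList<>(requiredKeys);
        missing.removeAll(fromFile.keySet());
        return missing;
    }

    // Build a complete Criteria once the user's answers are available.
    Criteria assemble(Map<String, String> userAnswers) {
        Map<String, String> all = new HashMap<>(fromFile);
        all.putAll(userAnswers);
        return new ConcreteCriteria1(all); // hypothetical constructor that validates completeness
    }
}
Here the description of what is missing (plain key names above, or RequiredInput objects) only travels between the assembler and the UI layer; WorkExecutor never sees it.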
I am designing an application that has two widgets:
- A list that contains arbitrary objects
- A table that displays specific properties of the currently selected object
The goal is to be able to pick an object from the list, look at the properties, and modify them as necessary. The list can hold objects of various types.
So say the list contains Vehicle objects and Person objects
public class Person
{
public String name;
public Integer age;
}
public class Vehicle
{
public String make;
public String model;
}
If I click on a Person object, the table will display the name and age, and I can assign new values to them. Similarly, if I click on a Vehicle object, it will display the make and model in the table and allow me to modify them.
I have considered writing a method like
public String[] getFields()
{
return new String[] {"name", "age"};
}
which returns an array of strings naming the instance variables I want to look at, and then use reflection to get and set them. I can define this getFields method in all of the classes so that the table can handle arbitrary objects that might be thrown into the list.
But is there a way to design this so that I don't resort to reflection? The current approach seems like bad design.
On the other hand, I could create multiple TableModel objects, one for every possible class. The table would know which rows to display and how to access the object's instance variables. But then every time a new class is added I would have to define a new table model, which also sounds like a weak design.
You have a class (Vehicle) and you know the names of some properties (make, model) that you want to be able to manipulate dynamically for an instance of this class through a JTable UI.
You have several approaches to choose from.
A. Use the reflection API
This is what the reflection API is made for. If you want something this dynamic, there is nothing wrong with using reflection. The performance overhead will not be significant for this use case; a minimal sketch follows after these options.
B. Use a library like beanutils that is based on the reflection API
This should be easier than directly using the reflection API, but it has the drawback that you need to include another dependency in your project.
C. Dynamically create the different TableModel classes at runtime
You can do this using either the Java compiler API or Javassist. Based on information available at runtime, you can compile a new class for each different type of table model. If you follow this approach, be aware that creating a class is a heavy task, so the application will take some time to respond the first time each TableModel is created.
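For option A, a minimal sketch of the reflective get/set helpers such a generic TableModel could use, building on the getFields() convention from the question (ReflectiveAccessor is an assumed name):
import java.lang.reflect.Field;

// Reads and writes fields by name; the names come from each class's getFields().
class ReflectiveAccessor {

    static Object get(Object target, String fieldName) throws ReflectiveOperationException {
        Field field = target.getClass().getDeclaredField(fieldName);
        field.setAccessible(true); // in case the field is not public
        return field.get(target);
    }

    static void set(Object target, String fieldName, Object value) throws ReflectiveOperationException {
        Field field = target.getClass().getDeclaredField(fieldName);
        field.setAccessible(true);
        field.set(target, value);
    }
}

// Roughly how a generic TableModel would use it:
//   Object value = ReflectiveAccessor.get(person, "name");
//   ReflectiveAccessor.set(person, "age", 42);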
What to choose?
Of course this is your decision. For this specific use case, the overhead added by reflection or beanutils is insignificant, so it is probably better to choose between A and B. In another use case where performance is more critical, you could examine approach C, without forgetting the class-creation response-time problem.
EDIT:
I just realized that this specific use case requires another important piece of functionality: converting from String to the appropriate data type of each property and vice versa. Beanutils has solid support for that, so it gets a plus here.
I am designing a game and I have good overview of what I am doing.
However, I've been trying to improve my OOP skills, and every now and then I face the same problem: how should I use the abstracted objects?
Let's say I have a list of Entity objects representing anything that has x and y coordinates on screen, and probably width and height (I haven't figured it all out yet!).
Then I have special types of entities: one that can move, one that cannot, and probably something like a collidable entity in the future.
They're all in a collection of entities (a List<Entity> in my case). In the main loop I want to simulate the entities that move, i.e. the instances of DynamicEntity, but they're all in the abstract list of Entity, so I don't know whether the Entity in the loop is a dynamic entity or not.
I know I could just check it with instanceof, but I'm pretty sure that's not the best idea.
I've seen some people keep something like a boolean inside Entity for checking its type, but I don't really want to hardcode all kinds of entities there.
I just want to know what is the best practice in such case?
Usually it's better to avoid checking the type if possible. If you think you need to use instanceof in your code then there's probably an abstraction you could be using to make your design more extensible. (If you decide to add a third type of Entity in the future you don't want to have to go back and update all of your instanceof checks with a third case.)
There are two common ways to have different actions based on an instance's concrete type without checking the concrete type:
One common way is the visitor pattern. The idea here is to create a Visitor class with an action for each type of object. Next, each concrete class has an accept method which simply calls the correct visit method inside the visitor class. This single level of indirection allows the objects to choose the correct action themselves rather than you choosing it by checking the type.
The visitor pattern is usually used for one of two reasons. 1) You can add new actions to a class hierarchy that implements the visitor pattern without access to the classes' source code; you only have to implement a new visitor class and use it in tandem with the visitable classes' pre-existing accept methods. 2) When there are many possible actions one can perform on classes from some type hierarchy, it is sometimes clearer to split each action off into its own visitor class rather than polluting the target classes with a bunch of methods for a bunch of different actions, so you group them with the visitor rather than the target classes.
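Applied to the Entity example from the question, a compact sketch of the visitor approach might look like this; EntityVisitor, StaticEntity and MoveVisitor are names made up for illustration.
// Double dispatch: each concrete entity hands itself to the visitor,
// so overload resolution picks the right visit method without instanceof.
interface EntityVisitor {
    void visit(DynamicEntity entity);
    void visit(StaticEntity entity);
}

abstract class Entity {
    int x, y;
    abstract void accept(EntityVisitor visitor);
}

class DynamicEntity extends Entity {
    @Override
    void accept(EntityVisitor visitor) { visitor.visit(this); } // resolves to visit(DynamicEntity)
}

class StaticEntity extends Entity {
    @Override
    void accept(EntityVisitor visitor) { visitor.visit(this); } // resolves to visit(StaticEntity)
}

class MoveVisitor implements EntityVisitor {
    @Override
    public void visit(DynamicEntity entity) { entity.x += 1; } // only dynamic entities move
    @Override
    public void visit(StaticEntity entity) { /* nothing to do */ }
}
The main loop then becomes for (Entity e : entities) e.accept(new MoveVisitor()); with no type checks.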
However, in most cases it's easier to do things the second way: to simply override the definition of a common method in each concrete class. The Entity class might have an abstract draw() method, then each type of Entity would implement that draw() method in a different way. You know that each type of Entity has a draw() method that you can call, but you don't have to know the details of which type of entity it is or what the method's implementation does. All you have to do is iterate over your List<Entity> and call draw() on each one, then they'll perform the correct actions themselves depending on their type since each type has its own specialized draw() implementation.
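For comparison, a minimal sketch of this second approach, reusing the same Entity and DynamicEntity names (the draw() bodies are left as comments):
abstract class Entity {
    int x, y;
    abstract void draw(); // every entity knows how to draw itself
}

class DynamicEntity extends Entity {
    @Override
    void draw() { /* draw at the current, possibly moving, position */ }
}

class StaticEntity extends Entity {
    @Override
    void draw() { /* draw at the fixed position */ }
}

// Main loop, with no instanceof checks:
//   for (Entity e : entities) { e.draw(); }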
You're right that you don't want to check the instance type or have some sort of capability-checking function. My first question would be: why do you have a list of entities of that base type in the first place? It sounds to me like you need to maintain a list of dynamic entities.
You could implement a move() method that does nothing for non-dynamic entities, but again that doesn't seem right in this particular scenario.
Perhaps it would be better to implement an event that triggers the iteration of that list, and pass that event into each object in turn. The dynamic entities could decide to move upon that event. The static entities would obviously not.
e.g.
Event ev = ...
for (Entity e : entities) {
    e.actUpon(ev);
}
In this scenario you could have different event types, and the entities would decide upon their action upon the basis of the event type and the entity type. This is known as double-dispatch or the visitor pattern.
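A small sketch of this event-driven variant, with assumed names (Event, actUpon, and the velocity fields):
// Each entity decides for itself how to react; static entities simply ignore events.
interface Event { }

abstract class Entity {
    int x, y;
    void actUpon(Event ev) { } // default: do nothing
}

class DynamicEntity extends Entity {
    int velocityX, velocityY;

    @Override
    void actUpon(Event ev) {
        x += velocityX; // a dynamic entity chooses to move when the event arrives
        y += velocityY;
    }
}
With several event types, the entity could take the event through a second dispatch (as in the visitor pattern mentioned above) rather than inspecting the event's type directly.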
If your processing of entities relies on knowing details about the entity types, then your Entity abstraction doesn't buy you much (at least not in this use-case): your List<Entity> is almost as opaque for you as a mere List<Object>.
If you know that every entity you can imagine will be either static or dynamic, there's no "hard-coding" in adding a boolean property to all entities: isDynamic() or something.
However, if the dynamic aspect only makes sense for a subset of your entities, this flag will indeed bring some mess to your abstraction. In this case, my first guess is that you didn't model the use-case properly since you need to work with a list of items that do not provide enough polymorphic information for you to handle them.
So, I have been working on familiarizing myself with JPA's inheritance features and have really liked them so far. One thing that occurred to me recently is that they could actually be used for something other than just retrieving data: given that JPA can instantiate subclasses based on a discriminator value, inheritance is a convenient way to turn configuration fields into implementations. Being at that stage where my knowledge-to-experience ratio is in the 'just enough to be dangerous / not enough to always realize it' zone, I thought it might be best to ask whether this is a good idea.
Take this example with a PRODUCT and BILLTYPE table.
Product:
  int id
  int billtypeid
Billtype:
  int id
  varchar(15) description
Billtype is simply a billing strategy for the product (We'll say some orders may be billed by weight, while others could just be billed by case). Each bill type will require the use of different methods during the invoicing process. The Billtype table will likely only have a handful of entries, and shouldn't grow to be very large.
Would it make sense to use inheritance to subclass an abstract Billtype entity that also defines an interface for the different methods the invoice code will need? Something like this:
@Entity
@DiscriminatorColumn(name = "description")
public abstract class BillType {
    // Getters, setters
    // Abstract methods that could be used elsewhere, e.g.:
    // BigDecimal calculateInvVal(...)
}
@Entity
@DiscriminatorValue("by case")
public class CaseBillType extends BillType {
    // Implementation of calculateInvVal - now when the invoicing code needs this method,
    // the right one is always associated with the current product!
}
This provides a convenient way to associate behaviors with fields in the database that represent configuration data, but mixes business code with entities (which, by most accounts, is very very naughty). There could be a design pattern to fix this issue that I am missing from my repertoire, but I'd really like to avoid having to write lots of, "if bill type is this, get this subclass, if bill type is this, etc" code.
What I am looking for from an answer is an explanation of potential drawbacks to this technique I may not be seeing that would justify looking for another solution to this problem.
It's useful to link a product to a BillType entity if it's possible to add, remove and modify bill types at runtime without having to rebuild and redeploy a new version of the application. That is not the case in your example.
So if what you have is a static set of bill types, each defining static behavior encapsulated by a BillType subclass, you could simply have a BillType enum instead, with each enum constant defining its own behavior. You don't need an entity hierarchy and an additional table for this.
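A minimal sketch of such an enum; the Product getters used in the formulas (getCasePrice(), getCaseCount(), getPricePerKilo(), getWeightInKilos()) are assumptions made up for the example.
import java.math.BigDecimal;

public enum BillType {

    BY_CASE {
        @Override
        public BigDecimal calculateInvVal(Product product) {
            // hypothetical formula: price per case times number of cases
            return product.getCasePrice()
                          .multiply(BigDecimal.valueOf(product.getCaseCount()));
        }
    },

    BY_WEIGHT {
        @Override
        public BigDecimal calculateInvVal(Product product) {
            // hypothetical formula: price per kilo times weight
            return product.getPricePerKilo()
                          .multiply(product.getWeightInKilos());
        }
    };

    public abstract BigDecimal calculateInvVal(Product product);
}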
The code to calculate the invoice value in the Product entity would be exactly the same:
BigDecimal computeInvVal() {
    return billType.calculateInvVal(this);
}
The code to get all the bill types would be
return BillType.values();
And instead of the following code to associate a bill type to a product:
product.setBillType(em.find(BillType.class, ID_OF_CASE_BILL_TYPE));
you would simply have
product.setBillType(BillType.BY_CASE);
One of the key benefits of NoSQL data stores like MongoDB is that they're schemaless. With dynamically typed languages this seems to be a natural fit: you can receive arbitrary JSON input, perform business logic on the known fields, and persist the whole thing without first having to define the object.
What if your choice of language is limited to the statically typed, say Java? How could I achieve the same level of flexibility?
A typical data flow looks like the following:
JSON input
Deserialize into a Java object to perform business logic
Serialize into BSON to persist in Mongo
The deserialization step is necessary because you want to perform business logic with POJOs, not JSON strings. However, before I can deserialize the input into objects, I must define them first. What if the input contains additional fields undefined in the object? While they may not be used in the business logic, I may still want to be able to persist them. I have seen implementations where the undefined fields are put into a map (see the sketch below), but I'm not sure that's the best approach; for one, the undefined fields may be complex objects as well.
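One way to express that map-for-undefined-fields idea in a statically typed setting is Jackson's @JsonAnySetter/@JsonAnyGetter pair; the Order class and its fields below are invented purely as an illustration.
import com.fasterxml.jackson.annotation.JsonAnyGetter;
import com.fasterxml.jackson.annotation.JsonAnySetter;
import java.util.HashMap;
import java.util.Map;

public class Order {

    public String customerId; // known field, used by the business logic
    public int quantity;      // known field, used by the business logic

    // Everything the schema does not know about ends up here; nested JSON
    // arrives as Maps and Lists, so complex undefined objects are preserved.
    private final Map<String, Object> unknown = new HashMap<>();

    @JsonAnySetter
    public void setUnknown(String key, Object value) {
        unknown.put(key, value);
    }

    @JsonAnyGetter
    public Map<String, Object> getUnknown() {
        return unknown; // written back out on serialization, so nothing is lost
    }
}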
Schemaless data doesn't necessarily mean structureless data; the fields are typically known in advance, and some type-safe pattern can be applied on top of them to avoid the Magic Container anti-pattern. But this is not always the case: sometimes keys are entered by the user and cannot be known in advance.
I've used the Role Object Pattern several times to give coherence to a dynamic structure. I think it is well suited here for both cases.
The Role Object Pattern defines a way to access different views of an object. The canonical example being a User that can assume several roles such as Customer, Vendor, and Seller. Each of these views has different operations it can perform and can be accessed from any of the other views. Common fields are typically available at the interface level (especially userId(), or in your case toJson()).
Here's an example of using the pattern:
public void displayPage(User user) {
display(user.getName());
if (user.hasView(Customer.class))
displayShoppingCart(user.getView(Customer.class));
if (user.hasView(Seller.class))
displayProducts(user.getView(Seller.class));
}
In the case of data with a known structure, you can have several views bringing different sets of keys into cohesive units. These different views can read the json data on construction.
In the case of data with a dynamic structure, an authoritative RawDataView can hold the data in its dynamic form (i.e. a Magic Container like a HashMap<String, Object>). This can be used to query the dynamic data. At the same time, type-safe wrappers can be created lazily and can delegate to the RawDataView to aid program readability and maintainability:
public class Customer implements User {
    private final RawDataView data;

    public Customer(User source) {
        this.data = source.getView(RawDataView.class);
    }

    // All User views must specify this
    @Override
    public long id() {
        return data.getId();
    }

    @Override
    public <T extends User> T getView(Class<T> view) {
        // construct or look up view
    }

    @Override
    public Json toJson() {
        return data.toJson();
    }

    //
    // Specific to Customer
    //
    public List<Item> shoppingCart() {
        return (List<Item>) data.getValue("items", List.class);
    }

    // etc....
}
I've had success with both of these approaches. Here are some extra pointers that I've discovered along the way:
Keep a static structure for your data as much as possible; it makes things a lot easier to maintain. I had to break this rule and use the RawDataView approach when working on a legacy system. You may also have to break it with dynamically entered user data, as mentioned above. In that case, use a convention for non-dynamic field names, such as a leading underscore (_userId)
Implement equals() and hashCode() such that user.getView(A.class).equals(user.getView(B.class)) is always true for the same user.
Have a UserCore class that does all the heavy lifting of common code: creating views, performing common operations (like toJson()), returning common fields (like userId()), and implementing equals() and hashCode(). Have all views delegate to this core object
Have an AbstractUserView that delegates to the UserCore and implements equals() and hashCode()
Use a type-safe heterogeneous container (like Guava's ClassToInstanceMap) for constructing and caching views (see the sketch after this list)
Allow the existence of a view to be queried. This can be done either with a hasView() method or by having getView() return Optional<T>
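As an illustration of the last few points, here is a sketch of a UserCore that caches views in Guava's ClassToInstanceMap. It assumes RawDataView is itself one of the User views, and the build method is left as a stub.
import com.google.common.collect.ClassToInstanceMap;
import com.google.common.collect.MutableClassToInstanceMap;

public class UserCore {

    private final RawDataView rawData;

    // Type-safe heterogeneous container: one instance per view class.
    private final ClassToInstanceMap<User> views = MutableClassToInstanceMap.create();

    public UserCore(RawDataView rawData) {
        this.rawData = rawData;
        views.putInstance(RawDataView.class, rawData);
    }

    public <T extends User> boolean hasView(Class<T> type) {
        return views.containsKey(type);
    }

    public <T extends User> T getView(Class<T> type) {
        T view = views.getInstance(type);
        if (view == null) {
            view = build(type);              // lazily construct, e.g. new Customer(...)
            views.putInstance(type, view);
        }
        return view;
    }

    private <T extends User> T build(Class<T> type) {
        // Stub: map each view class to its constructor here.
        throw new UnsupportedOperationException("no view registered for " + type);
    }
}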
You can always have a class which provides both:
easy access to the attributes you know about, with optional fallbacks to older formats (for example, it can return "name" if it exists, or fall back to the older "name.first" + "name.last" form if it doesn't, or some similar scenario)
easy access to unknown elements by simulating the map interface
Whether you do full validation or not, and whether you allow extra undefined attributes or not, depends on what you want to achieve. But I think that creating an abstraction which allows both ways of accessing the data is the best solution (sketched below).
Hopefully over time, you'll get to the stage where your schema is pretty much stable and messing directly with the attributes is not needed anymore.
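A minimal sketch of such a wrapper (all names invented), with a typed accessor that falls back to the older nested name.first/name.last layout, plus raw map-style access for anything else:
import java.util.Map;

public class DocumentWrapper {

    private final Map<String, Object> raw;

    public DocumentWrapper(Map<String, Object> raw) {
        this.raw = raw;
    }

    // Known attribute, with a fallback to the older format.
    public String getName() {
        Object name = raw.get("name");
        if (name instanceof String) {
            return (String) name;                 // current format: a plain string
        }
        if (name instanceof Map) {                // older format: { "first": ..., "last": ... }
            Map<?, ?> parts = (Map<?, ?>) name;
            return parts.get("first") + " " + parts.get("last");
        }
        return null;
    }

    // Map-like access for attributes the schema does not know about yet.
    public Object get(String key) {
        return raw.get(key);
    }
}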
This is not well solved in Java, due to the lack of dynamic types. One way it can be handled is with maps: a Map<String, Object> whose values can themselves be further Maps of objects (nested maps).
This is not an elegant way, but it works in Java. As an example, the SnakeYaml library for YAML allows traversal in this way.
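For instance, a YAML document loaded with SnakeYaml comes back as nested maps that can be traversed key by key (the document and keys below are made up):
import java.util.Map;
import org.yaml.snakeyaml.Yaml;

public class YamlMapExample {

    public static void main(String[] args) {
        String doc = "server:\n  host: localhost\n  port: 8080\n";

        Yaml yaml = new Yaml();
        Map<String, Object> root = yaml.load(doc);   // top level is a Map

        // The value under "server" is itself a Map of objects.
        @SuppressWarnings("unchecked")
        Map<String, Object> server = (Map<String, Object>) root.get("server");

        System.out.println(server.get("host"));      // localhost
        System.out.println(server.get("port"));      // 8080 (parsed as an Integer)
    }
}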