Specific Java inheritance inquiry - Need suggestions

The problem I am having is quite specific and a bit difficult to explain. Let me know if you need more details about anything. I have an abstract class called System. To hold my System objects, I have a SystemManager which contains a list of Systems and some functions for manipulating it. Inside, it contains:
List<System> systems = new ArrayList<System>();
Now, I want to create another abstract class which is a specific type of System called RenderSystem. This will inherit from System but have a few more functions. I also want to create a RenderSystemManager which should do everything SystemManager does, except with a few extra features. Also, instead of having a list of System in the manager, I would like it to have a list of RenderSystem to ensure that the programmers don't put any regular System objects in it. My initial instinct was to inherit from SystemManager, and just change the type of the list to RenderSystem:
systems = new ArrayList<RenderSystem>();
Java doesn't allow this, as systems is declared as List<System>, not List<RenderSystem>. I would have assumed it would be OK considering RenderSystem inherits from System. One way I can think of to resolve this issue is to copy and paste all the code from SystemManager into RenderSystemManager and just change the line of code to be:
List<RenderSystem> systems = new ArrayList<RenderSystem>();
My other instinct would be to override the addSystem(System system) function to ensure that it only handles RenderSystem, but the programmers might think they are allowed to add a regular System even though it won't work.
@Override
public void addSystem(System system)
{
    if (system instanceof RenderSystem)
    {
        super.addSystem(system);
    }
}
Neither of these seems very elegant, though. Does anybody have any suggestions?

Your managers have the same type-safety requirements as the list they wrap. They should thus follow the same strategy, and be generic types:
public class BaseSystemManager<T extends System> {
    private List<T> systems = new ArrayList<>();

    public void addSystem(T system) {
        systems.add(system);
    }

    // common methods
}

public class SystemManager extends BaseSystemManager<System> {
    // methods specific to System handling
}

public class RenderSystemManager extends BaseSystemManager<RenderSystem> {
    // methods specific to RenderSystem handling
}

I think your second instinct to add protection into the addSystem call is the correct one. That way SystemManager can still operate on the list of Systems. However I would change the implementation of addSystem to instruct developers in the proper usage:
@Override
public void addSystem(System system)
{
    if (system instanceof RenderSystem)
    {
        super.addSystem(system);
    }
    else
    {
        throw new IllegalArgumentException("Only RenderSystem objects can be added to a RenderSystemManager");
    }
}

Your SystemManager could have a private list of System objects, and the only way to add an object to that list would be a function that only takes a RenderSystem as an argument. You're trying to manhandle generics into a use for which they probably are not appropriate.
But I think you have bigger problems.
I think this happens to many of us when we start trying to design "from the inside out", i.e., you are taking programming constructs and trying to string them together at a level of detail that ignores (or forgets) what the code is trying to do from a higher level. It's like saying "I want a while loop inside a do loop that has a switch statement with try-catch-finally-whatever, but I don't want to nest all these damn braces."
Take a few steps back and think about the external functionality you want to accomplish, and progress in small steps through design and implementation details from there...

Related

Global list in Java

I am developing a Java program for class, and I have some restrictions that are killing me. I have to read a file and save its words in a list, and then keep that list so I can take words out of it.
I have to read the file and select n words out of the list, and return those words, not the whole list. My question is: is there some way of creating the complete list as global or extern, so every method can access it without it needing to be a parameter? I need to modify the complete list, removing the words I need, and then use it again in other methods.
Thank you :)
You can make a member variable in a class public and static, and that way you can access it from anywhere:
public class One {
    public static List<String> names = new ArrayList<>();
}

public class Two {
    public void addName(String name) {
        One.names.add(name);
    }
}

public class Three {
    public void printTheNames() {
        System.out.println(One.names);
    }
}
However...
These restrictions (like no way to create true global variables) are there for a good reason, and not because Java is lacking features. If you have trouble with these restrictions, then that's a sign you are trying to do things the wrong way - it means the design of your program has problems.
Almost always, global variables are bad.
You can get a globally accessible variable by making it a public static. In terms of design principles, though, this is generally a bad idea because it tends to lead to rampant dependencies that are hard to replace later on.
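If the restrictions rule out statics, a minimal sketch of the alternative (class and method names here are made up for illustration) is simply to create the list once and pass it to the methods that need it, so there is no hidden global state:

import java.util.ArrayList;
import java.util.List;

public class WordExercise {
    public static void main(String[] args) {
        List<String> words = new ArrayList<>();
        // ... fill the list by reading the file ...
        List<String> selected = selectWords(words, 3);
        System.out.println(selected);
        System.out.println(words); // the selected words have been removed
    }

    // Removes up to n words from the list and returns them; the caller keeps the shrunken list.
    static List<String> selectWords(List<String> words, int n) {
        List<String> result = new ArrayList<>();
        for (int i = 0; i < n && !words.isEmpty(); i++) {
            result.add(words.remove(0));
        }
        return result;
    }
}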

Does this method belong to Value object or Manager

In an e-commerce application, below is the high-level API:
interface Order {
    public List<PaymentGroup> getPaymentGroups();
}

interface PaymentGroup {}

class PaymentGroupImpl implements PaymentGroup {}

class CreditCard extends PaymentGroupImpl {}

class GiftCard extends PaymentGroupImpl {}

class OrderManager {} // Manager component used to manipulate Order
There is a need to add some utility methods like hasGiftCard(), hasCreditCard(), getGiftCards(), getCreditCards()
Two approaches -
1) Add these to Order. However, this would result in coupling between Order and the PaymentGroup implementors (like CreditCard, GiftCard). Example:
interface Order {
    public List<GiftCard> getGiftCards();
}
2) Move these to OrderManager.
class OrderManager {
    public List<GiftCard> getGiftCards(Order order) {}
}
I personally prefer 2); I am just curious whether there would be any reason to choose 1) over 2).
I have two answers. One is what I'll call Old Skool OOP and the other I'll call New Skool OOP.
Let's tackle New Skool first. The GoF and Martin Fowler changed the way people look at OOP. Adding methods like hasGiftCard() leads to adding conditional logic/branching into the code. It might look something like this:
if (order.hasGiftCard()) {
    // Do gift card stuff
} else {
    // Do something else
}
Eventually this kind of code becomes brittle. On a big application, lots of developers will be writing predicate methods. Predicate methods assert something and return true or false. These methods usually start with the word "has", "is" or "contains". For example, isValid(), hasAddress(), or containsFood(). Still more developers write conditional logic that uses those predicate methods.
To avoid all of this conditional logic software engineers changed how they thought about object-orientation. Instead of predicate-methods-and-conditional-logic, they started using things like the strategy pattern, visitor pattern, and dependency injection. An example from your problem domain might look like this:
// Old Skool
if (this.hasCreditCard()) {
    orderManager.processCreditCard(this.getCreditCards());
}
Here is another approach to solving the same problem:
// New Skool
for (PaymentItem each : getPaymentItems()) {
    each.process(this);
}
The New Skool approach turns the problem on its head. Instead of making the Order and OrderManager responsible for the heavy lifting, the work is pushed out to the subordinate objects. These kinds of patterns are slick because:
they eliminate a lot of "if" statements,
the code is more supple and it is easier to extend the application, and
instead of every developer making changes to Order and OrderManager, the work is spread out among more classes and code merges are easier.
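A minimal sketch of that push-the-work-down idea, assuming a hypothetical PaymentItem interface along the lines of the loop above (the names are illustrative, not part of the original API):

interface PaymentItem {
    void process(Order order);  // each payment type knows how to process itself
}

class CreditCardPayment implements PaymentItem {
    public void process(Order order) {
        // charge the credit card for this order
    }
}

class GiftCardPayment implements PaymentItem {
    public void process(Order order) {
        // deduct the order amount from the gift card balance
    }
}

Adding a new payment type then means adding one new class rather than touching Order, OrderManager, and every if/else chain.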
That's New Skool. Back in the day, I wrote a lot of Old Skool object-oriented code. If you want to go that route, here are my recommendations.
IMHO, you don't need both a PaymentGroup interface and a PaymentGroupImpl class. If all payment classes extend PaymentGroupImpl, then get rid of the interface and make PaymentGroup a class.
Add methods like isCreditCard(), isGiftCertificate() to the PaymentGroup class. Have them all return "false".
In the subclasses of PaymentGroup, override these methods to return true where appropriate. For example, in the CreditCard class, isCreditCard() should return "true".
In the Order class, create methods to filter the payments by type. Create methods like getCreditCards(), getGiftCertificates(), and so on. In traditional Java (no lambdas or helper libraries), these methods might look something like this
List<PaymentGroup> getCreditCards() {
    List<PaymentGroup> list = new ArrayList<PaymentGroup>();
    for (PaymentGroup each : getPaymentGroups()) {
        if (each.isCreditCard()) {
            list.add(each);
        }
    }
    return list;
}
In the Order class, create predicate methods like hasCreditCards(). If performance is not an issue, do this:
boolean hasCreditCards() {
    return !getCreditCards().isEmpty();
}
If performance is an issue, do something more clever:
boolean hasCreditCards() {
    for (PaymentGroup each : getPaymentGroups()) {
        if (each.isCreditCard()) {
            return true;
        }
    }
    return false;
}
Realize that if you add a new payment group, code must be added in a lot of places in the Old Skool paradigm.

Is doing something else in setter methods considered having side effects?

Recently I have read some articles saying that it is not good for methods to have side effects. So I just want to ask if my implementation here can be categorized as having side effects.
Suppose I have a SecurityGuard which checks to see if he should allow a customer to go to the club or not.
The SecurityGuard has either only a list of validNames or a list of invalidNames, not both.
If the SecurityGuard has only validNames, he only allows customers whose names are on the list.
If the SecurityGuard has only invalidNames, he only allows customers whose names are NOT on the list.
If the SecurityGuard has no lists at all, he allows everyone.
So to enforce the logic, in the setter of each list, I reset the other list if the new list has values.
class SecurityGuard {
    private List<String> validNames = new ArrayList<>();
    private List<String> invalidNames = new ArrayList<>();

    public void setValidNames(List<String> newValidNames) {
        this.validNames = new ArrayList<>(newValidNames);
        // empty the invalidNames if newValidNames has values
        if (!this.validNames.isEmpty()) {
            this.invalidNames = new ArrayList<>();
        }
    }

    public void setInvalidNames(List<String> newInvalidNames) {
        this.invalidNames = new ArrayList<>(newInvalidNames);
        // empty the validNames if newInvalidNames has values
        if (!this.invalidNames.isEmpty()) {
            this.validNames = new ArrayList<>();
        }
    }

    public boolean allowCustomerToPass(String customerName) {
        if (!validNames.isEmpty()) {
            return validNames.contains(customerName);
        }
        return !invalidNames.contains(customerName);
    }
}
So here you can see the setter methods have an implicit action: they reset the other list.
The question is: could what I'm doing here be considered having a side effect? Is it bad enough that we have to change it? And if yes, how can I improve this?
Thanks in advance.
Well, setters themselves have side effects (a value in that instance is left modified after the function ends). So, no, I wouldn't consider it something bad that needs to be changed.
Imagine that the guard just had one setAdmissionPolicy method which accepted a reference to an AdmissionPolicy, defined:
interface AdmissionPolicy {
    boolean isAcceptable(String customerName);
}
and set the guard's admissionPolicy field to the passed-in reference. The guard's own allowCustomerToPass method would simply call admissionPolicy.isAcceptable(customerName).
Given the above definitions, one can imagine three classes that implement AdmissionPolicy: one would accept a list in its constructor, and isAcceptable would return true for everyone on the list, another would also accept a list in its constructor, but its isAcceptable would return true only for people not on the list. A third would simply return true unconditionally. If the club needs to close occasionally, one might also have a fourth implementation that returned false unconditionally.
Viewed in such a way, setInvalidNames and setValidNames could both be implemented as:
public void setAdmissionPolicyAdmitOnly(List<String> newValidNames) {
    admissionPolicy = new AdmitOnlyPolicy(newValidNames);
}

public void setAdmissionPolicyAdmitAllBut(List<String> newInvalidNames) {
    admissionPolicy = new AdmitAllButPolicy(newInvalidNames);
}
With such an implementation, it would be clear that each method was only "setting" one thing; such an implementation is how I would expect a class such as yours to behave.
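For illustration, the policy implementations described above could be as small as this (a sketch using java.util.List and java.util.ArrayList; only the names AdmitOnlyPolicy and AdmitAllButPolicy come from the text, the rest is assumed):

class AdmitOnlyPolicy implements AdmissionPolicy {
    private final List<String> validNames;
    AdmitOnlyPolicy(List<String> validNames) { this.validNames = new ArrayList<>(validNames); }
    public boolean isAcceptable(String customerName) { return validNames.contains(customerName); }
}

class AdmitAllButPolicy implements AdmissionPolicy {
    private final List<String> invalidNames;
    AdmitAllButPolicy(List<String> invalidNames) { this.invalidNames = new ArrayList<>(invalidNames); }
    public boolean isAcceptable(String customerName) { return !invalidNames.contains(customerName); }
}

class AdmitEveryonePolicy implements AdmissionPolicy {
    public boolean isAcceptable(String customerName) { return true; }
}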
The behavior of your class as described, however, I would regard as dubious at best. The issue isn't so much that adding admitted items clears out the rejected items, but rather that the behavior when a passed-in list is empty depends upon the earlier state in a rather bizarre fashion. It's hardly intuitive that if everyone but Fred is allowed access, calling setValidNames with an empty list should have no effect, but if the guard is set to admit only George, that same call should grant access to everyone. Further, while it would not be unexpected that setValidNames would remove from invalidNames anyone who was included in the valid-names list, or vice versa, given the way the functions are named, the fact that setting one list removes everyone from the other list is somewhat unexpected (the different behavior with empty lists makes it especially so).
It does not have a side effect as such; however, developers tend to assume that getters and setters have no underlying code apart from getting and setting the variable. Hence, when another developer tries to maintain the code, he would probably overlook the logic in your bean and repeat the same checks you already do in the setters - possible boilerplate code, as you would call it.
I would not consider it a side effect. You are maintaining the underlying assumptions of your object. I'm not sure it's the best design, but it's certainly a working one.
In this case I don't think changing the other list is a side effect, since the scope is within this class.
However, based on your description, maybe it is a better design to have one list (called nameList) and a boolean (isValid) that differentiates between a whitelist and a blacklist. This way it is clear that only one kind of list is filled at any time.
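A minimal sketch of that single-list design (field and method names are illustrative; an empty list is taken to mean "allow everyone", matching the original behavior):

class SecurityGuard {
    private List<String> nameList = new ArrayList<>();
    private boolean isAllowList = false;  // only meaningful while nameList is non-empty

    public void setAllowedNames(List<String> names) {    // whitelist mode
        this.nameList = new ArrayList<>(names);
        this.isAllowList = true;
    }

    public void setBlockedNames(List<String> names) {    // blacklist mode
        this.nameList = new ArrayList<>(names);
        this.isAllowList = false;
    }

    public boolean allowCustomerToPass(String customerName) {
        if (nameList.isEmpty()) {
            return true;  // no list configured: everyone is allowed
        }
        return isAllowList ? nameList.contains(customerName)
                           : !nameList.contains(customerName);
    }
}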
I think it's OK. For example, if you want the list your class holds to be unmodifiable, the setter is the best place to enforce it:
public void setNames(List<String> names) {
    this.names = names == null ? Collections.emptyList() : Collections.unmodifiableList(names);
}

Pros and cons of casting vs. providing a method that returns the required type (Java)

I'm doing a bit of playing about to learn a framework I'm contributing to, and an interesting question came up. EDIT: I'm doing some basic filters in the Okapi Framework, as described in this guide; note that the filter must return different event types to be useful, and that resources must be used by reference (as the same resource may be used in other filters later). Here's the code I'm working with:
while (filter.hasNext()) {
    Event event = filter.next();
    if (event.isTextUnit()) {
        TextUnit tu = (TextUnit) event.getResource();
        if (tu.isTranslatable()) {
            // do something with it
        }
    }
}
Note the cast of the resource to a TextUnit object on line 4. This works; I know it's a TextUnit because events that are isTextUnit() will always have a TextUnit resource. However, an alternative would be to add an asTextUnit() method to the IResource interface that returns the resource as a TextUnit (as well as equivalent methods for each common resource type), so that the line would become:
TextUnit tu = event.getResource().asTextUnit();
Another approach might be providing a static casting method in TextUnit itself, along the lines of:
TextUnit tu = TextUnit.fromResource(event.getResource());
My question is: what are some arguments for doing it one way or the other? Are there performance differences?
The main advantage I can think of with asTextUnit() (or .fromResource) is that more appropriate exceptions could be thrown if someone tries to get a resource as the wrong type (i.e. with a message like "Cannot get this RawDocument type resource as a TextUnit - use asRawDocument()" or "The resource is not a TextUnit").
The main disadvantages I can think of with .asTextUnit() are that each resource type would then have to implement all the methods (most of which would just throw an exception), and if another major resource type is added there would be some refactoring to add the new method to every resource type (although there's no reason the .asSomething() methods would have to be defined for every possible type; the less common resources could just be cast, although this would lead to inconsistency of approach). This wouldn't be a problem with .fromResource(), since it's just one method per type, and could be added or not per type depending on preference.
If the aim is to test an object's type and cast it, then I don't see any value in creating / using custom isXyz and asXyz methods. You just end up with a bunch of extra methods that make little difference to code readability.
Re: your point about appropriate exception messages, I would say that it is most likely not worth it. It is reasonable to assume that not having a TextUnit when a TextUnit is expected is a symptom of a bug somewhere. IMO, it is not worthwhile trying to provide "user friendly" diagnostics for bugs. The person that the information is aimed at is a Java programmer, and for that person the default message and stacktrace for a regular ClassCastException (and the source code) provide all of the information required. (Translating it into pretty language adds no real value.)
On the flip-side, the performance differences between the two forms are not likely to be significant. But consider this:
if (x instanceof Y) {
    ((Y) x).someYMethod();
}
versus
if (x.isY()) {
    x.asY().someYMethod();
}

// where, in class X:
boolean isY() { return this instanceof Y; }
Y asY() { return (Y) this; }
The optimizer might be able to do a better job of the first compared with the second.
It might not inline the method calls in the second case, especially if it is changed to use instanceof and throw a custom exception.
It is less likely to figure out that only one type test is really required in the second case. (It might not in the first case either ... but it is more likely to.)
But either way, the performance difference is going to be small.
Summary, the fancy methods are not really worth the effort, though they don't do any real harm.
Now if the isXyz or asXyz methods were testing the state of the object (not just the object's Java type), or if the asXyz was returning a wrapper, then the answers would be different ...
You could also just go
if (event.getResource() instanceof TextUnit) {
    // ...
}
and save yourself the trouble.
To answer your question regarding whether to go asTextUnit() vs. TextUnit.fromResource, the performance difference would depend upon how you actually implement these methods.
In the case of the static converter you would have to create and return a new object of type TextUnit. However, in the case of the member function you could simply return this, cast appropriately, or you could create and return a new object - it depends upon your use case.
Either way, it seems like instanceof is probably the cleanest way here.
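That said, if the static-converter route from the question were chosen, a sketch of TextUnit.fromResource (the method itself is hypothetical; IResource and TextUnit are the framework types mentioned above) could be little more than a checked cast with a friendlier message:

public static TextUnit fromResource(IResource resource) {
    if (!(resource instanceof TextUnit)) {
        throw new IllegalArgumentException(
                "The resource is not a TextUnit: " + resource.getClass().getName());
    }
    return (TextUnit) resource;
}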
What if your filter were extended - or wrapped - to return only text unit events? In fact, what if it returned only the resources of text unit events? Then your loop would be much simpler. I would think the clean way to do this would be a second filter, which simply returned just the text unit events, followed by, let's say, an Extractor, which returned the properly cast resource.
If you have a common base class, you can have a single asX method there for every derived class, and needn't refactor all derived classes:
abstract class Base {
    A asA() { throw new ClassCastException("not an A"); }
    B asB() { throw new ClassCastException("not a B"); }
    C asC() { throw new ClassCastException("not a C"); }
    // much more ...
}

class A extends Base {
    A asA() { /* hard work */ return new A(); }
    // no asB, asC required
}

class B extends Base {
    B asB() { /* hard work */ return new B(); }
    // no asA, asC required
}

// and so on.
This looks pretty clever. For a new class N, just add a new asN method to Base, and all derived classes get it. Only N needs to override asN.
But it smells.
Why should a B have a method asA if it will always fail? That's not a good design. Exceptions in these methods are cheap if they aren't triggered; only thrown exceptions might be costly.
Yes, there are differences. Creating new immutable elements is better than casting. Pass all serializable data (the non-transient, non-computed data) to a Builder and build the appropriate class.

can I add to a private list directly through the getter?

I realize I'm going to get flamed for not simply writing a test myself... but I'm curious about people's opinions, not just the functionality, so... here goes...
I have a class that has a private list. I want to add to that private list through the public getMyList() method.
so... will this work?
public class ObA {
    private List<String> foo;

    public List<String> getFoo() { return foo; }
}

public class ObB {
    public void dealWithObAFoo(ObA obA) {
        obA.getFoo().add("hello");
    }
}
Yes, that will absolutely work - which is usually a bad thing. (This is because you're really returning a reference to the collection object, not a copy of the collection itself.)
Very often you want to provide genuinely read-only access to a collection, which usually means returning a read-only wrapper around the collection. Making the return type a read-only interface implemented by the collection and returning the actual collection reference doesn't provide much protection: the caller can easily cast to the "real" collection type and then add without any problems.
Indeed, not a good idea. Do not publish your mutable members to the outside; make a copy if you cannot provide a read-only version on the fly...
public class ObA {
    private List<String> foo;

    public List<String> getFoo() { return Collections.unmodifiableList(foo); }

    public void addString(String value) { foo.add(value); }
}
If you want an opinion about doing this, I'd remove the getFoo() method and add add(String msg) and remove(String msg) methods (or whatever other functionality you want to expose) to ObA.
Giving access to collections always seems to be a bad thing in my experience--mostly because they are virtually impossible to control once they get out. I've gotten into the habit of NEVER allowing direct access to collections outside the class that contains them.
The main reasoning behind this is that there is almost always some sort of business logic attached to the collection of data--for instance, validation on addition or perhaps some day you'll need to add a second closely-related collection.
If you allow access like you are talking about, it will be very difficult in the future to make a modification like this.
Oh, also, I often find that I eventually have to store a little more data with the object I'm storing--so I create a new object (only known inside the "Container" that houses the collection) and I put the object inside that before putting it in the collection.
If you've kept your collection locked down, this is a trivial refactor. Try to imagine how difficult it would be in some case you've worked on where you didn't keep the collection locked down...
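A small sketch of that internal-wrapper idea (all names invented for illustration): the container owns both the collection and the wrapper, so callers never see either, and extra data can be added later without breaking them.

import java.util.ArrayList;
import java.util.List;

public class MessageContainer {
    // Wrapper known only to this container; more fields can be added later without touching callers.
    private static class Entry {
        final String message;
        final long addedAtMillis;
        Entry(String message) {
            this.message = message;
            this.addedAtMillis = System.currentTimeMillis();
        }
    }

    private final List<Entry> entries = new ArrayList<>();

    public void add(String message) {
        // business logic (validation, etc.) lives here, right next to the collection
        if (message == null || message.isEmpty()) {
            throw new IllegalArgumentException("message must not be empty");
        }
        entries.add(new Entry(message));
    }

    public int size() { return entries.size(); }
}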
If you wanted to support add and remove functionality for foo, I would suggest addFoo() and removeFoo() methods. Ideally you could eliminate getFoo() altogether by creating a method for each piece of functionality you need. This makes it clear what operations a caller can perform on the list.
