I am developing a game in which the user can turn certain effects on and off. These effects drain the user's energy, and various parts of the program must be able to check whether an effect is active. Currently, I'm using an enum type to store and check the effects:
public static enum Effects { SUPER_FIRE, FIRE_SHIELD }

if (someEffect == Effects.SUPER_FIRE) {
    // Breathe fire etc.
}
That said, I also have to store other variables for each effect - such as the level required to use it or the rate at which it drains energy. So the other approach I thought of was to use a class:
public class SuperFire extends Effect {
    public static int levelRequired = 10;
    public static int drainRate = 5;

    private boolean active;

    public boolean active() {
        // Check if it's active
        return active;
    }

    public boolean activate() {
        active = true;
        return active;
    }

    public boolean deactivate() {
        active = false;
        return true;
    }
}
SuperFire sfe = new SuperFire();
sfe.activate();
if (sfe.active()) {
    energyLevel -= SuperFire.drainRate;
}
sfe.deactivate();
Which implementation (or any other) is the best for this situation?
I hesitate to say "best" in any case, but it would appear that your second example is "better" in the "more flexible" meaning of the word.
Of course, from your very small code snippets, you are not encapsulating the functionality very well, so it would appear you may wish to do some more design work first.
In the end, you want to have the game code think in terms of the Effect base class and what it can do, and not have to know anything about the implementation of things like SuperFireEffect.
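To make that concrete, here is a rough sketch of that direction - the member names are my own invention, not something from the question - in which the game loop depends only on an Effect base type and each concrete effect supplies its own numbers:

public abstract class Effect {
    private boolean active;

    public abstract int levelRequired();
    public abstract int drainRate();

    public boolean isActive()  { return active; }
    public void activate()     { active = true; }
    public void deactivate()   { active = false; }
}

class SuperFire extends Effect {
    @Override public int levelRequired() { return 10; }
    @Override public int drainRate()     { return 5; }
}

The game code can then drain energy from whatever is active, e.g. for (Effect effect : effects) { if (effect.isActive()) energyLevel -= effect.drainRate(); }, without ever naming SuperFire.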
I would probably choose the second one as it is more "object-oriented" in my opinion. Plus if you start to add a lot of effects it will be more easily maintainable, and you can benefit from inheritance for super-effects.
The second implementation is better, since each effect comes with its own set of properties (levelRequired, drainRate). Following this approach, you should clearly define your classes, their features, and their common characteristics; it is more in line with object-oriented programming principles.
Trying to understand the concept of encapsulation, I came across this definition: "Combining the attributes and methods in the same entity in such a way as to hide what should be hidden and make visible what is intended to be visible."
But when practicing it, I am not sure which of the following pieces of code is more in the spirit of OOP:
public class Square {
    //private attribute
    private int square;

    //public interface
    public int getSquare(int value) {
        this.square = value * value;
        return this.square;
    }
}
or
public class Square {
    //private attribute
    private int square;

    //public interface
    public int getSquare(int value) {
        this.square = calculateSquare(value);
        return this.square;
    }

    //private implementation
    private int calculateSquare(int value) {
        return value * value;
    }
}
Combining the attributes and methods in the same entity in such a way as to hide what should be hidden and make visible what is intended to be visible
This is a potentially misleading statement. You are NOT hiding anything from anyone. It is also not about methods or fields. Unfortunately this is the way things are worded in almost every place.
How
When writing any piece of a program (be it a function, class, module, or library), we think of the piece we are working on as my code, and every other piece of code as my client code. Now assume that all the client code is written by someone else, NOT you. You write just this code. Assume this even if you are the only person working on the entire project.
Now the client code needs to interact with my code, so my code should be nice and decent to talk to. The concept of encapsulation says that I partition my code into two parts: (1) what the client code should be bothered with, and (2) what the client code should NOT be bothered with. The OO way of achieving encapsulation is using keywords like public and private. The non-OO way is a naming convention, such as leading underscores. Remember, you are not hiding anything; you are just marking it as none-of-your-business.
Why
So why should we encapsulate things? Why should we organize my code into public and private regions? When someone uses my code, they are of course using the whole thing, not just the public part, so how is the private part none of their business? Note that words like someone and their can refer to yourself - but only while working on the other piece of code.
The answer is easy testability and maintainability. Testing a complete project exhaustively can be quite a task. So at minimum, when you are done coding, you just test the public aspects of my code. You do not test any of the client code, and you do not test any of the private aspects of my code. This reduces the test effort while preserving sufficient coverage.
Another aspect is maintainability. My code will NEVER be perfect; it WILL need revisions. Whether for a bugfix or an enhancement, my code will need tinkering. So when a new version of my code is available, how much is the client code impacted? Not at all, if the changes are in private regions. Also, while planning a change, we try to confine it as much as possible to private regions, so that from the client's perspective the change has no impact. A change in the public aspects of my code will almost always require changes in the client code, and that will need testing. While planning the big picture of my code, we try to maximize the area under private regions and minimize the area under public regions.
And more
The idea of encapsulation links with the idea of abstraction, which in turn links with the idea of polymorphism. None of these are strictly about OO. Even in non-OO worlds like C or even assembly they apply; only the way to achieve them differs. These ideas even apply to things beyond computers.
The process of sewage management, for example, is
encapsulated within the public interface of drains. The general public bothers only with the drains. The treatment, the disposal, the recycling are none of the general public's business. Thus, sewage management could be treated as an -
abstract entity - an interface consisting of just the drains. Different governments and companies implement this in their own way. Now a city may have a permanent system of sewage management, or it can periodically -
switch providers. In fifty years of government operation the situation was bad, but once the city contracted it out to BigCorp Inc, people could breathe freely. We just did polymorphism. We switched implementations while keeping the public interface the same. Both the government and BigCorp Inc use the same drains, but their own processing facilities, which are encapsulated away and polymorphically switchable.
In your code
In both of your code samples you chose to encapsulate the storage; the field is made private. This is a nice approach and certainly the OO way. In both samples the algorithm is also encapsulated, i.e. not visible to the client. Nice. In your second sample you went ahead and extracted the algorithm into a separate non-public method. This is a commendable approach, although obviously overkill for something this trivial. Better OO nonetheless.
What you did in the second sample even has a name: the strategy pattern. Even though here it is useless (and overkill), it could be useful in a scenario where, let's say, you are dealing with extremely large numbers, such that calculating their squares takes a very long time. In that scenario you could make your calculateSquare method protected, have a class FastButApproxSquare extends Square, and override calculateSquare with a different algorithm that calculates an approximate value much faster. This way you get polymorphism. Whoever needs the exact value uses the Square class; whoever needs an approximate value uses the FastButApproxSquare class.
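A minimal sketch of that scenario, assuming calculateSquare is changed from private to protected (FastButApproxSquare and its contents are hypothetical, with the faster algorithm left as a placeholder):

public class Square {
    private int square;

    public int getSquare(int value) {
        this.square = calculateSquare(value);
        return this.square;
    }

    protected int calculateSquare(int value) {
        return value * value;
    }
}

class FastButApproxSquare extends Square {
    @Override
    protected int calculateSquare(int value) {
        // a cheaper, approximate calculation would go here
        return value * value;
    }
}

Client code holding a Square reference keeps calling getSquare(value) and never learns which algorithm actually ran.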
Encapsulation is about hiding implementation and structure details from client code. In addition, it is about coherence: keeping things close together that are highly related to each other.
For example consider a class which manages players of a football team:
public class FootballTeam {
    public final List<Player> players = new ArrayList<>();
}
Client code would have access to the list of players, to look them up, to add players and so on:
public class FootballManager {
    private final FootballTeam team = new FootballTeam();

    public void hirePlayer(Player player) {
        team.players.add(player);
    }

    public void firePlayer(int jerseyNo) {
        Optional<Player> player = team.players.stream()
                .filter(p -> p.getJerseyNo() == jerseyNo)
                .findFirst();
        player.ifPresent(p -> team.players.remove(p));
    }
}
Now, if someone decides to change the field FootballTeam.players into a Map<Integer, Player>, mapping the player's jersey number to the player, the client code would break.
In addition, the client code deals with aspects closely related to a player. To protect the client code and to keep the FootballTeam implementation changeable, hide all implementation details, keep player-related functionality close to the structure representing the team, and reduce the public interface surface:
public class FootballTeam {
    private final Map<Integer, Player> players = new HashMap<>();

    public void addPlayer(Player player) {
        players.put(player.getJerseyNo(), player);
    }

    public Optional<Player> lookupPlayer(int jerseyNo) {
        return Optional.ofNullable(players.get(jerseyNo));
    }

    public void remove(Player player) {
        players.remove(player.getJerseyNo());
    }
}

public class FootballManager {
    private final FootballTeam team = new FootballTeam();

    public void hirePlayer(Player player) {
        team.addPlayer(player);
    }

    public void firePlayer(int jerseyNo) {
        team.lookupPlayer(jerseyNo)
            .ifPresent(player -> team.remove(player));
    }
}
If the code serves the purpose of encapsulation, then that code is correct. The purpose of encapsulation is to hide variables from other classes (i.e. by making the variables private) while also providing a way for other classes to access and modify them. Both of your code samples serve this purpose correctly.
If you had made the calculateSquare(int value) method public, there would have been a problem: other classes could call that method directly without going through the get/set methods. So as long as this method is private, I think both versions are all right.
Say I have a List of objects which were defined using lambda expressions (closures). Is there a way to inspect them so they can be compared?
The code I am most interested in is
List<Strategy> strategies = getStrategies();
Strategy a = (Strategy) this::a;
if (strategies.contains(a)) { // ...
The full code is
import java.util.Arrays;
import java.util.List;

public class ClosureEqualsMain {
    interface Strategy {
        void invoke(/*args*/);

        default boolean equals(Object o) { // doesn't compile
            return Closures.equals(this, o);
        }
    }

    public void a() { }
    public void b() { }
    public void c() { }

    public List<Strategy> getStrategies() {
        return Arrays.asList(this::a, this::b, this::c);
    }

    private void testStrategies() {
        List<Strategy> strategies = getStrategies();
        System.out.println(strategies);
        Strategy a = (Strategy) this::a;
        // prints false
        System.out.println("strategies.contains(this::a) is " + strategies.contains(a));
    }

    public static void main(String... ignored) {
        new ClosureEqualsMain().testStrategies();
    }

    enum Closures {;
        public static <Closure> boolean equals(Closure c1, Closure c2) {
            // This doesn't compare the contents
            // like other immutables e.g. String
            return c1.equals(c2);
        }

        public static <Closure> int hashCode(Closure c) {
            return // a hashCode which can detect duplicates for a Set<Strategy>
        }

        public static <Closure> String asString(Closure c) {
            return // something better than Object.toString();
        }
    }

    public String toString() {
        return "my-ClosureEqualsMain";
    }
}
It would appear the only solution is to define each lambda as a field and only use those fields. If you want to print out the method called, you are better off using Method. Is there a better way with lambda expressions?
Also, is it possible to print a lambda and get something human readable? If you print this::a, instead of
ClosureEqualsMain$$Lambda$1/821270929@3f99bd52
could you get something like
ClosureEqualsMain.a()
or even use this.toString and the method:
my-ClosureEqualsMain.a();
This question could be interpreted relative to the specification or the implementation. Obviously, implementations could change, but you might be willing to rewrite your code when that happens, so I'll answer at both levels.
It also depends on what you want to do. Are you looking to optimize, or are you looking for ironclad guarantees that two instances are (or are not) the same function? (If the latter, you're going to find yourself at odds with computational physics, in that even problems as simple as asking whether two functions compute the same thing are undecidable.)
From a specification perspective, the language spec promises only that the result of evaluating (not invoking) a lambda expression is an instance of a class implementing the target functional interface. It makes no promises about the identity, or degree of aliasing, of the result. This is by design, to give implementations maximal flexibility to offer better performance (this is how lambdas can be faster than inner classes; we're not tied to the "must create unique instance" constraint that inner classes are.)
So basically, the spec doesn't give you much, except obviously that two lambdas that are reference-equal (==) are going to compute the same function.
From an implementation perspective, you can conclude a little more. There is (currently, may change) a 1:1 relationship between the synthetic classes that implement lambdas, and the capture sites in the program. So two separate bits of code that capture "x -> x + 1" may well be mapped to different classes. But if you evaluate the same lambda at the same capture site, and that lambda is non-capturing, you get the same instance, which can be compared with reference equality.
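As a rough illustration of that last point (this relies on current implementation behaviour, not on anything the specification promises):

public class CaptureSiteDemo {
    static Runnable nonCapturing() {
        return () -> System.out.println("hi");   // uses no enclosing state
    }

    static Runnable capturing(String msg) {
        return () -> System.out.println(msg);    // closes over msg
    }

    public static void main(String[] args) {
        // On current HotSpot builds the non-capturing lambda is cached per capture
        // site, so repeated evaluation tends to yield the same instance ...
        System.out.println(nonCapturing() == nonCapturing()); // typically true
        // ... while each evaluation of the capturing lambda produces a fresh instance.
        System.out.println(capturing("a") == capturing("a")); // false
    }
}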
If your lambdas are serializable, they'll give up their state more easily, in exchange for sacrificing some performance and security (no free lunch.)
One area where it might be practical to tweak the definition of equality is with method references because this would enable them to be used as listeners and be properly unregistered. This is under consideration.
I think what you're trying to get to is: if two lambdas are converted to the same functional interface, are represented by the same behavior function, and have identical captured args, they're the same.
Unfortunately, this is both hard to do (for non-serializable lambdas, you can't get at all the components of that) and not enough (because two separately compiled files could convert the same lambda to the same functional interface type, and you wouldn't be able to tell.)
The EG discussed whether to expose enough information to be able to make these judgments, as well as discussing whether lambdas should implement more selective equals/hashCode or more descriptive toString. The conclusion was that we were not willing to pay anything in performance cost to make this information available to the caller (bad tradeoff, punishing 99.99% of users for something that benefits .01%).
A definitive conclusion on toString was not reached but left open to be revisited in the future. However, there were some good arguments made on both sides on this issue; this is not a slam-dunk.
To compare lambdas I usually let the interface extend Serializable and then compare the serialized bytes. Not very nice, but it works for most cases.
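For what it's worth, here is a rough sketch of that approach; the class and interface names are made up, and it assumes the functional interface extends Serializable. Two method references to the same method, captured the same way, tend to serialize to the same bytes:

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.Arrays;

public class SerializedComparison {
    interface Strategy extends Serializable {
        void invoke();
    }

    static void a() { }

    static byte[] toBytes(Object lambda) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(lambda);   // writes the lambda's SerializedLambda form
        }
        return bytes.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        Strategy first = SerializedComparison::a;
        Strategy second = SerializedComparison::a;
        // Compares serialized state rather than references; prints true here, but
        // treat it as an implementation-dependent heuristic, not a guarantee.
        System.out.println(Arrays.equals(toBytes(first), toBytes(second)));
    }
}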
I don't see a way to get that information from the closure itself.
Closures don't expose their state.
But you can use Java reflection if you want to inspect and compare the methods.
Of course that is not a very beautiful solution, because of the performance cost and the exceptions that have to be caught. But this way you get that meta-information.
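One reflection-based sketch along those lines, assuming the lambdas are serializable: dig out the SerializedLambda that backs the instance and look at the method it points to. The writeReplace lookup is an implementation detail and may be restricted on future JVMs.

import java.io.Serializable;
import java.lang.invoke.SerializedLambda;
import java.lang.reflect.Method;

public class LambdaInspector {
    interface Strategy extends Serializable {
        void invoke();
    }

    static void a() { }

    static SerializedLambda unwrap(Object lambda) throws ReflectiveOperationException {
        // Serializable lambdas carry a synthetic writeReplace method that
        // returns their SerializedLambda representation.
        Method writeReplace = lambda.getClass().getDeclaredMethod("writeReplace");
        writeReplace.setAccessible(true);
        return (SerializedLambda) writeReplace.invoke(lambda);
    }

    public static void main(String[] args) throws ReflectiveOperationException {
        SerializedLambda sl = unwrap((Strategy) LambdaInspector::a);
        // Something close to the human-readable form asked about above, e.g. "LambdaInspector.a"
        System.out.println(sl.getImplClass().replace('/', '.') + "." + sl.getImplMethodName());
    }
}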
I'm currently working on a rather massive project with several classes of over 20,000 lines. This is because it was someone's bright idea to mix all the generated Swing code for the UI in with all of the functional code.
I was wondering if it would incur any extra cost in terms of memory or run time, to move most of the non-UI related functions into a separate class.
To provide an example, this is something along the lines of what I'm building.
public class Class1 {
    private Class1Util c1u;
    List<String> infoItems;
    ...

    public Class1() {
        c1u = new Class1Util(this);
    }

    public void btnAction(ActionListener al) {
        ...
        c1u.loadInfoFromDatabase();
    }
}

public class Class1Util {
    private Class1 c;

    public Class1Util(Class1 c) {
        this.c = c;
    }

    public void loadInfoFromDatabase() {
        c.infoItems.add("blah");
    }
}
Eventually, I'd also like to move some of the fields like infoItems over as well, which would result in a reverse relationship, with Class1 accessing c1u.infoItems.
No, separation of concerns is a good object-oriented design practice. It will not cost you anything meaningful in terms of performance and will gain you many, many benefits in terms of maintenance, extensibility, etc.
You may get a tiny performance hit from the extra level of dereferencing, but it would not be noticeable in UI code, and you will get so much extra clarity in return that you won't regret it.
Eventually you may want to externalize the state into a third class and then use that state from both your hand-written and generated code, or use the Generation Gap pattern to manage the complexity introduced by the need to integrate with generated code.
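A very rough sketch of that "externalize the state" idea, using hypothetical names (Class1State is not from the original code); both the generated UI class and the hand-written helper would be handed the same state object:

import java.util.ArrayList;
import java.util.List;

public class Class1State {
    final List<String> infoItems = new ArrayList<>();
}

class Class1Util {
    private final Class1State state;

    Class1Util(Class1State state) {
        this.state = state;
    }

    void loadInfoFromDatabase() {
        // works against the shared state, not against the UI class
        state.infoItems.add("blah");
    }
}

The UI class would hold the same Class1State and pass it to Class1Util, so neither class needs a back-reference to the other.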
This might sound like a weird idea and I haven't thought it through properly yet.
Say you have an application that ends up requiring a certain number of singletons to do some I/O for example. You could write one singleton and basically reproduce the code as many times as needed.
However, as programmers we're supposed to come up with inventive solutions that avoid redundancy or repetition of any kind. What would be a solution to make multiple somethings that could each act as a singleton?
P.S: This is for a project where a framework such as Spring can't be used.
You could introduce an abstraction like this:
public abstract class Singleton<T> {
    private T object;

    public synchronized T get() {
        if (object == null) {
            object = create();
        }
        return object;
    }

    protected abstract T create();
}
Then for each singleton, you just need to write this:
public final Singleton<Database> database = new Singleton<Database>() {
    @Override
    protected Database create() {
        // connect to the database, return the Database instance
    }
};

public final Singleton<LogCluster> logs = new Singleton<LogCluster>() {
    ...
Then you can use the singletons by writing database.get(). If the singleton hasn't been created, it is created and initialized.
The reason people probably don't do this, and prefer to just repeatedly write something like this:
private Database database;

public synchronized Database getDatabase() {
    if (database == null) {
        // connect to the database, assign the database field
    }
    return database;
}

private LogCluster logs;

public synchronized LogCluster getLogs() {
    ...
Is because in the end it is only one more line of code for each singleton, and the chance of getting the initialize-singleton pattern wrong is pretty low.
However, as programmers we're supposed to come up with inventive solutions that avoid redundancy or repetition of any kind.
That is not correct. As programmers, we are supposed to come up with solutions that meet the following criteria:
meet the functional requirements; e.g. perform as required without bugs,
are delivered within the mandated timeframe,
are maintainable; e.g. the next developer can read and modify the code,
perform fast enough for the task in hand, and
can be reused in future tasks.
(These criteria are roughly ordered by decreasing priority, though different contexts may dictate a different order.)
Inventiveness is NOT a requirement, and "avoid[ing] redundancy or repetition of any kind" is not either. In fact both of these can be distinctly harmful ... if the programmer ignores the real criteria.
Bringing this back to your question. You should only be looking for alternative ways to do singletons if it is going to actually make the code more maintainable. Complicated "inventive" solutions may well return to bite you (or the people who have to maintain your code in the future), even if they succeed in reducing the number of lines of repeated code.
And as others have pointed out (e.g. @BalusC), current thinking is that the singleton pattern should be avoided in a lot of classes of application.
There does exist a multiton pattern. Regardless, I am 60% certain that the real solution to the original problem is an RDBMS.
@BalusC is right, but I will say it more strongly: singletons are evil in all contexts.
Webapps, desktop apps, etc. Just don't do it.
All a singleton is in reality is a global wad of data. Global data is bad. It makes proper unit testing impossible. It makes tracing down weird bugs much, much harder.
The Gang of Four book is flat out wrong here. Or at least obsolete by a decade and a half.
If you want only one instance, have a factory that makes only one. It's easy.
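A minimal sketch of what that can look like, with entirely hypothetical names: the factory owns the single instance, and clients receive it instead of reaching for a global.

public class AppFactory {
    private final Database database = new Database();      // the only instance ever made

    public ReportService newReportService() {
        return new ReportService(database);                 // hand the one instance to clients
    }
}

class Database { }

class ReportService {
    private final Database database;

    ReportService(Database database) {
        this.database = database;   // injected, so tests can pass a fake instead
    }
}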
How about passing a parameter to the function that creates the singleton (for example, its name or specialization), so that it knows to create one instance for each unique parameter?
I know you asked about Java, but here is a solution in PHP as an example:
abstract class Singleton
{
    protected function __construct()
    {
    }

    final public static function getInstance()
    {
        static $instances = array();
        $calledClass = get_called_class();
        if (!isset($instances[$calledClass]))
        {
            $instances[$calledClass] = new $calledClass();
        }
        return $instances[$calledClass];
    }

    final private function __clone()
    {
    }
}
Then you just write:
class Database extends Singleton {}
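For the Java project in the question, a rough analogue of the same idea (the names are mine) is a small multiton keyed by whatever parameter distinguishes the instances:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

public final class Multiton<T> {
    private final Map<String, T> instances = new ConcurrentHashMap<>();
    private final Supplier<T> factory;

    public Multiton(Supplier<T> factory) {
        this.factory = factory;
    }

    public T get(String key) {
        // creates the instance for this key on first use, then reuses it
        return instances.computeIfAbsent(key, k -> factory.get());
    }
}

With, say, Multiton<Database> databases = new Multiton<>(Database::new) (assuming a Database class with a no-arg constructor), each call like databases.get("reporting") or databases.get("billing") lazily creates and then reuses one instance per key.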
Suppose you're maintaining an API that was originally released years ago (before Java gained enum support) and it defines a class with enumeration values as ints:
public class VitaminType {
    public static final int RETINOL = 0;
    public static final int THIAMIN = 1;
    public static final int RIBOFLAVIN = 2;
}
Over the years the API has evolved and gained Java 5-specific features (generified interfaces, etc). Now you're about to add a new enumeration:
public enum NutrientType {
    AMINO_ACID, SATURATED_FAT, UNSATURATED_FAT, CARBOHYDRATE;
}
The 'old style' int-enum pattern has no type safety, no possibility of adding behaviour or data, etc, but it's published and in use. I'm concerned that mixing two styles of enumeration is inconsistent for users of the API.
I see three possible approaches:
Give up and define the new enum (NutrientType in my fictitious example) as a series of ints like the VitaminType class. You get consistency but you're not taking advantage of type safety and other modern features.
Decide to live with an inconsistency in a published API: keep VitaminType around as is, and add NutrientType as an enum. Methods that take a VitaminType are still declared as taking an int, methods that take a NutrientType are declared as taking such.
Deprecate the VitaminType class and introduce a new VitaminType2 enum. Define the new NutrientType as an enum. Congratulations, for the next 2-3 years until you can kill the deprecated type, you're going to deal with deprecated versions of every single method that took a VitaminType as an int and adding a new foo(VitaminType2 v) version of each. You also need to write tests for each deprecated foo(int v) method as well as its corresponding foo(VitaminType2 v) method, so you just multiplied your QA effort.
What is the best approach?
How likely is it that the API consumers are going to confuse VitaminType with NutrientType? If it is unlikely, then maybe it is better to maintain API design consistency, especially if the user base is established and you want to minimize the delta of work/learning required by customers. If confusion is likely, then NutrientType should probably become an enum.
This needn't be a wholesale overnight change; for example, you could expose the old int values via the enum:
public enum Vitamin {
    RETINOL(0), THIAMIN(1), RIBOFLAVIN(2);

    private final int intValue;

    Vitamin(int n) {
        intValue = n;
    }

    public int getVitaminType() {
        return intValue;
    }

    public static Vitamin asVitamin(int intValue) {
        for (Vitamin vitamin : Vitamin.values()) {
            if (intValue == vitamin.getVitaminType()) {
                return vitamin;
            }
        }
        throw new IllegalArgumentException();
    }
}

/** Use foo.Vitamin instead */
@Deprecated
public class VitaminType {
    public static final int RETINOL = Vitamin.RETINOL.getVitaminType();
    public static final int THIAMIN = Vitamin.THIAMIN.getVitaminType();
    public static final int RIBOFLAVIN = Vitamin.RIBOFLAVIN.getVitaminType();
}
This allows you to update the API and gives you some control over when to deprecate the old type and scheduling the switch-over in any code that relies on the old type internally.
Some care is required to keep the literal values in sync with those that may have been in-lined with old consumer code.
Personal opinion is that it's probably not worth the effort of trying to convert. For one thing, the "public static final int" idiom isn't going away any time soon, given that it's sprinkled liberally all over the JDK. For another, tracking down usages of the original ints is likely to be really unpleasant, given that your classes will compile away the reference so you're likely not to know you've broken anything until it's too late
(by which I mean
class A
{
    public static final int MY_CONSTANT = 1;
}

class B
{
    ....
    i += A.MY_CONSTANT;
}
gets compiled into
i+=1
So if you rewrite A, you may never realize that B is broken until you recompile B later.)
It's a pretty well known idiom, probably not so terrible to leave it in, certainly better than the alternative.
There is a rumor that the creator of "make" realized that the syntax of Makefiles was bad, but felt that he couldn't change it because he already had 10 users.
Backwards compatibility at all costs, even if it hurts your customers, is a bad thing. SO can't really give you a definitive answer on what to do in your case, but be sure and consider the cost to your users over the long term.
Also think about ways you can refactor the core of your code while keeping the old integer-based enums only at the outer layer.
Wait for the next major revision, change everything to enum and provide a script (sed, perl, Java, Groovy, ...) to convert existing source code to use the new syntax.
Obviously this has two drawbacks:
No binary compatibility. How important this one is depends on the use cases, but can be acceptable in the case of a new major release
Users have to do some work. If the work is simple enough, then this too may be acceptable.
In the meantime, add new types as enums and keep old types as ints.
The best thing would be to fix the published versions, if possible. In my opinion consistency would be the best solution, so you would need to do some refactoring. I personally don't like deprecated things, because they get in the way. You might be able to wait until a bigger version release, use those ints until then, and then refactor everything in one big project. If that is not possible, you might consider yourself stuck with the ints, unless you create some kind of wrappers or something.
If nothing helps but you still evolve the code, you end up losing consistency or living with the deprecated versions. In any case, usually at some point people become fed up with old stuff once it has lost its consistency, and they create something new from scratch... So you would have the refactoring in the future no matter what.
The customer might scrap the project and buy another product if something goes wrong. Usually it is not the customer's problem whether you can afford refactoring or not; they just buy what is appropriate and usable for them. So in the end it is a tricky problem, and care needs to be taken.