This is more of a theory question, I guess; I am trying to find the standard procedure for this.
If I have a constructor that does a whole lot of setup work, gathering data and such, should I keep "all things construction" in the constructor, should I call other methods from inside the constructor (basically for readability), or should I just initialize everything I have to and leave other things to be dealt with later, if they are actually needed?
Here is an example.
I am creating an object that is a collection manager basically. It needs to read in data from a file and it stores it inside of an array.
Do I use the constructor just to create an object with base properties and read the data in later, should I read in all the info and set up the array inside the constructor (which saves time later but takes extra time now), or should I do something along the lines of:
public myConstructor(String filename) {
    data = readDataIn(filename);
}
This is not actual code, just an example of delegating work to different methods to "pretty up the code": instead of one super long constructor method, I can have, say, 5-6 short, readable methods that can only be accessed by the constructor.
The constructor should do just enough work to get the instance into a state that satisfies its contract. Each method should then do just enough work to fulfill the method's contract and leave the instance in a state that satisfies its contract.
Very rarely should a constructor call cause side-effects or modify its inputs. These are just not often required to satisfy a contract. For example, a connection class shouldn't touch the network on construction. Since it has to be closeable, the closed state must be part of its contract, and so the "just enough work" standard dictates that the constructor puts it in a ready, but not yet open state.
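For instance, here is a minimal sketch of that "ready, but not yet open" idea (a hypothetical class, using java.net.Socket purely for illustration):

import java.io.IOException;
import java.net.Socket;

// Hypothetical example: the constructor only records configuration and leaves the
// connection in a closed state; no network I/O happens until open() is called.
public final class ServerConnection implements AutoCloseable {
    private final String host;
    private final int port;
    private Socket socket; // null until open()

    public ServerConnection(String host, int port) {
        this.host = host; // just enough work: store the inputs
        this.port = port;
    }

    public void open() throws IOException {
        if (socket == null) {
            socket = new Socket(host, port); // the network is touched here, not in the constructor
        }
    }

    @Override
    public void close() throws IOException {
        if (socket != null) {
            socket.close();
            socket = null;
        }
    }
}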
Your particular example couples your class to the file system. You would probably get a more testable, more general class by using Guava Files to do the reading and taking a String with the content instead. You can get the convenience of a constructor coupled to the file system by writing a convenient static MyClass fromFile(String path) factory function that does new MyClass. That moves the portion of your code that is coupled to the filesystem outside the portion that interacts with instance variables, reducing the number of possible interactions to test. As others have noted, dependency injection is another good way to achieve decoupling.
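To illustrate, a rough sketch of that shape (class and field names are invented, and java.nio.file.Files is used instead of Guava just to keep the example self-contained):

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

public class MyClass {
    private final String data;

    // The constructor only deals with data it is handed; no file system coupling here.
    public MyClass(String content) {
        this.data = content;
    }

    // Convenience factory: the file system coupling lives here, outside the code
    // that interacts with instance variables.
    public static MyClass fromFile(String path) throws IOException {
        return new MyClass(new String(Files.readAllBytes(Paths.get(path)), StandardCharsets.UTF_8));
    }
}

In tests you can then call new MyClass("some content") directly, without touching a real file.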
Really depends on your API style. Note that you may wish to have multiple constructors, such as:
public MyThing(String filename) { }
public MyThing(FileInputStream filestream) {}
public MyThing(File file) { }
public MyThing(byte[] rawdata) { }
at which point it's judicious to consolidate the file loading operation into a method or two (file open and file parse).
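A rough sketch of what that consolidation could look like (everything here is invented for illustration, and the FileInputStream variant is omitted for brevity): every constructor converts its argument and funnels into the byte[] constructor, which calls a single private parse method.

import java.io.File;
import java.io.IOException;
import java.nio.file.Files;

public class MyThing {
    private final byte[] data;

    public MyThing(String filename) throws IOException {
        this(new File(filename));
    }

    public MyThing(File file) throws IOException {
        this(Files.readAllBytes(file.toPath())); // file open/read consolidated here
    }

    public MyThing(byte[] rawdata) {
        this.data = parse(rawdata); // the only place that understands the format
    }

    private static byte[] parse(byte[] rawdata) {
        // real parsing would go here; kept trivial for the sketch
        return rawdata.clone();
    }
}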
In this case, I would use dependency injection, so that your constructor requires data that has already been computed, and defers the computation to whatever invokes the constructor. I might provide an additional static factory function that does all this complicated setup so that it is convenient to construct this object (e.g. in tests), but at least it would be possible for the user of this class to come up with a more clever (possibly parallelized or lazily-initialized) way of creating this class.
I was wondering, when constructing an object, is there any difference between a setter returning this:
public User withName(String name) {
    this.name = name;
    return this;
}
and a builder (for example one which is generated by Builder Generator plugin for IDEA)?
My first impression is that a setter returning this is much better:
it uses less code - no extra class for builder, no build() call at the end of object construction.
it reads better:
new User().withName("Some Name").withAge(30);
vs
User.UserBuilder.anUserBuilder().withName("Some Name").withAge(30).build();
Then why use a builder at all? Is there anything I am missing?
The crucial thing to understand is the concept of an immutable type.
Let's say I have this code:
public class UnitedStates {
    private static final List<String> STATE_NAMES =
        Arrays.asList("Washington", "Ohio", "Oregon", "... etc");

    public static List<String> getStateNames() {
        return STATE_NAMES;
    }
}
Looks good, right?
Nope! This code is broken! See, I could do this, whilst twirling my moustache and wielding a monocle:
UnitedStates.getStateNames().set(0, "Turtlia"); // Haha, suck it washington!!
and that will work. Now for ALL callers, apparently there's some state called Turtlia. Washington? Wha? Nowhere to be found.
The problem is that Arrays.asList returns a mutable object: There are methods you can invoke on this object that change it.
Such objects cannot be shared with code you don't trust, and given that you don't remember every line you ever wrote, you can't trust yourself in a month or two, so you basically can't trust anybody. If you want to write this code properly, all you have to do is use List.of instead of Arrays.asList, because List.of produces an immutable object. It has zero methods that change it. It seems like it has methods that do (it has a set method!), but try invoking it. It won't work: you'll get an exception, and crucially, the list does not change. It is in fact impossible to change it. Fortunately, String is also immutable.
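A quick demonstration of the difference (this assumes Java 9+ for List.of):

import java.util.Arrays;
import java.util.List;

public class ImmutabilityDemo {
    public static void main(String[] args) {
        List<String> mutable = Arrays.asList("Washington", "Ohio", "Oregon");
        mutable.set(0, "Turtlia"); // works: the shared list really has changed

        List<String> immutable = List.of("Washington", "Ohio", "Oregon");
        immutable.set(0, "Turtlia"); // throws UnsupportedOperationException; the list is untouched
    }
}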
Immutables are much easier to reason about, and can be shared freely with whatever you like without copying.
So, want your own immutable? Great - but apparently the only way to make one is to have a constructor where all values are set and that's it - immutable types cannot have set methods, because those would mutate them.
If you have a lot of fields, especially if those fields have the same or similar types, this gets annoying fast. Quick!
new Bridge("Golden Gate", 1280, 1937, 2737);
when was it built? How long is it? What's the length of the largest span?
Uhhhhhhh..... how about this instead:
newBridge()
    .name("Golden Gate")
    .longestSpan(1280)
    .built(1937)
    .length(2737)
    .build();
Sweet. Names! Builders also let you build over time (by passing the builder around to different bits of code, each responsible for setting up its own bits). But a BridgeBuilder isn't a Bridge, and each invocation of build() will make a new one, so you keep the general rules about immutability (a BridgeBuilder is not immutable, but any Bridge objects made by the build() method are).
If we try to do this with setters, it doesn't work. Bridges can't have setters. You can have 'withers', where you have set-like methods that create entirely new objects, but calling these 'set' is misleading, and you create both a ton of garbage (rarely relevant, the GC is very good at collecting short-lived objects) and intermediate, senseless bridges:
Bridge goldenGate = Bridge.create().withName("Golden Gate").withLength(2737);
somewhere in the middle of that operation you have a bridge named 'Golden Gate', with no length at all.
In fact, the builder can decide not to let you build() a bridge with no length, by checking for that and throwing if you try. This process of invoking one method at a time can't do that. At best it can mark a bridge instance as 'invalid', and any attempt to interact with it, short of calling .withX() methods on it, results in an exception, but that's more effort, and it leads to a less discoverable API (the with methods are mixed up with the rest, and all the other methods appear to throw some state exception that is normally never relevant... that feels icky).
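For completeness, a hand-rolled builder along those lines might look roughly like this (just a sketch, not the only way to write it); note that build() is where the "no bridge without a length" check lives:

public final class Bridge {
    private final String name;
    private final int built;
    private final int length;
    private final int span;

    private Bridge(String name, int built, int length, int span) {
        this.name = name;
        this.built = built;
        this.length = length;
        this.span = span;
    }

    public static Builder builder() {
        return new Builder();
    }

    public static final class Builder {
        private String name;
        private int built;
        private int length = -1; // sentinel: "not set yet"
        private int span;

        public Builder name(String name) { this.name = name; return this; }
        public Builder built(int built) { this.built = built; return this; }
        public Builder length(int length) { this.length = length; return this; }
        public Builder span(int span) { this.span = span; return this; }

        public Bridge build() {
            if (length < 0) {
                throw new IllegalStateException("a Bridge needs a length");
            }
            return new Bridge(name, built, length, span); // the Bridge itself is immutable
        }
    }
}

The builder is then used exactly like the newBridge() chain shown above.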
THAT is why you need builders.
NB: Project Lombok's @Builder annotation gives you builders for no effort at all. All you'd have to write is:
import lombok.Value;
import lombok.Builder;

@Value @Builder
public class Bridge {
    String name;
    int built;
    int length;
    int span;
}
and lombok automatically takes care of the rest. You can just Bridge.builder().name("Golden Gate").span(1280).built(1937).length(2737).build();.
The Builder is a design pattern used to bring a clear structure to the code. Builders are also often used to create immutable objects. You can also enforce preconditions when the build() method is called.
I think your question is better formulated like:
Shall we create a separate Builder class when implementing the Builder Pattern or shall we just keep returning the same instance?
According to the Head First Design Patterns:
Use the Builder Pattern to encapsulate the construction of a product
and allow it to be constructed in steps.
Hence, encapsulation is the important point.
Let's now look at the difference between the approaches you gave in your original question. The main difference is the design of how you implement the Builder pattern, i.e. how you keep building the object:
In the separate ObjectBuilder class approach, you keep returning the Builder object and only(!) return the finalized/built object after you have finished building, which better encapsulates the creation process. It is the more consistent and structurally well-designed approach, because you have clearly separated two distinct phases:
1.1) Building the object;
1.2) Finalizing the building, and returning the built instance (this may give you the facility to have immutable built objects, if you eliminate setters).
In the example of just returning this from the same type, you can still modify the object afterwards, which will probably lead to an inconsistent and less safe design of the class.
It depends on the nature of your class. If your fields are not final (i.e. if the class can be mutable), then doing this:
new User().setEmail("alalal@gmail.com").setPassword("abcde");
or doing this:
User.newBuilder().withEmail("alalal@gmail.com").withPassword("abcde").build();
... changes nothing.
However, if your fields are supposed to be final (which, generally speaking, is to be preferred, in order to avoid unwanted modification of the fields when it is not necessary for them to be mutable), then the builder pattern guarantees that your object will not be constructed until all the fields have been set.
Of course, you may reach the same result exposing a single constructor with all the parameters:
public User(String email, String password);
... but when you have a large number of parameters it becomes more convenient and more readable to be able to see each of the sets you do before building the object.
One advantage of a Builder is you can use it to create an object without knowing its precise class - similar to how you could use a Factory. Imagine a case where you want to create a database connection, but the connection class differs between MySQL, PostgreSQL, DB2 or whatever - the builder could then choose and instantiate the correct implementation class, and you do not need to actually worry about it.
A setter function, of course, can not do this, because it requires an object to already be instantiated.
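A rough sketch of that idea (all of the class names here are invented; a real JDBC connection would of course be obtained differently):

// The builder decides which concrete class to instantiate, so callers only
// ever see the DatabaseConnection interface.
interface DatabaseConnection {
    void connect();
}

class MySqlConnection implements DatabaseConnection {
    public void connect() { /* MySQL-specific handshake */ }
}

class PostgresConnection implements DatabaseConnection {
    public void connect() { /* PostgreSQL-specific handshake */ }
}

class ConnectionBuilder {
    private String url;

    ConnectionBuilder url(String url) { this.url = url; return this; }

    DatabaseConnection build() {
        if (url.startsWith("jdbc:mysql:")) {
            return new MySqlConnection();
        } else if (url.startsWith("jdbc:postgresql:")) {
            return new PostgresConnection();
        }
        throw new IllegalArgumentException("Unsupported database URL: " + url);
    }
}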
The key point is whether the intermediate object is a valid instance.
If new User() is a valid User, and new User().withName("Some Name") is a valid User, and new User().withName("Some Name").withAge(30) is a valid user, then by all means use your pattern.
However, is a User really valid if you've not provided a name and an age? Perhaps, perhaps not: it could be if there is a sensible default value for these, but names and ages can't really have default values.
The thing about a User.Builder is the intermediate result isn't a User: you set multiple fields, and only then build a User.
I have just started to learn Java and am curious whether there is any good practice in Java for good object decomposition. Let me describe the problem. In big software projects there are always big classes like 'core' or 'ui' that tend to have a lot of methods and are intended as mediators between smaller classes. For example, if the user clicks a button on some window, this window's class sends a message to the 'ui' class. The 'ui' class catches this message and acts accordingly, either by doing something with the application's user interface (via calling a method of one of its member objects) or by posting a message to the application 'core' if it's something like 'exit application' or 'start network connection'.
Such objects are very hard to break apart since they are mere mediators between lots of small application objects. But having classes with hundreds or thousands of methods in an application is not very handy, even if such methods are trivial delegations of a task from one object to another. C# solves this problem by allowing a class implementation to be broken into multiple source files: you can divide a god object any way you choose, and it will work.
Are there any practices for dividing such objects in Java?
One way to begin breaking such a large object apart is to first find a good subset of fields or properties managed by the large object that are related to each other and that don't interact with other fields or properties of the object. Then, create a new, smaller object using only those fields. That is, move all logic from the large class to the new smaller class. In the original large class, create a delegation method that simply passes the request along. This is a good first step that only involves changing the big object. It doesn't reduce the number of methods, but it can greatly reduce the amount of logic needed in the large class.
After a few rounds of doing this, you can begin to remove some of the delegation by pointing other objects directly at the newer, smaller objects, rather than going through the previously-huge object that was in the middle of everything.
See Wikipedia's Delegation pattern discussion for example.
As a simple example, if you have a personnel object to represent staff at a company, then you could create a payroll object to keep track of payroll-related values, a ratings object to keep track of employee ratings, an awards object to keep track of awards that the person has won, and so on.
To wit, if you started out with one big class containing the following methods, each containing business logic, among many other methods:
...
public boolean isManagement() { ... }
public boolean isExecutive() { ... }
public int getYearsOfService() { ... }
public Date getHireDate() { ... }
public int getDepartment() { ... }
public BigDecimal getBasePay() { ... }
public BigDecimal getStockShares() { ... }
public boolean hasStockSharePlan() { ... }
...
then this big object could, in its constructor, create a new StaffType object, a new PayInformation object and a new StaffInformation object, and initially these methods in the big object would look like:
// Newly added variables, initialized in the constructor (or as appropriate)
private final StaffType staffType;
private final StaffInformation staffInformation;
private final PayInformation payInformation;
...
public boolean isManagement() { return staffType.isManagement(); }
public boolean isExecutive() { return staffType.isExecutive(); }
public int getYearsOfService() { return staffInformation.getYearsOfService(); }
public Date getHireDate() { return staffInformation.getHireDate(); }
public int getDepartment() { return staffInformation.getDepartment(); }
public BigDecimal getBasePay() { return payInformation.getBasePay(); }
public BigDecimal getStockShares() { return payInformation.getStockShares(); }
public boolean hasStockSharePlan() { return payInformation.hasStockSharePlan(); }
...
where the full logic that used to be in the big object has been moved to these three new smaller objects. With this change, you can break the big object into smaller parts without having to touch anything that makes use of the big object. However, as you do this over time, you'll find that some clients of the big object may only need access to one of the divisible components. For these clients, instead of them using the big object and delegating to the specific object, they can make direct use of the small object. But even if this refactoring never occurs, you've improved things by separating the business logic of unrelated items into different classes.
The next logical step may be to change the BigClass into a java package. Next create new objects for each group of related functionality (noting in each class that the object is part of the new package).
The benefits of doing this are dependency reduction and performance.
No need to import the entire package/BigClass just to get a few methods.
Code changes to related functionality don't require a recompile/redeploy of the entire package/BigClass.
Less memory used for allocating/deallocating objects, since you are using smaller classes.
I've seen some cases where this is solved by inheritance: let's say class Big takes care of 5 different things, and (for various reasons) they all have to be in the same class. So you pick an arbitrary inheritance order, and define:
BigPart1 // all methods dealing with topic #1
BigPart2 extends BigPart1 // all methods dealing with topic #2
...
Big extends BigPart4 // all methods dealing with the last topic.
If you can really layer things up, so that the breakage makes sense (Part2 actually uses stuff from Part1, but not vice versa, etc.) then maybe it makes some sense.
The place where I've seen this is in WebWorks, where a single class had tons of getter/setter methods -- the setters used for dependency injection (e.g., URL args passed to the object upon execution) and the getters for making values accessible to various page templates (I think it was JSPs).
So, the breakdown grouped stuff logically, e.g., assuming the class was called MyAction, there was MyActionBasicArgs (fields and setters for basic CGI arguments), extended by MyActionAdvancedArgs (advanced-option args), extended by MyActionExposedValues (getters), extended by MyActionDependencies (setters used by Spring dependency injection, non-CGI args), extended by MyAction (which contained the actual execute() method).
Because of the way dependency injection in WebWorks works (or at least, used to work, back then), it had to be one huge class, so breaking it down this way made things more maintainable. But first, please, please, see if you can simply avoid having a single huge class; think carefully about your design.
Yes, C# provides partial classes. I assume this is what you are referring to when you say:
C# solves such problem by allowing to break class implementation into multiple source
files: you can divide god object any way you choose, and it will work.
This does help make huge classes more manageable. However, I find partial classes best used when one needs to extend code created by a code generator. When a class is as large as you're talking about, it can almost always be divided into smaller classes by proper object oriented design. Using a partial class sidesteps the more correct object oriented design, which is sometimes OK as the end goal is stable, reliable, maintainable code, and not a textbook example of OO code. However, many times, putting the code of a large object into a large number of smaller partial class instances of the same class is not the ideal solution.
If you can possibly find subsets of the properties of the "god" object that do not interact with one another, then each one of those sets would logically make a good candidate for a new object type. However, if all properties of this "god" object depend on one another, then there is not much you can do to decompose the object.
I don't know why you would ever have such a large class.
I suppose if you were using a gui builder code generation and being lazy about it, you might end up in such a situation, but codegen usually ends up nasty unless you take control yourself.
Splitting a single class arbitrarily is a terrible solution to a terrible manufactured problem. (Code reuse, for one thing will become virtually impossible)
If you have to use a GUI builder, have it build smaller components, then use the small components to build up a bigger GUI. Each component should do exactly one job and do it well.
Try not to EVER edit generated code if you can avoid it. Putting business logic into a genned "frame" is just a horrid design pattern. Most code generators aren't very helpful with this, so try to just make a single, minimal edit to get at what you need from external classes (think MVC where the genned code is your View and the code you edit should be in your Model and Controller).
Sometimes you can just expose the getComponents method from the Frame object, get all the components out by iterating through the containers and then dynamically bind them to data and code (often binding to the name property works well), I've been able to safely use form editors this way, and all the binding code tends to be very easily abstracted and reused.
If you're not talking about generated code--well, in your "God" class, does it do exactly one small job and do it well? If not, pull out a "Job", put it in its own class, and delegate to it.
Is your GOD class fully factored? When I've seen huge classes like this, I've usually seen a lot of copy/paste/edit lines. If there is enough of a similarity to copy and paste and edit some section, then there is enough to factor those lines into a single chunk of code.
If your big class is a GUI class, consider decorators--reusable and moves stuff out of your main class. A double win.
I guess the answer to your question is that in Java we just use good OO to ensure that the problem doesn't arise in the first place (or we don't--Java's certainly not immune to the problems you are talking about any more than any other language)
Original Question: Given a method I would like to determine if an object returned is created within the execution of that method. What sort of static analysis can or should I use?
Reworked Question: Given a method, I would like to determine if an object created in that method may be returned by that method. So, if I go through and add all instantiations of the return type within that method to a set, is there an analysis that will tell me, for each member of the set, whether it may or may not be returned? Additionally, would it be possible to not limit the set to a single method but include all methods called by the original method, to account for delegation?
This is not specific to any invocation.
It looks like method escape analysis may be the answer.
Thanks everyone for your suggestions.
Your question seems to be a simple "reaching" analysis ("does a new value reach a return statement?") if you are interested in any invocation and only a method-local new can create the value. If you need to know whether any invocation can return a new value from any subcomputation, you need to compute the possible call graph and determine whether any called function can return a new value, or pass a new value from a called function up to its parent.
There are a number of Java static analysis frameworks.
SOOT is a byte-code based analysis framework. You could probably implement your static query using this.
The DMS Software Reengineering Toolkit is a generic engine for building custom analyzers and transformation tools. It has a full Java front end, and computes various useful base analyses (def/use chains, call graph) on source code. It can process class files but presently only to get type information.
If you wanted a dynamic analysis, either by itself or as a way to tighten up the static analysis, DMS can be used to instrument the source code in arbitrary ways by inserting code to track allocations.
I'm not sure if this would work for your circumstances, but one simple approach would be to populate a newly added 'instantiatedTime' field in the constructor of the object and compare that with the time the method call was made. This assumes you have access to the source for the object in question.
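A sketch of that idea (the field and class names are invented, and this obviously only works if you can edit the class in question):

// Hypothetical: the object records when it was constructed.
public class TrackedObject {
    private final long instantiatedTime = System.nanoTime();

    public long getInstantiatedTime() {
        return instantiatedTime;
    }
}

A test would then record System.nanoTime() just before invoking the method and treat any returned object whose instantiatedTime is later than that as having been created inside the call.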
Are you sure static analysis is the right tool for the job? Static analysis can give you a result in some cases but not in all.
When running the JVM under a debugger, it assigns objects increasing object IDs, which you can fetch via System.identityHashCode(Object o). You can use this fact to build a test case that creates an object (the checkpoint) and then calls the method. If the returned object has an id greater than the checkpoint id, then you know the object was created in the method.
Disclaimer: this is observed behaviour under a debugger, under Windows XP.
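A sketch of the kind of test case described above (the method under test is hypothetical, and as the disclaimer says, the increasing ids are observed debugger behaviour, not something the JVM guarantees):

// Only meaningful under the debugger behaviour described above.
Object checkpoint = new Object();
int checkpointId = System.identityHashCode(checkpoint);

Object returned = factory.createWidget(); // stand-in for the method under test

boolean createdInMethod = System.identityHashCode(returned) > checkpointId;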
I have a feeling that this is impossible to do without a specially modified JVM. Here are some approaches ... and why they won't work in general.
The Static Analysis approach will work in simple cases. However, something like this is likely to stump any current generation static analysis tool:
// Bad design alert ... don't try this at home!
public class LazySingletonStringFactory {
    private String s;

    public String create(String initial) {
        if (s == null) {
            s = new String(initial);
        }
        return s;
    }
}
For a static analyser to figure out if a given call to LazySingletonStringFactory.create(...) returns a newly created String it must figure out that it has not been called previously. The Halting Problem tells us that this is theoretically impossible in some cases, and in practice this is beyond the "state of the art".
The IdentityHashCode approach may work in a single-threaded application that completes without the garbage collector running. However, if the GC runs you will get incorrect answers. And if you have multiple threads, then (depending on the JVM) you may find that objects are allocated in different "spaces" resulting in object "id" creation sequence that is no longer monotonic across all threads.
The Code Instrumentation approach works if you can modify the code of the Classes you are concerned about, either direct source-code changes, annotation-based code injection or by some kind of bytecode processing. However, in general you cannot do these things for all classes.
(I'm not aware of any other approaches that are materially different to the above three ... but feel free to suggest them as a comment.)
Not sure of a reliable way to do this statically.
You could use:
AspectJ or a similar AOP library could be used to instrument classes and increment a counter on object creation (a minimal sketch follows below)
a custom classloader (or JVM agent, but classloader is easier) could be used similarly
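For instance, a minimal @AspectJ-style sketch of the first option (the Widget class is hypothetical):

import java.util.concurrent.atomic.AtomicLong;

import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.AfterReturning;

// Counts every construction of Widget, so a test can compare the counter
// before and after invoking the method in question.
@Aspect
public class AllocationCounter {
    public static final AtomicLong WIDGET_CONSTRUCTIONS = new AtomicLong();

    @AfterReturning("call(Widget.new(..))")
    public void countWidgetConstruction() {
        WIDGET_CONSTRUCTIONS.incrementAndGet();
    }
}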
How can I get hold of the instantiating object from a constructor in Java?
I want to store a reference to the parent object for some GUI classes to simulate event bubbling - calling the parent's handlers - but I don't want to change all the existing code.
Short answer: there isn't a way to do this in Java. (You can find out what class called you, but the long answer below applies there for the most part as well.)
Long answer: Code that magically behaves differently depending on where it's being invoked from is almost always a bad idea. It's confusing to whoever has to maintain your code, and it seriously hurts your ability to refactor. For example, suppose you realize that two of the places instantiating your object have basically the same logic, so you decide to factor out the common bits. Surprise! Now the code behaves differently because it's being instantiated from somewhere else. Just add the parameter and fix the callers. It'll save you time in the long run.
If you want to know the invoking class, then pass "this" as a parameter to the constructor.
Thing thing = new Thing(this);
Edit: A modern IDE allowing refactoring will make this very easy to do.
Intercepting method calls (including constructors) without changing a ton of existing code is one thing Aspect-oriented programming was made for.
Check out AspectJ for a start.
With AspectJ, you can define a "pointcut" that specifies that you want to intercept constructor calls for a certain object or set of objects (using wildcards if need be), and within the interception code ("advice"), you will be given the method context, which includes information about both the calling method and the calling object.
You can even use AspectJ to add fields to your objects to store the parent reference without modifying their existing code (this is called "introduction").
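For example, a rough @AspectJ-style sketch (ChildPanel is an invented class, assumed to have, or to be introduced with, a setParent method):

import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.AfterReturning;

// Whenever some object executes 'new ChildPanel(...)', record that object as
// the new panel's parent.
@Aspect
public class ParentTrackingAspect {

    @AfterReturning(pointcut = "call(ChildPanel.new(..)) && this(caller)", returning = "created")
    public void recordParent(Object caller, ChildPanel created) {
        created.setParent(caller);
    }
}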
When is it considered poor practice to use the static keyword in Java on method signatures? If a method performs a function based upon some arguments, and does not require access to fields that are not static, then wouldn't you always want these types of methods to be static?
Two of the greatest evils you will ever encounter in large-scale Java applications are
Static methods, except those that are pure functions*
Mutable static fields
These ruin the modularity, extensibility and testability of your code to a degree that I realize I cannot possibly hope to convince you of in this limited time and space.
*A "pure function" is any method which does not modify any state and whose result depends on nothing but the parameters provided to it. So, for example, any function that performs I/O (directly or indirectly) is not a pure function, but Math.sqrt(), of course, is.
More blahblah about pure functions (self-link) and why you want to stick to them.
I strongly encourage you to favor the "dependency injection" style of programming, possibly supported by a framework such as Spring or Guice (disclaimer: I am co-author of the latter). If you do this right, you will essentially never need mutable static state or non-pure static methods.
One reason why you may not want it to be static is to allow it to be overridden in a subclass. In other words, the behaviour may not depend on the data within the object, but on the exact type of the object. For example, you might have a general collection type, with an isReadOnly property which would return false in always-mutable collections, true in always-immutable collections, and depend on instance variables in others.
However, this is quite rare in my experience - and should usually be explicitly specified for clarity. Normally I'd make a method which doesn't depend on any object state static.
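A small sketch of that (the class names are invented):

// isReadOnly() depends on the runtime type, or on instance state, so it has to
// be an instance method in order to be overridable.
abstract class AbstractReadOnlyAware {
    public abstract boolean isReadOnly();
}

class AlwaysMutableList extends AbstractReadOnlyAware {
    public boolean isReadOnly() { return false; } // always mutable
}

class AlwaysImmutableList extends AbstractReadOnlyAware {
    public boolean isReadOnly() { return true; }  // always immutable
}

class LockableList extends AbstractReadOnlyAware {
    private boolean locked;                       // instance state decides

    public void lock() { locked = true; }

    public boolean isReadOnly() { return locked; }
}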
In general, I prefer instance methods for the following reasons:
static methods make testing hard because they can't be replaced,
static methods are more procedurally oriented.
In my opinion, static methods are OK for utility classes (like StringUtils) but I prefer to avoid using them as much as possible.
What you say is sort of true, but what happens when you want to override the behavior of that method in a derived class? If it's static, you can't do that.
As an example, consider the following DAO type class:
class CustomerDAO {
    public void CreateCustomer( Connection dbConn, Customer c ) {
        // Some implementation: creates a prepared statement, inserts the customer record.
    }

    public Customer GetCustomerByID( Connection dbConn, int customerId ) {
        // Implementation
    }
}
Now, none of those methods require any "state". Everything they need is passed as parameters. So they COULD easily be static. Now the requirement comes along that you need to support a different database (let's say Oracle).
Since those methods are not static, you could just create a new DAO class:
class OracleCustomerDAO extends CustomerDAO {
    public void CreateCustomer( Connection dbConn, Customer c ) {
        // Oracle-specific implementation here.
    }

    public Customer GetCustomerByID( Connection dbConn, int customerId ) {
        // Oracle-specific implementation here.
    }
}
This new class could now be used in place of the old one. If you are using dependency injection, it might not even require a code change at all.
But if we had made those methods static, that would make things much more complicated as we can't simply override the static methods in a new class.
Static methods are usually written for two purposes. The first purpose is to have some sort of global utility method, similar to the sort of functionality found in java.util.Collections. These static methods are generally harmless. The second purpose is to control object instantiation and limit access to resources (such as database connections) via various design patterns such as singletons and factories. These can, if poorly implemented, result in problems.
For me, there are two downsides to using static methods:
They make code less modular and harder to test / extend. Most answers already addressed this so I won't go into it any more.
Static methods tend to result in some form of global state, which is frequently the cause of insidious bugs. This can occur in poorly written code that is written for the second purpose described above. Let me elaborate.
For example, consider a project that requires logging certain events to a database, and relies on the database connection for other state as well. Assume that normally, the database connection is initialized first, and then the logging framework is configured to write certain log events to the database. Now assume that the developers decide to move from a hand-written database framework to an existing database framework, such as hibernate.
However, this framework is likely to have its own logging configuration - and if it happens to be using the same logging framework as yours, then there is a good chance there will be various conflicts between the configurations. Suddenly, switching to a different database framework results in errors and failures in different parts of the system that are seemingly unrelated. The reason such failures can happen is because the logging configuration maintains global state accessed via static methods and variables, and various configuration properties can be overridden by different parts of the system.
To get away from these problems, developers should avoid storing any state via static methods and variables. Instead, they should build clean APIs that let the users manage and isolate state as needed. BerkeleyDB is a good example here, encapsulating state via an Environment object instead of via static calls.
That's right. Indeed, you have to contort what might otherwise be a reasonable design (to have some functions not associated with a class) into Java terms. That's why you see catch-all classes such as FredsSwingUtils and YetAnotherIOUtils.
When you want to use a class member independently of any object of that class, it should be declared static.
If it is declared static it can be accessed without an existing instance of an object of the class.
A static member is shared by all objects of that specific class.
An additional annoyance about static methods: there is no easy way to pass a reference to such a function around without creating a wrapper class around it. E.g. - something like:
FunctorInterface f = new FunctorInterface() {
    public int calc(int x) { return MyClass.calc(x); }
};
I hate this kind of Java make-work. Maybe a later version of Java will get delegates or a similar function-pointer / procedural-type mechanism?
A minor gripe, but one more thing to not like about gratuitous static functions, er, methods.
Two questions here
1) A static method that creates objects stays loaded in memory when it is accessed the first time? Isn't this (remaining loaded in memory) a drawback?
2) One of the advantages of using Java is its garbage collection feature - aren't we ignoring this when we use static methods?