Imposing constraints or restrictions on a method body, in Java

Context (Edit)
Some clarification was requested, so I'll try to sum up what influences the question.
The goal of the project is to provide certain functionality to programmers, most probably in the form of a library (a JAR with class files, I guess).
To use said functionality, programmers would have to conform to the constraints that must (should) be satisfied. Otherwise it won't function as expected (just like the locks from java.util.concurrent, which must be acquired/released at the appropriate time and place).
This code won't be the entry point of the applications using it (i.e., it has no main).
There's a limited (and small) amount of operations exposed in the API.
Examples:
Think of a small game, where almost everything is implemented and managed by the already-implemented classes. The only thing left for the programmer to do is to write a method, or a couple of them, describing what the character will do (walk, change direction, stop, inspect an object). I would want to make sure that their methods (possibly marked with an annotation?) just walk, or changeDirection, or calculate diff = desiredValue - x, and not, say, write to some file or open a socket connection.
Think of a transaction manager. The manager would be provided by this library, as well as some constant attributes of transactions (their isolation level, time-outs, ...). Now, the programmers would like to have transactions and use this manager. I would want to make sure that they only read, write, commit, or rollback on some resources, known to the manager. I wouldn't want them to launchRocket in the middle of the transaction, if the manager does not control any rockets to launch.
The Problem
I want to impose some invariants / restrictions / constraints on the body of a method (or group of methods), to be later implemented by some other programmer, in some other package/location. Say, I give them something along the lines of:
public abstract class ToBeExtended {
// some private stuff they should not modify
// ...
public abstract SomeReturnType safeMethod();
}
It is important (probably imperative), for the purposes of this project, that the method body satisfies some invariants. Or rather, it is imperative that the set of commands this method's implementation uses is limited. Examples of these constraints:
This method must not perform any I/O.
This method must not instantiate any unknown (potentially dangerous) objects.
...
Put another way:
This method can call the methods of a known (specific) class.
This method can execute some basic instructions (maths, assign local variables, ifs, loops...).
I've been looking through Annotations, and there seems to be nothing close to this.
My options so far:
Define some annotation, @SafeAnnotation, and apply it to the method, defining a contract with the implementer that they will follow the rules imposed, or else the system will malfunction.
Define an enum with the allowed operations. Instead of exposing the allowed methods, expose only a single method that accepts a list of these enum values (or something similar to a control flow graph?) and executes it, giving me control over what can be done.
Example:
public enum AllowedOperations { OP1, OP2 }
public class TheOneKnown {
public void executeMyStuff (List<AllowedOperations> ops) {
// ...
}
}
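To make option 2 concrete, the executor might interpret the list roughly like this (just a sketch; the state and the meaning of each operation are invented for illustration):
import java.util.List;

public class TheOneKnown {
    private int x; // internal state only this class may touch

    public void executeMyStuff(List<AllowedOperations> ops) {
        for (AllowedOperations op : ops) {
            switch (op) {
                case OP1: x++;    break; // e.g. walk one step
                case OP2: x = -x; break; // e.g. change direction
            }
        }
    }
}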
My Question
Is there any feature in the language, such as annotations, reflection, or otherwise, allowing me to inspect (either at compile time or runtime) if a method is valid (ie, satisfies my constraints)?
Or rather, is there any way to enforce it to call only a limited set of other methods?
If not (and I think not), would this second approach be a suitable alternative?
Suitable, as in intuitive, well designed and/or good practice.
Update (Progress)
Having had a look at some related questions, I'm also considering (as a third option, maybe) following the steps given in the accepted answer of this question. Although this may require some rethinking of the architecture.
The whole idea of using annotations to impose restrictions seems to require implementing my own annotation processor. If this is true, I might as well consider a small domain-specific language, so that the programmer would use these limited operations, later translating the code to Java. This way, I would also have control over what is specified.
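If I do end up writing a processor, I imagine the skeleton would look roughly like this (a bare sketch; @SafeMethod is a hypothetical marker annotation, and the actual body inspection is left open, since the standard annotation-processing API only exposes declarations, not statements):
import java.util.Set;
import javax.annotation.processing.AbstractProcessor;
import javax.annotation.processing.RoundEnvironment;
import javax.annotation.processing.SupportedAnnotationTypes;
import javax.annotation.processing.SupportedSourceVersion;
import javax.lang.model.SourceVersion;
import javax.lang.model.element.Element;
import javax.lang.model.element.TypeElement;
import javax.tools.Diagnostic;

@SupportedAnnotationTypes("com.example.SafeMethod") // hypothetical annotation
@SupportedSourceVersion(SourceVersion.RELEASE_8)
public class SafeMethodProcessor extends AbstractProcessor {

    @Override
    public boolean process(Set<? extends TypeElement> annotations, RoundEnvironment roundEnv) {
        for (TypeElement annotation : annotations) {
            for (Element method : roundEnv.getElementsAnnotatedWith(annotation)) {
                // The element model gives no statement-level view of the body,
                // so a real check would need the compiler Trees API or bytecode analysis.
                processingEnv.getMessager().printMessage(
                        Diagnostic.Kind.NOTE,
                        "Found @SafeMethod method to verify: " + method.getSimpleName(),
                        method);
            }
        }
        return false; // do not claim the annotation
    }
}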

Have a look at Java policy files. I've not used them, and I'm not sure they'll fit your problem exactly, but with some digging into the docs they may be a fit. Here are a couple of SO questions that may be of help:
Limiting file access in Java
What is a simple Java security policy for restricting file writes to a single directory?
And here's some documentation on the policy file.
http://docs.oracle.com/javase/6/docs/technotes/guides/security/PolicyFiles.html
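For reference, the general shape of a policy file is something like this (the code base and permissions below are placeholders; the JVM would be started with -Djava.security.manager -Djava.security.policy=my.policy):
// my.policy - grant the plugged-in code only what it really needs
grant codeBase "file:/path/to/untrusted.jar" {
    permission java.util.PropertyPermission "user.timezone", "read";
    // no FilePermission or SocketPermission here, so file and network
    // access attempts end in a SecurityException
};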

I think that the direction in this question is good.
Use a specific ClassLoader to load the class. Beware: class loaders are an interesting type of horse, and it usually happens that the class itself is loaded by a parent classloader. You probably want some sort of URLClassLoader, with the parent classloader set to the root (bootstrap) classloader. This alone is not enough, though.
Use threads to guard against infinite loops (implementing Runnable rather than extending Thread, as shown there) - this may be unnecessary if you're not worried about that.
Use a SecurityManager to block java.io operations.
In addition to the above, I recommend 2 options:
Give the method a controller that contains the functions it can call
For example:
public void foo(Controller ctrl) {
    // user code may only call what Controller exposes
}

public interface Controller {
    boolean commit();
    boolean rollback();
}
This gives the user a clear handle on which operations are allowed.
Use an Intent-like command pattern
In Android, the components of the system are quite closed. They cannot communicate with each other directly; they can only fire an event saying "this happened" or "I want to do that".
This way the set of usable commands stays under the system's control. Usually, if the methods only perform small pieces of business logic, that is enough.
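A rough sketch of that command style in plain Java (the names are illustrative, not taken from any framework):
// The only "verbs" the user code can express
public enum Command { WALK, CHANGE_DIRECTION, STOP, INSPECT }

// User code fires commands; it never touches the engine directly
public interface CommandSink {
    void submit(Command command);
}

// The engine decides what each command actually does
public class GameEngine implements CommandSink {
    @Override
    public void submit(Command command) {
        switch (command) {
            case WALK:             /* move the character */ break;
            case CHANGE_DIRECTION: /* turn around */        break;
            case STOP:             /* halt */               break;
            case INSPECT:          /* look at an object */  break;
        }
    }
}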

You can restrict the classes used by untrusted code with a custom class loader:
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class SafeClassLoader extends ClassLoader {

    private final Set<String> safe = new HashSet<>();

    {
        String[] s = {
            "java.lang.Object",
            "java.lang.String",
            "java.lang.Integer"
        };
        safe.addAll(Arrays.asList(s));
    }

    @Override
    protected Class<?> loadClass(String name, boolean resolve)
            throws ClassNotFoundException {
        if (safe.contains(name)) {
            return super.loadClass(name, resolve);
        } else {
            throw new ClassNotFoundException(name);
        }
    }
}
import java.io.File;
import java.net.URL;
import java.net.URLClassLoader;

public class Sandboxer {
    public static void main(String[] args) throws Exception {
        File f = new File("bin/");
        URL[] urls = {f.toURI().toURL()};
        ClassLoader loader = new URLClassLoader(urls, new SafeClassLoader());
        Class<?> good = loader.loadClass("tools.sandbox.Good");
        System.out.println(good.newInstance().toString());
        Class<?> evil = loader.loadClass("tools.sandbox.Evil");
        System.out.println(evil.newInstance().toString());
    }
}
public class Good {
    @Override
    public String toString() {
        return "I am good";
    }
}
public class Evil {
    @Override
    public String toString() {
        new Thread().start();
        return "I am evil.";
    }
}
Running this will result in
I am good
Exception in thread "main" java.lang.NoClassDefFoundError: java/lang/Thread
at tools.sandbox.Evil.toString(Evil.java:7)
at tools.sandbox.Sandboxer.main(Sandboxer.java:18)
Caused by: java.lang.ClassNotFoundException: java.lang.Thread
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:423)
at java.lang.ClassLoader.loadClass(ClassLoader.java:356)
... 2 more
Of course, this assumes care is taken with the classes you whitelist. It also can't prevent denial-of-service stuff such as
while (true) {}
or
new long[1000000000];

Another alternative would be to use an embedded script interpreter, for example the Groovy one (https://docs.groovy-lang.org/latest/html/documentation/guide-integrating.html), and to evaluate the third-party method content at runtime with pre-execution validation.
The advantage is that you can limit access to only the variables you bind for the script execution.
You could also write your own validation DSL and apply it, for example via a custom annotation, to the method that will execute the script.
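As a rough illustration of the embedding idea (this assumes the Groovy jar is on the classpath; the script text and the bound controller object are placeholders):
import groovy.lang.Binding;
import groovy.lang.GroovyShell;

public class ScriptRunner {
    public Object run(String userScript, Object controller) {
        Binding binding = new Binding();
        // the script only sees what is bound here
        binding.setVariable("controller", controller);
        GroovyShell shell = new GroovyShell(binding);
        // a real system would validate userScript before this point
        return shell.evaluate(userScript);
    }
}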

There are several design by contract libraries available for Java, but I'm not able to recommend one in particular. Java Argument Validation appears to be a lightweight solution, but again, I don't have first-hand experience with it.

Related

Conditionally Remove Java Methods at Compile-Time

I am trying to achieve something similar to the C# preprocessor. I am aware that Java does NOT have the same preprocessor capabilities, and am aware that there are ways to achieve similar results using design patterns such as Factory. However, I am still interested in finding a solution to this question.
Currently, what I do is create a class that contains several static final boolean attributes, such as the following example:
public class Preprocessor
{
public static final boolean FULLACCESS = false;
}
I then use this in the following manner:
public ClassName getClassName()
{
if(Preprocessor.FULLACCESS)
{
return this;
}
else
{
return this.DeepCopy();
}
}
So far so good, this solves my problem (the example above is trivial, but I do use this in other instances where it is helpful). My question is, would there be a way to place the conditional around an entire method, so that the method itself would be unavailable given the correct "Preprocessor" variables? For example, I would like to be able to make a specific constructor available only for packages that are given "Full Access", as follows:
public ClassName()
{
// do things
}
if(FULLACCESS)
{
public ClassName(ClassName thing)
{
// copy contents from thing to the object being created
}
}
Again, I am aware of the limitations (or design decisions) of Java as a language, and am aware that in most circumstances this is unnecessary. As a matter of fact, I have considered simply creating these "extra" methods and placing the entire code of them within a conditional, while throwing an Exception if the conditional is not active, but that is a very crude solution that does not seem helpful to my programmers when I make these libraries available to them.
Thank you very much in advance for any help.
Edit:
To complement the question, the reason why I am attempting to do this is that by using exceptions as a solution, the IDE would display methods as "available" when they are actually not. However, again, it might just be a case of my being ignorant of Java.
The reasons for my wanting to do this are primarily so that I may have more than one public interface available, say, one restrictive where control is tighter within the methods, and one more permissive where direct alteration of attributes is allowed. However, I do also want to be able to actively remove portions of code from the .class, for instance, in a Product Line development approach where certain variants are not available.
Edit2.:
Furthermore, it is important to note that I will be generating the documentation conditionally as well. Therefore, each compiled version of the packages would have its own documentation, containing only that which is actually available.
Well, you can make it happen. A word of caution, though...
I can only think of one time when I thought this kind of approach was the best way, and it turned out I was wrong. The case of changing a class's public interface especially looks like a red flag to me. Throwing an exception when the access level isn't high enough to invoke the method might be more code-friendly.
But anyway, when I thought I wanted a preprocessor, what I did was to write one. I created a custom annotation to place on conditionally-available methods, grabbed a Java parser, and wrote a little program that used the parser to find and remove methods that have the annotation. Then I added that (conditionally) to the build process.
Because it turned out to be useless to me, I discarded mine; and I've never seen anyone else do it and publish it; so as far as I know you'd have to roll your own.
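For what it's worth, if you did roll your own today, a parser-based stripper could look roughly like this (a sketch using the JavaParser library and a made-up @FullAccessOnly marker annotation, not the exact tool described above):
import com.github.javaparser.StaticJavaParser;
import com.github.javaparser.ast.CompilationUnit;
import com.github.javaparser.ast.body.MethodDeclaration;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class MethodStripper {
    public static void main(String[] args) throws Exception {
        Path source = Paths.get(args[0]);
        CompilationUnit cu = StaticJavaParser.parse(source);

        // remove every method carrying the (hypothetical) marker annotation
        cu.findAll(MethodDeclaration.class).stream()
          .filter(m -> m.getAnnotationByName("FullAccessOnly").isPresent())
          .forEach(MethodDeclaration::remove);

        Files.write(source, cu.toString().getBytes());
    }
}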
This answer is based partially on the comments you have left on the question and on Mark's answer.
I would suggest that you do this using Java interfaces which expose just the API that you desire. When you need a less restrictive API contract, extend an interface or create a separate implementation of an existing interface to get what you need.
public interface A
{
void f();
}
A above is your general API. Now you want to have some special extra methods to test A or to debug it or manipulate it or whatever...
public interface B extends A
{
void specialAccess();
}
Also, Java now supports default method implementations for interfaces which might be useful to you depending on how you implement your API. They take the following form...
public interface A
{
List getList();
// this is still only an interface, but you have a default impl. here
default void add(Object o)
{
getList().add(o);
}
}
You can read more about default methods in Oracle's tutorial on default methods.
In your API, your general distribution of it could include A and omit B entirely, and omit any implementations that offer the special access; then you can include B and special implementations for the special access version of the API you mentioned. This would allow plain old Java objects, nothing different to the code other than an extra interface and maybe an extra implementation of it. The custom part would just be in your packaging of the library. If you want to hand someone a "non-special" low-access version, hand them a jar that does not include B and does not include any possible BImplementation, possibly by having a separate build script.
I use Netbeans for my Java work, and I like to let it use the default build scripts that it auto generates. So if I were doing this and I were doing it in Netbeans, I would probably create two projects, one for base API and one for special-access API, and I would make the special-access one dependent on the base project. That would leave me with two jars instead of one, but I would be fine with that; if two jars bothered me enough I would go through the extra step mentioned above of making a build script for the special access version.
Some examples straight from Java
Swing has examples of this kind of pattern. Notice that GUI components have a void paint(Graphics g). A Graphics gives you a certain set of functionality. Generally, that g is actually a Graphics2D, so you can treat it as such if you so desire.
void paint(Graphics g)
{
    Graphics2D g2d = Graphics2D.class.cast(g);
}
Another example is with Swing component models. If you use a JList or a JComboBox to display a list of objects in a GUI, you probably do not use the default model it comes with if you want to change that list over time. Instead, you create a new model with added functionality and inject it.
JList list = new JList();
DefaultListModel model = new DefaultListModel();
list.setModel(model);
Now your JList model has extra functionality that is not normally apparent, including the ability to add and remove items easily.
Not only is extra functionality added this way, but the original author of ListModel did not even need to know that this functionality could exist.
The only way in Java to achieve that is to use a preprocessor. For instance, the PostgreSQL JDBC team uses the java-comment-preprocessor for such manipulations; here is an example from their Driver.java:
//#if mvn.project.property.postgresql.jdbc.spec >= "JDBC4.1"
@Override
public java.util.logging.Logger getParentLogger() {
    return PARENT_LOGGER;
}
//#endif
With Gradle you can manage your source sets, and I think preprocessor macros are no longer needed. Right now in the src directory you have main/java with all sources, but if you need specific methods in, e.g., debug and release builds to do (or not do) specific things, then create debug/java and release/java in src and put YourClass there. Note that by doing this you'll have to have YourClass in both debug/java and release/java, but not in main/java.
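A sketch of that layout in a build.gradle (Groovy DSL; the source-set names and the idea of packaging one set or the other are just an example of the approach):
apply plugin: 'java'

sourceSets {
    debug {
        java { srcDirs 'src/main/java', 'src/debug/java' }
    }
    release {
        java { srcDirs 'src/main/java', 'src/release/java' }
    }
}

// the java plugin then provides compileDebugJava / compileReleaseJava,
// and you package whichever source set you want to ship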

Should my classes restrict developers from doing wrong things with them?

I am trying to understand where good contracts end and paranoia starts.
Really, I just have no idea what a good developer should care about and what they should leave out :)
Let's say I have a class that holds value(s), like java.lang.Integer. Its instances are aggregated by other objects (MappedObjects), one-to-many or many-to-many, and often used inside MappedObjects' methods. For performance reasons, I also track these relationships in a TreeMap (a Guava Multimap, it doesn't matter), to be able to iterate quickly over the MappedObjects bound to some range of Integer keys.
So, to keep the system in a consistent state, I should modify the MappedObject.bind(Integer integer) method to update my Map like this:
class MappedObject {
public void bind (Integer integer) {
MegaMap.getInstance().remove(fInteger, this);
fInteger = integer;
MegaMap.getInstance().add(fInteger, this);
}
...
private Integer fInteger;
}
I could just make MappedObject an abstract class with this method final, forcing others to inherit from it, but that is rude. If I define MappedObject as an interface with a bind() method and provide a skeletal implementation, another developer might later just forget to include it in their object and implement the method themselves without updating the Map.
Yes, you should force people to do the right thing with your code. A great example of letting people do the wrong thing is the servlet method init(ServletConfig config), which expected you to store the servlet config yourself; obviously, a lot of people forgot to store the config, and their servlets simply failed to work when run.
When defining APIs, you should always follow the open-closed principle: your class should be open for extension and closed for modification. If your class has to work like this, you should only open extension points where they make sense; all the other functionality should not be available for modification, as that could lead to implementation issues in the future.
Try to focus on functionality first and leave all unnecessary things behind. By the way, you can't prohibit reflection, so don't worry too much about misuse. On the other hand, your API should be clear and straightforward, so users will have a clear idea of what they should and shouldn't do with it.
I'd say your classes should be designed to be as simple to use as possible.
If you allow a developer to override methods, you should definitely document the contract as well as possible. In that case the developer opts to override some basic functionality and is thus responsible for providing an implementation that adheres to the contract.
In cases where you don't want the developer to override parts of the functionality - for security reasons, if there is no sensible alternative etc. - just make that part final. In your case, the bind method might look like this:
class MappedObject {
public final void bind (Integer integer) {
MegaMap.getInstance().remove(fInteger);
internalBind( integer );
MegaMap.getInstance().add(fInteger);
}
protected void internalBind( Integer integer ) {
fInteger = integer;
}
...
private Integer fInteger;
}
Here you'd allow the developer to override the internalBind() method but ensure that bind() will do the mapping.
To summarize: make using and extending classes as easy as (sensibly) possible, and don't make the developer copy lots of boilerplate code (like the map updates in your case) when he just wants to override some basic functionality (like the actual binding).
At the very least, you should do everything that prevents bugs but costs no effort.
For example: use primitive types (int) instead of wrappers (Integer) if the variable is not allowed to be null.
So, in your bind method: if you do not intend to allow binding null, use int instead of Integer as the parameter type.
If you think your API users are stupid, you should prohibit wrong usage. Otherwise you should not stand in the way of the things they need to do.
Documentation and good naming of classes and methods should indicate how to use your API.

How much should be done in a constructor

This is a theory question, I guess, that I am asking in order to find the standard procedure for this sort of thing.
If I have a constructor that does a whole lot of setup operations, gathering data and such, should I keep "all things construction" in the constructor, or should I try to call other methods from inside the constructor (basically for code looks), or should I just initialize everything I have to and leave other things to be dealt with later, if they are actually needed?
Here is an example.
I am creating an object that is a collection manager basically. It needs to read in data from a file and it stores it inside of an array.
Do I use the constructor to just create an object with base properties and read data later,
or should I read in all the info and set up the array inside the constructor, which saves time later but takes extra time here, or should I do something along the lines of:
public myConstructor(String filename) {
data = readDataIn(filename);
}
This is not actual code, just an example of outsourcing to different methods to "pretty up the code": instead of one super long constructor method, I can have, say, 5-6 short and good-looking methods that can only be accessed by the constructor.
The constructor should do just enough work to get the instance into a state that satisfies its contract. Each method should then do just enough work to fulfill the method's contract and leave the instance in a state that satisfies its contract.
Very rarely should a constructor call cause side-effects or modify its inputs. These are just not often required to satisfy a contract. For example, a connection class shouldn't touch the network on construction. Since it has to be closeable, the closed state must be part of its contract, and so the "just enough work" standard dictates that the constructor puts it in a ready, but not yet open state.
Your particular example couples your class to the file system. You would probably get a more testable, more general class by using Guava Files to do the reading and taking a string with the content instead. You can get the convenience of a constructor coupled to the file system by writing a convenient static MyClass fromFile(String path) factory function that does new MyClass. That moves the portion of your code that is coupled to the filesystem outside the portion that interacts with instance variables reducing the number of possible interactions to test. As others have noted, dependency injection is another good way to achieve decoupling.
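A small sketch of that shape (MyClass and its content-based constructor are stand-ins; this uses java.nio rather than the Guava Files mentioned above):
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

public class MyClass {
    private final String content;

    // the constructor takes already-loaded data; no file system involved
    public MyClass(String content) {
        this.content = content;
    }

    // the convenience factory holds the file-system coupling
    public static MyClass fromFile(String path) throws IOException {
        byte[] bytes = Files.readAllBytes(Paths.get(path));
        return new MyClass(new String(bytes, StandardCharsets.UTF_8));
    }
}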
Really depends on your API style. Note that you may wish to have multiple constructors, such as:
public MyThing(String filename) { }
public MyThing(FileInputStream filestream) {}
public MyThing(File file) { }
public MyThing(byte[] rawdata) { }
at which point it's judicious to consolidate the file-loading operation into a method or two (file open and file parse).
In this case, I would use dependency injection, so that your constructor requires data that has already been computed, and defers the computation to whatever invokes the constructor. I might provide an additional static factory function that does all this complicated setup so that it is convenient to construct this object (e.g. in tests), but at least it would be possible for the user of this class to come up with a more clever (possibly parallelized or lazily-initialized) way of creating this class.

Why use javabean bound properties instead of events?

What exactly is the point of bound properties? To me they seem to be a less type-safe version of events using EventObjects - it seems a bit weak to be using string equality checking for event.getPropertyName().
Why would you use one over the other?
The whole point of Java Beans is that a system (GUI Builder, in particular) can examine a Java Bean and configure it without any previous knowledge of that component.
Although this is fairly cool, it's really only useful in this specific situation and these days annotations would work MUCH better.
So the reason they use bound properties was simply to support this drop-in GUI component technology, and I wouldn't really prefer it over events unless you need to support a reflective gui-building system.
Response to @Mike rodent
Let's say you allow your user to create a class that you will control. This class has a "Main" and can handle a couple events.
Normally you have your user do something like this:
class UserClass {
    private final SomeClass someClass; // whatever object exposes the listener registration

    UserClass(SomeClass someClass) {
        this.someClass = someClass;
    }

    void mainMethod() {
        someClass.addEventListener(new EventListener() {
            public void eventsHappen(Event e) {
                event1(e);
            }
        });
        someClass.addDifferentEventListener(new DifferentEventListener() {
            public void eventsHappen(DifferentEvent e) {
                event2(e);
            }
        });
    }

    public void event1(Event e) {
        // java code for event 1
    }

    public void event2(DifferentEvent e) {
        // java code for event 2
    }
}
Anyway, you get the idea. Of course, you assume that this class is registered somewhere--probably in an XML/config file. You read it, instantiate it and execute the mainMethod (defined by agreement or by an interface), and it registers itself and starts getting calls to the event handlers.
Now here's how you can accomplish the same thing with annotations: (You might recognize the pattern--it's pretty much how JUnit annotates tests.)
class UserClass {
    @Event1
    void event1Method(Event e) {
        // event1's code
    }

    @Event2
    void event2Method(AnotherEvent e) {
        // event2's code
    }
}
This is much more straightforward and removes most of the boilerplate; it also removes the need for an agreement or interface, since annotations are more clearly defined and independent. (You don't actually even NEED the event annotations if you care to inspect the parameters being passed into the methods, but dragons lie in that general direction.)
You still need the class registered somewhere, but this time you just scan each method for your event annotations and register them yourself. Since it's only a few lines of code to read and process a class like this, why restrict this pattern to unit testing?
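The scan-and-register step itself is only a few lines of core reflection (Event1 here is a made-up annotation; it has to be retained at runtime for this to work):
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Event1 {}

public class EventRegistrar {
    public void register(Object userObject) {
        for (Method method : userObject.getClass().getDeclaredMethods()) {
            if (method.isAnnotationPresent(Event1.class)) {
                // wire the method up as a handler, e.g. remember (userObject, method)
                // and call method.invoke(userObject, event) when the event fires
            }
        }
    }
}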
Another thing I found really neat: I used this pattern for Groovy classes that were "plugged in" to my Java program. Since I was compiling all the classes in a given directory anyway, scanning for annotations was trivial. The effect for the user is that he drops in (or edits) a properly annotated Groovy text file, and my code immediately compiles it, integrates it and starts calling their event handlers.
I think the JavaBeans specification was designed with generic object handling in mind. For example, putting a JavaBean in an IDE and using a visual property editor to configure it. In that case the IDE will use the general PropertyChangeEvent and so on.
Or if you want to copy equally named properties from one bean to another... that's another case of bean use (the BeanUtils class).
But if you plan to do specific things, then, as Noel Ang says, I'd recommend strong typing.
JavaBeans is a specification. It defines a bound property as one whose modification results in a notification being emitted, and a PropertyChangeEvent is the sanctioned notification entity.
So the putative JavaBeans-spec bean editor is supposed to listen for PropertyChangeEvents. Beyond the need to work with that spec, I wouldn't use it, myself.

When NOT to use the static keyword in Java?

When is it considered poor practice to use the static keyword in Java on method signatures? If a method performs a function based upon some arguments, and does not require access to fields that are not static, then wouldn't you always want these types of methods to be static?
Two of the greatest evils you will ever encounter in large-scale Java applications are
Static methods, except those that are pure functions*
Mutable static fields
These ruin the modularity, extensibility and testability of your code to a degree that I realize I cannot possibly hope to convince you of in this limited time and space.
*A "pure function" is any method which does not modify any state and whose result depends on nothing but the parameters provided to it. So, for example, any function that performs I/O (directly or indirectly) is not a pure function, but Math.sqrt(), of course, is.
More blahblah about pure functions (self-link) and why you want to stick to them.
I strongly encourage you to favor the "dependency injection" style of programming, possibly supported by a framework such as Spring or Guice (disclaimer: I am co-author of the latter). If you do this right, you will essentially never need mutable static state or non-pure static methods.
One reason why you may not want it to be static is to allow it to be overridden in a subclass. In other words, the behaviour may not depend on the data within the object, but on the exact type of the object. For example, you might have a general collection type, with an isReadOnly property which would return false in always-mutable collections, true in always-immutable collections, and depend on instance variables in others.
However, this is quite rare in my experience - and should usually be explicitly specified for clarity. Normally I'd make a method which doesn't depend on any object state static.
In general, I prefer instance methods for the following reasons:
static methods make testing hard because they can't be replaced,
static methods are more procedural oriented.
In my opinion, static methods are OK for utility classes (like StringUtils) but I prefer to avoid using them as much as possible.
What you say is sort of true, but what happens when you want to override the behavior of that method in a derived class? If it's static, you can't do that.
As an example, consider the following DAO type class:
class CustomerDAO {
public void CreateCustomer( Connection dbConn, Customer c ) {
// Some implementation, created a prepared statement, inserts the customer record.
}
public Customer GetCustomerByID( Connection dbConn, int customerId ) {
// Implementation
}
}
Now, none of those methods require any "state". Everything they need is passed as parameters. So they COULD easily be static. Now the requirement comes along that you need to support a different database (let's say Oracle):
Since those methods are not static, you could just create a new DAO class:
class OracleCustomerDAO extends CustomerDAO {
public void CreateCustomer( Connection dbConn, Customer c ) {
// Oracle specific implementation here.
}
public Customer GetCustomerByID( Connection dbConn, int customerId ) {
// Oracle specific implementation here.
}
}
This new class could now be used in place of the old one. If you are using dependency injection, it might not even require a code change at all.
But if we had made those methods static, that would make things much more complicated as we can't simply override the static methods in a new class.
Static methods are usually written for two purposes. The first purpose is to have some sort of global utility method, similar to the sort of functionality found in java.util.Collections. These static methods are generally harmless. The second purpose is to control object instantiation and limit access to resources (such as database connections) via various design patterns such as singletons and factories. These can, if poorly implemented, result in problems.
For me, there are two downsides to using static methods:
They make code less modular and harder to test / extend. Most answers already addressed this so I won't go into it any more.
Static methods tend to result in some form of global state, which is frequently the cause of insidious bugs. This can occur in poorly written code that is written for the second purpose described above. Let me elaborate.
For example, consider a project that requires logging certain events to a database, and relies on the database connection for other state as well. Assume that normally, the database connection is initialized first, and then the logging framework is configured to write certain log events to the database. Now assume that the developers decide to move from a hand-written database framework to an existing database framework, such as hibernate.
However, this framework is likely to have its own logging configuration - and if it happens to be using the same logging framework as yours, then there is a good chance there will be various conflicts between the configurations. Suddenly, switching to a different database framework results in errors and failures in different parts of the system that are seemingly unrelated. The reason such failures can happen is because the logging configuration maintains global state accessed via static methods and variables, and various configuration properties can be overridden by different parts of the system.
To get away from these problems, developers should avoid storing any state via static methods and variables. Instead, they should build clean APIs that let the users manage and isolate state as needed. BerkeleyDB is a good example here, encapsulating state via an Environment object instead of via static calls.
That's right. Indeed, you have to contort what might otherwise be a reasonable design (to have some functions not associated with a class) into Java terms. That's why you see catch-all classes such as FredsSwingUtils and YetAnotherIOUtils.
When you want to use a class member independently of any object of that class, it should be declared static.
If it is declared static, it can be accessed without an existing instance of the class.
A static member is shared by all objects of that specific class.
An additional annoyance about static methods: there is no easy way to pass a reference to such a function around without creating a wrapper class around it. E.g. - something like:
FunctorInterface f = new FunctorInterface() { public int calc( int x) { return MyClass.calc( x); } };
I hate this kind of Java make-work. Maybe a later version of Java will get delegates or a similar function-pointer / procedural mechanism?
A minor gripe, but one more thing to not like about gratuitous static functions, er, methods.
Two questions here:
1) A static method that creates objects stays loaded in memory once it is accessed for the first time? Isn't this (remaining loaded in memory) a drawback?
2) One of the advantages of using Java is its garbage collection feature - aren't we ignoring this when we use static methods?
