Explicit typing in Groovy: sometimes or never? - java

[Later: I still can't figure out whether Groovy has static typing (it seems it does not) or whether the bytecode generated with explicit typing is different (it seems it is). Anyway, on to the question.]
One of the main differences between Groovy and other dynamic languages -- or at least Ruby -- is that you can explicitly type variables and parameters when you want to.
That said, when should you use static typing in Groovy? Here are some possible answers I can think of:
Only when there's a performance problem. Statically typed variables are faster in Groovy. (Or are they? There are some doubts about that claim.)
On public interfaces (methods, fields) for classes, so you get autocomplete. Is this possible/true/totally wrong?
Never, it just clutters up code and defeats the purpose of using Groovy.
Yes, when your classes will be inherited or used by other code.
I'm not just interested in what YOU do but more importantly what you've seen around in projects coded in Groovy. What's the norm?
Note: If this question is somehow wrong or misses some categories of static-dynamic, let me know and I'll fix it.

In my experience, there is no norm. Some use types a lot, some never use them. Personally, I always try to use types in my method signatures (for parameters and return values). For example, I always write a method like this:
Boolean doLogin(User user) {
    // implementation omitted
}
Even though I could write it like this:
def doLogin(user) {
    // implementation omitted
}
I do this for these reasons:
Documentation: other developers (and myself) know what types will be provided and returned by the method without reading the implementation
Type Safety: although there is no compile-time checking in Groovy, if I call the statically typed version of doLogin with a non-User parameter it will fail immediately, so the problem is likely to be easy to fix. If I call the dynamically typed version, it will fail some time after the method is invoked, and the cause of the failure may not be immediately obvious.
Code Completion: this is particularly useful when using a good IDE (e.g. IntelliJ IDEA), as it can even provide completion for dynamically added methods such as a domain class's dynamic finders
I also use types quite a bit within the implementation of my methods for the same reasons. In fact the only times I don't use types are:
I really want to support a wide range of types. For example, a method that converts a string to a number could also convert a collection or array of strings to numbers.
Laziness! If the scope of a variable is very short, I already know which methods I want to call, and I don't already have the class imported, then declaring the type seems like more trouble than it's worth.
BTW, I wouldn't put too much faith in that blog post you've linked to claiming that typed Groovy is much faster than untyped Groovy. I've never heard that before, and I didn't find the evidence very convincing.

I have worked on several Groovy projects, and we stuck to these conventions:
All types in public methods must be specified.
public int getAgeOfUser(String userName) {
    ...
}
All private variables are declared using the def keyword.
These conventions give you several things.
First of all, if you use joint compilation, your Java code will be able to interact with your Groovy code easily. Secondly, such explicit declarations make code in large projects more readable and maintainable. And of course auto-completion is an important benefit too.
On the other hand, the scope of a local variable inside a method is usually so small that you don't need to declare its type explicitly. By the way, modern IDEs can auto-complete your local variables even if you use def.
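A hedged sketch of the joint-compilation point above: because the Groovy class declares its parameter and return types, plain Java code can call it with full compile-time checking. UserService is an assumed name for the Groovy class that contains getAgeOfUser.

public class RegistrationReport {
    private final UserService userService; // the Groovy class, compiled jointly with the Java code

    public RegistrationReport(UserService userService) {
        this.userService = userService;
    }

    public String describe(String userName) {
        // int getAgeOfUser(String) is a real, typed signature as far as javac is
        // concerned, so this call is checked and the IDE can complete it.
        int age = userService.getAgeOfUser(userName);
        return userName + " is " + age + " years old";
    }
}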

I have seen type information used primarily in service classes for public methods. Depending on how complex the parameter list is, even here I usually see just the return type typed. For example:
class WorkflowService {
    ....
    WorkItem getWorkItem(processNbr) throws WorkflowException {
        ...
        ...
    }
}
I think this is useful because it explicitly tells the user of the service what type they will be dealing with, and it helps with code assist in IDEs.
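For illustration, a small sketch of what that buys a caller; WorkflowService, WorkItem and WorkflowException are taken from the snippet above, while the client class is made up:

public class WorkflowClient {
    void printWorkItem(WorkflowService service, long processNbr) throws WorkflowException {
        // Because the return type is declared, both the compiler and the IDE
        // know that item is a WorkItem and can offer its methods.
        WorkItem item = service.getWorkItem(processNbr);
        System.out.println(item);
    }
}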

Groovy does not support static typing. See it for yourself:
class Foo {}
class Bar {}
// This compiles even though a Bar is returned where a Foo is declared; the mismatch only surfaces at runtime.
public Foo func(Bar bar) {
    return bar
}
println("no static typing")
Save that as a script, compile it, and run it: it compiles and prints the message, because the declared return type is not enforced at compile time.
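For contrast, a minimal sketch of how a statically typed compiler treats the same shape of code in Java (Foo and Bar are assumed to be unrelated classes): javac rejects the mismatched return at compile time, where Groovy defers the check to runtime.

class Foo {}
class Bar {}

public class StaticTypingContrast {
    // Uncommenting this method is a compile-time error in Java:
    //   incompatible types: Bar cannot be converted to Foo
    // public Foo func(Bar bar) {
    //     return bar;
    // }

    public static void main(String[] args) {
        System.out.println("javac never lets the mismatched method compile");
    }
}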

Related

Minimizing interfaces in Golang

In Go, interfaces are extremely important for decoupling and composing code, and thus an advanced Go program might easily define thousands of interfaces.
How do we evolve these interfaces over time to ensure that they remain minimal?
Are there commonly used Go tools which check for unused functions?
Are there best practices for annotating Go functions with something similar to Java's @Override, which ensures that a declared function properly implements an expected contract?
Typically in the Java language it is easy to keep code tightly bound to an interface specification, because the advanced tooling allows us to find and remove functions which aren't referenced at all (usually this is highlighted automatically for you in any common IDE).
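For reference, a minimal Java sketch of the @Override behaviour mentioned above; Greeter and EnglishGreeter are illustrative names:

interface Greeter {
    String greet(String name);
}

class EnglishGreeter implements Greeter {
    @Override // compile error if this signature drifts away from Greeter.greet
    public String greet(String name) {
        return "Hello, " + name;
    }
}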
Are there commonly used Go tools which check for unused functions?
Sort of, but it is really hard to be sure for exported interfaces. oracle can be used to find references to types or methods, but only if you have all of the code that references you available on your GOPATH.
Can you ensure a type implements a contract?
If you attempt to use a type as an interface, the compiler will complain if it does not have all of the methods. I generally do this by exporting interfaces but not implementations, and making a constructor:
type MyInterface interface {
    Foo()
}

type impl struct{}

func (i *impl) Foo() {}

func NewImpl() MyInterface {
    return &impl{}
}
This will not compile if impl does not implement all of the required functions.
In Go, you do not need to declare that you implement an interface. This allows you to implement an interface without even referencing the package it is defined in. This is pretty much exactly the opposite of "tightly binding to an interface specification", but it does allow for some interesting usage patterns.
What you're asking for isn't really a part of Go. There are no best practices for annotating that a function satisfies an interface. I would personally say the only clear best practice is to document which interfaces your types implement so that people know. If you want to test explicitly (at compile time) whether a type implements an interface, you can do so using assignment; check out my answer here on the topic: How to check if an object has a particular method?
If you're just looking to take inventory of your code base to do some clean-up, I would recommend using that assignment method for all your types to generate compile-time errors about what they don't implement, then scaling down the declarations until it compiles. In doing so you should become aware of the disparity between what might be implemented and what actually is.
Go is also lacking in IDE options. As a result, some of those friendly features like "find all references" aren't there. You can use text-searching tricks to get around this, like searching for func TheName to get only the declaration and .TheName( to get all invocations. I'm sure you'll get used to it pretty quickly if you continue to use this tooling.

Java inheritance not recognised in reflection

I generally oppose extension since it creates a very strong connection between classes, which is easy to accidentally break.
However, I finally thought I'd found a reasonable case for it - I want to optionally use a compressed version of a file type in an existing system. The compressed version would be almost as quick as the uncompressed one, and would have exactly the same methods available (i.e. read and write) - the only difference would be in the representation on disk. Therefore, I had the compressed version extend the uncompressed version so that either kind of file could be used, just by optionally instantiating the other type.
public class CompressedSpecialFile extends SpecialFile { ... }

SpecialFile sf;
if (useCompression) {
    sf = new CompressedSpecialFile();
} else {
    sf = new SpecialFile();
}
However, at a later point in the program, we use reflection:
Object[] values = new Object[]{ sf, param1, param2, ... }; // sf is the SpecialFile, param1 an Integer, param2 a String, ...
Class myclass = Class.forName(algorithmName);
Class[] classes = ...; // created by calling getClass() on each object in values
Constructor constructor = myclass.getConstructor(classes);
Algorithm algorithm = (Algorithm) constructor.newInstance(values);
All of which worked fine, but now the myclass.getConstructor call throws a NoSuchMethodException, since the run-time type of the SpecialFile is CompressedSpecialFile.
However, I thought that was how extension is supposed to work - since CompressedSpecialFile extends SpecialFile, any parameter accepting a SpecialFile should accept a CompressedSpecialFile. Is this an error in Java's reflection, or a failure of my understanding?
Hmm, the response to this bug report seems to indicate that this is intentional.
http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=4301875
We cannot make this change for compatibility reasons. Furthermore, we would expect that getConstructor should behave analogously to getDeclaredMethod, which also requires an exact match, thus it does not make sense to change one without changing the other. It would be possible to add an additional suite of methods that differed only in the way in which the argument types were matched, however.
There are certainly cases where we might want to apply at runtime during reflection the same overload-resolution algorithm used statically by the compiler, i.e., in a debugger. It is not difficult to implement this functionality with the existing API, however, so the case for adding this functionality to core reflection is weak.
That bug report was closed as a duplicate of the following one, which provides a bit more implementation detail:
http://bugs.sun.com/bugdatabase/view_bug.do;jsessionid=1b08c721077da9fffffffff1e9a6465911b4e?bug_id=4287725
Work Around
Users of getMethod must be precise identifying the Class passed to the argument.
Evaluation
The essence of this request is that the user would like for Class.getMethod to apply the same overloading rules as the compiler does. I think this is a reasonable request, as I see a need for this arising frequently in certain kinds of reflective programs, such as debuggers and scripting interpreters, and it would be helpful to have a standard implementation so that everybody gets it right. For compatibility, however, the behavior of the existing Class.getMethod should be left alone, and a new method defined. There is a case for leaving this functionality out on the basis of footprint, as it can be implemented using existing APIs, albeit somewhat inefficiently. See also 4401287.
Consensus appears to be that we should provide overload resolution in reflection. Exactly when such functionality is provided would depend largely on interest and potential uses.
For compatibility reasons, the Class.get(Declared)+{Method,Constructor} implementation should not change; new method should be introduced. The specification for these methods does need to be modified to define "match". See bug 4651775.
You can keep digging into those referenced bugs and the actual links I provided (where there's discussion as well as possible workarounds), but I think that gets at the reasoning (though why a new method that reflects Java's OOP rules in reflection has not yet been added, I don't know).
In terms of workarounds, I suppose that for the one-level-deep version of inheritance you can just call getSuperclass() on the class of each argument that is an instance of the extending class, but that's extremely inelegant and tied to using it only on your own classes implemented in the prescribed manner. Very kludgy. I'll try and look for another option, though.
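One possible shape for that option (a sketch, not a drop-in fix): search the public constructors yourself and accept any whose parameter types are assignable from the runtime classes of the arguments, instead of relying on getConstructor's exact match. The helper below is made up for illustration and deliberately ignores varargs, boxing and ambiguity between overloads.

import java.lang.reflect.Constructor;

public final class Constructors {
    private Constructors() {}

    // Returns the first public constructor of 'type' that can accept 'args',
    // using isAssignableFrom rather than getConstructor's exact matching.
    public static Constructor<?> findCompatible(Class<?> type, Object[] args)
            throws NoSuchMethodException {
        for (Constructor<?> candidate : type.getConstructors()) {
            Class<?>[] paramTypes = candidate.getParameterTypes();
            if (paramTypes.length != args.length) {
                continue;
            }
            boolean matches = true;
            for (int i = 0; i < paramTypes.length; i++) {
                if (args[i] != null && !paramTypes[i].isAssignableFrom(args[i].getClass())) {
                    matches = false;
                    break;
                }
            }
            if (matches) {
                return candidate;
            }
        }
        throw new NoSuchMethodException("No compatible constructor on " + type.getName());
    }
}

With something like that, the original call becomes (Algorithm) Constructors.findCompatible(Class.forName(algorithmName), values).newInstance(values), and a CompressedSpecialFile argument will happily match a SpecialFile parameter.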

from java to javascript: the object model

I'm trying to port an application I wrote in Java to JavaScript (actually using CoffeeScript).
Now I'm feeling lost: what do you suggest for creating class properties? Should I use getters/setters? I don't like to do this:
myObj.prop = "hello"
because I could use non-existent properties, and it would be easy to misspell something.
How can I get JavaScript to be a bit more like Java, with private and public final properties, etc.? Any suggestions?
If you just translate your Java code into JavaScript, you're going to be constantly fighting JavaScript's object model, which is prototype-based, not class-based. There are no private properties on objects, no final properties unless you're using an ES5-compatible engine (you haven't mentioned what your target runtime environment is; browsers aren't ES5-compatible yet, that'll be another couple of years), and in fact no classes at all.
Instead, I recommend you thoroughly brief yourself on how object orientation actually works in JavaScript, and then build your application fully embracing how JavaScript does it. This is non-trivial, but rewarding.
Some articles that may be of use. I start with closures because really understanding closures is absolutely essential to writing JavaScript, and most "private member" solutions rely on closures. Then I refer to a couple of articles by Douglas Crockford. Crockford is required reading if you're going to work in JavaScript, even if you end up disagreeing with some of his conclusions. Then I point to a couple of articles specifically addressing doing class-like things.
Closures are not complicated - Me
Prototypical inheritance in JavaScript - Crockford
Private Members in JavaScript - Crockford
Simple, Efficient Supercalls in JavaScript - Me Includes syntactic sugar to make it easier to set up hierarchies of objects (it uses class-based terminology, but actually it's just prototypical inheritance), including calling "superclass" methods.
Private Members in JavaScript - Me Listing Crockford's solution and others
Mythical Methods - Me
You must remember this - Me
Addressing some of your specific questions:
what do you suggest for creating class properties? Should I use getters/setters? I don't like to do this:
myObj.prop = "hello"
because I could use non-existent properties, and it would be easy to misspell something.
I don't, I prefer using TDD to ensure that if I do have a typo, it gets revealed in testing. (A good code-completing editor will also be helpful here, though really good JavaScript code-completing editors are thin on the ground.) But you're right that getters and setters in the Java sense (methods like getFoo and setFoo) would make it more obvious when you're creating/accessing a property that you haven't defined in advance (e.g., through a typo) by causing a runtime error, calling a function that doesn't exist. (I say "in the Java sense" because JavaScript as of ES5 has a different kind of "getters" and "setters" that are transparent and wouldn't help with that.) So that's an argument for using them. If you do, you might look at using Google's Closure compiler for release builds, as it will inline them.
How can I get JavaScript to be a bit more like Java, with private...
I've linked Crockford's article on private members, and my own which lists other ways. The very basic explanation of the Crockford model is: You use a variable in the context created by the call to your constructor function and a function created within that context (a closure) that has access to it, rather than an object property:
function Foo() {
    var bar;
    function Foo_setBar(b) {
        bar = b;
    }
    function Foo_getBar() {
        return bar;
    }
    this.setBar = Foo_setBar;
    this.getBar = Foo_getBar;
}
bar is not an object property, but the functions defined in the context with it have an enduring reference to it. This is totally fine if you're going to have a smallish number of Foo objects. If you're going to have thousands of Foo objects you might want to reconsider, because each and every Foo object has its own two functions (really genuinely different Function instances) for Foo_getBar and Foo_setBar.
You'll frequently see the above written like this:
function Foo() {
    var bar;
    this.setBar = function(b) {
        bar = b;
    };
    this.getBar = function() {
        return bar;
    };
}
Yes, it's briefer, but now the functions don't have names, and giving your functions names helps your tools help you.
How can I get JavaScript to be a bit more like Java, with... public final properties
You can define a Java-style getter with no setter. Or, if your target environment will be ES5-compliant (again, browsers aren't yet; it'll be another couple of years), you could use the new Object.defineProperty feature, which allows you to define properties that cannot be written to.
But my main point is to embrace the language and environment in which you're working. Learn it well, and you'll find that different patterns apply than in Java. Both are great languages (I use them both a lot), but they work differently and lead to different solutions.
As one more option, you can use the module pattern to create private properties and public accessors.
This doesn't directly answer your question, but I would abandon the idea of trying to make the JavaScript app like Java. They really are different languages (despite some similarities in syntax and in their name). As a general statement, it makes sense to adopt the idioms of the target language when porting something.
Currently there are many choices for you; you can check out the Dojo library. In Dojo, you can code mostly as you would in Java.
Class
JavaScript doesn't have a class system like Java's, so Dojo provides dojo.declare to simulate one. Check this page. There are field variables, constructor methods, and extending from another class.
JavaScript has a feature that constructor functions may return any object (not necessarily this). So your constructor function could just return a proxy object that allows access only to the public methods of your class. Using this method you can create real protected members, just like in Java (with inheritance, super() calls, etc.).
I created a little library to streamline this method: http://idya.github.com/oolib/
Dojo is one option. I personally prefer Prototype. It also has a framework and API for creating classes and using inheritance in a more "java-ish" way. See the Class.create method in the API. I've used it on multiple webapps I've worked on.
I mainly agree with @Willie Wheeler that you shouldn't try too hard to make your app like Java; there are ways of using JavaScript to create things like private members, etc. Douglas Crockford and others have written about this kind of thing.
I'm the author of the CoffeeScript book from PragProg. Right now, I use CoffeeScript as my primary language; I got fluent in JavaScript in the course of learning CoffeeScript. But before that, my best language was Java.
So I know what you're going through. Java has a very strong set of best practices that give you a clear idea of what good code is: clean encapsulation, fine-grained exceptions, thorough JavaDocs, and GOF design patterns all over the place. When you switch to JavaScript, that goes right out the window. There are few "best practices," and more of a vague sense of "this is elegant." Then when you start seeing bugs, it's incredibly frustrating—there are no compile-time errors, and far fewer, less precise runtime errors. It's like playing without a net. And while CoffeeScript adds some syntactic sugar that might look familiar to Java coders (notably classes), it's really no less of a leap.
Here's my advice: Learn to write good CoffeeScript/JavaScript code. Trying to make it look like Java is the path to madness (and believe me, many have tried; see: just about any JS code released by Google). Good JS code is more minimalistic. Don't use get/set methods; use exceptions sparingly; and don't use classes or design patterns for everything. JS is ultimately a more expressive language than Java is, and CoffeeScript even more so. Once you get used to the feeling of danger that comes with it, you'll like it.
One note: JavaScripters are, by and large, terrible when it comes to testing. There are plenty of good JS testing frameworks out there, but robust testing is much rarer than in the Java world. So in that regard, there's something JavaScripters can learn from Java coders. Using TDD would also be a great way of easing your concerns about how easy it is to make errors that, otherwise, wouldn't get caught until some particular part of your application runs.

Keeping track of what's in a Collection in pre-generics Java?

For a bunch of reasons that (believe it or not) are not as unsound as you may think, we are still (sigh) using Java 1.4 to build and run our code (though we plan to finally move to Java 7 by the end of the year).
Our existing code that uses Collection classes doesn't do a very good job of making it clear what is expected to be in the Collection. Obviously you can read the code, see what downcasts end up being done, and infer from that, but you can't just look at a method declaration and know what the Collection object that is a method argument or method return value actually holds.
In new code that I'm writing and when I am in older code that uses Collections, I've been adding in-line comments to Collections declarations to show what would have been declared if generics were being used. For example:
Map/*<String, Set<Integer>>*/ theMap = new HashMap/*<String, Set<Integer>>*/();
or
List/*<Actions>*/ someMethod(List/*<Job>*/ jobs);
In keeping with the frowning at subjectivity here at SO, rather than asking what you think of this (though admittedly I'd like to know -- I do find it a bit ugly but still like having the type info there) I'd instead just ask what, if anything, you do to make it clear what is being held by pre-generics Collection objects.
What we recommended back in the old days -- and I was a Java Architect at Sun when Java 1.1 was the New Thing -- was to write a class around the structure (I don't think 1.1 even had Collection as a base class) so that the typecasts happen in code you control instead of in user code. So, for example, something like:
public class ArrayOfFoo {
    Object[] ary; // ctor left as exercise
    public void set(int index, Foo value) {
        ary[index] = (Object) value; // cast strictly not needed, any Foo is an Object
    }
    public Foo get(int index) {
        return (Foo) ary[index]; // cast needed, not every Object is a Foo
    }
}
Sounds like the code base you have isn't built to this convention; if you're writing new code, there's no reason you can't start. Failing that, your convention isn't bad, but it's easy to forget the cast and then have to search to find out why you're getting a bad cast exception. It's mildly better to resort to some variant of Hungarian notation, or the Smalltalk 'aVariable' convention, by encoding the type in the names, so that you use
Object[] fooAry = new Object[aZillion];
fooAry[42] = new Foo();
Foo aFoo = (Foo) fooAry[42];
Use clear variable identifiers such as jobList, actionList, or dictionaryMap. If you're concerned with the type of objects they contain, you could even make it a convention to always let the identifier of a Collection hint about which type of objects it holds.
The inline comments aren't a bad idea, actually. When I ported a 1.5 project back to 1.4 I did just that (instead of removing the type parameters). It worked out quite well.
I'd recommend writing tests. For various reasons:
You should be writing tests anyway!
You can assert the type of a collection member very easily to ensure that all your code paths are adding the right types to the collection
You can use the test to write code that serves as an "example" of how to use the collection correctly (see the sketch below)
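A minimal sketch of such a test in pre-generics, JUnit 3 style; JobQueue and Job are hypothetical names standing in for whatever your code actually uses:

import java.util.Iterator;
import java.util.List;
import junit.framework.TestCase;

public class JobQueueTest extends TestCase {

    public void testPendingJobsHoldsOnlyJobs() {
        JobQueue queue = new JobQueue();       // hypothetical class under test
        queue.add(new Job("rebuild index"));   // hypothetical element type

        List pending = queue.getPendingJobs(); // raw List, pre-generics
        for (Iterator it = pending.iterator(); it.hasNext();) {
            Object element = it.next();
            // The assertion doubles as documentation of what the List holds.
            assertTrue("unexpected element type: " + element.getClass().getName(),
                    element instanceof Job);
        }
    }
}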
If you just need binary compatibility with 1.4, you could consider using a tool to downgrade the class files back to 1.4 and thus start developing in 1.6 or 1.7 right now. You would of course need to avoid any API that wasn't there in 1.4 (unfortunately you can't compile code with generics against the 1.4 jars directly, as they don't declare any generic types). The bytecode is still the same (at least with 1.6; I don't know for sure about 1.7). One free tool that can do the trick is ProGuard. It can do much more sophisticated things and can also remove all traces of generics in the class files. Just turn off the obfuscation and optimization if you don't need them. It will also warn you if some missing API is used in the processed code if you feed it the 1.4 libraries.
I'm aware that this is considered a hack by many, but we had a similar requirement where we needed some code to still run on a Personal Java VM (essentially Java 1.1) and several other exotic VMs, and this approach worked quite well. We started with ProGuard and then made our own tool for the task, to be able to implement a few workarounds for bugs in the diverse VMs.

Why is all the Java code packed in classes?

I have started learning Java, picked up some books, and collected materials online too. I have programmed in C++. What I do not understand is that even the main method is packed inside a class in Java. Why do we have everything packed inside some class in Java? Why does it not have independent functions?
This is the main concept of object-oriented programming languages: everything is an object, which is an instance of a class.
So, because there's nothing but classes in Java (except for the few primitive types, like int, float, ...), we have to define the main method, the starting point of a Java application, inside a class.
The main method is a normal static method that behaves just like any other static method. Only that the virtual machine uses this one method (only) to start the main thread of the application.
Basically, it works like this:
You start the application with java MyClass
The JVM loads this class ("classloading")
The JVM starts a new thread (the main thread)
The JVM invokes the method with the signature public static void main(String[])
That's it (in brief).
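In code, the convention described above looks like this (a minimal, runnable sketch; the class name is whatever you pass to the java command):

public class MyClass {
    // The JVM looks for exactly this signature when you run: java MyClass
    public static void main(String[] args) {
        System.out.println("Hello from the main thread");
    }
}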
Because that's how the designers of the language wanted it to be.
Java enforces the Object Oriented paradigm very very heavily. That said, there are plenty of ways to work around it if you find it cumbersome.
For one, the class that contains main can easily have lots of other methods in it. Also, it's not uncommon to make a class called 'Utility' or something similar to store all your generic methods that aren't associated with any particular class or objects.
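For example, a hypothetical utility class of that kind might look like this; static methods are the closest Java gets to C++-style free functions:

public final class Utility {
    private Utility() {} // no instances; the class is just a namespace for functions

    public static int clamp(int value, int min, int max) {
        return Math.max(min, Math.min(max, value));
    }
}

Callers simply write Utility.clamp(x, 0, 100) without ever creating a Utility object.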
If you work in a smart IDE such as eclipse, you'll eventually find that Java's seemingly over-abundant restrictions, exceptions, and rigorous structure are actually a blessing in disguise. The compiler can understand and work through your code much better when the language isn't cluttered with syntactic junk. It will give you information about unused variables and methods, dead code, etc. and the IDE will give you suggested fixes and code completion (and of course auto-format). I never actually type out import statements anymore, I just mouse over the code and select the class I want. With rigorous types, generic types, casting restrictions etc. the compiler can catch a lot of code which might otherwise result in all kinds of crazy undetectable behavior at runtime. Java is the strictest language in the sense that most of what you type will not compile or else very quickly throw an Exception of one kind or another.
So, if you ask a question about the structure of Java, Java programmers will generally just answer "because that's the rule", while Python programmers are trying to get their indentation right (no auto-format to help), Ruby programmers are writing unit tests to make sure all their arguments are of the correct type, and C programmers are trying to figure out where the segfault is occurring. As I understand it, C++ has everything Java has, but too many other capabilities as well (including casting anything to anything, and oh-so-dangerous pointer arithmetic).
And you can have multiple entry points in a jar, depending on how many classes inside the package have a main method.
