So is Java 7 finally going to get closures? What's the latest news?
Yes, closures were included in the release plan of Java 7 (and they were the most significant reason for delaying the release from winter to autumn; it is now expected in September 2010).
The latest news can be found at Project Lambda. You may also be interested in reading the latest specification draft.
There is no official statement on the state of closures at the moment.
Here are some readable examples of how it could work and look.
If you want to get some insight into what's going on I refer you to the OpenJDK mailing list.
Overview
Basically there is some hope: code, together with some tests, has already been committed to a source branch, and there is at least some halfway-working infrastructure to test it.
The change message from Maurizio Cimadamore reads:
initial lambda push; the current prototype supports the following features:
function types syntax (optionally enabled with -XDallowFunctionTypes)
function types subtyping
full support for lambda expression of type 1 and 2
inference of thrown types/return type in a lambda
lambda conversion using rules specified in v0.1.5 draft
support references to 'this' (both explicit and implicit)
translation using method handles
The modified build script of the langtools repository now generates an additional jar file called javacrt.jar, which contains a helper class to be used during SAM conversion; after the build, the generated javac/java scripts will take care of automatically setting up the required dependencies so that code containing lambda expressions can be compiled and executed.
But this is ongoing work and quite buggy at the moment.
For instance, the compiler sometimes crashes on valid expressions, fails to compile correct closure syntax, or generates illegal bytecode.
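To make the quoted feature list concrete, here is a minimal sketch of what "lambda conversion" to a SAM (single abstract method) type means. The prototype's draft syntax (the #()-style function types mentioned above) differed from what eventually shipped; this sketch uses the final Java 8 syntax purely for illustration.

// A SAM (single abstract method) interface: any interface with exactly one
// abstract method is a candidate for lambda conversion.
interface IntTransform {
    int apply(int x);
}

public class LambdaConversionDemo {
    public static void main(String[] args) {
        // The lambda is converted to an instance of the SAM type; the compiler
        // infers parameter and return types from the target interface.
        IntTransform plusOne = x -> x + 1;
        System.out.println(plusOne.apply(41)); // prints 42
    }
}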
On the negative side there are some statements from Neal Gafter:
It's been nearly three months since the 0.15 draft, and it is now less
than two weeks before the TL (Tools and Languages) final integration
preceding openjdk7 feature complete. If you've made progress on the
specification and implementation, we would very much appreciate it being
shared with us. If not, perhaps we can help. If Oracle has decided that
this feature is no longer important for JDK7, that would be good to know
too. Whatever is happening, silence sends the wrong message.
A discussion between Neal Gafter and Jonathan Gibbons:
Great to see this, Maurizio! Unfortunately it arrives a week too late, and
in the wrong repository, to be included in jdk7.
I notice that none of the tests show a variable of function type being
converted to a SAM type. What are the plans there?
Jonathan Gibbons' response:
Since the published feature list for jdk7 and the published schedule for
jdk7 would appear to be at odds, why do you always assume the schedule
is correct?
Neal Gafter's answer:
Because I recall repeated discussion to the effect that the feature set
would be adjusted based on their completion status with respect to the
schedule.
Some people even question whether the whole thing makes sense anymore and suggest moving to another language:
One starts to wonder, why not just move to Scala -- there's much more
that needs to be added to Java in order to build a coherent combination
of features around lambdas. And now these delays, which affect not just
users of ParallelArray but everyone who wants to build neatly refactored,
testable software in Java.
Seems like nobody's proposing to add declaration-site variance in Java
=> means FunctionN<T, ...> interfaces will not subtype the way they should.
Nor is there specialization for primitives. (Scala's #specialized is
broken for all but toy classes, but at least it's moving in the right
direction)
No JVM-level recognition that an object is a closure, and can hence be
eliminated, as it can be with Scala's closure elimination (if the HOF can
also be inlined.) The JVM seems to add something like an unavoidable machine
word access to every polymorphic call site, even if they are supposedly
inline-cached and not megamorphic, even inside a loop. Result that I've
seen is approximately a 2x slowdown on toy microbenchmarks like "sum an array
of integers" if implemented with any form of closures other than something
that can be #inline'd in Scala. (And even in Scala, most HOF's are virtual
hence can't be inlined.) I for one would like to see usable inlining in a
language that /encourages/ the use of closures in every for loop.
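To unpack the variance complaint above: Java only has use-site variance (wildcards), so a function interface is invariant unless every use site spells out the bounds. A minimal sketch, using the java.util.function.Function type that eventually shipped in Java 8 (the quote's hypothetical FunctionN types would have the same problem):

import java.util.function.Function;

public class VarianceDemo {
    static class Animal {}
    static class Cat extends Animal {}

    public static void main(String[] args) {
        Function<Object, Cat> f = o -> new Cat();

        // Does NOT compile, although it would be perfectly safe:
        // Function<String, Animal> g = f;

        // Works, but only because the use site spells out the variance
        // (contravariant argument, covariant result) with wildcards:
        Function<? super String, ? extends Animal> g = f;
        Animal a = g.apply("meow");
        System.out.println(a);
    }
}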
Conclusion
This is just a quick look at the whole problem, and the quotes and statements here are by no means exhaustive. At the moment, people are still at the stage of asking "Can closures really be done in Java, and if so, how should it be done and what might it look like?".
There is no simple "OK, we just add closures to Java over the weekend".
Due to the interaction of some design mistakes like varargs as arrays, type erasure ... there are cases that just can't work. Finding all these small problems and deciding whether they are fixable is quite hard.
In the end, there might be some surprises.
I'm not sure what that surprise will be, but I guess it will be either:
Closures won't get into Java 7 or
Closures in Java 7 will be what Generics were in Java 5 (buggy, complex stuff which looks like it might work, but breaks apart as soon as you push things a bit further)
Personal opinion
I switched to Scala a long time ago. As long as Oracle doesn't do stupid things to the JVM, I don't care anymore. Mistakes were made in the evolution of the Java language, partly due to backward compatibility, and they created an additional burden for every new change people tried to make.
Basically: Java works, but there will be no evolution of the language anymore. Every change people make increases the cost of making the next change, making changes in the future more and more unlikely.
I don't believe that there will be any changes to the Java language after Java 7 apart from some small syntax improvements like project Coin.
http://java.dzone.com/news/closures-coming-java-7
AFAIK the latest news is still, as of late Nov 2009, that closures will be in Java 7 in some form. Since this was given as the main reason for a significant delay in the release, it seems pretty unlikely that they'll drop it again.
There's been a whole lot of syntax- and transparency-related debate going on on the lambda-dev mailing list (focusing particularly, it seems, on how hard a currying function is to read in a given syntax), and there have been a couple of draft proposal iterations from Sun, but I haven't seen much from them on that list in a while.
I'm at a release conference now and the speaker is saying closures are coming to Java 8.
Related
While studying the standard Java library and its classes, I couldn't help noticing that some of those classes have methods that, in my opinion, have next to no relevance to those classes' cause.
The methods I'm talking about are, for example, Integer#getInteger, which retrieves a value of some "system property", or System#arraycopy, whose purpose is well-defined by its name.
Still, both of these methods seem kinda out of place, especially the first one, which for some reason binds working with system resources to a primitive type wrapper class.
From my current point of view, such method placement policy looks like a violation of a fundamental OOP design principle: that each class must be dedicated to solving its particular set of problems and not turn itself into a Swiss army knife.
But since I don't think that Java designers are idiots, I assume that there's some logic behind the decision to place those methods right where they are. So I'd be grateful if someone could explain what that logic really is.
Thanks!
Update
A few people have hinted at the fact that Java does have its illogical things that are simply remnants of a turbulent past. Let me reformulate my question, then: why is Java so unwilling to mark its architectural flaws as deprecated? It's not as if the existing deprecated features are likely to be discontinued in any observable future, and marking things deprecated really helps people refrain from using them in newly created code.
This is a good thing to wonder about. I know that for more recent features (such as generics, lambdas, etc.) there are several blogs and mailing list posts that explain the choices made by the library makers. These are very interesting to read.
In your case I expect the answer isn't too exciting. Why they were made this way is hard to tell. But both classes have existed since JDK 1.0. In those days the quality of programming in general (and also of Java and OO in particular) was perhaps lower (meaning there were fewer common practices, and library makers had to invent many paradigms themselves). Also, there were other constraints in those times, such as object creation being expensive.
Many of those awkwardly designed methods and classes now have a better alternative. (See Date and the java.time package.)
You would expect arraycopy to have been added to the Arrays class, but unfortunately it is not there.
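For comparison, a small sketch of the two idioms; Arrays.copyOfRange (added in Java 6) covers many of the same use cases, even though the low-level System.arraycopy stayed where it was:

import java.util.Arrays;

public class CopyDemo {
    public static void main(String[] args) {
        int[] src = {1, 2, 3, 4, 5};

        // The JDK 1.0 way: the caller allocates the destination and copies into it.
        int[] dst = new int[3];
        System.arraycopy(src, 1, dst, 0, 3); // dst = {2, 3, 4}

        // The newer convenience method in java.util.Arrays (since Java 6).
        int[] slice = Arrays.copyOfRange(src, 1, 4); // {2, 3, 4}

        System.out.println(Arrays.equals(dst, slice)); // true
    }
}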
Ideally the original method would be deprecated for a while and then removed. Many libraries follow this strategy. Java, however, is very conservative about this and only deprecates things that really should not be used (such as Thread.stop()). I don't think a method has ever been removed from Java due to deprecation. This means it is fairly easy to upgrade your software to a newer version of Java, but it comes at the cost of leaving some clutter in the libraries.
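(For reference, deprecation in Java is just an annotation plus a Javadoc tag; nothing stops deprecated code from compiling or running. The class and method names below are hypothetical.)

public class LegacyApi {
    /**
     * @deprecated unsafe; use {@link #shutDownGracefully()} instead.
     */
    @Deprecated
    public void stop() {
        shutDownGracefully();
    }

    /** The supported replacement. */
    public void shutDownGracefully() {
        // ... orderly shutdown ...
    }
}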
The fact that Java is so conservative about keeping new JDK/JRE versions compatible with older source code and binaries is both loved and hated. For your hobby project, or a small actively developed project, upgrading to a new JVM that removes deprecated functions after a few years is not too difficult. But don't forget that many projects are not actively developed, or their developers have a hard time making changes safely, for instance because they lack proper regression tests. In these projects, changes in APIs cost a lot of time to adapt to, and run the risk of introducing bugs.
Also, libraries often try to support older versions of Java as well as newer ones; they would have a problem doing so if methods had been deleted.
The Integer example is probably just a design decision. If you want to implicitly interpret a property as an Integer, use java.lang.Integer. Otherwise you would have to provide a getter method for each java.lang type, something like:
System.getPropertyAsBoolean(String)
System.getPropertyAsByte(String)
System.getPropertyAsInteger(String)
...
And for each data type, you'd require one additional method for the default:
System.getPropertyAsBoolean(String, boolean)
System.getPropertyAsByte(String, byte)
...
Since the java.lang types already have some conversion abilities (Integer.valueOf(String)), I am not too surprised to find a getProperty method here. Convenience in trade for breaking principles a tiny bit.
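To make the trade-off concrete, here is a small sketch of how Integer.getInteger actually behaves (the property name is made up):

public class GetIntegerDemo {
    public static void main(String[] args) {
        System.setProperty("app.port", "8080");

        // Integer.getInteger reads the *system property* named "app.port"
        // and parses it as an integer; it does not parse the literal string.
        Integer port = Integer.getInteger("app.port");        // 8080
        Integer missing = Integer.getInteger("app.host");     // null
        int withDefault = Integer.getInteger("app.host", 80); // 80

        System.out.println(port + " " + missing + " " + withDefault);
    }
}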
As for System.arraycopy, I guess it is an operation that depends on the operating system: you probably copy memory from one location to another in a very efficient way. If I wanted to copy an array like that, I'd look for it in java.lang.System.
"I assume that there's some logic behind a decision to place those
methods right where they are."
While that is often true, I have found that when something's off, this assumption is typically where you are misled.
A language is in constant development, from the day someone proposes it to the day it is antiquated. In between those extremes are some phases that the language goes through. Especially if someone is spending money on it and wants people to use it, a very peculiar phase often occurs, just before or after the first release:
The "we need this to work yesterday" phase.
This is where stuff like this happens: you have an almost complete language, but the programmers need to do something to show what the language can do, or a specific application needs a feature that was not designed into the language.
So where do we add this feature?
- well, wherever it makes the most sense to the particular programmer whose task it is to "make it work yesterday".
The logic may be that this is where the function makes the most sense, since it doesn't belong anywhere else and doesn't deserve a class of its own. It could also be something like: so far, we have never done an array copy without using System... let's put arraycopy in there and save everyone an extra import.
In the next generation of the language, people will not move the feature, since some experienced programmers will complain. So the feature may be duplicated, and also found in a place where it makes more sense.
Much later, it will be marked as deprecated and eventually deleted, if anyone cares to clean it up.
We are in the process of changing the versioning and dependency system of our Middleware (MW) software, and we were thinking of something like this:
a.b.c.d
a - Major version
b - Backwards compatibility break
c - New functionality
d - Bug fix
But with a little twist, since we must keep the number of packages we send to clients to a minimum due to the size of the software and the slow network.
So the idea was to only reset the Bug Fix number on a Backwards Compatibility change. Using this logic, we could create an automatic system that only generates a new package if there are any bug fixes beyond the version the client already has installed, and if it complies with what the new FrontEnd (FE) requires.
To better illustrate the whole scenario, here are a couple of examples:
[Table: Increment logic]
[Table: Requires-package decision logic]
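A compact sketch of the proposed rules, assuming I've read the twist correctly (the bug-fix number keeps growing across feature releases and only resets on a compatibility break; all names are hypothetical, and this uses modern Java records purely for brevity):

// Hypothetical sketch of the a.b.c.d scheme described above.
record Version(int major, int compatBreak, int feature, int bugFix) {

    Version nextBugFix()      { return new Version(major, compatBreak, feature, bugFix + 1); }
    Version nextFeature()     { return new Version(major, compatBreak, feature + 1, bugFix); }
    Version nextCompatBreak() { return new Version(major, compatBreak + 1, 0, 0); } // d resets only here

    /** A client needs a new package if it is on the same compatibility line but behind on fixes. */
    boolean packageNeededBy(Version installed) {
        return major == installed.major()
            && compatBreak == installed.compatBreak()
            && bugFix > installed.bugFix();
    }
}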
Although this is a non-standard versioning logic, do you guys see any problems with this logic?
There is no problem skipping version numbers (or having complex version numbering) as long as you have internal logic that your whole company understands and abides by. (If things don't change... Microsoft is going to be skipping version 9 of its Windows systems.)
[major].[minor].[release].[build] is used quite a bit by many companies.
At our company, we added one extra beyond [build] called [private].
[major].[minor].[release].[build].[private]
In our logic, [private] is used for internal sequencing for bug testing. (We purposely break our code so we can test for bugs.) But before releasing the code, [private] must be set back to zero. Thus, no code leaves the office unless there is a .0 sitting at the end of the Version number. It's a reminder for programmers to remove (or comment out) their test coding and it's a reminder to testers not to send out code that's only meant for testing.
Back in the '80s I also read something about the psychology of version numbering. Some companies would jump directly to [minor] release 3 just to make it look like they had done more testing than they really did. They also avoided going above 7 because it made them look like they were fixing too many errors and were prone to having horribly buggy code. This psychological perception of the customers or clients can be fairly strong and can be a huge debate between programmers (who tend, rightly, to be literalists) and marketing people (who treat logic like some fluffy afterthought).
With this in mind, to answer your question: Your logic is fantastic... now go sell it to your marketing department. (Or better yet... don't sell it to them, just implement it and hope they don't come knocking on your door, or cubicle, or hiding place.)
Best of luck with your design. :)
[major].[minor].[release].[build]
(widely accepted pattern from this post: https://stackoverflow.com/a/615277/758378)
Deviating from the pattern
You have a specific reason for doing this, so I don't see any problems. (Your specific logic seems fine to me as well.)
About not resetting the last number
This is really not a problem at all. In the answer linked above, you can even see the SVN revision suggested as a number to use.
A number of years ago, a Java testing tool called Agitar was popular. It appeared to do something like property-based testing.
Nowadays, property-based testing in the style of Haskell's QuickCheck is popular. There are a number of ports to Java, including:
quickcheck
jcheck
junit-quickcheck
My question is: what is the difference between Agitar and QuickCheck-style property-based testing?
To me, the key features of Haskell QuickCheck are:
1. It generates random data for testing.
2. If a test fails, it repeatedly "shrinks" the data (e.g., changing numbers to zero, reducing the size of a list) until it finds the simplest test case that still fails. This is very useful, because when you see the simplest test case, you often know exactly where the bug is and how to fix it.
3. It starts testing with simple data, and gradually moves on to more complex data. This is useful because it means that tests fail more quickly. Also, it ensures that edge cases (e.g., empty lists, zeroes) are properly tested.
Quickcheck for Java supports (1), but not (2) or (3). I don't know what features are supported by Agitar, but it would be useful to check.
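For illustration, here is roughly what a property looks like with junit-quickcheck, using the runner-based API its documentation describes for more recent releases (treat the details as a sketch; they may vary by version):

import com.pholser.junit.quickcheck.Property;
import com.pholser.junit.quickcheck.runner.JUnitQuickcheck;
import org.junit.runner.RunWith;
import static org.junit.Assert.assertEquals;

// A property must hold for ALL generated inputs, not just hand-picked examples.
@RunWith(JUnitQuickcheck.class)
public class StringProperties {

    // The framework generates many random (s1, s2) pairs and reports the
    // first counterexample it finds.
    @Property
    public void concatenationLength(String s1, String s2) {
        assertEquals(s1.length() + s2.length(), (s1 + s2).length());
    }
}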
Additionally, you might look into ScalaCheck. Since Scala is interoperable with Java, you could use it to test your Java code. I haven't used it, so I don't know which features it has, but I suspect it has more features than Java Quickcheck.
It's worth noting that as of version 0.6, junit-quickcheck now supports shrinking:
http://pholser.github.io/junit-quickcheck/site/0.6-alpha-3-SNAPSHOT/usage/shrinking.html
quickcheck doesn't appear to have had any new releases since 2011:
https://bitbucket.org/blob79/quickcheck
I've been a Java developer for several years now. Lately there's been quite the buzz over Groovy. I checked it out and it looks interesting and all, but I'm not seeing any inherent "wow factor" to it; meaning, I'm not seeing any intrinsic value in starting to develop in it.
Now, I'm positive that I just can't see the forest for the trees here. So, I ask the SO community at large: how would learning Groovy behoove any Java developer? What features/capabilities/etc. does it have (or do better) than plain ole' Java?
Things in the software world don't take off like wildfire without a reason. I'm sure Groovy has all sorts of nifty little (and big) capabilities that make it a must-know; I'm just not "getting" it. Thanks in advance.
In my opinion, some of the biggest advantages Groovy has over Java are:
terser code thanks to overloaded operators and simplified property access. Instead of writing
if (obj1.getBar().equals(obj2.getBaz())) {
obj3.setFoo("equal");
}
you can write
if (obj1.bar == obj2.baz) {
obj3.foo = "equal"
}
which I consider much more readable.
inline notation for Maps and Lists. Instead of
Map attributes = new HashMap();
attributes.put("color", "red");
attributes.put("width", 1);
you can write
def attributes = [color: "red", width: 1]
useful extensions to standard libraries. For example, you can read files and webpages like this:
def fileContents = new File('readme.txt').text
def pageContents = new URL('http://example.com').text
syntactic sugar for initializing properties - for example,
MyClass myObject = new MyClass();
myObject.setFoo("bar");
myObject.setBaz(23);
can be replaced by
def myObject = new MyClass(foo: "bar", baz: 23)
'safe' dereferencing - if (str != null) {return str.toUpperCase();} else {return null;} becomes return str?.toUpperCase()
Regular Expressions look nicer
Closures allow you to write code in 'functional' style, and make writing things like event listeners way easier
... since no one has pitched in so far, I'd suggest looking at the Groovy mailing list. Similar questions pop up from time to time regarding Grails as well. Those are great communities of people who at some point or another faced the same type of questions.
I'll add another, since I don't see it explicitly mentioned.
For me, while terseness is nice on its own, it isn't the main point.
Whatever code you write, Java will always look like Java, and it will never read like the problem you're solving--it will read like Java, solving your problem.
I want the reverse: I want my code to read like the problem I'm solving, written in [insert language here].
Groovy is more expressive, more malleable than Java. It can look like what I'm trying to do. Canonical examples of this include internal DSLs. One I've used frequently is easyb, which is just Groovy code, but sounds like what I'm doing when I read it out loud. There are a bunch more examples, but easyb is an easy sell.
Also not mentioned are the AST transformations allowing cool compile-time tricks. You can play similar games in Java with things like AspectJ, but it's not as "baked in". Things like @Delegate, @Singleton, etc. are just plain handy.
For me, a language needs to be malleable, deformable, flexible: Java is not. Java has an impoverished model of abstraction making it frustrating to work with--the amount of extra work, writ both small and large throughout the language, is mind-boggling at times.
You're kinda late to the party. Groovy saw its dawn about 4 years ago, when people fed up with Java's ridiculous verbosity were ready to give up performance for the sake of code reduction, and Groovy provided exactly that in a very familiar syntax. But now that it's been around for a while, it gets dragged down by its past as much as every language does, and some less-than-best design decisions are there to stay, as, I'm pretty sure, are its performance issues. This was basically the reason for the emergence of Groovy++, whose very existence suggests that there are some problems with its predecessor. And in fact Groovy++ addresses a lot of issues of its predecessor, but it also has its own problems: the main one being that it's basically driven by the devotion of one man, Alex Tkachman, with no funding, so you should see all the risks involved.
Nowadays, as more and more strong rivals appear (Kotlin, Ceylon), it becomes obvious to me that Groovy has already passed its zenith: even its core team is wavering (see their discussions on Grumpy mode). This was the reason for me to start gradually leaving this technology in favor of Scala, and to look forward to the forthcoming Kotlin. Both are great projects that take some effort for a "Java mind" to switch to, but they are definitely worth it.
Updates due to hypercritical response:
Design mistakes:
Restricting the syntax for Java paste-in compatibility, while a seemingly neat feature, in practice turned out not to be used at all, because migrating Java code to Groovy is generally a bad idea for the same performance reasons. But it resulted in inheriting some of Java's ridiculousness, like not being able to define a variable with a name already used in the outer context.
While keeping some of Java's bad practices, they got rid of some of its good features too, like code blocks demarcated with curly braces, in preference to closures. The issue is solved much more neatly in Scala.
Dynamic typing was mainly chosen because decent type inference was hard to implement; that was basically the reason for Groovy's creator to later state that he would never have bothered creating it if he had known about Scala back then.
Googling for "groovy performance" will give you enough results. Here is one: http://stronglytypedblog.blogspot.com/2010/02/java-vs-scala-vs-groovy-vs-groovy.html
I never stated that Groovy++ is the same project as core Groovy.
I would like you to know that I was once a huge fan of Groovy and developed in it exclusively for 2 straight years. Moving on to a new language was a very hard step for me to take. But since I know Groovy's every corner, its core team, and its tendencies, you should understand that I know what I'm doing when I recommend not considering Groovy as an alternative to Java.
We've got a fairly large amount of code that just made the jump to Java 5. We've been using generics in those components targeted at being released in the Java 5 version, but the remaining code is, of course, full of raw types. I've set the compiler to generate an error for raw types and started manually clearing them, but at the present rate it'll take a very long time to go through with it (there are about 2500 errors). And that's with Eclipse's helpful Infer Generic Type quick fix, which always gets rid of the errors, but often generates code that needs further work.
Is there any better way of dealing with this? Are there any automated tools better than Eclipse's? Any way to apply the refactoring to all occurrences instead of doing them one by one? Or do you just ignore the warnings?
I would suggest ignoring the warnings. Otherwise, you'll be putting a lot of time into updating the legacy code without making any improvements to its functionality.
Update: Great comment from Luke that I thought should get more visibility:
"Generics are a way to catch run time bugs at compile time. Unless this legacy code has bugs in it that you think are related to casting I would leave it alone (if it ain't broke, don't fix it)"
As far as I know, you're going about it as efficiently as possible. It's obviously not perfect, but you'll finish eventually.
I recommend that you do it in stages, however; there are likely parts of the code that would benefit more from this than others, so focus on those. Trying to do it all in one sweep runs the risk of introducing new bugs to your code. We have one such place where we have a collection that holds context-dependent data, and generics actually can't work for it.
Basically, do what you're doing, but do it in stages as part of other work, instead of trying to fix it all in one throw.
Faced with a similar challenge, we opted to upgrade to Java 5 style generics only in the code that was edited for another reason. So if you have a bug to fix in DoItFast.java, update DoItFast.java to use Java 5 style generics. The areas of the code that are being edited and changed regularly will get updated quickly. After a few weeks or months of this behavior, you can reevaluate the condition of your code base.
If that doesn't get the job done fast enough for you, I sometimes use the slow time after lunch to just mindlessly clean up a few classes and speed the process along.
I don't think it is necessary to update all the old code. Maybe if you could somehow identify which parts of the old code are used frequently, and only update those to use generic types? Or maybe you could only worry when the raw type is returned from a public function? A lot of these cases are probably just private/local variables, which are already written without generic types and presumably work just fine, so it's probably not worth the effort to rewrite them.
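For reference, a typical raw-type cleanup is mechanical but touches both signatures and call sites; here is a sketch of the before/after shape (using pre-diamond Java 5 syntax, since that is the migration target):

import java.util.ArrayList;
import java.util.List;

public class GenerifyDemo {
    // Before: the raw type forces a cast at every read and defers type errors to runtime.
    static String firstRaw(List names) {
        return (String) names.get(0); // a ClassCastException waits here if the list is misused
    }

    // After: the parameterized version catches misuse at compile time; no cast needed.
    static String firstGeneric(List<String> names) {
        return names.get(0);
    }

    public static void main(String[] args) {
        List<String> names = new ArrayList<String>();
        names.add("Ada");
        System.out.println(firstRaw(names) + " " + firstGeneric(names));
    }
}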
I understand that with IDEA you can select all your classes, bring up the context menu and select Refactor | Generify. Job done. It's certainly a more enjoyable experience to work with generified code (IMO).