The question "Which @NotNull Java annotation should I use?" is outdated and somewhat opinion-based. Since it was asked, Java 8 has been released, along with newer IDEs.
While Java 8 supports type annotations through the integration of JSR 308, it does not ship with any. From JSR 308 Explained: Java Type Annotations by Josh Juneau:
JSR 308, Annotations on Java Types, has been incorporated as part of Java SE 8.
...
Compiler checkers can be written to verify annotated code, enforcing rules by generating compiler warnings when code does not meet certain requirements. Java SE 8 does not provide a default type-checking framework, but it is possible to write custom annotations and processors for type checking. There are also a number of type-checking frameworks that can be downloaded, which can be used as plug-ins to the Java compiler to check and enforce types that have been annotated. Type-checking frameworks comprise type annotation definitions and one or more pluggable modules that are used with the compiler for annotation processing.
Considering only solutions that offer at least some kind of @CanBeNull and @CannotBeNull, I've found information on the following (I could be wrong):
Eclipse JDT null analysis's org.eclipse.jdt.annotation. Other IDEs have their respective packages. It uses JSR 308, and there is still support for pre-Java 8 annotations.
FindBugs's javax.annotation. Based on the dormant JSR 305, it doesn't seem to use Java 8's type annotations. Even though it's not integrated into Oracle's API, it still uses the javax domain for some reason, which implies that it is.
Checker Framework's org.checkerframework.checker.nullness. Uses JSR 308.
Java EE's javax.validation.constraints. I don't know what it uses, but there is no @CanBeNull anyway.
Some are used in static code analysis, some in runtime validation.
What are the practical differences between the above options? Is there (going to be) a standard or is it intended for everyone to write their own analysis framework?
Some other nullness analyses exist besides the ones you mentioned; for example, IntelliJ contains a nullness analysis.
Here are some key questions to ask about a nullness analysis:
Does it work at compile time or run time? A compile-time analysis gives the programmer advance warning about potential bugs. With a run-time analysis, your program still crashes, but perhaps it crashes earlier or with a more informative error message.
Is it a verifier or a bug finder? A verifier gives a correctness guarantee: if the tool doesn't report any potential errors, then the program will not suffer the given error at run time. A bug finder reports some problems, but if it doesn't report any problems, your program might still be wrong. A verifier usually requires more work from the programmer, including annotating the program. A bug finder can require less effort to start using, since it can run on an unannotated program (though it may not give very good results in that case).
How precise is the analysis? How often does it suffer false alarms, issuing a warning when the program is actually correct? How often does it suffer missed alarms, failing to notify you about a real bug in your program?
Is the tooling built into an IDE? If so, it may be easier to use. If not, it can be used by any programmer rather than just ones who use that particular IDE.
The three tools you mentioned all work at compile time. FindBugs is a bug finder, and the others are verifiers. The Checker Framework has better precision, but the other two have better IDE integration. FindBugs doesn't work with Java 8 type annotations (JSR 308); both of the others support both Java 8 and pre-Java-8 annotations. All of these tools have their place in a programmer's toolbox; which one is right for you depends on your needs and goals.
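To make the compile-time checking concrete, here is a minimal sketch using the Checker Framework's nullness qualifiers (the class and method names are invented for illustration):

import org.checkerframework.checker.nullness.qual.Nullable;

class LengthDemo {
    static int length(@Nullable String s) {
        return s.length();                   // a verifier reports an error: s may be null here
    }

    static int safeLength(@Nullable String s) {
        return (s == null) ? 0 : s.length(); // OK: the null case is ruled out first
    }
}

A verifier such as the Checker Framework's Nullness Checker accepts safeLength but rejects length; a bug finder may or may not flag it.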
To answer some of your other questions:
FindBugs's annotations use the javax domain because its designer hoped that Oracle would adopt FindBugs as a Java standard (!). That never happened. You are right that the use of javax confuses many people into thinking that it is official or favored by Oracle, which it is not.
Is there (going to be) a standard or is it intended for everyone to write their own analysis framework?
For now, Oracle wants the community to experiment with creating and using a variety of analysis frameworks. They feel that they don't yet understand the pros and cons of the various approaches well enough to create a standard. They don't want to prematurely create a standard that enshrines a flawed approach. They are open to creating a standard in the future.
The information you collected pretty much describes it already:
Static analysis based on type annotations (JSR 308) is indeed much more powerful than previous approaches.
Two sets of annotations use JSR 308, both for the sake of performing static analysis (which could also be considered advanced type checking). At their core, the two tools promoting these annotations are essentially compatible (and each can also consume the annotations of the other).
Differences that I know of are mainly in two areas:
IDE integration.
Interpretation of unannotated types. In a strict world, every type is either nonnull or nullable, so if an annotation is missing it could be interpreted as nonnull by default. Alternatively, the underlying type system could use a notion of "legacy types", raising warnings when "unchecked conversions" are needed (similar to the combination of generic types and raw types). To the best of my knowledge, the Checker Framework applies the strict approach, whereas Eclipse lets you choose between a @NonNullByDefault strategy and admitting "legacy types" (for the sake of migration), as sketched below.
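For instance, a rough sketch of the default-nonnull strategy using Eclipse's JDT annotations (the Repository class and its methods are invented for illustration):

import org.eclipse.jdt.annotation.NonNullByDefault;
import org.eclipse.jdt.annotation.Nullable;

@NonNullByDefault // unannotated types in this scope are treated as non-null
class Repository {
    String require(String key) {        // parameter and return are implicitly non-null
        return key.trim();
    }

    @Nullable String find(String key) { // opt out explicitly where null is legal
        return null;
    }
}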
Also, to the best of my knowledge, nobody is planning to invest in standardizing these annotations at the moment.
Related
For a long time, I have been an application developer in Java. Recently, the Java and JVM specifications piqued my interest. I wanted to know more about some of the internals of Java, on topics that had eluded me for a long time.
I tried searching for ThreadLocal or annotation processors in those documents and I couldn't find them. Is there a reason behind the dearth of information regarding them? I thought ThreadLocal, at least, was part of the Java packages?
Are the specifications not the encyclopedias I imagined them to be?
They are fairly huge documents, so I might have missed them completely.
https://docs.oracle.com/javase/specs/jvms/se8/jvms8.pdf
https://docs.oracle.com/javase/specs/jls/se8/jls8.pdf
Why aren't ThreadLocal or AnnotationProcessor defined in the Java Language Specification (JLS)?
Because they are specified somewhere else.
The specification for ThreadLocal is in the javadocs:
https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/lang/ThreadLocal.html
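As a quick illustration of the behavior specified there (each thread sees its own, independently initialized copy), here is the classic per-thread-instance idiom; SimpleDateFormat is the usual example because it is not thread-safe:

import java.text.SimpleDateFormat;
import java.util.Date;

class DateFormats {
    // One SimpleDateFormat per thread, created lazily on first access.
    private static final ThreadLocal<SimpleDateFormat> FORMAT =
            ThreadLocal.withInitial(() -> new SimpleDateFormat("yyyy-MM-dd"));

    static String today() {
        return FORMAT.get().format(new Date());
    }
}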
The specifications for annotation processors are also in the javadocs. Start here:
https://docs.oracle.com/en/java/javase/11/docs/api/java.compiler/javax/annotation/processing/package-summary.html
https://docs.oracle.com/en/java/javase/11/docs/api/java.compiler/javax/annotation/processing/Processor.html
In general, the JLS only specifies the Java programming language itself. Other aspects of the Java environment such as the Java class libraries, the JVM specifications, the Java tool specifications, and many other things are specified (or described) in various technical notes, white papers and JSRs or JEPs.
In general, all of this information is on the public web, and can be found using Google and intelligently chosen search terms. For example, I got the javadocs of ThreadLocal in Java 11 by Googling for javadoc ThreadLocal java 11.
However, if you are looking for internal documentation (e.g. some design document that explains how ThreadLocal is implemented) you are unlikely to find anything ... beyond the OpenJDK source code itself. But the source code is freely available and (generally speaking) well commented. Google for the version you are looking for; e.g. openjdk source code java 11.
I'm writing a library that inserts already unit-tested example code (its source code, output, and any input files) into JavaDoc, with lots of customization possibilities. The main way of using this library is with inline taglets, such as
{@.codelet.and.out my.package.AGreatExample}
{@.codelet my.package.AGreatExample}
{@.file.textlet examples\doc-files\an_input_file.txt}
{@.codelet.and.out my.package.AGreatExample%eliminateCommentBlocksAndPackageDecl()}
Since custom taglets (and even doclets) require com.sun, this means they're not nearly as cross-platform as Java itself. (Not sure if this is relevant, but the word "javadoc"--and even the substring "doc"--is not in the Java 8 Language Specification.)
I don't like the idea of writing a library that's limited in this way. So what do I do? My thoughts so far are that
In order to take advantage of the existing javadoc parser, I stick with the com.sun taglets. However, I make this reliance on com.sun as "thin" as can be. That is, I put as little code in the taglet class as possible, leaving the bulk of the code elsewhere, where there is no reliance on com.sun.
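A minimal sketch of that "thin" approach against the legacy Java 7 taglet API; here the render method stands in for the bulk of the logic, which would live in code with no com.sun imports:

import java.util.Map;
import com.sun.javadoc.Tag;
import com.sun.tools.doclets.Taglet;

public class CodeletTaglet implements Taglet {
    public String getName()        { return ".codelet"; }
    public boolean isInlineTag()   { return true; }
    public boolean inField()       { return true; }
    public boolean inConstructor() { return true; }
    public boolean inMethod()      { return true; }
    public boolean inOverview()    { return true; }
    public boolean inPackage()     { return true; }
    public boolean inType()        { return true; }

    // The only com.sun-dependent step: unwrap the tag text, then delegate.
    public String toString(Tag tag) {
        return render(tag.text());
    }

    public String toString(Tag[] tags) { return null; } // inline tags only

    // In the real library this would delegate to a com.sun-free class.
    private static String render(String tagText) {
        return "<pre>" + tagText + "</pre>"; // placeholder rendering
    }

    // javadoc discovers custom taglets through this static hook.
    public static void register(Map<String, Taglet> tagletMap) {
        tagletMap.put(new CodeletTaglet().getName(), new CodeletTaglet());
    }
}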
I work towards creating my own parser, which only searches for my specific taglets. This is a pain, but not too horrible: you iterate through the lines of each Java source file, searching for \{@\.myTagletName (.*?)\}. Once you capture that text, it's pretty much the same as the code within the com.sun taglet.
This parser would have to be run before executing javadoc, and would therefore require a duplicate directory structure: (1) your original code, with the unparsed custom tags, and (2) a duplicate of that code, with the parsed output. I'd copy all code to the duplicate directory, and then parse only those Java files known to have these taglets (classes that are "registered" in some way with the parser).
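A rough sketch of such a standalone scanner (the tag pattern and the substitution step are placeholders):

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class TagletScanner {
    // Matches inline tags such as {@.codelet my.package.AGreatExample}
    private static final Pattern TAG = Pattern.compile("\\{@\\.[\\w.]+\\s+(.*?)\\}");

    static void scan(Path sourceFile) throws IOException {
        for (String line : Files.readAllLines(sourceFile, StandardCharsets.UTF_8)) {
            Matcher m = TAG.matcher(line);
            while (m.find()) {
                String target = m.group(1); // e.g. "my.package.AGreatExample"
                // ...substitute the rendered output here, write the result to
                // the duplicate directory, then run javadoc on the duplicate
            }
        }
    }
}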
Is this a reasonable approach? Is there a more cross-platform javadoc/taglet parser out there already, so I don't have to roll my own? Is there anything cross-platform and taglet-like already out there? Is JavaDoc itself not cross-platform, or just custom taglets and doclets?
I'd like a rough perspective on how many people I'm locking out of my library because of this decision (to use inline taglets), but mostly I'm looking for a long term solution.
(Despite my Java 8 link above, I'm using Java 7.)
Credit to @fge for the taglet suggestion, which is more elegant than my original idea, and to @Michael for the ominous-but-helpful com.sun warnings.
First, note that there is a difference between sun.* and com.sun.* dependencies. The sun.* namespace contains classes that implement Oracle's Java Virtual Machine. You should not use such dependencies, because the Oracle JVM's internal API can change in future releases and because this namespace may not be provided by other, non-Oracle JVM implementations. (In practice, even Android's JVM ships with one of the more widely used sun.* classes.)
Then there is the com.sun.* namespace, which was used by Sun Microsystems for implementing its Java applications. An example of legitimate use of com.sun.* dependencies is Sun's Jersey framework, which was originally deployed in the com.sun.jersey.* namespace. (For the sake of completeness, note that recent Jersey versions are deployed in the org.glassfish.jersey.* namespace beginning with version 2.0, which is incompatible with the Jersey 1 API.) For further reference, note how Oracle does not even mention the com.sun.* namespace when discussing the problems imposed by using the sun.* namespace. Also, see this related question on Stack Overflow.
Therefore, using com.sun.* dependencies is a different deal compared to sun.* dependencies. By using com.sun.* classes, you lock yourself to a specific library's API rather than to a specific JVM. For example, you can avoid direct use of the com.sun.jersey.* namespace by using the standardized JAX-RS javax.ws.rs.* namespace. In this sense, com.sun.* dependencies are product-specific and proprietary and must not be confused with Java's standardized APIs, which are usually found in the javax.* namespace.
If I were you, I would stick with the taglets, which are a mature and recognized implementation. Oracle is pretty determined not to break APIs (otherwise, they would probably also have moved the taglets to com.oracle.*), and I see no reason why they would suddenly change the taglet package structure. And if they did, you would merely need to update your tech; if your application breaks on a new Java release, your users will come looking for an update of your software. Because you do not run the taglet project, I agree with you that detaching your logic from a foreign API is in general a good idea, as it is for any dependency. Also, using taglets for your use case pretty much follows the KISS and DRY principles.
Just out of curiosity, are there any (stable) open-source projects for runtime Java code generation other than CGLIB? And why should I use them?
ASM
CGLIB and almost all other libraries are built on top of ASM, which itself operates at a very low level. This is a show-stopper for most people, as you have to understand bytecode and a bit of the JVM specification to use it properly. But mastering ASM is most certainly very interesting. Note, however, that while there is a great ASM 4 guide, in some parts of the API the Javadoc documentation can be very concise, if it is present at all, though it is being improved. ASM closely follows JVM versions to support new features.
However, if you need full control, ASM is your weapon of choice.
This project sees regular updates; at the time of this edit, version 5.0.4 had been released on May 15th, 2015.
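To give an idea of just how low-level ASM is, here is a minimal sketch that generates an empty public class with nothing but a default constructor (the class name is arbitrary):

import org.objectweb.asm.ClassWriter;
import org.objectweb.asm.MethodVisitor;
import org.objectweb.asm.Opcodes;

public class AsmDemo {
    public static byte[] generate() {
        ClassWriter cw = new ClassWriter(ClassWriter.COMPUTE_FRAMES);
        cw.visit(Opcodes.V1_7, Opcodes.ACC_PUBLIC, "GeneratedHello", null,
                "java/lang/Object", null);

        // Even an empty constructor must be spelled out opcode by opcode.
        MethodVisitor ctor = cw.visitMethod(Opcodes.ACC_PUBLIC, "<init>", "()V", null, null);
        ctor.visitCode();
        ctor.visitVarInsn(Opcodes.ALOAD, 0);            // push 'this'
        ctor.visitMethodInsn(Opcodes.INVOKESPECIAL,     // call super()
                "java/lang/Object", "<init>", "()V", false);
        ctor.visitInsn(Opcodes.RETURN);
        ctor.visitMaxs(0, 0);                           // recomputed by COMPUTE_FRAMES
        ctor.visitEnd();

        cw.visitEnd();
        return cw.toByteArray();                        // raw class-file bytes
    }
}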
Byte Buddy
Byte Buddy is a rather new library, but it provides all the functionality that CGLIB or Javassist provides, and much more. Byte Buddy can be fully customized down to the bytecode level, and it comes with an expressive domain-specific language that allows for very readable code.
It supports all JVM bytecode versions, including Java 8's semantic changes to some opcodes regarding default methods.
Byte Buddy doesn't seem to suffer from the drawbacks other libraries have:
Highly configurable
Quite fast (benchmark code)
Type safe fluent API
Type safe callbacks
Javassist advice or custom instrumentation code is written as a plain String, so type checking and debugging are impossible within that code, while Byte Buddy lets you write it in pure Java, which enforces type checks and allows debugging.
Annotation driven (flexible)
User callbacks can be configured with annotations, allowing the callback to receive the desired parameters.
Available as an agent
The nifty agent builder allows Byte Buddy to be used as a pure Java agent or as an attaching agent, which enables different kinds of applications.
Very well documented
Lots of examples
Clean code, ~94% test coverage
Android DEX support
The main downside, perhaps, is that the API is a bit verbose for a beginner, but it is designed as an opt-in API shaped as a proxy-generation DSL; there's no magic and there are no questionable defaults. When manipulating bytecode, it is probably the safest and most reasonable choice. Also, with multiple examples and a big tutorial, this is not a real issue.
In October 2015 this project received the Oracle Duke's Choice Award. At that time it had just reached the 1.0.0 milestone, which is quite an achievement.
Note that Mockito replaced CGLIB with Byte Buddy in version 2.1.0.
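For a taste of the DSL, here is the canonical hello-world example, roughly as shown in Byte Buddy's own documentation:

import net.bytebuddy.ByteBuddy;
import net.bytebuddy.dynamic.loading.ClassLoadingStrategy;
import net.bytebuddy.implementation.FixedValue;
import net.bytebuddy.matcher.ElementMatchers;

public class ByteBuddyDemo {
    public static void main(String[] args) throws Exception {
        Class<?> dynamicType = new ByteBuddy()
                .subclass(Object.class)                      // generate a subclass of Object
                .method(ElementMatchers.named("toString"))   // pick the method to override
                .intercept(FixedValue.value("Hello World!")) // define its behavior in the DSL
                .make()
                .load(ByteBuddyDemo.class.getClassLoader(),
                        ClassLoadingStrategy.Default.WRAPPER)
                .getLoaded();
        System.out.println(dynamicType.newInstance());       // prints "Hello World!"
    }
}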
Javassist
The Javadoc of Javassist is way better than that of CGLIB. The class-engineering API is OK, but Javassist is not perfect either. In particular, the ProxyFactory, which is the equivalent of CGLIB's Enhancer, suffers from some drawbacks too. Just to list a few:
Bridge methods are not fully supported (i.e., the ones generated for covariant return types)
ClassLoaderProvider is a static field, so it applies to all instances within the same classloader
Custom naming would have been welcome (with checks for signed JARs)
There is no extension point, and almost all methods of interest are private, which is cumbersome if you want to change some behavior
While Javassist offers support for annotation attributes in classes, they are not supported in ProxyFactory.
On the aspect-oriented side, one can inject code into a proxy, but this approach in Javassist is limited and a bit error-prone (see the sketch after this list):
aspect code is written in a plain Java String that is compiled into opcodes
no type checking
no generics
no lambdas
no auto-(un)boxing
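A short sketch of what that looks like in practice (the class and method names are invented); note that the injected code travels as an unchecked String:

import javassist.ClassPool;
import javassist.CtClass;
import javassist.CtMethod;

public class JavassistDemo {
    public static void main(String[] args) throws Exception {
        CtClass cc = ClassPool.getDefault().get("com.example.Foo");
        CtMethod m = cc.getDeclaredMethod("bar");
        // The "aspect" is a plain String: typos or type errors surface only
        // when Javassist compiles it, not in your IDE or javac.
        m.insertBefore("{ System.out.println(\"entering bar\"); }");
        cc.toClass(); // load the modified class
    }
}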
Also, Javassist is recognized to be slower than CGLIB. This is mainly due to its approach of reading class files instead of reading loaded classes, as CGLIB does. And, to be fair, the implementation itself is hard to read; if one needs to make changes in the Javassist code, there are many chances to break something.
Javassist suffered from inactivity as well; its move to GitHub circa 2013 seems to have proven useful, as it shows regular commits and pull requests from the community.
These limitations still stood as of version 3.17.1. The version has since been bumped to 3.20.0, yet it seems Javassist may still have issues with Java 8 support.
JiteScript
JiteScript seems like a nicely shaping-up DSL for ASM, based on the latest ASM release (4.0). The code looks clean.
But the project is still in its early days, so the API and behavior can change; plus, the documentation is dire, and updates are scarce, if the project is not abandoned outright.
Proxetta
This is a rather new tool, but it offers by far the most human-friendly API. It allows for different types of proxies, such as subclass proxies (the CGLIB approach), weaving, or delegation.
However, this one is rather rare, and there is little information on whether it works well. There are many corner cases to deal with when working with bytecode.
AspectJ
AspectJ is a very powerful tool for aspect-oriented programming (only). AspectJ manipulates bytecode to achieve its goals, so you might be able to achieve your goals with it. However, this requires manipulation at compile time; Spring has offered weaving at load time via an agent since version 2.5 (currently 4.1.x).
CGLIB
A word about CGLIB, which has been updated since this question was asked.
CGLIB is quite fast; that is one of the main reasons why it is still around, along with the fact that CGLIB worked better than almost any alternative until now (2014-2015).
Generally speaking, libraries that allow the rewriting of classes at run time have to avoid loading any types before the corresponding class is rewritten. Therefore, they cannot make use of the Java reflection API, which requires that any type used in reflection is loaded. Instead, they have to read the class files via I/O (which is a performance-breaker). This makes, for example, Javassist or Proxetta significantly slower than CGLIB, which simply reads the methods via the reflection API and overrides them.
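For reference, the CGLIB approach looks roughly like this (SampleService is an invented class):

import net.sf.cglib.proxy.Enhancer;
import net.sf.cglib.proxy.FixedValue;

public class CglibDemo {
    public static class SampleService {
        public String greet() { return "hello"; }
    }

    public static void main(String[] args) {
        Enhancer enhancer = new Enhancer();
        enhancer.setSuperclass(SampleService.class);  // subclass generated at run time
        // every non-final method of the proxy now returns this fixed value
        enhancer.setCallback((FixedValue) () -> "intercepted!");
        SampleService proxy = (SampleService) enhancer.create();
        System.out.println(proxy.greet());            // prints "intercepted!"
    }
}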
However, CGLIB is no longer under active development. There were recent releases, but those changes were seen as insignificant by many, and most people never updated to version 3, since CGLIB introduced some severe bugs in its last releases, which did not really build confidence. Version 3.1 fixed a lot of the woes of version 3.0 (as of version 4.0.3, the Spring Framework repackages version 3.1).
Also, the CGLIB source code is of rather poor quality, such that we do not see new developers joining the CGLIB project. For an impression of CGLIB's activity, see their mailing list.
Note that, following a proposal on the Guice mailing list, CGLIB is now available on GitHub to enable the community to better help the project. This appears to be working (multiple commits and pull requests, CI, an updated Maven setup), yet most concerns still remain.
At this time they are working on version 3.2.0 and focusing effort on Java 8, but so far users who want Java 8 support have to resort to build-time tricks, and progress is very slow.
And CGLIB is still known to be plagued by PermGen memory leaks. Then again, other projects may not have been battle-tested for as many years.
Compile-time annotation processing
This one is not at run time, of course, but it is an important part of the ecosystem, and most code-generation use cases don't need runtime creation.
This started with Java 5, which came with a separate command-line tool to process annotations, apt; starting with Java 6, annotation processing is integrated into the Java compiler.
At one time you were required to pass the processor explicitly; now, with the ServiceLoader approach (just add the file META-INF/services/javax.annotation.processing.Processor to the JAR), the compiler can detect the annotation processor automatically.
This approach to code generation has drawbacks too: it requires a lot of work and an understanding of the Java language, not bytecode. The API is a bit cumbersome, and since one is plugging into the compiler, one must take extreme care to make this code resilient and its error messages user-friendly.
The biggest advantage here is that it avoids another dependency at runtime and may avoid PermGen memory leaks. And one has full control over the generated code.
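A minimal processor skeleton might look like this (the com.example.GenerateBuilder annotation is hypothetical); it is registered via the META-INF/services file mentioned above:

import java.util.Set;
import javax.annotation.processing.AbstractProcessor;
import javax.annotation.processing.RoundEnvironment;
import javax.annotation.processing.SupportedAnnotationTypes;
import javax.annotation.processing.SupportedSourceVersion;
import javax.lang.model.SourceVersion;
import javax.lang.model.element.Element;
import javax.lang.model.element.TypeElement;
import javax.tools.Diagnostic;

@SupportedAnnotationTypes("com.example.GenerateBuilder")
@SupportedSourceVersion(SourceVersion.RELEASE_8)
public class BuilderProcessor extends AbstractProcessor {
    @Override
    public boolean process(Set<? extends TypeElement> annotations, RoundEnvironment roundEnv) {
        for (TypeElement annotation : annotations) {
            for (Element element : roundEnv.getElementsAnnotatedWith(annotation)) {
                // Generate sources with processingEnv.getFiler(); report
                // problems with user-friendly messages, for example:
                processingEnv.getMessager().printMessage(
                        Diagnostic.Kind.NOTE, "processing " + element.getSimpleName());
            }
        }
        return true; // the annotations are claimed; no other processor sees them
    }
}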
Conclusion
In 2002, CGLIB defined a new standard for manipulating bytecode with ease. Many of the tools and methodologies (CI, coverage, TDD, etc.) we have nowadays were not available or not mature at that time. CGLIB managed to stay relevant for more than a decade; that's a pretty decent achievement. It was fast, with an easier API to use than manipulating opcodes directly.
It defined a new standard for code generation, but nowadays it no longer does, because the environment and requirements have changed, and so have the standards and goals.
The JVM has changed and will keep changing in recent and future Java versions (7/8/9/10: invokedynamic, default methods, value types, etc.). ASM upgrades its API and internals regularly to follow these changes, but CGLIB and the others have yet to use them.
While annotation processing is gaining traction, it is not as flexible as runtime generation.
As of 2015, Byte Buddy, while rather new on the scene, offers the most compelling selling points for runtime generation: a decent update rate, and an author with intimate knowledge of Java bytecode internals.
Javassist.
If you need to make proxies, take a look at commons-proxy - it uses both CGLIB and Javassist.
I prefer raw ASM, which I believe is used by CGLIB anyway. It's low-level, but the documentation is brilliant, and once you get used to it you'll be flying.
To answer your second question, you should use code generation when your reflection and dynamic proxies are beginning to feel a bit cobbled together and you need a rock solid solution. In the past I've even added a code generation step into the build process in Eclipse, effectively giving me compile time reporting of anything and everything.
I think it makes more sense to use Javassist instead of CGLIB. For example, Javassist works perfectly with signed JARs, unlike CGLIB. Besides, a project as grand as Hibernate decided to stop using CGLIB in favor of Javassist.
CGLIB was designed and implemented more than ten years ago, in the AOP and ORM era.
Currently I see no reason to use it, and I do not maintain this library anymore (except for bug fixes for my legacy applications).
Actually, all of the CGLIB use cases I have ever seen are anti-patterns in modern programming.
It should be trivial to implement the same functionality via any JVM scripting language, e.g., Groovy.
For a project with modules in Scala and Java (side by side), how to combine scaladoc with javadoc to provide a single view of the documentation for the project?
(This could be with Maven, Ant, or sbt; it's more of a general question.)
Any thoughts and experiences appreciated.
With Scala 2.8's new scaladoc, which will replace the one used with Scala 2.7, the differences will be even more striking. However, there was a request that a function be provided to translate scaladoc into javadoc format, for use by IDEs when displaying help.
If this function becomes available, then something that generates javadocs from scaladocs would be theoretically feasible.
But for any of that to become true, the people who have interest in such a thing would have to speak up at the appropriate fora. And, of course, if they are too small a group, it is likely nothing happens unless they do it for themselves.
What's the advantage of having Scaladoc differ from Javadoc? There is a huge number of tools for Javadoc and almost nothing for Scaladoc. The mainstream IDEs (Eclipse, NetBeans, IDEA; real-world enterprise development, not academic research) know nothing about Scaladoc. It seems like being in Siberia: isolated.
Scaladoc and Javadoc are very different, with different formats. They are just two different animals, and I don't think it makes sense to combine them. So, AFAIK, Maven doesn't offer any support for that (which is not surprising); just generate both of them separately.
I just started exploring Scala in my free time.
I have to say that so far I'm very impressed. Scala sits on top of the JVM, seamlessly integrates with existing Java code and has many features that Java doesn't.
Beyond learning a new language, what's the downside to switching over to Scala?
Well, the downside is that you have to be prepared for Scala to be a bit rough around the edges:
you'll get the odd cryptic Scala compiler internal error
the IDE support isn't quite as good as Java (neither is the debugging support)
there will be breaks to backwards compatibility in future releases (although these will be limited)
You also have to take some risk that Scala as a language will fizzle out.
That said, I don't think you'll look back! My experiences are positive overall; the IDEs are usable, you get used to what the cryptic compiler errors mean, and while your Scala codebase is small, a backwards-compatibility break is not a major hassle.
It's worth it for Option, the monad functionality of the collections, closures, the actor model, extractors, covariant types, etc. It's an awesome language.
It's also of great personal benefit to be able to approach problems from a different angle, something that the above constructs allow and encourage.
Some of the downsides of Scala are not related at all to the relative youth of the language. After all, Scala is about 5 years old, and Java was very different 5 years into its own lifespan.
In particular, because Scala does not have the backing of an enterprise which considers it a strategic priority, the support resources for it are rather lacking. For example:
Lack of extensive tutorials
Inferior quality of the documentation
Non-existing localization of documentation
Native libraries (Scala uses Java or .NET libraries as the base for its own)
Another important difference is due to how Sun saw Java and how EPFL sees Scala. Sun saw Java as a product to get enterprise customers. EPFL sees Scala as a language intended to be better than existing ones in some particular respects (OO-functional integration and type-system design, mostly).
As a consequence, where Sun made the JVM glacially stable and Java fully backward compatible, with very slow deprecation and removal of features (has anything actually been removed?), JAR files generated with one version of Scala won't work at all with other versions (a serious problem for third-party libraries), and the language is constantly gaining new features as well as actually removing deprecated ones, and so is Scala's library. The revision history for Scala 2.x, which I think is barely 3 years old, is impressive.
Finally, because of all of the above, third-party support for Scala is incipient. It's important to note, though, that JetBrains, which makes money from selling the IntelliJ IDEA IDE, has supported Scala for quite some time and keeps improving its support. That means, to me, that there is demand for third-party support, and that support is bound to increase.
I point to the book situation. One year ago there was no Scala book on the market. Right now there are two or three introductory Scala books on the market, about the same number of books should be out before the end of the year, and there is a book about a very important web framework based on Scala, Lift.
I bet we'll see a book about ESME not too far in the future, as well as books about Scala and concurrency. The publishing market has apparently reached the tipping point. Once that happens, enterprises will follow.
I was unshackled from the J2EE leash last year and wanted to do something new after 12 years of Java in the enterprise, building very large systems for some of the world's biggest companies.
I had tried Ruby on Rails in the past. After building a few sample apps I did not like the feel of it or the fact that I would have to write a ton of unit tests to cover stuff that is normally done by a compiler.
Groovy on Grails was my next port of call. I have to say I do like it, but it suffers from the same dynamic-typing problems as RoR. Don't get me wrong, I am not putting Grails down, as it is an excellent framework and I will still use it. Each has its own place, IMO.
I then jumped on Scala and have now built a hybrid application based on Scala and Spring MVC. At first, working with Scala is difficult, but it gets easier and more productive the more time you put into it. I've reached a tipping point where I now want to invest time in Lift as well.
The combination of "Programming in Scala" and David Pollak's "Beginning Scala" books is good for learning the language, the latter with a less academic bent.
Scala is still young and has some way to go. I think it has a bright future and I see momentum is already picking up. Recently one of the creators of the Groovy language said in a blog post he would never have bothered designing Groovy if Scala had been around at the time.
I think some more work on better Java API integration/wrapping will give Scala the boost it needs to win more followers. The basic integration is there already, but I think it could be polished a bit more.
Yes, IDE support is there, but it is basic at the moment. The powerful refactoring support of IntelliJ is not there yet, and I miss that a lot. The combination of compiler and IDE support with a mix of other plugins is not mature yet. I sometimes get very weird internal compiler errors caused by how Scala sits with JDO enhancement for the Google App Engine. However, these are little things that can be easily fixed. Early adoption of new technologies and languages always comes with a little pain, but this bit of pain can produce great pleasure in the future.
If I look at the capabilities of Scala compared to early Java, it's miles ahead. When I moved from C++ to Java, the JVM was not ready yet in terms of scalability. There used to be lots of weird crash-and-burn JVM core dumps on various OSes. All of this has now been fixed in Java, and the JVM is rock solid. Scala runs on the JVM, so it has been given a massive head start on native platform integration. It's standing on the shoulders of giants!
After years of building and supporting enterprise applications, my vote is for a language where a compiler can catch most of the non-functional bugs before unit tests are even built. I love the type checking mixed with the power of functional programming. I like the fact that I am doing OO++.
I think the development community will decide whether Scala is the future or not. The downside of adopting Scala now would be if it did not pick up momentum and adoption. It would be very difficult to maintain a Scala code base with very few Scala developers around. However, I watched Java come from the skunk works into the enterprise to replace C++, and it was all pushed from the bottom up by the developer community. Time will tell for Scala, but currently it has my vote.
Beyond learning a new language, what's the downside to switching over to Scala?
Thinking, thinking, thinking..... nope, there is none :-)
I'll tell you my little personal experience, and how I found that it wasn't so easy to integrate Scala with existing Java libraries:
I wanted to get started with something easy, and since I thought Scala was very well suited for scientific computation, I wanted to write a little wrapper around JAMA (the Java Matrix library)... My initial approach was to extend the Matrix type with a Scala class, overload the arithmetic operators, and call the native Java methods, but:
The Matrix class doesn't provide a default constructor (without arguments)
The Scala class needs one primary constructor
I thought one good primary constructor could be the one accepting an Array[Array[Double]] (the first thing that sucks: that syntax is much more verbose and harder to read than Double[][])
As far as I know from reading the manuals, the parameters of the primary constructor are also implicitly fields of the class, so I would end up with one Array[Array[Double]] in the Scala subclass and another double[][] in the Java superclass, which is pretty redundant.
I think I could have used an empty primary constructor that initialized the superclass with some default values (for example, [[0]]), or just made an adapter class that used Jama.Matrix as a delegate, but if a language is supposed to be elegant and seamlessly integrated with another, that kind of thing shouldn't happen.
Those are my two cents.
I don't think there are any downsides. Actually, learning a new language is very helpful for broadening your programming knowledge. From Scala you might gain such things as generic classes, variance annotations, upper and lower type bounds, inner classes and abstract types as object members, compound types, explicitly typed self-references, views, and polymorphic methods.
It consistently breaks backwards compatibility.
The community size is small.
IDE support isn't there yet.
Otherwise it's fine.
It is just a young language, it will get there eventually.
Great for hobbyists, not ready for enterprise.
The two, by which I mean four, biggest downsides I'm seeing are:
Many developers in the professional community aren't trained in functional languages and are unwilling to learn; they won't even give it a go to understand why it's a better approach. This means you'll always be fighting an uphill battle for adoption until it's mandated at the corporate level.
RDBMS integration is still a bit spotty. There are plenty of solutions, but nothing that really sticks out as becoming a standard. For me, though, this might be an advantage rather than a disadvantage. JPA2 is a mess and causes more issues than it solves. Hibernate criteria queries aren't much better.
IDE support is still lagging, but mostly in the area of debugging at this point. Code inspection is doing pretty good (at least in IntelliJ).
You'll never want to write another line of Java again! Likely you'll want to punch a wall or break something when forced back into the awkward syntax of Java.
The answers here are somewhat dated as of 2022, so I thought I would contribute an update. I have been working in a tech company that started using Scala at about the same time this question was originally asked. I recently blogged about lessons learned in that shop when trying to teach Java developers how to work with a code base written in idiomatic Scala, so this topic is top-of-mind for me right now.
Just about all of the maturity issues in tooling, educational collateral, and integration are gone. Version 2 of Scala is just as rock-solid a programming language as version 8 or 11 of Java.
Some of the most obvious advantages of switching to Scala are no longer relevant, because those language features have been added to newer versions of Java.
Where Scala continues to outshine Java is in modularity and readability, which allow idiomatic Scala to handle code complexity better than Java.
That improved code scalability comes at the cost of a higher learning curve, which makes Scala less attractive to junior developers, who tend to make up the majority of your engineering group.