Java has strong naming conventions for class, method, field and variable names.
For instance:
Class names should start with an upper case character
Methods, fields, and variable names should start with a lower case character
The JDK has only a few exceptions to these two rules.
But a convention is not a syntax rule, and therefore classes like the following compile without errors (C# programmers would be happy anyway):
public class string {
private char[] C;
public int Length() { return C.length; }
}
But the gap between a convention and a syntax rule inevitably leads to violations of the convention by beginners, which then lead to lengthy explanations about naming conventions.
If the most fundamental naming conventions (like the ones cited above) were part of the syntax, the Java compiler would enforce them and automatically educate those beginners.
So here is the question:
From the Java language designers' point of view: is there any good reason to leave a gap between the syntax and naming conventions which should never be violated? Are there any meaningful use cases for names (of classes, methods, fields, variables) which violate the convention but make sense despite it?
The conventions were written long after the language was defined, so they could only be retrofitted without breaking compatibility. The problem with conventions is that they involve taste: spaces or tabs, using $ in a variable name, starting field names with m_ or _, even whether to add get and set to getters and setters (which I prefer not to do).
Java actually allows you to do things which would make a C programmer feel queasy. Why they allowed this I don't know, but I assume they didn't want to limit adoption by imposing more rules than were really needed.
Note: the following piece of Java code is valid and prints every character which can appear inside an identifier but not start one, including some characters which probably shouldn't be allowed, but are.
for (char ch = 0; ch < Character.MAX_VALUE; ch++)
if (Character.isJavaIdentifierPart(ch) && !Character.isJavaIdentifierStart(ch))
System.out.printf("%04x <%s>%n", (int) ch, "" + ch);
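One concrete character that loop turns up (the exact set depends on the JDK's Unicode tables, so treat this as a sketch): U+00AD, the invisible SOFT HYPHEN, is a valid identifier part, which makes these two different variables:
public class Sneaky {
    public static void main(String[] args) {
        int price = 1;
        int pri\u00ADce = 2; // the identifier contains an invisible soft hyphen
        System.out.println(price + " " + pri\u00ADce); // prints "1 2"
    }
}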
Most IDEs will help beginners write code which follows conventions. The only problem with this is that most developers don't know how to make full use of their IDEs. ;)
Well, personally I think the reason for not enforcing conventions is simply that it's technically not necessary. Counterexample: in Java you have to name the class file exactly like the class, because the Java class loader could not load it otherwise. Building convention checks into the compiler would bloat the source code, and, as the name says, a compiler converts source code into machine code / byte code / whatever by parsing the source files and checking the syntax. Checking whether a class name starts with an uppercase or lowercase letter is simply not the compiler's job.
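As an aside, that file-name requirement is one check the compiler does perform, but only because the class loader depends on it: the following must be saved as Greeter.java or javac rejects it.
public class Greeter { // file must be named Greeter.java
    public static void main(String[] args) {
        System.out.println("hello");
    }
}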
And of course a programming language gives you a certain degree of freedom: by not enforcing conventions, it lets you style your code however you like, as long as it matches the syntax rules of the language.
Well, if I use Chinese characters in identifiers, there is no upper or lower case for them. :) So the convention cannot always be enforced.
Of course, it's pretty safe to bet that 99.9% of Java code is in English. And you may also argue that the enforcement could be limited to certain character sets only.
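A minimal sketch of that point (the identifier is my own choice): CJK characters are legal in Java identifiers and simply have no case, so a case-based rule could not apply to them.
public class Caseless {
    public static void main(String[] args) {
        int 价格 = 42; // "price" in Chinese: legal, and neither upper nor lower case
        System.out.println(价格);
    }
}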
I agree that this naming convention has become critical, and it should be strictly followed. Java source code that does not follow the convention is practically incomprehensible.
From the Java language designers' point of view: is there any good reason to leave a gap between the syntax and naming conventions which should never be violated?
Yes. "Never" is a strong word.
The language has requirements and recommendations. The language specification for identifiers is a requirement. But those strong naming conventions are recommendations.
Having some definition of identifiers is necessary for the compiler to recognize them as tokens. Leaving that definition looser than the norm gives us a little freedom for cases outside the norm.
Are there any meaningful use cases for names (of classes, methods, fields, variables) which violate the convention but make sense despite it?
Yes. Java programs can interact with other languages, which have different conventions.
Code conversion
Sometimes when hand-converting code from another language, leaving the original case is easier and more understandable.
Code generation
Sometimes we generate code from a specification that was not written for Java. For example, we might generate code from a WSDL file, or generate wrappers using SWIG.
Code wrappers
Some Java methods can wrap external functions. For example, JNA allows defining interfaces with a native function's name and signature.
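A hedged sketch of what that looks like (assuming the JNA library, version 5.x, and a Linux libc; the particular function is my choice):
import com.sun.jna.Library;
import com.sun.jna.Native;

public interface CLib extends Library {
    CLib INSTANCE = Native.load("c", CLib.class);

    // The method name must match the native symbol exactly, so C's
    // underscore style overrides Java's camelCase convention here.
    int sched_getcpu();
}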
JVM languages
Multiple languages can run atop the Java virtual machine. These other languages have their own conventions. It's possible to mix languages in a single program. Stepping outside the convention can be necessary to interact.
I guess this is why these are only conventions and not rules... I don't see why they should be enforced; there are many other conventions which are not enforced (e.g. putting constructors before all other methods, putting public methods before private methods, and many more), and it would be too strict (in my mind at least) to enforce them all.
I can think of one case where you don't want this convention to be enforced: it's also common to write constants in upper case, which is, again, just a convention.
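For instance, a minimal illustration (names are mine):
public class Limits {
    // Constants conventionally use ALL_CAPS with underscores, deliberately
    // breaking the "fields start with a lower case letter" convention.
    public static final int MAX_RETRIES = 3;
    public static final long TIMEOUT_MILLIS = 5_000L;
}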
In any case, I think most IDEs can be configured to give a warning when such conventions are violated. That can help, I guess.
Is there any specific reason why hashCode is the only public method of the Object class which does not follow the Java Code Conventions recommended by Sun and later by Oracle? I mean, they could have named it toHashCode() or getHashCode() or createHashCode(), right?
EDIT:
I am talking about the Code Conventions for the Java Programming Language (oracle.com/technetwork/java/codeconvtoc-136057.html). These conventions are referenced in Oracle's book 'OCA Java SE 7 Programmer I Study Guide (Exam 1Z0-803) (Oracle Press) - Liguori, Robert'.
In the document we can read as follows:
"Methods should be verbs, in mixed case with
the first letter lowercase, with the first letter of
each internal word capitalized.".
AFAIK hashcode is not a verb.
I think it does follow conventions. It all depends on which convention you're talking about.
The OP is specifically interested in the Code Conventions for the Java Programming Language, and the conventions for method names are covered in chapter 9. First, it should be noted that this document is no longer maintained, and contains a large caveat:
... the information itself may no longer be valid. The last revision to this document was made on April 20, 1999
Of course, the hashCode() method predates 1999 so we can refer to the document to see if the method in question violated the conventions at the time. The document states:
Methods should be verbs, in mixed case with the first letter lowercase, with the first letter of each internal word capitalized.
With regard to hashCode(), there is no disputing that it is mixed case with the first letter lowercase. However, the OP seems to be of the opinion that it violates the convention that methods should be verbs; the implied assumption is that "hash code" or "hashcode" is a noun.
Before passing judgment, let's look at the third part of the convention: the first letter of each internal word [is] capitalized. If you briefly assume that the original authors followed these conventions, then the capitalization of hashCode() indicates that its authors considered "hash" and "code" to be separate words. If you treat them separately, the word "hash" is a verb in English. With that interpretation, all parts of the convention are met.
Admittedly, the term "hash code" has become common jargon among (at least) Java developers and is often treated as a noun -- probably in no small part due to the name of this method in the first place. (Chicken v. egg?) But only the original authors can speak to their intent.
In my original answer, I used the JavaBeans conventions as an example:
A bean is a Java class with method names that follow the JavaBeans guidelines. A bean builder tool uses introspection to examine the bean class. Based on this inspection, the bean builder tool can figure out the bean's properties, methods, and events.
In JavaBeans, properties are accessed via a "getter" method, i.e. the "foo" property is read by calling getFoo(). However, the hashCode() method is a technical requirement imposed by the Java language on all subclasses of Object but is not generally a property of the "business logic" that the object represents. If you write a class to represent fruits, it would have properties like getColor(), isSkinEdible(), etc. If not for Java's technical requirement, you would probably not consider writing a method like getHashCode() because... have you ever found a real, live banana with a hash code?
If hashCode() were named getHashCode(), then, for this convention, JavaBeans would have to special-case it in order to ignore it. Otherwise it would always inspect that "property", which commonly has little use in the main logic of your program.
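A runnable sketch of that distinction (the Fruit class and its property are mine): the bean introspector reports getColor() as a property, never sees hashCode(), and does report getClass() as the "class" property that tools must special-case.
import java.beans.BeanInfo;
import java.beans.Introspector;
import java.beans.PropertyDescriptor;

public class Fruit {
    public String getColor() { return "yellow"; }

    public static void main(String[] args) throws Exception {
        BeanInfo info = Introspector.getBeanInfo(Fruit.class);
        for (PropertyDescriptor pd : info.getPropertyDescriptors()) {
            System.out.println(pd.getName()); // prints "class" and "color"
        }
    }
}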
I can't cover all possible conventions in this answer, but I have these thoughts on the other examples given in the question:
createHashCode() - I would not use this, even by convention, because hashCode() returns an int (a primitive) and they're not created like reference types (Objects) are. I think this would be the wrong verb.
toHashCode() - To me, this represents a conversion. But that's not what hashCode() does. If foo.hashCode() returns 42, I should have no expectation that 42 is in any way a representation of foo. It was computed from information about the foo instance, but there's no other real correlation. Plenty of other instances (of many classes) could return 42, so it's not a stand-in or analogue for any of them.
The hashCode() method was specified before there were conventions.
Also, the convention is for naming Java Bean "properties," to facilitate wiring together beans in a design tool and assist in other forms of automation. It's unlikely that you'd need the hash code to be exposed as a property in these situations. In fact, because getClass() does follow Java Bean naming conventions, it requires a special case in many such tools to exclude it from the list of object properties.
Is there a practical or historical reason why languages allow the most egregious naming-convention taboos? The two most obvious examples are uppercase function names and lowercase class names, which I often see in Stack Overflow newbie questions.
There is no style-justification that I know of where you can do these things, so why are they even allowed to compile? At the moment, my theories are
It was not such a taboo when the language was built,
It would make some important edge cases impossible, or
It's not the language's job to enforce good style.
I can find nothing on this topic (some links are below).
There are some conventions, such as underscores beginning variable names, or Hungarian notation (the latter of which I have been personally disabused of in comments) that are not overwhelmingly accepted, but are less divisive.
I'm asking this as a Java programmer, but would also be interested in answers from other languages' perspectives.
Some links:
http://en.wikipedia.org/wiki/Naming_convention_(programming)#Java
http://docs.oracle.com/javase/tutorial/java/javaOO/methods.html
http://en.wikibooks.org/wiki/Java_Programming/History
How important are naming conventions for getters in Java?
Coding style is like writing style. If you write in a style that is diFFicult TO READ And does not read very well your mINd hAs GReat diFFICULTies actUally understanding what you are reading.
If, however, like normal written text, it is laid out in a form that matches what your mind expects, then it is clear and easy to understand.
On the other hand, if the language actually FORCED you to write everything using EXACTLY the right syntax then not only would it make coding slow and awkward but it would restrict your expressiveness.
Many years ago I came across a language that allowed you to add strange symbols to variable names. Users were allowed to do things like:
var a=b = a = b;
var c<d = c > d;
if ( a=b & c<d ) ...
i.e. a=b was a perfectly acceptable variable name, and so was c<d. As I am sure you would agree, this led to many mistakes of interpretation. Even if a new language allowed that, I would not use it. Coding standards are for consistency and for helping the human mind understand; syntax is for helping the computer understand.
Depending on the language designers' intent, some languages are more opinionated than others when it comes to implementation and how the designers think things should be done.
The easiest example I can think of right now is Go, which has unit testing (go test) and code formatting (gofmt) built in. The designers are of the opinion that things should be done a certain way, and they provide you the tools to do it.
Other languages, like Scala, do nothing of the sort: the language designers were very unopinionated in their implementation and supply you the tools to accomplish any given task in 10 different ways.
This isn't to say that some languages are built under tyrannical rule and others are extremely loose with their requirements. It's merely a choice made by the language designers which we end up having to live with.
Actually this is a completely theoretical question. But it's interesting why the Java specification doesn't allow uppercase letters in package names, forcing us to write something like this:
com.mycompany.projname.core.remotefilesystemsynchronization.*
instead of
com.myCompanyName.projName.core.remoteFileSystemSynchronization.*
Directly from the Oracle docs:
Package names are written in all lower case to avoid conflict with the names of classes or interfaces.
But it's interesting why the Java specification doesn't allow uppercase letters in package names, forcing us to write something like this:
The specification allows it just fine. It's only a convention to use all-lower-case.
As gtgaxiola says, this does avoid conflict with type names... in .NET naming conventions this does happen, leading to advice that you do not name a class the same as its namespace. Of course, using camelCase packages would avoid the collision entirely.
I suspect reality is that it wasn't thoroughly considered when creating the package naming conventions. Personally I rarely find it to be a problem - if I end up seeing a package with one element of "remotefilesystemsynchronization" then the capitalization isn't the main thing I'd be concerned about :)
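To make the "the specification allows it" point concrete, this compiles (hypothetical names; on disk the directory names must match the capitalization):
package com.MyCompany.remoteFileSystemSync; // legal, merely unconventional

public class Demo {
    public static void main(String[] args) {
        System.out.println(Demo.class.getPackage().getName());
    }
}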
It's just another convention. One may ask why a class name always has to start with a capital letter, or why a method name starts with a lowercase letter and is then camel-cased. Java doesn't force you to write it that way. It's just that a shared set of underlying rules helps the huge community of Java developers write code that is easily understandable by the majority, who follow the conventions.
No definite reason can be given as such. It's just what felt good and was already practiced by the majority of programmers when the convention was written. I don't mean it was whimsical work: the guidelines were made so that just by looking at an identifier we should be able to tell whether it is a class, a method, or a package, and by following the conventions that has been achieved for a long time now.
So I have seen a lot of different coding styles, but I'm only going to talk about two big ones. I use a style where I just name a variable after its class name when it's used in a general sense, like this:
String str = "This is some text";
But over at Java Practices, I see a style where they put an 'I' in front of interface names, or an 'f' or 'a' in front of object names. Take this snippet from "Don't subclass JDialog or JFrame":
/**
Constructor.
<P>Called when adding a new {@link Movie}.
*/
MovieView(JFrame aParent) {
fEdit = Edit.ADD;
buildGui(aParent, "Add Movie");
fStandardDialog.display();
}
Why do programmers code in this style? Do a lot of people use it? And also, do professional programmers use this style?
Thanks in advance :)
This my personal opinion.
I prefer not to use prefixes on interfaces (or anything else, for that matter). I just prefer to call a thing what it is. Interfaces are meant to represent an object (or part of it) without implying anything about its actual implementation.
Say you have a Car interface, and AudiA4 could be an implementation of that Car. If you just bought a new Audi A4, you say "I bought a new AudiA4" to those who you think care about the kind of car you bought. To others, you can say "I bought a new Car". Certainly, you never say "I bought a new IAudiA4" or "a new ICar".
The JFrame naming came about because it's a Swing frame, and Swing came after AWT (the original Java windowing toolkit, which already had a Frame class). Since both AWT and Swing were available at the same time, they used the 'J' prefix to demarcate the toolkits (note that JFrame extends Frame, btw). They could have called it SwingFrame, but the 'J' prefix was apparently a good choice to represent the Swing package. So basically this prefix is just a naming choice, not a convention similar to the 'I' for interfaces (or the Impl suffix for implementations you sometimes see as well).
My point is that you should always name your classes and interfaces according to exactly what they represent. No more, no less. There is no point in having a CarImpl class. Who cares that it's an implementation? Which implementation is it? Why does it need its own class? What more do I get when I use a CarImpl? What happens when I make a second implementation: do I call it CarImpl2? All this is very constraining and doesn't bring much value.
Call it what it is. That's the only rule I'd set forth.
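A minimal sketch of that naming style (names are mine):
// The interface is the plain noun; each implementation says what it
// concretely is. No ICar, no CarImpl.
public interface Car {
    void drive();
}

class AudiA4 implements Car {
    @Override
    public void drive() {
        System.out.println("driving an Audi A4");
    }
}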
All this being said, the Eclipse project, amongst many others, does indeed use the I-for-interface notation (see their wiki). But it's their choice. I've seen professionals use it as well. I don't like it, but generally speaking, I respect the team's naming convention.
There is a book about such things - Code Complete by Steve McConnell
I might be wrong, but the only universal convention I've seen when naming Java variables is camelCase notation; that's regarding the format of the name.
As for the name itself, I've always found useful to name the variables according to what they actually are. In your String example, although you mention this would be in a general purpose variable, I would still give it a more meaningful name, like:
String message = "This is some text";
Or:
String msg = "This is some text";
Some of the Java libraries I've seen source code from tend to be quite verbose when naming variables; others just use single-letter names when the variable is used in a reduced context:
public Rectangle setLocation(Point p) {
return setLocation(p.x(), p.y());
}
I think the main goal when naming variables (or anything else for that matter) is always to communicate in the best way possible the intent of what you were trying to do.
Code styles help make it easier for developers to read and understand each other's code. Java conventions prescribe the use of short and descriptive identifiers, but unfortunately short and descriptive cannot always be achieved together, so you may have to compromise shortness for clarity:
atmosPres - still clear, but short
atmosphericPressure - this can't be mistaken
atm - because everyone just knows ATM, right?
ap - WTF?
I first encountered the practice of prefixing variable names with a three-letter type identifier while developing programs in C# - it helps the reader know what data type is contained in a variable without having to look for its declaration (due to short memory or maybe laziness?). Interfaces are also prefixed with I, e.g. IList, to distinguish them from other types (and for what purpose, I just dunno).
For me, the worst code conventions are in C++ (if indeed there are any at all) - there's a mix of case styles for data types and variables, conflicting method and function naming styles, and endless cryptic abbreviations, which all make it hard for non-regular C++ coders to read and understand C++ code.
What you're describing is sometimes referred to as "Hungarian notation", though it's not "Hungarian" in the truest sense of the term.
Basically, the idea is to differentiate between different classes of variables -- instance variables, local variables, parameters, et al. This serves two basic purposes:
It helps avoid name collisions, where, say, there might naturally (using "descriptive" variable naming) be an instance variable ralphsLeftFoot and a local variable ralphsLeftFoot. Using a prefix allows the two to co-exist, and, especially in languages where the local might (without a warning message) "hide" the instance variable, prevents unintended changes in semantics from such collisions.
It makes the scope of variables obvious, so that, during maintenance, one does not accidentally assume that a local variable has instance scope or vice-versa.
Is this approach worthwhile? Many developers use a subset of the scheme, apparently to good effect. For instance, many Objective-C developers will name the instance variable behind a "property" with a leading "_" character, to clearly differentiate between the two and to avoid accidentally using the instance variable when the property was intended.
Likewise, many developers in a number of languages will prefix instance variables with a letter (often "m") to differentiate them from "normal" local/parameter variables.
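A small sketch of that prefix style (the "m" prefix is one common choice, not a universal rule):
public class Ralph {
    private int mLeftFootSize; // "m" marks instance scope

    public void setLeftFootSize(int leftFootSize) { // plain parameter name
        // The field and the parameter can never collide, so no "this."
        // is needed and accidental shadowing is impossible.
        mLeftFootSize = leftFootSize;
    }

    public int getLeftFootSize() {
        return mLeftFootSize;
    }
}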
What's probably most important is to pick a style that you (and your team) likes and stick with it. If the team likes the prefixes then use the prefixes. If the team prefers something else, stick with that. Of course, changing preferences, when a better choice is "revealed" to you, is OK, but don't switch back and forth willy-nilly.
So I have seen a lot of different coding styles, but I'm only going to talk about two big ones. I use a style where I just name a variable after its class name when it's used in a general sense, like this:
String str = "This is some text";
That is awful. Imagine someone reading your code, trying to understand what it is doing, and coming across a variable named str. It conveys nothing about your intentions to the person who has to read the code.
Conventions are used by and for people to improve readability, and thus the overall quality of software. Without a convention, any project that has more than one developer will suffer from varying styles that will only hurt the readability of the code. If you want to know what professionals do, look around on the internet for various conventions.
I am a Java professional, and now I would like to move to Ruby. Are there any similarities between the two languages? And what are the major differences, given that both are object oriented?
What about these:
Similarities
As with Java, in Ruby,...
Memory is managed for you via a garbage collector.
Objects are strongly typed.
There are public, private, and protected methods.
There are embedded doc tools (Ruby’s is called RDoc). The docs generated by rdoc look very similar to those generated by javadoc.
Differences
Unlike Java, in Ruby,...
You don’t need to compile your code. You just run it directly.
All member variables are private. From the outside, you access everything via methods.
Everything is an object, including numbers like 2 and 3.14159.
There’s no static type checking.
Variable names are just labels. They don’t have a type associated with them.
There are no type declarations. You just assign to new variable names as-needed and they just “spring up” (i.e. a = [1,2,3] rather than int[] a = {1,2,3};).
There’s no casting. Just call the methods.
The constructor is always named “initialize” instead of the name of the class.
You have "mixins" instead of interfaces.
== and equals() are handled differently in Ruby. Use == when you want to test equivalence in Ruby (equals() is Java). Use equal?() when you want to know if two objects are the same (== in Java).
Taken from: To Ruby From Java
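To illustrate that last point from the Java side (a small sketch; the Ruby names are the ones quoted above):
public class Equality {
    public static void main(String[] args) {
        String a = new String("hi");
        String b = new String("hi");
        System.out.println(a == b);      // false: identity, Ruby's equal?()
        System.out.println(a.equals(b)); // true: equivalence, Ruby's ==
    }
}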
Besides being object oriented, there are very few similarities between the two languages. Java is a statically typed, compiled language, while Ruby is a dynamically typed, interpreted language. The syntax is also very different: Java uses the C convention of terminating statements with a semicolon, while Ruby uses the newline character.
While Java does have some built-in support for iterators, Ruby's use of iterators is pervasive throughout the language.
This obviously only touches upon a comparison of the two. This is a decent write-up on the comparisons
You're asking a very broad question. I like to compare programming languages similarly to how I'd compare spoken languages, so in this case: what are the major differences and similarities between Spanish and Italian?
If you ask that question, you're going to either get very varied or very long answers. Explaining differences between languages are difficult at best, as it's hard to pinpoint key factors.
This is proved by the responses here so far, as well as the links other people have linked to. They're either varied or very long.
Going back to the Spanish vs. Italian analogy, I could say that the languages are similar but still very different. If you (only) know one of them, you might be able to understand what's going on in the other, though you would probably not be able to use it very well. Knowing one definitely makes it easier for you to learn the other. One is used by a larger number of people, so you might benefit more from learning it.
All of the previous paragraph can be applied to Java vs. Ruby as well. Saying that both are object oriented is like saying Spanish and Italian are both members of the Romance language family.
Of course, all of this is irrelevant. Most probably, your underlying question is whether it's "worth" learning Ruby instead of or in addition to Java. Unfortunately, there's no easy answer to that either. You have to weigh advantages and disadvantages with each language, such as popularity, demand and career opportunities. And then there's naturally the question of which language you prefer. You might like one language more than the other simply because it has a nicer syntax. (Similarly, you may prefer Italian because you think it's more "beautiful" than Spanish, even though the latter is more widespread and you'd have more "use" for it.)
Personally, I prefer Ruby. For many different reasons. Just like I prefer Italian.
Object orientation in Ruby is actually very different compared to Java.
In Ruby, everything is an object, including what would be a primitive type in Java, like an integer.
In Ruby, new is a method instead of a keyword. So to instantiate an object, you would do this in Ruby:
animal = Animal.new
Ruby is strongly typed but also dynamic. Because of its dynamism, Ruby enables you to do duck typing.
Ruby's answer to multiple inheritance is the mixin (a language feature), whereas in Java you would implement multiple interfaces.
Ruby has blocks, where in Java you would use an anonymous class to achieve the same thing (a sketch below). But IMHO Ruby's blocks are more powerful.
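A sketch of that comparison on the Java side (pre-lambda style; the sort is my example): where Ruby would pass a block, Java passes an anonymous class.
import java.util.Arrays;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

public class BlockVsAnonymousClass {
    public static void main(String[] args) {
        List<String> words = Arrays.asList("pear", "fig", "apple");
        // Ruby: words.sort_by! { |w| w.length }
        Collections.sort(words, new Comparator<String>() {
            @Override
            public int compare(String a, String b) {
                return a.length() - b.length();
            }
        });
        System.out.println(words); // [fig, pear, apple]
    }
}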
So I would say there are not many similarities between Java and Ruby. To this day I find hardly any similarities between the two, as Ruby has gone its own path, unlike many other languages that derive from the C family.