Why doesn't Kotlin have explicit typing?

I am curious about this: why would the Kotlin designers think it was a good idea to drop explicit typing in Kotlin?
To me, explicit typing is not a "pain" to write in Java (or any other strongly typed language): every IDE can assist me to automatically type my variable.
It adds great understanding of the code (that's why I don't like weakly typed languages; I have no idea what kind of variables I'm dealing with).
Also, and this is my main issue with it, it makes the code more bug-prone.
Example :
Java : Easily identified as a String, all good
String myPrice = someRepository.getPrice(); // Returns a String
myTextView.setText(myPrice);
Java : Easily identified as an int, code smell with setText()
int myPrice = someRepository.getPrice(); // Returns an int, code smell !!
myTextView.setText(String.valueOf(myPrice)); // Problem avoided
Kotlin : ????
val myPrice = someRepository.getPrice() // What does it return?
myTextView.setText(myPrice) // Possible hidden bug with setText(@StringRes int) instead of setText(String)!
The lack of explicit typing is the biggest drawback in Kotlin, imo. I am trying to understand this design choice.
I'm not really looking for "patches" to fix the example / avoid the presented code smell; I am trying to understand the main reason behind the removal of explicit typing. It has to be more than "less typing / easier to read". It only removes a couple of characters (one still has to write val/var), and explicit typing in Kotlin adds some characters anyway...
Static typing without explicit typing is, to me, the worst-case scenario for hidden bugs / spaghetti bugs: if one class (let's say "Repository") changes its return type (from String to int, for example), then with explicit typing, compilation would fail at the class calling "Repository". Without explicit typing, compilation may not fail, and the wrong type of variable may "travel" through classes and change their behavior because of its type. This is dangerous and undetected.
The fix is easy: explicitly type the variable. But this is Kotlin we're speaking of, a language made for code golfers: people won't explicitly type their variables, as it takes even more time to do so in Kotlin than in turtle-Java. Yikes!

First of all: Kotlin has static typing. The compiler knows exactly which type goes or comes where. And you are always free to write that down, like
fun sum(a: Int, b: Int): Int {
    val c: Int = 1
    return a + b + c
}
The point is: you can write a lot of code that simply relies on the fact that anything you do is statically typed, and type checked, too!
Do you really need to know whether you are adding two doubles or two ints or two longs, when all you "care" to look at is a + b?
In the end, this is about balancing different requirements. Leaving out the type can be helpful: less code that needs to be read and understood by human readers.
Of course: if you write code so that people constantly turn to their IDE for the IDE to tell them the actual, inferred type of something, then you turned a helpful feature into a problem!
The real answer here is: it depends. I have been in many code reviews where C++ people discussed the use of the auto keyword. There were a lot of situations, where using auto did make the code much easier to read and understand, because you could focus on "what happens with the variable", instead of looking 1 minute at its declaration, trying to understand its type. But there were also occasional examples where auto achieved the exact opposite, and a "fully typed" declaration was easier to follow.
And given the comment by the OP: you should know what you are doing in the first place. Meaning: when you write code, you don't just invoke "any" method that you find on some object. You call a method because you have to. You better know what it does and what it returns to you. I agree that when someone is in a rush and quickly var-assigns a value and passes it along, that might lead to errors. But for each situation where var helps create a bug, there might be 10 incidents where it helps write easier-to-read code. As said: life is about balancing.
Finally: languages shouldn't add features for no reason. And the Kotlin people are carefully balancing the features they add. They didn't add type inference because C++ has it. They added it because they carefully researched other languages and found it useful enough to be part of the language. Any language feature can be misused. It is always up to the programmer to write code that is easy to read and understand. And when your methods have unclear signatures, so that "reading" their names alone doesn't tell you what is going on, then blame that on the method name, not on type inference!
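Java itself later adopted the same feature as var (JEP 286, Java 10). A minimal sketch, with made-up names, showing that the inference is purely static — the compiler still knows and enforces the exact type:

```java
import java.util.ArrayList;

public class VarDemo {
    static String upperFirst() {
        // The compiler infers ArrayList<String>; nothing here is dynamic.
        var words = new ArrayList<String>();
        words.add("hello");
        // words.add(42);   // would not compile: int is not a String
        return words.get(0).toUpperCase(); // String methods are available
    }

    public static void main(String[] args) {
        System.out.println(upperFirst()); // HELLO
    }
}
```

The commented-out line is the point: leaving the type implicit does not weaken any compile-time check.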

To quote Java's Local-Variable Type Inference JEP:
In a call chain like:
int maxWeight = blocks.stream()
                      .filter(b -> b.getColor() == BLUE)
                      .mapToInt(Block::getWeight)
                      .max();
no one is bothered (or even notices) that the intermediate types Stream<Block> and IntStream, as well as the type of the lambda formal b, do not appear explicitly in the source code.
Are you bothered about it?
if one class (let's say "Repository") changes its return type (from String to int for example). With explicit typing, compilation would fail at the class calling "Repository".
If you have overloads like setText in your example, then
Repository repository = ...;
myTextView.setText(repository.getFormerlyStringNowInt());
won't fail without type inference either. To make it fail, your code standard has to require every operation's result to be assigned to a local variable, as in
Stream<Block> stream1 = blocks.stream();
Predicate<Block> pred = b -> {
    Color c = b.getColor();
    return c == BLUE;
};
Stream<Block> stream2 = stream1.filter(pred);
ToIntFunction<Block> getWeight = Block::getWeight;
IntStream stream3 = stream2.mapToInt(getWeight);
OptionalInt maxOpt = stream3.max();
int maxWeight = maxOpt.getAsInt();
And at this point you make bugs easier just from decreased readability and the chance to accidentally use the wrong variable.
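To make the overload hazard concrete, here is a self-contained sketch (hypothetical Repo/View stand-ins for the question's repository and TextView) where the dangerous call involves no local variable at all, so type inference cannot be what hides the bug:

```java
public class OverloadDemo {
    // Hypothetical repository whose return type changed from String to int.
    static int getPrice() { return 42; }

    // Two overloads, mimicking TextView.setText(CharSequence) vs setText(int).
    static String setText(String text) { return "text:" + text; }
    static String setText(int resId)   { return "resource#" + resId; }

    public static void main(String[] args) {
        // No val, no var, no inference: the int overload is chosen silently
        // after the return type changed, and the code still compiles.
        System.out.println(setText(getPrice())); // resource#42
    }
}
```

Overload resolution, not inference, is what lets the change slip through.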
Finally, Kotlin wasn't created in a vacuum: the designers could see that when C# introduced local type inference in 2007, it didn't lead to significant problems. Or they could look at Scala, which had it since the beginning in 2004; it had (and has) plenty of user complaints, but local type inference isn't one of them.

Related

Java vs Scala Types Hierarchy

Currently, I am learning Scala and I noticed that Type Hierarchy in Scala is much more consistent. There is Any type which is really super type of all types, unlike Java Object which is only a super type of all reference types.
Java Examples
The Java approach led to the introduction of Wrapper Classes for primitives and auto-boxing. It also led to having types which cannot be, e.g., keys in HashMaps. All those things add to the complexity of the language.
Integer i = new Integer(1); // Is it really needed? 1 is already an int.
HashMap<int, String> // Not valid, as int is not an Object sub type.
Question
It seems like a great idea to have all types in one hierarchy. This leads to the question: Why is there no single hierarchy of all types in Java? There is a division between primitive types and reference types. Does it have some advantages? Or was it a bad design decision?
That's a rather broad question.
Many different avenues exist to explain this. I'll try to name some of them.
Java cares about older code; scala mostly does not.
Programming languages are in the end defined by their community; a language that nobody but you uses is rather handicapped, as you're forced to write everything yourself. That means the way the community tends to do things reflects rather strongly on whether a language is 'good' or not. The java community strongly prefers reasonable backwards compatibility (reasonable as in: if there is a really good reason not to be entirely backwards compatible, for example because the feature you're breaking is very rarely used, or almost always used in ways that are buggy anyway, that's okay). The scala community tends to flock from one hip new way of doing things to the next, and any library that isn't under very active development either does not work at all anymore, or integrating it with more modern scala libraries is a very frustrating exercise.
Java doesn't work like that. This can be observed, for example, in generics: generics (the T in List<T>) weren't in java 1.0; they were introduced in java 1.5, and they were introduced in a way that all existing code would just continue to work fine; all libraries, even without updates, would work all right with newer code; and adapting existing code to use generics did not require picking new libraries or updating much beyond adding the generics to the right places in the source file.
But that came at a cost: erasure. And because the pre-1.5 List class worked with Objects, generics had to work with Object as an implicit bound. Java could introduce an Any type but it would be mostly useless; you couldn't use it in very many places.
Erasure means that, in java, generics are mostly a figment of the compiler's imagination: That's why, given an instance of a list, you cannot ask it what its component type is; it simply does not know. You can write List<String> x = ...; String y = x.get(0); and that works fine, but that is because the compiler injects an invisible cast for you, and it knows this cast is fine because the generics give the compiler a framework to judge that this cast will never cause a ClassCastException (barring explicit attempts to mess with it, which always comes with a warning from the compiler when you do). But you can't cast an Object to an int, and for good reason.
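A quick sketch of what erasure means at runtime — the type parameter is simply gone, so two differently parameterized lists share one runtime class:

```java
import java.util.ArrayList;
import java.util.List;

public class ErasureDemo {
    public static void main(String[] args) {
        List<String>  strings = new ArrayList<>();
        List<Integer> ints    = new ArrayList<>();
        // Both are plain ArrayList at runtime; neither knows its component type.
        System.out.println(strings.getClass() == ints.getClass()); // true
    }
}
```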
The Scala community appears to be more accepting of a new code paradigm that just doesn't interact with the old; they'll refactor all their code and leave some older library by the wayside more readily.
Java is more explicit than scala is.
Scalac will infer tons of stuff; that's more or less how the language is designed (see: implicit). For some language features you're forced to just straight up make a call: you're trading verbosity against clarity. Where you are forced to choose, java tends to err on the side of clarity. This shows up specifically when we're talking about silent heap and wrapper hoisting: java prefers not to do it. Yes, there's auto-boxing (which is silent wrapping), but silently treating an int (which, if handled properly, is orders of magnitude faster than its wrapped variant) as the wrapped variant for an entire collection, just so you can write List<int>, is a bridge too far for java: presumably, it would be too easy to not realize that you're eating all the performance downsides.
That's why java doesn't 'just' go: Eh, whatever, we'll introduce an Any type and tape it all together at runtime by wrapping stuff silently.
primitives are performant.
In java (and as scala runs on the JVM, scala too), there are really only 9 types: int, long, double, short, float, boolean, char, byte, and reference. As in, when you have an int variable, in memory, it is literally just that value, but if you have a String variable, the string lives in the heap someplace and the value you're passing around everywhere is a pointer to it. Given that you can't directly print the pointer or do arithmetic on it, in java we like to avoid the term and call it a 'reference' instead, but make no mistake: That's just a pointer with another name.
pointers are inherently memory-wasting and less performant. There are excellent reasons for this tradeoff, but it is what it is. However, trying to write code that can deal with a direct value just as well as a reference is not easy. Moving this complexity into your face, by making it relatively difficult to write code that is agnostic (which is what the Any type is trying to accomplish), is one way to make sure programmers don't ever get confused about it.
The future
Add up the 3 things above and hopefully it is now clear that an Any type either causes a lot of downsides, or, that it would be mostly useless (you couldn't use it anywhere).
However, there is good news on the horizon. Google for 'Project Valhalla' and 'java value types'. This is a difficult endeavour that will attempt to allow a lot of what an Any type would bring you, including, effectively, primitives in generics, in a way that integrates with existing java code — just like how java's approach to closures meant that java did not need scala's infamous Function8<A,B,C,D,E,F,G,H,R> and friends. Doing it right tends to be harder, so it took quite a while, and project valhalla isn't finished yet. But when it is, you WILL be able to write List<int> list = new ArrayList<int>(), AND use Any types, and it'll all be as performant as can be, and integrate with existing java code as best as possible. Project Valhalla is not part of JDK14 and probably won't make 15 either.

Is it safe to use Kotlin property access syntax to set a Java field

This is a hypothetical question.
The situation is the following:
I am calling a setter of a Java class from a Kotlin file to change the value of the private field x
javaFoo.setX(420)
The IDE suggests to change it to
javaFoo.x = 420
It works normally.
Now suppose the setter has some complicated functionality inside of it, and later on the x field in the Java class is changed to public instead of private. There will be no compile error, but the Kotlin call will change the value of x while skipping the other stuff that happens in the setter, and this can go unnoticed, causing logical errors. Therefore I am wondering: is it safe to use Kotlin property access syntax to set a Java field?
Your analysis of the language semantics is correct. The change on the target class you describe would indeed change the semantics of Kotlin's property access syntax. However, that fact is not the only one to consider when answering your question, which asks whether using that syntax is safe.
When discussing hypothetical scenarios without any real-life constraints, pretty much anything is possible and no language construct is "safe" under that view. What if, one day, the Kotlin team decided to change the semantics of x++ to mean "return x squared, not changing x"? Theoretically, that's possible. Is it likely, though?
Applying the same common-sense logic to your question, the scenario where the maintainer of a class decides to break the encapsulation of a field that has so far been hidden behind a setter with custom logic is extremely unlikely. In fact, if you make a historical analysis of all the Java library projects, you probably won't find a single instance of this having ever happened.
That said, your hypothetical scenario could be seen as a distraction from an actual problem with the shortcut syntax. It can be awkward and misleading to use it to call a setter with custom logic because it breaks our intuition.
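For instance, here is a minimal Java sketch (a hypothetical Thermostat class, not from the question) of a setter whose custom logic a raw field write would silently skip:

```java
public class Thermostat {
    private int target;
    private boolean heaterOn;

    // The setter does more than store the value; a direct field write
    // (which is what the Kotlin assignment would become if `target`
    // ever turned into a public field) would skip the heater logic.
    public void setTarget(int t) {
        target = t;
        heaterOn = t > 20;
    }

    public int getTarget()      { return target; }
    public boolean isHeaterOn() { return heaterOn; }

    public static void main(String[] args) {
        Thermostat th = new Thermostat();
        th.setTarget(25);                    // runs the extra logic
        System.out.println(th.isHeaterOn()); // true
    }
}
```

From Kotlin, thermostat.target = 25 compiles to setTarget(25) today; the question's worry is exactly what happens if that mapping ever silently changes.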
On Android, one such example is ImageView.get/setImageMatrix. You can write
imageMatrix.postRotate(30f)
and expect that to have an effect, but actually, the code you wrote is broken. You should actually have written
val tmpMatrix = Matrix()
tmpMatrix.set(imageMatrix)
tmpMatrix.postRotate(30f)
imageMatrix = tmpMatrix
By our Java intuition, it is actually this version that looks broken, wasting an object allocation for seemingly no purpose. But if you read the contract of setImageMatrix, you'll realize it does quite a bit more than just assign your object to a field: it actually applies the transformation to the image view. Similarly, the contract of the getter disallows mutating the returned object.
I haven't seen much argument over this feature of Kotlin, but I see it as a potential source of bugs for folks migrating from Java. The way to go is to re-train your intuition, sensitizing yourself to the fact that any property access in Kotlin may mean a lot more than meets the eye.

What kinds of type errors can Haskell catch at compile time that Java cannot? [closed]

Closed 8 years ago.
I'm just starting to learn Haskell and keep seeing references to its powerful type system. I see many instances in which the inference is much more powerful than Java's, but also the implication that it can catch more errors at compile time because of its superior type system. So, I'm wondering if it would be possible to explain what kinds of errors Haskell can catch at compile time that Java cannot.
Saying that Haskell's type system can catch more errors than Java's is a little bit misleading. Let's unpack this a little bit.
Statically Typed
Java and Haskell are both statically typed languages. By this I mean that the type of a given expression in the language is known at compile time. This has a number of advantages for both Java and Haskell: namely, it allows the compiler to check that expressions are "sane", for some reasonable definition of sane.
Yes, Java allows certain "mixed type" expressions, like "abc" + 2, which some may argue is unsafe or bad, but that is a subjective choice. In the end it is just a feature that the Java language offers, for better or worse.
Immutability
To see how Haskell code could be argued to be less error-prone than Java (or C, C++, etc.) code, you must consider the type system with respect to the immutability of the language. In pure (normal) Haskell code, there are no side effects. That is to say, no value in the program, once created, may ever change. When we compute something, we create a new result from the old result; we don't modify the old value. This, as it turns out, has some really convenient consequences from a safety perspective. When we write code, we can be sure nothing else anywhere in the program is going to affect our function. Side effects, as it turns out, are the cause of many programming errors. An example would be a shared pointer in C that is freed in one function and then accessed in another, causing a crash. Or a variable that is set to null in Java,
String foo = "bar";
foo = null;
char c = foo.charAt(0); // NullPointerException!
This could not happen in normal Haskell code, because foo, once defined, cannot change. Which means it cannot be set to null.
Enter the Type System
Now, you are probably wondering how the type system plays into all of this, that is what you asked about after all. Well, as nice as immutability is, it turns out there is very little interesting work that you can do without any mutation. Reading from a file? Mutation. Writing to disk? Mutation. Talking to a web server? Mutation. So what do we do? In order to solve this issue, Haskell uses its type system to encapsulate mutation in a type, called the IO Monad. For instance to read from a file, this function may be used,
readFile :: FilePath -> IO String
The IO Monad
Notice that the type of the result is not a String, it is an IO String. What this means, in layman's terms, is that the result introduces IO (side effects) to the program. In a well-formed program, IO will only take place inside the IO monad, allowing us to see very clearly where side effects can occur. This property is enforced by the type system. Further, IO a types can only produce their results, which are side effects, inside the main function of the program. So now we have very neatly and nicely isolated the dangerous side effects to a controlled part of the program. When you get the result of the IO String, anything could happen, but at least it can't happen just anywhere: only in the main function, and only as the result of IO a types.
Now to be clear, you can create IO a values anywhere in your code. You can even manipulate them outside the main function, but none of that manipulation will actually take place until the result is demanded in the body of the main function. For instance,
strReplicate :: IO String
strReplicate =
    readFile "somefile that doesn't exist" >>= return . concat . replicate 2
This function reads input from a file and appends a copy of that input to the original. So if the file had the characters abc, this would create a String with the contents abcabc. You can call this function anywhere in your code, but Haskell will only actually try to read the file when the expression is demanded in the main function, because it is an instance of the IO Monad. Like so,
main :: IO ()
main = strReplicate >>= putStrLn
This will almost surely fail, as the file you requested probably doesn't exist, but it will only fail here. You only have to worry about side effects here, not everywhere in your code as in many other languages.
There is a lot more to both IO and Monads in general than I have covered here, but that is probably beyond the scope of your question.
Type Inference
Now there is one more aspect to this. Type Inference
Haskell uses a very advanced type inference system that allows you to write code that is statically typed without having to write the type annotations, such as String foo in Java. GHC can infer the type of almost any expression, even very complex ones.
What this means for our safety discussion is that everywhere an instance of IO a is used in the program, the type system will make sure that it can't be used to produce an unexpected side effect. You can't cast it to a String, and just get the result out where/when ever you want. You must explicitly introduce the side effect in the main function.
The Safety of Static Typing with the Ease of Dynamic Typing
The type inference system has some other nice properties as well. People often enjoy scripting languages because they don't have to write all that boilerplate for the types like they would have to in Java or C. This is because scripting languages are dynamically typed: the type of an expression is only computed as the expression is being run by the interpreter. This arguably makes these languages more prone to errors, because you won't know if you have a bad expression until you run the code. For example, you might say something like this in Python.
def foo(x,y):
    return x + y
The problem with this is that x and y can be anything. So this would be fine,
foo(1,2) -> 3
But this would cause an error,
foo(1,[]) -> Error
And we have no way of checking that this is invalid until it is run.
It is very important to understand that statically typed languages do not have this problem, Java included. Haskell is not safer than Java in this sense. Haskell and Java both keep you safe from this type of error, but in Haskell you don't have to write all the types in order to be safe; the type system can infer them. In general, it is considered good practice to annotate the types of your functions in Haskell, even though you don't have to. In the body of a function, however, you rarely have to specify types (there are some strange edge cases where you will).
Conclusion
Hopefully that helps illuminate how Haskell keeps you safe. And in regard to Java, you might say that in Java you have to work against the type system to write code, but in Haskell the type system works for you.
Type Casts
One difference is that Java allows dynamic type casts such as (silly example follows):
class A { ... }
static String needsA(A a) { ... }
Object o = new A();
needsA((A) o);
Type casts can lead to runtime type errors, which can be regarded as a cause of type unsafety. Of course, any good Java programmer would regard casts as a last resort, and rely on the type system to ensure type safety instead.
In Haskell, there is (roughly) no subtyping, hence no type casts. The closest feature to casts is the (infrequently used) Data.Typeable library, as shown below
foo :: Typeable t => t -> String
foo x = case (cast x :: Maybe A) of -- here x is of type t
    Just y  -> needsA y             -- here y is of type A
    Nothing -> "x was not an A"
which roughly corresponds to
String foo(Object x) {
    if (x instanceof A) {
        A y = (A) x;
        return needsA(y);
    } else {
        return "x was not an A";
    }
}
The main difference here between Haskell and Java is that in Java we have separate runtime type checking (instanceof) and cast ((A)). This might lead to runtime errors if checks do not ensure that casts will succeed.
I recall that casts were a big concern in Java before generics were introduced, since e.g. using collections forced you to perform a lot of casts. With generics the Java type system greatly improved, and casts should be far less common now in Java, since they are less frequently needed.
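As a side note, since Java 16 the check and the cast can be fused with pattern matching for instanceof, which removes the possibility of checking one type and casting to another; a small sketch:

```java
public class PatternMatch {
    static String describe(Object x) {
        // The instanceof check and the cast happen in one construct;
        // the binding s only exists where the check succeeded.
        if (x instanceof String s) {
            return "a string of length " + s.length();
        }
        return "not a string";
    }

    public static void main(String[] args) {
        System.out.println(describe("abc")); // a string of length 3
        System.out.println(describe(42));    // not a string
    }
}
```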
Casts and Generics
Recall that generic types are erased at run time in Java, hence code such as
if (x instanceof ArrayList<Integer>) {
    ArrayList<Integer> y = (ArrayList<Integer>) x;
}
does not work. The check cannot be fully performed, since we cannot check the type parameter of the ArrayList. Also because of this erasure, if I remember correctly, a cast to ArrayList<Integer> can succeed even if x is actually an ArrayList<String>, only to cause runtime type errors later, even where no casts appear in the code.
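That deferred failure can be demonstrated in a few lines: the unchecked cast succeeds, and the ClassCastException only surfaces later, at a read that mentions no cast at all:

```java
import java.util.ArrayList;
import java.util.List;

public class HeapPollution {
    @SuppressWarnings("unchecked")
    static boolean deferredErrorOccurs() {
        List<String> strings = new ArrayList<>();
        Object o = strings;
        List<Integer> ints = (List<Integer>) o; // succeeds: only ArrayList is checked
        ints.add(42);                           // still no error here
        try {
            String s = strings.get(0);          // compiler-inserted checkcast fails
            return s == null;                   // unreachable
        } catch (ClassCastException e) {
            return true;                        // the error surfaces far from the cast
        }
    }

    public static void main(String[] args) {
        System.out.println(deferredErrorOccurs()); // true
    }
}
```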
The Data.Typeable Haskell machinery does not erase types at runtime.
More Powerful Types
Haskell GADTs and (Coq, Agda, ...) dependent types extend conventional static type checking to enforce even stronger properties on the code at compile time.
Consider e.g. a zip-like Haskell function (the standard library calls this variant zipWith, since it takes a combining function). Here's an example:
zip (+) [1,2,3] [10,20,30] = [1+10,2+20,3+30] = [11,22,33]
This applies (+) in a "pointwise" fashion on the two lists. Its definition is:
-- for the sake of illustration, let's use lists of integers here
zip :: (Int -> Int -> Int) -> [Int] -> [Int] -> [Int]
zip f [] _ = []
zip f _ [] = []
zip f (x:xs) (y:ys) = f x y : zip f xs ys
What happens, however, if we pass lists of different lengths?
zip (+) [1,2,3] [10,20,30,40,50,60,70] = [1+10,2+20,3+30] = [11,22,33]
The longer one gets silently truncated. This may be an unexpected behaviour. One could redefine zip as:
zip :: (Int -> Int -> Int) -> [Int] -> [Int] -> [Int]
zip f [] [] = []
zip f (x:xs) (y:ys) = f x y : zip xs ys
zip f _ _ = error "zip: uneven lengths"
but raising a runtime error is only marginally better. What we need is to enforce, at compile time, that the lists are of the same lengths.
{-# LANGUAGE GADTs #-}

data Z   -- zero
data S n -- successor
-- the type S (S (S Z)) is used to represent the number 3 at the type level
-- List n a is a list of a having exactly length n
data List n a where
Nil :: List Z a
Cons :: a -> List n a -> List (S n) a
-- The two arguments of zip' are required to have the same length n.
-- The result is n elements long as well.
zip' :: (Int -> Int -> Int) -> List n Int -> List n Int -> List n Int
zip' f Nil Nil = Nil
zip' f (Cons x xs) (Cons y ys) = Cons (f x y) (zip' f xs ys)
Note that the compiler is able to infer that xs and ys are of the same length, so the recursive call is statically well-typed.
In Java you could encode the list lengths in the type using the same trick:
class Z {}
class S<N> {}
class List<N,A> { ... }
static <A> List<Z,A> nil() {...}
static <A,N> List<S<N>,A> cons(A x, List<N,A> list) {...}
static <N,A> List<N,A> zip(List<N,A> list1, List<N,A> list2) {
...
}
but, as far as I can see, the zip code can not access the tails of the two lists and have them available as two variables of the same type List<M,A>, where M is intuitively N-1.
Intuitively, accessing the two tails loses type information, in that we do no longer know they are of even length. To perform a recursive call, a cast would be needed.
Of course, one can rework the code differently and use a more conventional approach, e.g. using an iterator over list1. Admittedly, above I am just trying to convert a Haskell function to Java in a direct way, which is the wrong approach to coding Java (just as coding Haskell by directly translating Java code would be). Still, I used this silly example to show how Haskell GADTs can express, without unsafe casts, some code which would require casts in Java.
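For completeness, here is a runnable sketch of that Java encoding (hypothetical Z/S/Vec names): the length lives only in the phantom type parameter, so mismatched lengths are rejected at compile time, even though, as noted above, the recursive zip over the tails cannot be typed without casts:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class PhantomVec {
    static final class Z {}    // type-level zero
    static final class S<N> {} // type-level successor

    // N is a phantom parameter: it exists only at compile time.
    static final class Vec<N, A> {
        final List<A> items;
        private Vec(List<A> items) { this.items = items; }

        static <A> Vec<Z, A> nil() {
            return new Vec<>(Collections.emptyList());
        }
        static <N, A> Vec<S<N>, A> cons(A head, Vec<N, A> tail) {
            List<A> l = new ArrayList<>();
            l.add(head);
            l.addAll(tail.items);
            return new Vec<>(l);
        }
    }

    public static void main(String[] args) {
        Vec<S<S<Z>>, Integer> two = Vec.cons(1, Vec.cons(2, Vec.nil()));
        // A zip declared as <N, A> Vec<N, A> zip(Vec<N, A> a, Vec<N, A> b)
        // would reject pairing `two` with a Vec<S<Z>, Integer> at compile time.
        System.out.println(two.items); // [1, 2]
    }
}
```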
There are several things about Haskell that make it "safer" than Java. The type system is one of the obvious ones.
No type-casts. Java and similar OO languages let you cast one type of object to another. If you can't convince the type system to let you do whatever it is you're trying to do, you can always just cast everything to Object (although most programmers would immediately recognise this as pure evil). The trouble is, now you're in the realm of run-time type-checking, just like in a dynamically-typed language. Haskell doesn't let you do such things. (Unless you explicitly go out of your way to get it; and almost nobody does.)
Usable generics. Generics are available in Java, C#, Eiffel and a few other OO languages. But in Haskell they actually work. In Java and C#, trying to write generic code almost always leads to obscure compiler messages about "oh, you can't use it that way". In Haskell, generic code is easy. You can write it by accident! And it works exactly the way you'd expect.
Convenience. You can do things in Haskell that would be way too much effort in Java. For example, set up different types for raw user input versus sanitised user input. You can totally do that in Java. But you won't. It's too much boilerplate. You will only bother doing this if it's absolutely critical for your application. But in Haskell, it's only a handful of lines of code. It's easy. People do it for fun!
Magic. [I don't have a more technical term for this.] Sometimes, the type signature of a function lets you know what the function does. I don't mean you can figure out what the function does, I mean there is only one possible thing a function with that type could be doing or it wouldn't compile. That's an extremely powerful property. Haskell programmers sometimes say "when it compiles, it's usually bug-free", and that's probably a direct result of this.
While not strictly properties of the type system, I might also mention:
Explicit I/O. The type signature of a function tells you whether it performs any I/O or not. Functions that perform no I/O are thread-safe and extremely easy to test.
Explicit null. Data cannot be null unless the type signature says so. You must explicitly check for null when you come to use the data. If you "forget", the type signatures won't match.
Results rather than exceptions. Haskell programmers tend to write functions that return a "result" object which contains either the result data or an explanation of why no result could be produced. As opposed to throwing an exception and hoping somebody remembers to catch it. Like a nullable value, a result object is different from the actual result data, and the type system will remind you if you forget to check for failure.
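The last two bullets have a direct, if clunkier, Java analogue in Optional, which plays the role of Haskell's Maybe: absence is visible in the type, and you have to go through the type to reach the value. A small sketch:

```java
import java.util.Optional;

public class ExplicitNull {
    // The signature says "there may be no name"; callers cannot ignore it.
    static String greet(Optional<String> name) {
        return name.map(n -> "hello " + n).orElse("hello stranger");
    }

    public static void main(String[] args) {
        System.out.println(greet(Optional.of("Ada"))); // hello Ada
        System.out.println(greet(Optional.empty()));   // hello stranger
    }
}
```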
Having said all of that, Java programs typically die with null pointer or array index exceptions; Haskell programs tend to die with exceptions like the infamous "head []".
For a very basic example, while this is allowable in Java:
public class HelloWorld {
    public static void main(String[] args) {
        int x = 4;
        String name = "four";
        String test = name + x;
        System.out.println(test);
    }
}
The same thing will produce a compile error in Haskell:
fourExample = "four" + 4
There is no implicit type casting in Haskell which helps in preventing silly errors like "four" + 4. You have to tell it explicitly, that you want to convert it to String:
fourExample = "four" ++ show 4

How to avoid confusion when using a scripting language?

I used to write in a strongly typed language, for example Java. I need to tell the compiler what type of variable I will put in... for example...
public static void sayHello(String aName)
I can ensure that the user will pass a string to me...
But if I use PHP, I can do this...
function sayHello($aName)
I can still call sayHello, but I don't know what the param type is... I can make the name more informative, like this:
function sayHelloWithString($aName)
But I can't stop the user passing in an int... it may cause a lot of errors. How can I stop it? Any ideas or experience to share? Thank you.
How about not stopping the user from passing in an int?
In php, you could check is_string, but of course, you'll miss out on objects that have __toString set, or the implicit conversion of numbers to strings.
If you must make your program cry in pain when a developer tries something different, you could specify a type in later versions of PHP (e.g. function foo(ObjectType $bar)...)*
In most loosely typed languages, you want to set up fall-backs for the major types:
number
array
string
generic object
Be liberal in what you accept, be strict in what you send.
* Primitive (scalar) types are not supported for type hinting in PHP 5 and earlier; PHP 7 added scalar type declarations.
There are a few ways to deal with this...
Use an IDE that supports docblocks. This deals with the pre-runtime type checking when writing code.
Use type checking within your function. This only helps with runtime type checking; you won't get any warning while writing your code.
Depending on the type, you can use built-in type hinting. This, however, only works for non-scalar values, specifically arrays and class names.
1 - To implement #1 using a good IDE, you can docblock your function as such:
/**
 * Say hello to someone.
 *
 * @param string $aName
 **/
public function sayHello($aName) {
2 - To implement #2, use the is_* functions:
public function sayHello($aName) {
    if (!is_string($aName)) {
        throw new InvalidArgumentException("Type not correct.");
    }
    // Normal execution
}
3 - You can't do this with your method above, but with something like the following. It's kind of the same as #2, except that it throws a catchable fatal error rather than an exception.
public function manipulateArray(array $anArray) {
It's worth noting that most of this is pretty irrelevant unless you're writing publicly usable library code. You should know what your methods accept, and if you're trying to write good quality code in the first place, you should be checking this beforehand.
Using a good IDE (I recommend phpStorm a thousand times over), you can and should utilise DocBlocks everywhere you can for all of your classes. Not only will they help when writing APIs and normal code, but you can also use them to document your code. If you need to look at the code 6 months later, chances are you're not going to remember it 100%. :-)
Additionally, there's a lot more you can do with docblocks than just define parameter types, look it up.
You can check if what they passed is a string using:
http://php.net/manual/en/function.is-string.php
Then provide appropriate error handling.
function sayHello($aName) {
    if (is_string($aName)) {
        // string OK!
    } else {
        echo "sayHello() only takes strings!";
    }
}
In PHP you can check whether the variable that has been passed is a string by using the is_string function:
<?php
if (is_string($aName)) {
    echo "Yes";
} else {
    echo "No";
}
?>
Hope that helps.
Alternatively/additionally, use type casting to convert the variable to the required type:
http://us3.php.net/manual/en/language.types.type-juggling.php
You have the option of checking to make sure the parameter is of the right type. However, it's worth considering what you'd do if it isn't. If you're just going to throw an exception, you might be better off assuming it's the right type and letting the exception be thrown when something you do isn't allowed. If you're not going to add any more useful information to the exception/error that would already be thrown, then there's not much point in checking it in the first place.
As to giving the user an indication of what type you want, I generally stick with including it in the variable name:
function sayHello($aNameStr)
function addItems($itemList)
...etc...
That, plus reasonable documentation, will mean the user can look at the function and figure out what they should be passing in in the first place.
Some scripting languages have tools that can help you. For example, use strict in Perl requires declaration of each variable before use. But the language is still weakly typed by definition.
Sometimes naming conventions help. For example, we inherited from the good old Fortran tradition that integer variables' names should start with i, j, k, l, m, or n. This convention is still used, at least for indexes.

Technical reason for no default parameters in Java

I've been looking around to try to find what the reasoning is behind not including default parameters for functions in Java.
I'm aware that it's possible to simulate the behavior, either with varargs or by creating several overloaded functions that accept fewer parameters and call the real function that takes all parameters. However, neither of these options matches the clarity and ease of use of, e.g., C++'s syntax.
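For reference, the overload-based simulation mentioned above (telescoping) can be sketched like this; the `Greeter` class and its method names are made up for illustration:

```java
class Greeter {
    // The "real" method takes all parameters.
    static String greet(String name, String greeting) {
        return greeting + ", " + name + "!";
    }

    // Overloads supply the defaults and delegate to the full method.
    static String greet(String name) {
        return greet(name, "Hello");
    }

    static String greet() {
        return greet("world");
    }

    public static void main(String[] args) {
        System.out.println(greet());                 // Hello, world!
        System.out.println(greet("Ada"));            // Hello, Ada!
        System.out.println(greet("Ada", "Goodbye")); // Goodbye, Ada!
    }
}
```

Note how each default value lives in a separate overload; with n optional parameters this approach needs up to n+1 method signatures, which is exactly the boilerplate that default-parameter syntax would eliminate.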
Does anyone know if there's a solid technical reason that would make something like
void myFunc(int a=1, int b=2) {...}
undesirable or infeasible in a new version of Java?
It was not in the initial version of Java because they decided they did not need it, probably to keep things simple.
Adding it now would be tricky, because it needs to be done in a backwards-compatible fashion. Adding varargs, autoboxing and generics in Java5 was a major undertaking, and it could only be done with limited functionality (such as type erasure) and at the cost of increased complexity (the new method resolution rules make for good exam trick questions).
Your best shot would be with a non-Java language on the JVM. Maybe one of them already has this.
I am not aware of a technical reason, apart from the complication of determining which values are being omitted and which are not.
For example, in your sample, if only one integer was passed through, is it a or b that should be defaulted? Most probably b (trailing parameters take the defaults, as in C++), but it does add that level of ambiguity.
A simple solution would be to
void myFunc(Integer a, Integer b) {
    if (a == null) a = 1;
    if (b == null) b = 2;
}
Yes, it is more long-winded, and yes, it hides the defaulting within the code rather than in the method signature (which could then be shown in the JavaDoc), but it does enforce consistency.
I agree that optional arguments would add huge clarity and save the tedious work of defining loads of overloaded methods (so-called telescoping), which do nothing but call each other. However, the enabler for this neat feature is passing arguments by name.
Named association is self-documenting. In contrast, positional argument association is concise, but it forces you to refer back to the method definition at every invocation to check which argument is expected in the nth position. This is ridiculous and motivates us to search for solutions like the Builder pattern. The Builder actually solves both problems at once, because named association is a synonym for optional arguments. But a Builder is useful only for the user; the API designer still must spend space/time creating the Builder class. Pattern bigots might disagree, but it is overkill to create a Builder class for every method with named/optional arguments. Language design should obviate this pattern. I do not know, however, how compatible named arguments would be with variable argument lists.
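To make the Builder trade-off concrete, here is a minimal sketch of the pattern; the `Pizza` class and its fields are invented for illustration:

```java
// A minimal Builder giving named, optional arguments in Java.
// The Pizza example is purely illustrative.
class Pizza {
    final int size;
    final boolean cheese;
    final boolean pepperoni;

    private Pizza(Builder b) {
        this.size = b.size;
        this.cheese = b.cheese;
        this.pepperoni = b.pepperoni;
    }

    static class Builder {
        private final int size;            // required parameter
        private boolean cheese = true;     // defaults live here
        private boolean pepperoni = false;

        Builder(int size) { this.size = size; }

        Builder cheese(boolean v)    { this.cheese = v; return this; }
        Builder pepperoni(boolean v) { this.pepperoni = v; return this; }

        Pizza build() { return new Pizza(this); }
    }

    public static void main(String[] args) {
        // Reads like named arguments; omitted setters keep their defaults.
        Pizza p = new Builder(12).pepperoni(true).build();
        System.out.println(p.size + " cheese=" + p.cheese + " pepperoni=" + p.pepperoni);
    }
}
```

The call site gets the readability of named/optional arguments, but note how much ceremony the API author pays for it: a whole nested class just to express two default values.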
To avoid ambiguity: Java supports method overloading.
We assume the code below:
public int add(int a) {
    // do something
}

public int add(int a, int b = 0) {
    // do something
}
When we call add(12), can you tell which function would be invoked?
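For contrast, without default parameters Java's overload resolution stays unambiguous: each call matches exactly one signature. A small sketch (the `Adder` class is invented for illustration):

```java
class Adder {
    static int add(int a) { return a; }
    static int add(int a, int b) { return a + b; }

    public static void main(String[] args) {
        System.out.println(add(12));    // only add(int) applies: 12
        System.out.println(add(12, 5)); // only add(int, int) applies: 17
    }
}
```

If `add(int a, int b = 0)` were legal, the call `add(12)` could satisfy both signatures, and the compiler would have no principled way to choose between them.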
