Is Java orthogonal? - java

I am wondering whether Java is orthogonal or not, and if it is, which of its features make it so. How can you determine whether a language is orthogonal? For example, I found on some website that C++ is not orthogonal, but with no explanation of why not. What other languages are orthogonal? Please help me, because there is almost no information on the internet about this topic.
Thanks

The Art of UNIX Programming, Chapter 4. Modularity, Orthogonality, Page 89:

Orthogonality is one of the most important properties that can help make even complex designs compact. In a purely orthogonal design, operations do not have side effects; each action (whether it's an API call, a macro invocation, or a language operation) changes just one thing without affecting others. There is one and only one way to change each property of whatever system you are controlling.

Programming Language Pragmatics, Chapter 6, Page 228:

Orthogonality means that features can be used in any combination, that the combinations all make sense, and that the meaning of a given feature is consistent, regardless of the other features with which it is combined.

On Lisp, 5.2 Orthogonality:

An orthogonal language is one in which you can express a lot by combining a small number of operators in a lot of different ways.
I think an orthogonal programming language would be one where each of its features has minimal or no side effects, so it can be used without thinking about how that usage will affect other features. I borrow this from the definition of an orthogonal API.
In Java you'd have to evaluate, for example, whether there is a combination of keywords/constructs that affect each other when used simultaneously on an identifier. For example, when applying public and static to a method, they do not interfere with each other, so these two are orthogonal (no side effects beyond what each keyword is intended to do).
You'd have to do that for all of its features to prove orthogonality. That is one way to go about it. I don't think there is a clear-cut "is or is not orthogonal" answer in this matter either.
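A quick sketch of that kind of check (the class name here is just a hypothetical placeholder):

class ModifierCheck {
    // public and static combine freely -- each changes exactly one
    // property of the method, so the pair is orthogonal:
    public static void helper() { }

    // Other modifier pairs are *not* orthogonal; both of the following
    // declarations are rejected by javac because the keywords interfere:
    // abstract final void impossible();  // abstract methods can't be final
    // private abstract void hidden();    // abstract methods can't be private
}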

Using the term orthogonal programming language is unusual. Typically, in computer science you are really talking about orthogonal instruction sets. However, if we extend the meaning to the grammar of a language:
"...meaning [the language] has a relatively small number of basic constructs and a set of rules for combining those constructs. Every construct has a type associated with it and there are no restrictions on those types...." see ALGOL
Then we can assume that a language in which not all instructions work on all data types is non-orthogonal. The converse, however, is not true: if all of a language's instructions do work on all data types, that does not necessarily mean the language is orthogonal.
More formally, an orthogonal language would have exactly ONE way to do a given operation. A non-orthogonal language would have more than one way to achieve the same effect.
Simplest example: a for loop vs. a while loop.
for and while are non-orthogonal.
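The redundancy is easy to see in Java itself:

// The same iteration written two ways -- two constructs, one effect:
for (int i = 0; i < 10; i++) {
    System.out.println(i);
}

int i = 0;
while (i < 10) {
    System.out.println(i);
    i++;
}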

Orthogonality is a feature of your design, independent of the language. Sure, some languages make it easier to have an orthogonal design for your system, but you shouldn't depend on a specific language to keep your system's design as orthogonal as possible.

Orthogonality is not really a language feature as such, even though some languages have features that promote orthogonality (such as annotations, built-in AOP, ...). Regarding orthogonality in Java: I have written a little case study about this using log4j as an example: "Orthogonality By Example" - you might find it useful.

Lack of orthogonality in C:
An array can contain any data type except void.
Parameters are passed by value, but arrays are passed by reference.
A programming language is considered orthogonal if:
there is a relatively small set of primitive constructs that can be combined in a relatively small number of ways to build data and control structures
every possible structure is legal
For example, a language with four primitive data types (int, float, double, char) and two type operators (array and pointer) can create a relatively large number of data structures.
So the advantage of orthogonality is that it makes the language simple and regular, because there are fewer exceptions.
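A rough Java analogue of that combinatorial idea, using arrays and generics as the two type constructors (the class is only a container for the declarations):

import java.util.List;

class Combinations {
    // Each constructor composes with any element type, so a handful of
    // primitives plus two constructors yields many distinct structures:
    int[] ints;
    double[][] grid;
    char[][][] cube;
    List<int[]> listOfArrays;      // constructors also nest arbitrarily
    List<List<Double>> matrix;
}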

Related

Does the Android Framework utilize Imperative or Object Oriented design?

I know that Java is mostly an object-oriented language, since you can do things like encapsulation, inheritance, and run-time polymorphism.
But when I watch a lot of talks on YouTube about RxJava, they say that under Android you work with imperative rules. Does this relate to the life-cycle methods?
When I work with POJOs, isn't that object-oriented? Does this have to do with how we handle data through our architecture layers? I'm getting confused by all these 'paradigms' and 'styles', especially since RxAndroid is getting thrown into the mix with the 'functional-reactive' style.
First of all: Android is an operating system, not a programming language. Its main language, Java, is object oriented, but lately a lot of effort has gone into making Java more suitable for functional programming. Frameworks such as RxJava emphasize that, too.
Of course, there are different programming models that can be used on the Android platform.
Coming from there: there is simply no sense in assuming that this large, complex environment can be reduced to some simple, always correct single word description. It is a combination of many different aspects.
Or as the US motto goes: e pluribus unum.
Android itself is a platform, not a language, so the question contains a category mistake.
In general, the only way this kind of question can be definitively answered is by resort to fundamental definitions. These were stated by Peter Wegner in 1987 in the paper 'Dimensions of Object-based Language Design'.
Wegner provides the following definitions:
Object-based: a language is object-based if it supports objects as a language feature.
Class-based: an object-based language is class-based (classical) if every object has a class.
Object-oriented: an object-based language is object-oriented if its objects belong to classes and class hierarchies can be incrementally designed by an inheritance mechanism.
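By those definitions, Java qualifies as object-oriented. A minimal sketch (the class names are just illustrative):

// Objects belong to classes, and class hierarchies are built
// incrementally via inheritance -- Wegner's "object-oriented" criterion.
class Shape {
    double area() { return 0.0; }
}

class Circle extends Shape {
    private final double r;
    Circle(double r) { this.r = r; }
    @Override double area() { return Math.PI * r * r; }
}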
I think you've got it a little bit wrong.
Java is an imperative language. You'd be better off asking about the difference between declarative and imperative programming, or the difference between object-oriented and functional programming.
Here is a great article on imperative vs declarative:
https://tylermcginnis.com/imperative-vs-declarative-programming/
Here is a stack overflow answer explaining the difference between object oriented and functional:
Functional programming vs Object Oriented programming
Android provides a framework written in java. Android isn't a language, but Java is. Java is an object oriented language.
I hope that clears things up a bit.
Some people say that Java is more like a hybrid; take this piece from this article:
That said, Java is not a pure Object-Oriented language. Someone said Java is a hybrid, which, IMO, is an accurate description. I would posit Java is a dirty hybrid of an OO language. Consider:
String s = string2.trim();
First, since "String" is immutable, the above code reeks of functional programming. The "trim()" operation should cause the whitespace to be trimmed off both ends of "string2", without needing reassignment. That is to say, operations should act on the data as close to the object as possible. This, to me, makes Java feel dirty (it also leads to tightly-coupled systems due to the prevalence of "get" accessor methods, but that's another topic entirely). Ahem, what? That example is perfectly OO. Object-orientation does not make mutable state necessary. Actually, since strings are passed around so often, the lack of mutator methods really just saves a lot of headache.
Second, Java cannot alter the behaviour of all messages. It mixes the types of "operations" available to objects, depending on their type. The "+" is equally applicable to ints as it is to Strings, but not to Matrices or Colors. This isn't so bad, because you can do matrix.add( matrix ), but it serves to illustrate the point about Java being 'dirty' (or 'impure', if you prefer).
Lastly, it is a hybrid to provide performance gains. Even though Smalltalk has an advanced virtual machine, its inability (when I was using it) to provide a machine-correlated bytecode for integer math placed a significant performance impact on its entire environment. Being a hybrid, Java cannot be called a true Object-Oriented language. But then, why does it matter? Use the right tool for the job and life will be happy!
So:
You can write everything in a procedural style if you want, and not use anything from OOP; but Java is also not pure OOP, because not everything in Java is an object (primitives such as int are not).
Also:
Java 8 introduced some functional programming concepts, one of them being the use of lambdas.
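For instance (a small Java 8 sketch):

import java.util.Arrays;
import java.util.List;

public class LambdaDemo {
    public static void main(String[] args) {
        List<String> names = Arrays.asList("Ann", "Bob", "Eve");
        names.stream()
             .map(String::toUpperCase)      // functional-style transformation
             .forEach(System.out::println); // method reference as a lambda
    }
}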
To sum up, Java is an imperative, OOP, and (depending on the version) functional language.

I want to know the meaning of compile-time decisions

What does it mean to say "with inheritance you're locked into compile-time decisions about code behavior"?
I suggest this post from Donal Fellows on Programmers,
Some languages are pretty strongly static, and only allow the specification of the inheritance relationship between two classes at the time of definition of those classes. For C++, definition time is practically the same as compilation time. (It's slightly different in Java and C#, but not very much.) Other languages allow much more dynamic reconfiguration of the relationship of classes (and class-like objects in Javascript) to each other; some go as far as allowing the class of an existing object to be modified, or the superclass of a class to be changed. (This can cause total logical chaos, but can also model real world nasties quite well.)

But it is important to contrast this to composition, where the relationship between one object and another is not defined by their class relationship (i.e., their type) but rather by the references that each has in relation to the other. General composition is a very powerful and ubiquitous method of arranging objects: when one object needs to know something about another, it has a reference to that other object and invokes methods upon it as necessary. As soon as you start looking for this super-fundamental pattern, you'll find it absolutely everywhere; the only way to avoid it is to put everything in one object, which would be massively dumb! (There's also stricter UML composition/aggregation, but that's not what the GoF book is talking about there.)

One of the things about the composition relationship is that particular objects do not need to be hard-bound to each other. The pattern of concrete objects is very flexible, even in very static languages like C++. (There is an upside to having things very static: it is possible to analyse the code more closely and — at least potentially — issue better code with less overhead.) To recap, Javascript, as with many other dynamic languages, can pretend it doesn't use compilation at all; just pretence, of course, but the fundamental language model doesn't require transformation to a fixed intermediate format (e.g., a “binary executable on disk”). That compilation which is done is done at runtime, and can be easily redone if things vary too much. (The fascinating thing is that such a good job of compilation can be done, even starting from a very dynamic basis…)

Some GoF patterns only really make sense in the context of a language where things are fairly static. That's OK; it just means that not all forces affecting the pattern are necessarily listed. One of the key points about studying patterns is that it helps us be aware of these important differences and caveats. (Other patterns are more universal. Keep your eyes open for those.)
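A small Java sketch of the contrast (all the class names here are hypothetical):

// Inheritance: the relationship is fixed when Service is compiled.
class FileLogger {
    void log(String msg) { /* append to a file */ }
}
class Service extends FileLogger { }   // Service is a FileLogger, forever

// Composition: the collaborator is just a reference, rebindable at runtime.
interface Logger {
    void log(String msg);
}
class ComposedService {
    private Logger logger;                        // held by reference
    ComposedService(Logger logger) { this.logger = logger; }
    void setLogger(Logger logger) { this.logger = logger; }  // swap at runtime
    void doWork() { logger.log("working"); }
}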

What are the subphases of the semantics analysis compiler phase?

I took an interest in finding out how a compiler really works. I looked through several books and all of them agree that the compiler phases are roughly as follows (correct me if I'm wrong): lexical analysis, syntax analysis, semantic analysis, intermediate code, code optimization, code generation. The lexical and syntax phases look pretty clear and straightforward as methods (which does not mean easy, of course). However, I'm still not able to find out what the semantic phase really consists of. For one, I know that there should be some subphases like scope checking, declaration checking, and type checking, but the question that has been bothering me is: are there other things that have to be done? Can you tell me what mandatory steps have to be taken during this phase? I know this strongly depends on the programming language and the compiler implementation, but could you give me some examples concerning C/C++ and Java, and could you please point me to a book/page/article where I can read about those things in depth. Thanks.
Edit:
The books I looked through were "Compilers: Principles, Techniques, and Tools" (Aho et al.) and "Modern Compiler Design" (Grune, van Reeuwijk). I haven't been able to answer this question using them. If you find this question too broad, could you please give an answer considering a compiler implementation of your choice for either C, C++, or Java.
There are typical "semantic analysis" phases that many compilers go through in one form or another. After lexing and parsing, the following actions typically occur in this order:
Name and type resolution. Determines lexical scopes, the identifiers declared in such scopes, the type information for those identifiers, and, for each non-declaration use of an identifier, the declaration to which it refers.
Control flow analysis. The construction of a control flow graph over the computations explicit and/or implied (e.g., constructors) by the code.
Data flow analysis. Determines where variables receive new values, and where those values are read by other parts of the program. (This often has a local analysis done within procedures, followed possibly by one across the procedures.)
Also often done, as part of data flow analysis:
Points-to analysis. Determination, for each pointer at each location in the code, of which entities that pointer might reference.
Call graph. Construction of a call graph across the procedures, often taking into account indirect function pointers whose estimated values occur during the points-to analysis.
As a practical matter, some of these need to be interleaved to produce better results.
Beyond this, there are many analyses used to support various optimizations and code generation passes. If you really want to know more, consult any decent compiler book.
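As a tiny illustration of the first of those steps, here is a hedged sketch of a scoped symbol table in Java (simplified far beyond anything a real compiler would use):

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

class SymbolTable {
    // Innermost scope sits at the head of the deque.
    private final Deque<Map<String, String>> scopes = new ArrayDeque<>();

    void enterScope() { scopes.push(new HashMap<>()); }
    void exitScope()  { scopes.pop(); }

    void declare(String name, String type) {
        scopes.peek().put(name, type);
    }

    // Resolve a use of `name` to its nearest enclosing declaration.
    String resolve(String name) {
        for (Map<String, String> scope : scopes) {  // head-to-tail = inner-to-outer
            if (scope.containsKey(name)) return scope.get(name);
        }
        throw new IllegalStateException("undeclared identifier: " + name);
    }
}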
As already mentioned by templatetypedef, semantic analysis is language specific. For C++ it would among other things involve what template instantiations are required (the C++ language tends towards more and more semantic analysis), and for Java there would need to be some checked exception analysis.
Even for C, the GNU C compiler can be configured to check the arguments given to printf-style format strings. I guess there are hundreds of options for GCC that are at least semi-related to semantic analysis. If you are doing a paper on the subject, you could spend an afternoon counting them :)
Besides availability, I find that the semantic analysis is what differentiates the statically typed imperative object-oriented languages of today.
You can't necessarily divide it into sub-phases at all. There are a number of things that have to be done, but at least conceptually they are all done while walking the parse tree from top to bottom and back up again. What exactly they are and how exactly it all happens depends on the language, the statement being processed, the specific compiler writer, ...
You could start to make a list:
Build symbol table.
Find the declarations of variables referenced.
Check compatibility of variable datatypes.
Establish subexpression types.
...
You can see that already these must be somewhat intermingled in practice, rather than constitute separable sub-phases.

Why ADTs are good and Inheritance is bad?

I am a long-time OO programmer and a functional programming newbie. From my little exposure, algebraic data types look to me like just a special case of inheritance, where you have only a one-level hierarchy and the superclass cannot be extended outside the module.
So my (potentially dumb) question is: if ADTs are just that, a special case of inheritance (again, this assumption may be wrong; please correct me in that case), then why does inheritance get all the criticism and ADTs get all the praise?
Thank you.
I think that ADTs are complementary to inheritance. Both of them allow you to create extensible code, but the way the extensibility works is different:
ADTs make it easy to add new functionality for working with existing types
You can easily add a new function that works with the ADT, which has a fixed set of cases
On the other hand, adding a new case requires modifying all functions
Inheritance makes it easy to add new types when you have fixed functionality
You can easily create an inherited class and implement a fixed set of virtual functions
On the other hand, adding a new virtual function requires modifying all inherited classes
Both the object-oriented world and the functional world have developed ways to get the other type of extensibility. In Haskell, you can use typeclasses; in ML/OCaml, people would use a dictionary of functions or maybe (?) functors to get the inheritance-style extensibility. On the other hand, in OOP, people use the Visitor pattern, which is essentially a way to get something like ADTs.
The usual programming patterns are different in OOP and FP, so when you're programming in a functional language, you're writing the code in a way that requires the functional-style extensibility more often (and similarly in OOP). In practice, I think it is great to have a language that allows you to use both of the styles depending on the problem you're trying to solve.
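A hedged sketch of the ADT side in Java terms (this needs Java 21+, where sealed interfaces, records, and pattern-matching switch are all available):

// A closed sum type: the compiler knows Shape has exactly two cases.
sealed interface Shape permits Circle, Square {}
record Circle(double r) implements Shape {}
record Square(double side) implements Shape {}

class Geometry {
    // Adding a new *function* over the fixed cases is cheap; adding a
    // new *case* would force a change in every such switch.
    static double area(Shape s) {
        return switch (s) {
            case Circle c -> Math.PI * c.r() * c.r();
            case Square q -> q.side() * q.side();
        };
    }
}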
Tomas Petricek has got the fundamentals exactly right; you might also want to look at Phil Wadler's writing on the "expression problem".
There are two other reasons some of us prefer algebraic data types over inheritance:
Using algebraic data types, the compiler can (and does) tell you if you have forgotten a case or if a case is redundant. This ability is especially useful when there are many more operations on things than there are kinds of thing. (E.g., many more functions than algebraic datatypes, or many more methods than OO constructors.) In an object-oriented language, if you leave a method out of a subclass, the compiler can't tell whether that's a mistake or whether you intended to inherit the superclass method unchanged.
This one is more subjective: many people have noted that if inheritance is used properly and aggressively, the implementation of an algorithm can easily be smeared out over half a dozen classes, and even with a nice class browser it can be hard to follow the logic of the program (data flow and control flow). Without a nice class browser, you have no chance. If you want to see a good example, try implementing bignums in Smalltalk, with automatic failover to bignums on overflow. It's a great abstraction, but the language makes the implementation difficult to follow. Using functions on algebraic data types, the logic of your algorithm is usually all in one place, or if it is split up, it is split into functions which have contracts that are easy to understand.
P.S. What are you reading? I don't know of any responsible person who says "ADTs good; OO bad."
In my experience, what people usually consider "bad" about inheritance as implemented by most OO languages is not the idea of inheritance itself but the idea of subclasses modifying the behavior of methods defined in the superclass (method overriding), specifically in the presence of mutable state. It's really the last part that's the kicker. Most OO languages treat objects as "encapsulating state," which amounts to allowing rampant mutation of state inside of objects. So problems arise when, for example, a superclass expects a certain method to modify a private variable, but a subclass overrides the method to do something completely different. This can introduce subtle bugs which the compiler is powerless to prevent.
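A hedged Java sketch of exactly that kind of subtle bug (essentially the InstrumentedHashSet example from Effective Java):

import java.util.Collection;
import java.util.HashSet;
import java.util.List;

class InstrumentedHashSet<E> extends HashSet<E> {
    private int addCount = 0;

    @Override public boolean add(E e) {
        addCount++;
        return super.add(e);
    }

    @Override public boolean addAll(Collection<? extends E> c) {
        addCount += c.size();
        return super.addAll(c);  // HashSet's addAll calls add() internally,
                                 // so every element gets counted twice
    }

    public int getAddCount() { return addCount; }
}

// new InstrumentedHashSet<String>().addAll(List.of("a", "b", "c"))
// leaves getAddCount() == 6, not the expected 3.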
Note that in Haskell's implementation of subclass polymorphism, mutable state is disallowed, so you don't have such issues.
Also, see this objection to the concept of subtyping.
I am a long time OO programmer and a functional programming newbie. From my little exposure algebraic data types only look like a special case of inheritance to me where you only have one level hierarchy and the super class cannot be extended outside the module.
You are describing closed sum types, the most common form of algebraic data types, as seen in F# and Haskell. Basically, everyone agrees that they are a useful feature to have in the type system, primarily because pattern matching makes it easy to dissect them by shape as well as by content and also because they permit exhaustiveness and redundancy checking.
However, there are other forms of algebraic datatypes. An important limitation of the conventional form is that they are closed, meaning that a previously-defined closed sum type cannot be extended with new type constructors (part of a more general problem known as "the expression problem"). OCaml's polymorphic variants allow both open and closed sum types and, in particular, the inference of sum types. In contrast, Haskell and F# cannot infer sum types. Polymorphic variants solve the expression problem and they are extremely useful. In fact, some languages are built entirely on extensible algebraic data types rather than closed sum types.
In the extreme, you also have languages like Mathematica where "everything is an expression". Thus the only type in the type system forms a trivial "singleton" algebra. This is "extensible" in the sense that it is infinite and, again, it culminates in a completely different style of programming.
So my (potentially dumb) question is: If ADTs are just that, a special case of inheritance (again this assumption may be wrong; please correct me in that case), then why does inheritance gets all the criticism and ADTs get all the praise?
I believe you are referring specifically to implementation inheritance (i.e. overriding functionality from a parent class) as opposed to interface inheritance (i.e. implementing a consistent interface). This is an important distinction. Implementation inheritance is often hated whereas interface inheritance is often loved (e.g. in F# which has a limited form of ADTs).
You really want both ADTs and interface inheritance. Languages like OCaml and F# offer both.

Java -> Python?

Besides the dynamic nature of Python (and the syntax), what are some of the major features of the Python language that Java doesn't have, and vice versa?
List comprehensions. I often find myself filtering/mapping lists, and being able to say [line.replace("spam","eggs") for line in open("somefile.txt") if line.startswith("nee")] is really nice.
Functions are first-class objects. They can be passed as parameters to other functions, defined inside other functions, and have lexical scope. This makes it really easy to say things like people.sort(key=lambda p: p.age) and thus sort a bunch of people by their age without having to define a custom comparator class or something equally verbose.
Everything is an object. Java has primitive types which aren't objects, which is why many classes in the standard library define 9 different versions of functions (for boolean, byte, char, double, float, int, long, Object, short). Arrays.sort is a good example. Autoboxing helps, although it makes things awkward when something turns out to be null.
Properties. Python lets you create classes with read-only fields, lazily-generated fields, as well as fields which are checked upon assignment to make sure they're never 0 or null or whatever else you want to guard against.
Default and keyword arguments. In Java, if you want a constructor that can take up to 5 optional arguments, you must define 6 different versions of that constructor, and there's no way at all to say Student(name="Eli", age=25).
Multiple return values. In Java, a function can only return one thing; in Python you have tuple assignment, so you can say spam, eggs = nee(), whereas in Java you'd need to either resort to mutable out-parameters or define a custom class with two fields and then write two additional lines of code to extract those fields (see the Java sketch after this list).
Built-in syntax for lists and dictionaries.
Operator Overloading.
Generally better designed libraries. For example, to parse an XML document in Java, you say
Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder().parse("test.xml");
and in Python (with xml.dom.minidom, for example) you say
from xml.dom.minidom import parse
doc = parse("test.xml")
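Here is the promised sketch of what Python's spam, eggs = nee() costs in Java (the names are just illustrative; records need Java 16+):

// A custom type exists only so the method can return two values:
record Pair(String spam, int eggs) {}

class TupleDemo {
    static Pair nee() {
        return new Pair("spam", 42);
    }

    public static void main(String[] args) {
        Pair result = nee();           // one call...
        String spam = result.spam();   // ...then two extra lines
        int eggs = result.eggs();
        System.out.println(spam + " " + eggs);
    }
}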
Anyway, I could go on and on with further examples, but Python is just overall a much more flexible and expressive language. It's also dynamically typed, which I really like, but which comes with some disadvantages.
Java has much better performance than Python and has way better tool support. Sometimes those things matter a lot and Java is the better language than Python for a task; I continue to use Java for some new projects despite liking Python a lot more. But as a language I think Python is superior for most things I find myself needing to accomplish.
I think this pair of articles by Philip J. Eby does a great job discussing the differences between the two languages (mostly about philosophy/mentality rather than specific language features).
Python is Not Java
Java is Not Python, either
One key difference in Python is significant whitespace. This puts a lot of people off - me too for a long time - but once you get going it seems natural and makes much more sense than ;s everywhere.
From a personal perspective, Python has the following benefits over Java:
No Checked Exceptions
Optional Arguments
Much less boilerplate and less verbose generally
Other than those, this page on the Python Wiki is a good place to look with lots of links to interesting articles.
With Jython you can have both. It's only at Python 2.2, but still very useful if you need an embedded interpreter that has access to the Java runtime.
Apart from what Eli Courtwright said:
I find iterators in Python more concise. You can use for i in something, and it works with pretty much everything. Yeah, Java has gotten better since 1.5, but, for example, in Python you can iterate through a string with this same construct.
Introspection: in Python you can get runtime information about an object or a module - its symbols, methods, or even its docstrings. You can also instantiate classes dynamically. Java has some of this, but usually it takes half a page of code to get an instance of a class, whereas in Python it is about 3 lines. And as far as I know, the docstrings thing is not available in Java.
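For comparison, the reflective route in Java looks roughly like this (not quite half a page, but noticeably heavier):

import java.lang.reflect.Method;

public class ReflectDemo {
    public static void main(String[] args) throws Exception {
        // Look a class up by name and instantiate it dynamically:
        Class<?> cls = Class.forName("java.util.ArrayList");
        Object instance = cls.getDeclaredConstructor().newInstance();

        // Enumerate its methods -- runtime introspection, Java-style:
        for (Method m : cls.getMethods()) {
            System.out.println(m.getName());
        }
        System.out.println("created: " + instance.getClass().getName());
    }
}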
