Java and Cobol differences [closed] - java

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 8 years ago.
Can anyone help compare and contrast Java and COBOL in terms of technical differences as well as architectural design styles?

Similarities
Cobol and Java were going to change the world and solve the problem of programming.
Neither lived up to the initial hype.
There are now very large, bloated Cobol and Java programs that are used by banks and are "legacy" ... too large and critical to rewrite or throw away.
Cobol introduced the idea of using long, readable names in code. Java recommends long, readable names.
Differences
Cobol was pioneered by an American, Grace Murray Hopper, who received the Department of Defense's highest non-combat award, the Defense Distinguished Service Medal.
Java was invented by a Canadian, James Gosling, who received Canada's highest civilian honor, an Officer of the Order of Canada.
COBOL convention uses a "-" to separate words in names; Java convention uses upper/lower CamelCase.

COBOL was popular for a simple reason: it was designed for developing business applications.
Because its syntax was so clear and human-like, and written in a procedural style, adapting to changes in the business environment was much easier. For example, to assign the value of pi to a variable and then subtract zero from it (a trivial example, just to show actual COBOL statements/sentences; it is years since I last programmed in Cobol):
MOVE 3.14 TO VARPI.
SUBTRACT ZERO FROM VARPI GIVING VARPIRESULT.
IF VARPIRESULT IS EQUAL TO VARPI DISPLAY 'Ok'.
If I remember correctly, COBOL statements have to start within fixed columns of the 80-column layout (Area B, from column 12 onward)...
And that rigid, readable structure makes it easier to troubleshoot, because a potential business logic error can be pin-pointed quickly. Not only that: since COBOL ran on mainframe systems, data transfer from files was shifted at a speed light-years ahead of other systems such as PCs, and that is another reason why data processing in COBOL was blindingly fast.
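For contrast, a rough Java equivalent of those three COBOL sentences (an illustrative sketch only, not taken from any real system) would be:

public class PiCheck {
    public static void main(String[] args) {
        double varPi = 3.14;                  // MOVE 3.14 TO VARPI
        double varPiResult = varPi - 0;       // SUBTRACT ZERO FROM VARPI GIVING VARPIRESULT
        if (varPiResult == varPi) {           // IF VARPIRESULT IS EQUAL TO VARPI
            System.out.println("Ok");         // DISPLAY 'Ok'
        }
    }
}

Note that a real financial program would avoid double and == here; COBOL's fixed decimal types sidestep that whole class of floating-point problem.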
I worked on Y2K fixes on the mainframe (IBM MVS/360), and at the dawn of the 21st century it was incredible, praying that the fixes I had put in wouldn't bring the business applications to their knees. Hype aside, COBOL is still used to this day because of the serious speed at which data is shuffled around within mainframes, and because of its ease of maintainability.
For starters, I am not sure Java could simply match that. Is there even a Java port available for these mainframes (IBM MVS/360, 390, AS/400)?
Businesses cannot afford to dump COBOL, as doing so would effectively be 'committing suicide': that is where their business applications reside, which is why upgrading, migrating, or porting to a different language is too expensive and would cause serious headaches in the business world today...
Not only that: imagine having to rewrite procedural legacy code that may contain vital business logic so that it takes advantage of Java's OOP style. The end result would be 'lost in translation', requiring a lot of patience and tolerance for stress and pressure.
Imagine a healthcare system (I have worked for one, which ran on the system mentioned above) ditching all of its claims processing, billing, etc. (written in COBOL) for Java: the potential for glitches, not to mention the serious amount of money to invest, would cost the healthcare company far more. The end result would be chaos and lost money, and customers (corporations that offer employee benefits) would end up dumping the company for a better one.
So to answer your question, I hope I have illustrated the differences - to summarize:
COBOL is:
Procedural language
Simple, human-like syntax
Very fast on mainframe systems
Easy to maintain code due to syntax
In contrast,
Java is:
Object Oriented
Syntax can get complicated
Requires a Java Virtual Machine to run and execute the compiled bytecode.
Hope this helps,

It is easier to point out what they have in common instead of listing their differences.
So here is the list:
You can use both to make the computer do things
They both get compiled to yet another language (machine code, byte-code)
That is it!

Similarities:
Both extremely verbose and created with pointy-haired bosses, not programmers, in mind.
Both used primarily for boring business software.
Both have huge legacy and are going to be around a while.

Both languages target the "Write Once, Run Anywhere" idea. If vendor specific extensions are avoided, Cobol is very portable.
Cobol is very much a procedural language, while Java is very much an object-oriented language. That said, there have been vendor-specific OO extensions to Cobol for decades, and the current Cobol standard formally specifies OO features. It is also possible to write procedural code in Java; you can easily make a program out of a single main() method.
Both are widely used in enterprise computing for their relative ease of use. Both languages are somewhat hard to shoot yourself in the foot with, compared with other common languages like C and C++.
The most significant difference is that Cobol supports native fixed-point decimal arithmetic. This is very important when dealing with financials. Most languages, Java included, only support this via library classes rather than primitive types, so they are considerably slower when dealing with fixed-point data and prone to (potentially very expensive) errors in that library code.
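To make that concrete, here is a small, hand-written Java sketch using the standard java.math.BigDecimal class (the figures are made up):

import java.math.BigDecimal;
import java.math.RoundingMode;

public class InterestExample {
    public static void main(String[] args) {
        // COBOL would declare something like PIC 9(7)V99 and get exact
        // fixed-point behaviour natively; in Java we reach for BigDecimal.
        BigDecimal balance = new BigDecimal("1234.56");
        BigDecimal rate = new BigDecimal("0.035");

        // Exact decimal multiplication, then round to 2 places (cents).
        BigDecimal interest = balance.multiply(rate).setScale(2, RoundingMode.HALF_UP);

        System.out.println("Interest: " + interest); // 43.21
    }
}

In COBOL the same calculation would use a packed-decimal PICTURE clause and a COMPUTE statement, with the fixed-point behaviour provided by the language itself rather than by a library class.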

Cobol is a pure procedural language; there are not even functions in it (I used Cobol in the 90s, so it might have changed since).
Java is OO (although I heard there is an OO version of Cobol too). Oh, and the syntax is different.
Excellent list of similarities and differences: http://www.jsrsys.com/jsrsys/s8383sra.htm
COBOL: COBOL Concept Description
Java: Java/OO Similar Concept
++: What Java/OO adds to Concept
When I began Java, I used to think the OO (Object Orientation) was "just like" good programming practices, except it was more formal, and the compiler enforced certain restrictions.
I no longer think that way. However, when you are beginning I think certain "is similar to" examples will help you grasp the concepts.
COBOL: Load Module/Program
Java: Class
COBOL: PERFORM
Java: method
++: can pass parameters to method, more like FUNCTION
Other programs/classes can call methods in different classes if they are declared public. public/private gives the designer much control over what other classes can see inside a class.
COBOL: Working Storage, statically linked sub-routine
Java: instance variables
++: (see next)
COBOL: Working Storage, dynamically loaded sub-routine
Java: Class variables
++: Java can mix both Class variables (called static, just the reverse of our COBOL example) and instance variables (the default).
Class variables (static) occur only once per Class (really in one JVM run-time environment).
Instance variables are unique to each instance of a class.
Here is an example from class JsrSysout. From my COBOL background I like to debug my code by DISPLAYing significant data to the SYSOUT data set. There is a Java method for this, System.out.println(...). The problem with that method is that the data you want just scrolls off the Java console, the equivalent of SYSOUT, or perhaps DISPLAY UPON CONSOLE if you had your own stand-alone machine. I needed a way to easily do displays that would stop when the screen was full. Since there is only one Java console, the line count for the screen clearly needs to be a class variable, so that all instances (each program/class that logs here has its own instance of JsrSysout) stop at the bottom of the screen.
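A minimal sketch of that idea (the class and field names below are illustrative, not the actual JsrSysout source):

public class SysoutLogger {
    // Class (static) variable: one copy shared by every instance, like the
    // Working Storage of a single dynamically loaded sub-routine that all
    // callers share. It tracks lines written to the one console so output
    // can pause when the screen is full.
    private static int linesOnScreen = 0;
    private static final int SCREEN_HEIGHT = 24;

    // Instance variable: each instance (each program/class that logs) gets its own copy.
    private final String callerName;

    public SysoutLogger(String callerName) {
        this.callerName = callerName;
    }

    public void display(String message) {
        System.out.println(callerName + ": " + message);
        linesOnScreen++;                       // shared counter
        if (linesOnScreen >= SCREEN_HEIGHT) {
            waitForOperator();                 // pause on a full screen
            linesOnScreen = 0;
        }
    }

    private void waitForOperator() {
        try {
            System.out.print("-- more --");
            System.in.read();                  // wait for any key
        } catch (java.io.IOException ignored) {
        }
    }

    public static void main(String[] args) {
        SysoutLogger a = new SysoutLogger("ProgramA");
        SysoutLogger b = new SysoutLogger("ProgramB");
        a.display("started");   // both instances bump the same shared counter
        b.display("started");
    }
}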
Multiple Instances of same class:
One (calling program) class can create multiple instances of the same class. Why would you want to do this? One good COBOL example is I/O routines. In COBOL you would need to code one I/O routine for each file you wish to access. If you want to open a particular file twice in one run-time environment you would need a different I/O routine with a different name, even if the logic was identical.
With Java you could code just one class for a particular logical file type. Then for each file you wish to read (or write) you simply create another instance of that class using the new operator. Here are some snippets of code from program IbfExtract that do exactly that. This program exploits the fact that I have written a class for Line Input, and another class for Line Output. These are called JsrLineIn and JsrLineOut.
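The actual IbfExtract, JsrLineIn, and JsrLineOut sources aren't shown here, so the following is only a rough stand-in using the standard library to make the same point (file names and structure are assumptions):

import java.io.IOException;
import java.io.PrintWriter;

public class ExtractDemo {
    public static void main(String[] args) throws IOException {
        // One class, several instances: in COBOL this would usually mean a
        // separately named I/O routine per file, even with identical logic.
        // (File names here are placeholders, not the real IbfExtract files.)
        String[] names = {"extract1.txt", "extract2.txt", "extract3.txt"};

        // The array starts out as all null references and takes very little
        // space; storage for each writer is only allocated when new runs.
        PrintWriter[] output = new PrintWriter[names.length];
        for (int i = 0; i < names.length; i++) {
            output[i] = new PrintWriter(names[i]);
            output[i].println("header for " + names[i]);
        }

        for (PrintWriter out : output) {
            out.close();
        }
    }
}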
This illustrates another dynamic feature of Java. When output is first created, it is an array of null references and takes very little space. Only when a new object is created, and the reference to it is implicitly put in the array, does storage for the object get allocated. That object can be anything from a String to a very complex Class.
COBOL: PICTURE
Java: No real equivalent.
I therefore invented a method to mimic a ZZZ,ZZZ,... mask for integer input. I have generally grouped my utility functions in JsrUtil. These are methods that really don't relate to any particular type of object. Here is an example of padLeft that implements this logic. padLeft is also a good example of polymorphism (overloading): in COBOL, if you have different parameter lists you need different entry points; in Java, the types of the parameters are part of the method's definition. For example:
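The original JsrUtil code isn't reproduced here, so the following overloaded pair is an illustrative stand-in (names and behaviour are assumptions, not the actual padLeft source):

public final class PadUtil {
    private PadUtil() {}

    // Pad a number on the left to a fixed width, roughly the kind of
    // display formatting an edited PICTURE mask provides in COBOL.
    public static String padLeft(long value, int width) {
        return padLeft(String.valueOf(value), width, ' ');
    }

    // Same name, different parameter list: Java picks the right method
    // from the argument types, where COBOL would need separate entry points.
    public static String padLeft(String text, int width, char fill) {
        StringBuilder sb = new StringBuilder();
        for (int i = text.length(); i < width; i++) {
            sb.append(fill);
        }
        return sb.append(text).toString();
    }
}

For instance, PadUtil.padLeft(1234567, 11) returns "    1234567" (four leading blanks); the comma-insertion part of the original mask idea is left out to keep the sketch short.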
COBOL: Decimal arithmetic
Java: Not as a native data type, but there are BigDecimal classes (java.math.BigDecimal in the standard library, and IBM has its own implementations as well).
I consider this the major weakness of Java for accounting-type applications. I would have liked to see a packed decimal data type as part of the native JVM byte architecture. I guess it is not there because it is not in C or C++. I have only read about the BigDecimal classes, so I can't really comment on their effectiveness.
COBOL: COPY or INCLUDE
Java: Inheritance
++: Much more powerful!
In COBOL, if you change a COPY or INCLUDE member, you must recompile all the programs that use it. In Java, if class B inherits from class A, a change in class A is automatically picked up by class B without recompiling B! Yes, this really works, and it lends great power to Java applications. I exploited this for my Read/Sort/Report system. Class IbfReport contains all the basic logic common to the report programs, with appropriate defaults for all of its methods. Classes IbfRP#### extend IbfReport and contain only those methods unique to a particular report. If a change is made in IbfReport, it is reflected in the IbfRP#### programs (classes) the next time they are run.
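The IbfReport source isn't shown, but the shape of the idea in plain Java is roughly this (class and method names are invented for the sketch):

class Report {
    // Sensible defaults, shared by every report.
    String heading() { return "STANDARD REPORT"; }
    String formatLine(String[] fields) { return String.join(" | ", fields); }

    final void print(String[][] rows) {
        System.out.println(heading());
        for (String[] row : rows) {
            System.out.println(formatLine(row));
        }
    }
}

// Only the parts unique to this report are coded here; everything else is
// inherited. If Report changes, SalesReport picks the change up the next
// time it runs, with no change to this file.
class SalesReport extends Report {
    @Override
    String heading() { return "SALES BY REGION"; }

    public static void main(String[] args) {
        String[][] rows = { {"EAST", "1200"}, {"WEST", "950"} };
        new SalesReport().print(rows);
    }
}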
COBOL: ON EXCEPTION
Java: try/throw/catch
++: can limit scope of error detection (see following)
COBOL: OPEN
Java: Input Streams
++: Automatic error detection, both a blessing and a curse.
COBOL: WRITE
Java: write (yes, really).
COBOL: CLOSE
Java: close method
COBOL: READ
Java: read...

Related

Java Program Specialization - What is it? I don't understand it

I'm reading about program specialization, specifically for Java, and I don't think I quite understand it, to be honest. So far, what I understand is that it is a method for optimizing the efficiency of programs by constraining parameters or inputs. How is that actually done? Can someone explain how it helps, and maybe give an example of what it actually does and how it's done?
Thanks
I have been reading:
Program Specialization - java
Program specialization is the process of specializing a program when you know in advance what arguments it is going to receive.
One example: if you have a test and you know that, with your arguments, you are never going to enter the block it guards, you can eliminate the test.
You create a specialized version of the program for a certain kind of input.
Basically, it helps you get rid of work that is useless for your particular input. However, with modern architectures and compilers (at least in C), you're not going to win a lot in terms of performance.
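For instance, here is a hand-written Java sketch of what a specializer would produce automatically when it knows the debug flag is always false (the names are made up):

public class SpecializeDemo {
    // General version: handles both debug and non-debug callers.
    static long sumOfSquares(int[] xs, boolean debug) {
        long total = 0;
        for (int x : xs) {
            if (debug) {                  // test evaluated on every iteration
                System.out.println("x = " + x);
            }
            total += (long) x * x;
        }
        return total;
    }

    // Specialized for debug == false: the test and its dead branch are gone.
    static long sumOfSquaresNoDebug(int[] xs) {
        long total = 0;
        for (int x : xs) {
            total += (long) x * x;
        }
        return total;
    }

    public static void main(String[] args) {
        int[] data = {1, 2, 3, 4};
        System.out.println(sumOfSquares(data, false));   // 30
        System.out.println(sumOfSquaresNoDebug(data));   // 30, without the per-iteration test
    }
}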
From the same authors, I would recommend the Tempo work.
EDIT
From the Toplas paper:
Program specialization is a program transformation technique that optimizes a program fragment with respect to information about a context in which it is used, by generating an implementation dedicated to this usage context. One approach to automatic program specialization is partial evaluation, which performs aggressive inter-procedural constant propagation of values of all data types, and performs constant folding and control-flow simplifications based on this information [Jones et al. 1993]. Partial evaluation thus adapts a program to known (static) information about its execution context, as supplied by the user (the programmer). Only the program parts controlled by unknown (dynamic) data are reconstructed. Partial evaluation has been extensively investigated for functional languages [Bondorf 1990; Consel 1993], logic languages [Lloyd and Shepherdson 1991], and imperative languages [Andersen 1994; Baier et al. 1994; Consel et al. 1996].
Interesting.
It's not a very common term, at least I haven't come across it before.
I don't have time to read the whole paper, but it seems to refer to the potential to optimise a program depending on the context in which it will be run. An example in the paper shows an abstract "power" operation being optimised through adding a hard-coded "cube" operation. These optimisations can be done automatically, or may require programmer "hints".
It's probably worth pointing out that specialization isn't specific to Java, although the paper you link to describes "JSpec", a Java code specializer.
It looks like Partial Evaluation applied to Java.
The idea is this: suppose you have a general function F(A,B) with two parameters A and B, and (just suppose) every time it is called, A is always the same. Then you could transform F(A,B) into a new function FA(B) that takes only one parameter, B. This function should be faster because it does not have to process the information in A - it already "knows" it. It can also be smaller, for the same reason.
This is closely related to code generation.
In code generation, you write a code generator G to take input A and write the small, fast specialized function FA. G(A) -> FA.
In specialization, you need three things, the general program F, the specializer S, and the input A: S(F,A) -> FA.
I think it's a case of divide-and-conquer.
In code generation, you only have to write G(A), which is simple because it only has to consider all As, while the generated program considers all the Bs.
In Partial Evaluation, you have to get an S somewhere, and you have to write F(A,B) which is more difficult because it has to consider the cross product of all possible As and Bs.
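To make F(A,B) -> FA(B) concrete, here is a tiny hand-written Java illustration; the power/cube pairing is the example the paper itself mentions, though this particular code is just a sketch:

public class PowerDemo {
    // General: F(A, B), where A is the exponent and B the base.
    static double power(int exponent, double base) {
        double result = 1.0;
        for (int i = 0; i < exponent; i++) {
            result *= base;
        }
        return result;
    }

    // Specialized for A = 3: FA(B). The exponent parameter and the loop
    // have been folded away; this is the "cube" case from the paper.
    static double cube(double base) {
        return base * base * base;
    }

    public static void main(String[] args) {
        System.out.println(power(3, 2.0));  // 8.0
        System.out.println(cube(2.0));      // 8.0, with no loop and no exponent argument
    }
}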
In my personal experience, a program F(A,B) had to be written to bridge real-time changes from an older hierarchical database to a newer relational one. A was the meta-description of how to map the old database to the new, in the form of another database. B was the stream of changes being made to the original database, and F(A,B) computed the corresponding changes to the newer database. Since A changed at low frequency (weekly), F(A,B) did not have to be written directly. Instead a generator G(A) was written (in C) to generate FA(B) (also in C). The time saved was roughly an order of magnitude in development time, and two orders of magnitude at run time.

As a Java programmer learning Python, what should I look out for? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 9 years ago.
Much of my programming background is in Java, and I'm still doing most of my programming in Java. However, I'm starting to learn Python for some side projects at work, and I'd like to learn it as independent of my Java background as possible - i.e. I don't want to just program Java in Python. What are some things I should look out for?
A quick example - when looking through the Python tutorial, I came across the fact that defaulted mutable parameters of a function (such as a list) are persisted (remembered from call to call). This was counter-intuitive to me as a Java programmer and hard to get my head around. (See here and here if you don't understand the example.)
Someone also provided me with this list, which I found helpful, but short. Anyone have any other examples of how a Java programmer might tend to misuse Python...? Or things a Java programmer would falsely assume or have trouble understanding?
Edit: OK, a brief overview of the points covered by the article I linked to, to prevent duplicates in the answers (as suggested by Bill the Lizard). (Please let me know if I make a mistake in phrasing; I've only just started with Python, so I may not understand all the concepts fully. And a disclaimer: these are going to be very brief, so if you don't understand what a point is getting at, check out the link.)
A static method in Java does not translate to a Python classmethod
A switch statement in Java translates to a hash table in Python
Don't use XML
Getters and setters are evil (hey, I'm just quoting :) )
Code duplication is often a necessary evil in Java (e.g. method overloading), but not in Python
(And if you find this question at all interesting, check out the link anyway. :) It's quite good.)
Don't put everything into classes. Python's built-in list and dictionaries will take you far.
Don't worry about keeping one class per module. Divide modules by purpose, not by class.
Use inheritance for behavior, not interfaces. Don't create an "Animal" class for "Dog" and "Cat" to inherit from, just so you can have a generic "make_sound" method.
Just do this:
class Dog(object):
    def make_sound(self):
        return "woof!"

class Cat(object):
    def make_sound(self):
        return "meow!"

class LolCat(object):
    def make_sound(self):
        return "i can has cheezburger?"
The referenced article has some good advice that can easily be misquoted and misunderstood. And some bad advice.
Leave Java behind. Start fresh. "do not trust your [Java-based] instincts". Saying things are "counter-intuitive" is a bad habit in any programming discipline. When learning a new language, start fresh, and drop your habits. Your intuition must be wrong.
Languages are different. Otherwise, they'd be the same language with different syntax, and there'd be simple translators. Because there are not simple translators, there's no simple mapping. That means that intuition is unhelpful and dangerous.
"A static method in Java does not translate to a Python classmethod." This kind of thing is really limited and unhelpful. Python has a staticmethod decorator. It also has a classmethod decorator, for which Java has no equivalent.
This point, BTW, also included the much more helpful advice on not needlessly wrapping everything in a class. "The idiomatic translation of a Java static method is usually a module-level function".
The switch statement in Java can be implemented several ways in Python. First and foremost, it's usually an if/elif/elif construct; the article is unhelpful in this respect. If you're absolutely sure this is too slow (and can prove it), you can use a Python dictionary as a slightly faster mapping from value to block of code. Blindly translating switch to a dictionary (without thinking) is really bad advice.
Don't use XML. Doesn't make sense when taken out of context. In context it means don't rely on XML to add flexibility. Java relies on describing stuff in XML; WSDL files, for example, repeat information that's obvious from inspecting the code. Python relies on introspection instead of restating everything in XML.
But Python has excellent XML processing libraries. Several.
Getters and setters are not required in Python the way they're required in Java. First, you have better introspection in Python, so you don't need getters and setters to help make dynamic bean objects. (For that, you use collections.namedtuple.)
However, you have the property decorator which will bundle getters (and setters) into an attribute-like construct. The point is that Python prefers naked attributes; when necessary, we can bundle getters and setters to appear as if there's a simple attribute.
Also, Python has descriptor classes if properties aren't sophisticated enough.
Code duplication is often a necessary evil in Java (e.g. method overloading), but not in Python. Correct. Python uses optional arguments instead of method overloading.
The bullet point went on to talk about closure; that isn't as helpful as the simple advice to use default argument values wisely.
One thing you might be used to in Java that you won't find in Python is strict privacy. This is not so much something to look out for as it is something not to look for (I am embarrassed by how long I searched for a Python equivalent to 'private' when I started out!). Instead, Python has much more transparency and easier introspection than Java. This falls under what is sometimes described as the "we're all consenting adults here" philosophy. There are a few conventions and language mechanisms to help prevent accidental use of "unpublic" methods and so forth, but the whole mindset of information hiding is virtually absent in Python.
The biggest one I can think of is not understanding or not fully utilizing duck typing. In Java you're required to specify very explicit and detailed type information upfront. In Python typing is both dynamic and largely implicit. The philosophy is that you should be thinking about your program at a higher level than nominal types. For example, in Python, you don't use inheritance to model substitutability. Substitutability comes by default as a result of duck typing. Inheritance is only a programmer convenience for reusing implementation.
Similarly, the Pythonic idiom is "beg forgiveness, don't ask permission". Explicit typing is considered evil. Don't check whether a parameter is a certain type upfront. Just try to do whatever you need to do with the parameter. If it doesn't conform to the proper interface, it will throw a very clear exception and you will be able to find the problem very quickly. If someone passes a parameter of a type that was nominally unexpected but has the same interface as what you expected, then you've gained flexibility for free.
The most important thing, from a Java POV, is that it's perfectly ok to not make classes for everything. There are many situations where a procedural approach is simpler and shorter.
The next most important thing is that you will have to get over the notion that the type of an object controls what it may do; rather, the code controls what objects must be able to support at runtime (this is by virtue of duck-typing).
Oh, and use native lists and dicts (not customized descendants) as far as possible.
The way exceptions are treated in Python is different from how they are treated in Java. While in Java the advice is to use exceptions only for exceptional conditions, this is not so with Python.
In Python, things like iterators make use of the exception mechanism (StopIteration) to signal that there are no more items. Such a design would not be considered good practice in Java.
As Alex Martelli puts it in his book Python in a Nutshell, the approach common in other languages (and applicable to Java) is LBYL (Look Before You Leap): check in advance, before attempting an operation, for all circumstances that might make the operation invalid.
With Python, the approach is instead EAFP: it's Easier to Ask Forgiveness than Permission.
A corollary to "Don't use classes for everything": callbacks.
The Java way for doing callbacks relies on passing objects that implement the callback interface (for example ActionListener with its actionPerformed() method). Nothing of this sort is necessary in Python, you can directly pass methods or even locally defined functions:
def handler():
    print("click!")

button.onclick(handler)
Or even lambdas:
button.onclick(lambda: print("click!\n"))

Metaprogramming - self explanatory code - tutorials, articles, books [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 6 years ago.
I am looking into improving my programming skills (actually I try to do my best to suck less each year, as our own Jeff Atwood put it), so I was thinking of reading up on metaprogramming and self-explanatory code.
I am looking for something like an idiot's guide to this (free books for download, online resources). I also want more than your average wiki page, and ideally something language-agnostic, or preferably with Java examples.
Do you know of such resources that will let me efficiently put all of this into practice? (I know experience has a lot to say in all of this, but I would like to build experience while avoiding the flow of bad decisions -> experience -> good decisions.)
EDIT:
Something along the lines of this example from The Pragmatic Programmer:
...implement a mini-language to control a simple drawing package... The language consists of single-letter commands. Some commands are followed by a single number. For example, the following input would draw a rectangle:
P 2 # select pen 2
D # pen down
W 2 # draw west 2cm
N 1 # then north 1
E 2 # then east 2
S 1 # then back south
U # pen up
Thank you!
Welcome to the wonderful world of meta-programming :) Meta-programming actually relates to many things. I will try to list what comes to mind:
Macro. The ability to extend the syntax and semantics of a programming language was explored first under the terminology "macro". Several languages have constructions which resemble macros, but the language of choice is of course Lisp. If you are interested in meta-programming, understanding Lisp and its macro system (and the homoiconic nature of the language, where code and data have the same representation) is definitely a must. If you want a Lisp dialect that runs on the JVM, go for Clojure. A few resources:
Clojure mini language
Beating the Averages (why Lisp is a secret weapon)
There is otherwise plenty of resource about Lisp.
DSL. The ability to extend a language's syntax and semantics is now rebranded under the term "DSL". The easiest way to create a DSL is with the interpreter pattern. Then come internal DSLs with fluent interfaces, and external DSLs (per Fowler's terminology). Here is a nice video I watched recently:
DSL: what, why, how
The other answers already pointed to resources in this area.
Reflection. Meta-programming is also inseparable from reflection. The ability to reflect on the program structure at run-time is immensely powerful. It's important to understand what introspection, intercession, and reification are. IMHO, reflection permits two broad categories of things: 1. the manipulation of data whose structure is not known at compile time (the structure of the data is provided at run-time, and the program still works, reflectively); 2. powerful programming patterns such as dynamic proxies, factories, etc. Smalltalk is the language of choice for exploring reflection, since everything in it is reflective. But I think Ruby is also a good candidate, with a community that leverages metaprogramming heavily (though I don't know much about Ruby myself).
Smalltalk: a reflective language
Magritte: a meta-driven approach to empower developers and end-users
There is also a rich literature on reflection.
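As a small Java flavour of that second category, here is a dynamic proxy sketch using only java.lang.reflect.Proxy from the standard library (the Greeter interface is invented for the example):

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

interface Greeter { String greet(String name); }

public class ProxyDemo {
    public static void main(String[] args) {
        // The "class" answering greet() is assembled at run time;
        // every call is routed through one reflective handler.
        InvocationHandler handler = (proxy, method, callArgs) ->
                method.getName() + " called with " + callArgs[0];

        Greeter g = (Greeter) Proxy.newProxyInstance(
                Greeter.class.getClassLoader(),
                new Class<?>[] { Greeter.class },
                handler);

        System.out.println(g.greet("world"));  // greet called with world
    }
}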
Annotations. Annotations could be seen as a subset of the reflective capabilities of a language, but I think it deserves its own category. I already answered once what annotations are and how they can be used. Annotations are meta-data that can be processed at compile-time or at run-time. Java has good support for it with the annotation processor tool, the Pluggable Annotation Processing API, and the mirror API.
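A tiny run-time example using only the standard annotation and reflection APIs (the @Audited annotation is made up for the sketch):

import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;

// A made-up marker annotation, retained at run time so reflection can see it.
@Retention(RetentionPolicy.RUNTIME)
@interface Audited {}

public class AnnotationDemo {
    @Audited
    public void transfer() { /* ... */ }

    public void ping() { /* ... */ }

    public static void main(String[] args) {
        // The program reasons about its own structure: meta-data drives behaviour.
        for (Method m : AnnotationDemo.class.getDeclaredMethods()) {
            if (m.isAnnotationPresent(Audited.class)) {
                System.out.println(m.getName() + " is audited");
            }
        }
    }
}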
Byte-code or AST transformation. This can be done at compile-time or at run-time. It is a somewhat low-level approach, but it can also be considered a form of meta-programming (in a sense, it plays the role of macros for a non-homoiconic language).
DSL with Groovy (There is an example at the end that shows how you can plug your own AST transformation with annotations).
Conclusion: Meta-programming is the ability for a program to reason about itself or to modify itself, just like Meta Stack Overflow is the place to ask questions about Stack Overflow itself. Meta-programming is not one specific technique, but rather an ensemble of concepts and techniques.
Several things fall under the umbrella of meta-programming. From your question, you seem most interested in the macro/DSL part. But everything is ultimately related, so the other aspects of meta-programming are also definitely worth looking at.
PS: I know that most of the links I've provided are not tutorials or introductory articles. They are resources that I like, which describe the concept and the advantages of meta-programming, which I think is more interesting.
I've mentioned C++ template metaprogramming in my comment above. Let me therefore provide a brief example using C++ template meta-programming. I'm aware that you tagged your question with java, yet this may be insightful. I hope you will be able to understand the C++ code.
Demonstration by example:
Consider the following recursive function, which generates the Fibonacci series (0, 1, 1, 2, 3, 5, 8, 13, ...):
unsigned int fib(unsigned int n)
{
    return n >= 2 ? fib(n-2) + fib(n-1) : n;
}
To get an item from the Fibonacci series, you call this function -- e.g. fib(5) --, and it will compute the value and return it to you. Nothing special so far.
But now, in C++ you can re-write this code using templates (somewhat similar to generics in Java) so that the Fibonacci series won't be generated at run-time, but during compile-time:
// fib(n) := fib(n-2) + fib(n-1)
template <unsigned int n>
struct fib // <-- this is the generic version fib<n>
{
    static const unsigned int value = fib<n-2>::value + fib<n-1>::value;
};

// fib(0) := 0
template <>
struct fib<0> // <-- this overrides the generic fib<n> for n = 0
{
    static const unsigned int value = 0;
};

// fib(1) := 1
template <>
struct fib<1> // <-- this overrides the generic fib<n> for n = 1
{
    static const unsigned int value = 1;
};
To get an item from the Fibonacci series using this template, simply retrieve the constant value -- e.g. fib<5>::value.
Conclusion ("What does this have to do with meta-programming?"):
In the template example, it is the C++ compiler that generates the Fibonacci series at compile-time, not your program while it runs. (This is obvious from the fact that in the first example, you call a function, while in the template example, you retrieve a constant value.) You get your Fibonacci numbers without writing a function that computes them! Instead of programming that function, you have programmed the compiler to do something for you that it wasn't explicitly designed for... which is quite remarkable.
This is therefore one form of meta-programming:
Metaprogramming is the writing of computer programs that write or manipulate other programs (or themselves) as their data, or that do part of the work at compile time that
would otherwise be done at runtime.
-- Definition from the Wikipedia article on metaprogramming, emphasis added by me.
(Note also the side-effects in the above template example: As you make the compiler pre-compute your Fibonacci numbers, they need to be stored somewhere. The size of your program's binary will increase proportionally to the highest n that's used in expressions containing the term fib<n>::value. On the upside, you save computation time at run-time.)
From your example, it seems you are talking about domain specific languages (DSLs), specifically, Internal DSLs.
Here is a large list of books about DSL in general (about DSLs like SQL).
Martin Fowler has a book that is a work in progress and is currently online.
Ayende wrote a book about DSLs in Boo.
Update: (following comments)
Metaprogramming is about creating programs that control other programs (or their data), sometimes using a DSL. In this respect, batch files and shell scripts can be considered to be metaprogramming as they invoke and control other programs.
The example you have shows a DSL that may be used by a metaprogram to control a painting program.
Tcl started out as a way of making domain-specific languages that didn't fall apart as they grew in complexity, to the point where it needed to gain generic programming capabilities. Moreover, it remains very easy to add your own commands, precisely because that is still an important use-case for the language.
If you want an implementation integrated with Java, Jacl is an implementation of Tcl in Java which provides scriptability focused towards DSLs, as well as access to any Java object.
(Metaprogramming is writing programs that write programs. Some languages do it far more than others. To pick a few specific cases: Lisp is the classic example of a language that does a lot of metaprogramming; C++ tends to relegate it to templates rather than permitting it at runtime; scripting languages all tend to find metaprogramming easier because their implementations are written to be more flexible that way, though that's just a matter of degree.)
Well, in the Java ecosystem, I think the simplest way to implement a mini-language is to use scripting languages like Groovy or Ruby (yes, I know, Ruby is not a native citizen of the Java ecosystem). Both offer rather good DSL-building mechanisms that will let you do this far more simply than the Java language itself would:
Writing DSL in Groovy
Creating Ruby DSL
There are, however, pure Java alternatives, but I think they will be a little harder to implement.
You can have a look at the eclipse modeling project, they've got support for meta-models.
There's a course on Pluralsight about Metaprogramming which might be a good entry point https://app.pluralsight.com/library/courses/understanding-metaprogramming/table-of-contents

what are the major differences and similarity in java and ruby? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 9 years ago.
I am a Java professional and now I would like to move to Ruby. Are there any similarities between the two languages? And what are the major differences? Both are object oriented.
What about these:
Similarities
As with Java, in Ruby,...
Memory is managed for you via a garbage collector.
Objects are strongly typed.
There are public, private, and protected methods.
There are embedded doc tools (Ruby’s is called RDoc). The docs generated by rdoc look very similar to those generated by javadoc.
Differences
Unlike Java, in Ruby,...
You don’t need to compile your code. You just run it directly.
All member variables are private. From the outside, you access everything via methods.
Everything is an object, including numbers like 2 and 3.14159.
There’s no static type checking.
Variable names are just labels. They don’t have a type associated with them.
There are no type declarations. You just assign to new variable names as-needed and they just “spring up” (i.e. a = [1,2,3] rather than int[] a = {1,2,3};).
There’s no casting. Just call the methods.
The constructor is always named “initialize” instead of the name of the class.
You have “mixin’s” instead of interfaces.
== and equals() are handled differently in Ruby. Use == when you want to test equivalence in Ruby (equals() in Java). Use equal?() when you want to know whether two objects are the same object (== in Java).
Taken from: To Ruby From Java
Besides being object oriented, there are very few similarities between the two languages. Java is a statically typed, compiled language while Ruby is a dynamically typed, interpreted language. The syntax is also very different: Java uses the C convention of terminating statements with a semicolon, while Ruby uses the newline character.
While Java does have some built-in support for iterators, Ruby's use of iterators is pervasive throughout the language.
This obviously only touches upon a comparison of the two. This is a decent write-up on the comparisons
You're asking a very broad question. I like to compare scripting languages similarly to how I'd compare spoken languages, so in this case; what are the major differences and similarities between Spanish and Italian?
If you ask that question, you're going to either get very varied or very long answers. Explaining differences between languages are difficult at best, as it's hard to pinpoint key factors.
This is proved by the responses here so far, as well as the links other people have linked to. They're either varied or very long.
Going back to the Spanish vs. Italian analogy, I could say that the languages are similar but still very different. If you (only) know one of them, you might be able to understand what's going on in the other, though you would probably not be able to use it very well. Knowing one definitely makes it easier for you to learn the other. One is used by a larger number of people, so you might benefit more from learning it.
All of the previous paragraph can be applied to Java vs. Ruby as well. Saying that both are object oriented is like saying Spanish and Italian are both members of the Romance language family.
Of course, all of this is irrelevant. Most probably, your underlying question is whether it's "worth" learning Ruby instead of or in addition to Java. Unfortunately, there's no easy answer to that either. You have to weigh advantages and disadvantages with each language, such as popularity, demand and career opportunities. And then there's naturally the question of which language you prefer. You might like one language more than the other simply because it has a nicer syntax. (Similarly, you may prefer Italian because you think it's more "beautiful" than Spanish, even though the latter is more widespread and you'd have more "use" for it.)
Personally, I prefer Ruby. For many different reasons. Just like I prefer Italian.
The Object Oriented feature in Ruby is actually very different compared to Java.
In Ruby, everything is an object, including a primitive type (in Java) like integer.
In Ruby, new is a method rather than a keyword. So to instantiate an object you would do this in Ruby:
animal = Animal.new
Ruby is strongly typed but also dynamic. Because of this dynamism, Ruby enables you to do duck typing.
Ruby's answer to multiple inheritance is mixin (which is a language feature), where in Java you would implement many interfaces.
Ruby has blocks, where you would use an anonymous class to achieve the same thing in Java. But IMHO Ruby blocks are more powerful.
So I would say there are not many similarities between Java and Ruby. To this day I can't find much in common between the two, as Ruby has gone its own path, unlike many other languages that derive from the C language.

Besides dynamic typing, what makes Ruby "more flexible" than Java? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 8 years ago.
I've been using Java almost since it first came out but have over the last five years gotten burnt out with how complex it's become to get even the simplest things done. I'm starting to learn Ruby at the recommendation of my psychiatrist, uh, I mean my coworkers (younger, cooler coworkers - they use Macs!). Anyway, one of the things they keep repeating is that Ruby is a "flexible" language compared to older, more beaten-up languages like Java but I really have no idea what that means. Could someone explain what makes one language "more flexible" than another? Please. I kind of get the point about dynamic typing and can see how that could be of benefit for conciseness. And the Ruby syntax is, well, beautiful. What else? Is dynamic typing the main reason?
Dynamic typing doesn't come close to covering it. For one big example, Ruby makes metaprogramming easy in a lot of cases. In Java, metaprogramming is either painful or impossible.
For example, take Ruby's normal way of declaring properties:
class SoftDrink
  attr_accessor :name, :sugar_content
end

# Now we can do...
can = SoftDrink.new
can.name = 'Coke'         # Not a direct ivar access — calls can.name=('Coke')
can.sugar_content = 9001  # Ditto
This isn't some special language syntax — it's a method on the Module class, and it's easy to implement. Here's a sample implementation of attr_accessor:
class Module
  def attr_accessor(*symbols)
    symbols.each do |symbol|
      define_method(symbol) { instance_variable_get "@#{symbol}" }
      define_method("#{symbol}=") { |val| instance_variable_set("@#{symbol}", val) }
    end
  end
end
This kind of functionality allows you a lot of, yes, flexibility in how you express your programs.
A lot of what seem like language features (and which would be language features in most languages) are just normal methods in Ruby. For another example, here we dynamically load dependencies whose names we store in an array:
dependencies = %w(yaml haml hpricot sinatra couchfoo)
block_list = %w(couchfoo) # Wait, we don't really want CouchDB!
dependencies.each {|mod| require mod unless block_list.include? mod}
It's also because Ruby is classless (in the Java sense) but totally object oriented (the properties pattern), so you can call any method, even one that is not defined, and you still get a last chance to respond to the call dynamically, for example creating methods on the fly as necessary. Also, Ruby doesn't need compilation, so you can easily update a running application if you want to. And an object can suddenly inherit from another class/object at any time during its lifetime through mixins, which is another point of flexibility. Anyway, I agree with the kids that this language called Ruby, which has actually been around about as long as Java, is very flexible and great in many ways, but I still haven't been able to agree that it's beautiful (syntax-wise). C is more beautiful IMHO (I'm a sucker for brackets), but beauty is subjective; the other qualities of Ruby are objective.
Blocks, closures, many things. I'm sure some much better answers will appear in the morning, but for one example here's some code I wrote ten minutes ago - I have an array of scheduled_collections, some of which have already happened, others which have been voided, canceled, etc. I want to return an array of only those that are pending. I'm not sure what the equivalent Java would be, but I imagine it's not this one-line method:
def get_all_pending
  scheduled_collections.select { |sc| sc.is_pending? }
end
A simpler example of the same thing is:
[0,1,2,3].select{|x| x > 1}
Which will produce [2,3]
Things I like
less code to get your point across
passing around code blocks (Proc, lambdas) is fun and can result in tinier code. e.g. [1, 2, 3].each{|x| puts "Next element #{x}"}
has the scripting roots of Perl: very nice for slicing and dicing routine stuff like parsing files with regexps, and so on
the core data structure class API like Hash and Array is nicely done.
Metaprogramming (owing to its dynamic nature) - ability to create custom DSLs (e.g. Rails can be termed a DSL for WebApps written in Ruby)
the community that is spawning gems for just about anything.
Mixins. Altering a Ruby class to add new functionality is trivially easy.
Duck typing refers to types being considered equivalent based on the methods they implement, not on their declared type. To take a concrete example, many methods in Ruby take an IO-like object to operate on a stream. This means the object has to implement enough of IO's methods to be able to pass as an IO-type object (it has to quack enough like a duck).
In the end it means you have to write less code than in Java to do the same thing. Not everything about dynamic languages is great, though. You more or less give up all of the compile-time type checking that Java (and other strongly/statically typed languages) gives you. Ruby simply has no idea if you're about to pass the wrong object to a method; that will give you a runtime error. And it won't give you that error until the code is actually called.
Just for laughs, a fairly nasty example of the flexibility of the language:
class Fixnum
  def +(other)
    self - other
  end
end

puts 5 + 3
# => 2
