I had a job interview for a probationary period (I'm not sure if that's the right word), and the interviewer asked me to explain the differences between a structure and a class.
So I told him everything I knew and everything I'd read on MSDN.
And the guy said "not enough"; I didn't have a clue what he meant.
So he said:
Structs are optimized, so if there is an integer and a float that have some identical bytes, the struct will save that space; so a struct with
int = 0 and float = 0 is half the size of one with int = int.MAX, float = float.MIN.
Okay. So I was like - I'd never heard of that.
But then, after the interview, I thought about it, and it doesn't really make sense to me. It would mean that a struct's size changes as we change the values of its variables. And it couldn't stay in the same place in memory - what if there were a collision when it expanded? And we would have to record somewhere which bits we're skipping, so I'm not sure it would give any optimization at all.
Also, he asked me at the beginning what the difference between a struct and a class is in Java. I answered that there are no structs in Java, and he said "Not for programmers, but numeric types are structures." I was kind of like, WTF.
Basically the question is:
Did this guy know something that is very hard to find out (I've been looking for it on the web and can't find a thing),
or does he not know anything about his job and just tries to look cool at job interviews?
The guy seems to be confused about the StructLayoutAttribute that can be applied to C# structs. It allows you to specify how the struct's memory is laid out, and you can, in fact, create a struct whose fields of varying types all start at the same memory address. The part he seems to have missed is that you're only going to use one of those fields at a time. MSDN has more info here. Look at the example struct TestUnion toward the bottom of the page. It contains four fields, all with FieldOffset(0). If you run it, you can set an integer value on the i field and then check the d field and see that it has changed.
To me it looks like (one of you) was not talking about C# structs/classes, but rather about lower-level or more general structs.
There is this special sort of memory optimization used e.g. in
1. C (unions)
and in
2. Pascal (variant records)
see e.g. article How do I translate a C union into Delphi? for an example.
A special form of this "structure" with dynamic polymorphic memory allocation is known as a
3. http://en.wikipedia.org/wiki/Variant_type
and it was used heavily for inter-process data exchange in OLE automation APIs, in the pre-C# era (for decades in multitude of languages).
4. (s)he might also be talking about structure serialization formats vs. class in-memory formats (see e.g. https://developers.google.com/protocol-buffers/docs/encoding for an example of C# structure serialization)
5. you might also be talking about differences in internal JVM memory allocation (see e.g. http://blog.jamesdbloom.com/JVMInternals.html), which reminds me that you might be talking about the class file format and the encoding of structs and special numeric literals vs. the encoding of classes (http://blog.jamesdbloom.com/JVMInternals.html#constant_pool)
So after 5 guesses, I believe there is something lost in your translation of your talk with the interviewer, and (s)he probably wandered into an area you claimed to know, and it turned out you didn't. It also might be that (s)he started talking bullshit and checked your reactions. Lying about your skills on a resume is not recommended for any job (e.g. http://www.softwaretestinghelp.com/5-common-interview-mistakes/). I'd vote that the interviewer knew the interviewing job well enough.
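Java has no unions at all, but the overlapping-fields idea behind C unions and C#'s FieldOffset(0) can be emulated with a ByteBuffer, which makes clear what the mechanism actually does: it reinterprets the same bytes; it does not shrink anything. A minimal sketch (class and method names are mine):

```java
import java.nio.ByteBuffer;

public class UnionDemo {
    // Write an int into four bytes, then read the same bytes as a float -
    // the moral equivalent of reading the "other" field of a C union.
    static float intBytesAsFloat(int i) {
        ByteBuffer shared = ByteBuffer.allocate(4); // 4 bytes, shared "fields"
        shared.putInt(0, i);
        return shared.getFloat(0);
    }

    static int floatBytesAsInt(float f) {
        ByteBuffer shared = ByteBuffer.allocate(4);
        shared.putFloat(0, f);
        return shared.getInt(0);
    }

    public static void main(String[] args) {
        // The bit pattern of the int 1, read as a float, is a tiny denormal.
        System.out.println(intBytesAsFloat(1));     // 1.4E-45
        // The bit pattern of 1.0f, read as an int, is 0x3F800000.
        System.out.println(floatBytesAsInt(1.0f));  // 1065353216
    }
}
```

Note that this also shows why the interviewer's size claim fails: the four shared bytes stay four bytes no matter what values the int and float "fields" hold.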
Related
Is there a canonical way to define a data structure in Java (and by extension Kotlin) which can be serialized into a byte array or sequence of bits in the order the bytes are defined in the structure?
Similar to a Struct in C.
Edit: So, further to this, I would like a simple and expressive way of defining a data structure and then accessing any part of it. So, for instance, in pseudocode:
DataStructure Message {
Bit newFlag;
Bit genderFlag;
Bit sizeFlag;
Bit activeFlag;
Bit[4] operation;
Byte messageSize;
Byte[] message;
}
So then we do:
Message firstMessage = new Message(1, 0, 1, 0, b'0010', 11, "Hello there");
And we can say:
ByteArray serialisedMessage = firstMessage.toBytes();
Which would give us an array which looked like:
[b'10100010', b'00001011', "Hello there" (but in bytes)]
Then we could do:
firstMessage.genderFlag = 1;
..and just rerun .toBytes on the object.
Obviously there are a million ways of doing stuff like this in Java, but nothing syntactically simple, as far as I can see - pretty much all of them would involve writing a custom serialisation method (not object serialisation as per Java) for each object. Perhaps that is the canonical way to do this, but it would be nice to have it simpler, as per C, Rust, and, erm, COBOL.
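For what it's worth, the pseudocode Message above can be hand-rolled in plain Java; this is only a sketch, and the bit layout, the choice to compute messageSize from the string, and the ASCII encoding are all my assumptions:

```java
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;

// Hand-rolled sketch of the pseudocode Message; Java has no C-style
// structs, so the field order and bit packing are written out explicitly.
public class Message {
    final int newFlag, genderFlag, sizeFlag, activeFlag; // 1 bit each
    final int operation;                                 // 4 bits
    final String message;

    Message(int newFlag, int genderFlag, int sizeFlag, int activeFlag,
            int operation, String message) {
        this.newFlag = newFlag;
        this.genderFlag = genderFlag;
        this.sizeFlag = sizeFlag;
        this.activeFlag = activeFlag;
        this.operation = operation;
        this.message = message;
    }

    byte[] toBytes() {
        byte[] body = message.getBytes(StandardCharsets.US_ASCII);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        // Pack the four 1-bit flags and the 4-bit operation into one byte.
        out.write((newFlag << 7) | (genderFlag << 6) | (sizeFlag << 5)
                | (activeFlag << 4) | (operation & 0x0F));
        out.write(body.length); // messageSize, computed rather than passed in
        out.write(body, 0, body.length);
        return out.toByteArray();
    }

    public static void main(String[] args) {
        byte[] bytes = new Message(1, 0, 1, 0, 0b0010, "Hello there").toBytes();
        System.out.println(Integer.toBinaryString(bytes[0] & 0xFF)); // 10100010
    }
}
```

java.nio.ByteBuffer helps for byte-aligned fields, but sub-byte fields still need manual shifting and masking like this - which is exactly the custom serialisation work the question hoped to avoid.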
I do not know the answer to your actual question.
I will offer, however, a thought or two about the nature of the question itself. C was not developed as a high-level language - the best description I've heard of it is a "structured assembler". It has operators that were based on addressing modes available on
the 16-bit machines on which it was first developed, and it was developed not as a standard for applications but as something (much) easier than assembler that still allowed the programmer enough control to write very efficient code. The first two things done with it were a compiler and an operating system, so runtime efficiency was vital (in the early 70s) in ways that no one but mobile and embedded developers can begin to appreciate these days.
The "order the bytes are defined in the structure" is not, to my mind, a good way to think of data in Java - the programmer does not know or care about the order in which the fields in his objects are stored, or whether they're stored at all; it isn't part of the language definition. It seems to me any library that claimed to do this would have to put in disclaimers and/or have its own compiler; while I don't know of any reason it could not do so and still follow the Java specification, I don't know why someone would bother.
So ask yourself why you want to do this. And if you have a good answer, put it in here; I'd be curious.
In the companies I've worked for, I've often seen prefixes used to indicate the scope or origin of variables, for example m for class members, i for a method's internal (local) variables, and a (or p) for method parameters:
public class User {
    private String mUserName;
    public void setUserName(final String aUserName) {
        final String iUserName = "Mr " + aUserName;
        mUserName = iUserName;
    }
}
What do you think about it? Is it recommended (or precisely not)? I found it quite ugly at first, but the more I use it, the more convenient I find it, for example when working on big methods.
Please note that I'm not talking about the Hungarian notation, where prefixes indicate the type rather than the scope.
I've also worked in shops that had rigid prefix-notation requirements, but after a while this became a "smell" that the code had grown out of control, with global variables leaking from everywhere, indicating poor coding/review.
Java's "this." notation is the preferred way to reference a field over a local. The use of the "m" prefix for variables was popularized by that "Micro.." company as a branding gimmick (they even said "don't use that, because we do").
The general rule I follow is to name the variable according to what it is used to store. The variable name is simply an alias. If it stores a user name, then userName is valid. If it is a list of user names, then userNames or userNameList is valid. However, I avoid including the "type" in the variable-name now, because the type changes quite often (shouldn't a collection of user names be a set, in practice? and so on...)
At the end of the day, if the variable name is useful to you to remember what the code was doing down the road, it is probably a good idea. Maintainability and Readability trump "perceived" efficiency and terse syntax, especially because modern compilers are rewriting your code according to macro usage patterns.
I hope this helps somewhat, and am happy to supply more details of any claims herein.
ps. I highly recommend the Elements of Java Style for these types of questions. I used to work with the authors and they are geniuses when it comes to style!
Note: yours is a very opinion-based question (generally frowned on in StackOverflow these days), but I still think it's a worthwhile topic.
So here's my perspective, for what it's worth:
Personally, I think an indicator of scope in variable names can be helpful, both when writing and reading code. Some examples:
If I am reading a class method and I don't see any "m_XXX" being used, I can conclude "this function might as well be static — it doesn't use instance data." This can be done with a quick scan of variables, if the names have that information.
Any time I see "g_XXX" (global), I can start being worried, and pay closer attention (: Especially writing to a global is a big red flag, and especially especially if there is any concurrency/threading involved.
Speaking of concurrency, there is a pretty clear ordering of "safeness" for mutable data: locals are okay, members are dangerous, globals are very dangerous. So, when thinking about such code, having variable scope in mind is important. For this reason, in C/C++ I think having a prefix for function-static variables is useful, too (they're essentially "global" across invocations of that function). More an indication of lifetime than scope, in that case.
It can help junior developers think about the above issues more actively.
The popularity of this convention varies by language. I see it in C++ and C most often. Java somewhat frequently. Not very much in Python, Perl, Bash or other "scripting" languages. I wonder if there is some correlation between "high performance" code and benefit from such a scheme. Maybe just historical happenstance, though. Also, some languages have syntax that already includes some of this info (such as Python's self.xxx).
I say disregard any arguments along the lines of "oh, Microsoft invented that for XYZ, ignore it" or "it looks clunky." I don't care who invented it or why or what it looks like, as long as it's useful (:
Side note: Some IDEs can give you scope information (by hovering your mouse, doing special highlighting, or otherwise), and I can understand that people using such systems find putting that info in the variable names redundant. If your whole team uses a standard environment like that, then great; maybe you don't need a naming scheme. Usually there is some variation across people though, or maybe your code review and diff tools don't offer similar features, so there are still often cases where putting the info inside the text itself is useful.
In an ideal world, we would have only small functions that don't use lots of variables, and the problem such naming prefixes try to solve would not exist (or be small enough to not warrant "corrupting" all your code with such a scheme just to improve some corner cases).
But we do not live in an ideal world.
Small functions are great, but sometimes that is not practical. Your algorithm may have inherent complexity that cannot be expressed succinctly in your language, or you may have other constraints (such as performance or available development time) that require you to write "ugly" code. For the above-mentioned reasons, a naming scheme can help in such cases, and others.
Having a variable naming convention works great for teams, and IMO it must be used for languages that are not strongly typed, e.g. JScript.
In JScript the only type is a 'var', which is basically evaluated at run-time. It becomes more important in JScript to decorate variables with the type of data expected to be in them. Use consistent prefixes that your team agrees on, e.g. s, str, or txt for strings, i/j/k/l/m/n for integers (like FORTRAN :-)), q for doubles, obj or o for non-primitive data, etc. - basically, use common sense. Do not use variable names without any prefix unless the name clearly indicates the data type.
e.g.
The variable name “answer” is a bad name, since the answer could be text or a number, so it should be
strAnswer or sAnswer, jAnswer, qAnswer, etc.
But
“messageText” or "msgTxt" is good enough, because it is clear that the content is some text.
But naming a variable "dataResponse" or "context" is confusing.
Sometimes on a server one needs to fix or debug something where the only editors are vi or Notepad (in the worst cases nano or sed) and there is no contextual help from the editor; there, having a coding convention can be really helpful.
Reading the code is also faster if the convention is followed. It is like having a prefix Mr. or Ms. to determine the gender. Mr.Pat or Ms.Pat... when Pat itself does not tell the gender of Patrick or Patricia...
I have experience in Java, but because of some requirements I need to code in C. Is it difficult to switch from Java to C? And what is the biggest difference between these two languages?
Without question, first and foremost, it's the manual memory management.
Second is that C has no objects, so C code will tend to be structured very differently from Java code.
Edit: a little anecdote: 15 or so years ago, when it was common to log on to your local ISP at a UNIX command prompt, when PPP was still pretty new, and when university campuses still had an abundance of dumb terminals connected to UNIX servers, many had a program called fortune that would run when you logged on and output a random geeky platitude. I literally laughed out loud one day when I logged in and read:
C -- a programming language that
combines the power of assembly
language with the flexibility of
assembly language.
It's funny because it's true: C is the assembly language of modern computing. That's not a criticism, merely an observation.
Perhaps the most difficult concept is learning how to handle pointers and memory management yourself. Java substantially abstracts many concepts related to pointers, but in C, you'll have to understand how pointers are related to one another, and to the other concepts in the language.
Besides pointers and memory management, C is not an object-oriented language. You can organize your code around some object-based concepts, but you will miss features like inheritance, interfaces, and polymorphism.
Java has a massive standard library, while C's is tiny. You'll find yourself re-inventing every kind of wheel over and over again. Even if you're writing the fifteenth linked list library of your career, you might very well make the same mistakes you've made before.
C has no standard containers besides arrays, few algorithms, no standard access to networking, graphics, web anything, xml anything, and so on. You have to really know what you're doing not to accidentally invoke undefined behaviour, cause memory corruption, resource leaks, crashes, etc. It's not for tourists.
Good luck.
Apart from Standard C being a non-OO language, the standard library being very small (and in some places, just plain bad), the goriness of manual memory handling, the complete lack of threading utilities (or even awareness of multithreading), the lax "type system" and being built for single-byte character sets, I think the greatest conceptual difference is that you have to have a clear notion of ownership of objects (or memory chunks, as it becomes in C).
It's always good practice to specify ownership of objects, but for non-GCed languages it is paramount. When you pass a pointer to another function, will that function assume ownership of the pointer or will it just "borrow" it from you for the duration of the call? When you write a function taking a pointer argument, does it make sense for you to assume ownership of the pointer, or does the pointee continue living after the function terminates?
People have covered the big differences: memory management, pointers, no fancy objects (just plain structs). So I will list a couple of minor things:
You have to declare things at the beginning of a block of code, not just when you first use them (at least in C89; C99 relaxed this)
Automatic type conversion between pointers, booleans, ints and just about anything. This is typical C code: if (!ptr) { /* null pointer detected */ }
No bounds-checking in any sense, and fancy pointer arithmetic is allowed: ptr2 = ptr + 10; ptr2[-10]++; is equivalent to ptr[0]++;.
Strings are zero-terminated char arrays. Forgetting this and the previous point causes all sorts of bugs and security holes.
Headers should be separated from implementation code (.h and .c), and must explicitly reference each other via #include statements, avoiding any circular dependencies. There is a preprocessor, and compile-time macros (such as #includes) are a vital part of the language.
And finally -- C has a goto statement. But, if you ever use it, Dijkstra will rise from the grave and haunt you.
Java is a garbage-collected environment; C is not. C has pointers; Java does not. Java passes object references around implicitly, whereas in C you must explicitly pass a pointer when you want reference semantics. Lots more considerations, to be sure.
I'm reading about program specialization - specifically for Java - and to be honest I don't think I quite understand it. So far, what I understand is that it is a method for optimizing the efficiency of programs by constraining parameters or inputs? How is that actually done? Can someone explain how it helps, and maybe give an example of what it actually does and how it's done?
Thanks
I have been reading:
Program Specialization - java
Program specialization is the process of specializing a program when you know in advance what arguments you're going to have.
One example is if you have a test and you know that with your arguments, you're never going to enter the block, you can eliminate the test.
You create a specialized version of the program for a certain kind of input.
Basically, it helps you get rid of work that is useless given your input. However, with modern architectures and compilers (at least in C), you're not going to win a lot in terms of performance.
From the same authors, I would recommend the Tempo work.
EDIT
From the TOPLAS paper:
Program specialization is a program transformation technique that optimizes a program fragment with respect to information about a context in which it is used, by generating an implementation dedicated to this usage context. One approach to automatic program specialization is partial evaluation, which performs aggressive inter-procedural constant propagation of values of all data types, and performs constant folding and control-flow simplifications based on this information [Jones et al. 1993]. Partial evaluation thus adapts a program to known (static) information about its execution context, as supplied by the user (the programmer). Only the program parts controlled by unknown (dynamic) data are reconstructed. Partial evaluation has been extensively investigated for functional languages [Bondorf 1990; Consel 1993], logic languages [Lloyd and Shepherdson 1991], and imperative languages [Andersen 1994; Baier et al. 1994; Consel et al. 1996].
Interesting.
It's not a very common term, at least I haven't come across it before.
I don't have time to read the whole paper, but it seems to refer to the potential to optimise a program depending on the context in which it will be run. An example in the paper shows an abstract "power" operation being optimised through adding a hard-coded "cube" operation. These optimisations can be done automatically, or may require programmer "hints".
It's probably worth pointing out that specialization isn't specific to Java, although the paper you link to describes "JSpec", a Java code specializer.
It looks like Partial Evaluation applied to Java.
The idea is: if you have a general function F(A,B) with two parameters A and B, and (just suppose) every time it is called A is always the same, then you could transform F(A,B) into a new function FA(B) that takes only one parameter, B. This function should be faster because it does not have to process the information in A - it already "knows" it. It can also be smaller, for the same reason.
This is closely related to code generation.
In code generation, you write a code generator G to take input A and write the small, fast specialized function FA. G(A) -> FA.
In specialization, you need three things, the general program F, the specializer S, and the input A: S(F,A) -> FA.
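To make the F(A,B) → FA(B) idea concrete, here is a minimal Java sketch; the power/cube names are hypothetical, echoing the cube example mentioned elsewhere in this thread. The specialized version has the loop over the known argument unrolled into straight-line code:

```java
public class Specialize {
    // General program F(A, B): the exponent is the "static" input A,
    // the base is the "dynamic" input B.
    static int power(int exp, int base) {
        int r = 1;
        for (int i = 0; i < exp; i++) r *= base;
        return r;
    }

    // Specialized program FA(B) for A = 3: the loop over exp is gone,
    // eliminated at "specialization time".
    static int cube(int base) {
        return base * base * base;
    }

    public static void main(String[] args) {
        System.out.println(power(3, 5)); // 125
        System.out.println(cube(5));     // 125 - same result, no loop
    }
}
```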
I think it's a case of divide-and-conquer.
In code generation, you only have to write G(A), which is simple because it only has to consider all As, while the generated program considers all the Bs.
In Partial Evaluation, you have to get an S somewhere, and you have to write F(A,B) which is more difficult because it has to consider the cross product of all possible As and Bs.
In personal experience, a program F(A,B) had to be written to bridge real-time changes from an older hierarchical database to a newer relational one. A was the meta-description of how to map the old database to the new, in the form of another database. B was the changes being made to the original database, and F(A,B) computed the corresponding changes to the newer database. Since A changed at low frequency (weekly), F(A,B) did not have to be written. Instead a generator G(A) was written (in C) to generate FA(B) (in C). Time saved was roughly an order of magnitude of development time, and two orders of magnitude of run time.
Can anyone help compare and contrast Java and COBOL, in terms of technical differences as well as architectural design styles?
Similarities
Cobol and Java were going to change the world and solve the problem of programming.
Neither lived up to the initial hype.
There are now very large, bloated Cobol and Java programs that are used by banks and are "legacy" ... too large and critical to rewrite or throw away.
Cobol introduced the idea of long, readable names in code. Java recommends long, readable names.
Differences
Cobol grew out of the work of an American, Grace Murray Hopper, who received the Department of Defense's highest award, the Defense Distinguished Service Medal.
Java was invented by a Canadian, James Gosling, who received Canada's highest civilian honor, an Officer of the Order of Canada.
COBOL convention uses a "-" to separate words in names; Java convention uses upper/lower CamelCase.
COBOL was popular for a simple reason: it was designed for developing business applications.
Since the syntax was so clear and human-like, and written in a procedural style, adapting to changes in the business environment was much easier. For example, to assign the value of pi to a variable and then subtract zero from it - a simple example just to show actual COBOL statements (it has been years since I last programmed in COBOL):
MOVE 3.14 TO VARPI.
SUBTRACT ZERO FROM VARPI GIVING VARPIRESULT.
IF VARPIRESULT IS EQUAL TO VARPI THEN DISPLAY 'Ok'.
If I remember correctly, COBOL statements have to start within fixed column areas...
Hence it is easier to troubleshoot, because any potential business-logic error can easily be pinpointed. Not only that: since COBOL ran on mainframe systems, data transfer from files was far faster than on other systems such as PCs, which is another reason data processing in COBOL was blindingly fast.
I worked on Y2K fixes on the mainframe (IBM MVS/360), and at the dawn of the 21st century it was incredible - praying that the fixes I had put in wouldn't bring the business applications to their knees. That was the hype; aside from that, COBOL is still used to this day because of the serious speed with which data is shuffled around within mainframes, and its ease of maintainability.
I know, for starters, that Java would not just be able to do that - has Java got a port available for these mainframes (IBM MVS/360, 390, AS/400)?
Now, businesses cannot afford to dump COBOL, as that would effectively be 'committing suicide': that is where their business applications reside, which is why upgrading, migrating, or porting to a different language is too expensive and would cause serious headaches in the business world today.
Not only that - imagine having to rewrite procedural legacy code, which could contain vital business logic, to take advantage of the OOP style of Java. The end result would be 'lost in translation', requiring a lot of patience and causing a lot of stress and pressure.
Imagine a healthcare company (I have worked for one, which ran on the system I mentioned above) ditching all its claims processing, billing, etc. (written in COBOL) for Java, with the potential for glitches - not to mention the serious amount of money to invest, which would cost the company far more. The end result would be chaos and lost money, and customers (corporations that offer employee benefits) would end up dumping the company for a better one.
So to answer your question, I hope I have illustrated the differences - to summarize:
COBOL is:
Procedural language
Simple human like syntax
Very fast on mainframe systems
Easy to maintain code due to syntax
In contrast,
Java is:
Object Oriented
Syntax can get complicated
Requires a Java Virtual Machine to run and execute the compiled bytecode.
Hope this helps,
It is easier to point out what they have in common instead of listing their differences.
So here is the list:
You can use both to make the computer do things
They both get compiled to yet a different language (machine code, byte-code)
That is it!
Similarities:
Both extremely verbose and created with pointy-haired bosses, not programmers, in mind.
Both used primarily for boring business software.
Both have huge legacy and are going to be around a while.
Both languages target the "Write Once, Run Anywhere" idea. If vendor specific extensions are avoided, Cobol is very portable.
Cobol is very much a procedural language, while Java is very much an object-oriented language. That said, there have been vendor-specific OO extensions to Cobol for decades, and the current standard formally specifies OO features. It is also possible to write procedural code in Java; you can easily make a program out of a single main() method.
Both are widely used in enterprise computing for their relative ease of use. Both languages are somewhat hard to shoot yourself in the foot with, compared with other common languages like C and C++.
The most significant difference is that Cobol supports native fixed-point arithmetic. This is very important when dealing with financials. Most languages, Java included, support this only via library classes (java.math.BigDecimal, in Java's case) rather than native types; thus they are many orders of magnitude slower when dealing with fixed-point data, and prone to (potentially very expensive) errors in that library code.
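A small Java illustration of why exact decimal arithmetic matters for financials; the Money class and addExact helper are mine, but java.math.BigDecimal is the standard library class in question:

```java
import java.math.BigDecimal;

public class Money {
    // Exact decimal addition, the way COBOL's packed-decimal fields behave.
    static String addExact(String a, String b) {
        return new BigDecimal(a).add(new BigDecimal(b)).toString();
    }

    public static void main(String[] args) {
        // Binary floating point cannot represent 0.10 or 0.20 exactly:
        System.out.println(0.10 + 0.20);              // 0.30000000000000004
        // BigDecimal keeps exact decimal digits and scale:
        System.out.println(addExact("0.10", "0.20")); // 0.30
    }
}
```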
Cobol is a pure procedural language; there aren't even functions in it (I used Cobol in the 90s, so it might have changed since).
Java is OO (although I've heard there is an OO version of Cobol too). Oh... and the syntax is different.
An excellent list of similarities and differences: http://www.jsrsys.com/jsrsys/s8383sra.htm
COBOL: COBOL Concept Description
Java: Java/OO Similar Concept
++: What Java/OO adds to Concept
When I began Java, I used to think the OO (Object Orientation) was "just like" good programming practices, except it was more formal, and the compiler enforced certain restrictions.
I no longer think that way. However, when you are beginning I think certain "is similar to" examples will help you grasp the concepts.
COBOL: Load Module/Program
Java: Class
COBOL: PERFORM
Java: method
++: can pass parameters to method, more like FUNCTION
other programs/classes can call methods in different classes if declared public. public/private gives designer much control over what other classes can see inside a class.
COBOL: Working Storage, statically linked sub-routine
Java: instance variables
++: (see next)
COBOL: Working Storage, dynamically loaded sub-routine
Java: Class variables
++: Java can mix both class variables (declared static - just the reverse of our COBOL example) and instance variables (the default).
Class variables (static) occur only once per Class (really in one JVM run-time environment).
Instance variables are unique to each instance of a class.
Here is an example from class JsrSysout. From my COBOL background, I like to debug my code by DISPLAYing significant data to the SYSOUT data set. There is a Java method for this, System.out.println(...). The problem with this method is that the data you want just scrolls off the Java console, the equivalent of SYSOUT, or perhaps DISPLAY UPON CONSOLE if you had your own stand-alone machine. I needed a way to easily do displays that would stop when the screen was full. Since there is only one Java console, the line count for the screen clearly needs to be a class variable, so that all instances (each program/class that logs here has its own instance of JsrSysout) stop at the bottom of the screen.
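The class-variable vs. instance-variable distinction can be sketched like this (the Logger name and counters are hypothetical stand-ins for JsrSysout and its line count):

```java
public class Logger {
    static int totalLines = 0;  // class variable: one copy per JVM
    int myLines = 0;            // instance variable: one copy per object

    void log(String msg) {
        totalLines++;           // shared across every Logger instance
        myLines++;              // private to this particular Logger
        System.out.println(msg);
    }

    public static void main(String[] args) {
        Logger a = new Logger();
        Logger b = new Logger();
        a.log("first"); a.log("second"); b.log("third");
        System.out.println(a.myLines + " " + b.myLines + " " + totalLines);
        // each instance keeps its own myLines; totalLines counts them all
    }
}
```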
Multiple Instances of same class:
One (calling program) class can create multiple instances of the same class. Why would you want to do this? One good COBOL example is I/O routines. In COBOL you would need to code one I/O routine for each file you wish to access. If you want to open a particular file twice in one run-time environment you would need a different I/O routine with a different name, even if the logic was identical.
With Java you could code just one class for a particular logical file type. Then for each file you wish to read (or write) you simply create another instance of that class using the new operator. Here are some snippets of code from program IbfExtract that do exactly that. This program exploits the fact that I have written a class for Line Input, and another class for Line Output. These are called JsrLineIn and JsrLineOut.
This illustrates another dynamic feature of Java. When the output array is first created, it is an array of null pointers, so it takes very little space. Only when a new object is created, and the pointer to it is implicitly put in the array, does storage for the object get allocated. That object can be anything from a String to a very complex class.
COBOL: PICTURE
Java: No real equivalent.
I therefore invented a method to mimic a ZZZ,ZZZ,... mask for integer output. I have generally grouped my utility functions in JsrUtil. These are methods that really don't relate to any one type of object. Here is an example of padLeft that implements this logic. padLeft is also a good example of polymorphism (method overloading). In COBOL, if you have different parameter lists, you need different entry points. In Java, the types of the parameters are part of the method definition. For example:
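Here is a sketch of what such overloaded padLeft methods might look like - the bodies are my guesses, not the author's actual JsrUtil code:

```java
import java.util.Locale;

public class JsrUtil {
    // Overloads share one name; the parameter types select the method,
    // where COBOL would need distinct entry points with distinct names.
    public static String padLeft(String s, int width) {
        StringBuilder sb = new StringBuilder();
        for (int i = s.length(); i < width; i++) sb.append(' ');
        return sb.append(s).toString();
    }

    // A second "entry point" with the same name: format an int with
    // grouping commas, then pad - mimicking a ZZZ,ZZZ,ZZ9 picture mask.
    public static String padLeft(int n, int width) {
        return padLeft(String.format(Locale.US, "%,d", n), width);
    }

    public static void main(String[] args) {
        System.out.println('[' + padLeft("abc", 6) + ']');    // [   abc]
        System.out.println('[' + padLeft(1234567, 11) + ']'); // [  1,234,567]
    }
}
```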
COBOL: Decimal arithmetic
Java: Not in native Java types, but there are BigDecimal classes (java.math.BigDecimal in the standard library; IBM has also implemented its own).
I consider this the major weakness of Java for accounting-type applications. I would have liked to see the packed-decimal data type as part of the native JVM byte architecture. I guess it is not there because it is not in C or C++. I have only read about the BigDecimal classes, so I can't really comment on their effectiveness.
COBOL: COPY or INCLUDE
Java: Inheritance
++: Much more powerful!
In COBOL, if you change a COPY or INCLUDE member, you must recompile all the programs that use it. In Java, if program B inherits from program A, a change in program A is automatically inherited by program B without recompiling! Yes, this really works, and lends great power to Java applications. I exploited this for my Read/Sort/Report system. Class IbfReport contains all the basic logic common to the report programs. It has appropriate defaults for all of its methods. Classes IbfRP#### extend IbfReport, and contain only those methods unique to a particular report. If a change is made in IbfReport, it is reflected in the IbfRP#### programs (classes) the next time they are run.
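The IbfReport/IbfRP#### pattern described above can be sketched as follows (class names and method bodies here are hypothetical stand-ins, not the author's code):

```java
// Base class with sensible defaults, in the spirit of IbfReport.
class Report {
    String title() { return "Untitled report"; }  // default implementation
    void printHeading() {
        System.out.println("=== " + title() + " ===");
    }
}

// A subclass overrides only what differs, like the IbfRP#### classes.
class SalesReport extends Report {
    @Override
    String title() { return "Sales by region"; }
}

public class ReportDemo {
    public static void main(String[] args) {
        new SalesReport().printHeading();  // === Sales by region ===
        // A change to Report.printHeading() is picked up by SalesReport
        // without editing SalesReport itself - unlike a COBOL COPY member,
        // which forces recompiling every program that includes it.
    }
}
```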
COBOL: ON EXCEPTION
Java: try/throw/catch
++: can limit scope of error detection (see following)
COBOL: OPEN
Java: Input Streams
++: Automatic error detection, both a blessing and a curse.
COBOL: WRITE
Java: write (yes, really).
COBOL: CLOSE
Java: close method
COBOL: READ
Java: read...