VUgen: Recording trivial RMI interaction records invalid script? - java

After recording just the appearance of the logon window of our Java app in LR/VUgen 9.51 using the RMI protocol, the resulting script fails on replay with a java.lang.ArrayIndexOutOfBoundsException. The offending code fragment looks like this:
_hashtable2 = new Hashtable();
_object_array3 = ((java.util.Collection)_hashtable2.values()).toArray();
_hashtable2.put("sessionId",(java.lang.String)_object_array3[0]); //yields exception!
_boolean1 = _mopsconstantserverif1.psi_requiresHostCommunication((java.util.Hashtable)_hashtable2, (java.util.Vector)null);
Of course creating an empty Hashtable, converting its (empty) values collection to an array, and then referencing the first array element must yield an ArrayIndexOutOfBoundsException, right? But why does LoadRunner generate this kind of code at all? Is this a bug, or am I doing something wrong? I have never seen a problem like this before when using RMI with LoadRunner.
Since the cause of the playback error is obvious and independent of the rest of the recorded code (i.e. limited to the four statements shown), I'm asking without posting the whole script...
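For what it's worth, the generated fragment can be hand-patched so replay does not blow up. This is only a sketch: it assumes the server really expects a sessionId entry and that its value can be correlated or parameterized; the variable names come from the recording, but the <pSessionId> placeholder is mine, not something VUgen generated.
// Hand-edited version of the generated fragment (sketch only).
_hashtable2 = new Hashtable();
_object_array3 = ((java.util.Collection)_hashtable2.values()).toArray();
if (_object_array3.length > 0) {
    _hashtable2.put("sessionId", (java.lang.String)_object_array3[0]);
} else {
    _hashtable2.put("sessionId", "<pSessionId>"); // assumed correlated/parameterized value
}
_boolean1 = _mopsconstantserverif1.psi_requiresHostCommunication((java.util.Hashtable)_hashtable2, (java.util.Vector)null);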

Ahh, RMI, the bane of my existence. I dislike the RMI/Java combination in LoadRunner so much that I do as much RMI work in Winsock as I can. You might consider Winsock as a plan B to avoid the Java issues you are hitting, since Winsock is a straight C virtual user type. A Windows-sockets virtual user avoids the dark magic of the Java/LoadRunner combination, is lighter on resources, and is faster as a result. And yes, I am a glutton for punishment on the Winsock front, but it keeps the C skills razor sharp!

Related

ScreenDevice.getIDstring() strange value

I am creating a Java application that will run on both Linux and Windows. The application will execute native code for multiple-monitor configurations. Because of this, I want to pass the monitor's ID from Java to the native code. This has led me to Java's ScreenDevice.getIDstring().
Javadocs claim that this method is useful for debugging, but that it also uniquely identifies a monitor. For Linux devices, it returns a usable ID string that allows the native code to quickly retrieve the desired monitor object.
On Windows, however, the method simply returns /display0 for the first display and then counts upward for each subsequent display (regardless of whether both monitors are on one graphics card or on separate graphics cards).
When it comes to Windows C++ code, I have tried using EnumDisplayDevices and looking at the respective DeviceIDs and DeviceStrings. These do not match the values of Java:
First Monitor - Java = /display0, Windows = \\.\DISPLAY1
Second Monitor - Java = /display1, Windows = \\.\DISPLAY7
I am really confused as to where this /display# ID value is coming from... Where can I acquire the same /display# value when running native Windows C++ code?
Note: It would technically be possible to cycle through all existing displays and try to match the X/Y/width/height between the native OS code and Java, but that would be a lot of work and not very efficient. I see it as a possible solution, but it is far from ideal (a sketch of this approach follows).
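For illustration, here is a minimal sketch of that geometry-matching fallback on the Java side: enumerate the screens (the class is GraphicsDevice, obtained from GraphicsEnvironment) and print each ID string together with its bounds, so the native code can match them against whatever EnumDisplayDevices or a geometry-based enumeration reports. Nothing here is specific to the asker's application.
import java.awt.GraphicsDevice;
import java.awt.GraphicsEnvironment;
import java.awt.Rectangle;

public class ScreenDump {
    public static void main(String[] args) {
        GraphicsDevice[] screens =
                GraphicsEnvironment.getLocalGraphicsEnvironment().getScreenDevices();
        for (GraphicsDevice screen : screens) {
            // getIDstring() returns the /display0, /display1, ... value from the question
            Rectangle b = screen.getDefaultConfiguration().getBounds();
            System.out.printf("%s bounds=%d,%d %dx%d%n",
                    screen.getIDstring(), b.x, b.y, b.width, b.height);
        }
    }
}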

Matlab segmentation fault when iterating vector assignment

I've been vectorizing some MATLAB code I'd previously written, and during this process MATLAB started crashing with segmentation faults. I narrowed the problem down to a single type of computation: assigning to multiple struct properties.
For example, even self assignment of this form eventually causes a seg fault when executed several thousand times:
[my_class_instance.my_struct_vector.my_property] = my_class_instance.my_struct_vector.my_property;
I initially assumed this must be a memory leak of some sort, so I tried printing out Java's free memory after every iteration, but this remained fairly constant.
So yeah, completely at a loss now as to why this breaks :-/
UPDATE: the following change fixes the seg faulting:
temp = [my_class_instance.my_struct_vector];
[temp.my_property] = temp.my_property;
[my_class_instance.my_struct_vector] = temp;
The question is now why this fixes anything. Something about repeatedly accessing a handle class rather than a struct list, perhaps?
UPDATE 2: THE PLOT THICKENS
I've finally replicated the problem and the work around using a dummy program simple enough to post here:
a simple class:
classdef test_class
    properties
        test_prop
    end
end
And a program that makes a bunch of vector assignments with the class and always crashes:
test_instance = test_class();
test_instance.test_prop = struct('test_field',{1 1});
for i = 1:10000
    [test_instance.test_prop.test_field] = test_instance.test_prop.test_field;
end
UPDATE 3: THE PLOT THINS
Turns out I found a bug. According to Matlab tech support, repeated vector assignment of class properties simply won't work in R2011a (and presumably in earlier versions). He told me it works fine in R2012a, and then mentioned the same workaround I discovered: use a temporary variable.
So yeah...
pretty sure this question ends with that support ticket, but if any daring individuals want to take a shot as to WHY this bug exists at all, I'd definitely still be interested in such an answer. (learning is fun!)
By far the most likely cause is that the operation internally uses self-modifying code. The problem with this is that modern processors have CPU caches, so if you change code in memory that has already been committed to a cache, you can get a seg fault.
The reason it appears random is that it depends on whether the modified code is in the cache at the time of modification, among other factors.
To avoid this, the programmer has to make sure the code flushes the cache before doing a self-modification.

Parsing IBM 3270 data in java

I was wondering if anyone had experience retrieving data with the 3270 protocol. My understanding so far is:
Connection
I need to connect to an SNA server using telnet, issue a command, and then some data will be returned. I'm not sure how this connection is made, since I've read that a standard telnet connection won't work. I've also read that IBM has a library to help, but I haven't got as far as finding out any more about it.
Parsing
I had assumed that the data being returned would be a string of 1920 characters, since the 3278 screen was 80x24 characters, and that I would simply need to parse these characters into the appropriate fields. The more I read about the 3270 protocol, the less this seems to be the case. The documentation provided with a trial of the Jagacy 3270 Java library says that attributes are marked in the protocol with the character 'A' before the attribute, and my understanding is that there are further characters denoting other factors, such as whether fields are editable.
I'm reasonably sure my thinking has been too simplistic. Take, for example, a screen containing a list of items: pressing a special key on one of the 24 visible rows drills down into more detailed information about that row.
It's also been suggested to me that print commands can be issued. This has some positive implications: if the returned string is not exactly 1920 characters because it contains control characters such as the 'A' attribute markers, printing would get rid of them, and it would also avoid having to page through lots of data. The flip side is that I wouldn't know how to retrieve the output of the print command back in Java.
So..
I currently don't have access to the SNA server, but I have some screenshots of what the terminal will look like once I get a connection, so I was going to start work on the parsing. With so many assumptions and little idea of what the data will look like, I feel really stumped. Does anyone have any knowledge of these systems that might help get me back on track?
You've picked a ripper of a problem there. 3270 is a very complex protocol indeed. I wouldn't bother trying to implement it yourself; it's a fool's errand, and I'm speaking from painful personal experience. Try to find a TN3270 (Telnet 3270) client API.
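To make that concrete: once a client API hands you the rendered screen, the parsing described in the question really is just row/column arithmetic on a flat 24x80 buffer. Below is a minimal sketch that assumes such a flattened buffer; the buffer-producing call is deliberately left out, because it depends entirely on which TN3270 client library you end up with.
public class ScreenScraper {
    private static final int COLS = 80;
    private static final int ROWS = 24;

    // Extracts 'length' characters starting at 1-based (row, col) from a flat buffer.
    public static String field(String screen, int row, int col, int length) {
        int start = (row - 1) * COLS + (col - 1);
        return screen.substring(start, start + length).trim();
    }

    public static void main(String[] args) {
        // Fake 24x80 buffer for demonstration; a real one would come from the client API.
        StringBuilder sb = new StringBuilder(" ".repeat(ROWS * COLS));
        sb.replace(4 * COLS + 19, 4 * COLS + 29, "ACC1234567");   // row 5, cols 20-29
        String screen = sb.toString();

        System.out.println(field(screen, 5, 20, 10));   // prints ACC1234567
    }
}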
This might not specifically answer your question, but...
If you are using Rational Developer for z/OS, your Java code should be able to use the integrated HATS product to deal with the 3270 stream. It might not fit your project, but I thought I would mention it: if all you are trying to do is some simple screen scraping, it makes things very easy.

MS Access - Can't Open Any More Tables

At work we have to deal with several MS Access .mdb files, so we use the default JdbcOdbcBridge driver which comes with the Sun JVM, and for most cases it works great.
The problem is that when we have to deal with some larger files, we repeatedly hit exceptions with the message "Can't open any more tables". How can we avoid that?
We already close all our instances of PreparedStatement and ResultSet, and even set their variables to null, but the exception keeps happening. What should we do? How can we avoid these nasty exceptions? Does anyone here know how?
Is there any additional configuration of the ODBC drivers on Windows that we can change to avoid this problem?
"Can't open any more tables" is a better error message than the "Can't open any more databases," which is more commonly encountered in my experience. In fact, that latter message is almost always masking the former.
The Jet 4 database engine has a limit of 2048 table handles. It's not entirely clear to me whether this is simultaneous or cumulative within the life of a connection. I've always assumed it is cumulative, since opening fewer recordsets at a time in practice seems to make it possible to avoid the problem.
The issue is that "table handles" doesn't refer just to handles on base tables, but to something much broader.
Consider a saved QueryDef with this SQL:
SELECT tblInventory.* From tblInventory;
Running that QueryDef uses TWO table handles.
What, you might ask? It only uses one table! But Jet uses one table handle for the table and another for the saved QueryDef itself.
Thus, if you have a QueryDef like this:
SELECT qryInventory.InventoryID, qryAuthor.AuthorName
FROM qryInventory INNER JOIN qryAuthor ON qryInventory.AuthorID = qryAuthor.AuthorID
...if each of your source queries has two tables in it, you're using these table handles, one for each:
Table 1 in qryInventory
Table 2 in qryInventory
qryInventory
Table 1 in qryAuthor
Table 2 in qryAuthor
qryAuthor
the top-level QueryDef
So, you might think you have only four tables involved (because there are only four base tables), but you'll actually be using 7 table handles in order to use those 4 base tables.
If you then open a recordset on that saved QueryDef (the one using 7 table handles), you've used up yet another table handle, for a total of 8.
Back in the Jet 3.5 days, the original table handles limitation was 1024, and I bumped up against it on a deadline when I replicated the data file after designing a working app. The problem was that some of the replication tables are open at all times (perhaps for each recordset?), and that used up just enough more table handles to put the app over the top.
In the original design of that app, I was opening a bunch of heavyweight forms with lots of subforms and combo boxes and listboxes, and at that time I used a lot of saved QueryDefs to preassemble standard recordsets that I'd use in many places (just like you would with views on any server database). What fixed the problem was:
loading the subforms only when they were displayed.
loading the rowsources of the combo boxes and listboxes only when they were onscreen.
getting rid of all the saved QueryDefs and using SQL statements that joined the raw tables, wherever possible.
This allowed me to deploy that app in the London office only one week later than planned. When Jet SP2 came out, it doubled the number of table handles, which is what we still have in Jet 4 (and, I presume, the ACE).
In terms of using Jet from Java via ODBC, the key points would be, I think:
use a single connection throughout your app, rather than opening and closing connections as needed (which leaves you in danger of failing to close them).
open recordsets only when you need them, and clean up and release their resources as soon as you are done (a sketch of this pattern follows).
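For instance, a minimal sketch of that pattern on the Java side, using try-with-resources so statements and result sets are always closed (the DSN name is a placeholder, and the table and column names are just borrowed from the examples above):
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class JetExample {
    public static void main(String[] args) throws Exception {
        Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");
        // One connection for the life of the app; "myAccessDsn" is a placeholder DSN.
        try (Connection con = DriverManager.getConnection("jdbc:odbc:myAccessDsn")) {
            // Open statements/result sets only when needed and let
            // try-with-resources close them immediately afterwards.
            try (PreparedStatement ps = con.prepareStatement(
                         "SELECT InventoryID FROM tblInventory WHERE AuthorID = ?")) {
                ps.setInt(1, 42);
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getInt(1));
                    }
                }
            }
        }
    }
}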
Now, it could be that there are memory leaks somewhere in the JDBC=>ODBC=>Jet chain where you think you are releasing resources and they aren't getting released at all. I don't have any advice specific to JDBC (as I don't use it -- I'm an Access programmer, after all), but in VBA we have to be careful about explicitly closing our objects and releasing their memory structures because VBA uses reference counting, and sometimes it doesn't know that a reference to an object has been released, so it doesn't release the memory for that object when it goes out of scope.
So, in VBA code, any time you do this:
Dim db As DAO.Database
Dim rs As DAO.Recordset
Set db = DBEngine(0).OpenDatabase("[database path/name]")
Set rs = db.OpenRecordset("[SQL String]")
...after you've done what you need to do, you have to finish with this:
rs.Close ' closes the recordset
Set rs = Nothing ' clears the pointer to the memory formerly used by it
db.Close
Set db = Nothing
...and that's even if your declared variables go out of scope immediately after that code (which should release all the memory used by them, but doesn't do so 100% reliably).
Now, I'm not saying this is what you do in Java, but I'm simply suggesting that if you're having problems and you think you're releasing all your resources, perhaps you need to determine if you're depending on garbage collection to do so and instead need to do so explicitly.
Forgive me if I've said anything stupid with regard to Java and JDBC -- I'm just reporting some of the problems that Access developers have had interacting with Jet (via DAO, not ODBC) that produce the same error message you're getting, in the hope that our experience and practice might suggest a solution for your particular programming environment.
Recently I tried UCanAccess, a pure Java JDBC driver for MS Access. Check out http://sourceforge.net/projects/ucanaccess/ - it works on Linux too ;-) Some time is needed for loading the required libraries. I have not tested it for more than read-only purposes yet.
Anyway, I experienced the problems described above with the sun.jdbc.odbc.JdbcOdbcDriver. After adding close() statements following the creation of statement objects (and calls to executeUpdate on them), as well as System.gc() calls, the error messages stopped ;-)
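For completeness, connecting through UCanAccess looks roughly like this (the database path and table name are placeholders; depending on the driver version you may also need to load net.ucanaccess.jdbc.UcanaccessDriver explicitly):
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class UcanaccessExample {
    public static void main(String[] args) throws Exception {
        // URL format is jdbc:ucanaccess://<path-to-mdb-or-accdb>; the path below is a placeholder.
        try (Connection con = DriverManager.getConnection(
                     "jdbc:ucanaccess://C:/data/mydb.mdb");
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery("SELECT * FROM tblInventory")) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}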
There's an outside chance that you're simply running out of free network connections. We had this problem on a busy system at work.
Something to note is that network connections, though closed, may not release the socket until garbage collection time. You could check this with NETSTAT /A /N /P TCP. If you have a lot of connections in the TIME_WAIT state, you could try forcing a garbage collection on connection close, or perhaps at regular intervals.
You should also close your Connection object.
Looking into an alternative to the JDBC-ODBC bridge driver would also be a good idea. I don't have any experience with an alternative myself, but this would be a good place to start:
Is there an alternative to using sun.jdbc.odbc.JdbcOdbcDriver?
I had the same problem but none of the above worked. I eventually located the issue.
I was using this to read a value from a form and put it back into a lookup list's record source:
LocationCode = [Forms]![Support].[LocationCode].Column(2)
ContactCode = Forms("Support")("TakenFrom")
Changed it to the below and it works.
LocationCode = Forms("Support")("LocationCode")
ContactCode = Forms("Support")("TakenFrom")
I know I should have written it better but I hope this helps someone else in the same situation.
Thanks
Greg

Forth Interpreter in Java

Here I found a Simple Forth Interpreter implemented in Java.
However, I don't understand what its significance would be if I wanted to use it.
What could be the advantage of the Forth interpreter?
If the final compiled code to be executed by the JVM is still bytecode, what would the Forth interpreter be doing?
Will it help in writing efficient/tight programs?
Will I be writing my code in Forth, and the interpreter will convert it to Java?
Your thoughts...
The author on the page describes it as implementing a subset of FORTH and being suitable for incorporation into other applications; presumably it is intended to provide a scripting capability for an application. It's fairly unlikely that the system works by spitting out Java or JVM byte codes; it almost certainly uses an interpreter written in Java.
Traditionally, a FORTH interpreter can be implemented in a very small memory footprint. I know someone who implemented one on a COSMAC where the core interpreter was 30 bytes long. The stack-oriented byte code was also very compact, since it did not need to specify the locations of operands: it just read them from the stack and deposited the result on top of the stack. This made it popular in embedded-systems circles, where the small overhead of the interpreter was more than offset by the compact representation of the program logic.
These days it's less important as machines tend to be much larger, although digitalross makes a good point about other situations where FORTH is still used.
Will it help in writing efficient/tight programs?
That's debatable.
I'm sure that FORTH folks will tell you that it is fast. But I doubt that the execution speed of a FORTH program running on a FORTH interpreter implemented in Java will match the speed of an equivalent program implemented directly in Java. For a start, the JIT compiler won't be able to do as good a job of optimizing the FORTH interpreter as it can for the plain Java version.
If by "tight" you mean "using less memory", I think that the difference will be marginal. Remember that in both the "FORTH in Java" and "plain Java" cases you have all of the memory overheads of a Java JVM. This is likely to swamp any comparison of FORTH code density versus equivalent compiled Java code density.
Not a bytecode translator
The answers to your questions are: "see below, sort of, and no".
It's just a program that takes some input and produces some output. The input is a Forth script. Except for some very major systems, it's rare to actually produce bytecode. JRuby, Clojure, Scala... big systems like that do produce bytecode.
Your Forth interpreter, however, is probably just that: a script interpreter that happens to be written in Java. The input it accepts is a program of sorts, so you end up with a nice double-indirect execution: Forth executing via a bytecode interpreter executing via the JVM running on the CPU.
Now, if you ran that on a CPU emulator, or wrote an interpreter in Forth, you could make it triple-indirect. (And in a sense it already is, because your Intel CPU translates most x86 opcodes into micro-ops before executing them. :-)
Anyway, the point is that a program written in a rather static language like Java might want to take some complex user input and execute it, or perhaps the author has things that are more easily done in Forth, and this lets him write in both Java and Forth.
You should think about all of this until you understand it.
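To make the "interpreter written in Java" idea concrete, here is a toy sketch of the core loop: read whitespace-separated words and execute them against a data stack. A real Forth adds a dictionary, defining words, a compile mode and so on; this only shows the shape of the thing.
import java.util.ArrayDeque;
import java.util.Deque;

public class TinyForth {
    private final Deque<Integer> stack = new ArrayDeque<>();

    public void eval(String source) {
        for (String word : source.trim().split("\\s+")) {
            switch (word) {
                case "+":   stack.push(stack.pop() + stack.pop()); break;
                case "*":   stack.push(stack.pop() * stack.pop()); break;
                case "dup": stack.push(stack.peek());              break;
                case ".":   System.out.println(stack.pop());       break;
                default:    stack.push(Integer.parseInt(word));    // assume a number literal
            }
        }
    }

    public static void main(String[] args) {
        new TinyForth().eval("2 3 + dup * .");   // prints 25
    }
}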
It does allow you to write efficient/tight programs, partly because the ability to define defining words (words that execute at compile time) effectively lets you build a Domain Specific Language (DSL). Forth also encourages refactoring (otherwise the stack manipulation simply becomes incomprehensible), and thus the code stays tight.
7th is IMHO closer to the original design of Forth than any other RPN language on the JVM. There is an editor with line numbering and a code beautifier, and there is a matching implementation of the "interpiler" and of the dictionary with vocabulary/current/context.
Admittedly, most of the hardware-dependent words (to store to or fetch from a specific memory address) are missing. Words for calculating memory addresses would be fairly pointless on the JVM anyway.
Some useful additions over the Forth syntax have been made in 7th:
the word help
7th is object oriented
there is Perl-like pattern matching
complex numbers and arrays are part of the language
a redirectable/no-op-able mechanism to optionally send output to the stack or to the console, and to prevent execution of file-I/O words, e.g. while testing
There are several Forth systems that implement a Forth interpreter in Java. There are two that I know of that actually compile the Forth source into a JVM class and allow you to execute the Forth code directly, without the need for an interpreter:
HolonJ by Wolf Wejgaard
Misty Beach Forth
The main advantage of a Forth-like interpreter is its interactivity: I enter a word and get a response immediately. If an intermediate step is needed first, to generate a file or something, that advantage is gone. The second thing is the POL (Problem Oriented Language) aspect: the language must be expandable seamlessly. So if a Forth-like language is unable to compile new words, it's worthless.
