MS Access - Can't Open Any More Tables - java

At work we have to deal with several MS Access .mdb files, so we use the default JdbcOdbcBridge driver that comes with the Sun JVM and, for most cases, it works great.
The problem is that when we have to deal with some larger files, we repeatedly run into exceptions with the message "Can't open any more tables". How can we avoid that?
We already close all our instances of PreparedStatements and RecordSets, and even set their variables to null, but even so this exception keeps happening. What should we do? How can we avoid these nasty exceptions? Does someone here know how?
Is there any additional configuration to the ODBC drivers on Windows that we can change to avoid this problem?

"Can't open any more tables" is a better error message than the "Can't open any more databases," which is more commonly encountered in my experience. In fact, that latter message is almost always masking the former.
The Jet 4 database engine has a limit of 2048 table handles. It's not entirely clear to me whether this is simultaneous or cumulative within the life of a connection. I've always assumed it is cumulative, since opening fewer recordsets at a time in practice seems to make it possible to avoid the problem.
The issue is that "table handles" doesn't refer only to handles on base tables, but to something much broader.
Consider a saved QueryDef with this SQL:
SELECT tblInventory.* From tblInventory;
Running that QueryDef uses TWO table handles.
"What?" you might ask. "It only uses one table!" But Jet uses a table handle for the table and a table handle for the saved QueryDef.
Thus, if you have a QueryDef like this:
SELECT qryInventory.InventoryID, qryAuthor.AuthorName
FROM qryInventory INNER JOIN qryAuthor ON qryInventory.AuthorID = qryAuthor.AuthorID
...if each of your source queries has two tables in it, you're using these table handles, one for each:
Table 1 in qryInventory
Table 2 in qryInventory
qryInventory
Table 1 in qryAuthor
Table 2 in qryAuthor
qryAuthor
the top-level QueryDef
So, you might think you have only four tables involved (because there are only four base tables), but you'll actually be using 7 table handles in order to use those 4 base tables.
If you then open a recordset on the saved QueryDef that uses 7 table handles, you've used up yet another table handle, for a total of 8.
Back in the Jet 3.5 days, the original table handles limitation was 1024, and I bumped up against it on a deadline when I replicated the data file after designing a working app. The problem was that some of the replication tables are open at all times (perhaps for each recordset?), and that used up just enough more table handles to put the app over the top.
In the original design of that app, I was opening a bunch of heavyweight forms with lots of subforms and combo boxes and listboxes, and at that time I used a lot of saved QueryDefs to preassemble standard recordsets that I'd use in many places (just like you would with views on any server database). What fixed the problem was:
loading the subforms only when they were displayed.
loading the rowsources of the combo boxes and listboxes only when they were onscreen.
getting rid of all the saved QueryDefs and using SQL statements that joined the raw tables, wherever possible.
This allowed me to deploy that app in the London office only one week later than planned. When Jet SP2 came out, it doubled the number of table handles, which is what we still have in Jet 4 (and, I presume, the ACE).
In terms of using Jet from Java via ODBC, the key points would be, I think:
use a single connection throughout your app, rather than opening and closing them as needed (which leaves you in danger of failing to close them).
open recordsets only when you need them, and clean up and release their resources when you are done (a rough sketch of this follows).
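In JDBC terms, that second point might look roughly like the fragment below. This is only an illustration, not the asker's code: the query, method name, and column are placeholders, but the pattern of closing the ResultSet and PreparedStatement explicitly in a finally block while the single shared Connection stays open is the point.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

// Open the recordset only when needed; close it (and the statement) explicitly.
private static void readInventory(Connection sharedConnection) throws Exception {
    PreparedStatement ps = null;
    ResultSet rs = null;
    try {
        ps = sharedConnection.prepareStatement("SELECT InventoryID FROM tblInventory");
        rs = ps.executeQuery();
        while (rs.next()) {
            // ... work with the current row ...
        }
    } finally {
        // release in reverse order of creation, tolerating partial failures
        if (rs != null) try { rs.close(); } catch (Exception ignored) {}
        if (ps != null) try { ps.close(); } catch (Exception ignored) {}
        // the shared Connection itself stays open for the life of the app
    }
}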
Now, it could be that there are memory leaks somewhere in the JDBC=>ODBC=>Jet chain where you think you are releasing resources and they aren't getting released at all. I don't have any advice specific to JDBC (as I don't use it -- I'm an Access programmer, after all), but in VBA we have to be careful about explicitly closing our objects and releasing their memory structures because VBA uses reference counting, and sometimes it doesn't know that a reference to an object has been released, so it doesn't release the memory for that object when it goes out of scope.
So, in VBA code, any time you do this:
Dim db As DAO.Database
Dim rs As DAO.Recordset
Set db = DBEngine(0).OpenDatabase("[database path/name]")
Set rs = db.OpenRecordset("[SQL String]")
...after you've done what you need to do, you have to finish with this:
rs.Close ' closes the recordset
Set rs = Nothing ' clears the pointer to the memory formerly used by it
db.Close ' closes the database
Set db = Nothing ' clears the pointer to the memory formerly used by it
...and that's even if your declared variables go out of scope immediately after that code (which should release all the memory used by them, but doesn't do so 100% reliably).
Now, I'm not saying this is what you do in Java, but I'm simply suggesting that if you're having problems and you think you're releasing all your resources, perhaps you need to determine if you're depending on garbage collection to do so and instead need to do so explicitly.
Forgive me if I've said anything stupid in regard to Java and JDBC -- I'm just reporting some of the problems that Access developers have had in interacting with Jet (via DAO, not ODBC) that report the same error message that you're getting, in the hope that our experience and practice might suggest a solution for your particular programming environment.

Recently I tried UCanAccess - a pure Java JDBC driver for MS Access. Check out: http://sourceforge.net/projects/ucanaccess/ - works on Linux too ;-) Loading the required libraries takes some time. I have not tested it for more than read-only purposes yet.
Anyway, I experienced problems as described above with the sun.jdbc.odbc.JdbcOdbcDriver. After adding close() calls on the statement objects right after their executeUpdate calls, as well as some System.gc() calls, the error messages stopped ;-)
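Roughly, the pattern that worked for me looks like the fragment below (the method and variable names are just illustrative):

import java.sql.Connection;
import java.sql.Statement;

// Close each Statement immediately after its executeUpdate, then nudge the GC.
static void applyUpdate(Connection conn, String updateSql) throws Exception {
    Statement stmt = conn.createStatement();
    try {
        stmt.executeUpdate(updateSql);
    } finally {
        stmt.close();   // releases the underlying ODBC statement handle right away
    }
    System.gc();        // encourages the bridge to free native resources; not guaranteed
}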

There's an outside chance that you're simply running out of free network connections. We had this problem on a busy system at work.
Something to note is that network connections, though closed, may not release the socket until garbage collection time. You could check this with NETSTAT /A /N /P TCP. If you have a lot of connections in the TIME_WAIT state, you could try forcing a garbage collection on connection closes or perhaps at regular intervals.

You should also close your Connection object.
Looking into an alternative to the JDBC-ODBC bridge driver would also be a good idea. I don't have any experience with an alternative myself, but this would be a good place to start:
Is there an alternative to using sun.jdbc.odbc.JdbcOdbcDriver?

I had the same problem but none of the above worked. I eventually located the issue.
I was using this to read the value of a form to put back into a lookup list record source.
LocationCode = [Forms]![Support].[LocationCode].Column(2)
ContactCode = Forms("Support")("TakenFrom")
Changed it to the below and it works.
LocationCode = Forms("Support")("LocationCode")
ContactCode = Forms("Support")("TakenFrom")
I know I should have written it better but I hope this helps someone else in the same situation.
Thanks
Greg

Related

Resource considerations for a Java program using an SQL DB

I'm fairly new to programming, at least when it comes to anything substantial. I am about to start work on management software for my employer which draws its data from, and stores its data to, an SQL database. I will likely be using JDBC to interact with it.
To try and accurately describe the problem I am going to focus on a very small portion of the program. In the database, there is a table that stores Job records. There are a couple of thousand of them. I want to display all available Jobs (as a text reference from the table) in a scroll-able panel in the program with a search function.
So, my question is... Should I create Job objects from each record in one go and have the program work with the objects to display them, OR should I simply display strings taken directly from the records? The first method would mean that other details of each job are stored in advance so that when I open a record in the UI the load times should be minimal, however it also sounds like it would take a great deal of resources when it initially populates the panel and generates the objects. The second method would mean issuing a large quantity of queries to the database, but might avoid the initial resource overhead; still, I don't want to put too much strain on the SQL Server because other software in-house relies on it.
Really, I don't know anything about how I should be doing this. But that really is my question. Apologies if I am displaying my ignorance in this post, and thank you in advance for any help you can offer.
"A couple thousand" is a very small number for modern computers. If you have any sort of logic to perform on these records (they're not all modified solely via stored procedures), you're going to have a much easier time using an object-relational mapping (ORM) tool like Hibernate. Look into the JPA specification, which allows you to create Java classes that represent database objects and then simply annotate them to describe how they're stored in the database. Using an ORM like this system does have some overhead, but it's nearly always worthwhile, since computers are fast and programmers are expensive.
Note: This is a specific example of the rule that you should do things in the clearest and easiest-to-understand way unless you have a very specific reason not to, and in particular that you shouldn't optimize for speed unless you've measured your program's performance and have determined that a specific section of the code is causing problems. Use the abstractions that make the code easy to understand and come back later if you actually have to speed things up.
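To make the JPA suggestion above concrete, here is a minimal sketch of an annotated entity class; the table and column names are invented for illustration and would have to match your actual Job table.

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Table;

// Each instance maps to one row of the (hypothetical) JOB table.
@Entity
@Table(name = "JOB")
public class Job {

    @Id
    @Column(name = "JOB_ID")
    private Long id;

    @Column(name = "JOB_NAME")
    private String name;

    public Long getId() { return id; }

    public String getName() { return name; }
}

A query along the lines of entityManager.createQuery("SELECT j FROM Job j", Job.class).getResultList() would then hand you the couple of thousand Job objects in one go, which is exactly the "load them all and work with objects" approach from the question.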

how to resolve plenty of concurrent write operation in Oracle?

I am maintaining a lottery website with millions of users. Some active users (perhaps more than 30,000) will buy more than 1000 lottery tickets within 1 second.
Currently the logic uses SELECT ... FOR UPDATE to protect the account balance, but meanwhile the database server is overloaded and very slow to respond. We have to process these purchases in real time.
Has anyone met a similar scenario before?
First, you need to design a transactional system that satisfies your business rules. For the moment, forget about disk and memory, and what goes where. Try to design a system that is as lightweight as possible, that does the minimum required amount of locking, that satisfies your business rules.
Now, run the system, what happens? If performance is acceptable, congratulations, you're done.
If performance is not acceptable, avoid the temptation to guess at the problem, and start making adjustments. You need to profile the system. You need to understand where the most time is being spent, so that you know what areas to focus your tuning efforts on. The easiest way to do this, is to trace it, using SQL_TRACE. You've not made any mention of Oracle edition, version, or platform. So, I'll assume you're at least on some version of 10gR2. So, use DBMS_MONITOR to start/end traces. Now, scoping is important here. What I mean is, it's critically important that you start the trace, run the code that you want to profile and then immediately shut off the trace. This way, you trace only what you're interested in, and the profile won't contain any extraneous information. Once you have the trace file, you need to process it. There are several tools. The most common is TkProf, which is provided by Oracle, but really doesn't do a very good job. The best free profiler that I'm aware of, is OraSRP. Download a copy of OraSRP, and check your results. The data in the report should point you in the right direction.
Once you've done all that, if you still have questions, ask a new question here, and I'm sure we can help you interpret the output of OraSRP, to help you understand where your bottlenecks are.
Hope that helps.
Personally, I would lock/update the accounts in memory and update the database as a background task. Using this approach you can easily support thousands of updates and accounts.
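As a very rough sketch of what "lock/update the accounts in memory" could look like (all names are invented, and crash durability is deliberately ignored here, which is the trade-off of this approach):

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// In-memory balances keyed by account id, debited atomically without touching the DB.
public class AccountCache {

    private final ConcurrentHashMap<Long, AtomicLong> balances = new ConcurrentHashMap<>();

    /** Returns false if the account has insufficient funds. */
    public boolean tryDebit(long accountId, long amount) {
        AtomicLong balance = balances.computeIfAbsent(accountId, id -> new AtomicLong(loadBalance(id)));
        while (true) {
            long current = balance.get();
            if (current < amount) {
                return false;
            }
            if (balance.compareAndSet(current, current - amount)) {
                return true; // a background task later writes the new balance back to the DB
            }
        }
    }

    private long loadBalance(long accountId) {
        return 0L; // placeholder: read the starting balance from the database once
    }
}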
A. Speed up things without modifying the code:
1 - You can keep the table entirely in memory (that is, in the SGA - it is still on disk as well):
alter table t storage ( buffer_pool keep )
(discuss with your dba before to do this)
2 - If the table is too big and you update the same rows again and again, it is probably sufficient to use the cache attribute:
alter table t cache
This command gives your table's blocks the best priority in the LRU list when they are used, so there is less chance of them being aged out of the SGA.
Here is a discussion about the differences: Ask Tom
3 - Another, more advanced solution that needs more analysis and resources is TimesTen.
B. Speed up your database operations:
Identify the top queries and:
create indexes where you update or select only one row or a small set of rows.
partition large tables scanned for only a segment of data.
Have you identified a top query?

Best way to store small, alternating, public data that updates every couple of hours?

The essence of my problem is that there are too many solutions, and I would like to find which one wins out in pros and cons before I build an infrastructure around it.
(Simplified for the purpose of this forum) This is an auction site where five auctions are stored in a rank #1-5, #1 being the currently featured auction. The other four are simply "on deck." After either a couple hours or the completion of that auction, #2-5 move up to #1-4 and a new one is chosen to be #5
I'm using a dedicated server and I've been considering just storing the data in the servlet or maybe adding a column in the database as a boolean for each auction...like "isFeatured = 1"
Suffice it to say the data is read about 5 times+ more often than it is written, which is why I'm leaning towards good old SQL.
When you can retrieve the relevant auctions from the DB with a simple query using ORDER BY and TOP or something similar, then try that first. If no performance issues occur, then KISS and you're done.
Otherwise, since these 5 auctions are valid for a while, cache them in memory. For example, have a singleton holding these auctions and provide methods for updating them; see the sketch below. Maybe you want to use a caching lib instead. Update the top 5 whenever necessary, but serve them directly out of memory without hitting the DB or anything similarly expensive.
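A minimal sketch of such a singleton (the auction representation and method names are placeholders):

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Holds the current top-5 auctions in memory; reads never touch the DB.
public final class FeaturedAuctions {

    private static final FeaturedAuctions INSTANCE = new FeaturedAuctions();

    private volatile List<String> topFive = Collections.emptyList(); // e.g. auction ids

    private FeaturedAuctions() { }

    public static FeaturedAuctions getInstance() {
        return INSTANCE;
    }

    /** Served straight from memory on every page view. */
    public List<String> getTopFive() {
        return topFive;
    }

    /** Called every couple of hours, or when the featured auction completes. */
    public void refresh(List<String> freshFromDb) {
        topFive = Collections.unmodifiableList(new ArrayList<>(freshFromDb));
    }
}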
What kind of scale are you looking for? How many application servers need access to the data?
I think you're probably making this more complicated than it is. Just use a database, take a hit of ACID, and move onto whatever else you need to work on. :P
Have you taken a look at SQLite? It allows for "good old SQL" without all of the hassles of setting up a separate database server. As long as the data isn't too huge (to be fair, I haven't tested the size limits, but I've skimmed blog entries mentioning the use of SQLite to process files of several dozen MB in size quickly and with no problems), you should be fine.
It isn't a perfect solution for all needs (frankly, I sometimes find the dynamic typing to be a pain), but since it relies on locally stored files, reads will be much faster than firing up a network connection to talk to a more "traditional" RDBMS.
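If you go the SQLite route from Java, here is a small sketch assuming the xerial sqlite-jdbc driver is on the classpath; the file name, table, and columns are invented for illustration.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Reads the five featured auctions straight from a local SQLite file.
public class TopAuctionsReader {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:sqlite:auctions.db");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                     "SELECT title FROM auctions ORDER BY rank LIMIT 5")) {
            while (rs.next()) {
                System.out.println(rs.getString("title"));
            }
        }
    }
}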

How to reduce the number of file writes when there are multiple threads?

here's the situation.
In a Java web app I was assigned to maintain, I've been asked to improve the general response time for the stress tests during QA. This web app doesn't use a database, since it was supposed to be light and simple. (And I can't change that decision.)
To persist configuration, I've found that every time you make a change to it, a general object containing lists of config objects is serialized to a file.
Using JMeter I've found that in the given test case, there are 2 requests taking up most of the time. Both these requests add or change some configuration objects. Since access to the file must be synchronized, when many users are changing config the file must be fully written several times in a few seconds, and requests are left waiting for the file writing to happen.
I think all these serializations are unnecessary: we are rewriting most of the objects again and again, and the changes in every request affect one single object, yet the file is written as a whole every time.
So, is there a way to reduce the number of real file writes but still guarantee that all changes are eventually serialized?
Any suggestions appreciated
One option is to make changes in memory and keep one thread in the background, running at given intervals and flushing the changes to disk. Keep in mind that in the case of a crash you'll lose data that wasn't flushed.
The background thread could be scheduled with a ScheduledExecutorService.
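A small sketch of that idea (the interval, names, and the serialization call are placeholders): request threads just mark the in-memory config as dirty, and a single scheduled thread writes the file at most once per interval.

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

// One background thread flushes the config to disk instead of every request doing it.
public class ConfigFlusher {

    private final AtomicBoolean dirty = new AtomicBoolean(false);
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

    public void start() {
        scheduler.scheduleWithFixedDelay(this::flushIfDirty, 5, 5, TimeUnit.SECONDS);
    }

    /** Called by request threads after they change the in-memory config. */
    public void markDirty() {
        dirty.set(true);
    }

    private void flushIfDirty() {
        if (dirty.compareAndSet(true, false)) {
            writeConfigToFile(); // the existing serialization, now at most once per interval
        }
    }

    private void writeConfigToFile() {
        // placeholder for the existing code that serializes the config object graph
    }

    public void stop() {
        flushIfDirty();      // final flush so a clean shutdown loses nothing
        scheduler.shutdown();
    }
}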
IMO, it would be a better idea to use a DB. Can't you use an embedded DB like Java DB, H2 or HSQLDB? These databases support concurrent access and can also guarantee the consistency of data in case of a crash.
If you absolutely cannot use a database, the obvious solution is to break your single file into multiple files, one file for each config object. It would speed up the serialization and output process as well as reduce lock contention (requests that change different config objects may write their files simultaneously, though it may become IO-bound).
One way is to do what Lucene does and not actually overwrite the old file at all, but to write a new file that only contains the "updates". This relies on your updates being associative, but that is usually the case anyway.
The idea is that if your old file contains "8" and you have 3 updates you write "3" to the new file, and the new state is "11", next you write "-2" and you now have "9". Periodically you can aggregate the old and the updates. Any physical file you write is never updated, but may be deleted once it is no longer used.
To make this idea a bit more relevant consider if the numbers above are records of some kind. "3" could translate to "Add three new records" and "-2" to "Delete these two records".
Lucene is an example of a project that uses this style of additive update strategy very successfully.
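A hedged sketch of that append-only style, with invented file names and record format: each change is appended as a small update record, and only a periodic compaction rewrites the full state.

import java.io.BufferedWriter;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

// Appends deltas such as "ADD jobId=42" or "DELETE jobId=17" instead of rewriting everything.
public class DeltaLog {

    private final Path logFile = Paths.get("config.updates.log");

    /** Append one update record; the base file is never touched here. */
    public synchronized void append(String updateRecord) throws IOException {
        try (BufferedWriter w = Files.newBufferedWriter(
                logFile, StandardOpenOption.CREATE, StandardOpenOption.APPEND)) {
            w.write(updateRecord);
            w.newLine();
        }
    }

    /** Run periodically: fold the log into a new base file, then delete the old files. */
    public synchronized void compact() throws IOException {
        // read the current base state, replay every record in logFile,
        // write a fresh base file, and finally delete logFile
    }
}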

Java code to import CSV into Access

I posted the code below to the Sun developers forum since I thought it was erroring (the true error was before this code was even hit). One of the responses I got said it would not work and to throw it away. But it is actually working. It might not be the best code (I am new to Java) but is there something inherently "wrong" with it?
=============
CODE:
private static void ImportFromCsvToAccessTable(String mdbFilePath, String accessTableName,
        String csvDirPath, String csvFileName) throws ClassNotFoundException, SQLException {

    Connection msConn = getDestinationConnection(mdbFilePath);
    try {
        String strSQL = "SELECT * INTO " + accessTableName + " FROM [Text;HDR=YES;DATABASE="
                + csvDirPath + ";].[" + csvFileName + "]";
        PreparedStatement selectPrepSt = msConn.prepareStatement(strSQL);
        boolean result = selectPrepSt.execute();
        System.out.println("result = " + result);
    } catch (Exception e) {
        System.out.println(e);
    } finally {
        msConn.close();
    }
}
The literal answer is no - there is never anything "inherently wrong" with code, it's a matter of whether it meets the requirements - which may or may not include being maintainable, secure, robust or fast.
The code you are running is actually a JET query purely within Access - the Java code is doing nothing except telling Access to run the query.
On the one hand, if it ain't broke don't fix it. On the other hand, there's a good chance it will break in the near future so you could try fixing it in advance.
The two likely reasons it might break are:
SQL injection risk. Depending on where csvDirPath and csvFileName come from (e.g. csvFileName might come from the name of the file uploaded by a user?), and on how clever the Access JDBC driver is, you could be open to someone breaking or deleting your data by inserting a semicolon (or some brackets to make a subquery) and some additional SQL commands into the query.
You are relying on the columns of the CSV file being compatible with the columns of the Access table. If you have unchecked CSV being uploaded, or if the CSV generator has a particular way of handling nulls, or if you one day get an unusual date or number format, you may get an error on inserting into the Access table.
Having said all that, we are all about pragmatism here. If the above code is from a utility class which you are going to use by hand a few times a week/month/year/ever, then it isn't really a problem.
If it is a class which forms part of a web application, then the 'official' Java way to do it would be to read records out of the CSV file (either using a CSV parser or a CSV/text JDBC driver), get the columns out of the recordset, do some validation or sanity checking on them, and then use a new PreparedStatement to insert them into the Access database. Much more trouble but much more robust.
You can probably find a combination of tools (e.g. object-relational layers or other data access tools) which will do a lot of that for you, but setting up the tools is going to be as much hassle as writing the code. Then again, you'll learn a lot from either one.
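For what the "read the CSV yourself and insert with a PreparedStatement" approach could look like, here is a hedged sketch; the table, column names, and the naive split on commas are assumptions (a real CSV parser would handle quoting and embedded commas):

import java.io.BufferedReader;
import java.io.FileReader;
import java.sql.Connection;
import java.sql.PreparedStatement;

// File values are bound as parameters, so they never become part of the SQL text.
private static void importCsv(Connection msConn, String csvPath) throws Exception {
    String insertSql = "INSERT INTO MyAccessTable (ColumnA, ColumnB) VALUES (?, ?)";
    BufferedReader reader = new BufferedReader(new FileReader(csvPath));
    PreparedStatement insert = msConn.prepareStatement(insertSql);
    try {
        String line = reader.readLine(); // skip the header row (the HDR=YES equivalent)
        while ((line = reader.readLine()) != null) {
            String[] fields = line.split(",", -1);
            // validate / sanity-check the fields here before touching the database
            insert.setString(1, fields[0]);
            insert.setString(2, fields[1]);
            insert.executeUpdate();
        }
    } finally {
        insert.close();
        reader.close();
    }
}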
One word of warning - jdbc -> Access queries (which bridge using ODBC) do not work on 64-bit systems, as there are no 64-bit Access database drivers (the driver is included in 32-bit copies of Windows and can only be accessed by 32-bit processes; you can run "odbcad32" or look at the ODBC control panel to see that the driver is present).
While I don't see the connection string in your code snippet, I am not aware of any noncommercial Access JDBC drivers for Java, only jdbc->odbc bridging and relying on Windows to have the Access (*.mdb) driver. Microsoft no longer supports this driver and has no plans to port it to 64-bit, so infrastructure-wise it is something to think about.
#david.w.fenton.myopenid.com: "Can you provide a citation about MS's plans to never introduce 64-bit ODBC drivers for Jet?"
David, I found a post on Microsoft's Connect Feedback about that.
http://connect.microsoft.com/SQLServer/feedback/ViewFeedback.aspx?FeedbackID=125117
"At the moment there are no plans to ship a 64-bit version of JET driver by Office team. We may considere alternate options and will update you when we have a concrete plan."
Thanks,
SSIS team.
Posted by Microsoft on 10/3/2007 at 9:47 PM
There's been no update from Microsoft in that feedback thread.
Question to Joshua McKinnon:
Can you provide a citation about MS's plans to never introduce 64-bit ODBC drivers for Jet? This sounds reasonable, so I'm not doubting you at all, I would just like to know if you have a source for it that you can point to.
Surely MS is providing access to Jet on 64-bit systems through OLEDB, though, right? That doesn't help with JDBC, but certainly provides a method to use Jet data (they have to provide something, since Jet 4 is part of the OS, as it is used as the data store for Active Directory, and has been used thus since Windows 2000).
