java.lang.OutOfMemoryError: Java heap space at executeQuery - java

I know there are lots of similar questions and I believe I've read every one of them, but I was not able to resolve my issue with java.lang.OutOfMemoryError: Java heap space. Let me describe my problem.
I'm working on a simple Java program which queries DB and generates a CSV file.
Everything was fine and I was able to generate CSV files for queries with huge data and with around 320+ columns.
Sometime later I faced this issue when I queried a table with exactly 309 columns. The query is something like SELECT * FROM TABLE_A, and the table has no rows, so the query returns 0 records.
Ideally this should create an empty file, which is what happens for every other query I've tried. But for this one I get the error, and the console points to the line where executeQuery is executed. Even with data in the table I get the same error.
The CSV gets generated only when I explicitly increase the heap size to more than 3 GB, whereas the others work with the default heap size. (I have no idea why it needs so much heap space for a table containing 0 records.)
With the default heap size, this particular report is generated successfully only when I select fewer columns, around 100-150.
Why do I get the out-of-memory issue for this query alone? Does it have something to do with the table? To my knowledge the table is similar to all the other tables. Could it be because of the column sizes in this table? Most of the columns have a size of 255.
I've spent 2-3 days analyzing why this happens, with no luck.
Can someone help me out with this? I don't think this is similar to the other out-of-memory issues out there. It's quite strange.
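For what it's worth, one plausible explanation (an assumption about the driver, not something stated in the question): some older Oracle JDBC drivers pre-allocate row buffers at executeQuery time based on the declared column widths and the fetch size, not on the actual data, so 309 VARCHAR(255) columns can demand a huge allocation even for an empty table. A rough sketch of the arithmetic, using a hypothetical helper:

```java
// Back-of-the-envelope estimate of the buffer an older Oracle JDBC driver
// may pre-allocate at executeQuery time: declared column width (chars)
// x 2 bytes per char x fetch size x column count. This is a hypothetical
// helper for illustration, not a driver API.
public class DriverBufferEstimate {
    static long estimateBytes(int columns, int declaredCharWidth, int fetchSize) {
        return (long) columns * declaredCharWidth * 2L * fetchSize;
    }

    public static void main(String[] args) {
        // 309 VARCHAR2(255) columns with a fetch size of 5000 rows:
        long bytes = estimateBytes(309, 255, 5000);
        System.out.println(bytes / (1024 * 1024) + " MB"); // hundreds of MB before any row arrives
        // The usual mitigation is to lower the fetch size before executing:
        // stmt.setFetchSize(50);
        // ResultSet rs = stmt.executeQuery("SELECT * FROM TABLE_A");
    }
}
```

If this is the cause, it would also explain why the error appears even with zero rows: the allocation depends only on the table's declared schema and the fetch size.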

Related

H2 Database File Size is too big (7x larger than expected) [duplicate]

I have an H2 database that has ballooned to several Gigabytes in size, causing all sorts of operational problems. The database size didn't seem right. So I took one little slice of it, just one table, to try to figure out what's going on.
I brought this table into a test environment:
The columns add up to 80 bytes per row, per my calculations.
The table has 280,000 rows.
For this test, all indexes were removed.
The table should occupy approximately
80 bytes per row * 280,000 rows = 22.4 MB on disk.
However, it is physically taking up 157 MB.
I would expect to see some overhead here and there, but why is this database a full 7x larger than can be reasonably estimated?
UPDATE
Output from CALL DISK_SPACE_USED
There are always indices, etc. to be taken into account.
Can you try:
CALL DISK_SPACE_USED('my_table');
I would also recommend running SHUTDOWN DEFRAG and calculating the size again.
Setting MV_STORE=FALSE on database creation solves the problem. The whole database (not the test slice from the example) is now approximately 10x smaller.
Update
I had to revisit this topic recently and had to run a comparison to MySQL. On my test dataset, when MV_STORE=FALSE, the H2 database takes up 360MB of disk space, while the same data on MySQL 5.7 InnoDB with default-ish configurations takes up 432MB. YMMV.
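As a sketch of how the MV_STORE=FALSE setting mentioned above is applied (assuming H2 1.4.x, where the flag goes in the JDBC URL and must be present when the database file is first created; the path and credentials here are placeholders):

```java
public class H2UrlExample {
    // Builds an H2 JDBC URL that disables the MVStore engine.
    // The database path is a placeholder for illustration.
    static String url(String dbPath) {
        return "jdbc:h2:" + dbPath + ";MV_STORE=FALSE";
    }

    public static void main(String[] args) {
        String url = url("./data/testdb");
        System.out.println(url);
        // Opening the database (requires the H2 driver on the classpath):
        // try (Connection c = DriverManager.getConnection(url, "sa", "")) {
        //     c.createStatement().execute("SHUTDOWN DEFRAG"); // compact before re-measuring
        // }
    }
}
```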

Getting OOM error while fetching 200,000 records from Oracle DB in Java

My java app fetches about 200,000 records in its result set.
While trying to fetch the data from Oracle DB, the server throws java.lang.OutOfMemoryError: Java heap space
One way to solve this, IMO, is to fetch the records from the DB in smaller chunks (say 100,000 records per fetch, or even fewer). How can I do this (meaning, which API method should I use)?
Kindly suggest how to do this, or if you think there's a better way to overcome this memory problem, do suggest that. I do not want to use JVM params like -Xmx because I've read that that's not a good way to handle OutOfMemory errors.
If you are using an Oracle DB you may add AND ROWNUM < XXX to your SQL query. This will cause only XXX-1 rows to be fetched by the query.
Another way is to call the statement.setFetchSize(xxx) method before executing the statement.
Setting a larger JVM memory pool is a poor idea, because in the future there may be a larger data set which will cause an OOM again.
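A minimal sketch of the ROWNUM approach described above (the base query and chunk size are placeholders; the setFetchSize call is shown commented out because it needs a live connection):

```java
public class RownumChunk {
    // Wraps a base query so that at most chunkSize rows are returned,
    // using Oracle's ROWNUM pseudo-column. Query text is illustrative.
    static String chunkQuery(String baseQuery, int chunkSize) {
        return "SELECT * FROM (" + baseQuery + ") WHERE ROWNUM < " + (chunkSize + 1);
    }

    public static void main(String[] args) {
        System.out.println(chunkQuery("SELECT * FROM ORDERS", 100_000));
        // The setFetchSize alternative, which needs no query change:
        // Statement stmt = conn.createStatement();
        // stmt.setFetchSize(1000); // rows buffered per round trip, not the whole result
        // ResultSet rs = stmt.executeQuery("SELECT * FROM ORDERS");
    }
}
```

Note that setFetchSize does not limit the total rows returned; it only controls how many rows the driver buffers per round trip, which is usually what matters for heap usage.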

MySQL memory exhausted error

Today I was using a simple Java application to load a large amount of data into a MySQL DB, and got the error below:
java.sql.SQLException: Syntax error or access violation message from server: "memory exhausted near ''Q1',2.34652631E10,'000','000',5.0519608E9,5.8128358E9,'000','000',8.2756818E9,2' at line 5332"
I've tried modifying the my.ini file to increase some limits, but it doesn't work at all. The file isn't actually that large, just a 14 MB XLS file. I'm almost out of ideas; any suggestion is appreciated. Thanks for your help!
(Without the relevant parts of your code I can only guess, but here we go...)
From the error message, I will take a shot in the dark and guess that you are trying to load all of 300,000 rows in a single query, which is probably produced by concatenating a whole bunch of INSERT statements in a single string. A 14MB XLS file can become a lot bigger when translated into SQL statements and your server runs out of memory trying to parse the query.
To resolve this (in order of preference):
Convert your file to CSV and use mysqlimport.
Convert your file to CSV and use LOAD DATA INFILE.
Use multiple transactions of moderate size with only a few thousand INSERT statements each. This is the recommended option if you cannot simply import the file.
Use a single transaction; InnoDB MySQL databases should handle transactions in this size range.
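The "multiple transactions of moderate size" option might look roughly like this in JDBC (table and column names are made up for illustration, and the batch size is an assumption):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;

public class BatchInsert {
    static final int BATCH_SIZE = 5000; // a few thousand rows per transaction

    // How many batches a load of totalRows splits into (pure helper).
    static int batchCount(int totalRows, int batchSize) {
        return (totalRows + batchSize - 1) / batchSize;
    }

    // Sketch of batched, parameterized inserts instead of one giant
    // concatenated INSERT string. Table/column names are placeholders.
    static void insertAll(Connection conn, List<Object[]> rows) throws SQLException {
        conn.setAutoCommit(false);
        try (PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO quarterly_data (quarter, amount) VALUES (?, ?)")) {
            int inBatch = 0;
            for (Object[] row : rows) {
                ps.setObject(1, row[0]);
                ps.setObject(2, row[1]);
                ps.addBatch();
                if (++inBatch % BATCH_SIZE == 0) {
                    ps.executeBatch();
                    conn.commit(); // keep each transaction to a moderate size
                }
            }
            ps.executeBatch(); // flush the final partial batch
            conn.commit();
        }
    }

    public static void main(String[] args) {
        System.out.println(batchCount(300_000, BATCH_SIZE) + " batches");
    }
}
```

This keeps each statement small enough for the server to parse, which is the failure mode the error message suggests.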

Sql returning 3 million records and JVM outofmemory Exception

I am connecting to an Oracle DB through a Java program. The problem is that I am getting an OutOfMemory exception because the SQL returns 3 million records. I cannot increase the JVM heap size for some reason.
What is the best solution to solve this?
Is the only option to run the SQL with LIMIT?
If your program needs to return 3 million records at once, you're doing something wrong. What do you need to do that requires processing 3 million records at once?
You can either split the query into smaller ones using LIMIT, or rethink what you need to do to reduce the amount of data you need to process.
In my opinion it is pointless to have queries that return 3 million records. What would you do with them? There is no point presenting them to the user, and if you want to do some calculations it is better to run several queries that each return considerably fewer records.
Using LIMIT is one solution, but a better solution would be to restructure your database and application so that you can have "smarter" queries that do not return everything in one go. For example you could return records based on a date column. This way you could have the most recent ones.
Application scaling is always an issue. One solution here is to do whatever you are trying to do in Java as a stored procedure in Oracle PL/SQL. Let Oracle process the data and use its internal query planners to limit the amount of data flowing in and out, which can otherwise cause major latencies.
You can even write the stored procedure in Java.
A second solution is to make a limited query, process it from several Java nodes, and collate the results. Look up map-reduce.
If each record is around 1 kilobyte, that means 3 GB of data. Do you have that amount of memory available for your application?
Should be better if you explain the "real" problem, since OutOfMemory is not your actual problem.
Try this:
http://w3schools.com/sql/sql_where.asp
There could be three possible solutions:
1. If retrieving 3 million records at once is not necessary, use LIMIT.
2. Consider using a meaningful WHERE clause.
3. Export the database entries into TXT, CSV, or Excel format with the tool that Oracle provides and use that file.
Cheers :-)
Reconsider your WHERE clause and see if you can make it more restrictive, and/or use LIMIT.
Just for reference: in Oracle queries, the equivalent of LIMIT is ROWNUM.
E.g., ... WHERE ROWNUM <= 1000
If you get a response that large, take care to process the result set row by row so the full result does not need to be in memory. If you do that properly, you can process enormous data sets without problems.
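A sketch of the row-by-row approach (the SQL and fetch size are illustrative, and fetchSizeFor is a hypothetical helper for picking a fetch size from a memory budget):

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class StreamRows {
    // Picks a fetch size so that roughly memoryBudgetBytes of rows are
    // buffered at a time (pure helper, numbers are illustrative).
    static int fetchSizeFor(long memoryBudgetBytes, long bytesPerRow) {
        return (int) Math.max(1, memoryBudgetBytes / bytesPerRow);
    }

    // Aggregates a column one row at a time, so only fetchSize rows are
    // ever held by the driver instead of all 3 million.
    static long sumColumn(Connection conn, String sql) throws SQLException {
        long total = 0;
        try (Statement st = conn.createStatement()) {
            st.setFetchSize(fetchSizeFor(1_000_000, 1024)); // ~1 MB of 1 KB rows
            try (ResultSet rs = st.executeQuery(sql)) {
                while (rs.next()) {
                    total += rs.getLong(1); // process in place; never collect rows in a list
                }
            }
        }
        return total;
    }

    public static void main(String[] args) {
        // Usage (requires a live connection):
        // long sum = sumColumn(conn, "SELECT amount FROM orders");
        System.out.println("fetch size: " + fetchSizeFor(1_000_000, 1024));
    }
}
```

The key point is that nothing accumulates per row except the running aggregate, so heap usage stays flat regardless of the result size.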

Performance problem on Java DB Derby Blobs & Delete

I’ve been experiencing a performance problem with deleting blobs in derby, and was wondering if anyone could offer any advice.
This is primarily with 10.4.2.0 under windows and solaris, although I’ve also tested with the new 10.5.1.1 release candidate (as it has many lob changes), but this makes no significant difference.
The problem is that with a table containing many large blobs, deleting a single row can take a long time (often over a minute).
I’ve reproduced this with a small test that creates a table, inserts a few rows with blobs of differing sizes, then deletes them.
The table schema is simple, just:
create table blobtest( id integer generated BY DEFAULT as identity, b blob )
and I’ve then created 7 rows with the following blob sizes: 1024 bytes, 1 MB, 10 MB, 25 MB, 50 MB, 75 MB, 100 MB.
I’ve read the blobs back, to check they have been created properly and are the correct size.
They have then been deleted using the sql statement ( “delete from blobtest where id = X” ).
If I delete the rows in the order I created them, average timings to delete a single row are:
1024 bytes: 19.5 seconds
1 MB: 16 seconds
10 MB: 18 seconds
25 MB: 15 seconds
50 MB: 17 seconds
75 MB: 10 seconds
100 MB: 1.5 seconds
If I delete them in reverse order, the average timings to delete a single row are:
100 MB: 20 seconds
75 MB: 10 seconds
50 MB: 4 seconds
25 MB: 0.3 seconds
10 MB: 0.25 seconds
1 MB: 0.02 seconds
1024 bytes: 0.005 seconds
If I create seven small blobs, delete times are all instantaneous.
It thus appears that the delete time is related to the overall size of the rows in the table more than to the size of the blob being removed.
I’ve run the tests a few times, and the results seem reproducible.
So, does anyone have any explanation for the performance, and any suggestions on how to work around it or fix it? It does make using large blobs quite problematic in a production environment…
I have exactly the same issue you have.
I found that when I do a DELETE, Derby actually reads through the large segment file completely. I used Filemon.exe to observe how it runs.
My file size is 940 MB, and it takes 90 seconds to delete just a single row.
I believe that Derby stores the table data in a single file. Somehow a design/implementation bug causes it to read everything rather than use a proper index.
I do batch deletes to work around this problem.
I rewrote part of my program. It used "where id=?" in auto-commit mode. I rewrote it to use "where ID IN(?,.......?)" enclosed in a transaction.
The total time is now reduced to 1/1000 of what it was before.
I suggest adding a "mark as deleted" column, with a scheduled job that does the actual deletion in batches.
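The IN-list rewrite described in this answer might be sketched like this (the table name matches the blobtest example above; deleteSql is a hypothetical helper):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;

public class BatchDelete {
    // Builds "DELETE FROM blobtest WHERE ID IN (?, ?, ..., ?)" with n placeholders.
    static String deleteSql(int n) {
        StringBuilder sb = new StringBuilder("DELETE FROM blobtest WHERE ID IN (");
        for (int i = 0; i < n; i++) {
            sb.append(i == 0 ? "?" : ", ?");
        }
        return sb.append(")").toString();
    }

    // Deletes all ids in a single statement inside one transaction,
    // instead of one auto-committed DELETE per row.
    static void deleteBatch(Connection conn, List<Integer> ids) throws SQLException {
        conn.setAutoCommit(false);
        try (PreparedStatement ps = conn.prepareStatement(deleteSql(ids.size()))) {
            for (int i = 0; i < ids.size(); i++) {
                ps.setInt(i + 1, ids.get(i));
            }
            ps.executeUpdate();
            conn.commit();
        }
    }

    public static void main(String[] args) {
        System.out.println(deleteSql(3));
    }
}
```

If the single-scan-per-DELETE theory in this answer is right, this wins because the expensive scan is paid once per batch rather than once per row.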
As far as I can tell, Derby will only store BLOBs inline with the other database data, so you end up with the BLOB split up over a ton of separate DB page files. This BLOB storage mechanism is good for ACID, and good for smaller BLOBs (say, image thumbnails), but breaks down with larger objects. According to the Derby docs, turning autocommit off when manipulating BLOBs may also improve performance, but this will only go so far.
I strongly suggest you migrate to H2 or another DBMS if good performance on large BLOBs is important, and the BLOBs must stay within the DB. You can use the SQuirrel SQL client and its DBCopy plugin to directly migrate between DBMSes (you just need to point it to the Derby/JavaDB JDBC driver and the H2 driver). I'd be glad to help with this part, since I just did it myself, and haven't been happier.
Failing this, you can move the BLOBs out of the database and into the filesystem. To do this, you would replace the BLOB column in the database with a BLOB size (if desired) and location (a URI or platform-dependent file string). When creating a new blob, you create a corresponding file in the filesystem. The location could be based off of a given directory, with the primary key appended. For example, your DB is in "DBFolder/DBName" and your blobs go in "DBFolder/DBName/Blob" and have filename "BLOB_PRIMARYKEY.bin" or somesuch. To edit or read the BLOBs, you query the DB for the location, and then do read/write to the file directly. Then you log the new file size to the DB if it changed.
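The filesystem layout described above could be sketched as follows (directory and file names are illustrative, following the "BLOB_PRIMARYKEY.bin" scheme):

```java
import java.nio.file.Path;
import java.nio.file.Paths;

public class BlobLocation {
    // Maps a primary key to its blob file under the blob directory,
    // per the "BLOB_PRIMARYKEY.bin" naming scheme described above.
    static Path blobPath(Path blobDir, long primaryKey) {
        return blobDir.resolve("BLOB_" + primaryKey + ".bin");
    }

    public static void main(String[] args) {
        Path dir = Paths.get("DBFolder", "DBName", "Blob");
        System.out.println(blobPath(dir, 42));
        // Writing a new blob, then storing its location and size in the DB row:
        // Files.createDirectories(dir);
        // Files.write(blobPath(dir, 42), data);
        // UPDATE blobtest SET blob_size = ?, blob_location = ? WHERE id = 42
    }
}
```

The DB row then holds only the size and path, so deletes become a cheap row delete plus a file delete.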
I'm sure this isn't the answer you want, but for a production environment with throughput requirements I wouldn't use Java DB. MySQL is just as free and will handle your requirements a lot better. I think you are really just beating your head against a limitation of the solution you've chosen.
I generally only use Derby as a test case, and especially only when my entire DB can fit easily into memory. YMMV.
Have you tried increasing the page size of your database?
There's information about this and more in the Tuning Java DB manual which you may find useful.
