Memory Leak with Pentaho Kettle Looping? - java

I have an ETL requirement like:
I need to fetch around 20000 records from a table and process each record separately. (Processing each record involves a couple of steps, like creating a table for that record and inserting some data into it.) For a prototype I implemented it with two jobs (with corresponding transformations). Instead of a table, I just created a simple empty file per record. But even this simple case doesn't seem to run smoothly. (When I do create a table for each record, Kettle exits after 5000 records.)
Flow
When I run this, Kettle slows down and then hangs after 2000-3000 files; processing does eventually complete after a long time, but Kettle appears to stall at some point. Is my design approach right? When I replace the write-to-file with the actual requirement, creating a new table (through an SQL script step) for each id and inserting data into it, Kettle exits after 5000 records. What do I need to do to make this flow work? Increase the Java memory (Xmx is already at 2 GB)? Is there any other configuration I can change, or is there another way? Extra time isn't a constraint, but the flow needs to work.
My initial guess was that, since we are not storing any data, the prototype at least should run smoothly. I am using Kettle 3.2.

I seem to remember this is a known issue/restriction, which is why job looping is deprecated these days.
Are you able to rebuild the job using the transformation and/or job executor steps? You can execute any number of rows via those steps.
These steps have their own issues (namely, you have to handle errors explicitly), but it's worth a try just to see if you can achieve what you want. It's a slightly different mindset, but a nicer way to build loops than the job approach.

Related

How to efficiently export/import database data with JDBC

I have a Java application that can use a SQL database from any vendor. Right now we have tested Vertica and PostgreSQL. I want to export all the data from one table in the DB and import it later into a different instance of the application. The DB is pretty big, so there are many rows in there. The export and import process has to be done from inside the Java code.
What we've tried so far is:
Export: we read the whole table (select * from) through JDBC and then dump it to an SQL file with all the INSERTS needed.
Import: The file containing those thousands of INSERTS is executed in the target database through JDBC.
This is not an efficient process. Firstly, the select * from part is giving us problems because of its size, and secondly, executing a lot of inserts one after another gives us problems in Vertica (https://forum.vertica.com/discussion/235201/vjdbc-5065-error-too-many-ros-containers-exist-for-the-following-projections)
What would be a more efficient way of doing this? Are there any tools that can help with the process or there is no "elegant" solution?
Why not do the export/import in a single step, with batching (for performance) and chunking (to avoid errors and provide a checkpoint to restart from after a failure)?
In most cases, databases support INSERT queries with many values, e.g.:
INSERT INTO table_a (col_a, col_b, ...) VALUES
(val_a, val_b, ...),
(val_a, val_b, ...),
(val_a, val_b, ...),
...
The number of rows you generate into a single such INSERT statement is then your chunk size, which might need tuning for the specific target database (big enough to speed things up, but small enough not to exceed any database limit and cause failures).
As already proposed, each of these chunks should then be executed in a transaction, and your application should remember which chunk it successfully executed last, so that if an error occurs it can continue from there on the next run.
For the chunks themselves, you really should use LIMIT and OFFSET (together with a stable ORDER BY, so repeated chunks return the same rows).
This way, you can repeat any chunk at any time, each chunk by itself is atomic, and it should perform much better than single-row statements.
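A minimal JDBC sketch of this chunked copy, under some assumptions: the table and columns (table_a with id and payload) are made up, LIMIT/OFFSET syntax varies a little per database, and checkpoint persistence is only indicated by a comment.
import java.sql.*;
import java.util.ArrayList;
import java.util.List;

public class ChunkedCopy {
    static final int CHUNK_SIZE = 1000; // tune for the target database

    // Copies one chunk starting at `offset`; returns the number of rows copied.
    static int copyChunk(Connection src, Connection dst, long offset) throws SQLException {
        List<Object[]> rows = new ArrayList<>();
        // Stable ORDER BY so that LIMIT/OFFSET chunks are repeatable
        try (PreparedStatement ps = src.prepareStatement(
                "SELECT id, payload FROM table_a ORDER BY id LIMIT ? OFFSET ?")) {
            ps.setInt(1, CHUNK_SIZE);
            ps.setLong(2, offset);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    rows.add(new Object[] { rs.getLong("id"), rs.getString("payload") });
                }
            }
        }
        if (rows.isEmpty()) return 0;

        // Build one multi-row INSERT: the whole chunk becomes a single atomic statement
        StringBuilder sql = new StringBuilder("INSERT INTO table_a (id, payload) VALUES ");
        for (int i = 0; i < rows.size(); i++) {
            sql.append(i == 0 ? "(?, ?)" : ", (?, ?)");
        }
        dst.setAutoCommit(false);
        try (PreparedStatement ps = dst.prepareStatement(sql.toString())) {
            int p = 1;
            for (Object[] row : rows) {
                ps.setObject(p++, row[0]);
                ps.setObject(p++, row[1]);
            }
            ps.executeUpdate();
            dst.commit(); // record `offset + rows.size()` somewhere durable as the checkpoint
        } catch (SQLException e) {
            dst.rollback();
            throw e;
        }
        return rows.size();
    }
}
The caller just loops until copyChunk returns less than CHUNK_SIZE, resuming from the last persisted offset after a failure.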
I can only speak about PostgreSQL.
The size of the SELECT is not a problem if you use server-side cursors by calling setFetchSize with a value greater than 0 (perhaps 10000) on the statement.
The INSERTS will perform well if
you run them all in a single transaction
you use a PreparedStatement for the INSERT
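Roughly, a sketch of both points combined (table and column names are placeholders; note that PostgreSQL only uses the server-side cursor when autocommit is off):
import java.sql.*;

public class PgCopy {
    // Streams table_a from src to dst; table/column names are illustrative.
    static void copy(Connection src, Connection dst) throws SQLException {
        src.setAutoCommit(false);   // required so PostgreSQL actually uses a server-side cursor
        dst.setAutoCommit(false);   // run all INSERTs in a single transaction

        try (Statement sel = src.createStatement();
             PreparedStatement ins = dst.prepareStatement(
                     "INSERT INTO table_a (id, payload) VALUES (?, ?)")) {

            sel.setFetchSize(10000);        // fetch 10000 rows at a time instead of the whole table
            try (ResultSet rs = sel.executeQuery("SELECT id, payload FROM table_a")) {
                int pending = 0;
                while (rs.next()) {
                    ins.setLong(1, rs.getLong(1));
                    ins.setString(2, rs.getString(2));
                    ins.addBatch();
                    if (++pending % 1000 == 0) {
                        ins.executeBatch(); // flush periodically to keep the JDBC batch small
                    }
                }
                ins.executeBatch();
            }
            dst.commit();
        }
    }
}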
Each insert into Vertica goes into WOS (memory), and periodically data from WOS gets moved to ROS (disk) into a single container. You can only have 1024 ROS containers per projection per node. Doing many thousands of INSERTs at a time is never a good idea for Vertica. The best way to do this is to copy all that data into a file and bulk load the file into Vertica using the COPY command.
This will create a single ROS container for the contents of the file. Depending on how many rows you want to copy it will be many times (sometimes even hundreds of times) faster.
https://www.vertica.com/docs/9.2.x/HTML/Content/Authoring/SQLReferenceManual/Statements/COPY/COPY.htm
https://www.vertica.com/docs/9.2.x/HTML/Content/Authoring/ConnectingToVertica/ClientJDBC/UsingCOPYLOCALWithJDBC.htm
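A rough sketch of what the JDBC side of that could look like; the file path, delimiter and COPY options are assumptions, so check the linked docs for the exact syntax your version supports:
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class VerticaBulkLoad {
    // Loads a previously dumped CSV with one COPY statement instead of thousands of INSERTs,
    // so the data lands in a single ROS container.
    static void bulkLoad(Connection vertica) throws SQLException {
        try (Statement st = vertica.createStatement()) {
            int rows = st.executeUpdate(
                "COPY table_a FROM LOCAL '/tmp/table_a.csv' DELIMITER ',' ABORT ON ERROR");
            System.out.println("Loaded " + rows + " rows");
        }
    }
}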

How to Persist Millions of records using Spring-Data JPA

I have a CSV file with more than 1 million records. I want to do some processing on these records and persist all of them in the DB.
I have tried a few options:
Save all entities in one go
jpaRepository.save(entities);
This method takes forever and never completes. It works fine for a smaller number of records.
Save all Entities one by one
entities.forEach(jpaRepository::save);
This method completes, but it takes a very long time and memory usage goes through the roof.
Here is what I would recommend, based just on your question -
Create a service that reads the file, say FileReaderService
Create a service that writes a set number of records, say 1000 at a time; let us call it StorageService. Inject this into FileReaderService.
Put the @Transactional annotation on the save_N_records method.
Repeatedly call StorageService.save_N_records from FileReaderService. Each time you call it, write a log entry so you can monitor progress.
If it is at all possible, I would disable indexing on the table, so inserts are faster, then turn it back on when I am done inserting. Of course, this is never possible on an on-line system, only on off-line reporting systems. Hope this helps!
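A rough sketch of that StorageService, assuming Spring Data JPA; MyEntity and MyEntityRepository are made-up placeholders for your own entity and repository (older Spring Data versions use save(Iterable) instead of saveAll):
import java.util.List;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class StorageService {

    private final MyEntityRepository repository;   // placeholder Spring Data repository

    @PersistenceContext
    private EntityManager entityManager;

    public StorageService(MyEntityRepository repository) {
        this.repository = repository;
    }

    // Persists one chunk (e.g. 1000 records) in its own transaction.
    @Transactional
    public void saveNRecords(List<MyEntity> chunk) {
        repository.saveAll(chunk);
        entityManager.flush();   // push the INSERTs to the database now
        entityManager.clear();   // detach the entities so the persistence context stays small
    }
}
Setting hibernate.jdbc.batch_size to roughly the same chunk size usually helps too, since Hibernate can then send the inserts as JDBC batches.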

Are batchlets the correct way of implementing ETL steps in JavaEE Batch?

I am studying the Java EE Batch API (JSR-352) in order to test the feasibility of replacing our current ETL tool with our own solution built on this technology.
My goal is to build a job in which I:
get some (dummy) data from a datasource in step1,
some other data from other data-source in step2 and
merge them in step3.
I would like to process each item and not write to a file, but send it to the next step. And also store the information for further use. I could do that using batchlets and jobContext.setTransientUserData().
I think I am not getting the concepts right: as far as I understood, JSR-352 is meant for this kind of ETL task, but it has 2 types of steps: chunks and batchlets. Chunks are "3-phase steps", in which one reads, processes and writes the data. Batchlets are tasks that are not performed on each item of the data, but once (such as calculating totals, sending emails and so on).
My problem is that my solution is not correct if I consider the definition of batchlets.
How could one implement this kind of job using the Java EE Batch API?
I think you'd better use chunks rather than batchlets to implement ETLs. Typical chunk processing with a data source looks something like the following:
ItemReader#open(): open a cursor (create the Connection, Statement and ResultSet) and save them as instance variables of the ItemReader.
ItemReader#readItem(): create and return an object that contains the data of one row, using the ResultSet.
ItemReader#close(): close the JDBC resources.
ItemProcessor#processItem(): do the calculation, then create and return an object that contains the result.
ItemWriter#writeItems(): save the calculated data to the database: open a Connection and Statement, invoke executeUpdate(), and close them.
As for your situation, I think you have to choose one data source as the primary one and open a cursor for it in ItemReader#open(), then fetch the other one in ItemProcessor#processItem() for each item.
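A bare-bones sketch of such a reader with the javax.batch API; the datasource JNDI name, query and row representation are placeholders:
import java.io.Serializable;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import javax.annotation.Resource;
import javax.batch.api.chunk.AbstractItemReader;
import javax.inject.Named;
import javax.sql.DataSource;

@Named
public class PrimaryItemReader extends AbstractItemReader {

    @Resource(lookup = "jdbc/primaryDS")   // placeholder datasource name
    private DataSource dataSource;

    private Connection connection;
    private PreparedStatement statement;
    private ResultSet resultSet;

    @Override
    public void open(Serializable checkpoint) throws Exception {
        // open the cursor once per step and keep it as instance state
        connection = dataSource.getConnection();
        statement = connection.prepareStatement("SELECT id, payload FROM primary_table");
        resultSet = statement.executeQuery();
    }

    @Override
    public Object readItem() throws Exception {
        if (!resultSet.next()) {
            return null;    // null tells the batch runtime the input is exhausted
        }
        return new Object[] { resultSet.getLong("id"), resultSet.getString("payload") };
    }

    @Override
    public void close() throws Exception {
        resultSet.close();
        statement.close();
        connection.close();
    }
}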
I also recommend reading these useful examples of chunk processing:
http://www.radcortez.com/java-ee-7-batch-processing-and-world-of-warcraft-part-1/
http://www.radcortez.com/java-ee-7-batch-processing-and-world-of-warcraft-part-2/
My blog entries about JBatch and chunk processing:
http://www.nailedtothex.org/roller/kyle/category/JBatch

Disallow DML operations during package execution

I need a little help here because I'm struggling a bit to find the best solution for my problem. I googled and didn't find any enlightening answer.
So, first of all, I'll explain the idea.
1 - I have a Java application that inserts data into my database (Oracle DB) using JDBC.
2 - My database is logically split in two: one part contains tables with exported information (from another application) and another part contains tables that represent some reports.
3 - My Java app only inserts information into the export tables.
4 - I've developed some packages that transform data from the export tables to the report tables (generating some reports).
5 - These packages are scheduled to run 2 or 3 times a day.
So, my problem is that when the transformation task starts, I want to prevent new DML operations. Then, when the transformation stops, all new data that was supposed to be inserted/updated during that time shall be inserted into the export tables again.
I thought of two approaches:
1 - During the transformation, divert the DML operations to a temporary table.
2 - Lock the tables, but I don't have much experience with this. My main question is: can I force DML operations in JDBC to wait until the lock is released? I haven't tried it yet, but I've read here and there that after some time a lock wait timeout exception (or something like that) is thrown.
Can anyone more experienced give me some advice?
If anything about what I'm trying to do is unclear, just ask.
Do not try locking tables as a solution. Sadly, that is common but rarely necessary. Just a few ideas:
At the start of the transformation, select * from the export table into a global temporary table, then execute your transformation packages on that temp table.
Create a materialized view as select * from the export table. Investigate the refresh-on-commit options, but it seems you need to refresh it just before your transformation.
Analyze your exported data. As in many other cases, most of the data will probably never change once imported; only new data needs to be analyzed. To aid in processing, add a timestamp column called date_last_modified and a trigger on the table: when a row is updated, update date_last_modified. This lets you work with the smallest possible data set of "only changed records" (see the sketch after this list).
You should also investigate using BULK COLLECT to optimize your cursor. This allows you to get a group of records all at once, essentially a snapshot of the data at a point in time.
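As a rough illustration of the "only changed records" idea on the Java side, a JDBC sketch; the table, columns and bookmark handling are assumptions:
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Timestamp;

public class IncrementalExportReader {
    // Reads only the rows touched since the last transformation run,
    // relying on the date_last_modified column maintained by the trigger described above.
    static Timestamp readChangedRows(Connection oracle, Timestamp lastRun) throws SQLException {
        Timestamp bookmark = lastRun;
        try (PreparedStatement ps = oracle.prepareStatement(
                "SELECT id, payload, date_last_modified FROM export_table "
              + "WHERE date_last_modified > ? ORDER BY date_last_modified")) {
            ps.setTimestamp(1, lastRun);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    // hand the row to the transformation packages here
                    bookmark = rs.getTimestamp("date_last_modified");
                }
            }
        }
        return bookmark; // persist this as the starting point for the next run
    }
}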
I believe you are overthinking this. If you fetch records one at a time, Oracle will give you the state of each record as of the last commit by any user. If you BULK COLLECT a group of records, they go into memory and will, again, represent the state as of a point in time.
The best way to feel more comfortable about this is to set up a test case. Set up a cursor that sleeps during every processing cycle. Open another session and change the data that is being processed. See what happens....

Is spring-batch for me, even though I don't have a use for ItemReader and ItemWriter?

spring-batch newbie: I have a series of batches that
read all new records (since the last execution) from some sql tables
upload all the new records to hadoop
run a series of map-reduce (pig) jobs on all the data (old and new)
download all the output to local and run some other local processing on all the output
The point is, I don't have any obvious "item": I don't want to deal with specific lines of text in my data, I work with all of it as one big chunk, and I don't want any commit intervals and such...
However, I do want to keep all these steps loosely coupled: steps a+b+c might succeed for several days and accumulate processed output while step d keeps failing, and when it finally succeeds it will read and process all of the output of its previous steps.
So: is my "item" a fictive "working item" that signifies the entire batch of new data? Do I maintain a series of queues myself and pass these fictive working items between them?
thanks!
People often assume that the only use of Spring Batch is chunk processing. That is a huge feature, but what's overlooked is the visibility into the processing and the job control.
Give 5 people the same task without Spring Batch and they're going to implement flow control and visibility their own way. Give 5 people the same task with Spring Batch and you may end up with custom tasklets all written differently, but access to the job metadata and starting and stopping jobs is going to be consistent. From my perspective it's a great tool for job management. If you already have your jobs written, you can implement them as custom tasklets if you don't want to rewrite them to conform to the 'item' paradigm; you'll still see the benefits.
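For example, a minimal custom tasklet wrapping one existing step of your pipeline; HadoopUploader is a stand-in for whatever code you already have:
import org.springframework.batch.core.StepContribution;
import org.springframework.batch.core.scope.context.ChunkContext;
import org.springframework.batch.core.step.tasklet.Tasklet;
import org.springframework.batch.repeat.RepeatStatus;

// Wraps an existing "upload everything new to hadoop" routine as a Spring Batch step,
// so it gets job metadata, restartability and start/stop control without any 'item' handling.
public class UploadToHadoopTasklet implements Tasklet {

    // Stand-in for the code you already have
    public interface HadoopUploader {
        void uploadNewRecords() throws Exception;
    }

    private final HadoopUploader uploader;

    public UploadToHadoopTasklet(HadoopUploader uploader) {
        this.uploader = uploader;
    }

    @Override
    public RepeatStatus execute(StepContribution contribution, ChunkContext chunkContext) throws Exception {
        uploader.uploadNewRecords();     // the whole "item" is one big unit of work
        return RepeatStatus.FINISHED;    // run once, no chunking or commit intervals
    }
}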
I don't see the problem. Your scenario seems like a classic application of Spring Batch to me.
read all new records (since the last execution) from some sql tables
Here, an item is a record
upload all the new records to hadoop
Same here
run a series of map-reduce (pig) jobs on all the data (old and new)
Sounds like a StepListener or ChunkListener
download all the output to local and run some other local processing on all the output
That's the next step.
The only problem I see is if you don't have Domain Objects for your records. But even then, you can work with maps or arrays, while still using ItemReaders and ItemWriters.
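For instance, a hedged sketch of a reader that produces plain maps using Spring Batch's JdbcCursorItemReader with Spring's ColumnMapRowMapper; the SQL and the last-run timestamp are placeholders:
import java.sql.Timestamp;
import java.util.Map;
import javax.sql.DataSource;
import org.springframework.batch.item.database.JdbcCursorItemReader;
import org.springframework.jdbc.core.ColumnMapRowMapper;

public class MapReaderFactory {
    // Each item is a Map<String, Object> of column name to value, so no domain class is needed.
    static JdbcCursorItemReader<Map<String, Object>> newRecordsReader(DataSource dataSource, Timestamp lastRun) {
        JdbcCursorItemReader<Map<String, Object>> reader = new JdbcCursorItemReader<>();
        reader.setDataSource(dataSource);
        reader.setSql("SELECT * FROM source_table WHERE created_at > ?");  // placeholder query
        reader.setPreparedStatementSetter(ps -> ps.setTimestamp(1, lastRun));
        reader.setRowMapper(new ColumnMapRowMapper());
        return reader;
    }
}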
