I'm developing a Java SE application with a MySQL database. It uses a connection pool of size 10 as there are many screens. Each screen has a Thread that will update some JTables about every 10 seconds. My connection pooling code is taken from http://java.sun.com/developer/onlineTraining/Programming/JDCBook/conpool.html with a few methods added for my own convenience.
My problem is that when I edit some of the data in the application, and save it back to the database, the JTables will now randomly display either the new updated data, or the old original data, each time the Thread runs to update the screen. It flickers back and forth between new and old, each time the thread loops around.
I also have some objects that I load by clicking on a row in the JTable, displaying their details in text boxes. If I click on a row whose data is having the problem above, the loaded object also shows the same "old" values. (This happens even though the object obtained a new, different connection from the pool to load its values.)
When the JTable refreshes again, and shows the correct "updated" data - and I load the object, the object also displays the correct data.
Is this a problem with the database connection pooling library I'm using, and is there a better alternative? I've tried running all my refreshing code with SQL_NO_CACHE but this has no effect. I'm new here, so let me know if there's anything I'm missing from the details, thanks!
Try to change the isolation level on your connections:
connection.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);
Also, make sure you handle your transactions correctly, otherwise you might have random problems with phantom reads!
Edit: To get correct isolation between your connections, you need to disable auto-commit and wrap write operations in a transaction. If you do not, a read from another connection can occur in the middle of a write and return inconsistent data.
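For example, a minimal sketch of both points together, assuming a hypothetical items table and your existing pool object:

Connection con = pool.getConnection();
con.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);
con.setAutoCommit(false);
try (PreparedStatement ps = con.prepareStatement(
        "UPDATE items SET name = ? WHERE id = ?")) {
    ps.setString(1, newName);
    ps.setInt(2, id);
    ps.executeUpdate();
    con.commit();    // the change becomes visible to other connections here
} catch (SQLException e) {
    con.rollback();  // leave the data untouched on failure
    throw e;
}

Reads from the pool's other connections will then see either the fully old or the fully new row, never a mix.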
Related
I want a page of filtered data from an Oracle database table, but I have a query that might return tens of millions of records, so it's not feasible to pull it all into memory. I need to filter records out in a way that cannot be done via SQL, and return back a page of records. In other words, the pagination part must be done after the filtering.
So, I attempted to use Hibernate's ScrollableResults, thinking it would be a way to pull in only chunks at a time and iterate through them. So, I created it:
ScrollableResults results = query.setReadOnly(true)
.setFetchSize(500)
.setCacheable(false)
.scroll();
... and yet, it appears to pull everything into memory (2.5GB pulled in per query). I've seen another question and I've tried some of the suggestions, but most seem MySQL specific, and I'm using an Oracle 19 driver (e.g. Integer.MIN_VALUE is rejected outright as a fetch size in the Oracle driver).
There was a suggestion to use a stateless session (I'm using the EntityManager which has no stateless option), but my thought is that if we don't fetch many records (because we only want the first page of 200 filtered records), why would Hibernate have millions of records in memory anyway, even though we never scrolled over them?
It's clear to me that I don't understand how/why Hibernate pulls things into memory, or how to get it to stop doing so. Any suggestions on how to prevent it from doing so, given the constraints above?
Some things I'm going to try:
Different scroll modes. Maybe insensitive or forward only prevents Hibernate's need to pull everything in?
Clearing the session after we have our page. I'm closing the session (calling close() on both the ScrollableResults and the EntityManager), but maybe an explicit clear() will help?
We were scrolling through the entire ScrollableResults to get the total count. This caused two things:
The Hibernate session cached entities.
The ResultSet in the driver kept rows that it had scrolled past.
Fixing this is specific to my case, really, but I did two things:
As we scroll, periodically clear the Hibernate session. Since we use the EntityManager, I had to do entityManager.unwrap(Session.class).clear(). Not sure if entityManager.clear() would do the job or not.
Make the ScrollableResults forward-only so the Oracle driver doesn't have to keep records in memory as it scrolls. This was as simple as doing .scroll(ScrollMode.FORWARD_ONLY). Only possible since we're only moving forward, though.
This allowed us to maintain a smaller memory footprint, even while scrolling through literally every single record (tens of millions).
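Roughly, the two fixes combined looked like this (a sketch, assuming the same query and entityManager objects as above):

ScrollableResults results = query.setReadOnly(true)
        .setFetchSize(500)
        .setCacheable(false)
        .scroll(ScrollMode.FORWARD_ONLY);  // lets the Oracle driver discard rows it has passed

Session session = entityManager.unwrap(Session.class);
int count = 0;
while (results.next()) {
    count++;
    if (count % 500 == 0) {
        session.clear();  // drop the entities Hibernate has accumulated
    }
}
results.close();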
Why would you scroll through all results just to get the count? Why not just execute a count query?
I have an SQLite database which I have to be constantly retrieving data from. Changes may be done to the data between each retrieval.
My goal is to maximize the app performance, so what is the fastest way to do this retrieving?
I can imagine 2:
constantly opening and closing new cursors
query all data at the beginning and store it in an ArrayList. When changing the data, change both SQLite DB and the ArrayList using indexOf.
---- EDITED ----
I need the data to create markers in a google's map.
I have considered using CursorLoader but as I don't need to interact with other apps I don't want to use Content Providers.
Would creating a custom loader be a good idea?
In short, while it's not always that simple, the fastest way to do things is all at once.
Constantly making calls to and from a database can easily become your app's performance bottleneck, especially if it's to a server and not just your device's SQLite database.
Depending on what you're doing with the data, you may be able to look into something like a CursorAdapter, which handles the display of rows from the database; each time you insert/update a row, the CursorAdapter will update the ListView accordingly. It also handles opening, closing, and advancing the Cursor, making the code very readable and easy to follow.
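A minimal sketch of that approach, assuming a hypothetical markers table (note that CursorAdapter requires an _id column):

Cursor cursor = db.query("markers", null, null, null, null, null, null);
SimpleCursorAdapter adapter = new SimpleCursorAdapter(context,
        android.R.layout.simple_list_item_1, cursor,
        new String[] {"title"}, new int[] {android.R.id.text1}, 0);
listView.setAdapter(adapter);

// After an insert/update, hand the adapter a fresh cursor:
adapter.changeCursor(db.query("markers", null, null, null, null, null, null));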
Again, however, try to do things in as few calls as possible. If you stick to using an ArrayList (a sketch follows this list):
Make one call in the beginning for all items.
Loop through that cursor and add items to an array list.
Use the array list as a cache. Sure, you could update the DB each time you update the list (which might be safest, IMO), or you can just loop through the list and insert/update/delete when the app closes. If you take that approach, make sure you do so in a method like onPause(), as it is the earliest callback after which an Activity can be killed.
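A sketch of the first two steps, again assuming a hypothetical markers table and a MarkerData class of your own:

List<MarkerData> cache = new ArrayList<>();
Cursor c = db.query("markers",
        new String[] {"_id", "lat", "lng", "title"},
        null, null, null, null, null);  // one call for all items
try {
    while (c.moveToNext()) {
        cache.add(new MarkerData(c.getLong(0), c.getDouble(1),
                c.getDouble(2), c.getString(3)));
    }
} finally {
    c.close();
}
// From here on, read from the cache; write back to the DB as described above.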
Perfect use case for a CursorLoader. Given a query, it'll keep your list adapter up to date with the latest data, assuming you notify when changes happen in the DB. It also conveniently handles activity lifecycle events for you (i.e. it'll close the cursor when the activity finishes, stop updating when it pauses, etc.).
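Since the question rules out Content Providers, one option is a small CursorLoader subclass that queries your own database directly; a sketch, with hypothetical table and column names:

public class MarkerLoader extends CursorLoader {
    private final SQLiteDatabase db;

    public MarkerLoader(Context context, SQLiteDatabase db) {
        super(context);
        this.db = db;
    }

    @Override
    public Cursor loadInBackground() {
        // Runs on a background thread; the framework delivers the Cursor to the UI.
        return db.query("markers", new String[] {"_id", "lat", "lng", "title"},
                null, null, null, null, null);
    }
}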
The fastest way is obviously to not use a database at all. However, that is clearly not a solution unless you find some way of exposing your array to access from elsewhere.
Using a database is a convenient way of centralising the data so that many users can access it and always see it up to date. Unfortunately, this is the slowest option.
Choosing your middle-ground between speed and availability is a difficult task. You have to find a balance between stale data and throughput.
If, for example, you would be comfortable with a picture of the data that was valid just 5 seconds ago then you could probably cache the data locally in your array and arrange for some mechanism to keep it up-to-date running behind the scenes.
If a 5 minute lag was acceptable you could probably arrange for a regular push to database.
Also, any mechanism you use must also handle parallel changes to the data - perhaps two users change the same datum at the same time.
You just need to decide on where to strike your balance.
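As an illustration, a sketch of the 5-second variant, assuming a hypothetical loadAllRows() method and Row type of your own:

ScheduledExecutorService scheduler =
        Executors.newSingleThreadScheduledExecutor();
AtomicReference<List<Row>> cache =
        new AtomicReference<>(Collections.<Row>emptyList());

// Refresh the snapshot in the background; readers never block on the DB.
scheduler.scheduleWithFixedDelay(
        () -> cache.set(loadAllRows()),
        0, 5, TimeUnit.SECONDS);

List<Row> snapshot = cache.get();  // at most ~5 seconds stale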
I am using JDBC to connect to a MySQL database in my program. At runtime, the user can modify the contents of the database by adding new entries (more functionality is coming, but right now that's all I have). However, when a new entry is created I want the GUI that displays the content to update itself with the modified data.
I know how to update the GUI elements, but that doesn't seem to apply to the ResultSet created from my database.
I am quite new to MySQL and JDBC, so any help would be very appreciated!
First, it depends on the type of application:
- If it is a web application, you can use Ajax callbacks to handle the result of the operation (success or failure).
- If it is a standard desktop application, you can handle it this way (which works for me):
As soon as the user adds data, wait for the result from the database. If the insert/update/delete does not throw an exception, modify your GUI (add/update/remove the corresponding display). If there is an error, do not modify your GUI.
Do not perform a SELECT of all rows, as it can crash your application if the database is huge.
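A sketch of that pattern with JDBC and a Swing table model (tableModel and the items table are hypothetical):

String sql = "INSERT INTO items (name) VALUES (?)";
try (PreparedStatement ps = connection.prepareStatement(sql)) {
    ps.setString(1, name);
    ps.executeUpdate();                         // throws SQLException on failure
    tableModel.addRow(new Object[] { name });   // reached only if the insert succeeded
} catch (SQLException e) {
    // leave the GUI unchanged and report the error instead
    JOptionPane.showMessageDialog(null, e.getMessage());
}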
I have an application which has around 25 lookup tables.
When I create a new record or modify existing records, drop-down fields are populated from the lookup tables. Currently I query each lookup table separately. It takes almost 6-7 seconds to populate the drop-down fields when the user clicks the new record or edit button.
What is the best approach in dealing with such situations?
How can I make one view and execute one query, rather than several queries, to populate all the drop-down fields?
Any insight or help is highly appreciated.
There are several things you can do:
If lookup tables don't change, or change only rarely, cache them (see the sketch after this list)
Delay loading of the drop-down values and load them after the rest of the page, in a way that is not noticeable to the user
It looks like you have too many fields on one page; consider splitting the form into several pages
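For the first option, a sketch of a simple in-memory cache, with hypothetical table names and a loadLookup() helper of your own:

Map<String, List<String>> lookupCache = new HashMap<>();

void warmLookupCache() throws SQLException {
    // One query per lookup table, run once at startup instead of per screen.
    for (String table : Arrays.asList("countries", "categories", "statuses")) {
        lookupCache.put(table, loadLookup(table));
    }
}
// New/edit forms then fill their drop-downs from lookupCache, with no queries.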
It takes as long as 6 to 7 seconds? That sounds like you may not be using (JDBC) connection pooling. Are you? If you are not already using it, connection pooling should dramatically speed things up. In connection pooling, you get a connection, use it, and close it as quickly as possible. Doing so, I think you can stick with querying each table separately.
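The get/use/close pattern looks like this with any pool that exposes a javax.sql.DataSource (the query and comboModel are illustrative):

try (Connection con = dataSource.getConnection();  // borrowed from the pool
     PreparedStatement ps = con.prepareStatement("SELECT name FROM lookup_table");
     ResultSet rs = ps.executeQuery()) {
    while (rs.next()) {
        comboModel.addElement(rs.getString("name"));
    }
}   // closing the connection returns it to the pool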
I have to implement a requirement for a Java CRUD application where users want to keep their search results intact even if they do actions which affects the criteria by which the returned rows are matched.
Confused? Ok. Let me give you a familiar example. In Gmail, if you do an advanced search on unread emails, you are presented with a list of matching results. Click on an entry and then go back to the search list. What happens is that you have just read that entry but it hasn't disappeared from the original result set. Only that line has changed from bold to normal.
I need to implement the exact same behaviour but the application is designed in such a way that any transaction is persisted first and then the UI requeries the db to keep in sync. The complexity of the application and the size of the database prevents me from doing just a simple in memory caching of the matching rows and making the changes both in db and in memory.
I'm thinking of solving the problem on the database level by creating an intermediate table in the Oracle database holding pointers to matching records and requerying only those records to keep the UI in sync with the data. Any ideas?
In Oracle, if you open a cursor, the results of that cursor are static, regardless of whether another transaction inserts a row that would appear in your cursor, or updates or deletes a row that does exist in it.
The challenge then is to not close the cursor if you want results consistent from when the cursor was opened.
If the UI maintains a single session on the database, one solution is to use Global Temporary Tables in Oracle. When you execute a search, insert the unique IDs into the GTT, then the UI just queries the GTT.
If the UI doesn't keep the session open, you could do the same thing but with an ordinary table. Then, of course, you'd just have to add some cleanup code to remove old search results from the table.
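A sketch of the GTT variant, with hypothetical table and column names (the DDL runs once; rows survive commits for the life of the session because of ON COMMIT PRESERVE ROWS):

// DDL, executed once:
//   CREATE GLOBAL TEMPORARY TABLE search_results (id NUMBER)
//   ON COMMIT PRESERVE ROWS;
try (PreparedStatement capture = con.prepareStatement(
        "INSERT INTO search_results (id) SELECT id FROM emails WHERE read_flag = 'N'")) {
    capture.executeUpdate();  // freeze the set of matching IDs
}
// Every refresh re-reads only the captured rows:
try (PreparedStatement refresh = con.prepareStatement(
        "SELECT e.* FROM emails e JOIN search_results s ON s.id = e.id");
     ResultSet rs = refresh.executeQuery()) {
    // repopulate the UI model from rs
}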
You can use a flashback query to read data from the past. For example, select * from employee as of timestamp to_timestamp('01-MAY-2011 070000', 'DD-MON-YYYY HH24MISS');
Oracle only stores this historical information for a limited period of time. You'll need to look into your retention settings: the UNDO_RETENTION parameter, the UNDO tablespace retention guarantee and proper sizing; LOBs also have their own retention setting.
Create two connections to the database.
Set the first one to READ ONLY (using SET TRANSACTION READ ONLY), do your searching from that connection, but make sure you never end that transaction by issuing a commit or rollback.
As a read only transaction only sees the data as it was at the time the transaction started, the first connection will never see any changes to the database - not even committed ones.
Then you can do your updates in the second connection without affecting the results in the first connection.
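In JDBC terms, a sketch:

Connection reader = DriverManager.getConnection(url, user, pass);
reader.setAutoCommit(false);
try (Statement s = reader.createStatement()) {
    s.execute("SET TRANSACTION READ ONLY");  // the snapshot is fixed here
}
// Run all searches through `reader`; never commit or roll it back.

Connection writer = DriverManager.getConnection(url, user, pass);
// Updates made through `writer` stay invisible to `reader`'s snapshot.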
If you cannot use two connections, you could implement the updates through stored procedures that use autonomous transactions, then you can keep the read only transaction open in the single connection you have.