I have been having this problem on the last two projects I worked on. Both projects are written in Java and use Oracle 11g as the database. When I look at the code, there is nothing wrong with the transaction management, etc. The flow is very simple and looks like this in code:
Connection con = null;
try {
    // Get connection
    // Run validation
    // Insert record
    // Commit
} catch (Exception e) {
    // Rollback
} finally {
    // Close connection
}
The validation part checks some business rules and prevents duplicate entries.
First case
This works fine when one user runs this code to completion and commits the transaction before another user starts. In that case, because the first transaction has committed its changes, the validation part can see the record and the duplicate is prevented.
But when two users run the same code at the same time, duplicate records sometimes occur. The flow is like below and I have no idea how to handle it. I've looked at isolation levels etc., but none of them works for this case. The only applicable option is a unique constraint, but that is not suitable for these projects.
user1 passes validation
user2 passes validation
user1 insert record
user2 insert record
Second case
The other case is totally bizarre; I can't reproduce it in my tests, but I have witnessed it in production. When the system load is high, the system creates duplicate records from a single click by a user. That is, the user presses the button only once, but the system creates multiple records in the background. These records have different IDs but nearly identical creation times, and all the other values are the same.
We initially thought that when the system load was high the application server couldn't handle it properly (because it was an old, unsupported one), and because it happened rarely we left it there. But some time later we had to change the application server for another reason, and the problem persisted. And the second project I mentioned uses a totally different application server.
Two different teams and I worked on these problems for weeks, but we couldn't find a suitable solution for either case, and we couldn't even find the cause of the second one. Any help would be welcome if you have encountered something like this or know a solution.
You need to use synchronization on a shared object to avoid the duplicates. The RUN VALIDATION block is probably a good candidate for this fix, but it really depends on your application logic.
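For example, a minimal sketch assuming everything runs in a single JVM (with multiple application servers this lock would not help); the lock object and the validate/insertRecord names are illustrative, not from the original code:

private static final Object LOCK = new Object();

public void createRecord(Record r) throws SQLException {
    synchronized (LOCK) {        // only one thread may validate and insert at a time
        if (validate(r)) {       // the duplicate check can no longer race
            insertRecord(r);     // insert and commit while still holding the lock
        }
    }
}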
It has nothing to do with your web server; you need to submit your form with an idempotent HTTP request, so that a repeated submission does not create a second record.
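One common way to make the submission effectively idempotent (a hedged sketch, assuming the servlet API; the parameter and attribute names are invented) is a one-time token that is generated when the form is rendered and consumed on the first submit:

// When rendering the form (token from java.util.UUID):
String token = UUID.randomUUID().toString();
request.getSession().setAttribute("formToken", token);
// ...embed the token in the form as a hidden field...

// When handling the POST:
String sent = request.getParameter("formToken");
String stored = (String) request.getSession().getAttribute("formToken");
if (stored != null && stored.equals(sent)) {
    request.getSession().removeAttribute("formToken"); // consume the token
    // ...run validation and insert exactly once...
} else {
    // repeated submission of the same form: ignore or redirect
}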
Related
I'm just looking for high-level advice on dealing with an issue in a multi-threaded application.
Here's how it works:
The application takes in Alerts, which are then processed in different threads to produce Reports. On occasion, two Alerts result in the same Report, which is not desired.
It is a Spring application, written in Java, using a MySQL DB.
I altered my code to run a SELECT SQL query before saving a Report, checking whether a similar report is already there. If one exists, the Report is not generated. However, if two Alerts come in at the same time, the SELECT for Report #2 runs before Report #1 has been saved.
I thought about putting in a sleep() with a random wait time of 1-10 seconds, but that would still fail whenever two threads happened to be assigned the same sleep time.
I'm pretty new to multi-threading, so does anyone have any ideas, or resources to point me in the right direction?
Thanks a lot!!
Assuming you have code that looks something like this:
Report report = getReport(...); // calls the DB to get a record to see if it already exists
if (report == null) {
    insertReport(...); // add a record to DB which might have already been added by another thread
}
then to avoid collisions across threads (or JVMs) combine the SELECT and INSERT. For example:
insertReportIfNotAlreadyExists(...);
which uses a query structured as:
INSERT INTO REPORTS (...)
SELECT ...
FROM DUAL
WHERE NOT EXISTS (...)
with the NOT EXISTS subquery SELECTing for the record to make sure it doesn't already exist. (Note that a plain INSERT ... VALUES cannot take a WHERE clause, which is why the INSERT ... SELECT form is needed.)
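A minimal JDBC sketch of such a combined statement (the REPORTS columns and the uniqueness criterion are invented for illustration):

private static final String INSERT_IF_ABSENT =
    "INSERT INTO REPORTS (name, payload) " +
    "SELECT ?, ? FROM DUAL " +
    "WHERE NOT EXISTS (SELECT 1 FROM REPORTS WHERE name = ?)";

boolean insertReportIfNotAlreadyExists(Connection con, String name, String payload)
        throws SQLException {
    try (PreparedStatement ps = con.prepareStatement(INSERT_IF_ABSENT)) {
        ps.setString(1, name);
        ps.setString(2, payload);
        ps.setString(3, name);
        return ps.executeUpdate() == 1; // 0 means a matching report already existed
    }
}

Depending on the engine and isolation level, two simultaneous executions can in principle still both pass the NOT EXISTS check, so a unique index on the identifying column remains the safest backstop.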
I tried this:
Connection con = DriverManager.getConnection("jdbc:mysql://localhost:3306/sonoo","root","password");
but it's very easy for someone to extract the username and password strings. Opening the application with zip, WinRAR, or any other archive program exposes the class files, and anyone can read the code.
How can I secure my connection?
You need to decide what permissions someone who gets a copy of your JAR has. Do they have permission to run database queries or not?
If they should not: delete the database connection. They don't have permission.
If they should: then they can have the password. They have permission.
What seems to be tripping you up is that you are giving out the root password for your database, and so you want a third option: "They should be able to do some database queries, but not others."
The JAR file is the wrong place to try to solve that problem. If you try to solve this at the JAR file level, one of two things will happen. Either your users were trustworthy all along and you wasted your time with whatever elaborate scheme you used, or some of your end users are untrustworthy and one of them will hack you. They will hack you by stepping through your code in the JVM and editing your query strings right before the JVM sends them out, at the very last second, if they absolutely have to. Everything you do at this level will be security theater: like getting frisked at the airport, it doesn't make you significantly safer, but there is a tiny chance that you can say "but we encrypted it!" and your clients might not dump you after the inevitable security breach.
That problem needs to be solved within the database, by creating a user account which does not have the permissions that it should not have. When you run SHOW GRANTS FOR 'enduser'@'%' it will show you only the sorts of queries that that account is allowed to do.
In many cases you want to give the user account a more fine-grained permission than just INSERT, SELECT, or UPDATE on a table. For example, you might have the logic "you can add to this table, but only if you also update the numbers in this other table." For these, you should use stored procedures, which can have their permissions set to either "definer" or "invoker": define it by a user with the appropriate permissions and then the invoker gets to have advanced permissions to do this particular query.
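As a hedged illustration (the shop schema, procedure, tables, and enduser account are all invented for this example), a definer-rights procedure plus an EXECUTE grant might look like this, created here via JDBC:

// Sketch only: the end user can call the procedure but cannot touch the tables directly.
try (Statement st = adminCon.createStatement()) {
    st.execute(
        "CREATE PROCEDURE shop.add_order(IN p_item VARCHAR(64)) " +
        "SQL SECURITY DEFINER " +
        "BEGIN " +
        "  INSERT INTO shop.orders(item) VALUES (p_item); " +
        "  UPDATE shop.order_stats SET total = total + 1; " +
        "END");
    st.execute("GRANT EXECUTE ON PROCEDURE shop.add_order TO 'enduser'@'%'");
}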
In some cases you have a nasty situation where you want to distribute the same application to two different clients, but they would both benefit significantly (at the expense of the other!) from being able to read each other's data. For example you might be an order processor dealing with two rival companies; either one would love to see the order history of the other one. For these cases you have a few options:
Move even SELECT statements into stored procedures. A stored procedure can call user() which can still give you the logged-in user, even though they are not the definer.
Move the database queries out of the shared JAR. Like @g-lulu says above, you can create a web API which you lock down really well, or something like that.
Duplicate the database, move the authentication parameters to a separate file which you read on startup.
Option 3 requires you to write tooling to maintain multiple databases as perfect duplicates of each other's structure, which sucks. However it has the nice benefit over (1) and (2) that a shared database inevitably leaks some information -- for example an auto_increment ID column could leak how many orders are being created globally and there might be ways to determine something like, "oh, they send all of their orders through this unusual table access, so that will also bump this ID at the same time, so I just need to check to see if both IDs are bumped and that'll reveal an order for our rival company".
You can create a webservice in PHP (or Java or another language). This webservice lives on a server and contains the access credentials and the queries for your database.
With your desktop app, just send a request (POST, GET) to your web service.
Example PHP webservice:
if (isset($_POST['getMember'])) {
    // query the database
    // encode the result as JSON
    // return the JSON
}
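On the desktop side, a minimal Java sketch of calling such a service might look like this (the URL and the parameter name are assumptions):

URL url = new URL("https://example.com/members.php"); // illustrative URL
HttpURLConnection con = (HttpURLConnection) url.openConnection();
con.setRequestMethod("POST");
con.setDoOutput(true);
try (OutputStream out = con.getOutputStream()) {
    out.write("getMember=1".getBytes(StandardCharsets.UTF_8));
}
try (BufferedReader in = new BufferedReader(
        new InputStreamReader(con.getInputStream(), StandardCharsets.UTF_8))) {
    String json = in.lines().collect(Collectors.joining());
    // parse the JSON and use the result
}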
I have implemented a grid which displays document metadata, and the user is able to edit the document on right click. I want to implement a locking mechanism for this. What would be the best way to put a lock on the document when one user has opened the editor? The documents reside in the database.
Just add a column that specifies who currently has the file checked out. When a person tries to check out a file, if that column is set, they will not be able to check it out, and will be notified of who has it checked out. Unless you have thousands of requests per second for a single document, this method will work fine.
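To make the check race-free, the test and the claim should be a single atomic statement; a hedged JDBC sketch (the documents table and its columns are assumed):

boolean tryCheckOut(Connection con, long docId, String user) throws SQLException {
    String sql = "UPDATE documents SET checked_out_by = ? " +
                 "WHERE id = ? AND checked_out_by IS NULL";
    try (PreparedStatement ps = con.prepareStatement(sql)) {
        ps.setString(1, user);
        ps.setLong(2, docId);
        return ps.executeUpdate() == 1; // 0 rows updated: someone else holds the lock
    }
}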
In addition to adding a column saying who has the file checked out and preventing access using that, you can add a timestamp for when the lock was taken.
This way, if someone requests the document and the lock is, for example, 30 minutes old with no changes made, they can take over the lock (in case the original user didn't quit gracefully).
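Extending the sketch above with a lock timestamp (again with assumed column names; the 30-minute timeout is just the example figure, and interval syntax varies by database), the takeover can stay a single atomic statement:

String sql = "UPDATE documents SET checked_out_by = ?, locked_at = CURRENT_TIMESTAMP " +
             "WHERE id = ? AND (checked_out_by IS NULL " +
             "  OR locked_at < CURRENT_TIMESTAMP - INTERVAL '30' MINUTE)";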
If the documents are in a database, the database itself should have support for preventing inconsistent access.
http://docs.oracle.com/javase/6/docs/api/java/sql/Connection.html#setTransactionIsolation%28int%29
If the editor does not keep database transactions/connections open for the duration of editing, however, and the Java application runs client-side rather than server-side (if it ran server-side, you could simply create a lock in the editor to handle concurrency), then things get trickier. I haven't yet had enough database experience to say how you would resolve that, since using a field in the database to indicate editing status has concurrency problems of its own with that type of setup (unless the database itself supports locking on records, which depends on the DB engine in use).
Oh, one possibility would be to use file modification times: have a timestamp field in the database, update it each time a file is modified, and keep a no-dirty-reads-allowed transaction open while checking the timestamp to determine whether the file was modified by another user after the saving user last accessed it. If so, don't save the file to the database; instead, alert the user that the server-side file has changed and ask if they want to view the changes (similar to how version control systems work). By disallowing dirty reads for all such transactions, you should prevent other users from changing the file's record while the first transaction is open (to mark a record as "dirty", you could perhaps use a dummy field that is updated at the start of each transaction with some random value). (Note: aglassman's answer would work similarly to this.)
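This is essentially optimistic locking; a hedged sketch of the save step (the last_modified column is assumed, and ts is the timestamp read when the user opened the editor):

String sql = "UPDATE documents SET content = ?, last_modified = CURRENT_TIMESTAMP " +
             "WHERE id = ? AND last_modified = ?";
try (PreparedStatement ps = con.prepareStatement(sql)) {
    ps.setString(1, newContent);
    ps.setLong(2, docId);
    ps.setTimestamp(3, ts);
    if (ps.executeUpdate() == 0) {
        // someone saved first: alert the user instead of silently overwriting
    }
}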
Sorry in advance if someone has already answered this specific question, but I have yet to find an answer to my problem, so here goes.
I am working on an application (no, I cannot share the code, as it is for a job, so I'm sorry about that) which uses DAOs, Hibernate, POJOs, and all that for communicating with and writing to the database. This works well for the application, assuming I don't have a ton of data to check when I call Session.flush(). That said, there is a page where a user can add any number of items to a product, and in one particular case there are something like 25 items. Each item has about 8 fields apiece, all stored in the database. When I call the flush, it does save everything to the database, but it takes FOREVER to complete. The three lines I am calling are:
merge(myObject);
session.flush();
session.refresh(myObject);
I have tried a number of different combinations and solutions to fix this problem, so coming back and saying "Don't use flush()" isn't much help, as saveOrUpdate() and the other Hibernate session methods don't seem to work. The only solutions I can think of are to scrap the entire project (the code we inherited is poorly written, to say the least) or to tell the user community to suck it up.
It is my understanding from the Hibernate API that when you want to write data to the database, it runs a check on every item; if there is a difference, it builds a queue of update queries and then runs them. It seems this data is being updated every time, because the DATE_CREATED column in my database changes even when the other values are unchanged.
What I was wondering is whether there is another way to prevent such a large commit of data, or a way of excluding that particular column from the "check" Hibernate does, so I don't have to commit all 25 items if I only changed 1?
Thanks in advance.
Mike
Well, you really cannot avoid the dirty checking in Hibernate unless you use a StatelessSession. Of course, you lose a lot of features (lazy loading etc.) with that, but it's up to you to make that decision.
Another option: I would definitely try to use dynamic updates on your entity. Like:
@Entity
@DynamicUpdate // org.hibernate.annotations.DynamicUpdate
class MyClass
Using that, Hibernate will include only the modified columns in the UPDATE. In small tables with few columns it's not so effective, but in your case it may help make the whole process faster, since you cannot avoid dirty checking with a regular Hibernate Session. Updating a few columns instead of the whole object is always better, right?
This post talks more about the dynamic-update attribute.
"What I was wondering is whether there is another way to prevent such a large commit of data, or a way of excluding that particular column from the "check" Hibernate does, so I don't have to commit all 25 items if I only changed 1?"
I would profile the application to ensure that the dirty checking on flush is actually the problem. If you find that this is indeed the case, you can use evict to manage the session size.
session.update(myObject); // reattach the object to the session
session.flush();          // push pending changes to the database
session.evict(myObject);  // detach it so it is no longer dirty-checked
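For the 25-item case, a hedged sketch of keeping the persistence context small while saving each item (Item is an illustrative entity name):

for (Item item : items) {
    session.update(item);
    session.flush();
    session.evict(item); // detach each item so the session does not keep growing
}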
I am a Java programmer and I want to know how many database calls/trips my application makes. We use Oracle as our relational database.
With Oracle, I learned of a way to alter session statistics and generate trace files. Below are the statements to run:
ALTER SESSION SET TIMED_STATISTICS = TRUE;
ALTER SESSION SET SQL_TRACE = TRUE;
After the trace files are generated, they can be read using the TKProf utility. But this approach cannot be used because:
my application uses the Hibernate and Spring frameworks, and hence the application does not have a handle to the session.
Even if we get the trace files, I need to know whether a set of queries is fired in one go (in a batch) or separately. I am not sure whether the TKProf output can help with that.
Does anyone have any better suggestions?
In TkProf, you can basically read the number of round trips as the number of "calls" (although there are exceptions where fewer round trips are required; e.g. the parse/execute/fetch of a single-row select is, theoretically, possible in a single round trip, the so-called "exact fetch" feature of Oracle). However, as an estimate, the TkProf figures are good enough.
If you trace wait events, you should directly see the 'SQL*Net message from/to client' wait events in the raw trace, but I think TkProf does not show them (not sure, give it a try).
Another way is to look into the session statistics:
select value
from v$mystat ms, v$statname sn
where ms.value > 0
and ms.statistic#=sn.statistic#
and sn.name IN ('SQL*Net roundtrips to/from client')
However, if you do that in your app, you will slow down your app, and the figures you receive will include the round trips for that select itself.
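Still, a hedged sketch of sampling that statistic around a block of work (reading v$mystat requires the appropriate privileges, and the deltas include the sampling queries' own round trips):

private static final String ROUNDTRIPS =
    "select ms.value from v$mystat ms, v$statname sn " +
    "where ms.statistic# = sn.statistic# " +
    "and sn.name = 'SQL*Net roundtrips to/from client'";

long roundTrips(Connection con) throws SQLException {
    try (Statement st = con.createStatement();
         ResultSet rs = st.executeQuery(ROUNDTRIPS)) {
        rs.next();
        return rs.getLong(1);
    }
}

// usage: long before = roundTrips(con); ...do the work...
// long after = roundTrips(con); // after - before, minus the sampling overhead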
I wrote a few articles about round-trip optimization:
http://blog.fatalmind.com/2009/12/22/latency-security-vs-performance/
http://blog.fatalmind.com/2010/01/29/oracle-jdbc-prefetch-portability/
Firstly, use a dedicated database (or timeframe) for this test, so it doesn't get easily confused with other sessions.
Secondly, look at the view v$session to identify the session(s) belonging to Hibernate. The USERNAME, OSUSER, TERMINAL, and MACHINE columns should make this obvious. The SID and SERIAL# columns uniquely identify the session; actually, the SID alone is unique at any given time, and the SERIAL# is only needed if you have sessions disconnecting and reconnecting.
Thirdly, use v$sesstat (filtered on the SID and SERIAL# from v$session) and v$statname (as shown by Markus) to pull out the number of round trips. You can take a snapshot before the test, run the test, then look at the values again and determine the work done.
That said, I'm not sure it is a particularly useful measure in itself. The TKPROF output will be more detailed and is much more focused on time (which is a more useful measure).
Best would be to get a dedicated event 10046 level 12 trace file of the running session. There you will find all the information in detail: you can see how many fetches the application does per executed command and the related wait events/elapsed times. The result can be analyzed using tools from Oracle like TKPROF or Oracle Trace Analyzer, or third-party tools like [QueryAdvisor][1].
By the way, you can ask your DBA to define a database trigger that activates Oracle file tracing automatically after login, so capturing the trace file should not be a problem.
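A hedged sketch of such a trigger (APPUSER is an invented account name; a DBA would normally run this DDL directly, it is shown via JDBC here only for consistency with the other examples):

try (Statement st = dbaCon.createStatement()) {
    st.execute(
        "CREATE OR REPLACE TRIGGER trace_appuser AFTER LOGON ON DATABASE " +
        "BEGIN " +
        "  IF USER = 'APPUSER' THEN " +
        "    EXECUTE IMMEDIATE 'ALTER SESSION SET EVENTS " +
        "''10046 trace name context forever, level 12'''; " +
        "  END IF; " +
        "END;");
}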
R.U.
[1]: http://www.queryadvisor.com/ "TKPROF Oracle tracefile analysis with QueryAdvisor"