Suppose I have a table that contains valid data. I would like to modify this data in some way, but I'd like to make sure that if any errors occur with the modification, the table isn't changed and the method returns something to that effect.
For instance, (this is kind of a dumb example, but it illustrates the point so bear with me) suppose I want to edit all the entries in a "name" column so that they are properly capitalized. For some reason, I want either ALL of the names to have proper capitalization, or NONE of them to have proper capitalization (and the starting state of the table is that NONE of them do).
Is there an already-implemented way to run a batch update on the table and be assured that, if any one of the updates fails, all changes are rolled back and the table remains unchanged?
I can think of a few ways to do this by hand (though suggestions are welcome), but it'd be nice if there was some method I could use that would function this way. I looked at java.sql.Statement.executeBatch(), but I'm not convinced by the documentation that my table wouldn't be changed if it failed in some manner.
I hit this one too when starting with JDBC - it seemed to fly in the face of what I understood about databases and ACID guarantees.
Before you begin, be sure that your MySQL storage engine supports transactions. MyISAM doesn't support transactions, but InnoDB does.
Then be sure to disable JDBC autoCommit - Connection.setAutoCommit(false) - or JDBC will run each statement as a separate transaction. The commit will be an all-or-nothing affair: there will be no partial changes.
Then you run your various update statements, and finally call Connection.commit() to commit the transaction.
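Putting those steps together, here is a minimal sketch (the connection details, table, and values are made up for illustration). Note the rollback in the catch block, which is what gives you the all-or-nothing guarantee:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class AllOrNothingUpdate {
    public static void main(String[] args) throws SQLException {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost/mydb", "user", "password")) {
            conn.setAutoCommit(false); // one transaction for the whole batch
            try (Statement stmt = conn.createStatement()) {
                stmt.addBatch("UPDATE people SET name = 'Alice' WHERE id = 1");
                stmt.addBatch("UPDATE people SET name = 'Bob' WHERE id = 2");
                stmt.executeBatch(); // batching here is only a performance detail
                conn.commit();       // the transaction makes it all-or-nothing
            } catch (SQLException e) {
                conn.rollback();     // any failure undoes every change
                throw e;
            }
        }
    }
}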
See the Sun Tutorial for more details about JDBC transactions.
Using a batch does not change the ACID guarantees - you're either transacted or you're not! - batching is more about collecting multiple statements together for improved performance.
At several points in the documentation of GridDB the auto-commit feature is disabled and instead there are manual commits. I did not manage to find any explanation for this behaviour.
It seems that it needs to be disabled when deleting a row from a GridDB container, but not, for example, when adding a row. In the latter case there seems to be little difference between it being active or not, though of course one has to commit manually at least once if it is disabled for changes to actually be reflected in the database.
So what exactly does auto-commit do, and when does it commit changes automatically? When is there a need for, or an advantage to, disabling it?
These are the functions I am talking about:
Java:
col.setAutoCommit(false); col.commit();
PHP:
$col->set_auto_commit(false); $col->commit();
Auto-commit allows GridDB to determine when it is best to commit, resulting in good performance, but it also allows other clients to fetch stale data.
I disable auto-commit and commit manually every time for a single write or any number of deletes or updates, but I leave auto-commit on when writing a stream of data.
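In Java terms, the pattern looks roughly like this (a sketch modeled on the GridDB Java client samples; the connection properties, container name, and row type are all made up for illustration):

import java.util.Properties;
import com.toshiba.mwcloud.gs.Collection;
import com.toshiba.mwcloud.gs.GridStore;
import com.toshiba.mwcloud.gs.GridStoreFactory;
import com.toshiba.mwcloud.gs.RowKey;

public class ManualCommitSketch {
    // Hypothetical row type for illustration
    static class Person {
        @RowKey String name;
        int age;
    }

    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty("clusterName", "myCluster"); // plus notification settings etc.
        props.setProperty("user", "admin");
        props.setProperty("password", "admin");

        GridStore store = GridStoreFactory.getInstance().getGridStore(props);
        Collection<String, Person> col = store.getCollection("people", Person.class);

        col.setAutoCommit(false); // take manual control of the transaction
        col.remove("alice");      // staged: other clients still see the old row
        col.commit();             // the delete becomes visible to everyone here

        store.close();
    }
}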
As I understand it, LockAcquisitionException happens when a thread tries to update a row that is locked by another thread. (Please correct me if I am wrong.)
So I tried to simulate it as follows:
I lock a row using DbVisualizer, then use my application to run an update query on the same record. In the end, I just hit the global transaction timeout instead of a LockAcquisitionException with reason code 68.
Thus, I am thinking my understanding is wrong and that a LockAcquisitionException does not happen this way. Can anyone advise, or give a simple example that creates a LockAcquisitionException?
You will get LockAcquisitionException (SQLCODE=-911 SQLERRMC=68) as a result of a lock timeout.
It may be unhelpful to compare the actions of DbVisualizer with Hibernate, because they may use different classes/methods and settings at the JDBC level, which can influence the exception details. What matters is that at the Db2 level both experienced SQLCODE=-911 with SQLERRMC=68, regardless of the exception name they report for the lock-timeout.
You can get a lock-timeout on statements like UPDATE or DELETE or INSERT or SELECT (and others including DDL and commands), depending on many factors.
All lock-timeouts have one thing in common: one transaction waited too long and got rolled-back because another transaction did not commit quickly enough.
Lock-timeout diagnosis and Lock-Timeout avoidance are different topics.
The length of time to wait for a lock can be set at database level, connection level, or statement level according to what design is chosen, including mixing these. You can also adjust how Db2 behaves for locking by adjusting database parameters like CUR_COMMIT, LOCK_TIMEOUT and by adjusting isolation-level at statement-level or connection-level.
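As an illustration of both points, here is a hedged JDBC sketch of how a lock-timeout can be provoked with two connections (the table, row, URL, and credentials are all hypothetical; SET CURRENT LOCK TIMEOUT is the statement-level setting mentioned above):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class LockTimeoutSketch {
    public static void main(String[] args) throws SQLException {
        String url = "jdbc:db2://localhost:50000/SAMPLE"; // illustrative
        try (Connection holder = DriverManager.getConnection(url, "user", "pwd");
             Connection waiter = DriverManager.getConnection(url, "user", "pwd")) {
            holder.setAutoCommit(false);
            waiter.setAutoCommit(false);

            // Transaction 1 updates a row and deliberately does not commit,
            // so it keeps an exclusive lock on that row.
            try (Statement s1 = holder.createStatement()) {
                s1.executeUpdate("UPDATE mytab SET val = 1 WHERE id = 42");
            }

            // Transaction 2 caps its own lock wait, then touches the same row.
            // After 5 seconds it should fail with SQLCODE=-911, SQLERRMC=68.
            try (Statement s2 = waiter.createStatement()) {
                s2.execute("SET CURRENT LOCK TIMEOUT 5");
                s2.executeUpdate("UPDATE mytab SET val = 2 WHERE id = 42");
            } catch (SQLException e) {
                System.out.println("Lock timeout, SQLCODE=" + e.getErrorCode());
            } finally {
                holder.rollback(); // release the lock
            }
        }
    }
}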
It's wise to ensure accurate diagnosis before thinking about avoidance.
As you are running Db2-LUW v10.5.0.9, consider careful study of this page and all related links:
https://www.ibm.com/support/knowledgecenter/en/SSEPGG_10.5.0/com.ibm.db2.luw.admin.trb.doc/doc/t0055072.html
There are many situations that can lead to a lock timeout, so it's better to know exactly which situation is relevant for your case(s).
Avoiding lock-conflicts is a matter of both configuration and transaction design so that is a bigger topic. The configuration can be at Db2 level or at application layer or both.
Sometimes bugs cause lock-timeouts, for example when app-server threads have a database-connection that is hung and has not committed and is not being cleaned up correctly by the application.
You should diagnose the participants in the lock timeout. There are different ways to do lock-conflict diagnosis on Db2-LUW so choose the one that works for you.
One simple diagnosis tool that still works on V10.5.0.9 is the Db2 registry variable DB2_CAPTURE_LOCKTIMEOUT=ON, even though the method is deprecated. You can set this variable (and unset it) on the fly without needing a service outage. So if you have a recreatable scenario that results in SQLCODE=-911 SQLERRMC=68 (lock timeout), you can switch on this variable, repeat the test, then switch off the variable. If the variable is switched on and a lock timeout happens, Db2 will write a new text file containing information about the participants in the locking situation, with details that help you understand what is happening and let you consider ways to resolve the issue once you have enough facts. You don't want to keep this variable permanently set, because it can impact performance and fill up the Db2 diagnostics file system if you get a lot of lock timeouts, so you have to be careful. Read about this variable in the Knowledge Center at this page:
https://www.ibm.com/support/knowledgecenter/SSEPGG_10.5.0/com.ibm.db2.luw.admin.regvars.doc/doc/r0005657.html
You diagnose the lock-timeout by careful study of the contents of these files, although of course it's necessary to understand the details also. This is a regular DBA activity.
Another method is to use db2pdcfg -catch with a custom db2cos script, to decide what to do after Db2 throws the -911. This needs scripting skills and it lets you decide exactly what diagnostics to collect after the -911 and where to store those diagnostics.
Another method which involves much more work but potentially pays more dividends is to use an event monitor for locking. The documentation is at:
https://www.ibm.com/support/knowledgecenter/en/SSEPGG_10.5.0/com.ibm.db2.luw.sql.ref.doc/doc/r0054074.html
Be sure to study the "Related concepts" and "Related tasks" pages also.
I'm working on a Java application that integrates with a legacy system written in Oracle PL/SQL. Unfortunately, I'm not able to change this legacy system. The problem with this system is that COMMIT statements are sometimes written into the procedures, which means I'm not able to handle transactions correctly at my application level.
So, is it possible to make Oracle database procedures ignore COMMIT statements?
I have found that issuing ALTER SESSION DISABLE COMMIT IN PROCEDURE at the beginning of the connection causes an exception when a PL/SQL procedure tries to commit. But is it possible to make Oracle ignore the commit without changing the PL/SQL code?
I don't think you can do that. You'll have to add a parameter to those procedures, like "do commit", with a default value of true, and call them with the parameter set to false. Pass the parameter value on if they are nested. That way the legacy code still behaves the same, but you get transaction control.
I've been working with Oracle for 9 years now. I also checked undocumented parameters regarding your question, and I'm pretty sure there is no way to make Oracle ignore a commit in a stored procedure.
But of course, theoretically you could use Oracle's flashback feature (e.g. flashback database or flashback table) to reset the whole database or single tables to the state before your transaction began. Be aware, though, that this only works as desired if you are the only one changing the objects you flash back, which is usually unrealistic. You also need to consider that the flashback feature is not designed to support such a scenario, so the performance of your application will be suboptimal if you need to flash back anything.
But this may be a way to solve your problem, if you have no other choice.
Probably the best thing is to alter the pl/sql procedures without affecting current functionality. In other words, add a new parameter to allow the user to ignore commits, but default to existing functionality (to commit). I've done this in a similar situation and it worked well.
So, you'd have something like:
create or replace procedure some_proc(
  i_num    in number,           -- existing parameter
  i_commit in number default 1) -- perform commit? 0=false, else true
as
begin
  -- some DML here
  if (i_commit <> 0) then
    commit;
  end if;
end;
Make sure that this new parameter is added to the end of the param list. So your app would pass in 0 (false) for i_commit.
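From the application side, the call might then look something like this (a sketch assuming the some_proc definition above and an open JDBC connection with autoCommit disabled):

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.SQLException;

public class CallWithoutCommit {
    static void callSomeProc(Connection conn) throws SQLException {
        try (CallableStatement cs = conn.prepareCall("{call some_proc(?, ?)}")) {
            cs.setInt(1, 123); // i_num: the existing parameter
            cs.setInt(2, 0);   // i_commit = 0: skip the embedded COMMIT
            cs.execute();
        }
        // The transaction is still open; the caller decides when to commit
        conn.commit();
    }
}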
Hope that helps.
I have found that when doing ALTER SESSION DISABLE COMMIT IN PROCEDURE
in beginning of connection will cause exception when PL/SQL procedure
is trying to commit.
Yeah, the documented behaviour of that statement is to raise an ORA-00034 exception if a procedure attempts to issue a commit. I think it's really intended as a testing aid, to identify procedures with embedded commits.
I think it is widely regarded as bad practice for stored procedures to issue commits. Control of the transaction must belong to the top of the calling stack.
Unfortunately, there is no way to ignore those embedded commits. You will either have to rewrite the PL/SQL routines or code some workarounds in your calling code (e.g. an exception handler which issues additional DML to reverse the committed changes).
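If you go the workaround route, its shape might be something like the following (purely illustrative: legacy_proc, the orders table, and the compensating DELETE are hypothetical, and real compensation logic depends entirely on what the procedure actually committed):

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class CompensatingWorkaround {
    static void runLegacyProc(Connection conn, int batchId) throws SQLException {
        try (CallableStatement cs = conn.prepareCall("{call legacy_proc(?)}")) {
            cs.setInt(1, batchId);
            cs.execute();
        } catch (SQLException e) {
            // The procedure may already have committed partial work, so
            // reverse it with explicit DML rather than relying on rollback.
            try (PreparedStatement undo = conn.prepareStatement(
                    "DELETE FROM orders WHERE batch_id = ?")) {
                undo.setInt(1, batchId);
                undo.executeUpdate();
                conn.commit();
            }
            throw e;
        }
    }
}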
Knowing what the procedure did would have been helpful.
However, assuming that the procedure modifies data in a limited number of tables (that's what we mostly do in PL/SQL anyway), you could try running a flashback on those tables:
FLASHBACK TABLE table_name TO TIMESTAMP(TO_DATE('06-SEP-2012 23:59:59','DD-MON-YYYY HH24:MI:SS'));
You can set the time string '06-SEP-2012 23:59:59' to the time just before the procedure is called in your Java code.
It's a bad workaround, but worth a try, I guess.
I'm working on Java application that integrates with legacy system
written Oracle PL/SQL. Unfortunately i'm not able to change this
legacy system.
Hmmm, this smells like politics... "Don't touch this, it works!" I guess. :(
As a matter of fact, it's most likely impossible to ignore a COMMIT statement, as @Tilman Fliegel (and others) already said. And if it were, it would be quite an ugly wart in your codebase.
I'm not that good at politics, but I'd say that if you can neither use nor change this, then just don't use it. I mean:
If you cannot change your procedures because they are used by other (old, immutable) systems, then duplicate them and modify/refactor your copy until you're happy with it. If you're able to make your version backward-compatible, you may even provide a way for the other (old) systems to use yours later, when/if they are refactored: just introduce a "legacy mode" input parameter or whatever, cf. other answers.
If you cannot change the code because you cannot understand/test it, then this is a major issue. It may even be safer to just trash it and start over from scratch (provided you're able to do so).
But maybe trying to ignore the COMMITs is easier after all. Humans are so hard to refactor... ;)
The fundamental source of my questioning came from this observation. When I use Hibernate and make any query, I get the following in the MySQL logs:
SET autocommit=0
insert into SimpleNamedEntity (name, version) values (null, 0)
commit
SET autocommit=1
Now, I did some research (refs below; would have added more, but I'm not 'good' enough, it seems :-)) and this seems to be a fairly classic problem. I did a number of tests at various levels of the architecture (MySQL config, JDBC, connection pool, Hibernate) to get a better idea of how things work, and I ended up more confused as a result. So here are a few questions:
Can someone confirm whether autocommit has a negative impact on performance? The main reason I see for turning it off is defensive: preventing the unwanted uncommitted transactions that would happen if people forgot to commit after executing a statement. Is there another reason?
The way I see it, you want autocommit off when doing large dependent inserts-updates-deletes, as this helps ensure data integrity. If all I want to do is selects, then it's useless and I'd probably want to set it to true to avoid long-transaction lock-type side effects. Does that make sense?
I noticed that whatever the MySQL autocommit setting, the JDBC connection is always set to true, and the only way for me to control that (have it false by default) is to add init-connect="SET AUTOCOMMIT=0" on the MySQL server. But the strange part is that once I start using c3p0 with that setting, c3p0 overrides it and forces autocommit back to 1. There's even a rollback in this case, which I find really suspect. Why does c3p0 force-set the flag? Why does it roll back when it doesn't normally? Is there a way to tell c3p0 not to do this?
19 Query SHOW COLLATION
19 Query SELECT @@session.autocommit
19 Query SET NAMES utf8mb4
19 Query SET character_set_results = NULL
19 Query SET sql_mode='STRICT_TRANS_TABLES'
19 Query SELECT @@session.tx_isolation
19 Query rollback
19 Query SET autocommit=1
I was trying to 'clean up' the autocommit statements I was getting in the example above. The closest I got was by setting autocommit to off on the MySQL server. But even then, Hibernate feels compelled to issue at least one SET autocommit=0 statement at the beginning of a transaction. Why would it need to do that? To 'reset' the transaction state? Is there a configuration path somewhere in the stack I can use to prevent Hibernate from doing this?
References:
Java JDBC ref
Article on the problem
(edit - clarification)
I am re-reading my question and can understand how complicated it all sounds. I'm sorry I wasn't clear; I'll try to clarify. I want transactions. I get the InnoDB part and READ_COMMITTED. I also understand autocommit is essentially a transaction per SQL statement. Here's what I don't get:
Hibernate makes 2 calls to the DB that it doesn't need to make in my first example. I don't need it to toggle between autocommit=false and autocommit=true; I expect it to stay false all the time. But however I configure it, it still makes SET autocommit calls automatically, and I don't get why.
c3p0 exhibits somewhat similar behavior: it sets autocommit automagically when I don't want it to, and I also don't understand why it does that.
You need to decide if you are doing transactions or not.
It is important to understand how SQL servers work: there is no such thing as no transactions. What autoCommit=true does is make each statement a transaction in its own right, so you don't need an explicit BEGIN TRANSACTION and COMMIT around each statement (the SQL server implies this behavior).
Yes... it is important that you do SELECTs inside a transaction. The fact that you suggest it is not needed indicates you do not yet fully appreciate the point of relational integrity.
Hibernate expects relational integrity to exist. If you read some data now, then run some Java code, then use the data you previously read to access more data from SQL (for example using Hibernate's lazy loading feature), then it is important that the data still exists, isn't it? Otherwise, what should your Java code do? Throw the user an error?
This is what using transactions with, say, the default READ_COMMITTED transaction isolation level provides, and this is the minimum level at which you should start your quest to grok (learn) SQL.
So, are you doing transactions or not? The question is: do you want consistent, reliable, repeatable behavior, or fast but sometimes crazy behavior? (This is not to say transactions slow down performance; that is a common misconception. Only transactions that conflict with each other might lower performance, and working on large data sets in an open transaction increases that possibility.)
Re autoCommit=true|false affecting performance. I think you are asking the wrong question.
Re databases locking up due to long transactions: this is usually a code/code-design bug, and sometimes a design issue about how to go about doing things in SQL. Many small data-set transactions are good; one big large-data-set transaction is bad.
.
In order to get transactional support in MySQL you should ensure all your tables are InnoDB type using "CREATE TABLE tableName ( ... ) ENGINE=InnoDB;".
You should keep the SQL driver and pooled connection handle (via c3p0) at autoCommit=false (this should be the default).
You should ensure the default Transaction Isolation Level is (at least) READ_COMMITTED.
Don't forget to use the correct Hibernate dialect for InnoDB (check the hibernate startup logging output).
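Putting those points together, here is a minimal sketch of the connection-level settings you should end up with (illustrative; in practice the connection pool, not application code, usually owns this configuration):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class ConnectionSetup {
    static Connection open() throws SQLException {
        Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost/mydb", "user", "password"); // illustrative
        conn.setAutoCommit(false); // explicit transactions
        conn.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);
        return conn;
    }
}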
Now learn how to use all that. If you hit a problem, it is because you are doing something wrong and don't fully understand it yet. If you feel you need to lower the isolation level, stop using transactions, or set autoCommit=true, then you should seriously consider hiring a professional consultant to look at the project, and you should not turn any of this off until you understand why and when you can.
.
Update after your edit
The default autoCommit setting is a property of the driver and/or the connection pooler, so messing with Hibernate settings isn't where I'd start.
By default Hibernate checks what it gets and modifies it if necessary, then puts it back to how it was before returning the Connection to the connection pool (some broken connection poolers don't check/reset this, so Hibernate does it to improve its interoperation within a larger system).
So if it is assistance with setting up the connection pooler (c3p0) that you need, then maybe you should start by posting the details of how you configured it: the version you are using, an example of the JDBC connection string in use, and the MySQL driver jar you are using.
Also, what is your deployment scenario? Standalone J2SE? Web-app WAR? App-server?
Can you write some test code to get a Connection? There is an API to ask it what its autoCommit mode is: http://download.oracle.com/javase/6/docs/api/java/sql/Connection.html#getAutoCommit%28%29 Your goal is then to adjust your configuration so that you always get 'false' back from this API.
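For instance, something along these lines (assuming c3p0's ComboPooledDataSource; the JDBC URL and credentials are placeholders):

import java.sql.Connection;
import com.mchange.v2.c3p0.ComboPooledDataSource;

public class AutoCommitCheck {
    public static void main(String[] args) throws Exception {
        ComboPooledDataSource ds = new ComboPooledDataSource();
        ds.setJdbcUrl("jdbc:mysql://localhost/mydb");
        ds.setUser("user");
        ds.setPassword("password");

        try (Connection conn = ds.getConnection()) {
            // Goal: this should print false once the stack is configured right
            System.out.println("autoCommit = " + conn.getAutoCommit());
        } finally {
            ds.close();
        }
    }
}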
The MySQL driver documentation at http://dev.mysql.com/doc/refman/5.1/en/connector-j-reference-configuration-properties.html has some reference to autoCommit in it.
You also want to try running "mysqladmin variables" on your server to get back the server configuration; it may be possible to set it globally (which seems like a silly thing to do, IMHO).
In short, you need to find the source of where it is being set, since that is an unusual default. Then you can remove it.
I am going through Apache Cassandra and working on sample data insertion, retrieval, etc.
The documentation is very limited.
I am interested in knowing:
Can we completely replace a relational DB like MySQL/Oracle with Cassandra?
Does Cassandra support rollback/commit?
Do Cassandra clients (Thrift/Hector) support fetching associated objects (objects where we save one super column's key in another super column family)?
This will help me a lot to proceed further.
Thank you in advance.
Short answer: No.
By design, Cassandra values availability and partition tolerance over consistency. Basically, it's not possible to get acceptable latency while maintaining all three qualities: one has to be sacrificed. This is called the CAP theorem.
The amount of consistency is configurable in Cassandra using consistency levels, but there are no semantics for rollback. There's no guarantee that you'll be able to roll back your changes even if the first write succeeds.
If you want to build an application with transactions or locks on top of Cassandra, you probably want to look at ZooKeeper, which can be used to provide distributed synchronization.
You might've already guessed this, but Cassandra doesn't have foreign keys or anything like that; this has to be handled manually. I'm not that familiar with Hector, but a higher-level client might be able to do this semi-automatically.
Whether or not you can use Cassandra to easily replace a RDBMS depends on your specific use case. In your use case (based on your questions), it might be hard to do so.
In version 2.x you can combine CQL statements into a logged batch, which is atomic: either all of the statements succeed or none do. You can also read about lightweight transactions.
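For example, with the DataStax Java driver (3.x-era API; contact point, keyspace, and table are illustrative):

import com.datastax.driver.core.BatchStatement;
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.Session;

public class LoggedBatchSketch {
    public static void main(String[] args) {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect("mykeyspace")) {
            PreparedStatement insert =
                    session.prepare("INSERT INTO users (id, name) VALUES (?, ?)");

            // A LOGGED batch is atomic: all statements are eventually applied,
            // or none are. Note this is atomicity, not isolation.
            BatchStatement batch = new BatchStatement(BatchStatement.Type.LOGGED);
            batch.add(insert.bind(1, "alice"));
            batch.add(insert.bind(2, "bob"));
            session.execute(batch);
        }
    }
}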
More than that, there are several persistence managers for Cassandra with which you can achieve foreign-key-like behavior at the client level, for example Achilles and Kundera.
If ZooKeeper is able to handle transactions with Oracle-like quality, then it's a done deal. Relations and relational integrity are no problem to implement on top of ANY database; a foreign key is just another data field. ACID/transactions is the key issue.
Instead of commit and rollback, you must use a batch.
A batch works atomically: either all records across multiple tables are submitted, or none are.
For example:
var batch = new BatchStatement();
var batchItem = session.Prepare(stringCommand); // prepare the statement once
batch.Add(batchItem.Bind());                    // bind any values and add it to the batch
var result = session.ExecuteAsync(batch);       // the whole batch applies atomically
Of course you can, but it completely depends on your use case. If you don't pick the right DB for your use case, then you need to worry about lots of things on your own. For example, an RDBMS doesn't provide geographic distribution out of the box, so you'd need to find a way to do it yourself. With Cassandra, you lack some ACID properties under some conditions and need to handle them on the application side.
Yes, but only for certain use cases. You can use the batch feature: it supports rollback, but you lack isolation. I am not sure this feature exists in OSS Cassandra; for more info, look at the documentation.
I don't understand what you mean by super column. If you are asking about finding an ID in another table's columns, yes, you can do that, why not. But I definitely don't understand what you mean by super column.
Overall, Cassandra is not ACID compliant, but there are some features, like batches and lightweight transactions, that help you be ACID compliant under some conditions.