Recently my team ran into a situation in which some records in our shared test database disappear for no clear reason. Because it's a shared database (used by many teams), we can't track down whether it's a programming mistake or someone simply running a bad SQL script.
So I'm looking for a way to be notified (at the database level) when a row of a specific table A gets deleted. I have looked at Postgres TRIGGERs, but they don't give me the specific SQL that caused the deletion.
Is there any way I can log the SQL statement that causes the deletion of rows in table A?
You could use a trigger-based audit approach.
It allows you to create special triggers for PostgreSQL tables that log all changes to the chosen tables.
These triggers can log the query that caused the change (via current_query()).
Using this as a base, you can add more fields/information to the log.
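A minimal sketch of such a trigger, assuming an audit table and function named audit_log and log_delete (both names are illustrative, not part of the original answer):

-- Audit table that records each deleted row together with the statement
-- that deleted it (current_query() returns the currently executing SQL).
CREATE TABLE audit_log (
    deleted_at timestamptz NOT NULL DEFAULT now(),
    table_name text NOT NULL,
    old_row    text NOT NULL,
    query      text
);

CREATE OR REPLACE FUNCTION log_delete() RETURNS trigger AS $$
BEGIN
    INSERT INTO audit_log (table_name, old_row, query)
    VALUES (TG_TABLE_NAME, OLD::text, current_query());
    RETURN OLD;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER table_a_delete_audit
    BEFORE DELETE ON table_a
    FOR EACH ROW EXECUTE PROCEDURE log_delete();

current_query() reports the top-level statement sent by the client, which is exactly what you need to identify the offending script.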
Alternatively, you can do this in the actual Postgres config files:
http://www.postgresql.org/docs/9.0/static/runtime-config-logging.html
log_statement (enum)
Controls which SQL statements are logged. Valid values are none (off), ddl, mod, and all (all statements). ddl logs all data
definition statements, such as CREATE, ALTER, and DROP statements. mod
logs all ddl statements, plus data-modifying statements such as
INSERT, UPDATE, DELETE, TRUNCATE, and COPY FROM. PREPARE, EXECUTE, and
EXPLAIN ANALYZE statements are also logged if their contained command
is of an appropriate type. For clients using extended query protocol,
logging occurs when an Execute message is received, and values of the
Bind parameters are included (with any embedded single-quote marks
doubled).
The default is none. Only superusers can change this setting.
Since DELETE is a data-modifying statement, you want either mod or all as the setting. This is what you need to alter:
In your data/postgresql.conf file, change the log_statement setting to 'all'. The following may also need to be validated (a sample of the relevant lines follows the checklist):
1) make sure the log_destination variable is set
2) make sure you turn on the logging_collector
3) also make sure that the pg_log directory actually exists relative to your data directory, and that the postgres user can write to it
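A minimal postgresql.conf sketch covering those points; the values shown are the common 9.x defaults, so adjust them to your installation:

# postgresql.conf (relevant lines only)
log_destination   = 'stderr'   # where log output goes
logging_collector = on         # capture stderr into log files
log_directory     = 'pg_log'   # relative to the data directory
log_statement     = 'all'      # log every statement, including DELETEs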
Related
I have been struggling with an architectural problem.
I have a table in a DB2 v9.7 database into which I need to insert ~250,000 rows, with 13 columns each, in a single transaction. In particular, I need this data to be inserted as one unit of work.
A simple INSERT INTO with executeBatch gives me:
The transaction log for the database is full. SQL Code: -964, SQL State: 57011
I don't have the rights to change the size of the transaction log, so I need to resolve this problem on the developer's side.
My second thought was to use a savepoint before all the inserts, but then I found out that savepoints only work within the current transaction, so that doesn't help me.
Any ideas?
You want to perform a large insert as a single transaction, but don't have enough log space for such transaction and no permissions to increase it.
This means you need to break up your insert into multiple database transactions and manage higher level commit or rollback on the application side. There is not anything in the driver, either JDBC or CLI, to help with that, so you will have to write custom code to record all committed rows and manually delete them if you need to roll back.
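A minimal JDBC sketch of that chunking, assuming the rows are already in memory; the table name, column count, and chunk size are illustrative:

import java.sql.*;
import java.util.List;

// Sketch: commit after every chunk so no single transaction outgrows the
// log, and remember how many rows are committed so a failed run can be
// cleaned up by hand.
static void insertInChunks(Connection conn, List<Object[]> rows) throws SQLException {
    final int CHUNK = 10_000;           // pick what your transaction log can absorb
    int committed = 0;                  // rows already persisted (for manual cleanup)
    conn.setAutoCommit(false);
    try (PreparedStatement ps = conn.prepareStatement(
            "INSERT INTO my_table VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?)")) { // 13 columns
        for (int i = 0; i < rows.size(); i++) {
            Object[] row = rows.get(i);
            for (int c = 0; c < row.length; c++) {
                ps.setObject(c + 1, row[c]);
            }
            ps.addBatch();
            if ((i + 1) % CHUNK == 0 || i + 1 == rows.size()) {
                ps.executeBatch();
                conn.commit();          // each chunk is its own transaction
                committed = i + 1;
            }
        }
    } catch (SQLException e) {
        conn.rollback();                // undoes only the uncommitted chunk
        // The first 'committed' rows are already in the table; delete them
        // manually if the whole unit of work must be undone.
        throw e;
    }
}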
Another alternative might be to use the LOAD command by means of the ADMIN_CMD() system stored procedure. LOAD requires less log space. However, for this to work you will need to write rows that you want to insert into a file on the database server or to a shared filesystem or drive accessible from the server.
You can use the EXPORT/LOAD commands to export/import large tables; this should be very fast. The LOAD command should not use the transaction log. You may have a problem if your user has no privilege to write files on the server filesystem.
CALL SYSPROC.ADMIN_CMD('EXPORT TO /export/location/file.txt OF DEL MODIFIED BY COLDEL0x09 DECPT, SELECT * FROM some_table')
CALL SYSPROC.ADMIN_CMD('LOAD FROM /export/location/file.txt OF DEL MODIFIED BY COLDEL0x09 DECPT, KEEPBLANKS INSERT INTO other_table COPY NO')
I am working on a Java plugin interfacing with an H2 database. What I really want is an "INSERT IGNORE" statement; however, I'm aware that H2 doesn't support this. I am also aware of MERGE, but that is really not what I want: if the record exists, I don't want to change it.
What I am considering is to just run the insert and let the duplicate key exception happen. However, I don't want this to fill my log file. The DB call happens in an imported class that I can't change. So my questions are:
Is this a reasonable thing to do? I'm not one for letting errors happen, but this seems like the best way in this case (it should not happen all that much).
How can I keep this exception from hitting my log file? If there isn't a way to block exceptions down the stack, can I redirect the output of the stack trace that is output?
Thanks.
One solution is to use:
insert into test
select 1, 'Hello' from dual
where not exists(select * from test where id = 1)
This should work for all databases (except for the dual part; you may need to create your own dummy table with one row).
To disable logging exceptions, append ;trace_level_file=0 to the database URL:
jdbc:h2:~/test;trace_level_file=0
or run the SQL statement:
set trace_level_file 0
I have an employee management application. I am using a MySQL database.
In my application, I have functionality like add/edit/delete/view.
Whenever I use any of these functions, one query is fired against the database. For example, add employee fires an insert query.
I want to do something in my database so that I can see how many queries have been fired to date.
I don't want to make any changes to my Java code.
You can use SHOW STATUS:
SHOW GLOBAL STATUS LIKE 'Questions'
As documented under Server Status Variables:
The status variables have the following meanings.
[ deletia ]
Questions
The number of statements executed by the server. This includes only statements sent to the server by clients and not statements executed within stored programs, unlike the Queries variable. This variable does not count COM_PING, COM_STATISTICS, COM_STMT_PREPARE, COM_STMT_CLOSE, or COM_STMT_RESET commands.
Beware that:
the statistics are reset when FLUSH STATUS is issued.
the SHOW STATUS command is itself a statement and will increment the Questions counter.
these statistics are server-wide and will therefore include other databases on the same server (if any exist); a feature request for per-database statistics has been open since January 2006, and in the meantime per-table statistics can be obtained from google-mysql-tools/UserTableMonitoring.
You can execute the queries below:
To get the SELECT query count, execute SHOW GLOBAL STATUS LIKE 'Com_select';
To get the UPDATE query count, execute SHOW GLOBAL STATUS LIKE 'Com_update';
To get the DELETE query count, execute SHOW GLOBAL STATUS LIKE 'Com_delete';
To get the INSERT query count, execute SHOW GLOBAL STATUS LIKE 'Com_insert';
You can also analyze the general log or route your application via a MySQL proxy to get all queries executed on a server.
If you don't want to modify your code, you can trace this on the database with triggers. The restriction is that triggers can only fire on insert/update/delete, so they can't be used to count reads (SELECTs).
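A minimal sketch of that idea, assuming a counter table named query_counts and an employee table (both names are illustrative). Note that a FOR EACH ROW trigger counts affected rows rather than statements:

-- One counter row per write operation.
CREATE TABLE query_counts (
    op  VARCHAR(10) PRIMARY KEY,
    cnt BIGINT NOT NULL DEFAULT 0
);
INSERT INTO query_counts VALUES ('insert', 0), ('update', 0), ('delete', 0);

-- Bump the insert counter whenever a row lands in employee; analogous
-- AFTER UPDATE / AFTER DELETE triggers cover the other counters.
CREATE TRIGGER employee_insert_count AFTER INSERT ON employee
FOR EACH ROW UPDATE query_counts SET cnt = cnt + 1 WHERE op = 'insert';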
Maybe this is too "enterprise" and too "production" for your question.
If you use Munin (http://munin-monitoring.org/) (other monitoring tools have similar extensions), you can use its MySQL monitoring plugins, which show you how many requests (split into insert/update/load data/...) you are firing.
With these tools, you see the usage and the load you are producing.
Especially when data changes and may cause more accesses/load (missing indices, more queries because of big m:n tables, ...), you will recognize it.
It's extremely handy, and you can do the check during your break. No typing, nothing; just check the graphs.
I think the most accurate method, which needs no modifications to the database or application in order to operate, would be to configure your database management system to log all events.
You are left with a log file, which is a text file that can be analyzed on demand.
The General Query Log manual page will get you started.
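For example, on a reasonably recent MySQL the general log can be switched on at runtime, without a restart; the file path here is illustrative:

SET GLOBAL general_log_file = '/var/log/mysql/general.log'; -- illustrative path
SET GLOBAL log_output = 'FILE';  -- or 'TABLE' to query mysql.general_log instead
SET GLOBAL general_log = 'ON';   -- log every statement from every client

Beware that the general log records every statement, so it grows quickly on a busy server; switch it off once you have what you need.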
I have a list of strings that contain valid SQL expressions.
I need to execute only those that do not modify the database.
What would be the best way to do this? Just doing something like:
if (sqlQuery.contains("DELETE")) {
    // don't execute this
}
seems like a bad hack.
Update:
I'll make this more specific.
I already have a list of SQL queries that are allowed. I want to make sure only these are executed.
What would be the best way to match against these?
The easiest and best (most comprehensive) way to do this is to create a read-only user and only connect to the database with that user. In SQL Server, the easiest way to do this is to create the user and add them to the built-in "db_datareader" role. This will only allow SELECTs.
And you have to worry about more than just DELETEs, INSERTs or UPDATEs. You also have to be careful about calling any stored procedures, so to be safe you'd also want to remove execute rights, ALTER rights, GRANT rights, etc...
EDIT:
Just execute this...
CREATE LOGIN [user] WITH PASSWORD = 'password', DEFAULT_DATABASE = [your_db], CHECK_POLICY = OFF
GO
USE [your_db]
GO
CREATE USER [user] FOR LOGIN [user]
EXEC sp_addrolemember N'db_datareader', N'user'
GO
DELETE is not the only SQL instruction that might modify your database; INSERT definitely will, and UPDATE might (depending on your exact query). So just analyzing the strings is a fragile way of doing this.
As long as performance is not really an issue, you could start a transaction, run your instructions one by one, check the number of affected rows for each of them, and finally rollback your transaction. Afterwards, you only run those statements that affected 0 rows.
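A minimal JDBC sketch of that dry run; the affected-row counts come from Statement.getUpdateCount(), and everything is rolled back regardless of the outcome:

import java.sql.*;
import java.util.ArrayList;
import java.util.List;

// Sketch: execute each candidate statement inside one transaction, record
// how many rows it touched, then roll the whole transaction back. Only
// statements that touched 0 rows are reported as safe to re-run for real.
static List<String> probeSafeStatements(Connection conn, List<String> candidates)
        throws SQLException {
    List<String> safe = new ArrayList<>();
    conn.setAutoCommit(false);
    try (Statement st = conn.createStatement()) {
        for (String sql : candidates) {
            boolean producedResultSet = st.execute(sql);
            // SELECTs produce a result set (update count is -1), so they pass.
            if (producedResultSet || st.getUpdateCount() == 0) {
                safe.add(sql);
            }
        }
    } finally {
        conn.rollback();            // undo any modifications made while probing
        conn.setAutoCommit(true);
    }
    return safe;
}

Note that a statement affecting 0 rows now might affect rows later, so this probe is only a point-in-time check.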
Besides, check your database documentation: some RDBMS-es (like Oracle) don't support rollback of DDL statements like ALTER TABLE, DROP TABLE and the like...
I don't think there's a bulletproof way of preventing the alteration of records by simply checking the content of the given SQL. For example, a column might contain the value "update", and a perfectly harmless query for rows with that value would be rejected because it contains a "blacklisted" string.
I guess the only safe way is to execute the SQL as a user who has no rights to alter records at all.