I have a list of strings that contain valid SQL expressions.
I need to execute only those that do not modify the database.
What would be the best way to do this? Just doing something like:
if (sqlQuery.contains("DELETE")) {
    // don't execute this
}
seems like a bad hack
Update:
I'll make this more specific.
I already have a list of SQL queries that are allowed. I want to make sure only these are executed.
What would be the best way to match against these?
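One rough way to match against such an allow-list (the class, the entries, and the deliberately naive normalization below are purely illustrative, not a recommendation) would be to store the allowed queries in normalized form and compare each incoming query against that set:
import java.util.Set;

public final class QueryWhitelist {
    // The allowed queries, stored in normalized form (entries are just examples).
    private static final Set<String> ALLOWED = Set.of(
            normalize("SELECT id, name FROM customers WHERE id = ?"),
            normalize("SELECT count(*) FROM orders"));

    // Deliberately naive normalization: collapse whitespace and lower-case.
    // It will not recognize semantically identical but differently written SQL.
    private static String normalize(String sql) {
        return sql.trim().replaceAll("\\s+", " ").toLowerCase();
    }

    public static boolean isAllowed(String sql) {
        return ALLOWED.contains(normalize(sql));
    }
}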
The easiest and best (most comprehensive) way to do this is to create a read-only user and only connect to the database with that user. In SQLServer, the easiest way to do this is to create the user and add them to the built-in "db_datareader" role. This will only allow SELECTs.
And you have to worry about more than just DELETEs, INSERTs or UPDATEs. You also have to be careful about calling any stored procedures, so to be safe you'd also want to remove execute rights, ALTER rights, GRANT rights, etc...
EDIT:
Just execute this...
CREATE LOGIN [user] WITH PASSWORD='password', DEFAULT_DATABASE=[your_db], CHECK_POLICY=OFF
GO
USE [your_db]
GO
CREATE USER [user] FOR LOGIN [user]
GO
-- the second argument is the user to add to the role, not the database name
EXEC sp_addrolemember N'db_datareader', N'user'
GO
DELETE is not the only SQL instruction that might modify your database; INSERT will definitely do so, and UPDATE might (depending on your exact query). So just analysing the Strings might be a hard way of doing this.
As long as performance is not really an issue, you could start a transaction, run your instructions one by one, check the number of affected rows for each of them, and finally rollback your transaction. Afterwards, you only run those statements that affected 0 rows.
Besides, check your database documentation: some RDBMS-es (like Oracle) don't support rollback of DDL statements like ALTER TABLE, DROP TABLE and the like...
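For illustration, a rough JDBC sketch of that probing idea could look like the following (the connection handling and statement list are placeholders, not production code):
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.List;

public class ReadOnlyProbe {
    // Runs each statement inside a transaction that is always rolled back and
    // returns the ones that produced a result set or affected 0 rows.
    public static List<String> probe(Connection conn, List<String> statements) throws SQLException {
        List<String> harmless = new ArrayList<>();
        boolean previousAutoCommit = conn.getAutoCommit();
        conn.setAutoCommit(false);                         // start a transaction
        try (Statement st = conn.createStatement()) {
            for (String sql : statements) {
                boolean hasResultSet = st.execute(sql);    // SELECTs return a result set
                // getUpdateCount() is -1 for result sets and 0..n for DML statements
                if (hasResultSet || st.getUpdateCount() == 0) {
                    harmless.add(sql);
                }
            }
        } finally {
            conn.rollback();                               // undo any changes made while probing
            conn.setAutoCommit(previousAutoCommit);
        }
        return harmless;
    }
}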
I don't think there's a bulletproof way of preventing the alteration of records by simply checking the content of the given SQL. For example, you might have a field that contains the value "update", and some user is trying to query all rows which contain this value, yet the SQL would not be executed, since it contains a "blacklisted" string.
I guess the only safe way would be to execute the SQL with a user who has no rights to alter records at all.
I am using Hibernate with MS SQL Server, writing software that integrates with an existing database. There is an INSTEAD OF INSERT trigger on the table that I need to insert into, and it messes up @@IDENTITY, which means that on Hibernate's save I can't get the id of the inserted row. I can't control the trigger (can't modify it). I saw this question, but it involves procedures, which my trigger does not have, so I thought my question is different enough. I can't post the whole trigger, but hopefully I can post enough to get the point across:
CREATE TRIGGER TrigName ON TableName
INSTEAD OF INSERT
AS
SET XACT_ABORT ON
BEGIN TRANSACTION
-- several DECLARE, SET statements
-- a couple of inserts into other tables for business logic
-- plain T-SQL statements without procedures or functions
...
-- this is the actual insert that i need to perform
-- to be honest, I don't quite understand how INSERTED table
-- was filled with all necessary columns by this point, but for now
-- I accept it as is (I am no SQL pro...)
INSERT INTO ClientTable (<columns>)
SELECT <same columns> from INSERTED
-- a couple of UPDATE queries to unrelated tables
...
COMMIT TRANSACTION;
I was wondering if there is a reliable way to get the id of the row being inserted? One solution I thought of and tried is to install an ON INSERT trigger on the same table that writes the newly inserted row into a new table I added to the db. I'd use that table as a queue. After the transaction commit in Hibernate I could go into that table and run a select with the info I just inserted (I still have access to it from the same method scope), get the id, and finally remove that row. This is a bulky solution, but the best I can come up with so far.
Would really appreciate some help. I can't modify existing triggers and procedures, but I can add something to the db if it absolutely does not affect existing logic (like that new table and a on insert trigger).
To sum up: I need to find a way to get the ID of the row I just inserted with Hibernate's save call. Because of that INSTEAD OF INSERT trigger, Hibernate always returns identity=0. I need to find a way to get that ID because I need to do the insert in a few other tables during one transaction.
I think I found an answer for my question. To reply to @SeanLange's comment: I can't actually edit the insert code - it's done by another application, and a request to change that would take too long (or won't happen - it's a legacy application). What I did is add another trigger, ON INSERT, on the same table. Since I know the order of operations in the existing INSTEAD OF INSERT trigger, I can see that the last insert operation will be in the table I want, which means my ON INSERT trigger will fire right after that. In the scope of that trigger I have access to the inserted table, out of which I pull the id.
CREATE TRIGGER Client_OnInsert ON myClientTable
FOR INSERT
AS
BEGIN
    -- assumes single-row inserts (as Hibernate's save does), since "inserted" is read into a scalar
    DECLARE @ID int;
    SET @ID = (SELECT ClientID FROM inserted);
    INSERT INTO ModClient (modClientId)
    OUTPUT @ID
    VALUES (@ID);
END
GO
Then in Hibernate (since I can't use save() anymore), I use a NativeQuery to do this insert. I set the parameters and run the NativeQuery's list() method, which returns a List whose first and only element is the id I want.
This is a bulky way, I know. If there is anything that's really bad that will stand out to people - please let me know. I would really appreciate some feedback on this. However, I wanted to post this answer as a potential answer that worked so far, but it does not mean it's very good. For this solution to work I did have to create another small table ModClient, which I will have to use as a temp id storage for this exact purpose.
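Roughly, the Hibernate side could look like the sketch below (the table name, columns and parameters are placeholders, not the real schema; the list() result is the OUTPUT row produced by Client_OnInsert):
import java.util.List;
import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.Transaction;

public class ClientInsertDao {
    // Sketch only: TableName, its columns and the parameters stand in for the real schema.
    public int insertAndGetClientId(SessionFactory sessionFactory, String colA, String colB) {
        Session session = sessionFactory.openSession();
        Transaction tx = session.beginTransaction();
        try {
            List<?> result = session.createNativeQuery(
                        "INSERT INTO TableName (ColA, ColB) VALUES (:a, :b)")
                    .setParameter("a", colA)
                    .setParameter("b", colB)
                    .list();   // the OUTPUT row produced by Client_OnInsert comes back here

            int clientId = ((Number) result.get(0)).intValue();
            // ... do the other inserts that need clientId within the same transaction ...
            tx.commit();
            return clientId;
        } catch (RuntimeException e) {
            tx.rollback();
            throw e;
        } finally {
            session.close();
        }
    }
}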
I am working on a Java plugin interfacing with an H2 database. What I really want is an "INSERT IGNORE" statement; however, I'm aware that H2 doesn't support this. I am also aware of MERGE, but that is really not what I want: if the record exists, I don't want to change it.
What I am considering is to just run the insert and let the duplicate key exception happen. However, I don't want this to fill my log file. The DB call happens in an imported class that I can't change. So my questions are:
Is this a reasonable thing to do? I'm not one for letting errors happen, but this seems like the best way in this case (it should not happen all that much).
How can I keep this exception from hitting my log file? If there isn't a way to block exceptions down the stack, can I redirect the output of the stack trace that is output?
Thanks.
One solution is to use:
insert into test
select 1, 'Hello' from dual
where not exists(select * from test where id = 1)
This should work for all databases (except for the dual part; you may need to create your own dummy table with one row).
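Used from JDBC, that could look something like this (the table "test" and its columns are just examples, and the DUAL caveat above still applies):
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class InsertIfAbsent {
    // Hypothetical table "test" with columns (id, name); adjust to the real schema.
    // If DUAL is not available, use a one-row dummy table instead, as noted above.
    private static final String SQL =
            "insert into test (id, name) "
          + "select ?, ? from dual "
          + "where not exists (select * from test where id = ?)";

    public static boolean insertIgnore(Connection conn, int id, String name) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(SQL)) {
            ps.setInt(1, id);
            ps.setString(2, name);
            ps.setInt(3, id);
            return ps.executeUpdate() == 1;   // 0 means the row already existed
        }
    }
}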
To disable logging exceptions, append ;trace_level_file=0 to the database URL:
jdbc:h2:~/test;trace_level_file=0
or run the SQL statement:
set trace_level_file 0
Recently my team ran into a situation in which some records in our shared test database disappeared for no clear reason. Because it's a shared database (utilized by many teams), we can't track down whether it was a programming mistake or someone just ran a bad SQL script.
So I'm looking for a way to be notified (at the database level) when a row of a specific table A gets deleted. I have looked at Postgres TRIGGERs, but they failed to give me the specific SQL that caused the deletion.
Is there any way I can log the SQL statement which caused the deletion of some rows in table A?
You could use something like this.
It allows you to create special triggers for PostgreSQL tables that log all the changes to the chosen tables.
These triggers can log the query that caused the change (via current_query()).
Using this as a base, you can add more fields/information to the log.
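A much-simplified, hypothetical version of that idea, installed from Java via plain JDBC (this is not the linked project's actual code; the table "table_a", the log table and the trigger names are made up):
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class DeleteAuditInstaller {
    // Installs a simple delete-audit trigger on a hypothetical table "table_a".
    public static void install(Connection conn) throws SQLException {
        try (Statement st = conn.createStatement()) {
            st.execute("CREATE TABLE IF NOT EXISTS table_a_delete_log ("
                    + " deleted_at timestamptz NOT NULL DEFAULT now(),"
                    + " deleted_row text,"
                    + " causing_query text)");
            st.execute("CREATE OR REPLACE FUNCTION log_table_a_delete() RETURNS trigger AS $$ "
                    + "BEGIN "
                    + "  INSERT INTO table_a_delete_log (deleted_row, causing_query) "
                    + "  VALUES (OLD::text, current_query()); "   // current_query() is the offending SQL
                    + "  RETURN OLD; "
                    + "END; $$ LANGUAGE plpgsql");
            st.execute("DROP TRIGGER IF EXISTS table_a_delete_audit ON table_a");
            st.execute("CREATE TRIGGER table_a_delete_audit "
                    + "AFTER DELETE ON table_a "
                    + "FOR EACH ROW EXECUTE PROCEDURE log_table_a_delete()");
        }
    }
}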
You would do this in the actual Postgres config files:
http://www.postgresql.org/docs/9.0/static/runtime-config-logging.html
log_statement (enum)
Controls which SQL statements are logged. Valid values are none (off), ddl, mod, and all (all statements). ddl logs all data
definition statements, such as CREATE, ALTER, and DROP statements. mod
logs all ddl statements, plus data-modifying statements such as
INSERT, UPDATE, DELETE, TRUNCATE, and COPY FROM. PREPARE, EXECUTE, and
EXPLAIN ANALYZE statements are also logged if their contained command
is of an appropriate type. For clients using extended query protocol,
logging occurs when an Execute message is received, and values of the
Bind parameters are included (with any embedded single-quote marks
doubled).
The default is none. Only superusers can change this setting.
You want either mod or all to be the selection (ddl alone will not log DELETEs). This is what you need to alter:
In your data/postgresql.conf file, change the log_statement setting to 'all'. Further, the following may also need to be validated:
1) make sure you have set the log_destination variable
2) make sure you turn on the logging_collector
3) also make sure that pg_log actually exists relative to your data directory, and that the postgres user can write to it.
taken from here
I've just tested my application under a profiler and found out that SQL strings use about 30% of my memory! This is bizarre.
There are a lot of strings like this stored in app memory. These are SQL queries generated by Hibernate; note the different numbers and trailing underscores:
select avatardata0_.Id as Id4305_0_,...... where avatardata0_.Id=? for update
select avatardata0_.Id as Id4347_0_,...... where avatardata0_.Id=? for update
Here is the part I can't understand. Why does Hibernate have to generate different SQL strings with different identifiers like "Id4305_0_" for each query? Why can't it use one query string for all identical queries? Is this some kind of trick to bypass query caching?
I would greatly appreciate it if someone would describe why this is happening and how to avoid such a waste of resources.
UPDATE
Ok. I found it. I was wrong to assume a memory leak; it was my fault. Hibernate is working as intended.
My app created 121(!) SessionFactories in 10 threads, and they produced about 2300 instances of SingleTableEntityPersister. Each SingleTableEntityPersister generates about 15 SQL queries with different identifiers, so Hibernate was forced to generate about 345,000 different SQL queries. Everything is fine, nothing weird :)
There is logic behind the query string that Hibernate generates. Its primary aim is to get unique aliases for table and column names.
From your query,
select avatardata0_.Id as Id4305_0_,...... where avatardata0_.Id=?
avatardata0_ ==> avatardata is the alias of the table, and 0_ is appended to indicate it is the first table in the query. So if it were the second table (or entity) in the query, it would have been shown as avatardata1_. It uses the same logic for the column aliases.
So, this way all the possible conflicts are avoided.
You are seeing these queries because you have turned on the show_sql flag in the configuration. This is intended for debugging queries. Once your application is working, you are supposed to turn it off.
Read more in the API docs here.
I am not much aware of the memory consumption part, but repeat your tests with the above flag turned off and see if there is any improvement.
Assuming you are using SQL Server, you might want to check the parameter type declaration for '?', making sure the declaration results in the same, fixed-length declaration every time.
Dynamic-length parameters would result in separate execution plans for each query. This could consume a lot of resources. What we see as the same procedure gets interpreted by SQL Server as a different query, rendering a separate execution plan.
Thus,
exec myprocedure @p1 varchar(3)='foo'
and
exec myprocedure @p1 varchar(6)='foobar'
would result in different plans, simply because the declarations of @p1 differ in size.
There is a lot to know about this behaviour. If the above applies to you, I would recommend you read up on 'parameter sniffing'.
No... you can write your own common query inside Hibernate instead of relying on the generated aliases. The logic behind it is to map to the table and fetch the records from there, so the same common query is used for all the databases. Create a common query like this:
Example:
select t.Id as Id, ...... from t where t.Id = ?
I am looking for the MySQL equivalent of CONTEXT_INFO, which is present in SQL Server, or any other session-variable-like mechanism I can use to pass the username to the trigger.
I am currently working on logging table data for audit. I need to pass the username of the logged-in user to the delete trigger.
Any ideas? We are deleting the rows from the table in a few cases and marking them as deleted in others.
Any alternate solutions are welcome. I thought of using AOP, but it could prove problematic with cascade deletes. I want to look into Hibernate Interceptors, though I'm not sure at this point if that works.
If I can find the MySQL equivalent of CONTEXT_INFO, my job is done and elegant as well.
Thanks,
Julia.
You should be able to get the current user with the USER() function. See the doc for details.
Ok, I didn't quite understand what you were asking. I think you may want to take a look at MySQL's support for connection-level user variables. Basically, in the connection that will run the UPDATE / INSERT / DELETE, but before the actual query runs, you need to execute a SET statement such as SET @user = 'my_user_id'. Then you should be able to use @user as the user in your trigger.
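From the Java side that could look roughly like the sketch below (the variable name @app_user and the clients table are made up; the key point is that the SET and the DELETE must run on the same connection, so watch out for connection pools):
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class AuditedDelete {
    // Stashes the application user in a session variable, then deletes on the SAME connection
    // so the delete trigger can read @app_user. Table and variable names are illustrative.
    public static void deleteClient(Connection conn, String appUser, long clientId) throws SQLException {
        try (PreparedStatement set = conn.prepareStatement("SET @app_user = ?")) {
            set.setString(1, appUser);
            set.execute();
        }
        try (PreparedStatement del = conn.prepareStatement("DELETE FROM clients WHERE id = ?")) {
            del.setLong(1, clientId);
            del.executeUpdate();
        }
    }
}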