The following query is performed concurrently by two threads logged in as two different users:
WITH raw_stat AS (
  SELECT host(client_addr) AS client_addr, pid, usename
  FROM pg_stat_activity
  WHERE usename = current_user
)
INSERT INTO my_stat(id, client_addr, pid, usename)
SELECT nextval('mystat_sequence'), t.client_addr, t.pid, t.usename
FROM (
  SELECT client_addr, pid, usename
  FROM raw_stat s
  WHERE NOT EXISTS (
    SELECT NULL
    FROM my_stat u
    WHERE current_date = u.creation
      AND s.pid = u.pid
      AND s.client_addr = u.client_addr
      AND s.usename = u.usename
  )
) t;
From time to time, I get the following error:
tuple concurrently updated
I can't figure out what throws this error or why it is thrown. Can you shed some light?
Here is the SQL definition of the table mystat.
mystats.sql
CREATE TABLE mystat
(
id bigint NOT NULL,
creation date NOT NULL DEFAULT current_date,
client_addr text NOT NULL,
pid integer NOT NULL,
usename name NOT NULL,
CONSTRAINT mystat_pkey PRIMARY KEY (id)
)
WITH (
OIDS=FALSE
);
This isn't really an answer, so much as something that may help someone else who stumbles on this error.
In my case, I was trying to be fancy and encapsulate the creation of all my functions within one function.
Something like
CREATE OR REPLACE FUNCTION main_func()
BEGIN
    CREATE OR REPLACE FUNCTION child_func1()
    BEGIN
    END
    CREATE OR REPLACE FUNCTION child_func2()
    BEGIN
    END
    main func stuff...
END
For whatever reason, I could call this function with no problem from inside pgAdmin, and I could call it as much as I wanted from Java -> MyBatis.
However, as soon as I started calling the function from two different threads, I got the error from the OP: ERROR: tuple concurrently updated
The fix was simply to take those child functions out of the main function and maintain them separately.
Looking back on it, it's a pretty bad idea to create functions as a side effect of calling a function. However, the idea was to 'encapsulate' all the functionality together.
Hope this helps someone.
If the pgsql-hackers threads are anything to go by, the error kicks in when the same row is concurrently updated by competing transactions. In your case it's likely due to the NOT EXISTS() clause, which can yield true in both transactions and thus allow two competing inserts of the same tuple.
To work around it, you'd want to use more robust locking (e.g. a predicate lock), use the serializable isolation level, or place the needed logic in an upsert statement (which can be done using a function with an exception block).
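For the upsert route: on PostgreSQL 9.5 or later, the insert-if-absent logic can be collapsed into a single statement with ON CONFLICT. This is only a sketch; the unique constraint it relies on is an assumption, not part of the original schema:

```sql
-- Assumed constraint (not in the original schema):
-- ALTER TABLE my_stat ADD CONSTRAINT my_stat_uniq
--   UNIQUE (creation, pid, client_addr, usename);

INSERT INTO my_stat (id, client_addr, pid, usename)
SELECT nextval('mystat_sequence'), host(client_addr), pid, usename
FROM pg_stat_activity
WHERE usename = current_user
ON CONFLICT (creation, pid, client_addr, usename) DO NOTHING;
```

Note that nextval() is still consumed for rows skipped by the conflict clause, so gaps in the id sequence are expected.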
From the Postgres docs (https://www.postgresql.org/docs/current/functions-sequence.html): because sequences are non-transactional, changes made by setval are not undone if the transaction rolls back.
This means you need to provide thread safety yourself, so running the query inside a transaction might fix your problem.
I managed to solve my problem by changing my query to this one:
INSERT INTO my_stat(id, client_addr, pid, usename)
SELECT nextval('mystat_sequence'), client_addr, pid, usename
FROM (
  SELECT host(client_addr) AS client_addr, pid, usename
  FROM pg_stat_activity
  WHERE usename = current_user
) s
WHERE NOT EXISTS (
  SELECT NULL
  FROM my_stat u
  WHERE current_date = u.creation
    AND s.pid = u.pid
    AND s.client_addr = u.client_addr
    AND s.usename = u.usename
);
I think something was happening under the hood in the PostgreSQL internals, but I can't figure out what...
In jOOQ I am re-using a CTE in a later CTE. I am trying to summarise student completion records by year and school. I am using jOOQ 3.11.2 and Postgres 9.4.
I have working SQL code; however, in jOOQ I am getting null values returned.
This appears to be a problem with how I am re-using one CTE in a later CTE.
At first, I thought it might be a problem with the use of count(). From the manual, it looks like count() is being used correctly. As a test, I removed all references to count() from the query and still got the same error.
I could not find examples of reusing or chaining CTEs in jOOQ. It is easy enough in SQL, as shown here: SQL - Use a reference of a CTE to another CTE, but I haven't got the hang of it in jOOQ.
When run in debug mode on Intellij, I see an error that the select() statement cannot be evaluated in the second CTE.
Cannot evaluate org.jooq.impl.SelectImpl.toString()
Here is a minimal example showing what I am doing.
CommonTableExpression<Record4<String, String, String, Year>> cteOne = name("cteOne")
    .fields("SCHOOL", "STUDENT_NAME", "COURSE_COMPLETED", "YEAR_COMPLETED")
    .as(
        select( a.NAME.as("SCHOOL")
              , a.STUDENT_NAME
              , a.COURSE_DESCRIPTION.as("courseCompleted")
              , a.YEAR_COMPLETED
        )
        .from(a)
        .orderBy(a.YEAR_COMPLETED)
    );
CommonTableExpression<Record3<String, Year, Integer>> cteCounts = name("cteCounts")
    .fields("SCHOOL", "YEAR_COMPLETED", "NUM_COMPLETED")
    .as( with(cteOne)
        .select(
            field(name("cteOne", "SCHOOL"), String.class)
          , field(name("cteOne", "YEAR_COMPLETED"), Year.class)
          , count().as("NUM_COMPS_LOGGED")
        )
        .from(cteOne)
        .groupBy(
            field(name("cteOne", "YEAR_COMPLETED"), Year.class)
          , field(name("cteOne", "SCHOOL"), String.class)
        )
        .orderBy(
            field(name("cteOne", "YEAR_COMPLETED"), Year.class)
          , field(name("cteOne", "SCHOOL"), String.class)
        )
    );
Can someone please point me in the right direction on this?
Just like in the plain SQL version of your query, your cteCounts should not have a with(cteOne) clause:
WITH
cteOne (columns...) AS (select...),
cteCounts (columns...) AS (select referencing cteOne, no "with cteOne" here...)
SELECT ...
FROM ...
Remove it, and your query should be fine.
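For reference, the plain-SQL shape that jOOQ should render declares both CTEs in a single WITH clause, the second referencing the first directly. The source table and columns below are hypothetical stand-ins for the ones in the question:

```sql
WITH "cteOne" ("SCHOOL", "STUDENT_NAME", "COURSE_COMPLETED", "YEAR_COMPLETED") AS (
    SELECT name, student_name, course_description, year_completed
    FROM completions  -- hypothetical source table
),
"cteCounts" ("SCHOOL", "YEAR_COMPLETED", "NUM_COMPLETED") AS (
    SELECT "SCHOOL", "YEAR_COMPLETED", count(*)
    FROM "cteOne"
    GROUP BY "SCHOOL", "YEAR_COMPLETED"
)
SELECT *
FROM "cteCounts"
ORDER BY "YEAR_COMPLETED", "SCHOOL";
```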
I am retrieving data from a database using JDBC. In my code I am using 3-4 tables to get data, but sometimes, if a table is not present in the database, my code throws an exception. How do I handle this situation? I want my code to continue working for the other tables even if one table is not present. Please help.
I have written code like this:
sql="select * from table"
then I process the ResultSet, and so on.
If the table is not present in the database, it throws an exception that there is no such table. I want to handle that. In this code I cannot know the existing tables in advance; I want to check at that point whether the table is there or not.
Please do not mark this as a duplicate question. The link you shared does not give me the required answer, as in that question they are executing queries in the database directly, not through JDBC code.
For Sybase ASE the easiest/quickest method would consist of querying the sysobjects table in the database where you expect the (user-defined) table to reside:
select 1 from sysobjects where name = 'table-name' and type = 'U'
if a record is returned => table exists
if no record is returned => table does not exist
How you use the (above) query is up to you ...
return a 0/1-row result set to your client
assign a value to a @variable
place in a if [not] exists(...) construct
use in a case statement
If you know for a fact that there won't be any other object types (eg, proc, trigger, view, UDF) in the database with the name in question then you could also use the object_id() function, eg:
select object_id('table-name')
if you receive a number => the object exists
if you receive a NULL => the object does not exist
While object_id() will obtain an object's id from the sysobjects table, it does not check the object type; e.g., the (above) query will return a number if there's a stored proc named 'table-name'.
As with the select/sysobjects query, how you use the function call in your code is up to you (e.g., result set, populate a @variable, if [not] exists() construct, case statement).
So, addressing the additional details provided in the comments ...
Assuming you're submitting a single batch that needs to determine table existence prior to running the desired query(s):
-- if table exists, run query(s); obviously if table does not exist then query(s) is not run
if exists(select 1 from sysobjects where name = 'table-name' and type = 'U')
begin
execute("select * from table-name")
end
execute() is required to keep the optimizer from generating an error that the table does not exist, i.e., the query is not parsed/compiled unless the execute() is actually invoked.
If your application can be written to use multiple batches, something like the following should also work:
# application specific code; I don't work with java but the gist of the operation would be ...
run-query-in-db("select 1 from sysobjects where name = 'table-name' and type = 'U'")
if-query-returns-a-row
then
run-query-in-db("select * from table-name")
fi
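To make this concrete on the JDBC side, here is a minimal sketch of the two-batch approach. The class and method names are invented for illustration, and the query assumes Sybase's sysobjects catalog as described above:

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class TableCheck {

    // Builds the sysobjects existence query for a user table.
    // Single quotes are doubled so the literal stays well-formed.
    static String existenceQuery(String tableName) {
        return "select 1 from sysobjects where name = '"
                + tableName.replace("'", "''") + "' and type = 'U'";
    }

    // Returns true if the table exists; assumes an open connection.
    static boolean tableExists(Connection conn, String tableName) throws SQLException {
        try (Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery(existenceQuery(tableName))) {
            return rs.next(); // a row came back => the table exists
        }
    }

    public static void main(String[] args) {
        // No database here; just show the generated check query.
        System.out.println(existenceQuery("my_table"));
    }
}
```

With tableExists() in place, the application can skip the SELECT * for missing tables instead of catching the "no such table" exception.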
This is the way of checking if the table exists and drop it:
IF EXISTS (
SELECT 1
FROM sysobjects
WHERE name = 'a_table'
AND type = 'U'
)
DROP TABLE a_table
GO
And this is how to check if a table exists and create it.
IF NOT EXISTS (
SELECT 1
FROM sysobjects
WHERE name = 'a_table'
AND type = 'U'
)
EXECUTE("CREATE TABLE a_table (
col1 int not null,
col2 int null
)")
GO
(They are different because in table-drop a temporary table gets created, so if you try to create a new one you will get an exception that it already exists)
Before running a query that risks the table not existing, run the following SQL query and check whether the number of results is >= 1. If it is, you are safe to execute the normal query; otherwise, do something to handle the situation.
SELECT count(*)
FROM information_schema.TABLES
WHERE (TABLE_SCHEMA = 'your_db_name') AND (TABLE_NAME = 'name_of_table')
I am no expert in Sybase, but take a look at this:
exec sp_tables '%', '%', 'master', "'TABLE'"
Sybase Admin
I'm confused about the implementation of CRUD methods for DAODatabase (for Oracle 11 XE).
The problem is that the "U" method (update), in the case of storing to a Map collection, either inserts a new element or renews it (key-value data like ID:AbstractBusinessObject). You don't have to care about the difference when you write something like myHashMap.put(key, element). This update method is widely used in the project's business logic.
Obviously, when using Oracle I must take care of both inserting and renewing existing elements, but I'm stuck choosing the way to implement it:
There is no intrinsic function for a so-called UPSERT in Oracle (at least in the XE 11g R2 version). However, I can emulate the necessary function with SQL queries like this:
INSERT INTO mytable (id1, t1)
SELECT 11, 'x1' FROM DUAL
WHERE NOT EXISTS (SELECT id1 FROM mytable WHERE id1 = 11);
UPDATE mytable SET t1 = 'x1' WHERE id1 = 11;
(src: http://stackoverflow.com/a/21310345/2938167)
By using this kind of query (first the insert, then the update) I presume that the data will mostly be inserted rather than updated (updates should be rather rare). (Might this be suboptimal for concurrency?)
OK, it is possible. But at this point I can't decide:
-- should I write an SQL function (with appropriate arguments, of course) for this and call it via Java,
-- or should I simply run a series of queries as prepared statements via .executeUpdate()/.executeQuery()? Should I put the whole UPSERT SQL in one PreparedStatement, or split it into several SQL queries and prepared statements inside one method body? (I'm using Tomcat's connection pool and pass a connection instance via a static getConnection() method to each method implementation in DAODatabase.)
Is there another possibility to solve the UPSERT quest?
The equivalent of your UPSERT statement would be to use MERGE:
MERGE INTO mytable d
USING ( SELECT 11 AS id, 'x1' AS t1 FROM DUAL ) s
ON ( d.id = s.id )
WHEN NOT MATCHED THEN
INSERT ( d.id, d.t1 ) VALUES ( s.id, s.t1 )
WHEN MATCHED THEN
UPDATE SET d.t1 = s.t1;
You could also use (or wrap in a procedure):
DECLARE
p_id MYTABLE.ID%TYPE := 11;
p_t1 MYTABLE.T1%TYPE := 'x1';
BEGIN
UPDATE mytable
SET t1 = p_t1
WHERE id = p_id;
IF SQL%ROWCOUNT = 0 THEN
INSERT INTO mytable ( id, t1 ) VALUES ( p_id, p_t1 );
END IF;
END;
/
However, when you are handling a CRUD request: if you are doing a Create action then it should be represented by an INSERT (and if something already exists then you ought to throw the equivalent of the HTTP status code 400 Bad Request or 409 Conflict, as appropriate), and if you are doing an Update action it should be represented by an UPDATE (and if nothing is there to update then return the equivalent of 404 Not Found).
So, while MERGE fits your description, I don't think it is representative of a RESTful action, as you ought to separate the actions into their appropriate end-points rather than combining them into a joint action.
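If the MERGE route is taken, it maps naturally onto a single PreparedStatement in the DAO. The sketch below builds the statement text separately so it can be inspected; the class and helper names are invented, and the table/columns follow the earlier example:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class UpsertDao {

    // Builds the MERGE upsert with bind placeholders for key and value.
    static String mergeSql(String table, String keyCol, String valCol) {
        return "MERGE INTO " + table + " d"
             + " USING (SELECT ? AS " + keyCol + ", ? AS " + valCol + " FROM DUAL) s"
             + " ON (d." + keyCol + " = s." + keyCol + ")"
             + " WHEN NOT MATCHED THEN INSERT (d." + keyCol + ", d." + valCol + ")"
             + " VALUES (s." + keyCol + ", s." + valCol + ")"
             + " WHEN MATCHED THEN UPDATE SET d." + valCol + " = s." + valCol;
    }

    // Runs the upsert; the caller supplies a pooled connection.
    static void upsert(Connection conn, int id, String t1) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(mergeSql("mytable", "id1", "t1"))) {
            ps.setInt(1, id);
            ps.setString(2, t1);
            ps.executeUpdate();
        }
    }

    public static void main(String[] args) {
        // No database here; just show the statement that would be prepared.
        System.out.println(mergeSql("mytable", "id1", "t1"));
    }
}
```

Because the whole upsert is one atomic statement, it also avoids the race window that the separate INSERT-then-UPDATE approach leaves open under concurrency.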
I have a plpgsql script (editor's note: it's a function, actually) that contains a loop which drops the primary key constraint for some tables that were generated by eclipse-link. It looks something like this:
CREATE OR REPLACE FUNCTION remove_tables_constraints()
RETURNS boolean AS
$BODY$
DECLARE
constraint_statment text;
BEGIN
FOR constraint_statment IN
SELECT 'ALTER TABLE '||nspname||'.'||relname||' DROP CONSTRAINT '||conname
FROM pg_constraint
INNER JOIN pg_class ON conrelid=pg_class.oid
INNER JOIN pg_namespace ON pg_namespace.oid=pg_class.relnamespace
where relname not in('exclude_table')
ORDER BY CASE WHEN contype='f' THEN 0 ELSE 1 END,contype,nspname,relname,conname LOOP
raise notice 'remove_tables_constraints run [%]', constraint_statment;
EXECUTE constraint_statment;
END LOOP;
RETURN true;
END;
$BODY$
LANGUAGE 'plpgsql' VOLATILE COST 100;
select remove_tables_constraints();
The script is executed using:
Statement st = connection.createStatement();
st.execute(scriptStringloadedFromFile);
The script worked (and under some circumstances still works) fine.
It stopped working after changing the primary key of the tables from an int to a uid. The loop halts in mid-execution without displaying any error messages (debug is set to the finest level).
The weird part is that the script does work, even after the change, if I just paste it into the psql shell instead of executing it from code. Moreover, it works when executing it from the java code if I unpack the loop and just write all the statements that the loop performs inline.
I've spent a couple of days on this and I'm clueless as to how to continue. Any ideas?
I see a couple of problems:
You need to sanitize identifiers or you can get exceptions or worse, open an attack path for SQL injection. Identifiers can be illegal strings unless double-quoted. There are several ways to let Postgres take care of that automatically.
I used two forms below:
format() with %I parameter conversion (Postgres 9.1+)
Let Postgres coerce to a regclass type, which is even better for table names (IMO).
Your function is dropping all constraints, while you only want to drop PK constraints (contype = 'p') according to your description.
You are not excluding the system catalog and other system schemas. This should fail, no matter what.
Do not quote the language name plpgsql. It's an identifier.
Putting everything together, it could look something like this:
CREATE OR REPLACE FUNCTION remove_tables_constraints()
RETURNS boolean AS
$func$
DECLARE
constraint_statment text;
BEGIN
FOR constraint_statment IN
SELECT format('ALTER TABLE %s DROP CONSTRAINT %I'
, c.oid::regclass, o.conname)
FROM pg_constraint o
JOIN pg_class c ON c.oid = o.conrelid
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relname <> 'exclude_table' -- just one? then <>
AND o.contype = 'p' -- only pk constraints
AND n.nspname NOT LIKE 'pg%' -- exclude system schemas!
AND n.nspname <> 'information_schema' -- exclude information schema!
ORDER BY n.nspname, c.relname, o.conname -- commented irrelevant item
LOOP
RAISE NOTICE 'remove_table_constraints run [%]', constraint_statment;
EXECUTE constraint_statment;
END LOOP;
RETURN TRUE;
END
$func$
LANGUAGE plpgsql;
Or maybe better, without a loop. Here, I first aggregate everything into a single list of commands and execute that once:
CREATE OR REPLACE FUNCTION remove_tables_constraints()
RETURNS boolean AS
$func$
DECLARE
_sql text;
BEGIN
SELECT INTO _sql
string_agg(format('ALTER TABLE %s DROP CONSTRAINT %I'
, sub.tbl, sub.conname), E';\n')
FROM (
SELECT c.oid::regclass AS tbl, o.conname
FROM pg_constraint o
JOIN pg_class c ON c.oid = o.conrelid
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relname <> 'exclude_table' -- just one? then <>
AND o.contype = 'p' -- only pk constraints
AND n.nspname NOT LIKE 'pg%' -- exclude system schemas!
AND n.nspname <> 'information_schema' -- exclude information schema!
ORDER BY n.nspname, c.relname, o.conname -- commented irrelevant item
LIMIT 10
) sub;
RAISE NOTICE E'remove_table_constraints:\n%', _sql;
EXECUTE _sql;
RETURN TRUE;
END
$func$
LANGUAGE plpgsql;
I was wondering if somebody knew a better way to do the following:
I need to query a database and return a value (in this case an int), then using this value, calculate the new value and update the database with this new value.
My current approach uses one method to get the current int value from the database, passes this value to another method to perform the calculation, and then passes the new value to a third method to update the database.
So the problem(?) with this is that it opens a new connection from the pool when getting the initial value from the db and then again when updating it. Obviously it closes the connection at the end of each method, but is there an easier/better way of doing this? It seems a bit messy.
Try this:
SELECT fieldValue FROM table_name FOR UPDATE;
UPDATE table_name SET fieldToUpdate = fieldValue + 1;
See the UPDATE Syntax
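If the calculation has to stay in Java, the read and the write can still share one connection and one transaction, with FOR UPDATE holding the row lock in between. This is a sketch; the table and column names (counter, id, value) and the calculation rule are assumptions:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class CounterUpdate {

    // The business calculation applied to the fetched value (assumed rule).
    static int calculate(int current) {
        return current + 1;
    }

    // Read, compute, and write back inside a single transaction.
    static void bump(Connection conn, int id) throws SQLException {
        conn.setAutoCommit(false);
        try (PreparedStatement sel = conn.prepareStatement(
                 "SELECT value FROM counter WHERE id = ? FOR UPDATE");
             PreparedStatement upd = conn.prepareStatement(
                 "UPDATE counter SET value = ? WHERE id = ?")) {
            sel.setInt(1, id);
            try (ResultSet rs = sel.executeQuery()) {
                if (rs.next()) {
                    upd.setInt(1, calculate(rs.getInt(1))); // row is locked while we compute
                    upd.setInt(2, id);
                    upd.executeUpdate();
                }
            }
            conn.commit();
        } catch (SQLException e) {
            conn.rollback();
            throw e;
        }
    }

    public static void main(String[] args) {
        // No database here; just demonstrate the calculation step.
        System.out.println(calculate(41));
    }
}
```

The FOR UPDATE lock means a concurrent caller blocks until the commit, so the read-compute-write cycle stays consistent without a second connection.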
You don't have to open a new connection for each query. Just open a connection at the start of your request, keep a reference to it that all your methods can use, and close it at the end of the request.
If you can do the calculations in SQL:
UPDATE TableToUpdate
SET ColumnB =
Calculations( ( SELECT ColumnA
FROM TableToSelect
WHERE (conditions for selecting)
)
)
WHERE (conditions for updating)
Depending on your requirements you could exploit a multiple-table UPDATE:
UPDATE TableToUpdate U
JOIN TableToSelect S ON ( /* join condition for selecting the value to process */ )
SET U.ColumnB = Calculations( S.ColumnC )
WHERE U.ColumnC = /* whatever selection condition */