I have a scenario where I need to take a count of rows in a MySQL table for the current branch (we store the branch in that table) and insert that count, with other details, into the same table. The problem is that when two or more concurrent users from the same branch try to insert at the same time, the count is the same for all of them; the insert should not happen for the other user(s) until I have read the count and inserted for the one user's request. Is there any way locking can handle this? An example would be helpful (I need to do all of this in a MySQL stored procedure).
Edit: Sorry, I can't share the working code, but I can write an example here.
My table structure is:
id name branchid count
1 abc 1 1
2 xyz 1 2
3 abcd 2 1
4 wxyz 2 2
Here I am taking the count of rows from the above table for a given branch (e.g. 1) and inserting the row with that calculated count.
Ex:
set @count = (select count(id) from tbl where branchid = 1);
later
insert into tbl(id, name, branchid, count)
values(5, 'abcd', 1, @count);
This works great provided only one user accesses it per branch, but if more than one user from the same branch tries to access it at exactly the same time, the
@count
is duplicated across those users.
Why not just do it in one query:
insert into tbl(id, name, branchid, count)
select 5, 'abcd', 1, count(*)
from tbl
where branchid = 1;
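That makes reading the count and writing the row a single statement, so two concurrent users can no longer read the same count between the read and the write. If you specifically need the read-then-insert pattern inside a stored procedure, you can instead serialize callers with an InnoDB locking read. A minimal sketch, assuming InnoDB, an index on branchid, and the tbl structure above (the procedure name and parameters are illustrative):
delimiter //
create procedure insert_with_count(in p_id int, in p_name varchar(50), in p_branchid int)
begin
  declare v_count int;
  start transaction;
    -- with an index on branchid, FOR UPDATE takes next-key locks on the
    -- branch's index records, so a concurrent call for the same branch
    -- blocks here until this transaction commits
    select count(id) into v_count from tbl where branchid = p_branchid for update;
    insert into tbl(id, name, branchid, count)
    values(p_id, p_name, p_branchid, v_count);
  commit;
end //
delimiter ;
The second concurrent caller resumes only after the first commits, so it sees the row the first caller inserted and computes the next count.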
I am trying to insert data into a table. That table has 6 attributes, 2 of its own and 4 foreign keys.
Now I write a query like this:
insert into bus
values ( 4 , 45 , (select bus_driver.id , conductor.id , trip_location.trip_id , bus_route.route_id
from bus_driver , conductor , trip_location , bus_route));
And it's giving me an error like:
Error Code: 1241. Operand should contain 1 column(s)
What should I change in my query?
You need to remove the values clause and just put the select straight after the table and column names of the insert clause, like below:
insert into bus(column1, column2 ........)
select 4 , 45 , bus_driver.id , conductor.id , trip_location.trip_id ,
bus_route.route_id from bus_driver , conductor , trip_location , bus_route;
It's not clear what you're trying to do. It looks like you're going to end up with a lot of rows inserted into your bus table depending on the data in the other tables you're selecting from.
If you run only the select statement, see what you get for results:
select bus_driver.id, conductor.id, trip_location.trip_id, bus_route.route_id
from bus_driver, conductor, trip_location, bus_route
Then add 4, 45 in front of all those rows. That's what you'll be inserting into the bus table.
You may be looking to do something more like:
insert into bus (column1, column2, column3, column4, column5, column6)
select 4, 45, bus_driver.id, conductor.id, trip_location.trip_id, bus_route.route_id
from bus_driver, conductor, trip_location, bus_route
where bus_driver.column? = ?
and conductor.column? = ?
...
And the where clauses would be constructed such that only one record is returned from each table. It depends on what you're trying to do, though; there may be situations where you want more than one record from the selected tables, which would end up inserting multiple records into the bus table.
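For instance, a minimal sketch with hypothetical column names and key values (the real bus schema wasn't shared), where each condition pins its table down to one row:
insert into bus (bus_no, capacity, driver_id, conductor_id, trip_id, route_id)
select 4, 45, bus_driver.id, conductor.id, trip_location.trip_id, bus_route.route_id
from bus_driver, conductor, trip_location, bus_route
where bus_driver.id = 7            -- hypothetical driver key
and conductor.id = 3               -- hypothetical conductor key
and trip_location.trip_id = 12     -- hypothetical trip key
and bus_route.route_id = 5;        -- hypothetical route key
Because each WHERE condition matches exactly one row, the cross join produces exactly one row to insert.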
We are constantly hitting a problem on our test cluster.
Cassandra configuration:
cassandra version: 2.2.12
nodes count: 6, seed-nodes 3, non-seed-nodes 3
replication factor 1 (of course for prod we will use 3)
Table configuration where we get the problem:
CREATE TABLE "STATISTICS" (
key timeuuid,
column1 blob,
column2 blob,
column3 blob,
column4 blob,
value blob,
PRIMARY KEY (key, column1, column2, column3, column4)
) WITH COMPACT STORAGE
AND CLUSTERING ORDER BY (column1 ASC, column2 ASC, column3 ASC, column4 ASC)
AND caching = {
'keys':'ALL', 'rows_per_partition':'100'
}
AND compaction = {
'class': 'org.apache.cassandra.db.compaction.LeveledCompactionStrategy'
};
Our Java code details:
java 8
cassandra driver: astyanax
app-nodes count: 4
So, what's happening:
Under high load our application does many inserts into Cassandra tables from all nodes.
During this we have one workflow where we do the following with one row in the STATISTICS table:
do insert 3 columns from app-node-1
do insert 1 column from app-node-2
do insert 1 column from app-node-3
do read all columns from row on app-node-4
at the last step (4), when we read all the columns, we are sure that the insert of all columns is done (it is guaranteed by other checks that we have)
The problem is that sometimes (2-5 times per 100,000) at step 4, when we read all the columns, we get 4 columns instead of 5, i.e. we are missing the column that was inserted at step 2 or 3.
We even started re-reading these columns every 100 ms in a loop, and we still don't get the expected result. During this time we also check the columns using cqlsh - same result, i.e. 4 instead of 5.
BUT, if we add any new column to this row, we immediately get the expected result, i.e. we then get 6 columns - 5 columns from the workflow and 1 dummy.
So after inserting a dummy column, we do get the missing column that was inserted at step 2 or 3.
Moreover, when we look at the timestamp of the missing (and then appeared) column, it is very close to the time when the column was actually added from our app-node.
Basically, the insertions from app-node-2 and app-node-3 happen at nearly the same time, so those two columns always end up with nearly the same timestamp, even if we insert the dummy column a minute after the first read of all columns at step 4.
With replication factor 3 we cannot reproduce this problem.
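For reference, the two setups differ only in the keyspace's replication factor, set like this (the keyspace name here is hypothetical):
-- RF 1 on the test cluster; the plan is RF 3 for prod
ALTER KEYSPACE mykeyspace
WITH replication = { 'class': 'SimpleStrategy', 'replication_factor': 3 };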
So open questions are:
Maybe this is expected behavior of Cassandra when the replication factor is 1?
If it's not expected, then what could be the potential reason?
UPDATE 1:
the following code is used to insert a column:
UUID uuid = <some uuid>;
short shortV = <some short>;
int intVal = <some int>;
String strVal = <some string>;
// column family keyed by timeuuid, with Composite column names
ColumnFamily<UUID, Composite> statisticsCF = ColumnFamily.newColumnFamily(
    "STATISTICS",
    UUIDSerializer.get(),
    CompositeSerializer.get()
);
MutationBatch mb = keyspace.prepareMutationBatch();
ColumnListMutation<Composite> clm = mb.withRow(statisticsCF, uuid);
// the Composite components map onto column1..column4 of the COMPACT STORAGE table
clm.putColumn(new Composite(shortV, intVal, strVal, null), true);
mb.execute();
UPDATE 2:
Continued testing/investigating.
When we caught this situation again, we immediately stopped (killed) our Java apps. After that we could consistently see in cqlsh that the particular row did not contain the inserted column.
To make it appear, we first tried nodetool flush on every Cassandra node:
pssh -h cnodes.txt /path-to-cassandra/bin/nodetool flush
Result: the same, the column did not appear.
Then we just restarted the Cassandra cluster, and the column appeared.
UPDATE 3:
Tried disabling the Cassandra row cache by setting the row_cache_size_in_mb property to 0 (before, it was 2 GB):
row_cache_size_in_mb: 0
After that, the problem was gone.
So the problem may be in OHCProvider, which is used as the default cache provider.
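If the row cache is indeed the culprit, it can also be disabled for just this table rather than cluster-wide; a minimal sketch (CQL syntax as of Cassandra 2.2, run from cqlsh):
-- keep the key cache, stop caching rows for this table only
ALTER TABLE "STATISTICS"
WITH caching = { 'keys': 'ALL', 'rows_per_partition': 'NONE' };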
I have an RDBMS table with a column of type BIGINT whose values are not sequential. I have a Java program where I want each thread to get data per PARTITION_SIZE, i.e. I want pairs of column values like the following, after doing ORDER BY on the result:
Column_Value at Row 0, Column_Value at Row `PARTITION_SIZE`
Column_Value at Row `PARTITION_SIZE+1`, Column_Value at Row `2*PARTITION_SIZE`
Column_Value at Row `2*PARTITION_SIZE+1`, Column_Value at Row `3*PARTITION_SIZE`
Eventually, I will pass the above value ranges into a SELECT query's BETWEEN clause to get a divided chunk of data for each thread.
Currently, I am able to do this partitioning in Java by putting all the values in a List (after getting them from the DB) and then reading the values at those specific indices - {0, PARTITION_SIZE}, {PARTITION_SIZE+1, 2*PARTITION_SIZE}, etc. The problem is that the List might hold millions of records, and it is not advisable to keep them all in memory.
So I was wondering whether it's possible to write a query in SQL itself which would return those ranges, like below?
row-1 -> minId , maxId
row-2 -> minId , maxId
....
Database is DB2.
For example, for table column values 1, 2, 12, 3, 4, 5, 20, 30, 7, 9, 11, the result of the SQL query for a partition size of 2 should be {1,2}, {3,4}, {5,7}, {9,11}, {12,20}, {30}.
In my eyes the mod() function would solve your problem, and it lets you choose the number of partitions dynamically.
WITH numbered_rows_temp as (
SELECT row_number() over (order by col1) as rownum,
col1,
...
coln
FROM yourtable)
SELECT * FROM numbered_rows_temp
WHERE mod(rownum, <numberofpartitions>) = 0
Fill in the appropriate <numberofpartitions>, and vary the compared value from 0 to <numberofpartitions> - 1 across your queries so each thread gets its own share of the rows.
Michael Tiefenbacher's answer is probably more useful, as it avoids an extra query, but if you do want to determine ID ranges, this might work for you:
WITH parms(partition_size) AS (VALUES 1000) -- or whatever
SELECT
MIN(id), MAX(id),
INT((rn - 1) / parms.partition_size) partition_num
FROM (
SELECT id, ROW_NUMBER() OVER (ORDER BY id) rn
FROM yourtable
) t , parms
GROUP BY INT((rn - 1) / parms.partition_size)
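(Using rn - 1 makes every partition exactly partition_size rows, which matches the {1,2}, {3,4}, ... example in the question.) Each resulting (MIN(id), MAX(id)) pair can then feed one thread's query directly; a sketch of the per-thread statement, with the two placeholders bound from one row of the range query above:
SELECT *
FROM yourtable
-- bind this partition's MIN(id) and MAX(id)
WHERE id BETWEEN ? AND ?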
I am trying to get data for all dates in a range provided by my query, but I'm only getting the dates that actually exist in my table - missing dates are not reported. I need to create records in the table for those missing dates, with other columns left null, and then include them in the results.
My table table_name has records like:
ID Name Date_only
---- ---- -----------
1234 xyz 01-Jan-2014
1234 xyz 02-Jan-2014
1234 xyz 04-Jan-2014
...
For example, for the range 01-Jan-2014 to 04-Jan-2014, my query is:
select * from table_name
where id=1234
and (date_only >= '01-Jan-14' and date_only <= '04-Jan-14')
From Java or queried directly this shows three rows, with no data for 03-Jan-2014.
I need a single statement to insert rows for any missing dates into the table and return the data for all four rows. How can I do that?
UPDATE
The following query worked only when the table had just one record or the search range was 2-5 days:
SELECT LEVEL, to_date('2014-11-08','yyyy-mm-dd') + level as day_as_date FROM DUAL CONNECT BY LEVEL <= 10
UPDATE WITH FIDDLE EXAMPLE
With my table data, executing the same query gives an error: ORA-02393: exceeded call limit on CPU usage. The fiddle example is: my owntable sqlfiddle example. Thanks in advance.
You can use the SQL below for your purpose. The SQL fiddle is here: http://sqlfiddle.com/#!4/3ee61/27
with start_and_end_dates as (select min(onlydate) min_date
,max(onlydate) max_date
from mytable
where id = '1001'
and onlydate >= to_date('01-Jan-2015','dd-Mon-YYYY')
and onlydate <= to_date('04-Jan-2015','dd-Mon-YYYY')),
missing_dates as (select min_date + level - 1 as date_value
from start_and_end_dates connect by level <= (max_date - min_date) + 1)
select distinct id, name, date_value
from mytable, missing_dates
where id = '1001'
order by date_value
EDIT 1: Using your other example. The sqlfiddle is http://sqlfiddle.com/#!4/4c727/16
with start_and_end_dates as (select min(onlydate) min_date
,max(onlydate) max_date
from mytable
where name = 'ABCD'),
missing_dates as (select min_date + level - 1 as date_value
from start_and_end_dates connect by level <= (max_date - min_date) + 1)
select distinct id, name, date_value
from mytable, missing_dates
where name = 'ABCD'
order by date_value;
You can use a query like
SELECT LEVEL, to_date('2014-01-01','yyyy-mm-dd') + level - 1 as day_as_date
FROM DUAL
CONNECT BY LEVEL <= 1000
to get a list of 1000 days starting at Jan 1 2014 (adjust to your needs).
Next, do an insert-from-select:
INSERT INTO table_name (date_only)
SELECT day_as_date FROM (<<THE_QUERY_ABOVE>>)
WHERE day_as_date NOT IN (SELECT date_only FROM table_name)
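Alternatively, generating the dates and conditionally inserting them can be combined into a single MERGE; a minimal sketch, assuming id 1234 and the 01-Jan-2014 to 04-Jan-2014 range from the question:
MERGE INTO table_name t
USING (
  -- every date in the four-day range
  SELECT DATE '2014-01-01' + LEVEL - 1 AS day_as_date
  FROM dual
  CONNECT BY LEVEL <= 4
) d
ON (t.id = 1234 AND t.date_only = d.day_as_date)
WHEN NOT MATCHED THEN
  INSERT (id, date_only) VALUES (1234, d.day_as_date);
After the MERGE, the original SELECT returns all four rows, with the remaining columns null for the newly inserted dates.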
OK, so basically I have my database table. The first column is the id. The second is a pkg_id. The 3rd is not important, and the 4th is the previous id at which that pkg_id was located. I need to pull the last 3 rows per pkg_id from the table. So basically I need to pull the last 3 rows for pkg_id 17879 and the last 3 for 3075. In this example that means ids 9, 7, 6 for 17879 and ids 8, 5, 3 for 3075.
I can't get my head around it. I do have access to the previous id. You can see that for id 9 it says that 17879 was last at id 7, and that id 8 was last at id 5.
If anybody could help me write a query, that would be great. I'm also using Java for database access, so it doesn't have to be done purely in MySQL. Thanks so much.
SELECT m.*
FROM (
SELECT pkg_id,
COALESCE(
(
SELECT id
FROM mytable mi
WHERE mi.pkg_id = md.pkg_id
ORDER BY
id DESC
LIMIT 2, 1
), 0) AS mid
FROM (
SELECT DISTINCT pkg_id
FROM mytable
) md
) q
JOIN mytable m
ON m.pkg_id <= q.pkg_id
AND m.pkg_id >= q.pkg_id
AND m.id >= q.mid
Create an index on mytable (pkg_id, id) for this to work fast.
Note this condition: m.pkg_id <= q.pkg_id AND m.pkg_id >= q.pkg_id instead of mere m.pkg_id = q.pkg_id. This is required for the index to be used efficiently.
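For completeness, a sketch of that index (the index name is arbitrary):
-- covers both the per-pkg_id ordered scan in the LIMIT subquery and the range join
CREATE INDEX ix_mytable_pkg_id ON mytable (pkg_id, id);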