MySQL update with and operator - java

I had written an SQL query with incorrect syntax, but it produced no error when run from a Java Tomcat server on Debian 9.
MySQL Version:
mysql Ver 14.14 Distrib 5.7.24, for Linux (x86_64)
The query was as follows; I had mistakenly replaced the comma ',' with 'and' after the SET keyword:
UPDATE table_pod_print set print_status = 1 and operator_id = 2091 where id = 1
I tried running it on the console, which gave me the following output:
ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction
Please help me in understanding why the query worked in the first place.

In MySQL, SELECT 1 AND 0; produces 0, because AND is the logical-AND operator. Looking at your query, the SET clause assigned print_status the value of the whole expression (adding brackets for clarity):
print_status = (1 AND (operator_id = 2091))
which means it evaluated to 1 AND 1 if operator_id = 2091 was already true for the row with id = 1, and 1 AND 0 if not. So the statement is syntactically valid; it just updates only print_status, and operator_id is never changed.
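For contrast, here is a minimal JDBC sketch of the intended statement, with a comma separating the two assignments (the table and values come from the question; the Connection conn is an assumption):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class FixedUpdate {
    // Corrected UPDATE: comma-separated assignments instead of AND.
    static int updatePodPrint(Connection conn) throws SQLException {
        String sql = "UPDATE table_pod_print SET print_status = ?, operator_id = ? WHERE id = ?";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setInt(1, 1);     // print_status
            ps.setInt(2, 2091);  // operator_id
            ps.setInt(3, 1);     // id
            return ps.executeUpdate(); // rows actually updated
        }
    }
}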

How to generate report from huge mongodb document [duplicate]

Using the code:
all_reviews = db_handle.find().sort('reviewDate', pymongo.ASCENDING)
print all_reviews.count()
print all_reviews[0]
print all_reviews[2000000]
The count prints 2043484, and printing all_reviews[0] works.
However when printing all_reviews[2000000], I get the error:
pymongo.errors.OperationFailure: database error: Runner error: Overflow sort stage buffered data usage of 33554495 bytes exceeds internal limit of 33554432 bytes
How do I handle this?
You're running into the 32MB limit on an in-memory sort:
https://docs.mongodb.com/manual/reference/limits/#Sort-Operations
Add an index to the sort field. That allows MongoDB to stream documents to you in sorted order, rather than attempting to load them all into memory on the server and sort them in memory before sending them to the client.
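The question uses pymongo, but the same fix is easy to see with the MongoDB Java driver (3.x API assumed): create the index once, and the sorted find can stream from it.

import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Indexes;
import com.mongodb.client.model.Sorts;
import org.bson.Document;

public class IndexedSort {
    // With an index on reviewDate, the server returns documents in
    // index order instead of buffering and sorting them in memory.
    static void streamSorted(MongoCollection<Document> reviews) {
        reviews.createIndex(Indexes.ascending("reviewDate"));
        for (Document review : reviews.find().sort(Sorts.ascending("reviewDate"))) {
            // process each review here
        }
    }
}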
As kumar_harsh said in the comments section, I would like to add another point.
You can view the currently configured buffer limit by running the command below against the admin database:
> use admin
switched to db admin
> db.runCommand( { getParameter : 1, "internalQueryExecMaxBlockingSortBytes" : 1 } )
{ "internalQueryExecMaxBlockingSortBytes" : 33554432, "ok" : 1 }
It has a default value of 32 MB (33554432 bytes). In this case you're exceeding that limit, so you can raise it to a value you consider optimal, for example roughly 50 MB as below:
> db.adminCommand({setParameter: 1, internalQueryExecMaxBlockingSortBytes:50151432})
{ "was" : 33554432, "ok" : 1 }
This limit can also be set permanently with the parameter below in the MongoDB config file:
setParameter=internalQueryExecMaxBlockingSortBytes=309715200
Hope this helps!
Note: this command is supported only on version 3.0 and later.
Solved with indexing:
db_handle.ensure_index([("reviewDate", pymongo.ASCENDING)])
If you want to avoid creating an index (e.g. you just want a quick-and-dirty check to explore the data), you can use aggregation with disk usage; in pymongo that is:
all_reviews = db_handle.aggregate([{'$sort': {'reviewDate': 1}}], allowDiskUse=True)
The mongo shell (JavaScript) syntax for the index is:
db_handle.ensureIndex({executedDate: 1})
In my case, it was necessary to fix the missing indexes in code and recreate them:
rake db:mongoid:create_indexes RAILS_ENV=production
The memory overflow does not occur when the field being sorted has the index it needs.
P.S. Before this I had to disable the errors raised when creating long indexes:
# mongo
MongoDB shell version: 2.6.12
connecting to: test
> db.getSiblingDB('admin').runCommand( { setParameter: 1, failIndexKeyTooLong: false } )
A reIndex may also be needed:
# mongo
MongoDB shell version: 2.6.12
connecting to: test
> use your_db
switched to db your_db
> db.getCollectionNames().forEach( function(collection){ db[collection].reIndex() } )

What does ^1 mean in SQL call?

SELECT LOWER(pla_lan_code) as locale,
pla_auto_translate_opt_out_flag^1 as autoTranslationEnabled,
pte_manual_edit_flag^1 as autoTranslated,
ftr_created_date as queuedDate,
ftr_translation_date as translationDate,
ftr_engine as translationEngine,
ftr_id as translationId,
pla_auto_translate_opt_out_flag as translationOptOut
FROM property_languages (nolock)
LEFT OUTER JOIN properties_text_live (nolock)
This query is embedded in Java code. I am trying to convert it into a stored proc. I want to know what ^1 equates to in SQL.
This isn't standard SQL. In Transact-SQL (used in MS SQL Server and Sybase) ^ is the bitwise exclusive-OR operator.
1 ^ 1 is 0, and 0 ^ 1 is 1.
If the original int stores 0 for false and 1 for true, XORing by 1 would reverse the sense of the original flags.
Guessing that pla_auto_translate_opt_out_flag is an int with 1 for opting out and 0 for enabling autotranslation, using the operator returns 1 for enabling and 0 for opting out.
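The flip is easy to reproduce in plain Java, whose bitwise XOR behaves like T-SQL's ^ on 0/1 values (a minimal sketch; the flag semantics are the guess above):

public class XorFlip {
    public static void main(String[] args) {
        int optOutFlag = 1;                          // 1 = opted out (assumed meaning)
        int autoTranslationEnabled = optOutFlag ^ 1; // XOR by 1 flips 1 -> 0 and 0 -> 1
        System.out.println(autoTranslationEnabled);  // prints 0
        System.out.println(0 ^ 1);                   // prints 1
    }
}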

PreparedStatement throwing index is out of range when it is not out of range

I am generating a table and inserting data into it. The table is created from data that has complex relationships; we created it so users of our BI tool could work with the data easily.
We have a bunch of these tables that get created, each one different depending on the user-generated data. In one case we get an index-out-of-range exception, yet when inspecting the metadata the index is not out of range.
The exact exception message is:
com.microsoft.sqlserver.jdbc.SQLServerException: The index 13 is out of range.
The generated insert statement is:
INSERT INTO [F22_AF] ([frcId],[eId],[aId],[dateCreated],[DateofThing],[Wasthisaninjury],[Whowainjured],[WhowainjuredFNAME],[WhowainjuredLNAME],[Whatwasinvolved],[WhatwasinvolvedDATA],[WhatOptionsWereAvailable],[AccidentWitnessed]) VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?)
The exception is thrown when the value for AccidentWitnessed is set. Just before that value is set, I printed the column index and the value:
13 : AccidentWitnessed = false
When it fails I printed out the meta data from the insert statement and got this:
parameterMetaData.getParameterCount()=13
ColIndex - Type || Class
1 - int || java.lang.Integer
2 - int || java.lang.Integer
3 - int || java.lang.Integer
4 - datetime || java.sql.Timestamp
5 - datetime || java.sql.Timestamp
6 - nvarchar || java.lang.String
7 - int || java.lang.Integer
8 - nvarchar || java.lang.String
9 - nvarchar || java.lang.String
10 - int || java.lang.Integer
11 - nvarchar || java.lang.String
12 - int || java.lang.Integer
13 - bit || java.lang.Boolean
When I try to set column index 13 I get the index-out-of-range error, yet according to the metadata it exists! If it didn't exist, I would expect the metadata call to throw an out-of-range exception as well.
The same code works for a different set of data.
Can anyone explain why this error would occur when the column index exists?
My advice? Believe the JVM. It doesn't matter what you think.
You're not looking at the right spot to understand what's going wrong. Write a small, self-contained example and run it in a debugger. You'll be able to see where the code is going awry that way.
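A hypothetical skeleton for such a repro, using the generated INSERT from the question (the Connection conn and the placeholder values are assumptions; types follow the printed parameter metadata):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Timestamp;

public class InsertRepro {
    // Sets all 13 parameters with types matching the printed metadata,
    // so a debugger shows exactly where index 13 blows up.
    static void insertRow(Connection conn) throws SQLException {
        String sql = "INSERT INTO [F22_AF] ([frcId],[eId],[aId],[dateCreated],"
                + "[DateofThing],[Wasthisaninjury],[Whowainjured],[WhowainjuredFNAME],"
                + "[WhowainjuredLNAME],[Whatwasinvolved],[WhatwasinvolvedDATA],"
                + "[WhatOptionsWereAvailable],[AccidentWitnessed]) "
                + "VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?)";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            Timestamp now = new Timestamp(System.currentTimeMillis());
            ps.setInt(1, 1);           // frcId (int)
            ps.setInt(2, 1);           // eId (int)
            ps.setInt(3, 1);           // aId (int)
            ps.setTimestamp(4, now);   // dateCreated (datetime)
            ps.setTimestamp(5, now);   // DateofThing (datetime)
            ps.setString(6, "no");     // Wasthisaninjury (nvarchar)
            ps.setInt(7, 0);           // Whowainjured (int)
            ps.setString(8, "First");  // WhowainjuredFNAME (nvarchar)
            ps.setString(9, "Last");   // WhowainjuredLNAME (nvarchar)
            ps.setInt(10, 0);          // Whatwasinvolved (int)
            ps.setString(11, "data");  // WhatwasinvolvedDATA (nvarchar)
            ps.setInt(12, 0);          // WhatOptionsWereAvailable (int)
            ps.setBoolean(13, false);  // AccidentWitnessed (bit) -- reportedly fails here
            ps.executeUpdate();
        }
    }
}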
The same error also occurred for me when using Hibernate, but the exception does not come from Hibernate; it depends on the JDBC driver or database version. My database is Oracle, and I got the same exception. The cause was a bug in the ojdbc driver I was using: whenever I passed more than 7 parameters to a PreparedStatement, I got this exception. The solution was to change the ojdbc version.

Inconsistent counter values between replicas in Cassandra

I've got a 3-machine Cassandra cluster using the rack-unaware placement strategy with a replication factor of 2.
The column family is defined as follows:
create column family UserGeneralStats with comparator = UTF8Type and default_validation_class = CounterColumnType;
Unfortunately after a few days of production use I got some inconsistent values for the counters:
Query on replica 1:
[default#StatsKeyspace] list UserGeneralStats['5261666978': '5261666978'];
Using default limit of 100
-------------------
RowKey: 5261666978
=> (counter=bandwidth, value=96545030198)
=> (counter=downloads, value=1013)
=> (counter=previews, value=10304)
Query on replica 2:
[default#StatsKeyspace] list UserGeneralStats['5261666978': '5261666978'];
Using default limit of 100
-------------------
RowKey: 5261666978
=> (counter=bandwidth, value=9140386229)
=> (counter=downloads, value=339)
=> (counter=previews, value=1321)
As the standard read-repair mechanism didn't seem to repair the values, I tried to force an anti-entropy repair using nodetool repair. It didn't have any effect on the counter values.
Data inspection showed that the lower values for the counters are the correct ones, so I suspect that either Cassandra or Hector (which I use as the API to call Cassandra from Java) retried some increments.
Any ideas how to repair the data, and possibly prevent the situation from happening again?
If neither RR nor repair fixes it, it's probably a bug.
Please upgrade to 0.8.3 (out today) and verify it's still present in that version, then you can file a ticket at https://issues.apache.org/jira/browse/CASSANDRA.
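As background on the retry suspicion: a counter increment is not idempotent, so a client that retries after a write timed out but was actually applied will add the increment twice. A minimal Hector sketch of such an increment (method and package names are from my recollection of Hector's API and should be treated as assumptions; keyspace setup is omitted):

import me.prettyprint.cassandra.serializers.StringSerializer;
import me.prettyprint.hector.api.Keyspace;
import me.prettyprint.hector.api.factory.HFactory;
import me.prettyprint.hector.api.mutation.Mutator;

public class CounterBump {
    // If this call times out but was applied server-side, retrying it
    // adds 1 again -- a likely source of inflated counter values.
    static void bumpDownloads(Keyspace keyspace, String userId) {
        Mutator<String> mutator = HFactory.createMutator(keyspace, StringSerializer.get());
        mutator.incrementCounter(userId, "UserGeneralStats", "downloads", 1L);
    }
}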

BCP exits with code 0

When running BCP from my Java application it exits with status code 0 when 1 is expected.
I run bcp with an invalid combination of data and formatting file and bcp gives the following error:
Starting copy...
SQLState = 22005, NativeError = 0
Error = [Microsoft][SQL Server Native Client 10.0]Invalid character value for cast specification
BCP copy in failed
BCP, however, exits with code 0 and not 1 as I expect. This makes it extremely difficult to see whether something failed while running BCP. Exiting with the right code does work once the data and format file match to some degree (e.g. same delimiters).
Command
PS C:\Users\feh\Desktop> bcp integrate_test.dbo.AS_LOADER_DELIMITED in .\data.dat -S "10.0.0.161\SQL2K5,1048" -U user -P pass -f .\formatting.ctl -m 1
Starting copy...
SQLState = S1000, NativeError = 0
Error = [Microsoft][SQL Server Native Client 10.0]Unexpected EOF encountered in BCP data-file
0 rows copied.
Network packet size (bytes): 4096
Clock Time (ms.) Total : 1
PS C:\Users\feh\Desktop> $lastexitcode
0
How can I validate a format file against the data and get exit code 1 when they do not match?
If the bcp is "in" the consider using BULK INSERT. This does the same as bcp but can be trapped using SQL Server TRY/CATCH or normal exception handling
Otherwise, are you trapping %ERRORLEVEL% and not some other code?
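If you have to keep calling bcp from Java, one workaround is to treat the tool's output, not just its exit code, as the failure signal. A hedged sketch (server, paths, and credentials are the question's placeholders; requires Java 9+ for readAllBytes):

import java.io.IOException;

public class BcpRunner {
    public static void main(String[] args) throws IOException, InterruptedException {
        ProcessBuilder pb = new ProcessBuilder(
                "bcp", "integrate_test.dbo.AS_LOADER_DELIMITED", "in", "data.dat",
                "-S", "10.0.0.161\\SQL2K5,1048", "-U", "user", "-P", "pass",
                "-f", "formatting.ctl", "-m", "1");
        pb.redirectErrorStream(true);  // merge stderr into stdout
        Process p = pb.start();
        String output = new String(p.getInputStream().readAllBytes());
        int exitCode = p.waitFor();
        // bcp may exit 0 even on cast/EOF errors, so scan the output too.
        boolean failed = exitCode != 0
                || output.contains("Error =")
                || output.contains("BCP copy in failed");
        System.out.println("failed=" + failed + " (exit code " + exitCode + ")");
    }
}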
