I'm facing an issue where a Java process (that I have no control over) is inserting rows into a table and causing an overflow. I have no way of intercepting the queries, and the exception raised by Oracle is not informative: it only mentions an overflow, not which column it happens on.
I'd like to know which query is causing this overflow as well as the values being inserted.
I tried creating a BEFORE INSERT trigger on the table that copies the rows into another temporary table that I can read later; however, it looks like the trigger is not run when the overflow happens.
Trigger syntax:
CREATE OR REPLACE TRIGGER OVERFLOW_TRIGGER
BEFORE INSERT
ON VICTIM_TABLE
FOR EACH ROW
BEGIN
  -- :new holds the values being inserted (:old is always NULL in an INSERT trigger)
  INSERT INTO QUERIES_DUMP VALUES (
    :new.COL1, :new.COL2, :new.COL3,
    :new.COL4, :new.COL5, :new.COL6,
    :new.COL7, :new.COL8, :new.COL9,
    :new.COL10, :new.COL11, :new.COL12
  );
END;
/
The table QUERIES_DUMP has the same structure as the failing table, but with the NUMBER and VARCHAR2 columns widened to their maximum capacity. I'm hoping to capture the inserted values and then find out which ones are breaking the rules.
Is it expected for a trigger to not run in case of an overflow, even if set to run before insert?
SQL> select * from v$version;
BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Prod
PL/SQL Release 10.2.0.4.0 - Production
CORE 10.2.0.4.0 Production
TNS for Solaris: Version 10.2.0.4.0 - Production
NLSRTL Version 10.2.0.4.0 - Production
EDIT1:
The error being thrown is:
Description: Error while query: INSERT INTO
...
ORA-01438: value larger than specified precision allowed for this column
What I know is that there are no wrong types being inserted anywhere. It's most probably one of the numeric fields exceeding its precision, but they're numerous and the insertion process takes more than an hour, so I can't brute-force my way into guessing the column.
I've thought about backing up the table and creating a new victim_table with larger columns, but the process actually inserts into a lot of other tables in a complex data model, and the DB has somewhat sensitive information, so I can't endanger its consistency by moving things around.
I tried an INSTEAD OF trigger, but Oracle doesn't accept INSTEAD OF triggers for inserts on a table (they are only allowed on views).
I added logging at the JDBC layer, but the queries I got did not include the bound values, only '?' placeholders:
Description: Error while query: INSERT INTO VICTIM_TABLE ( . . . ) VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)
It depends on the specific error (including an ORA-xxxxx error number is always greatly appreciated).
If the problem is that the Java application is trying to insert a value that cannot be converted to the table's data type, that error would be expected to be thrown before the trigger could run. Data type validations have to happen before the trigger can execute.
Imagine what would happen if data type validations happened after the trigger ran. If the Java app passed an invalid value for, say, col1, then inside the trigger :new.col1 would have the data type that col1 has in the underlying table but would hold an invalid value. Any reference to that field, therefore, would have to raise an error; you couldn't plausibly log an invalid value to your table.
Are you sure that you can't intercept the queries somehow? For example, if you renamed victim_table to victim_table_base, created a view named victim_table with larger data types, and then defined an INSTEAD OF trigger on the view that validated the data and inserted it into the base table, you could identify which values were invalid. Alternatively, since your Java application is (presumably) using JDBC to interact with the database, you should be able to enable logging at the JDBC layer to see the parameter values that are being passed.
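If the rename does turn out to be acceptable, a minimal sketch of that approach could look like the following (the column names, the victim_table_base name and the log_overflow procedure are all made up for illustration; the autonomous transaction is there so the logged values survive the rollback of the failed insert):

ALTER TABLE victim_table RENAME TO victim_table_base;

-- A view with deliberately oversized columns, so the incoming values arrive intact.
CREATE OR REPLACE VIEW victim_table AS
SELECT CAST(col1 AS NUMBER)         AS col1,
       CAST(col2 AS VARCHAR2(4000)) AS col2
  FROM victim_table_base;

-- Autonomous logger so the captured values survive the statement-level rollback.
CREATE OR REPLACE PROCEDURE log_overflow(p_col1 NUMBER, p_col2 VARCHAR2) AS
  PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
  INSERT INTO queries_dump (col1, col2) VALUES (p_col1, p_col2);
  COMMIT;
END;
/

-- INSTEAD OF triggers are only allowed on views, which is why the rename is needed.
CREATE OR REPLACE TRIGGER victim_table_ioi
  INSTEAD OF INSERT ON victim_table
  FOR EACH ROW
BEGIN
  INSERT INTO victim_table_base (col1, col2) VALUES (:new.col1, :new.col2);
EXCEPTION
  WHEN OTHERS THEN
    log_overflow(:new.col1, :new.col2);  -- capture the offending row
    RAISE;                               -- re-raise so the application still sees ORA-01438
END;
/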
Related
I have a Korma based software stack that constructs fairly complex queries against a MySQL database. I noticed that when I am querying for datetime columns, the type that I get back from the Korma query changes depending on the syntax of the SQL query being generated. I've traced this down to the level of clojure.java.jdbc/query. If the form of the query is like this:
select modified from docs order by modified desc limit 10
then I get back maps corresponding to each database row in which :modified is a java.sql.Timestamp. However, sometimes our query generator generates more complex union queries, such that we need to apply an order by ... limit ... constraint to the final result of the union. Korma does this by wrapping the query in parentheses. Even with only a single subquery (i.e., a simple parenthesized select), as long as we add an "outer" order by ..., the type of :modified changes.
(select modified from docs order by modified desc limit 10) order by modified desc
In this case, clojure.java.jdbc/query returns :modified values as strings. Some of our higher level code isn't expecting this, and gets exceptions.
We're using a fork of Korma, which is using an old (0.3.7) version of clojure.java.jdbc. I can't tell if the culprit is clojure.java.jdbc or java.jdbc or MySQL. Anyone seen this and have ideas on how to fix it?
Moving to the latest jdbc in a similar situation changed several other things for us and was a decidedly non-trivial task. I would suggest getting off of a Korma fork soon and then debugging this.
For us, the changes focused on what Korma returned from update calls, which changed between versions of the backing jdbc. It was well worth getting current, even though it's a moderately painful process.
Getting current with jdbc will give you fresh new problems!
best of luck with this :-) These things tend to be fairly specific to the DB server you are using.
Other options for you are to have a policy of always specifying an order-by parameter, or to build a library that coerces the strings into dates. Both of these carry some long-term technical debt.
I'm receiving the following error message from a Java/Spring/Hibernate application when it tries to execute a prepared statement against a mysql database :
Caused by: java.sql.SQLException: Illegal mix of collations (latin1_swedish_ci,COERCIBLE) and (latin1_german1_ci,COERCIBLE) for operation '='
The select statement which generates this (as shown in the tomcat log) is:
SELECT s.* FROM score_items s where
s.s_score_id_l=299 and
(s.p_is_plu_b = 'F') and
isTestProduct(s.p_upc_st) = 'N' and
v_is_complete_b='T'
order by s.nc_name_st, s.p_upc_st
The table collation per the show table status command is:
utf8_general_ci
The collation for all the char, varchar and text fields is "utf8_general_ci". It's null for the bigint, int and datetime fields.
The database collation is latin1_swedish_ci as displayed by the command:
show variables like "collation_database";
Edit: I was able to successfully run this from my local machine using Eclipse/STS and a Tomcat 6 instance. The local process reads from the same database as the process on the production server which generated the error. The server where the error occurs is a Tomcat 7 instance on an Amazon Linux server.
Edit 2: I was also able to successfully run the report when I ran it from our QA environment, with the JDBC statement in server.xml reset to point at the production database. QA is essentially a mirror of the production environment, with some dev work going on. I should also note that I saw a similar error last month, but it disappeared when I reran the report. Finally, I'm not sure why it would make a difference, but the table being queried is huge, with over 7 million rows and probably 100 fields per row.
Edit 3: Based on Shadow's comments, I discovered the character set "latin1" was being specified on the test function. I've changed that to utf8 and am hoping this solves the issue.
How do I find out which field is "latin1_german1_ci"?
Why is the comparison using "latin1_swedish_ci" when the table and fields are either "utf8_general_ci" or null?
Could the problem be related to function character set, and if so how do I identify which character set/collation it's using?
How do I narrow down which field/function is causing the problem?
This has nothing to do with Java or Hibernate; it is purely down to MySQL and perhaps the connection string.
In mysql you can define character set and collation at multiple levels, which can cause a lot of issues:
server
database
table
column
connection
See mysql documentation on character sets and collations for more details.
To sum up: the higher-level defaults kick in if and only if you do not specify a character set or collation at the lower level. So, a column-level definition overrides a table-level definition. The show table status command shows the table-level defaults, but these may have been overridden at the column level. The show full columns or show create table commands will show you the true character sets and collations used by any given field.
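For example, using the table and function names from the question, the effective per-column and per-routine character sets can be inspected like this (a diagnostic sketch only):

SHOW FULL COLUMNS FROM score_items;   -- the Collation column shows the effective per-column value
SHOW CREATE TABLE score_items;        -- full DDL, including any column-level CHARACTER SET / COLLATE overrides
SHOW CREATE FUNCTION isTestProduct;   -- stored functions carry their own character set for string arguments and return values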
Connection level character set / collation definitions further complicate the picture because string constants used in the sql statements will use the connection character set / collation, unless they have an explicit declaration.
However, mysql uses coercibility values to avoid most issues arising from the use of various character sets and expressions as described in mysql documentation on character sets / collations used in expressions.
The fact that the query works when executed from another computer indicates that the issue is with the connection character set / collation. I think it will be around the isTestProduct() call.
The only way to really determine which condition causes the issue is to eliminate the conditions one by one; when the error is gone, the last eliminated condition was the culprit. Defining an appropriate connection character set and collation, in line with what is used in the fields, will also help.
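A hedged sketch of that elimination approach, forcing one known collation onto the suspect comparison (utf8 / utf8_general_ci here are only examples; if the error disappears, that comparison was the one mixing collations):

SELECT s.*
  FROM score_items s
 WHERE s.s_score_id_l = 299
   AND s.p_is_plu_b = 'F'
   AND CONVERT(isTestProduct(s.p_upc_st) USING utf8) COLLATE utf8_general_ci = 'N'
   AND v_is_complete_b = 'T'
 ORDER BY s.nc_name_st, s.p_upc_st;

Aligning the connection with the columns can also help, for example:

SET NAMES utf8 COLLATE utf8_general_ci;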
Is it possible to restart the ID column of an HSQLDB after rows were inserted? Can I even set it to restart at a value lower than existing IDs in the table?
The Situation
I have a simple Java program which connects to a HSQLDB like so:
DriverManager.getConnection("jdbc:hsqldb:file:" + hsqldbPath, "", "");
This gives me an HsqlException when executing the following script (this is an excerpt, the complete script for HSQLDB 2.2.4 can be found here):
SET SCHEMA PUBLIC
CREATE MEMORY TABLE PUBLIC.MAP(
ID BIGINT GENERATED BY DEFAULT AS IDENTITY(START WITH 0) NOT NULL PRIMARY KEY,
FOO VARCHAR(16) NOT NULL)
ALTER TABLE PUBLIC.MAP ALTER COLUMN ID RESTART WITH 1
// [...]
SET SCHEMA PUBLIC
INSERT INTO MAP VALUES(1,'Foo')
INSERT INTO MAP VALUES(2,'Bar')
ALTER TABLE PUBLIC.MAP ALTER COLUMN ID RESTART WITH 42
The message is:
HsqlException: error in script file: ALTER TABLE PUBLIC.MAP ALTER COLUMN ID RESTART WITH 42
The exception goes away when I move the RESTART command before the INSERTs. The documentation gives no hint as to why that would be necessary.
I will eventually have to make this work on version 2.2.4 but have the same problem with the current version 2.3.2.
Background
What I am trying to do here is to recreate a situation which apparently occurred in production: An unlucky interaction with the database (I don't know what exactly happened) seems to have caused newly inserted rows to collide with existing ones because they were issued the same IDs. I want to create a test replicating the scenario in order to write a proper fix.
The .script file of the database follows a predefined order for the statements. This shouldn't be altered if it is edited and only certain manual changes are allowed (see the guide for details).
You can execute the ALTER TABLE statement via JDBC at the start of your test instead of inserting it in the script.
If IDENTITY values for the PRIMARY KEY collide, you will get an exception when you insert the values.
The actual fix for a problem like this is to RESTART WITH the max value in the primary key column plus one.
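A minimal sketch of that fix for the MAP table from the question; the RESTART WITH clause only takes a literal, so the maximum has to be read first (e.g. via JDBC at the start of the test) and substituted into the ALTER statement:

-- 1) Find the value to restart from.
SELECT MAX(ID) + 1 FROM MAP;

-- 2) Restart the identity column with that value (3 is just a stand-in here).
ALTER TABLE PUBLIC.MAP ALTER COLUMN ID RESTART WITH 3;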
I think SEQUENCEs are much more flexible than IDENTITY. The IDENTITY generator disables JDBC batching, by the way.
But if you use SEQUENCE identifiers, you must pay attention to the hilo optimizers as well, because identifiers are generated by Hibernate using a sequence value as a base calculation starting point.
With a SEQUENCE the restart goes like this:
ALTER SEQUENCE my_sequence RESTART WITH 105;
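If you do switch to a sequence, a sketch of what that could look like in HSQLDB (the sequence name is made up; check the HSQLDB guide for the options your version supports):

CREATE SEQUENCE my_sequence START WITH 1;

-- Use the sequence explicitly instead of relying on the identity column.
INSERT INTO MAP (ID, FOO) VALUES (NEXT VALUE FOR my_sequence, 'Baz');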
I have a large amount of data in one table and a small amount in the other table. Is there any way to run a GoldenGate initial load so that the data that is already the same in both tables won't be changed, and the rest of the data gets transferred from one table to the other?
Initial loads are typically for when you are setting up the replication environment; however, you can do this on single tables as well. Everything in the Oracle database is driven by System Change Numbers (SCN), which GoldenGate calls Commit Sequence Numbers (CSN).
By using the SCN/CSN, you can identify what the starting point in the table should be and start CDC from there. Anything prior to that SCN/CSN will not get captured and would require you to move that data manually in some fashion. That can be done by using Oracle Data Pump (Export/Import).
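For example, on the source database the dividing SCN can be read with a plain query, and the pre-existing data can then be copied as of that SCN (a sketch; source_table, target_table and the SCN literal are placeholders, and the copy assumes both tables are reachable from one session, e.g. over a database link):

-- Read the current SCN on the source; everything up to it belongs to the initial load.
SELECT current_scn FROM v$database;

-- Copy the existing data as of that SCN with a flashback query (12345678 stands in for the value above).
INSERT INTO target_table
SELECT * FROM source_table AS OF SCN 12345678;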
Oracle GoldenGate also provides a parameter called SQLPredicate that allows you to use a "where" clause against a table. This is handy with initial load extracts because you would do something like TABLE <owner>.<table>, SQLPredicate "as of <SCN>". Data before that point would then be captured and moved to the target side for a replicat to apply into a table. You can reference that here:
https://www.dbasolved.com/2018/05/loading-tables-with-oracle-goldengate-and-rest-apis/
Official Oracle Doc: https://docs.oracle.com/en/middleware/goldengate/core/19.1/admin/loading-data-file-replicat-ma-19.1.html
On the replicat side, you would use HANDLECOLLISIONS to kick out any duplicates. Then, once the load is complete, remove it from the parameter file.
Lots of details, but I'm sure this is a good starting point for you.
That would require programming in Java:
1) First, read your database.
2) Decide which data has to be added to which table, based on the data that was read.
3) Execute update/insert queries to submit the data to the tables.
If you want to run Initial Load using GoldenGate:
Target tables should be empty
Data: Make certain that the target tables are empty. Otherwise, there
may be duplicate-row errors or conflicts between existing rows and
rows that are being loaded. Link to Oracle Documentation
If they are not empty, you have to handle conflicts. For instance, if the row you are inserting already exists in the target table (INSERTROWEXISTS), you can discard it, if that's what you want to do. Link to Oracle Documentation
Recently my team ran into a situation in which some records in our shared test database disappear for no clear reason. Because it's a shared database (used by many teams), we can't track down whether it's a programming mistake or someone just ran a bad SQL script.
So I'm looking for a way to be notified (at the database level) when a row of a specific table A gets deleted. I have looked at Postgres TRIGGERs, but they didn't give me the specific SQL that caused the deletion.
Is there any way I can log the SQL statement which causes the deletion of rows in table A?
You could use something like this.
It allows you to create special triggers for PostgreSQL tables that log all the changes to the chosen tables.
These triggers can log the query that caused the change (via current_query()).
Using this as a base, you can add more fields/information to log.
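A minimal sketch of such a trigger, assuming a hypothetical audit table deleted_rows_log and that the monitored table is called table_a; current_query() returns the text of the statement that fired the trigger:

-- Audit table: one row per deleted row, plus the statement and user that caused it.
CREATE TABLE deleted_rows_log (
    deleted_at timestamptz DEFAULT now(),
    deleted_by text        DEFAULT current_user,
    statement  text,
    old_row    text
);

CREATE OR REPLACE FUNCTION log_table_a_delete() RETURNS trigger AS $$
BEGIN
    INSERT INTO deleted_rows_log (statement, old_row)
    VALUES (current_query(), OLD::text);  -- the SQL text that fired this trigger, plus the old row
    RETURN OLD;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER table_a_delete_audit
BEFORE DELETE ON table_a
FOR EACH ROW EXECUTE PROCEDURE log_table_a_delete();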
You would do this in the actual Postgres config files:
http://www.postgresql.org/docs/9.0/static/runtime-config-logging.html
log_statement (enum)
Controls which SQL statements are logged. Valid values are none (off), ddl, mod, and all (all statements). ddl logs all data
definition statements, such as CREATE, ALTER, and DROP statements. mod
logs all ddl statements, plus data-modifying statements such as
INSERT, UPDATE, DELETE, TRUNCATE, and COPY FROM. PREPARE, EXECUTE, and
EXPLAIN ANALYZE statements are also logged if their contained command
is of an appropriate type. For clients using extended query protocol,
logging occurs when an Execute message is received, and values of the
Bind parameters are included (with any embedded single-quote marks
doubled).
The default is none. Only superusers can change this setting.
You want either mod or all to be the selection, since DELETE is a data-modifying statement. This is what you need to alter:
In your data/postgresql.conf file, change the log_statement setting to 'all'. Further, the following may also need to be validated:
1) make sure you have set the log_destination variable (e.g. to 'stderr')
2) make sure you turn on the logging_collector
3) also make sure that pg_log actually exists relative to your data directory, and that the postgres user can write to it.
taken from here
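If you prefer to do this from SQL rather than editing postgresql.conf by hand, a hedged sketch (ALTER SYSTEM only exists on PostgreSQL 9.4 and later, and mydb is a placeholder):

SHOW log_statement;                             -- check the current value

ALTER SYSTEM SET log_statement = 'mod';         -- 9.4+: written to postgresql.auto.conf
SELECT pg_reload_conf();                        -- make the change take effect

ALTER DATABASE mydb SET log_statement = 'mod';  -- older versions: per-database, applies to new connections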