I've searched around quite a bit and cannot find an answer to this. I have two different applications that talk to the same DB2 database; one uses Hibernate, and the other uses plain JDBC (ResultSet, PreparedStatement, etc.).
Say I have an object whose primary key is "ABC"; the database defines this key's length as 3. If I use the Spring repository's findOne("ABC"), it returns my object. The strange thing is that if I call findOne("ABC123456"), it still returns that same database row, except the key field in the returned object is "ABC123456".
If I run a native query in the other application using a PreparedStatement, the same thing happens, except in that case the primary key comes back correctly in the ResultSet.
I assume this is a DB issue, but I'm not sure what to look at. The database column is set up as CHAR(3).
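One plausible explanation (an assumption, not confirmed by the driver docs) is that the driver truncates the bound parameter to the CHAR(3) column length before the comparison, so "ABC123456" matches as if only "ABC" were sent, while Hibernate keeps the key you passed in rather than re-reading it from the row. A defensive sketch that normalizes the key before the lookup; the helper name `trimToColumnLength` is mine:

```java
public class KeyCheck {
    // Hypothetical helper: a CHAR(n) column holds exactly n characters, so a
    // lookup key longer than n can never match a stored value verbatim.
    // Trimming (or rejecting) over-long keys up front avoids the surprise.
    static String trimToColumnLength(String key, int columnLength) {
        return key.length() <= columnLength ? key : key.substring(0, columnLength);
    }

    public static void main(String[] args) {
        // "ABC123456" bound against a CHAR(3) column behaves as if only "ABC" were sent
        System.out.println(trimToColumnLength("ABC123456", 3)); // ABC
    }
}
```

Passing the normalized key to findOne keeps the entity's key field consistent with what is actually stored.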
I currently have a method in my Java program, using JDBC, that checks whether a specific table exists in a MySQL database. I had a logic error where DatabaseMetaData.getTables() was returning a same-named table from a different database; I've solved that by specifying the catalog in the call, as seen below (table holds the table name I'm looking for).
ResultSet tables = connectionToDatabase().getMetaData().getTables("snakeandladder", null, table, null);
However, after doing some research, I saw a lot of people recommending SHOW TABLES instead, without actually explaining why to prefer it over the above.
Can someone explain the limitations of the statement above and why SHOW TABLES would be a better option?
Thank you!
DatabaseMetaData.getTables() is more portable: most databases (not only MySQL) can provide this information through the standard JDBC API.
On the other hand, using the MySQL-specific query SHOW TABLES may cause more harm than good:
you introduce a hand-built query string, which invites SQL injection the moment it is concatenated with user input, and it hard-codes SQL into the application.
if the database vendor ever changes, the code will have to be updated (portability again).
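A sketch of the portable check via DatabaseMetaData.getTables (helper names are mine). One gotcha worth hedging: getTables treats its table-name argument as a LIKE pattern, and databases differ in how they case-fold unquoted identifiers, so scanning the returned names with a case-insensitive compare is safer than trusting an exact pattern:

```java
import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.ResultSet;
import java.sql.SQLException;

public class TableLookup {
    // Portable existence check: list all tables in the catalog and compare
    // names ourselves, since identifier case-folding varies by database.
    static boolean tableExists(Connection conn, String catalog, String table)
            throws SQLException {
        DatabaseMetaData meta = conn.getMetaData();
        try (ResultSet rs = meta.getTables(catalog, null, "%", new String[] {"TABLE"})) {
            while (rs.next()) {
                if (sameIdentifier(rs.getString("TABLE_NAME"), table)) return true;
            }
        }
        return false;
    }

    // Unquoted SQL identifiers are case-insensitive, so compare ignoring case
    static boolean sameIdentifier(String a, String b) {
        return a != null && a.equalsIgnoreCase(b);
    }

    public static void main(String[] args) {
        System.out.println(sameIdentifier("SNAKEANDLADDER", "snakeandladder")); // true
    }
}
```

Called as `tableExists(connectionToDatabase(), "snakeandladder", table)`, this does the same job as the SHOW TABLES approach without any MySQL-specific SQL.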
We have a stateless EJB which persists some data in an object-oriented database. Unfortunately, for reasons unknown our persistence object does not have a unique key today, and altering the PO is also not possible right now.
So we decided to synchronize the code: before persisting, we check whether an object with the same name (which we consider should be unique) is already persisted, and decide whether to persist based on that.
Later we realized that the code is deployed on a cluster of three JBoss instances, so synchronizing within a single JVM is not enough.
Can anyone suggest an approach that prevents persisting objects with the same name?
If you have a single database behind the JBoss cluster, you can simply apply a unique constraint to the column (I am assuming it's an SQL database):
ALTER TABLE your_table ADD CONSTRAINT unique_name UNIQUE (column_name);
Then in the application code you can catch the resulting SQLException and let the user know they need to try again, or whatever is appropriate.
Update:
If you cannot alter the DB schema, you can approximate the same result by performing a SELECT before the insert to check for duplicate entries. If you are worried about two inserts happening at the same time, look at applying a pessimistic write lock (e.g. SELECT ... FOR UPDATE) to the row in question.
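Because Java-level synchronized blocks only cover one JVM, the check-then-insert has to be serialized in the database itself. One sketch, assuming a one-row lock_table created up front for this purpose (all table and column names here are placeholders):

```sql
-- Each node takes the same row lock, so only one check-and-insert runs at a
-- time across the whole cluster; COMMIT (or ROLLBACK) releases the lock.
SELECT lock_id FROM lock_table WHERE lock_id = 1 FOR UPDATE;

-- Safe to check now: no other node can sit between its check and its insert
SELECT COUNT(*) FROM your_table WHERE name = ?;

-- Only if the count was 0:
INSERT INTO your_table (name) VALUES (?);
COMMIT;
```

The lock table costs one extra round trip per insert, but unlike the plain SELECT-then-INSERT it closes the race window between the two statements.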
I'm facing an issue where a Java process (that I have no control over) is inserting rows into a table and causing an overflow. I have no way of intercepting the queries, and the exception raised by ORACLE is not informative. It only mentions an overflow, but not which column it's happening on.
I'd like to know which query is causing this overflow as well as the values being inserted.
I tried creating a BEFORE INSERT trigger on the table that copies the rows into another table that I can read later; however, it looks like the trigger is not run when the overflow happens.
Trigger syntax:
CREATE OR REPLACE TRIGGER OVERFLOW_TRIGGER
BEFORE INSERT
ON VICTIM_TABLE
FOR EACH ROW
BEGIN
  -- :new holds the incoming values; :old is always NULL in an INSERT trigger
  INSERT INTO QUERIES_DUMP VALUES (
    :new.COL1, :new.COL2, :new.COL3,
    :new.COL4, :new.COL5, :new.COL6,
    :new.COL7, :new.COL8, :new.COL9,
    :new.COL10, :new.COL11, :new.COL12
  );
END;
/
The table QUERIES_DUMP has the same structure as the failing table, but with the NUMBER and VARCHAR2 columns widened to their maximum capacity. I'm hoping to get a list of the inserted rows and then find out which ones break the rules.
Is it expected for a trigger not to run when an overflow occurs, even one set to run before insert?
SQL> select * from v$version;
BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Prod
PL/SQL Release 10.2.0.4.0 - Production
CORE 10.2.0.4.0 Production
TNS for Solaris: Version 10.2.0.4.0 - Production
NLSRTL Version 10.2.0.4.0 - Production
EDIT1:
The error being thrown is:
Description: Error while query: INSERT INTO
...
ORA-01438: value larger than specified precision allowed for this column
What I know is that no wrong types are being inserted anywhere. It's most probably the length of one of the numeric fields, but they're numerous and the insertion process takes more than an hour, so I can't brute-force my way into guessing the column.
I've thought about backing up the table and creating a new VICTIM_TABLE with larger columns, but the process actually inserts into a lot of other tables in a complex data model, and the DB holds somewhat sensitive information, so I can't endanger its consistency by moving things around.
I tried an INSTEAD OF trigger, but Oracle only accepts INSTEAD OF triggers on views, not on tables.
I added logging at the JDBC layer, but the queries I got did not have values, only '?' placeholders:
Description: Error while query: INSERT INTO VICTIM_TABLE ( . . . ) VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)
It depends on the specific error (including an ORA-xxxxx error number is always greatly appreciated).
If the problem is that the Java application is trying to insert a value that cannot be converted to the column's data type, that error would be expected before the trigger could run: data type validations have to happen before the trigger can execute.
Imagine what would happen if data type validations happened after the trigger ran. If the Java app passed an invalid value for, say, col1, then inside the trigger :new.col1 would have the data type of col1 in the underlying table but an invalid value. Any reference to that field would therefore have to raise an error; you couldn't plausibly log an invalid value to your table.
Are you sure that you can't intercept the queries somehow? For example, if you renamed victim_table to victim_table_base, created a view named victim_table with larger data types, and then defined an INSTEAD OF trigger on the view that validated the data and inserted it into the table, you could identify which values were invalid. Alternatively, since your Java application is (presumably) using JDBC to interact with the database, you should be able to enable logging at the JDBC layer to see the parameter values being passed.
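To make the rename-and-view approach concrete, here is a sketch with only two of the twelve columns shown; every name except VICTIM_TABLE and QUERIES_DUMP is a placeholder, and the exact oversized types are assumptions. The logging procedure uses an autonomous transaction so the logged row survives even though the forwarded insert (and with it the triggering statement) is rolled back on ORA-01438:

```sql
ALTER TABLE victim_table RENAME TO victim_table_base;

-- View with oversized column types, so any value the application binds is accepted
CREATE OR REPLACE VIEW victim_table AS
  SELECT CAST(col1 AS NUMBER)         AS col1,
         CAST(col2 AS VARCHAR2(4000)) AS col2
    FROM victim_table_base;

-- Autonomous transaction: the log commit is independent of the failing insert
CREATE OR REPLACE PROCEDURE log_attempt(p_col1 NUMBER, p_col2 VARCHAR2) AS
  PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
  INSERT INTO queries_dump VALUES (p_col1, p_col2);
  COMMIT;
END;
/

CREATE OR REPLACE TRIGGER victim_table_ioi
  INSTEAD OF INSERT ON victim_table
  FOR EACH ROW
BEGIN
  log_attempt(:new.col1, :new.col2);           -- record the attempt first
  INSERT INTO victim_table_base (col1, col2)   -- then forward; overflow fails here
  VALUES (:new.col1, :new.col2);
END;
/
```

The last row in queries_dump when the process aborts is the one whose values overflowed the real column definitions.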
Something has been puzzling me for a bit now, and an hour or two of googling hasn't revealed any useful answers on the subject, so I figured I'd just write the question.
When I create a database in SQL using 'CREATE DATABASE DBNAME' am I implicitly creating a catalog in that database? Is it proper to refer to that 'DBNAME' as a catalog? Or is it something completely unrelated?
When I use the MySQL JDBC driver to get the list of tables in a database using the getMetaData() function, the "TABLE_CAT" column (which I assume means 'catalog') is always set to the name of the database I've chosen.
Coincidence? Or am I just completely wrong on all of this?
"Catalog" is the JDBC term for what many people (and some RDBMSs) call a database, i.e. a collection of tables/views/etc. within a database system.
Using Apache Derby with Java (J2ME, but I don't think that makes a difference) is there any way of checking if a database already exists and contains a table?
I know of no direct way, only a few workarounds; unlike MySQL, Derby has no IF EXISTS facility.
What you do is try to connect to the database; if you can't, it's most likely not there. After a successful connection, you can run a simple query, like SELECT COUNT(*) FROM TABLE_NAME, to learn whether the table exists; you depend on the exception being thrown. Even an official example from Sun uses a similar workaround.
In Oracle we have data dictionary views to learn about database objects. I doubt we have anything like that in Derby.
[Edited]
Well, I found that there is a way to check whether a table exists: try SELECT TABLENAME FROM SYS.SYSTABLES. That covers checking for a table; for checking the database itself you need to do the similar thing I explained above.
Adeel, you could also use Connection.getMetaData to get a DatabaseMetaData object, then use getTables, once you have the connection to the database of course. This has the advantage of working for any database with a JDBC driver worth its salt.
For checking whether the database exists, if you are using Derby embedded, or the server is on the same machine, you could check whether the folder for the database exists. It's a bit kludgy, though. I would do as Adeel suggests and try to connect, catching the exception if it's not there.
I would suggest getting the DatabaseMetaData object, then using the getTables(null, null, null, new String[]{"TABLE"}) method from it, which returns a ResultSet. Use the next() method of the ResultSet, which returns a boolean, to test if any tables exist. If it tests true, you have tables in existence. False, and the database is empty.
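The combined advice can be sketched as below (class and helper names are mine). One Derby detail worth hedging: Derby folds unquoted identifiers to upper case, so the name passed to getTables should be normalized first:

```java
import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.Locale;

public class DerbyCheck {
    // Derby stores unquoted identifiers in upper case, so normalize before matching
    static String derbyIdentifier(String name) {
        return name.toUpperCase(Locale.ROOT);
    }

    // True if the connected database contains the named user table; a failed
    // connection attempt (before this is called) means the database itself is absent
    static boolean tableExists(Connection conn, String table) throws SQLException {
        DatabaseMetaData meta = conn.getMetaData();
        try (ResultSet rs = meta.getTables(null, null, derbyIdentifier(table),
                                           new String[] {"TABLE"})) {
            return rs.next();
        }
    }

    public static void main(String[] args) {
        System.out.println(derbyIdentifier("myTable")); // MYTABLE
    }
}
```

Passing null for the table-name pattern instead, as suggested above, and testing next() tells you whether the database contains any tables at all.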