We are using the JDBC-ODBC bridge to connect to an MS SQL database. When performing inserts or updates, strings are put into the database padded to the length of the database field. Is there any way to turn off this behavior (strings should go into the table without padding)?
For reference, we are able to insert field values that don't contain the padding using the SQL management tools and Query Analyzer, so I'm pretty sure this is occurring at the JDBC or ODBC layer of things.
EDIT: The fields in the database are listed as nvarchar(X), where X = 50, 255, whatever
EDIT 2: The insert is done using a prepared statement, like:
PreparedStatement stmt = con.prepareStatement("INSERT INTO....");
stmt.setString(1, "somevalue");
How are you setting the String? Are you doing this?
PreparedStatement stmt = con.prepareStatement("INSERT INTO....");
stmt.setString(1, "somevalue");
If so, try this:
stmt.setObject(1, "somevalue", Types.VARCHAR);
Again, this is just guessing without seeing how you are inserting.
Are you using CHAR fields in the database or VARCHAR?
CHAR pads values to the declared length of the field; VARCHAR does not.
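If changing the column type (or the driver) isn't an option, one workaround sketch is to strip the trailing pad spaces after reading the value back. This is a plain helper, nothing driver-specific; the column name used in the usage comment is hypothetical:

```java
class PadStrip {
    // Removes the trailing spaces that a CHAR column (or a padding driver layer)
    // appends to stored values. Leading and embedded spaces are preserved.
    static String stripTrailingPad(String s) {
        if (s == null) return null;
        int end = s.length();
        while (end > 0 && s.charAt(end - 1) == ' ') end--;
        return s.substring(0, end);
    }
}
```

Used when reading, e.g. `String name = PadStrip.stripTrailingPad(rs.getString("name"));` (the `"name"` column is a placeholder).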
I don't think JDBC would be causing this.
If you can make your insert work with regular SQL tools (like... I don't know, Toad for MS SQL Server or something), then changing the driver should do it.
Use Microsoft SQL Server JDBC type IV driver.
Give this link a try
http://www.microsoft.com/downloads/details.aspx?familyid=F914793A-6FB4-475F-9537-B8FCB776BEFD&displaylang=en
Unfortunately these kinds of downloads come with a lot of garbage. There's an install tool and hundreds of other files. Just look for something like:
installdir\lib\someSingle.jar
Copy to somewhere else and uninstall/delete the rest.
I did this a couple of months ago; unfortunately I don't remember exactly where it was.
EDIT
Ok, I got it.
Click on the download and at the end of the page click on "I agree and want to download the UNIX version"
This is a regular compressed file (use WinRAR or similar), and in there look for that single jar.
That should work.
If you are using the bundled Sun JDBC-ODBC Bridge driver, you may want to consider migrating to a proper MS SQL JDBC driver. Sun does not recommend that the bridge driver be used in a production environment.
The JDBC-ODBC Bridge driver is recommended only for experimental use or when no other alternative is available.
Moving to a more targeted driver may fix your problem altogether, or at the very least it will provide a production-ready setup once you do fix the bug.
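For reference, connecting with Microsoft's JDBC driver instead of the bridge only changes the URL and the driver jar; this sketch builds the documented URL form (host, port, and database name here are placeholders):

```java
class MsSqlUrl {
    // Builds a connection URL in the form used by Microsoft's JDBC driver.
    // All three arguments are placeholders to be replaced with real values.
    static String buildUrl(String host, int port, String database) {
        return "jdbc:sqlserver://" + host + ":" + port + ";databaseName=" + database;
    }
}
```

With the driver jar on the classpath, usage would look like `DriverManager.getConnection(MsSqlUrl.buildUrl("dbhost", 1433, "mydb"), user, password)`.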
Related
The MySQL Connector/J documentation (here) mentions two ways in which the JDBC driver retrieves results from the MySQL database. One is the default operation, in which the entire result set is loaded into memory and made accessible in the code. The second is row-by-row streaming.
I would like to know whether the latest versions of MySQL/MySQL JDBC support server-side cursors. Specifically, I would like to know whether the options useCursorFetch=true and defaultFetchSize>0 can be used to ensure that the result set is retrieved from the database in batches of a certain size (the fetch size). MySQL describes server-side cursors in its C API (here), and I would like to know whether similar support exists in MySQL JDBC.
If this support exists, what are the constraints of such an operation? I understand that a temporary table would be created in the server's memory from which results would be fetched. But what are the other things to look out for (such as table/row locks, restrictions on update/insertions, and result set/connection closing)?
The most recent version of the documentation you linked to has this note:
By default, ResultSets are completely retrieved and stored in memory. [...] you can tell the driver to stream the results back one row at a time.
To enable this functionality, create a Statement instance in the following manner:
stmt = conn.createStatement(java.sql.ResultSet.TYPE_FORWARD_ONLY,
java.sql.ResultSet.CONCUR_READ_ONLY);
stmt.setFetchSize(Integer.MIN_VALUE);
This sounds like what you're looking for.
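For the cursor-based alternative the question asks about, Connector/J also documents a useCursorFetch connection option combined with a positive fetch size. A sketch, assuming a Connector/J version that supports the option (the server URL and fetch size below are placeholders):

```java
class CursorFetch {
    // Appends the documented useCursorFetch option to a Connector/J URL,
    // taking care of whether the URL already has a query string.
    static String withCursorFetch(String baseUrl) {
        return baseUrl + (baseUrl.contains("?") ? "&" : "?") + "useCursorFetch=true";
    }
    // Hypothetical usage against a real server:
    //   Connection con = DriverManager.getConnection(
    //       CursorFetch.withCursorFetch("jdbc:mysql://localhost/test"), user, pass);
    //   PreparedStatement ps = con.prepareStatement("SELECT ...");
    //   ps.setFetchSize(100);   // rows fetched per server round trip
}
```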
So basically, I would like to avoid stored procedures, but at the same time I wouldn't want multiple round-trips to the database to execute sequential statements.
Apparently this blog says Facebook uses MySQL's multi-statement queries. Unfortunately, that is a C API; is there a Java equivalent?
So in brief, the question is: in Java + MySQL, how can a second JDBC statement use the output of the first statement as input, without a round-trip to the database and without a stored procedure?
If not how do other people approach this problem?
Yes, the JDBC driver for MySQL supports multi-statement queries. It is, however, disabled by default for security reasons, as multi-statement queries significantly increase the risk associated with SQL injection.
To turn on multi-statement query support, simply add the allowMultiQueries=true option to your connection string (or pass the equivalent option in map format). You can get more information on that option here: https://dev.mysql.com/doc/connector-j/5.1/en/connector-j-reference-configuration-properties.html.
Once this option is enabled, you can execute a call similar to: statement.execute("select ... ; select ... ; select ..."). The returned ResultSets can then be iterated from the Statement object: statement.getResultSet() returns the current one, and statement.getMoreResults() advances to the next and tells you whether another ResultSet is indeed available.
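The standard pattern for walking everything a multi-statement query returns (result sets and update counts alike) looks like this; the per-row processing is left as a comment, and the method counts result sets only so its behavior is observable:

```java
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

class MultiResult {
    // Iterates every result produced by a multi-statement query and
    // returns how many ResultSets were seen.
    static int processAll(Statement stmt, String multiSql) throws SQLException {
        int resultSets = 0;
        boolean isResultSet = stmt.execute(multiSql);
        while (true) {
            if (isResultSet) {
                resultSets++;
                try (ResultSet rs = stmt.getResultSet()) {
                    while (rs.next()) {
                        // process the current row here
                    }
                }
            } else if (stmt.getUpdateCount() == -1) {
                break; // neither a ResultSet nor an update count: no more results
            }
            isResultSet = stmt.getMoreResults();
        }
        return resultSets;
    }
}
```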
It sounds like you want to do batch processing.
Here is a duplicate question with a good answer:
How to execute multiple SQL statements from java
I'm trying to render a list of records in a JSF page with a query that returns many records. I'm using WebLogic 10.3.0.0 and Microsoft's SQL Server JDBC driver 4 to connect to a SQL Server database. When I run this JSF page, it consumes a lot of memory because the query returns many records, and therefore an OutOfMemoryError occurs. I've seen that with setFetchSize you can limit the results, but here:
What does Statement.setFetchSize(nSize) method really do in SQL Server JDBC driver?
it says Microsoft's SQL Server driver doesn't limit this. I used the jTDS driver as that post suggested, but the same problem occurred. I've also tried to use this:
http://msdn.microsoft.com/en-us/library/bb879937.aspx
to enable adaptive buffering in the driver. My driver is version 4, which should use adaptive buffering by default, but apparently it doesn't. I've tried this:
statement = connectionDB.createStatement();
SQLServerStatement SQLstmt = (SQLServerStatement) statement;
SQLstmt.setResponseBuffering("adaptive");
but this returns no results. I've also put it in the connection properties, but the problem still occurs. I understand that the problem is the query has a large result set and the driver doesn't fetch it in chunks, so memory runs out; I believe that is the issue. I don't know which workaround to use: manual pagination in the query, another driver, etc. Please help me find a workaround; any info is welcome.
I'm hitting a problem when trying to update a ResultSet.
I'm querying the database via JDBC, and getting back a resultset which is not CONCUR_UPDATABLE.
I need to replace the '_' into ' ' at the specified columns. How could I do that?
String value = derivedResult.getString(column).replace("_", " ");
derivedResult.updateString(column, value);
derivedResult.updateRow();
This works fine on Updatable, but what if it's ResultSet.CONCUR_READ_ONLY?
EDIT:
This will be a JDBC driver which calls other JDBC drivers. My problem is I need to replace the content of the ResultSets, even if they are forward-only or read-only. If I set SCROLL_INSENSITIVE and UPDATABLE there isn't a problem, but some JDBC drivers work only with forward-only result sets.
Solutions:
Should I move the results to an in-memory database and replace the contents there?
Or should I implement a ResultSet that acts like all my other classes: calling the underlying driver's functions, with modifications where needed?
I don't want to use the results afterwards to make updates or inserts. Basically this will be done on select queries.
In my experience updating the result set is only possible for simple queries (select statements on a single table). However, depending on the database, this may change. I would first consult the database documentation.
Even if you created your own ResultSet that was updatable, why do you think the database data would change? It is highly probable (almost certain) that the update mechanism relies on code that is not public and exists only in the ResultSet implementation of the JDBC driver you use.
I hope the above makes sense.
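That said, the in-memory option from the question is workable if the updates never need to reach the database: a CachedRowSet (part of Java SE's javax.sql.rowset package) buffers the rows and is updatable in memory regardless of the source ResultSet's concurrency mode. A sketch, reusing the underscore-to-space fix from the question:

```java
import java.sql.ResultSet;
import javax.sql.rowset.CachedRowSet;
import javax.sql.rowset.RowSetProvider;

class RowSetFix {
    // The replacement from the question, made null-safe.
    static String fixValue(String value) {
        return value == null ? null : value.replace('_', ' ');
    }

    // Buffers a (possibly forward-only, read-only) ResultSet in memory
    // and rewrites one column; the returned rowset is repositioned
    // before the first row so callers can iterate it normally.
    static CachedRowSet copyAndFix(ResultSet source, int column) throws Exception {
        CachedRowSet crs = RowSetProvider.newFactory().createCachedRowSet();
        crs.populate(source);  // reads all rows into memory
        while (crs.next()) {
            crs.updateString(column, fixValue(crs.getString(column)));
            crs.updateRow();   // in-memory only: no acceptChanges(), the DB is untouched
        }
        crs.beforeFirst();
        return crs;
    }
}
```

Note this trades memory for flexibility, so it only fits result sets that are small enough to buffer.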
I'm using sun.jdbc.odbc.JdbcOdbcDriver to connect to an Oracle database. I know I would probably be better off using the thin driver, but I want the app to work without specifying the DB server name and port number. My connection string is like jdbc:odbc:DSN.
The queries that I execute in my application may return millions of rows. All the data is critical, so I cannot limit them within the query. My concern is my Java app running into memory issues.
When I check the fetch size of the statement, it is set to 1. This seems extremely sub-optimal to me (a query retrieving 45K rows took about 13 minutes), and I would like a fetch size of at least 500 to improve performance.
My understanding is that when my query is executed (I'm using Statement), the statement object points to the results on the database, which I iterate over using a ResultSet. The ResultSet will hit the database to fetch n rows (where n is the fetch size) as I call resultSet.next(). Is this interpretation correct? If so, does it mean that my app will never face out-of-memory issues unless the fetch size is so large that the JVM gets swamped?
When I call stmt.setFetchSize() after creating a statement, I get an "invalid fetch size" SQLException. I can avoid this exception if I set stmt.setMaxRows() to a value larger than the fetch size. But:
1. I don't want my results to be limited to the MaxRows value.
2. I tried setting max rows to a huge value with a fetch size of 500, but saw no improvement in the time taken.
Please help me figure out how I can set a valid fetch size and get some improvement. Any other optimization suggestions for the same driver would be appreciated.
Thanks,
Fell
I haven't used a JDBC-ODBC bridge in quite a few years, but I would expect that the ODBC driver's fetch size would be controlling. What ODBC driver are you using and what version of the ODBC driver are you using? What is the fetch size specified in the DSN?
As a separate issue, I would seriously question your decision to use a JDBC-ODBC bridge in this day and age. A JDBC driver can use a TNS alias rather than explicitly specifying a host and port, which is no harder to configure than an ODBC DSN. And getting rid of the requirement to install, configure, and maintain an ODBC driver and DSN vastly improves performance and maintainability.
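To illustrate the TNS-alias point: recent Oracle thin drivers can resolve an alias from tnsnames.ora if told where that file lives. A sketch, where the alias and the tns_admin directory are placeholders:

```java
class OracleTns {
    // URL form that names a TNS alias instead of an explicit host:port.
    static String tnsUrl(String alias) {
        return "jdbc:oracle:thin:@" + alias;
    }
    // Hypothetical usage: point the driver at the directory holding tnsnames.ora,
    // then connect by alias, with no host or port in the application itself:
    //   System.setProperty("oracle.net.tns_admin", "/path/to/tns_admin");
    //   Connection con = DriverManager.getConnection(OracleTns.tnsUrl("MYDB"), user, password);
}
```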
Are you sure the fetch size makes any difference in Sun's JDBC-ODBC bridge driver? We tried it a few years ago with Java 5. The value is set, even validated, but never used. I suspect that's still the case.
Use a real JDBC driver; there is no advantage in using the ODBC bridge. On a PreparedStatement or CallableStatement you can call setFetchSize() to control how many rows are fetched per round trip.