I'm currently trying to store encrypted data in some of the columns of a Postgres DB. After receiving helpful feedback on this question: client-side-encryption-with-java-and-postgres-database, I am using converters/bindings to implement transparent encryption in the JDBC layer.
Right now I'm trying to insert a BigDecimal[][][] into a Postgres DB column of type bytea.
The insertion works, but the encryption code I've added in the converter/binding doesn't seem to run. When I check the database, I see an unencrypted 3D matrix. (FYI, my encryption utility code is tested and does work.)
To test, I put my encryption code in the DAO layer, and the BigDecimal[][][] matrix does get encrypted on DB inserts. Although I could do this, it defeats the purpose of using converters/bindings for encryption.
So my question:
With the code I provided below, am I doing anything wrong that prevents the encryption code in my converter/binding from running? I thought the converter was the next step after a PreparedStatement is executed, but maybe not. I lack knowledge of exactly when the converter/binding code gets called in the overall jOOQ flow, so any insight is much appreciated! Thanks :D
First, I'm using a PreparedStatement in a DAO to execute the insert query.
I can't show the full code but basically for the stmt I'm setting the BigDecimal[][][] as an object parameter:
private Result executeInsert(BigDecimal[][][] valueToEncrypt, String insertSql) {
    try (Connection conn = config.connectionProvider().acquire();
         PreparedStatement stmt = conn.prepareStatement(insertSql)) {
        // Get a human-readable version of the 3D matrix to insert into the DB.
        PostgresReadableArray humanReadableMatrix = getPostgresReadableArray(valueToEncrypt);
        stmt.setObject(parameterIndex++, humanReadableMatrix, Types.OTHER);
        ResultSet res = stmt.executeQuery();
    }
    ...
}
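For contrast, here is a minimal sketch of the same insert going through the jOOQ DSL, which is the path on which a forced-type binding participates (MY_TABLE and MATRIX_COLUMN are hypothetical generated names; jOOQ invokes a binding's set() method when it binds the value itself):

// Hypothetical generated names; the point is that jOOQ itself builds and
// executes the statement, so the forced-type binding is applied.
DSL.using(config)
   .insertInto(MY_TABLE)
   .set(MY_TABLE.MATRIX_COLUMN, valueToEncrypt)
   .execute();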
I am currently attaching the binding in the codegen XML file like this:
<forcedType>
<userType>
java.math.BigDecimal[][][]
</userType>
<binding>com.myapp.EncryptionBinding</binding>
<includeExpression>matrix_column</includeExpression>
<includeTypes>bytea</includeTypes>
</forcedType>
Here is my binding class EncryptionBinding:
public class EncryptionBinding implements Binding<byte[], BigDecimal[][][]> {

    @Override
    public Converter<byte[], BigDecimal[][][]> converter() {
        return new MatrixConverter();
    }

    // Rendering a bind variable for the binding context's value
    @Override
    public void sql(BindingSQLContext<BigDecimal[][][]> ctx) throws SQLException {
    }

    // Registering VARCHAR types for JDBC CallableStatement OUT parameters
    @Override
    public void register(BindingRegisterContext<BigDecimal[][][]> ctx) throws SQLException {
        ctx.statement().registerOutParameter(ctx.index(), Types.VARCHAR);
    }

    // Converting the BigDecimal[][][] to an encrypted value and setting that on a JDBC PreparedStatement
    @Override
    public void set(BindingSetStatementContext<BigDecimal[][][]> ctx) throws SQLException {
        ctx.statement().setBytes(ctx.index(), ctx.convert(converter()).value());
    }
...
Here is my converter class MatrixConverter used in the above EncryptionBinding class:
public class MatrixConverter extends AbstractConverter<byte[], BigDecimal[][][]> {

    private static final Logger logger = LoggerFactory.getLogger(MatrixConverter.class);

    public MatrixConverter() {
        super(byte[].class, BigDecimal[][][].class);
    }

    @Override
    public BigDecimal[][][] from(byte[] databaseObject) {
        return EncryptionUtils.decrypt(databaseObject);
    }

    @Override
    public byte[] to(BigDecimal[][][] userObject) {
        return EncryptionUtils.encrypt(JsonUtils.toJson(userObject));
    }
}
Hi, I would like to stream a very large table using spring-data-jdbc. For this purpose
I have set my connection to READ_ONLY and declared a method in my repository that looks like this:
interface PackageRepository extends Repository<Package, String> {
    Stream<Package> findAll();
}
My expectation here would be that the ResultSet would be of type FORWARD_ONLY and that this method would not block indefinitely until all results are received from the database.
Here I would draw a comparison with Spring Data JPA, where the Stream methods do not block and the content of the database is fetched in portions depending on the fetch size.
Have I missed some configuration? How can I achieve this behaviour with spring-data-jdbc?
UPDATE: I will put the question in a different form. How can I achieve with spring-data-jdbc the equivalent of:
template.query(new PreparedStatementCreator() {
    @Override
    public PreparedStatement createPreparedStatement(Connection con) throws SQLException {
        PreparedStatement statement = con.prepareStatement(
                "select * from MYTABLE with UR",
                ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);
        statement.setFetchSize(150000);
        return statement;
    }
}, new RowCallbackHandler() {
    @Override
    public void processRow(ResultSet rs) throws SQLException {
        // do my processing here
    }
});
Just by adding setFetchSize(Integer.MIN_VALUE) before querying, queryForStream indeed gives us a stream which loads records one by one rather than eagerly loading all records into memory in one shot.
namedTemplate.getJdbcTemplate().setFetchSize(Integer.MIN_VALUE);
Stream<LargeEntity> entities = namedTemplate.queryForStream(sql, params, rowMapper);
dependencies:
spring framework 5.3+
mysql-connector-java 8.0.x (or mariadb-java-client 2.7.x)
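Note that the returned Stream keeps the underlying JDBC resources open while it is consumed, so it should be closed, e.g. via try-with-resources (a small usage sketch; process() stands in for your own row handling):

try (Stream<LargeEntity> entities = namedTemplate.queryForStream(sql, params, rowMapper)) {
    entities.forEach(entity -> process(entity));
}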
Using JOOQ 3.5.2 with MySQL 5.7, I'm trying to accomplish the following...
MySQL has a set of JSON functions which allow for path targeted manipulation of properties inside larger documents.
I'm trying to make an abstraction which takes advantage of this using JOOQ. I began by creating a JSON-serializable document model which keeps track of changes and then implemented a JOOQ custom Binding for it.
In this binding, I have all the state information necessary to generate calls to these MySQL JSON functions with the exception of the qualified name or alias of the column being updated. A reference to this name is necessary for updating existing JSON documents in-place.
I have been unable to find a way to access this name from the *Context types available in the Binding interface.
I have been considering implementing a VisitListener to capture these field names and pass them through the Scope custom data map, but that option seems quite fragile.
What might be the best way to gain access to the name of the field or alias being addressed within my Binding implementation?
--edit--
OK, to help clarify my goals here, take the following DDL:
create table widget (
widget_id bigint(20) NOT NULL,
jm_data json DEFAULT NULL,
primary key (widget_id)
) ENGINE=InnoDB DEFAULT CHARSET=utf8
Now let's assume jm_data will hold the JSON representation of a java.util.Map<String,String>. For this, JOOQ provides a very nice extension API: implementing and registering a custom data-type binding (in this case using Jackson):
public class MySQLJSONJacksonMapBinding implements Binding<Object, Map<String, String>> {
private static final ObjectMapper mapper = new ObjectMapper();
// The converter does all the work
@Override
public Converter<Object, Map<String, String>> converter() {
return new Converter<Object, Map<String, String>>() {
@Override
public Map<String, String> from(final Object t) {
try {
return t == null ? null
: mapper.readValue(t.toString(),
new TypeReference<Map<String, String>>() {
});
} catch (final IOException e) {
throw new RuntimeException(e);
}
}
@Override
public Object to(final Map<String, String> u) {
try {
return u == null ? null
: mapper.writer().writeValueAsString(u);
} catch (final JsonProcessingException e) {
throw new RuntimeException(e);
}
}
@Override
public Class<Object> fromType() {
return Object.class;
}
@Override
public Class toType() {
return Map.class;
}
};
}
// Rendering a bind variable for the binding context's value and casting it to the json type
@Override
public void sql(final BindingSQLContext<Map<String, String>> ctx) throws SQLException {
// Depending on how you generate your SQL, you may need to explicitly distinguish
// between jOOQ generating bind variables or inlined literals. If so, use this check:
// ctx.render().paramType() == INLINED
ctx.render().visit(DSL.val(ctx.convert(converter()).value()));
}
// Registering VARCHAR types for JDBC CallableStatement OUT parameters
@Override
public void register(final BindingRegisterContext<Map<String, String>> ctx)
throws SQLException {
ctx.statement().registerOutParameter(ctx.index(), Types.VARCHAR);
}
// Converting the Map to a String value and setting that on a JDBC PreparedStatement
@Override
public void set(final BindingSetStatementContext<Map<String, String>> ctx) throws SQLException {
ctx.statement().setString(ctx.index(),
Objects.toString(ctx.convert(converter()).value(), null));
}
// Getting a String value from a JDBC ResultSet and converting that to a Map
@Override
public void get(final BindingGetResultSetContext<Map<String, String>> ctx) throws SQLException {
ctx.convert(converter()).value(ctx.resultSet().getString(ctx.index()));
}
// Getting a String value from a JDBC CallableStatement and converting that to a Map
@Override
public void get(final BindingGetStatementContext<Map<String, String>> ctx) throws SQLException {
ctx.convert(converter()).value(ctx.statement().getString(ctx.index()));
}
// Setting a value on a JDBC SQLOutput (useful for Oracle OBJECT types)
@Override
public void set(final BindingSetSQLOutputContext<Map<String, String>> ctx) throws SQLException {
throw new SQLFeatureNotSupportedException();
}
// Getting a value from a JDBC SQLInput (useful for Oracle OBJECT types)
@Override
public void get(final BindingGetSQLInputContext<Map<String, String>> ctx) throws SQLException {
throw new SQLFeatureNotSupportedException();
}
}
...this implementation is attached at build time by the code-gen like so:
<customTypes>
<customType>
<name>JsonMap</name>
<type>java.util.Map&lt;String,String&gt;</type>
<binding>com.orbiz.jooq.bindings.MySQLJSONJacksonMapBinding</binding>
</customType>
</customTypes>
<forcedTypes>
<forcedType>
<name>JsonMap</name>
<expression>jm_.*</expression>
<types>json</types>
</forcedType>
</forcedTypes>
...so with this in place, we have a nice, strongly typed Java Map which we can manipulate in our application code. The binding implementation, though, always writes the entire map contents to the JSON column, even if only a single map entry has been inserted, updated, or deleted. It treats the MySQL JSON column like a normal VARCHAR column.
This approach poses two problems of varying significance depending on usage.
Updating only a portion of a large map with many thousands of entries produces unnecessary SQL wire traffic as a side effect.
If the contents of the map are user editable, and there are multiple users editing the contents at the same time, the changes of one may be overwritten by the other, even if they are non-conflicting.
MySQL 5.7 introduced the JSON data type, and a number of functions for manipulating documents in SQL. These functions make it possible to address the contents of the JSON documents, allowing for targeted updates of single properties. Continuing our example...:
insert into DEV.widget (widget_id, jm_data)
values (1, '{"key0":"val0","key1":"val1","key2":"val2"}');
...the above Binding implementation would generate SQL like this if I were to change the java Map "key1" value to equal "updated_value1" and invoke an update on the record:
update DEV.widget
set DEV.widget.jm_data = '{"key0":"val0","key1":"updated_value1","key2":"val2"}'
where DEV.widget.widget_id = 1;
...notice the entire JSON string is being updated. MySQL can handle this more efficiently using the json_set function:
update DEV.widget
set DEV.widget.jm_data = json_set( DEV.widget.jm_data, '$."key1"', 'updated_value1' )
where DEV.widget.widget_id = 1;
So, if I want to generate SQL like this, I need to first keep track of changes made to my Map from when it was initially read from the DB until it is to be updated. Then, using this change information, I can generate a call to the json_set function which will allow me to update only the modified properties in place.
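For illustration, the change tracking could be as simple as a Map wrapper that records which keys were touched between the read and the update (a sketch of my own, not part of the binding; only put and remove are overridden here):

import java.util.*;

public class ChangeTrackingMap extends HashMap<String, String> {

    private final Set<String> dirtyKeys = new HashSet<>();

    @Override
    public String put(String key, String value) {
        dirtyKeys.add(key); // remember which entries changed since the read
        return super.put(key, value);
    }

    @Override
    public String remove(Object key) {
        dirtyKeys.add(String.valueOf(key));
        return super.remove(key);
    }

    // The keys to pass to json_set (or json_remove) on update.
    public Set<String> dirtyKeys() {
        return Collections.unmodifiableSet(dirtyKeys);
    }
}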
Finally getting to my actual question. You'll notice in the SQL I wish to generate, the value of the column being updated contains a reference to the column itself: json_set( DEV.widget.jm_data, .... This column (or alias) name does not seem to be available to the Binding API. Is there a way to identify the name of the column or alias being updated from within my Binding implementation?
Your Binding implementation is the wrong place to look for a solution to this problem. You don't really want to change the binding of your column to somehow magically know of this json_set function, which does an incremental update of the json data rather than a full replacement. When you use UpdatableRecord.store() (which you seem to be using), the expectation is for any Record.field(x) to reflect the content of the database row exactly, not a delta. Of course, you could implement something similar in the sql() method of your binding, but it would be very difficult to get right and the binding would not be applicable to all use-cases.
Hence, in order to do what you want to achieve, simply write an explicit UPDATE statement with jOOQ, enhancing the jOOQ API using plain SQL templating.
// Assuming this static import
import static org.jooq.impl.DSL.*;
public static Field<Map<String, String>> jsonSet(
Field<Map<String, String>> field,
String key,
String value
) {
return field("json_set({0}, {1}, {2})", field.getDataType(), field, inline(key), val(value));
}
Then, use your library method:
using(configuration)
.update(WIDGET)
.set(WIDGET.JM_DATA, jsonSet(WIDGET.JM_DATA, "$.\"key1\"", "updated_value1"))
.where(WIDGET.WIDGET_ID.eq(1))
.execute();
If this is getting too repetitive, I'm sure you can factor out common parts as well in some API of yours.
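For example, one possible factoring (the method and names are mine, not from the answer above) wraps the update in a small helper so callers only supply the key and the value:

// A sketch: updates a single JSON key of one widget row in place.
public static int updateWidgetJsonKey(
        Configuration configuration, long widgetId, String key, String value) {
    return DSL.using(configuration)
              .update(WIDGET)
              .set(WIDGET.JM_DATA, jsonSet(WIDGET.JM_DATA, "$.\"" + key + "\"", value))
              .where(WIDGET.WIDGET_ID.eq(widgetId))
              .execute();
}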
I have been searching for hours now with no result. Please help...
This is my class to test:
public class DBSelectSchema extends Database {
private static final Logger LOG = Logger
.getLogger(DBSelectSchema.class.getName());
private Connection conn = null;
public DBSelectSchema() {
super();
}
/**
* This method will return the version of the database.
*
* @return version
* @throws SQLException
*/
public JSONObject getVersionFromDB() throws SQLException {
ResultSet rs = null;
JSONObject version = new JSONObject();
PreparedStatement query = null;
try {
conn = mensaDB();
query = conn.prepareStatement("SELECT number FROM version");
rs = query.executeQuery();
if (rs.isBeforeFirst()) {
rs.next();
version.put(HTTP.HTTP, HTTP.OK);
version.put("version", rs.getString("number"));
} else {
version.put(HTTP.HTTP, HTTP.NO_CONTENT);
version.put(HTTP.ERROR, "Die SQL Abfrage lieferte kein Result!");
}
rs.close();
query.close();
conn.close();
} catch (SQLException sqlError) {
String message = ERROR.SQL_EXCEPTION;
LOG.log(Level.SEVERE, message, sqlError);
return version;
} catch (JSONException jsonError) {
String message = ERROR.JSON_EXCEPTION;
LOG.log(Level.SEVERE, message, jsonError);
return version;
}
return version;
}
I am trying to get into each branch for 100% code coverage.
How can I mock ResultSet rs, JSONObject version, and PreparedStatement query to do/return what I want?
Currently I am testing like that:
@Test
public void getVersionFromDB_RS_FALSE() throws SQLException, JSONException {
MockitoAnnotations.initMocks(this);
Mockito.when(dbSelMocked.mensaDB()).thenReturn(conn);
Mockito.when(conn.prepareStatement(Mockito.anyString())).thenReturn(query);
Mockito.when(query.executeQuery()).thenReturn(rs);
Mockito.when(rs.isBeforeFirst()).thenReturn(false);
JSONObject returnObj = dbSelMocked.getVersionFromDB();
assert(...);
}
But this only works when the 3 variables are class variables (like Connection conn) and not local variables. However, I don't want them (not even Connection) to be global.
=== EDIT 1 ===
It works like that if all variables are local:
@Test
public void getVersionFromDB_RS_FALSE() throws SQLException, JSONException {
System.out.println("####################");
System.out.println("started test: getVersionFromDB_RS_FALSE");
System.out.println("####################");
Connection conn = Mockito.mock(Connection.class);
PreparedStatement query = Mockito.mock(PreparedStatement.class);
ResultSet rs = Mockito.mock(ResultSet.class);
MockitoAnnotations.initMocks(this);
Mockito.when(dbSelMocked.mensaDB()).thenReturn(conn);
Mockito.when(conn.prepareStatement(Mockito.anyString())).thenReturn(query);
Mockito.when(query.executeQuery()).thenReturn(rs);
Mockito.when(rs.isBeforeFirst()).thenReturn(false);
JSONObject returnObj = dbSelMocked.getVersionFromDB();
assertTrue(returnObj.has("error"));
}
But I am not able to mock JSONObject version in another test anymore :(
How can I do that?
@Test
public void getVersionFromDB_JSON_EXCEPTION() throws SQLException, JSONException {
System.out.println("####################");
System.out.println("started test: getVersionFromDB_JSON_EXCEPTION");
System.out.println("####################");
JSONObject version = Mockito.mock(JSONObject.class);
MockitoAnnotations.initMocks(this);
doThrow(new JSONException("DBSelectSchemaIT THROWS JSONException")).when(version).put(anyString(), any());
JSONObject returnObj = dbSelMocked.getVersionFromDB();
System.out.println(returnObj.toString());
assertTrue(returnObj.equals(null));
}
I think it's overwritten in the real method... because it does not throw an exception and the method does not fail.
Your test code has multiple issues.
The test is verbose and fragile
The same (verbose) setup is required for multiple tests
You don't test the real object; instead, you are using a mock of your class for testing
The first 2 issues can be solved by extracting the repeated code into a setup method (I added a static import for Mockito to reduce the noise):
@Before
public void setUp() throws Exception {
    rs = mock(ResultSet.class); // rs is a field; mock it first so it can be stubbed below
    Connection conn = mock(Connection.class);
    PreparedStatement query = mock(PreparedStatement.class);
    when(dbSelMocked.mensaDB()).thenReturn(conn);
    when(conn.prepareStatement(anyString())).thenReturn(query);
    when(query.executeQuery()).thenReturn(rs);
}
Now in each of your tests you can configure rs to return whatever you need:
@Test
public void getVersionFromDB_RS_FALSE() throws Exception {
// Given
when(rs.isBeforeFirst()).thenReturn(false);
// When
JSONObject returnObj = dbSelMocked.getVersionFromDB();
// Then
assertTrue(returnObj.has("error"));
}
Now the most important issue: you are mocking the class DBSelectSchema to return the connection mock. Mocking the class under test can cause various hard-to-spot problems.
To solve this issue you have 3 options:
1. Refactor your code and inject some connection factory, so you'll be able to mock it in your test.
2. Extend class DBSelectSchema in your test and override the method mensaDB() so it returns the mocked connection.
3. Use an embedded database like H2 and put test data into the 'version' table before calling getVersionFromDB().
Option #1
Extract the creation of the connection into a separate class and use it in your DBSelectSchema:
public class ConnectionFactory {
public Connection getConnection() {
// here goes implementation of mensaDB()
}
}
Then inject it into your DBSelectSchema:
public DBSelectSchema(ConnectionFactory connFactory) {
this.connFactory = connFactory;
}
Now in your test you can use the real DBSelectSchema class with a mocked ConnectionFactory:
ConnectionFactory connFactory = mock(ConnectionFactory.class);
dbSel = new DBSelectSchema(connFactory);
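The factory mock then just needs to be stubbed to hand out a mocked connection (a sketch, assuming the getConnection() method shown above):

Connection conn = mock(Connection.class);
when(connFactory.getConnection()).thenReturn(conn);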
Option #2
You can create an almost-real class under test:
final Connection conn = mock(Connection.class);
dbSel = new DBSelectSchema() {
@Override
public Connection mensaDB() {
return conn;
}
};
Option #3
This option is the most preferable, because you execute real SQL commands and mock the whole database instead of individual classes. It requires some effort to use plain JDBC here, but it's worth it. Keep in mind that the SQL dialect can differ from the DB used in production.
@Before
public void setUp() throws Exception {
Class.forName("org.h2.Driver");
conn = DriverManager.getConnection("jdbc:h2:mem:test;INIT=RUNSCRIPT FROM 'classpath:schema.sql'");
}
@After
public void tearDown() throws Exception {
conn.close();
}
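For completeness, a minimal schema.sql matching the query under test might look like this (an assumption based on SELECT number FROM version above; adjust types to your real schema):

CREATE TABLE version (
    number VARCHAR(16)
);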
Then in your test you simply add the required records to the DB:
@Test
public void getVersionFromDB() throws Exception {
// Given
conn.prepareStatement("INSERT INTO version(number) VALUES (1)").execute();
// When
JSONObject returnObj = dbSel.getVersionFromDB();
// Then
assert(...);
}
Obviously, DBSelectSchema must use the same connection, so you can use this in combination with options #1 and #2.
You are unit testing the data access layer by mocking all JDBC calls. By doing so, you will end up with a unit test that does not really test any logic.
Taking an example from your code: assume that you are using the following SQL to retrieve a version number: SELECT number FROM version. Now assume that the column name changed and you need to retrieve 2 additional columns, so you end up with SQL like SELECT number, newColumn1, newColumn2 FROM version. The test you would have written (using mocks) would still pass, even though it is not really testing whether the 2 new columns are being retrieved. You get my point?
I would advise you to have a look at this thread for some possible alternatives for testing your data access layer. Using mocks for your data access layer will leave you with brittle tests that do not really test anything.
Your test is much too large, and you seem to be testing too much.
Split your code along its natural breaks, so that the code that does the data retrieval is separate from the logic that manipulates it.
You only want to test the code that you write, not the 3rd-party code. It's out of scope for your needs, and if you can't trust it, then don't use it.
I'm writing a generic logger for SQLException and I'd like to get the parameters that were passed into the PreparedStatement. How do I do that? I was able to get the count of them:
ParameterMetaData metaData = query.getParameterMetaData();
parameterCount = metaData.getParameterCount();
Short answer: You can't.
Long answer: All JDBC drivers will keep the parameter values somewhere but there is no standard way to get them.
If you want to print them for debugging or similar purposes, you have several options:
Create a pass-through JDBC driver (use p6spy or log4jdbc as a basis) which keeps copies of the parameters and offers a public API to read them.
Use the Java Reflection API (Field.setAccessible(true) is your friend) to read the private data structures of the JDBC drivers. That's my preferred approach: I have a factory which delegates to DB-specific implementations that can decode the parameters, which allows me to read them via getObject(int column). (A sketch follows after this list.)
File a bug report and ask for the exceptions to be improved. Oracle especially is really stingy when it comes to telling you what's wrong.
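To illustrate the reflection approach, here is a minimal sketch. The field name you pass is driver-specific (and hypothetical here); inspect your driver's sources to find where it stores its bind values:

import java.lang.reflect.Field;
import java.sql.PreparedStatement;

public class ParameterPeeker {

    // Reads a private field from the driver's PreparedStatement implementation.
    // Returns null if no class in the hierarchy declares such a field.
    public static Object readPrivateField(PreparedStatement ps, String fieldName)
            throws IllegalAccessException {
        // Walk up the class hierarchy, since the field may live in a superclass.
        for (Class<?> c = ps.getClass(); c != null; c = c.getSuperclass()) {
            try {
                Field f = c.getDeclaredField(fieldName);
                f.setAccessible(true); // for debugging/logging purposes only
                return f.get(ps);
            } catch (NoSuchFieldException ignore) {
                // not declared here; keep looking
            }
        }
        return null;
    }
}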
Solution 1: Subclass
Simply create a custom implementation of a PreparedStatement which delegates all calls to the original prepared statement, only adding callbacks in the setObject, etc. methods. Example:
public PreparedStatement prepareStatement(String sql) throws SQLException {
    final PreparedStatement delegate = conn.prepareStatement(sql);
    return new PreparedStatement() {
        // TODO: many more methods to delegate

        @Override
        public void setString(int parameterIndex, String x) throws SQLException {
            // TODO: remember the value of x
            delegate.setString(parameterIndex, x);
        }
    };
}
If you want to save parameters and get them later, there are many solutions, but I prefer creating a new class like ParameterAwarePreparedStatement which has the parameters in a map. The structure could be similar to this:
public class ParameterAwarePreparedStatement implements PreparedStatement {

    private final PreparedStatement delegate;
    private final Map<Integer, Object> parameters;

    public ParameterAwarePreparedStatement(PreparedStatement delegate) {
        this.delegate = delegate;
        this.parameters = new HashMap<>();
    }

    public Map<Integer, Object> getParameters() {
        return Collections.unmodifiableMap(parameters);
    }

    // TODO: many methods to delegate

    @Override
    public void setString(int parameterIndex, String x) throws SQLException {
        delegate.setString(parameterIndex, x);
        parameters.put(parameterIndex, x);
    }
}
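Usage is then a matter of wrapping the statement at creation time and reading the map when an SQLException occurs (a sketch):

PreparedStatement raw = conn.prepareStatement(sql);
ParameterAwarePreparedStatement ps = new ParameterAwarePreparedStatement(raw);
try {
    ps.setString(1, "randal");
    ps.executeQuery();
} catch (SQLException e) {
    // the parameters are now available for logging
    System.err.println("Query failed with parameters: " + ps.getParameters());
}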
Solution 2: Dynamic proxy
This second solution is shorter, but seems more hacky.
You can create a dynamic proxy by calling a factory method on java.lang.reflect.Proxy and delegating all calls to the original instance. Example:
public PreparedStatement prepareStatement(String sql) throws SQLException {
    final PreparedStatement ps = conn.prepareStatement(sql);
    final PreparedStatement psProxy = (PreparedStatement) Proxy.newProxyInstance(
            ClassLoader.getSystemClassLoader(),
            new Class<?>[] { PreparedStatement.class },
            new InvocationHandler() {
                @Override
                public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
                    if (method.getName().equals("setLong")) {
                        // ... your code here ...
                    }
                    // this invokes the default call
                    return method.invoke(ps, args);
                }
            });
    return psProxy;
}
Then you intercept the setObject, etc. calls by looking at the method names and at the method arguments for your values.
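Putting that together, a self-contained sketch of such a capturing proxy could look like this (a helper of my own; it relies on the JDBC convention that every setXxx(int, ...) setter takes the 1-based parameter index as its first argument):

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.sql.PreparedStatement;
import java.util.Map;

public final class CapturingStatements {

    // Wraps a PreparedStatement so every parameter set on it is recorded
    // in the supplied map, keyed by parameter index.
    public static PreparedStatement capture(
            final PreparedStatement ps, final Map<Integer, Object> captured) {
        return (PreparedStatement) Proxy.newProxyInstance(
                PreparedStatement.class.getClassLoader(),
                new Class<?>[] { PreparedStatement.class },
                new InvocationHandler() {
                    @Override
                    public Object invoke(Object proxy, Method method, Object[] args)
                            throws Throwable {
                        if (method.getName().startsWith("set")
                                && args != null && args.length >= 2
                                && args[0] instanceof Integer) {
                            captured.put((Integer) args[0], args[1]);
                        }
                        return method.invoke(ps, args); // delegate to the real statement
                    }
                });
    }
}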
This article from IBM Boulder, although DB2-specific, gives a complete example of ParameterMetaData usage.
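For reference, ParameterMetaData can at least tell you the declared types of the parameters, just not their values (a small sketch; some drivers throw SQLFeatureNotSupportedException for the type methods):

ParameterMetaData md = query.getParameterMetaData();
for (int i = 1; i <= md.getParameterCount(); i++) {
    System.out.println("parameter " + i + ": " + md.getParameterTypeName(i));
}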
I'm trying to generate some SQL files in my Java application.
The application will not execute any SQL statements, just generate a file with SQL statements and save it.
I'd like to use java.sql.PreparedStatement to create my statements so that I don't have to validate every string etc. with my own methods.
Is there a way to use a PreparedStatement without calling the java.sql.Connection.prepareStatement(String) function, given that I don't have a java.sql.Connection?
Take a look at this Java library: http://openhms.sourceforge.net/sqlbuilder/
I'm guessing that until you've got an SQL connection, the parser won't know what rules to apply; it's actually the SQL driver, or even the server, that compiles the SQL statement.
Assuming your SQL is simple enough, how about using a cheap connection, like, say, an SQLite connection?
SQLite will create a new database on the fly if the database you're attempting to connect to does not exist.
public Connection connectToDatabase() {
    Connection conn = null;
    // connect to the database (creates a new one if not found)
    try {
        Class.forName("org.sqlite.JDBC");
        conn = DriverManager.getConnection("jdbc:sqlite:mydatabase.db");
        // initialise the tables if necessary
        this.createDatabase(conn);
    }
    catch (java.lang.ClassNotFoundException e) {
        System.out.println(e.getMessage());
    }
    catch (java.sql.SQLException e) {
        System.out.println(e.getMessage());
    }
    return conn;
}
Not really. Preparing a statement in most cases means that it will be compiled by the DBMS, which is "hard" without a connection.
http://java.sun.com/docs/books/tutorial/jdbc/basics/prepared.html
This is a dastardly devious problem; thankfully it's pretty easy to cope with:
public class PreparedStatementBuilder
{
private String sql; // the sql to be executed
public PreparedStatementBuilder(final String sql) { this.sql = sql; }
protected void preparePrepared(final PreparedStatement preparedStatement)
throws SQLException
{
// this virtual method lets us declare how, when we do generate our
// PreparedStatement, we want it to be setup.
// note that at the time this method is overridden, the
// PreparedStatement has not yet been created.
}
public PreparedStatement build(final Connection conn)
throws SQLException
{
// fetch the PreparedStatement
final PreparedStatement returnable = conn.prepareStatement(sql);
// perform our setup directives
preparePrepared(returnable);
return returnable;
}
}
To use, just write an anonymous class that overrides void preparePrepared(PreparedStatement):
final String sql = "SELECT * FROM FOO WHERE USER = ?";
PreparedStatementBuilder psBuilder = new PreparedStatementBuilder(sql){
@Override
protected void preparePrepared(PreparedStatement preparedStatement)
throws SQLException
{
preparedStatement.setString(1, "randal");
}};
return obtainResultSet(psBuilder);
Presto! You now have a way to work with a PreparedStatement without yet having built it. Here's an example showing the minimal boilerplate you'd otherwise have to copy-paste to kingdom come, every time you wanted to write a different statement:
public ResultSet obtainResultSet(final PreparedStatementBuilder builder)
throws SQLException {
final Connection conn = this.connectionSource.getConnection();
try
{
// your "virtual" preparePrepared is called here, doing the work
// you've laid out for your PreparedStatement now that it's time
// to actually build it.
return builder.build(conn).executeQuery();
}
finally
{
try { conn.close(); }
catch (SQLException e) { log.error("f7u12!", e); }
}
}
You really really don't want to be copy pasting that everywhere, do you?
Try implementing PreparedStatement yourself. Example:
class YourOwnClass implements PreparedStatement {
    // 1. Implement all of the methods.
    // 2. Borrow the minimal logic from an existing implementation, e.g.
    //    OraclePreparedStatement (classes12.jar) or
    //    sun.jdbc.odbc.JdbcOdbcCallableStatement.
}