I am aware of what serialization is; however, I have not found any real practical example describing the latter (saving an object in a database taking advantage of the JAVA_OBJECT mapping).
Do I have first to serialize the object and then save it to the database?
In the case of MySQL, you don't have to serialize the object first; the driver will do it for you. Just use the PreparedStatement.setObject method.
For example, first in MySQL create the table:
create table blobs (b blob);
Then in a Java program create a prepared statement, set the parameters, and execute:
PreparedStatement preps = connection.prepareStatement("insert into blobs (b) values (?)");
preps.setObject(1, new CustomObject());
preps.execute();
Don't forget that the class of the object that you want to store has to implement the Serializable interface.
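For illustration, this is roughly what the driver does behind setObject: the object is serialized to bytes, and those bytes are stored in the BLOB column. The class and field names here are hypothetical, but the round trip is runnable on its own:

```java
import java.io.*;

public class SerializeDemo {
    // Hypothetical example class; anything stored this way must implement Serializable.
    static class CustomObject implements Serializable {
        private static final long serialVersionUID = 1L;
        String name = "example";
    }

    // Roughly what the driver does under setObject: serialize the object to a byte[].
    static byte[] toBytes(Serializable obj) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(obj);
        }
        return bos.toByteArray();
    }

    // And the reverse, for reading the column back.
    static Object fromBytes(byte[] bytes) throws IOException, ClassNotFoundException {
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        byte[] blob = toBytes(new CustomObject());          // what ends up in the BLOB column
        CustomObject back = (CustomObject) fromBytes(blob); // what getObject would restore
        System.out.println(back.name); // prints "example"
    }
}
```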
Serialization is used to save the state of an object, marshal it to a stream, and share it with a remote process. The other process just needs the same class version to deserialize the stream back into an object.
The problem with the database approach is that you would need to expose the database to the remote process as well. This is generally not done for various reasons, mainly security.
I'm currently learning Java JDBC.
Today I had a look on how Stored Procedures are called from within JDBC.
What I don't get: when I have a stored procedure like, for example, this one:
CREATE PROCEDURE demo.get_count_for_department
(IN the_department VARCHAR(64), OUT the_count INT)
BEGIN
...
"the_count" is marked as an OUT parameter, and its type is also specified. So this should all be known.
Nevertheless I have to specify the type again
statement.registerOutParameter(2, Types.INTEGER);
Why do I have to put the type in there again? It seems redundant to me.
And why do I have to give two parameters in there at all?
statement = connection.prepareCall("{call get_count_for_department(?, ?)}");
I haven't seen this in any other programming language. You only have to take care of the in-parameters; the function itself takes care of the out-parameters.
Why is that different here?
Perhaps someone can drop me a few lines so that I get a better idea of how those stored procedure calls work.
The reason is that the SQL statement is just a string from Java's perspective.
The task of a JDBC driver is to send that string to the database and receive results.
You could read the stored procedure's metadata to get information about the procedure you are about to call, but that takes time and possibly multiple extra queries to the DB.
If you want that kind of integration, you go a step up from JDBC and use some kind of utility or framework to map DB objects to Java ones.
Depending on the database, it might technically not be necessary. Declaring the type allows a JDBC driver to execute the stored procedure without first having to query the database for metadata about the statement, and it can also be used to disambiguate between multiple stored procedures with the same name (but different parameters).
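To make the flow concrete, here is a sketch of the full call sequence against the procedure from the question. It assumes an open `connection` and the sample department name is made up; it is not runnable without a database:

// Assumes an open java.sql.Connection `connection` and the
// demo.get_count_for_department procedure from the question.
CallableStatement statement =
        connection.prepareCall("{call get_count_for_department(?, ?)}");
statement.setString(1, "Engineering");            // IN: the_department (value is hypothetical)
statement.registerOutParameter(2, Types.INTEGER); // OUT: the_count, type declared by hand
statement.execute();
int theCount = statement.getInt(2);               // read the OUT value after execution

Both placeholders appear in the call string because, as noted above, the string is opaque to the driver; registerOutParameter is what tells it that position 2 is an output and how to materialize it as a Java value.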
I have a Java method which passes a CLOB to a PL/SQL procedure using JDBC. I was able to do that using the createClob() method of the Connection class.
Here is the Java Doc for the Connection class. If you notice, besides the createClob() method there are also createBlob(), createArrayOf(), and createNClob() methods in this class.
I am curious why the creation of instances of Blob, Clob, and NClob is part of the Connection class. It seems a bit out of place. Why should data types and their creation be tied to the connection object?
Why can't we create instances of these data types independently? I am planning to expose this method with the following signature in a SOAP web service:
public String handleEmployeeReview(int empId , String fileName)
It seems a little odd that a web service client would first have to create a Connection instance just to create an instance of Clob. (Unless there is another way of creating and passing Clobs that I am unaware of.)
Which also makes me wonder whether my choice of Clob as the data type for this method is the right one, considering it's being exposed in a web service.
JDBC is designed to be database engine independent. The database types INT, VARCHAR, TIMESTAMP, etc., map fairly naturally to common Java types: int, String, java.sql.Timestamp (which extends java.util.Date), and so on.
Data types like BLOB, CLOB, and NCLOB are more specific: they can be implemented very differently across database engines (some engines don't even support arrays as a data type for table columns), yet JDBC should still provide a transparent interface between the client code and the database engine. The designers of the JDBC interfaces decided that the creation of these objects should depend on the JDBC implementation (that is, a CLOB object is database engine specific), and the natural place to provide creation of CLOB objects (and similar) is the java.sql.Connection interface, since you need an open physical database connection anyway to create an instance of such an engine-specific object. IMO this is the proper interface for it, since it allows using the same CLOB object in different PreparedStatements and CallableStatements with no problems.
The Connection#createClob method and its siblings should be used by your DAO layer only. Other datasources may use a different approach to store the binary data of your files, e.g. a direct byte[] stored in memory; in that case the datasource would be a cache system, not a database.
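As a sketch of that DAO-layer usage (the table and method names are hypothetical, and this is not runnable without a database):

// DAO-layer sketch: assumes an open Connection and a hypothetical table
// employee_review(emp_id INT, review CLOB).
public void saveReview(Connection connection, int empId, String reviewText) throws SQLException {
    Clob clob = connection.createClob();   // driver-specific Clob, tied to this connection
    clob.setString(1, reviewText);         // Clob positions are 1-based
    try (PreparedStatement ps = connection.prepareStatement(
            "insert into employee_review (emp_id, review) values (?, ?)")) {
        ps.setInt(1, empId);
        ps.setClob(2, clob);
        ps.executeUpdate();
    } finally {
        clob.free();                       // release driver-held resources
    }
}

With many drivers you can also simply call ps.setString(2, reviewText) for a CLOB column, which keeps Clob handling entirely inside the DAO and lets the web service signature stay String-based, as in handleEmployeeReview.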
I need to store a Java object in an H2 database so that I can call the object's methods easily.
I read through H2's data types. The only way I can store a Java object is to use the OTHER type.
However, this data type uses serialization and deserialization. If I store a very large object, it's going to take a lot of time and memory to do the serialization.
I just want to refer to the object. Is there any way to achieve that?
If H2 cannot do it, is there any other in-memory database that can?
MongoDB gives you the ability to write documents of any structure, i.e. any number and any types of key/value pairs can be written. Assuming I use this feature and my documents are indeed schema-less, how do I manage reads? Basically, how does the application code (I'm using Java) manage reads from the database?
The Java driver reads and writes documents as BasicBSONObjects, which implement and are used as Map<String, Object>. Your application code is then responsible for reading this map and casting the values to the appropriate types.
A mapping framework like Morphia or Spring Data MongoDB can help you convert a BSONObject to your classes and vice versa.
When you want to do this yourself, you could use a factory method which takes a BasicBSONObject, checks which keys and values it has, and uses this information to create and return an object of the appropriate class.
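A minimal sketch of such a factory, assuming a hypothetical Employee class. Since BasicBSONObject implements Map<String, Object>, the factory can be written against Map and unit-tested without a database:

```java
import java.util.Map;

public class EmployeeFactory {
    // Hypothetical domain class.
    static class Employee {
        final String name;
        final int age;
        Employee(String name, int age) { this.name = name; this.age = age; }
    }

    // Takes the document-as-map, inspects the keys, and builds the domain object.
    static Employee fromDocument(Map<String, Object> doc) {
        String name = (String) doc.get("name");
        // Numeric BSON values may arrive as Integer, Long, or Double; normalize via Number.
        int age = ((Number) doc.getOrDefault("age", 0)).intValue();
        return new Employee(name, age);
    }

    public static void main(String[] args) {
        Employee e = fromDocument(Map.of("name", "Ada", "age", 36));
        System.out.println(e.name + " " + e.age); // prints "Ada 36"
    }
}
```

In real code, the factory could also dispatch on a discriminator key (e.g. a "type" field) to pick which class to instantiate.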
I'm writing an application which needs to write an object into database.
For simplicity, I want to serialize the object.
But ObjectOutputStream, needed for this purpose, has only one public constructor, which takes an OutputStream as its parameter.
What parameter should be passed to it?
You can pass a ByteArrayOutputStream and then store the resulting stream.toByteArray() in the database as a BLOB.
Make sure you specify a serialVersionUID for the class, because otherwise you'll have a hard time when you add or remove a field.
Also consider XML serialization via XMLEncoder if you need slightly more human-readable data.
And ultimately, you may want to translate your object model to the relational model via an ORM framework. JPA implementations (Hibernate/EclipseLink/OpenJPA) provide object-relational mapping so that you work with objects, but their fields and relations are persisted in an RDBMS.
Using ByteArrayOutputStream is a simple enough way to convert to a byte[] (call toByteArray after you've flushed the object stream). Alternatively there is Blob.setBinaryStream (which actually returns an OutputStream you can write to).
You might also want to reconsider using the database as a database...
E.g. create a ByteArrayOutputStream and pass it to the ObjectOutputStream constructor.
One thing to add to this: Java serialization is a good, general-use tool. However, it can be a bit verbose, so you might want to try gzipping the serialized data. You can do this by putting a GZIP stream between the object stream and the byte stream. This uses a small amount of extra CPU, but that is often a worthy tradeoff to avoid shipping the extra bytes over the network and shoving them into a DB.
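A runnable sketch of that stream sandwich (the payload string is made up purely for demonstration):

```java
import java.io.*;
import java.util.zip.*;

public class GzipSerialize {
    // Serialize through a GZIP stream placed between the object stream
    // and the byte stream, as described above.
    static byte[] serializeGzipped(Serializable obj) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(new GZIPOutputStream(bos))) {
            oos.writeObject(obj);
        } // closing the object stream finishes the GZIP stream, so the bytes are complete
        return bos.toByteArray();
    }

    static Object deserializeGzipped(byte[] bytes) throws IOException, ClassNotFoundException {
        try (ObjectInputStream ois = new ObjectInputStream(
                new GZIPInputStream(new ByteArrayInputStream(bytes)))) {
            return ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        String payload = "some repetitive data ".repeat(100); // hypothetical payload
        byte[] compressed = serializeGzipped(payload);
        System.out.println(deserializeGzipped(compressed).equals(payload)); // prints "true"
    }
}
```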