I'm currently using ORMLite to work with a SQLite database on Android. As part of this I am downloading a bunch of data from a backend server, and I'd like this data to be added to the SQLite database in exactly the same form it has on the backend server (i.e. the IDs are the same, etc.).
So, my question is: if I populate my database entry object (we'll call it Equipment), including Equipment's generatedId/primary-key field via setId(), and then run DAO.create() with that Equipment object, will that ID be saved correctly? I tried it this way, and it seems to me that this was not the case. If it should work, I will try again and look for other problems, but in the first few passes over the code I was not able to find one. So essentially: if I call DAO.create() on a database object with an ID already set, will that ID be sent to the database? And if not, how can I insert a row with its primary key value already filled out?
Thanks!
@Femi is correct that an object can have either a generated-id or an id, but not both. The issue is not just how ORMLite stores the object; the object also has to match the schema the database was generated with.
ORMLite supports an allowGeneratedIdInsert=true option on the @DatabaseField annotation that allows this behavior. It is not supported by some database types (Derby, for example) but works under Android/SQLite.
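For example, a minimal sketch of what that could look like on the Equipment class from the question (the name field is just an invented extra column for illustration):

    import com.j256.ormlite.field.DatabaseField;
    import com.j256.ormlite.table.DatabaseTable;

    @DatabaseTable(tableName = "equipment")
    public class Equipment {
        // generatedId lets SQLite assign the id on a normal insert, while
        // allowGeneratedIdInsert permits dao.create() to keep an id that
        // was already set via setId() before the call
        @DatabaseField(generatedId = true, allowGeneratedIdInsert = true)
        private int id;

        // hypothetical extra column for illustration
        @DatabaseField
        private String name;

        Equipment() {
            // no-arg constructor required by ORMLite
        }

        public void setId(int id) {
            this.id = id;
        }
    }

With that in place, dao.create(equipment) should insert the row with the id you set instead of asking SQLite to generate one.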
For posterity, you can also create two objects that share the same table -- one with a generated-id and one without. You can then insert through the generated-id DAO to get the auto-generated behavior, and through the other DAO to take the id value set by the caller; see the sketch below. Here's another answer talking about that. The downside for you is that this creates a lot of extra DAOs.
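A rough sketch of that pattern, with hypothetical class names (both classes map to the same equipment table; the name column is again invented):

    import com.j256.ormlite.field.DatabaseField;
    import com.j256.ormlite.table.DatabaseTable;

    @DatabaseTable(tableName = "equipment")
    public class EquipmentGenId {
        @DatabaseField(generatedId = true)
        int id; // database assigns the id

        @DatabaseField
        String name;
    }

    @DatabaseTable(tableName = "equipment")
    public class EquipmentFixedId {
        @DatabaseField(id = true)
        int id; // caller supplies the id before create()

        @DatabaseField
        String name;
    }

DaoManager.createDao(connectionSource, EquipmentFixedId.class) would then give you the DAO that honors caller-supplied ids, while the EquipmentGenId DAO keeps the auto-generated behavior.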
The only other solution is not to use the id for your purposes. Let the database generate the id, and have an additional, externally-set field that you use for your purposes. Forcing the database id in certain circumstances seems to me to be a bad pattern.
From http://ormlite.com/docs/generated-id:
Boolean whether the field is an auto-generated id field. Default is false. Only one field can have this set in a class. This tells the database to auto-generate a corresponding id for every row inserted. When an object with a generated-id is created using the Dao.create() method, the database will generate an id for the row which will be returned and set in the object by the create method. Some databases require sequences for generated ids in which case the sequence name will be auto-generated. To specify the name of the sequence use generatedIdSequence. Only one of this, id, and generatedIdSequence can be specified.
You must use either generatedId (in which case it appears all ids must be generated) or id (in which case you can set them) but not both.
I am trying to track changes made to a database (schema) using a Java app. We are trying to track changes to each column/unique-constraint/index and table.
Functionally, I know table.column is unique. So if the datatype of a column changes, we know which column to find and can record the change. But what if the name changes? If JDBC's result set is ordered (it is accessed by index), then I can rely on the order to give me the same column every time, even if the name changes. Will there be any surprises here, since it is a result 'set'?
However, I learnt that the order of the columns can be changed as well. Isn't there some unique ID associated with each column, so that columns can be tracked on that basis?
I would prefer not to go the information_schema route, but even when I checked there for MySQL, I found nothing useful.
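For reference, the kind of JDBC metadata call I have in mind is something like this (connection details and table name are placeholders); ORDINAL_POSITION is the only ordering it exposes, and that shifts when columns are reordered:

    import java.sql.Connection;
    import java.sql.DatabaseMetaData;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class ColumnSnapshot {
        public static void main(String[] args) throws SQLException {
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:mysql://localhost/mydb", "user", "pass")) {
                DatabaseMetaData meta = conn.getMetaData();
                // one row per column of "mytable"; ORDINAL_POSITION is the
                // 1-based position of the column in the table, so it changes
                // when columns are reordered -- there is no stable column id
                try (ResultSet rs = meta.getColumns(null, null, "mytable", "%")) {
                    while (rs.next()) {
                        System.out.printf("%d %s %s%n",
                                rs.getInt("ORDINAL_POSITION"),
                                rs.getString("COLUMN_NAME"),
                                rs.getString("TYPE_NAME"));
                    }
                }
            }
        }
    }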
I'm dealing with a legacy database that uses a strange key/ID configuration for one of its tables. It's the table that defines user information. Here are the columns (I've simplified things a little):
ID
Secondary ID
First Name
Last Name
Change Type
All of these columns are part of the key in the database itself and are needed to uniquely identify a row, with one exception: when the Change Type column is null, the ID column alone uniquely identifies a row. This exception is heavily relied on to get a user's name based on their ID. However, I seem to need to mark all of these columns with @Id for Hibernate to work correctly with this table ... or do I? Assuming I do, how would I go about also implementing the exception so that objects can be loaded from the database by just the ID? Ideally I'd like to interact with this object as if ID were the only key, since in practice that's how the DBAs do it in straight SQL.
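For context, here is roughly what I think the everything-as-@Id mapping would look like (names simplified like the column list above; the nullable Change Type inside a composite key is exactly the part that worries me):

    import java.io.Serializable;
    import java.util.Objects;
    import javax.persistence.Entity;
    import javax.persistence.Id;
    import javax.persistence.IdClass;

    // key class mirroring every @Id field of the entity
    class UserKey implements Serializable {
        Long id;
        Long secondaryId;
        String firstName;
        String lastName;
        String changeType;

        @Override
        public boolean equals(Object o) {
            if (!(o instanceof UserKey)) return false;
            UserKey k = (UserKey) o;
            return Objects.equals(id, k.id)
                    && Objects.equals(secondaryId, k.secondaryId)
                    && Objects.equals(firstName, k.firstName)
                    && Objects.equals(lastName, k.lastName)
                    && Objects.equals(changeType, k.changeType);
        }

        @Override
        public int hashCode() {
            return Objects.hash(id, secondaryId, firstName, lastName, changeType);
        }
    }

    @Entity
    @IdClass(UserKey.class)
    class User {
        @Id Long id;
        @Id Long secondaryId;
        @Id String firstName;
        @Id String lastName;
        @Id String changeType;
    }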
I have the following table in my db:
CREATE TABLE document (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    productModelId INT NOT NULL,
    comment VARCHAR(50),
    CONSTRAINT FK_product_model FOREIGN KEY (productModelId) REFERENCES product_model(id)
)
Of course, the real table is much more complicated, but this is enough to understand the problem.
Our users want to see the number of the document when they click the "new" button. So, in order to do that, we have to create the object in the db and send that object to the client. But there is a problem: we need to know productModelId before we save the object in the db; otherwise we get an SQL exception.
I see two possible variants (both are ugly, really):
Show the user a modal list of product models, and only after that create the object in the database with the productModelId chosen by the user.
Create a temporary number, and save the object in the db only when the user finishes editing the document and saves it. We would also need to drop the NOT NULL constraint and validate this somewhere in code.
The first way is bad because we already have too many modals in our application. Our UI is too heavy with them.
The second variant is ugly because our database is not consistent without all the checks.
What can you suggest we do? Any new solutions? What do you do in your apps? Maybe some UI tips? We are using the first variant at the moment.
Theory says that the id you use in your database should not carry meaning for the user, so the user should not see it (except perhaps hidden in a URL or similar), and you should not display it. The problem you have is one possible confirmation of this theory.
Right now the solution you have is only partially correct: it satisfies the technical requirements, but it is still bad, because if the user doesn't complete the insert you'll end up with empty records in the DB (meaning the ID and foreign key are OK, but all other fields are empty or hold useless default values). You are basically circumventing the database's validations.
There are two better solutions, but both require you to review your database.
The first is not to use the id as something to display to the user. Use another column, with another "id": declare it unique in the database, generate it in the application, display it to the user, and then use this other "id" (if it's unique, it is effectively an id) wherever needed.
The second one is used more and more often because it does not require a central database or other authority to check the uniqueness of ids, so it scales better in distributed environments.
Drop the common "id int" (auto-incremented or not) and use UUIDs. Your id will be a varchar or a binary; a UUID implementation (like java.util.UUID -- you can find equivalents in other languages) will generate a unique id by itself whenever (and wherever, even on the client, for example) you need it, and you then supply this id when saving.
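To make that concrete, here is a minimal sketch against the document table from the question, assuming the id column is changed to a VARCHAR(36):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;
    import java.util.UUID;

    public class DocumentDao {

        // Can run as soon as the user clicks "new": no database row is
        // created, yet the id is already final and can be displayed.
        public String newDocumentId() {
            return UUID.randomUUID().toString();
        }

        // Runs later, once the user has picked a product model; the id
        // generated earlier is simply supplied with the INSERT.
        public void insertDocument(Connection conn, String id,
                int productModelId, String comment) throws SQLException {
            String sql = "INSERT INTO document (id, productModelId, comment)"
                    + " VALUES (?, ?, ?)";
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                ps.setString(1, id);
                ps.setInt(2, productModelId);
                ps.setString(3, comment);
                ps.executeUpdate();
            }
        }
    }

Since the id exists before any row does, the user can see the document number immediately, and nothing half-saved is left behind if they abandon the edit.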
We did it the following way.
We created a table id_requests with the fields issue_type_id and lastId. We need this in order to avoid the situation where two users hit the "new" button and get the same ids.
And of course we added a field innerNum to all the tables in which we use this feature.
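Roughly, reserving a number works like this (a simplified JDBC sketch; it assumes the id_requests row for the issue type already exists, and FOR UPDATE keeps two simultaneous "new" clicks from reading the same lastId):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class NumberReserver {
        public int reserveNumber(Connection conn, int issueTypeId)
                throws SQLException {
            conn.setAutoCommit(false);
            try {
                int next;
                // lock the counter row for this issue type
                try (PreparedStatement sel = conn.prepareStatement(
                        "SELECT lastId FROM id_requests"
                                + " WHERE issue_type_id = ? FOR UPDATE")) {
                    sel.setInt(1, issueTypeId);
                    try (ResultSet rs = sel.executeQuery()) {
                        if (!rs.next()) {
                            throw new SQLException("no counter for type " + issueTypeId);
                        }
                        next = rs.getInt(1) + 1;
                    }
                }
                try (PreparedStatement upd = conn.prepareStatement(
                        "UPDATE id_requests SET lastId = ?"
                                + " WHERE issue_type_id = ?")) {
                    upd.setInt(1, next);
                    upd.setInt(2, issueTypeId);
                    upd.executeUpdate();
                }
                conn.commit();
                return next; // becomes the document's innerNum
            } catch (SQLException e) {
                conn.rollback();
                throw e;
            }
        }
    }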
Thank you!
Here's the case: I am creating a batch script that runs daily, parsing logfiles and exporting the data to a database. The format of this file is basically
std_prop1;std_prop2;std_prop3;[opt_prop1;[opt_prop2;[opt_prop3;[..]]]
The standard properties map to a table with a column for each property, where each line in the logfile basically maps to a corresponding row. It might look like LOGDATA(id,timestamp,systemId,methodName,callLength). Since we should be able to log as many optional properties as we like, we cannot map them to the same table, since that would mean adding a column to the table every time a new property was introduced. Not to mention the number of NULL values ...
So the additional properties go in another table, say EXTRA_PROPS(logdata_foreign_key,propname,value). In reality, most of the optional properties are the same (e.g. OS version, app container, etc.), making it somewhat wasteful to log, for instance, 4 rows in EXTRA_PROPS for each row in LOGDATA (in the case that one has on average 4 extra properties). So what I would like my batch job to do is:
for each additionalProperty in logRow:
    if additionalProperty already exists in the extra properties table:
        create a reference to it in the reference table
    else:
        add the property to the extra properties table
        create a reference to it in the reference table
I would then probably have three slightly different tables:
LOGDATA(id,timestamp,systemId,methodName,callLength)
EXTRA_PROPS(id,propname,value)
LOGDATA_HAS_EXTRA_PROPS(logid,extra_prop_id)
I am not 100% sure this is a better way of doing it -- I would still create N rows in the LOGDATA_HAS_EXTRA_PROPS table for N properties -- but at least I would not add any new rows to EXTRA_PROPS.
Even if this might not be the best way (what is?), I am still wondering about the technical side: how would I implement this using Hibernate? It does not have to be super fast, but it would need to chew through 100K+ rows.
Firstly, I would not recommend using Hibernate for this type of logic. Hibernate is a great product, but this kind of high-load data operation may not be its strongest point.
From a data modeling standpoint, it appears to me that (propname, value) is actually the primary key of EXTRA_PROPS. Basically, you want to express the logic that, for example, the combination hostname + foo.bar.com will only appear once in the table. Am I right? That would be the PK. So you will need to use that in LOGDATA_HAS_EXTRA_PROPS; using the name alone will not be sufficient for the reference.
In Hibernate (if you choose to use it), that can be expressed via a composite key, using @EmbeddedId or @Embeddable on the object mapped to EXTRA_PROPS. Then you can have a many-to-many relationship that uses LOGDATA_HAS_EXTRA_PROPS as the association table.
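A sketch of what that mapping might look like (table and column names are taken from the question; the Java names are my own invention):

    import java.io.Serializable;
    import java.util.Objects;
    import java.util.Set;
    import javax.persistence.Embeddable;
    import javax.persistence.EmbeddedId;
    import javax.persistence.Entity;
    import javax.persistence.GeneratedValue;
    import javax.persistence.Id;
    import javax.persistence.JoinColumn;
    import javax.persistence.JoinTable;
    import javax.persistence.ManyToMany;
    import javax.persistence.Table;

    @Embeddable
    class ExtraPropId implements Serializable {
        String propname;
        String value;

        @Override
        public boolean equals(Object o) {
            if (!(o instanceof ExtraPropId)) return false;
            ExtraPropId other = (ExtraPropId) o;
            return Objects.equals(propname, other.propname)
                    && Objects.equals(value, other.value);
        }

        @Override
        public int hashCode() {
            return Objects.hash(propname, value);
        }
    }

    @Entity
    @Table(name = "EXTRA_PROPS")
    class ExtraProp {
        @EmbeddedId
        ExtraPropId id; // (propname, value) is the whole identity
    }

    @Entity
    @Table(name = "LOGDATA")
    class LogData {
        @Id
        @GeneratedValue
        Long id;

        // LOGDATA_HAS_EXTRA_PROPS is just the association table
        @ManyToMany
        @JoinTable(name = "LOGDATA_HAS_EXTRA_PROPS",
                joinColumns = @JoinColumn(name = "logid"),
                inverseJoinColumns = {
                        @JoinColumn(name = "propname",
                                referencedColumnName = "propname"),
                        @JoinColumn(name = "value",
                                referencedColumnName = "value")})
        Set<ExtraProp> extraProps;
    }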
I'm getting introduced to serialization and ran into some problems when pairing it with a LinkedList.
Consider I have the following table:
CREATE TABLE JAVA_OBJECTS (
ID BIGINT NOT NULL UNIQUE AUTO_INCREMENT,
OBJ_NAME VARCHAR(50),
OBJ_VALUE BLOB
);
And I'm planning to store 3 object types, so the table may look like this:
ID OBJ_NAME OBJ_VALUE
============================
1 Class1 BLOB
2 Class2 BLOB
3 Class1 BLOB
4 Class3 BLOB
5 Class3 BLOB
And I'll use 3 different LinkedLists to manage these objects.
I've been able to implement LoadFromTable() and StoreIntoTable(Class1 obj1).
My question is: if I change an attribute of a Class2 object in LinkedList<Class2>, how do I effect that change in the DB for this individual item? Also take into account that the order of the elements in the LinkedList may change.
Thanks : )
EDIT:
Yes, I understand that I'll have to delete/update a row in my DB table. But how do I keep track of WHICH row to update? I'm only storing the objects in the List, not their respective IDs from the table.
You'll have to store their IDs in the objects you are storing. However, I would suggest not trying to roll your own ORM system; instead, use something like Hibernate.
If you change an attribute of an object, or the order of the items, you will have to delete the affected rows and insert the updated list again.
How do I effect the change in the DB for this individual item?
I hope I understand you correctly: the SQL UPDATE and DELETE statements allow you to add a WHERE clause in which you choose the ID of the row to change.
e.g.
UPDATE JAVA_OBJECTS SET OBJ_NAME = "new name" WHERE ID = 2
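In JDBC, the same statement might look like this (a sketch; it assumes you kept the row's ID around, e.g. via the wrapper below):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    class ObjectUpdater {
        // dbId selects exactly one row, no matter where the object
        // currently sits in the LinkedList
        static void rename(Connection conn, long dbId, String newName)
                throws SQLException {
            String sql = "UPDATE JAVA_OBJECTS SET OBJ_NAME = ? WHERE ID = ?";
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                ps.setString(1, newName);
                ps.setLong(2, dbId);
                ps.executeUpdate();
            }
        }
    }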
EDIT:
To prevent problems with your IDs you could wrap your objects:
class Wrapper {
    long dbId;   // primary key of the backing row (BIGINT in the table)
    Object obj;  // the deserialized object itself
}
and add these wrappers to your LinkedList instead of the 'naked' objects.
You can use the AUTO_INCREMENT attribute on your table and then retrieve the id assigned to the row added by the last INSERT statement (MySQL's LAST_INSERT_ID(), exposed in JDBC via Statement.getGeneratedKeys()). Along with this, maintain a map (e.g. a HashMap) from the Java object to its id. Using this map you can keep track of which row to delete/update.
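A sketch of that bookkeeping (class and method names are invented; an IdentityHashMap avoids surprises if your stored classes override equals()):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;
    import java.util.IdentityHashMap;
    import java.util.Map;

    public class ObjectStore {
        // remembers which table row backs which in-memory object
        private final Map<Object, Long> idByObject = new IdentityHashMap<>();

        public void store(Connection conn, Object obj, String name,
                byte[] serialized) throws SQLException {
            String sql = "INSERT INTO JAVA_OBJECTS (OBJ_NAME, OBJ_VALUE)"
                    + " VALUES (?, ?)";
            try (PreparedStatement ps = conn.prepareStatement(
                    sql, Statement.RETURN_GENERATED_KEYS)) {
                ps.setString(1, name);
                ps.setBytes(2, serialized);
                ps.executeUpdate();
                try (ResultSet keys = ps.getGeneratedKeys()) {
                    if (keys.next()) {
                        idByObject.put(obj, keys.getLong(1));
                    }
                }
            }
        }

        // the id to use in the WHERE clause of later UPDATE/DELETE statements
        public Long idOf(Object obj) {
            return idByObject.get(obj);
        }
    }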
Edit: See the answer to this question as well.
I think the real problem here is that you are mixing different levels of abstraction. By storing serialized Java objects in a relational database as BLOBs, you have to accept several drawbacks:
You lose interoperability. Applications written in languages other than Java are not able to read the data back, and even other Java applications must have the class files of the serialized classes on their classpath.
Changing the class definitions of the stored classes will end in a maintenance nightmare.
You give up the advantages of a relational database. Serialization hides the actual data from the database, so the database only sees a black box. You are unable to execute any meaningful query against the real data; all you have is the ID and a block of bytes.
You have to implement low-level data handling yourself. The database is actually built to handle your data effectively, but serialization prevents it from doing its job. So you are on your own, and you are running into exactly that problem right now.
So in most cases you benefit from separation of concerns and from using the right tool for the job.
Here are some suggestions:
Separate the internal data handling inside your application from the persistent storage. Design your database schema in a way that enables the built-in database features to handle the data efficiently. In the case of a relational database like MySQL, you can choose from different technologies, such as plain JDBC, object-relational mappers like JPA, or simple mappers like MyBatis. Separation here means avoiding contaminating the database with implementation-specific concerns.
If, for example, your Java application has a List of Person instances, and each Person consists of a name and an age, then you would represent that list in a relational database as a table with a VARCHAR field for the name, a numeric field for the age, and maybe a third field for a unique key. The database is then able to do what it does best: manage large amounts of data.
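For instance, with JPA the Person example could be mapped like this (a sketch; the field names just follow the example above):

    import javax.persistence.Entity;
    import javax.persistence.GeneratedValue;
    import javax.persistence.Id;

    @Entity
    public class Person {
        @Id
        @GeneratedValue
        Long id;     // the unique key mentioned above

        String name; // becomes a VARCHAR column
        int age;     // becomes a numeric column
    }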
Inside your application you typically separate the persistence layer, which contains the code that communicates with the database, from the rest of your program.
In some use cases a relational database may not be the appropriate tool. In a single-user desktop application with a small set of data, it may be best to simply serialize your Person list to a plain file and read it back at the next start-up.
But there exist other alternatives for persisting your data. Maybe some kind of object-oriented database is the right tool. In particular, I have experience with Fast Objects. As a simplification, it is serialization on steroids: there is no need for a layer like JPA or JDBC between your application and your database; you are able to store class instances directly into the database. But unlike the relational database with its BLOB field, the OODB knows your classes and the actual data, and can benefit from that.
Another alternative may be JDBM or Berkeley DB.
So separation of concerns and choosing the right persistence strategy (and using it the right way) is a key factor in the success of your project. But doing it right is hard, even for experienced developers.