I'm developing a MySQL database project using JDBC. It uses parent/child tables linked with foreign keys.
TL;DR: I want to be able to get the AUTO_INCREMENT id of a table before an INSERT statement. I am already aware of the getGeneratedKeys() method in JDBC to do this following an insert, but my application requires the ID before insertion. Maybe there's a better solution to the problem for this particular application? Details below:
In a part of this application, the user can create a new item via a form or console input to enter details - some of these details are in the form of "sub-items" within the new item.
These inputs are stored in Java objects so that each row of the table corresponds to one of these objects - here are some examples:
MainItem
- id (int)
- a bunch of other details...
MainItemTitle
- mainItemId (int)
- languageId (int)
- title (String)
ItemReference
- itemId (int) <-- this references MainItem id
- referenceId (int) <-- this references another MainItem id that is linked to the first
So essentially each Java object represents a row in the relevant table of the MySQL database.
When I store the values from the input into the objects, I use a dummy id like so:
private static final int DUMMY_ID = 0;
...
MainItem item = new MainItem(DUMMY_ID, ...);
// I read each of the titles and initialise them using the same dummy id - e.g.
MainItemTitle title = new MainItemTitle(DUMMY_ID, 2, "Here is a title");
// I am having trouble with initialising ItemReference so I will explain this later
Once the user inputs are read, they are stored in a "holder" class:
class MainItemValuesHolder {
    MainItem item;
    ArrayList<MainItemTitle> titles;
    ArrayList<ItemReference> references;
    // These get initialised and have getters and setters, omitted here for brevity's sake
}
...
MainItemValuesHolder values = new MainItemValuesHolder();
values.setMainItem(mainItem);
values.addTitle(englishTitle);
values.addTitle(germanTitle);
// etc...
In the final layer of the application (in another class where the values holder was passed as an argument), the data from the "holder" class is read and inserted into the database:
// First insert the main item, belonging to the parent table
MainItem mainItem = values.getMainItem();
String insertStatement = mainItem.asInsertStatement(true); // true, ignore IDs
// this is an oversimplification of what actually happens, but basically constructs the SQL statement while *ignoring the ID*, because...
int actualId = DbConnection.insert(insertStatement);
// updates the database and returns the AUTO_INCREMENT id using the JDBC getGeneratedKeys() method
// Then do inserts on sub-items belonging to child tables
ArrayList<MainItemTitle> titles = values.getTitles();
for (MainItemTitle dummyTitle : titles) {
    MainItemTitle actualTitle = dummyTitle.replaceForeignKey(actualId);
    String insertStatement = actualTitle.asInsertStatement(false); // false, use the IDs in the object
    DbConnection.insert(insertStatement);
}
Now, the issue is using this procedure for ItemReference. Because it links two MainItems, constructing the objects beforehand with one (or multiple) dummy IDs destroys these relationships.
The most obvious solution seems to be getting the AUTO_INCREMENT ID beforehand, so that I don't need to use dummy IDs.
I suppose the other solution is inserting the data as soon as it is input, but I would prefer to keep different functions of the application in separate classes - so one class is responsible for one action. Moreover, if I insert as soon as data is input and the user then cancels before entering all the data for the "main item", titles, references, etc., the now-invalid data would need to be deleted.
In conclusion, how would I be able to get AUTO_INCREMENT before insertion? Is there a better solution for this particular application?
You cannot get the value before the insert: you cannot know what other actions may be taken on the table in the meantime, and AUTO_INCREMENT may not even be incrementing by one - you may have set it up that way, but it could be changed.
You could use a temporary table to store the data, with keys under your control. I would suggest using a UUID rather than an id, so you can assume it will always be unique. Your other classes can then copy the data into the live tables, still using the UUIDs to find related data in your temporary table(s), but writing it in the order that makes sense to the database (so the 'root' record first, to get its key, and then using that key where required).
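A rough sketch of that idea using the question's class names (the UUID-to-id translation scheme here is an assumption, not a prescribed design):

import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

// Give every in-memory object a client-generated key that is unique by construction.
Map<String, MainItem> pendingItems = new HashMap<>();
String firstUuid = UUID.randomUUID().toString();
String secondUuid = UUID.randomUUID().toString();
pendingItems.put(firstUuid, firstItem);   // firstItem/secondItem: MainItems built from user input
pendingItems.put(secondUuid, secondItem);

// An ItemReference can be held as a pair of UUIDs until insert time,
// so the relationship survives without knowing any AUTO_INCREMENT values.

// At insert time, write the parent rows first and record the generated ids...
Map<String, Integer> realIds = new HashMap<>();
for (Map.Entry<String, MainItem> e : pendingItems.entrySet()) {
    realIds.put(e.getKey(), DbConnection.insert(e.getValue().asInsertStatement(true)));
}
// ...then insert each ItemReference row using realIds.get(itemUuid) and realIds.get(referenceUuid).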
Related
It is easy to make a relation between two tables of a database with @Relation and @ForeignKey of the Room library,
and in SQLite we can join tables from different databases.
But how can I do it with the Room library?
In Room you will not be able to code cross-database foreign keys. The same restriction applies to SQLite. However, a foreign key is not required for a relationship to exist; it is a constraint (rule) used to enforce the integrity of a relationship.
Likewise, in Room you will not be able to utilise cross-database relationships. The @Relation annotation basically defines the join criteria used for the queries that Room generates.
However, you can programmatically have relations between two room databases via the objects.
Example
As a basic example (based upon a Room database I was looking at) consider:-
The first database (which already existed), whose abstract class is Database, has a single entity defined in the Login class and all of its DAOs in the interface AllDao.
A Login object has 4 members/fields/columns, the important one being a byte[] with the hash of the user, named userHashed.
The second database, whose abstract class is OtherDatabase, has a single entity defined in the UserLog class and all of its DAOs in the interface OtherDBDao.
A UserLog object has 3 members/fields/columns, the important/related column being the hash of the respective user (Login) (the parent in the Login table).
With the above consider the following :-
//First Database
db = Room.databaseBuilder(this, Database.class, "mydb")
        .allowMainThreadQueries()
        .build();
allDao = db.allDao();

//Other Database
otherdb = Room.databaseBuilder(this, OtherDatabase.class, "myotherdb")
        .allowMainThreadQueries()
        .build();
otherDBDao = otherdb.otherDBDao();

// Add some user rows to the first db
// (t1..t4 are byte[] user hashes defined elsewhere in the example)
Login l1 = new Login(t1, t2, t3, 10);
Login l2 = new Login(t2, t3, t4, 20);
Login l3 = new Login(t3, t4, t1, 30);
Login l4 = new Login(t4, t1, t2, 40);
allDao.insertLogin(l1);
allDao.insertLogin(l2);
allDao.insertLogin(l3);
allDao.insertLogin(l4);

// Get one of the Login objects (2nd inserted)
Login[] extractedLogins = allDao.getLoginsByUserHash(t2);
// Based upon the first Login retrieved (only 1 will be),
// add some userlog rows to the other database according to the relationship
if (extractedLogins.length > 0) {
    for (int i = 0; i < 10; i++) {
        Log.d("USERLOG_INSRT", "Inserting UserLog Entry");
        otherDBDao.insertUserLog(new UserLog(extractedLogins[0].getUserHashed()));
    }
}
UserLog[] extractedUserLogs = otherDBDao.getUserLogs(extractedLogins[0].getUserHashed());
for (UserLog ul : extractedUserLogs) {
    // ....
}
The above :-
builds both databases.
Adds 4 users to the first database.
extracts all of the Login objects that match a specific user (there will only be 1) from the first database.
for each Login extracted (again just the 1) it adds 10 UserLog rows to the other database.
as the TEST, uses the userhash from the first database to extract all the related UserLog rows from the other database.
to simplify showing the results, a breakpoint was placed on the loop that would process the extracted UserLog objects.
Of course such a design would probably never be used.
(A screenshot of the debug screen at the point the breakpoint is triggered followed here.)
Say I want to save/create a new item in a DynamoDB table,
if and only if there is no existing item that already contains a referenceId equal to the value I am setting.
In my case I want to create an item with referenceId=123 if there is no other referenceId=123 in the table.
The referenceId is not the primary key! (I do not want it to be.)
So the code:
val withReferenceIdValue = "123";

val saveExpression = new DynamoDBSaveExpression();
final Map<String, ExpectedAttributeValue> expectedNoReferenceIdFound = new HashMap();
expectedNoReferenceIdFound.put(
        "referenceId",
        new ExpectedAttributeValue(new AttributeValue().withS(withReferenceIdValue))
                .withComparisonOperator(ComparisonOperator.NE)
);
saveExpression.setExpected(expectedNoReferenceIdFound);

newItemRecord.setReferenceId(withReferenceIdValue);
this.mapper.save(newItemRecord, saveExpression); // does not fail..
That does not seem to work.
If the table already has referenceId=123, the save() does not fail.
I expected this.mapper.save to fail with an exception.
Q: How do I make it fail on the condition?
I also checked this one, where they suggest adding an auxiliary table (a transaction-state table), because it seems the saveExpression only works against the primary/partition key... if so:
Not sure why that limitation is there. In any case, if it is the primary
key, one cannot create a duplicate item with the same primary key anyway,
so why have conditions in the first place? A third table is too much. Why
is there not simply NE against whatever field I want to use? I could
create an index for that field instead of being limited to the primary
key. That is what I mean.
UPDATE:
My table mapping code:
@Data // I use Lombok and it does generate getters and setters.
@DynamoDBTable(tableName = "MyTable")
public class MyTable {
    @DynamoDBHashKey(attributeName = "myTableID")
    @DynamoDBAutoGeneratedKey
    private String myTableID;

    @DynamoDBAttribute(attributeName = "referenceId")
    private String referenceId;

    @DynamoDBAttribute(attributeName = "startTime")
    private String startTime;

    @DynamoDBAttribute(attributeName = "endTime")
    private String endTime;
    ...
}
Correct me if I'm wrong, but from the:
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/dynamodb-dg.pdf
Conditional Writes By default, the DynamoDB write operations (PutItem,
UpdateItem, DeleteItem) are unconditional: each of these operations
will overwrite an existing item that has the specified primary key
The "specified primary key" wording makes me think that conditional writes work ONLY with primary keys.
--
Also, there is an attempt to use a transactional way to read/write from the DB. There is a library, but it does not even have a Maven repo: https://github.com/awslabs/dynamodb-transactions
As an alternative, there seems to be the option of a third transaction table with primary keys that tell you whether you are OK to read or write to the main table (ugly), as we replied here: DynamoDBMapper save item only if unique
Another alternative, I guess (by design): design your tables so that the primary key is your business key, and then you can use it for conditional writes.
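With that design, the uniqueness check becomes an ordinary conditional put on the key itself. A sketch, assuming a hypothetical table where referenceId is the hash key (so not the mapping from the question):

import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBSaveExpression;
import com.amazonaws.services.dynamodbv2.model.ConditionalCheckFailedException;
import com.amazonaws.services.dynamodbv2.model.ExpectedAttributeValue;
import java.util.HashMap;
import java.util.Map;

DynamoDBSaveExpression saveExpression = new DynamoDBSaveExpression();
Map<String, ExpectedAttributeValue> expected = new HashMap<>();
// Only write if no item with this hash key exists yet.
expected.put("referenceId", new ExpectedAttributeValue(false));
saveExpression.setExpected(expected);

try {
    mapper.save(newItemRecord, saveExpression);
} catch (ConditionalCheckFailedException e) {
    // An item with this referenceId already exists - the save fails, as desired.
}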
--
Another option: use Aurora :)
--
Another option (investigating): https://aws.amazon.com/blogs/database/building-distributed-locks-with-the-dynamodb-lock-client/ - I do not like this one either, because it could potentially create timeouts for others who want to create new items in this table.
--
Another option: live with it and let duplication happen on item creation (not including the primary key), then take care of it as part of "garbage collection". Depends on the scenario.
I have this bean/table "Userinfo" with columns id, username, and twitchChannel.
For most userinfo rows the twitchChannel column will be null. I'm going through every userinfo entity in the table and checking the twitchChannel column; if it is not null, I put the twitchChannel in an array.
This is what my query looks like:
"SELECT ui FROM Userinfo ui WHERE ui.iduserinfo=:id"
It is very inefficient, because I'm going through every single entity, even those with a null twitchChannel, which I'm not interested in.
This is Java, but I commented every line so it's easy to understand for those who don't know it.
while (true) { // going through the table in an infinite loop
    int id = 0; // id that is incremented for searches
    Userinfo ui; // object that will hold the result of the query
    do {
        ui = ups.getUserInfo(id); // this executes the query posted above
        id++; // incrementing id for the next search
        if (ui != null && ui.getTwitch() != null) { // if the search returned a row with a twitch channel
            twitchChannels.add(ui.getTwitch()); // put the twitch channel in the array
        }
    } while (ui != null);
}
So at the moment I'm going through every entity in my table even those with a null twitch. To my understanding it's possible to speed up the process with indexes.
CREATE INDEX twitchChannel
ON Userinfo (twitchChannel)
So something like that would give a table with only non-null twitchChannel values. How would I loop through this table like above?
Will it work the same way with Java persistence?
Change the query to:
SELECT ui
FROM Userinfo ui
WHERE twitchChannel IS NOT NULL
This will benefit from an index on Userinfo(twitchChannel) (assuming there really are very few values that are filled in). At the very least, this reduces the amount of data passed from the database to the application, even if an index is not used.
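In JPA terms, that could look something like the following (a sketch, assuming an EntityManager named em and that the Userinfo entity's attribute is named twitchChannel):

import javax.persistence.EntityManager;
import java.util.List;

// Fetch only the rows that actually have a twitch channel,
// letting the database (and any index) do the filtering.
List<String> twitchChannels = em.createQuery(
        "SELECT ui.twitchChannel FROM Userinfo ui WHERE ui.twitchChannel IS NOT NULL",
        String.class)
    .getResultList();

Selecting just ui.twitchChannel also avoids materialising whole entities when only the channel names are needed.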
If I've understood your question correctly, you have a table containing numerical ids, and you are stepping through the whole id space to see if each value corresponds to an id in your table ('twitch' id?).
Assuming you have less than infinity users, I would have thought you can reverse this logic.
Change your query to :
SELECT iduserinfo FROM Userinfo ORDER BY iduserinfo
Then your Java code will be something along the lines of:
uiResult = ups.getUserInfo(id); // this executes the new query
while (uiResult.next()) {
    twitchChannels.add(uiResult.getTwitch()); // put the twitch channel in the array
}
(Apologies, it's been a long time since I've used JDBC.)
Sorry If I've misunderstood the question.
I am using the Play Framework for the first time and I need to link objects of the same type. In order to do so, I have added a self-referencing many-to-many relationship like this:
@ManyToMany(cascade = CascadeType.ALL)
@JoinTable(name = "journal_predecessor", joinColumns = {@JoinColumn(name = "journal_id")}, inverseJoinColumns = {@JoinColumn(name = "predecessor_id")})
public List<Journal> journalPredecessor = new ArrayList<Journal>();
I obtain the table journal_predecessor which contains the two columns: journal_id and predecessor_id, both being FKs pointing to the primary key of the table journal.
My question is: how can I query this table using raw queries, given that I am using an H2 in-memory database? Thanks!
Actually it was very easy. I just needed to create an instance of SqlQuery to create a raw query:
SqlQuery rawQuery = Ebean.createSqlQuery("SELECT journal_id from journal_predecessor where journal_id=" + successorId + " AND predecessor_id=" + predecessorId);
And because I just needed to check whether a row exists or not, I find the size of the set of results returned by the query:
Set<SqlRow> sqlRow = rawQuery.findSet();
int rowExists = sqlRow.size();
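As a side note, Ebean can bind named parameters into raw SQL, which avoids building the statement by string concatenation (a sketch of the same existence check; the parameter names are mine):

SqlQuery rawQuery = Ebean.createSqlQuery(
        "SELECT journal_id FROM journal_predecessor"
        + " WHERE journal_id = :successorId AND predecessor_id = :predecessorId");
rawQuery.setParameter("successorId", successorId);
rawQuery.setParameter("predecessorId", predecessorId);

// Same check as above, with the values bound safely.
boolean rowExists = !rawQuery.findSet().isEmpty();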
Context: Ebean, Play Framework, Model, Optimistic Locking
Is it possible to set an annotation on a value of a model which tells Ebean that it shouldn't throw an "optimistic locking exception" for this value, because it is independent of the previous data?
Example usage: I have a lastAction value which is updated frequently. It doesn't matter if this value is absolutely correct, because it is just used to determine the automated logout time or deletion time (registered and guest users).
I believe that you can achieve this by using 2 separate tables: one for optimistic-lockable attributes, another one for do-not-care attributes.
Later you can combine them in one DB view.
For example:
create table optimistic_lockable (
  id bigint primary key
  ....
);

create table non_lockable (
  id bigint primary key,
  lockable_id bigint references optimistic_lockable (id)
);

create view model_view as
select *
from optimistic_lockable ol, non_lockable nl
where ol.id = nl.lockable_id;
You map your model to model_view. And if (and only if) the DB engine allows inserts into the view, you'll probably be fine ;)
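For the mapping side, a sketch of a model pointed at the view rather than a base table (the class and column names are assumptions, and whether writes through the view work depends entirely on the DB engine):

import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Table;
import javax.persistence.Version;

// Hypothetical model mapped onto the combined view.
@Entity
@Table(name = "model_view")
public class ModelView {
    @Id
    public Long id;

    // The version column comes from the lockable table only, so updates to
    // the do-not-care columns need not bump it.
    @Version
    public Long version;

    // ... lockable and non-lockable attributes exposed through the view
}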