I created a table in the Google Bigtable datastore. I set its primary key using annotations as follows:
@Id
@Column(name = "groupname")
private String groupname;

@Basic
private String groupdesc;
It works correctly, but it overrides the previous record. How do I solve this?
For example, if I enter
groupname=group1
groupdesc=groupdesc
then it is accepted. If I then enter the same groupname again, e.g.
groupname=group1
groupdesc=groups
this record overrides the previous one.
This is simply how the App Engine datastore works: it does not distinguish between insertions and updates. If you're not confident the keys you're generating yourself are unique, you either need to use auto-generated keys, or check for existence before inserting a record (see the sketch below).
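For illustration only, here is a minimal sketch of the "check for existence first" approach using the low-level datastore API inside a transaction, so a concurrent insert of the same key cannot slip in between the get and the put. The question itself uses JPA annotations, and the kind name "group" plus the in-scope variables groupname and groupdesc are assumptions:

import com.google.appengine.api.datastore.*;

DatastoreService datastore = DatastoreServiceFactory.getDatastoreService();
Key key = KeyFactory.createKey("group", groupname); // groupname doubles as the key name
Transaction txn = datastore.beginTransaction();
try {
    boolean exists = true;
    try {
        datastore.get(txn, key);                    // succeeds only if the group already exists
    } catch (EntityNotFoundException e) {
        exists = false;
    }
    if (exists) {
        txn.rollback();                             // refuse to overwrite the existing record
    } else {
        Entity group = new Entity(key);
        group.setProperty("groupdesc", groupdesc);
        datastore.put(txn, group);
        txn.commit();
    }
} finally {
    if (txn.isActive()) {
        txn.rollback();
    }
}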
Related
Say I want to save/create a new item in a DynamoDB table if and only if no existing item already contains a referenceId equal to the value I am setting.
In my case I want to create an item with referenceId=123 only if there is no other item with referenceId=123 in the table.
The referenceId is not the primary key! (And I do not want it to be.)
So the code:
val withReferenceIdValue = "123";

val saveExpression = new DynamoDBSaveExpression();
final Map<String, ExpectedAttributeValue> expectedNoReferenceIdFound = new HashMap<>();
expectedNoReferenceIdFound.put(
    "referenceId",
    new ExpectedAttributeValue(new AttributeValue().withS(withReferenceIdValue))
        .withComparisonOperator(ComparisonOperator.NE)
);
saveExpression.setExpected(expectedNoReferenceIdFound);

newItemRecord.setReferenceId(withReferenceIdValue);
this.mapper.save(newItemRecord, saveExpression); // ...but this does not fail
That does not seem to work.
If the table already has referenceId=123, the save() does not fail.
I expected this.mapper.save to fail with an exception.
Q: How can I make it fail on the condition?
I also checked this one, where they suggest adding an auxiliary table (a transaction-state table), because it seems the saveExpression only works against the primary/partition key. If so, I'm not sure why that limitation exists. In any case, if it is the primary key, one cannot create a duplicate item with the same primary key anyway, so why have conditions in the first place? A third table is too much. Why isn't there simply an NE condition on whatever field I want to use? I could create an index for this field instead of being limited to the primary key. That's what I mean.
UPDATE:
My table mapping code:
@Data // I use Lombok, which generates the getters and setters.
@DynamoDBTable(tableName = "MyTable")
public class MyTable {

    @DynamoDBHashKey(attributeName = "myTableID")
    @DynamoDBAutoGeneratedKey
    private String myTableID;

    @DynamoDBAttribute(attributeName = "referenceId")
    private String referenceId;

    @DynamoDBAttribute(attributeName = "startTime")
    private String startTime;

    @DynamoDBAttribute(attributeName = "endTime")
    private String endTime;

    ...
}
Correct me if I'm wrong, but from https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/dynamodb-dg.pdf:
"Conditional Writes: By default, the DynamoDB write operations (PutItem, UpdateItem, DeleteItem) are unconditional: each of these operations will overwrite an existing item that has the specified primary key."
"the specified primary key" - that makes me think that the conditional write works ONLY with primary keys.
--
Also, there is an attempt to use transactional reads/writes against the DB. There is a library, but it does not even have a Maven repo: https://github.com/awslabs/dynamodb-transactions
An alternative seems to be to use a third, transaction-state table with primary keys that tell you whether you are OK to read or write to the main table (ugly), as answered here: DynamoDBMapper save item only if unique
Another alternative, I guess (by design), is to model your tables so that the primary key is your business key, and then use it for conditional writes (see the sketch below).
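For what it's worth, if the table were remodeled so that referenceId is the @DynamoDBHashKey (i.e. the business key), a hedged sketch of such a conditional save with DynamoDBMapper might look like this; the helper name saveIfReferenceIdAbsent is purely illustrative:

import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBMapper;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBSaveExpression;
import com.amazonaws.services.dynamodbv2.model.ConditionalCheckFailedException;
import com.amazonaws.services.dynamodbv2.model.ExpectedAttributeValue;

// Saves newItemRecord only if no item with the same referenceId (the hash key in this
// remodeled table) exists yet; otherwise DynamoDB rejects the write.
static void saveIfReferenceIdAbsent(DynamoDBMapper mapper, MyTable newItemRecord) {
    DynamoDBSaveExpression ifAbsent = new DynamoDBSaveExpression()
            .withExpectedEntry("referenceId", new ExpectedAttributeValue(false)); // attribute must not exist
    try {
        mapper.save(newItemRecord, ifAbsent);
    } catch (ConditionalCheckFailedException e) {
        // An item with this referenceId already exists: surface it instead of overwriting.
        throw new IllegalStateException("referenceId already taken", e);
    }
}

The check is evaluated against the single item addressed by that key, which is why it only behaves as a uniqueness guard when the unique value is the key itself.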
--
Another option: use Aurora :)
--
Another option (still investigating): https://aws.amazon.com/blogs/database/building-distributed-locks-with-the-dynamodb-lock-client/ - I do not like this one either, because it could create timeouts for others who want to create new items in this table.
--
Another option: live with it and let duplicates happen at item creation (not for the primary key, of course), then take care of them as part of "garbage collection". It depends on the scenario.
So I am using Postgres and Hibernate 4.2.2, with an entity like this:
@Entity(name = "Users")
@Check(constraints = "email ~* '^[A-Za-z0-9._%-]+@[A-Za-z0-9.-]+[.][A-Za-z]+$'")
@DynamicInsert
public class Users {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    @Column(name = "id_user", unique = true)
    @Index(name = "user_pk")
    private Integer idUser;
Hibernate still inserts an id that is already in the table, instead of leaving it empty for the database to fill in. Hibernate also assigns ids from its cache without checking the database for the latest id.
How can I leave the id blank and let the database assign it?
At first I thought it was because I was using int, which defaults to 0, but even when using the wrapper object Hibernate just forces an id from its cache.
So my goal is to let the database fill in the ids instead of Hibernate, or at least have Hibernate check the database for the latest id before filling it in.
So the error I was getting was: Caused by: org.postgresql.util.PSQLException: ERROR: duplicate key value violates unique constraint "users_pkey" Detail: Key (id_user)=(1) already exists.
It wasn't caused by Hibernate or its caching, but by an import of data at database creation, where I inserted rows with explicit ids, e.g. INSERT INTO users(id_user,email,password,tag) VALUES (1,'a@b.c','***','Adpleydu');
The sequence used for id generation wasn't updated, so I got the same error even when inserting with plain SQL via the console.
Seeding the data is the problem. However, you can still seed with plain SQL and have the sequence "keep up".
1) Make sure your primary key is of type SERIAL.
CREATE TABLE table_name(
    id SERIAL PRIMARY KEY
);
2) Add this setval line to ensure the sequence is updated:
select setval('table_name_id_seq',COALESCE((select max(id) + 1 from table_name), 1));
Reference:
https://www.postgresqltutorial.com/postgresql-serial/
I have a GAE project written in Java and I have some thoughts about the HRD and a problem that I'm not sure how to solve.
Basically I have users in my system. A user consists of a userid, a username, an email and a password. Each time I create a new user, I want to check that there isn't already a user with the same userid (should never happen), username or email.
The userid is the key, so I think that doing a get with this will be consistent. However, when I do a query (and use a filter) to find possible users with the same username or email, I can't be sure that the results are consistent. So if someone has created a user with the same username or email a couple of seconds ago, I might not find it with my query. I understand that ancestors are used to work around this problem, but what if I don't have an ancestor to use for the query? The user does not have a parent.
I'd be happy to hear your thoughts on this, and what is considered to be best practice in situations like these. I'm using Objectify for GAE if that changes anything.
I wouldn't recommend using email or any other natural key for your User entity. Users change their email addresses and you don't want to end up rewriting all the foreign key references in your database whenever someone changes their email.
Here's a short blurb on how I solve this issue:
https://groups.google.com/d/msg/google-appengine/NdUAY0crVjg/3fJX3Gn3cOYJ
Create a separate EmailLookup entity whose @Id is the normalized form of an email address (I just lowercase everything - technically incorrect, but it saves a lot of pain when users accidentally capitalize Joe@example.com). My EmailLookup looks like this:
@Entity(name = "Email")
public class EmailLookup {
    /** Use this method to normalize email addresses for lookup */
    public static String normalize(String email) {
        return email.toLowerCase();
    }

    @Id String email;
    @Index long personId;

    private EmailLookup() {} // Objectify needs a no-arg constructor

    public EmailLookup(String email, long personId) {
        this.email = normalize(email);
        this.personId = personId;
    }
}
There is also a (non-normalized) email field in my User entity, which I use when sending outbound emails (preserving case just in case it matters for someone). When someone creates an account with a particular email, I load/create the EmailLookup and the User entities by key in an XG transaction, roughly as sketched below. This guarantees that any individual email address will be unique.
The same strategy applies to any other kind of unique value: facebook id, username, etc.
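As a rough illustration of that flow (hedged: Objectify 5-style API; the User entity, its constructor, its getId() accessor and the in-scope email variable are assumptions not shown in the post), the registration step could look like:

import static com.googlecode.objectify.ObjectifyService.ofy;

// Create the User and its EmailLookup atomically; the transaction rejects the whole
// registration if a lookup entity for this normalized email already exists.
User created = ofy().transact(() -> {
    String normalized = EmailLookup.normalize(email);
    if (ofy().load().type(EmailLookup.class).id(normalized).now() != null) {
        throw new IllegalStateException("email already registered: " + normalized);
    }
    User user = new User(email);                 // assumed constructor
    ofy().save().entity(user).now();             // .now() forces the id to be allocated
    ofy().save().entity(new EmailLookup(email, user.getId())).now();
    return user;
});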
A way around the HRD's eventual consistency is to use get instead of query. To be able to do this you need to generate natural IDs, e.g. IDs built from the data you receive in the request: email and username.
Since get is strongly consistent in the HRD, you will be able to reliably check whether a user already exists.
For example, a readable natural ID would be:
String naturalUserId = userEmail + "-" + userName;
Note: in practice emails are unique, so the email is a good natural ID on its own. No need to add a made-up username to it.
You may also enable cross-group transactions (see https://developers.google.com/appengine/docs/java/datastore/overview#Cross_Group_Transactions) and then in one transaction look for the user and create a new one, if that helps.
I recommend avoiding an indexed field and a query unless you have other uses for it. Here is what I have done before (in Python), using key_name (since entity ids need to be ints). It's easy to use either the key_name or the id for other entities that need to link to the user:
username = self.request.get('username')
usernameLower = username.lower()
rec = user.get_by_key_name(usernameLower)
if rec is None:
    U = user(
        key_name=usernameLower,
        username=username,
        # etc...
    )
    U.put()
else:
    self.response.out.write(yourMessageHere)
I am using Google App Engine with the Datastore interface.
Whenever I try to update an entity, a whole new entity is created instead, despite the fact that I'm positive I am saving the same entity, meaning it has the same key for sure.
This is my code:
Key key = KeyFactory.createKey("user", Long.parseLong(ID));
DatastoreService datastore = DatastoreServiceFactory.getDatastoreService();
Entity entity = new Entity("user", key);
entity.setProperty // ...whatever, updating the properties
datastore.put(entity); // by putting an entity it's supposed to either create a new one
                       // if none exists, or update the entity if it already exists
I am sure that the key is the same during all updates as is confirmed in my admin console:
id=3001 600643316
id=3002 600643316
id=3003 600643316
a bunch of entities with the same key (600643316) are created.
The datastore only lets the app create a new entity with a String key name, not a numeric ID. Numeric IDs are system-assigned IDs. If the Key has a numeric ID but not a String key name, then the datastore will ignore it and replace it with a system-assigned numeric ID.
In your example, if ID is a string, then you can just remove the Long.parseLong() bit, or convert it back to a String. KeyFactory.createKey(String kind, String name) creates a Key with a key name.
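For instance, keeping ID as a String key name, a hedged sketch of an update that re-puts under the same key could look like this (the property name lastSeen is purely illustrative):

import com.google.appengine.api.datastore.*;
import java.util.Date;

DatastoreService datastore = DatastoreServiceFactory.getDatastoreService();
Key key = KeyFactory.createKey("user", ID);    // ID stays a String key name
Entity entity;
try {
    entity = datastore.get(key);               // the entity already exists: update it
} catch (EntityNotFoundException e) {
    entity = new Entity(key);                  // first write: create it under that key name
}
entity.setProperty("lastSeen", new Date());    // illustrative property
datastore.put(entity);                         // same key name, so this overwrites in place

Because the key is built from a string, put() reliably targets the same entity on every call.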
So it seems Dan is correct, and this is the correct way to do it. As explained in Google's guides, if you want your app to build keys from unique values that you create, you need to use strings:
"You specify whether an entity ought to use an app-assigned key name string or a system-assigned numeric ID as its identifier when you create the object. To set a key name, provide it as the second argument to the Entity constructor:
Entity employee = new Entity("Employee", "asalieri");"
It seems you're correct; in their example the second argument is indeed a string. – user1032663
I am reading the docs for Key generation in app engine. I'm not sure what effect using a simple String key has over a real Key. For example, when my users sign up, they must supply a unique username:
class User {
    /** Key type = unencoded string. */
    @PrimaryKey
    private String name;
}
Now, if I understand the docs correctly, I should still be able to generate named keys and entity groups using this, right?
// Find an instance of this entity:
User user = pm.getObjectById(User.class, "myusername");

// Create a new obj and put it in the same entity group:
Key key = new KeyFactory.Builder(
        User.class.getSimpleName(), "myusername")
        .addChild(Goat.class.getSimpleName(), "baa").getKey();
Goat goat = new Goat();
goat.setKey(key);
pm.makePersistent(goat);
the Goat instance should now be in the same entity group as that User, right? I mean there's no problem with leaving the User's primary key as just the raw String?
Is there a performance benefit to using a Key though? Should I update to:
class User {
    /** Key type = unencoded string. */
    @PrimaryKey
    private Key key;
}
// Generate like:
Key key = KeyFactory.createKey(
User.class.getSimpleName(),
"myusername");
user.setKey(key);
It's almost the same thing; I'd still just be generating the Key from the unique username anyway.
Thanks
When you specify a string key as you are in your example, you're specifying a key name (see the docs). As such, you shouldn't be using the KeyFactory - simply set the key field as 'myusername'.
There's no performance difference between the two options, though: Internally they are stored identically; the key name is just easier to use if you're not using parent entities for this model.
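A tiny hedged sketch of what this answer describes (the setName accessor is an assumption; with an unencoded String @PrimaryKey, the value you assign becomes the datastore key name):

// The String primary key value itself is the key name; no KeyFactory needed here.
User user = new User();
user.setName("myusername");   // assumed setter for the @PrimaryKey field
pm.makePersistent(user);

// Looking it up later uses the same plain string:
User loaded = pm.getObjectById(User.class, "myusername");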