The question I am going to ask is an old one, and I think it has been asked on SO 5-10 times already, but my situation is different. Please read my problem before marking it as a duplicate.
I am importing a CSV sheet containing 10K records into my application. My logic works in the following manner:
(1) Validate & import the sheet
(2) Save to the database if the record does not exist
Step 2 is done for each and every record of the sheet. In step 2 I have to generate a UUID to identify a particular record later. My first solution was:
// this might be unique in some cases
String id = UUID.randomUUID().toString();
but I found that it does not generate a unique id in every case. For example,
if I import 10 sheets one by one, each with different records in it, then on all 10 imports I get
duplicate key errors from the database at least 4,000 times per import-and-save operation,
which means that out of 10,000 key generations only about 6,000 ids are unique.
So I then generate an alphanumeric code of length 6, something like
eSv3h7
and append it to the previously generated id, which gives the following id:
d545f2b2-63ab-4703-89b0-f2f8eca02154-eSv3h7
After testing, there is still a problem of id duplication.
I also tried several combinations mentioned here and on other sites, but the same id duplication problem remains.
Note that this already occurs when saving only 10K records in a loop; I actually need to import a sheet that has 8 million records in it.
So how can I solve my problem of generating a unique id in my particular case?
Update 1 - based on all the comments
Try this at your end:
Loop from 1 to 10,000.
Generate a UUID in each iteration.
Store it somewhere, e.g. in a simple text file.
Then write a simple program to find duplicates among them. If you do not find a single duplicate on the first attempt, repeat all the above steps again and again, and I am sure you will find duplicates.
In the past I was also a strong believer that UUID never generates duplicates; share your result of the above test with me.
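For reference, a minimal sketch of that test (the class name is illustrative only, and a HashSet is used instead of a text file):

import java.util.HashSet;
import java.util.Set;
import java.util.UUID;

public class UuidDuplicateTest {
    public static void main(String[] args) {
        Set<String> seen = new HashSet<>();
        int duplicates = 0;
        for (int i = 0; i < 10000; i++) {
            String id = UUID.randomUUID().toString();
            // add() returns false if the id was already in the set
            if (!seen.add(id)) {
                duplicates++;
            }
        }
        System.out.println("Duplicates found: " + duplicates);
    }
}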
Update 2 - Code
This is the method that is called for each record of the sheet to be saved, from the caller method's loop.
@Override
public void preSynchronizedServiceExecution(ServiceData sData,
        ValueObject valueObject) throws BlfException {
    PropertyVO pVO = (PropertyVO) valueObject;
    ArrayList<CountyAuctionPropertyVO> capList = pVO.getCountyAuctionPropertyList();
    for (CountyAuctionPropertyVO caVO : capList) {
        TadFrameworkUtil.processValueObjectKeyProperty(caVO, true);
        TadFrameworkUtil.processValueObjectKeyProperty(caVO.getPropertyLastOwner(), true);
        TadFrameworkUtil.processValueObjectKeyProperty(caVO.getPropertyLastOwner().getAdd(), true);
    }
    ArrayList<PropertyAminitiesVO> amList = pVO.getPropertyAminitiesList();
    for (PropertyAminitiesVO pamVO : amList) {
        TadFrameworkUtil.processValueObjectKeyProperty(pamVO, true);
    }
    ArrayList<PropertyAttributesVO> atList = pVO.getPropertyAttributesList();
    for (PropertyAttributesVO patVO : atList) {
        TadFrameworkUtil.processValueObjectKeyProperty(patVO, true);
    }
    TadFrameworkUtil.processValueObjectKeyProperty(pVO, true);
    TadFrameworkUtil.processValueObjectKeyProperty(pVO.getSiteAdd(), true);
}
The following is the id generation method:
public static String generateUUID() throws BlfException {
    // this might be unique in some cases
    String id = UUID.randomUUID().toString();
    // introduce a custom random string, a mix of upper and lower case
    // letters, which is 6 characters long, and append it to the
    // generated GUID
    String rs = randomString(6);
    id = id.concat("-").concat(rs);
    return id;
}
Update 3 (Method added)
public static void processValueObjectKeyProperty(ValueObject valueObject,
        boolean create) throws BlfException {
    String key = (String) BlfConverter.getKey(valueObject);
    if (!StringUtility.isStringNonEmpty(key)) {
        throw new BlfException(valueObject.getObjectName()
                + "- key property does not exist.");
    }
    if (create) {
        String id = generateUUID();
        valueObject.setProperty(key, id);
    } else {
        String existingId = valueObject.getProperty(key);
        if (!StringUtility.isStringNonEmpty(existingId)) {
            String id = generateUUID();
            valueObject.setProperty(key, id);
        }
    }
}
The randomString method is just a simple two-line method that generates an alphanumeric random string of length 6.
Please ask me if you need anything more and I will post it here.
Update 4 (Sample generated UUIDs)
d545f2b2-63ab-4703-89b0-f2f8eca02154-eSv3h7
6f06fa28-6f36-4ed4-926b-9fef86d002b3-DZ2LaE
20142d05-f456-4d72-b845-b6819443b480-xzypQr
67b2a353-e7b4-4245-90a0-e9fca8644713-AgSQZm
8213b275-2cb1-4d37-aff0-316a47e5b780-vMIwv9
and I am getting accurate results from the database when I fetch these records.
Thanks
Thanks to all the users who seriously studied my question and spent some time helping me solve it.
I found that the error was in the database layer of the business logic foundation.
One of the objects needed to be updated, but it was instead created using a previously
existing id, so I was getting the duplicate primary key error.
I developed a unit test for id generation and tested UUID generation for more than
one billion keys; the generated ids were unique in every case.
Thanks again to everyone.
Related
I need to generate a number of length 12 digits, say variable finalId.
Out of those 12 digits, 5 digits are to be taken from another value, say partialId1.
So finalId = partialId1 (5 digits) + partialId2 (7 digits).
I need to generate partialId2 randomly, for which I can use Java's Random class.
Finally, I have to insert this finalId into the database as a primary key.
To make sure that the newly generated finalId does not already exist in the Oracle database, I need to query the database as well.
Is there a more efficient way than the one mentioned above to generate the id in Java and check the database before persisting it?
In general, making one id from another has issues because you may be clumping together two things that would be easier to keep separate. Specifically, you may be trying to squeeze a foreign key into a primary key when you could just use two keys.
In any case, if you really want to build a semi-random primary key from a stub, then I would suggest doing it bitwise, because that way it will be easy to extract the original id in both SQL and Java.
As has been mentioned, if you generate a UUID then you don't really need to worry about checking if it's already used, otherwise you probably will want to.
That said the code for making your ids could look like this:
public class IdGenerator {

    private SecureRandom random = new SecureRandom();
    private long preservedMask;
    private long randomMask;

    // preservedBits is the number of least significant bits to keep from the original id
    public void init(int preservedBits) {
        this.preservedMask = (1L << preservedBits) - 1;
        this.randomMask = ~this.preservedMask;
    }

    // fills the high bits with random data while keeping the preserved low bits
    public long makeIdFrom(long preserved) {
        return (this.random.nextLong() & this.randomMask) | (preserved & this.preservedMask);
    }

    // same idea, but embedded in the least significant half of a UUID
    public UUID makeUuidFrom(long preserved) {
        UUID uuid = UUID.randomUUID();
        return new UUID(uuid.getMostSignificantBits(),
                (uuid.getLeastSignificantBits() & this.randomMask) | (preserved & this.preservedMask));
    }

    // two ids "match" if they were built from the same preserved stub
    public boolean idsMatch(long id1, long id2) {
        return (id1 & this.preservedMask) == (id2 & this.preservedMask);
    }
}
Essentially this preserves a number of least significant bits from your original id; you specify that number when you call init.
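For illustration, a usage sketch could look like the following (the bit count and stub value are arbitrary assumptions, not requirements):

IdGenerator generator = new IdGenerator();
generator.init(17);                                     // keep the lowest 17 bits (enough room for a 5-digit stub)
long finalId = generator.makeIdFrom(54321L);            // 54321 plays the role of partialId1
boolean sameStub = generator.idsMatch(finalId, 54321L); // true: the stub survives in the low bits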
I would prefer the Java UUID.
You can get a random id from a UUID using the code below:
String id = UUID.randomUUID().toString().substring(0,7);
System.out.println("id " + id);
You can then append it to your other partial id and have a unique key or primary key constraint in the DB, depending on the column where you want to store it, as suggested by @Erwin.
Note: we have done this in the past for many primary keys and never had a case where the ids collided.
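Putting the two ideas together, a hedged sketch of building the 12-digit finalId and checking Oracle before persisting could look like this (the orders table, final_id column, and class name are placeholders, not names from the question; a unique constraint on the column is still advisable because of races between the check and the insert):

import java.security.SecureRandom;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class FinalIdGenerator {

    private static final SecureRandom RANDOM = new SecureRandom();

    // partialId1 is assumed to be the fixed 5-digit stub from the question
    public static String generateFinalId(Connection conn, String partialId1) throws SQLException {
        while (true) {
            // 7 random digits, zero-padded
            String partialId2 = String.format("%07d", RANDOM.nextInt(10_000_000));
            String finalId = partialId1 + partialId2;
            try (PreparedStatement ps = conn.prepareStatement(
                    "SELECT 1 FROM orders WHERE final_id = ?")) {   // table/column are placeholders
                ps.setString(1, finalId);
                try (ResultSet rs = ps.executeQuery()) {
                    if (!rs.next()) {
                        return finalId;   // not present yet, safe to use
                    }
                }
            }
        }
    }
}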
I have a local GlassFish server and 2 clients: one is generated by the server (JSF), and the other is not on the server (a terminal client written in Java).
Everything was working fine until now. Every time I added a new row to a table, my auto-incrementing primary key (a numeric id) was correctly raised by 1 and the row was added.
Now, every time I freshly start a client and add a data row to a table via that client (it does not matter which one; both make the same mess), the id jumps by roughly 50 (e.g. the last id was 303, and after adding, the id of the new row in the table is 351). If my client has been running since the last addition and I add the next row, it works fine: the next id is 352. It continues without problems until I restart my client. When I add a new row after restarting the client, it does the same thing again: the id jumps by roughly 50 and then increases by 1.
Is this some kind of bug in the interaction between Java and MySQL? This never happened to me when I used PHP.
EDIT: For manipulating the database I am using ORM via EclipseLink.
This is what I do in my data layer (QuestionDAO.java) when adding a question:
public void add( Question q )
{
    em.persist(q);
}
This is what I do in my business layer (Facade.java) when calling a method from the data layer:
public String addQuestion( String text, String subject )
{
    Subject s = subjectDAO.find( subject );
    if( s != null )
    {
        Question q = new Question( text, s );
        questionDAO.add(q);
        return "Question successfully added";
    }
    else
        return "Subject does not exist";
}
Also here is a method from REST:
@POST
@Path("add")
public String add( @FormParam("text") String text, @FormParam("subject") String subject )
{
    String s = facade.addQuestion(text, subject);
    return s;
}
Annotations in my Entity class:
@Id
@GeneratedValue(strategy=GenerationType.AUTO)
@Basic(optional = false)
@Column(name = "question_number")
private Integer questionNumber;
I do not yet believe the mistake is in my code.
I have a feed containing posts that are currently ordered by "updatedAt". My original intention was to push posts to the top of the feed when they were last replied to (I do this by incrementing a "replyCount" field in each post when a user leaves a comment), not being fully aware that another field, "likeCount", is also updated when users "like" a post. I would still like to push recently replied-to posts to the top of the feed, but not at the expense of the weird UX behavior where liking a post also pushes it up. I'm not sure how to separate the two.
What should I do here? Can I add another column called "lastReplyCountTime" and then sort queries based on that? Maybe set lastReplyCountTime for all posts to the current time when they are saved to the database, and then only update that value when a post receives a reply?
String groupId = ParseUser.getCurrentUser().getString("groupId");
ParseQuery<ParseObject> query = new ParseQuery<>(ParseConstants.CLASS_POST);
query.whereContains(ParseConstants.KEY_GROUP_ID, groupId);
/*query.addDescendingOrder(ParseConstants.KEY_CREATED_AT);*/
query.orderByDescending("updatedAt");
query.findInBackground((posts, e) -> {
    if (mSwipeRefreshLayout.isRefreshing()) {
        mSwipeRefreshLayout.setRefreshing(false);
    }
    if (e == null) {
        // We found messages!
        mPosts = posts;
        String[] usernames = new String[mPosts.size()];
        int i = 0;
        for (ParseObject post : mPosts) {
            usernames[i] = post.getString(ParseConstants.KEY_SENDER_NAME);
            i++;
        }
        FeedAdapter adapter = new FeedAdapter(
                getListView().getContext(),
                mPosts);
        setListAdapter(adapter);
    }
});
}
You have 2 options:
Like you suggested, you can create another date property, let's call it sortedUpdatedAt, and update it with the current date each time you update the relevant values, as in the sketch after this answer.
If you still want to use updatedAt, you can wrap your like object in a separate Parse object (class). This class will be saved as a relation in the parent class, and then each time the user "likes" you can update only this class and not the whole object. This way the updatedAt of your parent object will not be changed.
I think option 1 is great since it's not complicated and you can do it very quickly.
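A minimal sketch of option 1, assuming a new sortedUpdatedAt column alongside the existing replyCount/likeCount fields (the class and method names are only for illustration):

import java.util.Date;
import com.parse.ParseObject;

public class PostInteractions {

    // Called when a user replies: bump the counter and the sort timestamp.
    public static void onReply(ParseObject post) {
        post.increment("replyCount");
        post.put("sortedUpdatedAt", new Date());   // only replies touch the sort field
        post.saveInBackground();
    }

    // Called when a user likes: the sort timestamp is deliberately left alone.
    public static void onLike(ParseObject post) {
        post.increment("likeCount");
        post.saveInBackground();
    }
}

The feed query would then order by the new field, e.g. query.orderByDescending("sortedUpdatedAt"); instead of "updatedAt".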
I am trying to add a filter to check for duplicate values that a user might input, but I am not sure where I am going wrong in my query.
My query does not enter the loop that checks whether the name already exists.
I am fairly new to Google Cloud. Could someone tell me how to fix my problem, or whether there is a better solution?
else if ( commandEls[0].equals( "add_director" ) ) {
String name = commandEls[1];
String gender = commandEls[2];
String date_of_birth = commandEls[3];
boolean duplicate = false;
//add a director record with the given fields to the datastore, don't forget to check for duplicates
Entity addDirectorEntity = new Entity("Director");
// check if the entity already exits
// if !duplicate add, else "Already exisits"
Query directorExists = new Query("Movies");
// Director Name is the primary key
directorExists.addFilter("directorName",Query.FilterOperator.EQUAL, name);
System.out.print(name);
PreparedQuery preparedDirectorQuery = datastore.prepare(directorExists);
System.out.print("outside");
for(Entity directorResult : preparedDirectorQuery.asIterable()){
// result already exists in the database
String dName = (String) directorResult.getProperty(name);
System.out.print(dName);
System.out.print("finish");
duplicate = true;
}
if(!duplicate){
addDirectorEntity.setProperty("directorName",name);
addDirectorEntity.setProperty("directorGender",gender);
addDirectorEntity.setProperty("directorDOB",date_of_birth);
try{
datastore.put(addDirectorEntity);
results = "Command executed successfully!";
}
catch(Exception e){
results = "Error";
}
}
else {
results = "Director already exists!";
}
}
Non-ancestor queries (like the one in your example) are eventually consistent, so they cannot reliably detect duplicate property values. Ancestor queries are fully consistent, but they require structuring your data using entity groups, and that comes at the cost of write throughput.
If the directorName property in your example is truly unique, you could use it as the name in the key of your Director entities. Then, when you are inserting a new Director entity, you can first check if it already exists (inside of a transaction).
There's no general, built-in way in Datastore to ensure the uniqueness of a property value. This related feature request contains discussion of some possible strategies for approximating a uniqueness constraint.
I'd also recommend reading up on queries and consistency in the Datastore.
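For illustration, a hedged sketch of the key-name approach with the low-level Datastore API might look like this (the class and method names are assumptions, not code from the question):

import com.google.appengine.api.datastore.DatastoreService;
import com.google.appengine.api.datastore.DatastoreServiceFactory;
import com.google.appengine.api.datastore.Entity;
import com.google.appengine.api.datastore.EntityNotFoundException;
import com.google.appengine.api.datastore.Key;
import com.google.appengine.api.datastore.KeyFactory;
import com.google.appengine.api.datastore.Transaction;

public class DirectorStore {

    private final DatastoreService datastore = DatastoreServiceFactory.getDatastoreService();

    public String addDirector(String name, String gender, String dateOfBirth) {
        // the director name is the key name, so duplicates map to the same key
        Key key = KeyFactory.createKey("Director", name);
        Transaction txn = datastore.beginTransaction();
        try {
            try {
                datastore.get(txn, key);          // strongly consistent lookup by key
                return "Director already exists!";
            } catch (EntityNotFoundException notFound) {
                Entity director = new Entity(key);
                director.setProperty("directorGender", gender);
                director.setProperty("directorDOB", dateOfBirth);
                datastore.put(txn, director);
                txn.commit();
                return "Command executed successfully!";
            }
        } finally {
            if (txn.isActive()) {
                txn.rollback();                   // covers the "already exists" path and any failure
            }
        }
    }
}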
That is a valid thing to do, but I figured out my problem:
I was creating an Entity for Director, whereas that should have been for Movies.
I have an object with 70 attributes. For ease of use I created 2 objects, a 'main' object and a 'details' object, with a 1:1 relationship based on an auto-generated integer ID. I had a SEARCH screen that allowed searching on any of the main attributes, for which I built Restriction objects from whatever the user typed in. What was nice was that I did this all by iterating through the fields and building criteria - I didn't need ugly code to specifically handle each of the 30 attributes.
Now they want to search on the details fields as well. My previous screen-field-iterating code works perfectly with no changes (the whole reason for making it 'generic'), however I cannot get the JOIN to work to query on the details fields.
class House {
    Integer houseID;
    String address;
    . . .
    HouseDetails houseDetails;
}
class HouseDetails {
    Integer houseID;
    String color;
    . . .
}
I tried to create an alias and add it to the criteria :
criteria.createAlias("houseDetails", "houseDetails");
but I get this error :
org.hibernate.QueryException: could not resolve property: color of: House
Here's the thing - I know this would work if I prefix my restrictions with the alias name, but I do NOT want to have to know which table (House or HouseDetails) the field comes from. That would ruin all the automatic looping code and create specific code for each field.
Since SQL can do this as long as the column names are unique :
select * from house, housedetails where house.houseID = housedetails.houseID
and color = 'blue';
I'm wondering how I can get this to work using Criteria?
As an aside, but related to this : Is there a way to perform something like Java's introspection on Hibernate HBM.XML mapping files? A number of times I've wanted to do this to solve problems but never found an answer. For the above problem, if I could easily find out which table contained each field, I could add the prefix to the Restriction. Something like this :
// Map of search keys (columns) to searching values
for ( String key : parms.keySet() ) {
String val = parms.get(key);
if ( HIBERNATE-SAYS-KEY-IS-FROM-DETAILS-TABLE ) {
key = "houseDetails." + key;
}
criteria.add(Restrictions.eq(key,val));
}
You can write a method to find the owning entity for a given column (property) name.
By using SessionFactory.getAllClassMetadata() you can get the mapping information for every mapped class. Once you have the ClassMetadata you can get all the property names. A demo method is shown below:
public String findTableName(String columnName)
{
    Map<String, ClassMetadata> classMetaData = sessionFactory.getAllClassMetadata();
    for (Entry<String, ClassMetadata> metaData : classMetaData.entrySet())
    {
        String[] propertyNames = metaData.getValue().getPropertyNames();
        for (String property : propertyNames)
        {
            if (property.equals(columnName))
            {
                // entity name + "." + property, e.g. "HouseDetails.color"
                return metaData.getKey() + "." + property;
            }
        }
    }
    return null; // no mapped entity declares this property
}
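A hedged sketch of how that could plug into the restriction-building loop from the question (assuming findTableName returns something like "HouseDetails.color", or null when the property is not mapped):

for (String key : parms.keySet()) {
    String val = parms.get(key);
    // findTableName returns "<entity name>.<property>"; the entity name is
    // typically the fully qualified class name, hence the contains() check
    String owner = findTableName(key);
    if (owner != null && owner.contains("HouseDetails")) {
        key = "houseDetails." + key;   // prefix with the alias created on the Criteria
    }
    criteria.add(Restrictions.eq(key, val));
}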
The alias mechanism in hibernate and the Criteria API is pretty well specified. I suggest going through the documentation a little a bit.
I think what you want is something like this:
Criteria criteria = session.createCriteria(House.class);
criteria.createAlias("houseDetails", "hd");
criteria.add(Restrictions.eq("hd.color", "red"));