I am trying to build a txt-based database system and I'm stuck. What I want to do is specify the location of a piece of data and then update it. I separate the data with the "|" character.
The structure is like this:
ID |Name |Job |Phone Number
---+-----+--------+------------
55 |John |Plumber |555444
The ID identifies which row the data is in, and the name identifies the column.
I want to write a function like this:
data_Update(filename, id, "Name", "Bob Ross");
You could do it in the following manner:
Read the file and, for each line of text, add an entry to your HashMap:
Map<Integer, Map<String,Object>> personMap
where the key represents the ID of the person, and the value represents a mapping from field name to field value for that entry.
In your data_Update method, locate the person by ID and update the field, e.g.
personMap.get(Id).put(fieldname,value)
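A minimal sketch of that idea, assuming the file keeps the "|" layout shown above and all values are stored as strings (the class name and file name below are made up for illustration):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class TxtDb {

    // Reads the file into a map: id -> (column name -> value).
    static Map<Integer, Map<String, String>> load(String filename) throws IOException {
        List<String> lines = Files.readAllLines(Paths.get(filename));
        String[] header = lines.get(0).split("\\|");            // "ID |Name |Job |Phone Number"
        Map<Integer, Map<String, String>> personMap = new LinkedHashMap<>();
        for (String line : lines.subList(2, lines.size())) {    // skip the header and the "---+---" separator
            String[] fields = line.split("\\|");
            Map<String, String> row = new LinkedHashMap<>();
            for (int i = 0; i < header.length; i++) {
                row.put(header[i].trim(), fields[i].trim());
            }
            personMap.put(Integer.parseInt(row.get("ID")), row);
        }
        return personMap;
    }

    // Updates one field of one row and rewrites the whole file.
    static void data_Update(String filename, int id, String column, String value) throws IOException {
        Map<Integer, Map<String, String>> personMap = load(filename);
        personMap.get(id).put(column, value);
        StringBuilder sb = new StringBuilder("ID |Name |Job |Phone Number\n---+-----+--------+------------\n");
        for (Map<String, String> row : personMap.values()) {
            sb.append(String.join(" |", row.values())).append('\n');
        }
        Files.write(Paths.get(filename), sb.toString().getBytes());
    }
}

A call like data_Update("people.txt", 55, "Name", "Bob Ross") then matches the signature from the question.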
I want to add a new key to a map. memberFees is a field of a document under a collection named Faculty.
I want to store only the key of each member fee inside the memberFees map.
Here is an example of my memberFees map:
I added those keys to the example manually.
This is what I did:
writeBatches.get(batchIndex).update(db.collection("Faculty")
.document(faculty.getId()), "memberFees/" + memberFeeId, true);
This code throws an error:
java.lang.IllegalArgumentException: Invalid document reference. Document references must have an even number of segments, but Faculty/7DEj7mlTPBf3cVSCtQO3/memberFees has 3
at com.google.firebase.firestore.DocumentReference.forPath(DocumentReference.java:81)
When updating a nested field, use . to separate the field names. Your code uses /, which Firestore interprets as a subcollection.
So:
writeBatches.get(batchIndex).update(db.collection("Faculty")
.document(faculty.getId()), "memberFees." + memberFeeId, true);
// 👆 "." instead of "/"
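As a side note, if a member fee ID could itself contain a ".", the dotted string would be split at the wrong place; a small sketch using FieldPath (assuming the Android Firestore SDK) avoids that:
// FieldPath.of treats each argument as a single segment, so no escaping of "." is needed
writeBatches.get(batchIndex).update(
        db.collection("Faculty").document(faculty.getId()),
        com.google.firebase.firestore.FieldPath.of("memberFees", memberFeeId),
        true);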
I have a requirement where I need to read a file generated by another application. The file has 201 numeric column names: -10.0, -9.9, -9.8, -9.7, ..., 0, ..., +9.7, +9.8, +9.9, +10.0, so in total the file has 201 columns. I already read many files through Flink, but those files have string column names, and I create a model object whose attributes are the column names available in the file, as below:
DataSet<Person> csvInput = env.readCsvFile("file:///path/to/my/textfile")
    .pojoType(Person.class, "name", "age", "zipcode");
The code above will read the file, and the Person objects will be populated with the values available in the file.
I am facing a challenge with the new requirement, where the file's column names are numeric; in Java I cannot create a variable whose name is a numeric value with a decimal point, like -10.0.
For example, private String -10.0; is not allowed in Java.
I am seeking a solution; could anyone please help me out here?
I have an object in a managed bean, and it includes an address. I want to display a home number input text, a street name input text, etc., in the user interface, but the database only has a single address column. How do I create an input text for each of the parts aggregated in the address (when everything is submitted, all the values should be stored in the address column in the database)? How do I implement this?
I don't want to change my database or use an input mask.
Thanks.
I am using JSF, PrimeFaces, and Java EE.
When the user clicks submit, store the home number input text, street name input text, etc., in JSON format in the address field; then, when you load the page, parse that JSON and take each value by its key.
{
  "homenumber": "xyz",
  "name": "abc",
  "street": "123"
}
When you load the page, you will have the entity object; take the address column value, parse it as below, and assign each value to the corresponding managed bean field.
ObjectMapper mapper = new ObjectMapper();
String json = entity.getAddress();
JsonNode actualObj = mapper.readTree(json);
// asText() returns the raw text value without the surrounding JSON quotes
beanHomeNumber = actualObj.get("homenumber").asText();
beanStreet = actualObj.get("street").asText();
beanName = actualObj.get("name").asText();
Here beanHomeNumber, beanStreet, and beanName are managed bean fields.
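For the submit side, a minimal sketch with Jackson (the bean field and entity setter names are made up for illustration):

ObjectMapper mapper = new ObjectMapper();
// com.fasterxml.jackson.databind.node.ObjectNode
ObjectNode addressJson = mapper.createObjectNode();
addressJson.put("homenumber", beanHomeNumber);
addressJson.put("street", beanStreet);
addressJson.put("name", beanName);
// store the whole JSON string in the single address column
entity.setAddress(addressJson.toString());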
I want to create a file in this format:
device1,t1,t2,t3,t4,t5
device2,t1,t2,t3,t4,t5
device3,t6,t7,t8,t9,t10
device4,t6,t7,t8,t9,t10
Here, t1, t2, ..., tn are time stamps.
Every value tn is generated by one execution of a JAR file, and the corresponding device name is generated along with it.
I am able to generate a format like this using the JAR file now:
For example:
Current format in csv file:
device1,t1,device2,t2,device2,t3,device1,t4,device2,t5,device2,t6,device1,t7,device2,t8
I want it in this format in the CSV file:
device1-t1,t4,t7
device2-t2,t3,t5,t6,t8
So here, I have to put the time stamps belonging to a specific device on the right-hand side.
Please let me know how I can sort this in Java.
I will answer this as per my understanding of your question.
What you can do is create a HashMap that stores the device name as the key.
Then, for the values, create a sorted collection.
Feed your timestamps into this sorted collection and keep updating the HashMap entry for the corresponding device name key.
As you update the sorted timestamp collections, the values will automatically be kept in sorted order.
Your HashMap will look like:
key : value (collection)
device1 : t1, t4, t7
device2 : t2, t5, t8 (add more timestamps to the end of this collection)
Then write this HashMap's data to the CSV file.
That is what to do on the Java side.
If you want the CSV to re-sort itself whenever a new timestamp is added for a device, I don't think you can do that from Java; you would have to apply some logic within the CSV file once all your data has been added.
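A minimal sketch of that approach, assuming each line of the flat file contains device,timestamp pairs one after another, as in the current format shown above (the file names are made up for illustration; it needs java.io.* and java.util.*):

// group timestamps by device; a TreeMap also keeps the device keys sorted
Map<String, List<String>> byDevice = new TreeMap<>();
try (BufferedReader in = new BufferedReader(new FileReader("flat.csv"));
     PrintWriter out = new PrintWriter(new FileWriter("grouped.csv"))) {
    String line;
    while ((line = in.readLine()) != null) {
        String[] fields = line.split(",");
        // fields alternate device,timestamp,device,timestamp,...
        for (int i = 0; i + 1 < fields.length; i += 2) {
            byDevice.computeIfAbsent(fields[i], k -> new ArrayList<>()).add(fields[i + 1]);
        }
    }
    for (Map.Entry<String, List<String>> e : byDevice.entrySet()) {
        out.println(e.getKey() + "-" + String.join(",", e.getValue())); // e.g. device1-t1,t4,t7
    }
}

The ArrayList keeps each device's timestamps in the order they were encountered; if they have to be sorted by value instead, a sorted collection such as a TreeSet can be used for the values, as described above.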
This is the solution:
I got output as:
Entire map:{Device1=[[t8], t9], Device2=[[[[[t2], t3], t5], t7], t10]}
// Assumes each line of results.csv looks like: device,timestamp,data
Map<String, String> tree = new TreeMap<>();
List<String> values = new ArrayList<>();
BufferedReader reader = new BufferedReader(new FileReader("results.csv"));
String eachline;
while ((eachline = reader.readLine()) != null)
{
    String[] fields = eachline.split(",");
    if (Integer.parseInt(fields[2]) == 0) // data == 0
    {
        if (tree.get(fields[0]) != null) // returns null if this key is not present
        {
            values.add(tree.get(fields[0])); // carry over the previous value stored for this device
        }
        values.add(fields[1]); // add the new timestamp after the previous value
        tree.put(fields[0], values.toString()); // write the list's string form back to the map
        values.clear();
    }
}
reader.close();
System.out.println("Entire map:" + tree);
Let's say I JOIN two relations like:
-- part looks like:
-- 1,5.3
-- 2,4.9
-- 3,4.9
-- original looks like:
-- 1,Anju,3.6,IT,A,1.6,0.3
-- 2,Remya,3.3,EEE,B,1.6,0.3
-- 3,akhila,3.3,IT,C,1.3,0.3
jnd = JOIN part BY $0, original BY $0;
The output will be:
1,5.3,1,Anju,3.6,IT,A,1.6,0.3
2,4.9,2,Remya,3.3,EEE,B,1.6,0.3
3,4.9,3,akhila,3.3,IT,C,1.3,0.3
Notice that $0 is shown twice in each tuple, e.g.:
1,5.3,1,Anju,3.6,IT,A,1.6,0.3
^ ^
|-----|
I can remove the duplicate key manually by doing:
jnd = foreach jnd generate $0,$1,$3,$4 ..;
Is there a way to remove this dynamically, i.e. remove the duplicate join key?
I have faced the same kind of issue while working on data set joins and other data processing techniques where column names get repeated in the output.
So I worked on a UDF that removes the duplicate columns by using the schema name of each field, retaining the data from the first occurrence of each unique column.
Prerequisites:
The names of all the fields should be present.
You need to download this UDF file and build it into a jar in order to use it.
UDF file location from GitHub :
GitHub UDF Java File Location
We will take the above question as an example.
--Data Set A contains this data
-- 1,5.3
-- 2,4.9
-- 3,4.9
--Data Set B contains this data
-- 1,Anju,3.6,IT,A,1.6,0.3
-- 2,Remya,3.3,EEE,B,1.6,0.3
-- 3,Akhila,3.3,IT,C,1.3,0.3
PIG Script:
REGISTER /home/user/
DSA = LOAD '/home/user/DSALOC' AS (ROLLNO:int,CGPA:float);
DSB = LOAD '/home/user/DSBLOC' AS (ROLLNO:int,NAME:chararray,SUB1:float,BRANCH:chararray,GRADE:chararray,SUB2:float);
JOINOP = JOIN DSA BY ROLLNO,DSB BY ROLLNO;
After joining, we will get the column names as:
DSA::ROLLNO:int,DSA::CGPA:float,DSB::ROLLNO:int,DSB::NAME:chararray,DSB::SUB1:float,DSB::BRANCH:chararray,DSB::GRADE:chararray,DSB::SUB2:float
To reduce it to:
DSA::ROLLNO:int,DSA::CGPA:float,DSB::NAME:chararray,DSB::SUB1:float,DSB::BRANCH:chararray,DSB::GRADE:chararray,DSB::SUB2:float
(that is, with DSB::ROLLNO:int removed), we need to use the UDF as:
JOINOP_NODUPLICATES = FOREACH JOINOP GENERATE FLATTEN(org.imagine.REMOVEDUPLICATECOLUMNS(*));
Where org.imagine.REMOVEDUPLICATECOLUMNS is the UDF.
This UDF removes duplicate columns by using the name in the schema, so DSA::ROLLNO:int is retained and DSB::ROLLNO:int is removed from the dataset.