Tried Importing Excel Data to MongoDB in the Following Document Format
[
  {
    "productId": "",
    "programeName": "",
    "programeThumbImageURL": "",
    "programeURL": "",
    "programEditors": ["editor1", "editor2"],
    "programChapters": [
      {
        "chapterName": "chapter1",
        "authorNames": ["authorName1", "authorName2"]
      },
      {
        "chapterName": "chapter2",
        "authorNames": ["authorName1", "authorName2"]
      },
      ...
    ]
  },
  ...
]
There are many products in the Excel file, and a chapterName can have multiple authors. The following is the code I tried executing; inserting the data works, but I couldn't merge the authorNames corresponding to a particular chapterName as shown above, so the programChapters array currently contains objects with duplicated chapterNames. The following code shows my experiment.
private static XSSFWorkbook myWorkBook;

public static void main(String[] args) throws IOException {
    String[] programs = {"programName1", "programName2", "programName3", "programName4", ...};
    @SuppressWarnings("deprecation")
    Mongo mongo = new Mongo("localhost", 27017);
    @SuppressWarnings("deprecation")
    DB db = mongo.getDB("dbName");
    DBCollection collection = db.getCollection("programsCollection");
    File myFile = new File("dsm_article_author_details.xlsx");
    FileInputStream fis = new FileInputStream(myFile);
    // Finds the workbook instance for the XLSX file
    myWorkBook = new XSSFWorkbook(fis);
    XSSFSheet mySheet = myWorkBook.getSheetAt(0);
    // Get an iterator over all the rows in the current sheet
    @SuppressWarnings("unused")
    Iterator<Row> rowIterator = mySheet.iterator();
    // Traversing over each row of the XLSX file
    for (String program : programs) {
        String programName = "";
        String chapterName = "";
        String authorName = "";
        BasicDBObject product = new BasicDBObject();
        BasicDBList programChaptersList = new BasicDBList();
        // For each row, create a chapters object here
        for (int i = 0; i <= mySheet.getLastRowNum(); i++) { // starts at the first Excel row
            Row row = (Row) mySheet.getRow(i);
            System.out.println("Row is :" + row.getRowNum());
            BasicDBObject programChapters = new BasicDBObject();
            if (row.getCell(0).getCellType() == Cell.CELL_TYPE_STRING) {
                programName = row.getCell(0).getStringCellValue();
                System.out.println("programName : " + programName);
            }
            if (row.getCell(1).getCellType() == Cell.CELL_TYPE_STRING) {
                chapterName = row.getCell(1).getStringCellValue().replaceAll("\n", "");
                System.out.println("chapterName : " + chapterName);
            }
            if (row.getCell(2).getCellType() == Cell.CELL_TYPE_STRING) {
                authorName = row.getCell(2).getStringCellValue();
                System.out.println("authorName : " + authorName);
            }
            List<String> authors = new ArrayList<String>();
            programChapters.put("chapterName", chapterName);
            authors.add(authorName);
            programChapters.put("authorName", authors);
            if (programName.trim().equals(program.trim())) {
                programChaptersList.add(programChapters);
            }
        }
        product.put("programName", program);
        product.put("programThumbImageURL", "");
        product.put("programeURL", "");
        product.put("programChapters", programChaptersList);
        collection.insert(product);
        System.out.println("*#*#*#*#*#");
    }
}
I think this is the part that went wrong. I need to store all chapterNames in an array, compare each upcoming value against them, and create new objects accordingly to store in a list:
List<String> authors = new ArrayList<String>();
programChapters.put("chapterName", chapterName);
authors.add(authorName);
programChapters.put("authorName", authors);
Can someone suggest available solutions? :-)
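One way to get the merge described above is to group the (chapterName, authorName) pairs before building the Mongo documents, so each chapter appears once with all of its authors. A minimal sketch of that grouping step in plain Java (the class name and the sample row data here are made up for illustration; in the real code the pairs would come from the POI row loop):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class ChapterGrouper {
    // Groups (chapterName, authorName) pairs so each chapter appears once
    // with all of its authors collected into a single list.
    public static Map<String, List<String>> group(List<String[]> rows) {
        Map<String, List<String>> chapters = new LinkedHashMap<>();
        for (String[] row : rows) {
            String chapterName = row[0];
            String authorName = row[1];
            // computeIfAbsent creates the author list the first time a chapter is seen
            chapters.computeIfAbsent(chapterName, k -> new ArrayList<>()).add(authorName);
        }
        return chapters;
    }

    public static void main(String[] args) {
        List<String[]> rows = new ArrayList<>();
        rows.add(new String[] {"chapter1", "authorName1"});
        rows.add(new String[] {"chapter1", "authorName2"});
        rows.add(new String[] {"chapter2", "authorName3"});
        System.out.println(group(rows));
        // {chapter1=[authorName1, authorName2], chapter2=[authorName3]}
    }
}
```

Each map entry would then become one BasicDBObject with "chapterName" and "authorNames" before being added to programChaptersList, instead of creating a new object per row.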
I am currently working on my school project using Android Studio: an attendance system where I store data in Firestore, and users are able to download/export the data as an Excel file. What I am trying to do is get all the data from every document of a collection in Firestore.
Here is the code, but it only gets the first document's data and shows it in all the rows:
export.setOnClickListener(new View.OnClickListener() {
    @Override
    public void onClick(View view) {
        firebaseFirestore.collection("QR").document("QRScanned")
                .collection(LoginProfessorTabFragment.superName)
                .document(TeacherDash.subjectName1).collection("Record of Attendance")
                .get().addOnCompleteListener(new OnCompleteListener<QuerySnapshot>() {
                    @Override
                    public void onComplete(@NonNull Task<QuerySnapshot> task) {
                        if (task.isSuccessful()) {
                            HSSFWorkbook hssfWorkbook = new HSSFWorkbook();
                            HSSFSheet hssfSheet = hssfWorkbook.createSheet(TeacherDash.subjectName1);
                            for (int i = 0; i < 4; i++) { // for creating an equal amount of rows from the database
                                HSSFRow row = hssfSheet.createRow(i);
                                for (int j = 0; j <= cellCount; j++) { // creating each cell depending on the cell counter
                                    for (DocumentSnapshot documentSnapshot : task.getResult()) {
                                        String a = documentSnapshot.getString("Name");
                                        String b = documentSnapshot.getString("Date");
                                        String c = documentSnapshot.getString("Time");
                                        String d = documentSnapshot.getString("StudentNumber");
                                        String e = documentSnapshot.getString("Course");
                                        String f = documentSnapshot.getString("Subject");
                                        String g = documentSnapshot.getString("Room");
                                        String h = documentSnapshot.getString("Schedule");
                                        arrayExport.add(a);
                                        arrayExport.add(b);
                                        arrayExport.add(c);
                                        arrayExport.add(d);
                                        arrayExport.add(e);
                                        arrayExport.add(f);
                                        arrayExport.add(g);
                                        arrayExport.add(h);
                                        arrayRemoveAll.add(a);
                                        arrayRemoveAll.add(b);
                                        arrayRemoveAll.add(c);
                                        arrayRemoveAll.add(d);
                                        arrayRemoveAll.add(e);
                                        arrayRemoveAll.add(f);
                                        arrayRemoveAll.add(g);
                                        arrayRemoveAll.add(h);
                                        row.createCell(0).setCellValue(arrayExport.get(0));
                                        row.createCell(1).setCellValue(arrayExport.get(1));
                                        row.createCell(2).setCellValue(arrayExport.get(2));
                                        row.createCell(3).setCellValue(arrayExport.get(3));
                                        row.createCell(4).setCellValue(arrayExport.get(4));
                                        row.createCell(5).setCellValue(arrayExport.get(5));
                                        row.createCell(6).setCellValue(arrayExport.get(6));
                                        row.createCell(7).setCellValue(arrayExport.get(7));
                                    }
                                }
                            }
                            try {
                                if (!filePath.exists()) {
                                    filePath.createNewFile();
                                    Toast.makeText(TeacherDash.this, "Download success", Toast.LENGTH_SHORT).show();
                                }
                                FileOutputStream fileOutputStream = new FileOutputStream(filePath);
                                hssfWorkbook.write(fileOutputStream);
                                if (fileOutputStream != null) {
                                    fileOutputStream.flush();
                                    fileOutputStream.close();
                                }
                            } catch (Exception exception) {
                                exception.printStackTrace();
                            }
                        }
                    }
                });
    }
});
You are looping over things multiple times where you probably don't need to be. If you want to get multiple documents from a collection and have each document be a single row in the spreadsheet where the document fields fill the columns within that row, then you only need a single loop - over documents. It would look something like this:
HSSFWorkbook hssfWorkbook = new HSSFWorkbook();
HSSFSheet hssfSheet = hssfWorkbook.createSheet(TeacherDash.subjectName1);
int rowNum = 0;
for (DocumentSnapshot documentSnapshot : task.getResult()) {
    // Create a new row for each document
    HSSFRow row = hssfSheet.createRow(rowNum);
    ++rowNum;
    // Get the data from the Firestore document
    // that you want to put in this row
    String name = documentSnapshot.getString("Name");
    String date = documentSnapshot.getString("Date");
    String time = documentSnapshot.getString("Time");
    String num = documentSnapshot.getString("StudentNumber");
    String course = documentSnapshot.getString("Course");
    String sub = documentSnapshot.getString("Subject");
    String room = documentSnapshot.getString("Room");
    String sched = documentSnapshot.getString("Schedule");
    // Fill the contents of that row
    row.createCell(0).setCellValue(name);
    row.createCell(1).setCellValue(date);
    row.createCell(2).setCellValue(time);
    row.createCell(3).setCellValue(num);
    row.createCell(4).setCellValue(course);
    row.createCell(5).setCellValue(sub);
    row.createCell(6).setCellValue(room);
    row.createCell(7).setCellValue(sched);
}
Update: Can we filter the student document records by date range? For example: get all the student data ONLY from 7-26-2022 up to 7-27-2022 (which will come from a date range picker). We need to generate a report on where a student went from 7-26 to 7-27 by looking at the room and time fields. It is basically a contact tracing feature. We also need to put it in an Excel file.
I believe we can reuse the code above with a few modifications. We are thinking of changing our database structure from:
firebaseFirestore.collection("QR").document("QRScanned").collection(LoginProfessorTabFragment.superName).document(TeacherDash.subjectName1).collection("Record of Attendance")
and replacing the last .collection("Record of Attendance") with .collection() so that we have organized data, then start to query by the date range?
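Firestore itself can filter server-side with Query.whereGreaterThanOrEqualTo/whereLessThanOrEqualTo on a date field, but the range check can also be sketched independently of Firestore. A minimal sketch in plain Java, assuming (this is an assumption, not confirmed by the code above) the "Date" field holds strings in the M-d-yyyy format shown in the question:

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class DateRangeFilter {
    // Assumed date format, e.g. "7-26-2022"
    private static final DateTimeFormatter FMT = DateTimeFormatter.ofPattern("M-d-yyyy");

    // Keeps only the records whose "Date" field falls inside [from, to], inclusive.
    public static List<Map<String, String>> filterByRange(
            List<Map<String, String>> records, String from, String to) {
        LocalDate start = LocalDate.parse(from, FMT);
        LocalDate end = LocalDate.parse(to, FMT);
        List<Map<String, String>> inRange = new ArrayList<>();
        for (Map<String, String> record : records) {
            LocalDate d = LocalDate.parse(record.get("Date"), FMT);
            if (!d.isBefore(start) && !d.isAfter(end)) {
                inRange.add(record);
            }
        }
        return inRange;
    }

    public static void main(String[] args) {
        List<Map<String, String>> records = List.of(
                Map.of("Name", "student1", "Date", "7-26-2022", "Room", "101"),
                Map.of("Name", "student2", "Date", "7-28-2022", "Room", "102"));
        System.out.println(filterByRange(records, "7-26-2022", "7-27-2022").size()); // 1
    }
}
```

If the dates were stored as Firestore Timestamp values instead of strings, the same inclusive-range idea maps directly onto a chained whereGreaterThanOrEqualTo/whereLessThanOrEqualTo query, which avoids downloading out-of-range documents at all.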
Thank you for answering our question.
I've run into an issue with my code. I'm attempting to read a CSV file into a DataFrame and then add a new column with values from an ArrayList.
However, I cannot seem to use either the ArrayList or an array without an error. It wants me to enter the values for the new column manually. How can I get around this, please?
Exception in thread "main" org.apache.spark.SparkRuntimeException: The feature is not supported: literal for '[153.41, ...]' of class java.util.ArrayList.
    at org.apache.spark.sql.errors.QueryExecutionErrors$.literalTypeUnsupportedError
I've marked the problem line with a comment.
public static void dataframe() {
    SparkSession spark = SparkSession.builder().appName("RDD or DataFrame").getOrCreate();
    String path = "C:\\Users\\Paolo Agyei\\Desktop\\Computer Science\\Java\\SparkSimpleApp\\data.csv";
    Dataset<Row> csvDataset = spark.read().format("csv").option("header", "true").load(path);
    // Filtering columns by value
    Dataset<Row> result = csvDataset.filter(col("status").equalTo("authorized"));
    result = result.filter(col("card_present_flag").equalTo("0"));
    // Collecting columns to be split
    List<Row> long_lat = csvDataset.select("long_lat").collectAsList();
    List<Row> merchant_long_lat = csvDataset.select("merchant_long_lat").collectAsList();
    // Lists to hold result of long_lat
    ArrayList<String> longing = new ArrayList<String>();
    ArrayList<String> lat = new ArrayList<String>();
    // Lists to hold result of merchant_long_lat
    ArrayList<String> merch_long = new ArrayList<String>();
    ArrayList<String> merch_lat = new ArrayList<String>();
    String[] convert;
    for (Row row : long_lat) {
        convert = row.toString().split(" -", 2);
        longing.add(convert[0]);
        lat.add(convert[1]);
    }
    for (Row row : merchant_long_lat) {
        convert = row.toString().split("-", 2);
        merch_long.add(convert[0]);
        if (convert.length > 1)
            merch_lat.add(convert[1]);
        else
            merch_lat.add("null");
    }
    // Adding new columns
    result = result.withColumn("long", lit(longing)); // Issue: lit() does not accept an ArrayList
    /*
    result = result.withColumn("lat", null);
    result = result.withColumn("merch_long", null);
    result = result.withColumn("merch_lat", null);
    result = result.drop("long_lat", "merchant_long_lat");
    result.show();
    */
    System.out.println("Hello World!");
}
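The exception comes from lit(), which only accepts single literal values, not a collection of per-row values. Rather than collecting the column and re-attaching it, the usual approach is to derive the new columns from the existing one on the Spark side, e.g. with functions.split(col("long_lat"), " -").getItem(0). The splitting step itself can be checked independently of Spark; a minimal sketch in plain Java (the class name and sample value are made up for illustration):

```java
public class LongLatSplitter {
    // Splits a combined "longitude -latitude" value into its two parts,
    // mirroring the split(" -", 2) call applied to each row above.
    public static String[] splitLongLat(String value) {
        String[] parts = value.split(" -", 2);
        if (parts.length < 2) {
            // No latitude present in this value
            return new String[] {parts[0], "null"};
        }
        return parts;
    }

    public static void main(String[] args) {
        String[] parts = splitLongLat("153.41 -27.95");
        System.out.println(parts[0] + " / " + parts[1]); // 153.41 / 27.95
    }
}
```

Doing the split with Spark column functions keeps the row order intact, whereas collecting to a list and re-attaching it loses the row correspondence even when lit() is avoided.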
Instead of printing to the console, I need to print into an Excel file.
The current output is:
Document Id : 101
MKT42LL/A,3C111LL/A,MKRW2LL/A,
Document Id : 102
APPLE/A,MHCR3LL/A-E,B2BIPADMINI64W,RM62LL/A,
I need to print this into an Excel file like row1 (cell1=101, cell2=MKT42LL/A, cell3=MKRW2LL/A), and so on.
// create obj for get source excel file methods
readDocSourceFile objDocSourceExcel = new readDocSourceFile();
HashMap<String, List<String>> docSource = objDocSourceExcel.getDocSource();
Set<String> keys = docSource.keySet();
Iterator<String> itr = keys.iterator();
// create obj for get metadata excel file methods
readMetadataFile objMetaSourceExcel = new readMetadataFile();
HashMap<String, List<String>> metaSource = objMetaSourceExcel.getMetaSource();
while (itr.hasNext()) {
    String key = itr.next();
    if (metaSource.containsKey(key)) {
        System.out.println("Document Id : " + key);
        List<String> docSourceData = docSource.get(key);
        List<String> metaData = metaSource.get(key);
        docSourceData.removeAll(metaData);
        // print all metadata of docSourceFile which does not exist in the metadata file
        for (int m = 0; m < docSourceData.size(); m++) {
            System.out.print(docSourceData.get(m) + ",");
        }
        System.out.println("\n----------------------");
    }
} // end while loop
fileOut.close();
Yes, with the help of the Apache POI library.
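The layout step (one row per document id, with the id in the first cell and the remaining values in the following cells) can be sketched independently of POI. A minimal sketch in plain Java (the class and method names are made up for illustration):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class RowLayoutBuilder {
    // Turns each (documentId -> values) entry into one flat row:
    // the first cell is the id, the remaining cells are the values.
    public static List<List<String>> toRows(Map<String, List<String>> docSourceData) {
        List<List<String>> rows = new ArrayList<>();
        for (Map.Entry<String, List<String>> entry : docSourceData.entrySet()) {
            List<String> row = new ArrayList<>();
            row.add(entry.getKey());
            row.addAll(entry.getValue());
            rows.add(row);
        }
        return rows;
    }

    public static void main(String[] args) {
        Map<String, List<String>> data = new LinkedHashMap<>();
        data.put("101", List.of("MKT42LL/A", "3C111LL/A"));
        System.out.println(toRows(data)); // [[101, MKT42LL/A, 3C111LL/A]]
    }
}
```

Each inner list then maps one-to-one onto a POI row: sheet.createRow(i) for the list, row.createCell(j).setCellValue(...) for each element, and finally workbook.write(fileOut) instead of the System.out.print calls.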
Need to get a "value" based on a given "key" from an Excel file.
I have an Excel file, file name Test.xlsx, with sheet name sheet1.
The sheet contains the following key and value pairs, and the JIRA ticket is unique.
Test case description | testdata key | Testdata value | testdata2 key | Testdata2 value | testdata3 key | Testdata3 value
Sampiletest description1 | Testcase-jira-1 | user1id | Harshadh | Password | 123ggg
Sampiletest2 discription | Testcase-jira-2 | user2 | Ramu | Password123 | 333ggg
Sampiletest3 discription | Test case jira-3 | user3 | latha | Password556 | 73hhh
(up to N number of rows)
Here, I need to get the data in the following way using Java, Selenium, and Cucumber. I am going to pass the above test data into a Cucumber step definition class file in the BDD way.
How can we get the data in the step definition file in the following way:
1) If I pass a key value from the current row, how can I get the corresponding value to provide as test input for a Selenium WebElement?
Example, 4th row data:
Sampiletest3 discription|Test case jira-3| user3|latha|Password556|73hhh
.....
If I call "user3", that should return "Password556".
The same way, for any row, I need to get the value.
Please guide me.
You can try the below code.
Feature file:
In Examples, you can give the row numbers and sheet name to use the data for iterations.
Scenario Outline: Login to the application with multiple users.
  Given get data from datasheet with "<test_id>" and "<sheetName>"
  And login to the application

  Examples:
    | test_id | sheetName |
    | 1       | Login     |
    | 2       | Login     |
Excel data:
Read the data from Excel and store it in a HashMap:
Create a class to read the data (example: ExcelReader).
Use the org.apache.poi.ss.usermodel and org.apache.poi.xssf.usermodel imports.
public class ExcelReader {
    private File file;
    private FileInputStream inputStream;
    private String testID;
    private String sheetName;
    private int testIdColumn;
    private int numberOfColumns;
    private XSSFCell cell;
    public HashMap<String, String> fieldsAndValues;

    public ExcelReader(String testId, String sheetName) {
        file = new File(System.getProperty("user.dir") + "Excel location path");
        try {
            inputStream = new FileInputStream(file);
        } catch (FileNotFoundException e) {
            System.out.println("File not found at given location: " + e);
        }
        this.testID = testId;
        this.sheetName = sheetName;
        this.readExcelAndCreateHashMapForData();
    }

    public HashMap<String, String> readExcelAndCreateHashMapForData() {
        try {
            fieldsAndValues = new HashMap<String, String>();
            XSSFWorkbook workBook = new XSSFWorkbook(inputStream);
            XSSFSheet sheet = workBook.getSheet(sheetName);
            /* Get number of rows */
            int lastRow = sheet.getLastRowNum();
            int firstRow = sheet.getFirstRowNum();
            int numberOfRows = lastRow - firstRow;
            /* Get test_Id column number. */
            outerloop:
            for (int row = 0; row < numberOfRows; row++) {
                numberOfColumns = sheet.getRow(row).getLastCellNum();
                for (int cellNumber = 0; cellNumber < numberOfColumns; cellNumber++) {
                    cell = sheet.getRow(row).getCell(cellNumber);
                    cell.setCellType(Cell.CELL_TYPE_STRING);
                    if (sheet.getRow(row).getCell(cellNumber).getStringCellValue().equalsIgnoreCase("test_ID")) {
                        testIdColumn = sheet.getRow(row).getCell(cellNumber).getColumnIndex();
                        break outerloop;
                    }
                }
            }
            /* Search for the test id value. */
            outerloop:
            for (int i = 0; i <= numberOfRows; i++) {
                cell = sheet.getRow(i).getCell(testIdColumn);
                cell.setCellType(Cell.CELL_TYPE_STRING);
                if (testID.equals(sheet.getRow(i).getCell(testIdColumn).getStringCellValue())) {
                    for (int j = 0; j < numberOfColumns; j++) {
                        XSSFCell key = sheet.getRow(testIdColumn).getCell(j);
                        XSSFCell value = sheet.getRow(i).getCell(j);
                        key.setCellType(Cell.CELL_TYPE_STRING);
                        if (value == null) {
                            // Not capturing blank cells.
                        } else if (value.getCellType() == XSSFCell.CELL_TYPE_BLANK) {
                            // Not capturing blank cells.
                        } else {
                            value.setCellType(Cell.CELL_TYPE_STRING);
                            String fieldName = sheet.getRow(testIdColumn).getCell(j).getStringCellValue().trim();
                            String fieldValue = sheet.getRow(i).getCell(j).getStringCellValue().trim();
                            fieldsAndValues.put(fieldName, fieldValue);
                        }
                    }
                    System.out.println("Fields and values: " + Arrays.toString(fieldsAndValues.entrySet().toArray()));
                    break outerloop;
                }
            }
        } catch (Exception e) {
            System.out.println("Exception occurred at getting the sheet: " + e);
        }
        /* Return the hash map */
        return fieldsAndValues;
    }
}
StepDefinition:
ExcelReader excelReader;

@Given("get data from datasheet with \"(.*)\" and \"(.*)\"$")
public void get_data_from_datasheet(String testId, String sheetName) {
    excelReader = new ExcelReader(testId, sheetName);
}

@And("login to the application")
public void loginApplication() {
    driver.findElement(By.xpath("element")).sendKeys(excelReader.fieldsAndValues.get("UserName"));
    driver.findElement(By.xpath("element")).sendKeys(excelReader.fieldsAndValues.get("PassWord"));
    driver.findElement(By.xpath("element")).click();
}
I would recommend putting all the data for a scenario inside Gherkin documents, but you might have a valid use case for pulling data from Excel. In my experience, however, this type of requirement is rare. The reason it is not recommended is that your BDD feature files are your requirements and should contain the right level of information to document the expected behavior of the system. If your data comes from an Excel file, it just makes the requirements harder to read and more difficult to maintain.
That said, if there is a strong reason to store this data in Excel, you could easily achieve it using NoCodeBDD. All you have to do is map the column names and upload the Excel file, and the tool takes care of the rest. Please check this .gif to see how it is done: https://nocodebdd.live/examples-using-excel
Disclaimer: I am the founder of NoCodeBDD.
If you are using JUnit 5, here is an example of how it is done: https://newbedev.com/data-driven-testing-in-cucumber-using-excel-files-code-example
You can use an external data source to provide examples using qaf-cucumber. It lets you supply a data file for the examples from an external data source, which includes CSV, JSON, XML, Excel files, or a database query.
We cannot directly integrate Excel file data into a Gherkin file. Instead, write a separate method in the step file to get the data from Excel and run your cases.
I use the following code to get the data (common code):
public static JSONArray Read_Excel_Data(String filename, String sheetname) throws IOException {
    FileInputStream fileIn = null;
    Workbook workbookout = null;
    JSONArray totalData = new JSONArray();
    try {
        log.info("Filename and Sheet name : " + filename + ", " + sheetname);
        fileIn = new FileInputStream(new File(filename));
        workbookout = new XSSFWorkbook(fileIn);
        Sheet sh = workbookout.getSheet(sheetname);
        int totRows = sh.getLastRowNum();
        Row headerRow = sh.getRow(0);
        int totCols = headerRow.getLastCellNum();
        log.info("Total [ Rows and Columns ] : [ " + totRows + " and " + totCols + " ] ");
        for (int i = 1; i <= totRows; i++) {
            log.info("Progressing row : " + i);
            Row tempRw = sh.getRow(i);
            JSONObject jo = new JSONObject();
            for (int j = 0; j < totCols; j++) {
                Cell tempCell = tempRw.getCell(j);
                Cell headerCell = headerRow.getCell(j);
                try {
                    jo.put(headerCell.getStringCellValue(), tempCell.getStringCellValue());
                    log.info("Value in " + i + " / " + j + " :::::::::::: > " + tempCell.getStringCellValue());
                } catch (NullPointerException npe) {
                    log.warn(":::::::::::: > Null Value in [ " + i + " / " + j + " ] ");
                }
            }
            totalData.add(jo);
        }
        workbookout.close();
        fileIn.close();
        System.out.println("Total data :::::::: " + totalData.toJSONString());
    } catch (Exception e) {
        e.printStackTrace();
        log.error("Error Occurred !!" + e.toString());
        workbookout.close();
        fileIn.close();
    }
    return totalData;
}
I have a standalone application which connects to a SQL database and saves the ResultSet in a list of Maps. This is what I have so far:
List<Map<String, Object>> rows = new ArrayList<>();
stmt = conn.createStatement();
ResultSet rs = stmt.executeQuery(queryString);
ResultSetMetaData rsmd = rs.getMetaData(); // properties of the ResultSet
int columnCount = rsmd.getColumnCount();
while (rs.next()) {
    Map<String, Object> rowResult = new HashMap<String, Object>(columnCount);
    for (int i = 1; i <= columnCount; i++) {
        rowResult.put(rsmd.getColumnName(i), rs.getObject(i));
    }
    rows.add(rowResult);
}
// WRITE TO CSV
String csv = "C:\\Temp\\data.csv";
CSVWriter writer = new CSVWriter(new FileWriter(csv));
// Write the record to file
writer.writeNext(rows); // does not compile: writeNext expects a String[], not a List of Maps
// close the writer
writer.close();
How do I add this "rows" List to a CSV with columns? Any clues and suggestions are appreciated.
Since every record will have the same columns in the same order, then I would just use a List<List<Object>> for the rows.
For the headers, you don't need to get them on every row. Just get them once like so:
List<String> headers = new ArrayList<>();
for (int i = 1; i <= columnCount; i++ )
{
String colName = rsmd.getColumnName(i);
headers.add(colName);
}
Next, you can get the rows like this:
List<List<Object>> rows = new ArrayList<>();
while (rs != null && rs.next())
{
    List<Object> row = new ArrayList<>();
    for (int i = 1; i <= columnCount; i++)
    {
        row.add(rs.getObject(i));
    }
    rows.add(row);
}
Finally, to create the CSV file, you can do this:
// create the CSVWriter
String csv = "C:\\Temp\\data.csv";
CSVWriter writer = new CSVWriter(new FileWriter(csv));
// write the header line
for (String colName : headers)
{
writer.write(colName);
}
writer.endRecord();
// write the data records
for (List<Object> row : rows)
{
for (Object o : row)
{
// handle nulls how you wish here
String val = (o == null) ? "null" : o.toString();
writer.write(val);
}
writer.endRecord();
}
// you should close the CSVWriter in a finally block or use a
// try-with-resources statement
writer.close();
Note: In my code examples, I'm using Type Inference
See: Try-With-Resources Statement.
Honestly, for what you are trying to do, I would recommend using the writeAll method in CSVWriter and passing in the ResultSet.
writer.writeAll(rs, true);
The second parameter is the "include column names" flag, so the first row in your CSV file will be the column names. Then when you read the file, you can translate it back into your Map if you want to (though everything will be strings unless you know the types when you are reading it).
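The translation back into maps is just a matter of pairing the header row with each data row. A minimal sketch in plain Java (the class name is made up for illustration; the rows are assumed to have already been read back as string arrays by your CSV reader):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class CsvRowMapper {
    // Rebuilds one map per data row by pairing the header names with
    // the corresponding values (everything comes back as String).
    public static List<Map<String, String>> toMaps(String[] header, List<String[]> dataRows) {
        List<Map<String, String>> result = new ArrayList<>();
        for (String[] row : dataRows) {
            Map<String, String> mapped = new LinkedHashMap<>();
            for (int i = 0; i < header.length; i++) {
                // Guard against short rows; missing cells become null
                mapped.put(header[i], i < row.length ? row[i] : null);
            }
            result.add(mapped);
        }
        return result;
    }

    public static void main(String[] args) {
        String[] header = {"id", "name"};
        List<String[]> rows = List.of(new String[] {"1", "Scott"});
        System.out.println(toMaps(header, rows)); // [{id=1, name=Scott}]
    }
}
```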
Hope that helps.
Scott :)