My task is to read some data from a file, split it, and parse it into instances of the correct classes. So far so good, but I have a problem with the Education class: every person has a different number of education degrees, and I don't know how to handle that. The same problem comes up with the insurance class, because for every person we have information covering a different period of time. So that is basically my problem: I don't know how many instances of Education I have to create.
Here are two examples of the data that I have to read and parse:
--1--
"Plamen;Stoichev;Izmirliev;M;16.7.1980;206;Bulgaria;Sofiya;Studentski;1016;Opalchenska;21;;;P; William Gladstoine;15.9.1986;15.6.1993;S;William Gladstoine ;15.9.1993;30.6.1998;3.343;B; William Gladstoine;1.10.1999;1.6.2003;3.045" +
"2016;11;1015.20;2016;10;605.93;2016;9;701.61;2016;7;981.86;2016;4;1042.57;2016;3;919.87;2016;2;720.00;2015;12;969.75;2015;6;641.16;2015;3;811.76;2015;2;757.07;2014;12;1321.31;2014;11;863.39;2014;9;1539.51;2014;7;1159.62;2014;5;1295.59;2014;3;910.59;2014;1;1033.80";
--2--
Violeta;Konstantinova;Orlova;F;23.8. 1982;148;Bulgaria;Sofia;Izgrev;1008;King Boris III;123;5;26;P;William Gladstoine ;15.9.1988;15.6.1995;S;William Gladstoine ;15.9.1995;30.6.2000;4.069
2016;11;1587.70;2016;8;1524.04;2016;6;1273.10;2016;4;1129.08; 2016;2;1469.79;2015;12;927.91;2015;10;1116.83;2015;6;1143.05;2015;4;1348.82;
The SocialInsuranceRecord class has three fields: year, month, and tax amount.
The Education class has: level of degree, name of school, enrollment date, graduation date, and average grade.
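For reference, a minimal sketch of how these two classes might look; the field names and constructor shapes are assumptions read off the description above, not the actual DataObjects classes:
// Sketch only: names and constructors are assumed from the description above.
// Each class would live in its own file; dates use java.time.LocalDate.
class SocialInsuranceRecord {
    private final int year;
    private final int month;
    private final double taxAmount;

    SocialInsuranceRecord(int year, int month, double taxAmount) {
        this.year = year;
        this.month = month;
        this.taxAmount = taxAmount;
    }
}

class Education {
    private final String degreeLevel;                 // "P", "S", "B" in the samples
    private final String schoolName;
    private final java.time.LocalDate enrollmentDate;
    private final java.time.LocalDate graduationDate;
    private final Double averageGrade;                // null when the entry has no grade

    Education(String degreeLevel, String schoolName, java.time.LocalDate enrollmentDate,
              java.time.LocalDate graduationDate, Double averageGrade) {
        this.degreeLevel = degreeLevel;
        this.schoolName = schoolName;
        this.enrollmentDate = enrollmentDate;
        this.graduationDate = graduationDate;
        this.averageGrade = averageGrade;
    }
}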
import java.nio.file.Files;
import java.nio.file.Paths;
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import java.util.ArrayList;
import java.util.List;
import java.util.Scanner;
import DataObjects.address.Address;
import DataObjects.education.Education;
import DataObjects.insurance.SocialInsuranceRecord;
import DataObjects.personaldetails.Citizen;
import DataObjects.personaldetails.Gender;
public class GetDataFromFile {
private List<Citizen> citizens = new ArrayList<>();
private static final String FILE_PATH = "D:\\data.dat";
public List<Citizen> getCitizens() {
List<Citizen> list = new ArrayList<>();
try {
Scanner scanner = new Scanner(new String(Files.readAllBytes(Paths.get(FILE_PATH))));
while(scanner.hasNextLine()) {
Citizen citizen = parseCitizen(scanner.nextLine());
list.add(citizen);
}
}catch(Exception ex) {
ex.printStackTrace();
}
return list;
}
public Citizen parseCitizen(String nextLine) {
// Trim whitespace around the separators and keep empty fields (";;").
String[] split = nextLine.split("\\s*;\\s*", -1);
// Dates in the sample data look like "16.7.1980", sometimes with stray spaces.
LocalDate birthDate = LocalDate.parse(split[4].replaceAll("\\s+", ""), DateTimeFormatter.ofPattern("d.M.yyyy"));
Citizen citizen = new Citizen(split[0], split[1], split[2], Gender.valueOf(split[3]), birthDate, Integer.parseInt(split[5]));
// The address occupies fields 6..13; floor and apartment can be empty in the samples.
int floor = split[12].isEmpty() ? 0 : Integer.parseInt(split[12]);
int apartment = split[13].isEmpty() ? 0 : Integer.parseInt(split[13]);
Address address = new Address(split[6], split[7], split[8], split[9], split[10], split[11], floor, apartment);
citizen.setAddress(address);
// The education and insurance parts have variable length; see the sketch below.
parseVariableParts(split, citizen);
return citizen;
}
}
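The variable-length tail is the crux of the question, and the sketch below is one way to handle it. It walks the token array with an index and relies on these assumptions drawn from the two samples: education entries start right after the address (index 14), each entry begins with a single-letter degree level, an average grade is present only when the following token is a decimal number, and everything after the education entries comes in year;month;amount triples. It also assumes the Education and SocialInsuranceRecord constructors match the field lists described earlier.
// Sketch: add to GetDataFromFile alongside parseCitizen.
private static final DateTimeFormatter DATE_FMT = DateTimeFormatter.ofPattern("d.M.yyyy");

private void parseVariableParts(String[] split, Citizen citizen) {
    List<Education> educations = new ArrayList<>();
    int i = 14; // first token after the address fields
    while (i < split.length && split[i].matches("[A-Z]")) {
        String level = split[i++];
        String school = split[i++].trim();
        LocalDate enrolled = LocalDate.parse(split[i++].replaceAll("\\s+", ""), DATE_FMT);
        LocalDate graduated = LocalDate.parse(split[i++].replaceAll("\\s+", ""), DATE_FMT);
        Double grade = null;
        if (i < split.length && split[i].matches("\\d+\\.\\d+")) {
            grade = Double.parseDouble(split[i++]); // absent on the primary-school entries
        }
        educations.add(new Education(level, school, enrolled, graduated, grade));
    }
    citizen.set_educations(educations);

    // Whatever remains is year;month;amount triples.
    List<SocialInsuranceRecord> records = new ArrayList<>();
    while (i + 2 < split.length && !split[i].isEmpty()) {
        records.add(new SocialInsuranceRecord(
                Integer.parseInt(split[i]),
                Integer.parseInt(split[i + 1]),
                Double.parseDouble(split[i + 2])));
        i += 3;
    }
    citizen.set_socialInsuranceRecords(records);
}
Because the loop conditions decide where the education section ends, you never need to know the number of Education entries up front; the degree-letter test simply stops matching once the insurance triples begin.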
I want to add a Quartz job scheduler to my project, but when I run the project there is an error. How do I get the Quartz job scheduler to read my database.properties?
Thank you very much for your answer, Mr. Igor. My problem is that I already have a database.properties, but when I run my project the Quartz job scheduler runs, and when the controller job connects to the database an error appears, even though I have also opened the connection.
This is my database.properties :
development.driver=com.microsoft.sqlserver.jdbc.SQLServerDriver
development.username=xxxx
development.password=xxxx
development.url=jdbc:sqlserver://10.10.5.45;databaseName=xxxxxxxxxxxxx;
This is my quartz job class :
package app.controllers.api.prosesgaji;
import java.time.LocalDate;
import java.util.ArrayList;
import java.util.Date;
import java.util.List;
import java.util.Map;
import org.javalite.activejdbc.Base;
import org.javalite.common.Convert;
import org.javalite.http.Http;
import org.javalite.http.Post;
import org.quartz.Job;
import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;
import app.controllers.api.pemeliharaandata.KomponenGajiPegawaiHitungGajiController;
import app.models.Mstpegawai;
import core.io.enums.HttpResponses;
public class QuartzJobProsesGaji implements Job {
@Override
public void execute(JobExecutionContext context) throws JobExecutionException {
// TODO Auto-generated method stub
LocalDate localDate = LocalDate.now();
int bulan = localDate.getMonthValue();
int tgl = localDate.getDayOfMonth();
try {
String tglGaji = Convert.toString(localDate);
String nip = "";
String kddati1 = "";
String kddati2 = "";
int kdStapeg = 0;
String tmtStop = "";
int no = 1;
System.out.println("\nStart Create Gaji");
Base.open(); // the no-arg open() has to find connection info in a registered configuration; see the answer below
Base.openTransaction();
List<Map> dataPegawai = new ArrayList<>();
dataPegawai = Mstpegawai.getPegawaiQuartzTestBeberapaPegawai(tglGaji);
Mstpegawai mstPegawai = new Mstpegawai();
for (Map map : dataPegawai) {
System.out.println("\nPegawai ke = "+no);
nip = Convert.toString(map.get("nip"));
System.out.println(" - nip = "+nip);
kddati1 = Convert.toString(map.get("kddati1"));
System.out.println(" - kddati1 = "+kddati1);
kddati2 = Convert.toString(map.get("kddati2"));
System.out.println(" - kddati2 = "+kddati2);
kdStapeg = Convert.toInteger(map.get("kdstapeg"));
System.out.println(" - kdStapeg = "+kdStapeg);
tmtStop = Convert.toString(map.get("tmtstop"));
System.out.println(" - tmtStop = "+tmtStop);
KomponenGajiPegawaiHitungGajiController hitungGaji = new KomponenGajiPegawaiHitungGajiController();
hitungGaji.prosesGajiInduk(mstPegawai.fromMap(map), tglGaji);
no++;
}
Base.close();
System.out.println("\nDone Create Gaji\n");
} catch (Exception e) {
e.printStackTrace();
}
}
}
So why didn't Base.open() work?
You need to familiarize yourself with the docs: http://javalite.io/database_configuration. Chances are you did not provide a config file (see http://javalite.io/database_configuration#property-file-configuration) but are using a DBConnectionFilter, which has no idea where to connect.
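If you would rather not set up a DBConnectionFilter, another option is to have the job open the connection explicitly from the properties file. A minimal sketch, assuming the database.properties shown above sits on the classpath and keeping your key names (the class name here is hypothetical):
package app.controllers.api.prosesgaji;

import java.io.InputStream;
import java.util.Properties;
import org.javalite.activejdbc.Base;
import org.quartz.Job;
import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;

public class QuartzJobWithExplicitConnection implements Job {
    @Override
    public void execute(JobExecutionContext context) throws JobExecutionException {
        Properties props = new Properties();
        try (InputStream in = getClass().getResourceAsStream("/database.properties")) {
            props.load(in); // fails if the file is missing from the classpath
        } catch (Exception e) {
            throw new JobExecutionException("Could not read database.properties", e);
        }
        // Open the connection explicitly instead of relying on a filter.
        Base.open(props.getProperty("development.driver"),
                props.getProperty("development.url"),
                props.getProperty("development.username"),
                props.getProperty("development.password"));
        try {
            // ... the work that needs the database goes here ...
        } finally {
            Base.close(); // always release the connection, even on failure
        }
    }
}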
I am working on a requirement where I need to parse CSV record fields against multiple validations. I am using Super CSV, which supports field-level processors for validating data.
My requirement is to validate each record/row field against multiple validations and save them to the database with a success/failure status; for failed records I have to display all the failed validations using some codes.
Super CSV is working fine, but it checks only the first validation for a field; if that fails, it ignores the second validation for the same field. Please look at the code below and help me with this.
package com.demo.supercsv;
import java.io.FileReader;
import java.io.IOException;
import java.io.StringWriter;
import java.util.ArrayList;
import java.util.List;
import org.supercsv.cellprocessor.Optional;
import org.supercsv.cellprocessor.constraint.NotNull;
import org.supercsv.cellprocessor.constraint.StrMinMax;
import org.supercsv.cellprocessor.constraint.StrRegEx;
import org.supercsv.cellprocessor.constraint.UniqueHashCode;
import org.supercsv.cellprocessor.ift.CellProcessor;
import org.supercsv.exception.SuperCsvCellProcessorException;
import org.supercsv.io.CsvBeanReader;
import org.supercsv.io.CsvBeanWriter;
import org.supercsv.io.ICsvBeanReader;
import org.supercsv.io.ICsvBeanWriter;
import org.supercsv.prefs.CsvPreference;
public class ParserDemo {
public static void main(String[] args) throws IOException {
List<Employee> emps = readCSVToBean();
System.out.println(emps);
System.out.println("******");
writeCSVData(emps);
}
private static void writeCSVData(List<Employee> emps) throws IOException {
ICsvBeanWriter beanWriter = null;
StringWriter writer = new StringWriter();
try{
beanWriter = new CsvBeanWriter(writer, CsvPreference.STANDARD_PREFERENCE);
final String[] header = new String[]{"id","name","role","salary"};
final CellProcessor[] processors = getProcessors();
// write the header
beanWriter.writeHeader(header);
//write the beans data
for(Employee emp : emps){
beanWriter.write(emp, header, processors);
}
}finally{
if( beanWriter != null ) {
beanWriter.close();
}
}
System.out.println("CSV Data\n"+writer.toString());
}
private static List<Employee> readCSVToBean() throws IOException {
ICsvBeanReader beanReader = null;
List<Employee> emps = new ArrayList<Employee>();
try {
beanReader = new CsvBeanReader(new FileReader("src/employees.csv"),
CsvPreference.STANDARD_PREFERENCE);
// the name mapping provides the basis for the bean setters
final String[] nameMapping = new String[]{"id","name","role","salary"};
// read just the header, so that it doesn't get mapped to an Employee object
final String[] header = beanReader.getHeader(true);
final CellProcessor[] processors = getProcessors();
Employee emp;
while ((emp = beanReader.read(Employee.class, nameMapping,
processors)) != null) {
emps.add(emp);
if (!CaptureExceptions.SUPPRESSED_EXCEPTIONS.isEmpty()) {
System.out.println("Suppressed exceptions for row "
+ beanReader.getRowNumber() + ":");
for (SuperCsvCellProcessorException e :
CaptureExceptions.SUPPRESSED_EXCEPTIONS) {
System.out.println(e);
}
// for processing next row clearing validation list
CaptureExceptions.SUPPRESSED_EXCEPTIONS.clear();
}
}
} finally {
if (beanReader != null) {
beanReader.close();
}
}
return emps;
}
private static CellProcessor[] getProcessors() {
final CellProcessor[] processors = new CellProcessor[] {
new CaptureExceptions(new NotNull(new StrRegEx("\\d+", new StrMinMax(0, 2)))), // id must be digits and no more than two characters
new CaptureExceptions(new Optional()),
new CaptureExceptions(new Optional()),
new CaptureExceptions(new NotNull()),
// Salary
};
return processors;
}
}
Exception Handler:
package com.demo.supercsv;
import java.util.ArrayList;
import java.util.List;
import org.supercsv.cellprocessor.CellProcessorAdaptor;
import org.supercsv.cellprocessor.ift.CellProcessor;
import org.supercsv.exception.SuperCsvCellProcessorException;
import org.supercsv.util.CsvContext;
public class CaptureExceptions extends CellProcessorAdaptor {
public static List<SuperCsvCellProcessorException> SUPPRESSED_EXCEPTIONS =
new ArrayList<SuperCsvCellProcessorException>();
public CaptureExceptions(CellProcessor next) {
super(next);
}
public Object execute(Object value, CsvContext context) {
try {
return next.execute(value, context);
} catch (SuperCsvCellProcessorException e) {
// save the exception
SUPPRESSED_EXCEPTIONS.add(e);
if(value!=null)
return value.toString();
else
return "";
}
}
}
sample csv file
ID,Name,Role,Salary
a123,kiran,CEO,"5000USD"
2,Kumar,Manager,2000USD
3,David,developer,1000USD
When I run my program, the Super CSV exception handler displays this message for the ID value in the first row:
Suppressed exceptions for row 2:
org.supercsv.exception.SuperCsvConstraintViolationException: 'a123' does not match the regular expression '\d+'
processor=org.supercsv.cellprocessor.constraint.StrRegEx
context={lineNo=2, rowNo=2, columnNo=1, rowSource=[a123, kiran, CEO, 5000USD]}
[com.demo.supercsv.Employee#23bf011e, com.demo.supercsv.Employee#50e26ae7, com.demo.supercsv.Employee#40d88d2d]
For the ID field: the value should not be null, should be no more than two characters long, and should be numeric. I have defined the field processor like this:
new CaptureExceptions(new NotNull(new StrRegEx("\\d+",new StrMinMax(0, 2))))
but Super CSV ignores the second validation (max length 2) if the given input is not numeric. If my input is 100, then it validates the max length. But how do I get both validations reported for a wrong input? Please help me with this.
Super CSV cell processors work as a chain: a value is handed to the next processor only after it passes the previous constraint, so the first failure stops the chain.
To achieve your goal, you need to write a custom CellProcessor that checks both whether the input is a number (digits) and whether its length is between 0 and 2, so that both checks are done in a single step.
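A sketch of such a combined processor (the class name is mine; it evaluates both rules before failing, so one exception can report everything that is wrong with the value):
package com.demo.supercsv;

import org.supercsv.cellprocessor.CellProcessorAdaptor;
import org.supercsv.exception.SuperCsvConstraintViolationException;
import org.supercsv.util.CsvContext;

// Combined check: the value must be numeric AND at most maxLength characters.
// Both rules are evaluated before throwing, so the message lists every failure.
public class DigitsWithMaxLength extends CellProcessorAdaptor {

    private final int maxLength;

    public DigitsWithMaxLength(int maxLength) {
        this.maxLength = maxLength;
    }

    @Override
    public Object execute(Object value, CsvContext context) {
        validateInputNotNull(value, context); // inherited null check
        String s = value.toString();
        StringBuilder problems = new StringBuilder();
        if (!s.matches("\\d+")) {
            problems.append("value is not numeric; ");
        }
        if (s.length() > maxLength) {
            problems.append("value is longer than ").append(maxLength).append(" characters; ");
        }
        if (problems.length() > 0) {
            throw new SuperCsvConstraintViolationException("'" + s + "': " + problems, context, this);
        }
        return next.execute(s, context);
    }
}
In getProcessors() the ID column would then be new CaptureExceptions(new DigitsWithMaxLength(2)), and CaptureExceptions would capture a single exception whose message lists every rule the value broke.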
I'm trying to extract establishment data from the Google Places API. It worked initially, but after moving the place extraction into its own method (instead of doing it in the main method), the program crashes. When I debugged it, it gets stuck in a Java method called park (in the LockSupport class). From what I've read, this happens when there is more than one thread and there are synchronization problems. I'm very new at this and I don't know how to solve this in my code. In my mind there is only one thread in this code, but I'm pretty sure I'm wrong. Please help. It crashes in a "for" loop, commented below. Thanks so much!
package laundry;
import java.util.ArrayList;
import java.util.List;
import se.walkercrou.places.GooglePlaces;
import se.walkercrou.places.Param;
import se.walkercrou.places.Place;
import java.io.FileWriter; //add to import list
import java.io.IOException;
import java.io.Writer;
import static java.lang.Math.sqrt;
import java.util.logging.Level;
import java.util.logging.Logger;
import javax.microedition.location.Coordinates;
import se.walkercrou.places.exception.GooglePlacesException;
import se.walkercrou.places.exception.RequestDeniedException;
public class Laundry {
public static void main(String[] args) throws IOException {
List<Place> detailedPlaces = new ArrayList<>();
List<double[]> circlesPoint = new ArrayList<>();
double radio = 10;
Coordinates startingPoint = new Coordinates (-38.182476,144.552079,0);//geelong south west corner of the grid
Coordinates finalPoint = new Coordinates(-37.574381,145.415879,0); //north east of melbourne
GooglePlaces cliente = new GooglePlaces("keyof googlplaces");
MyResult result1=exploreGrid(startingPoint,finalPoint, radio, detailedPlaces, circlesPoint,cliente);
writeResultsCircles(result1.getPoints(),"c:\\outputCircles.txt" );
writeResultsPlaces(result1.getPlaces(), "c:\\outputPlaces.txt");
}
private static MyResult exploreGrid(Coordinates SWpoint,Coordinates NEpoint,double rad, List<Place> lugares, List<double[]> points,GooglePlaces client){
int iterationRow=0;
Coordinates workingPoint = new Coordinates(SWpoint.getLatitude(),SWpoint.getLongitude(),(float) 0.0);
List<Place> places = new ArrayList<>();
while (workingPoint.getLatitude()<NEpoint.getLatitude()){
while (workingPoint.getLongitude()<NEpoint.getLongitude()){
try {
places = client.getNearbyPlaces(workingPoint.getLatitude(), workingPoint.getLongitude(), rad*1000,GooglePlaces.MAXIMUM_RESULTS ,Param.name("types").value("laundry"));
if (places.size()==60){ // getNearbyPlaces hit its maximum number of results
iterationRow=1;
}
for (Place place : places) {
lugares.add(place.getDetails());//here is where it crashes
}
}catch (GooglePlacesException ex) {
System.out.println(ex.getCause());
}
double[] prePoint = {workingPoint.getLatitude(),workingPoint.getLongitude(),rad};
points.add(prePoint);
workingPoint.setLongitude(workingPoint.getLongitude()+rad*sqrt(3)*0.01134787); // step east by rad*sqrt(3) km (~0.0113 degrees of longitude per km at this latitude)
}
iterationRow++;
if (isEven(iterationRow)){
workingPoint.setLongitude(SWpoint.getLongitude());
} else {
workingPoint.setLongitude(SWpoint.getLongitude()+rad*sqrt(3)*0.01134787/2);
}
workingPoint.setLatitude(workingPoint.getLatitude()+rad*3/2*0.00899416); // step north by 1.5*rad km (~0.009 degrees of latitude per km)
}
return new MyResult(lugares,points);
}
}
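No answer is recorded here, but one defensive experiment (an assumption, not a confirmed fix for the LockSupport.park hang) is to fetch details one place at a time, isolating failures and throttling the requests, since each getDetails() call issues its own HTTP request:
// Hypothetical helper: fetch details one at a time, skip failures, throttle.
import java.util.ArrayList;
import java.util.List;
import se.walkercrou.places.Place;
import se.walkercrou.places.exception.GooglePlacesException;

public final class DetailFetcher {
    private DetailFetcher() {}

    public static List<Place> fetchDetails(List<Place> places, long pauseMillis) {
        List<Place> detailed = new ArrayList<>();
        for (Place place : places) {
            try {
                detailed.add(place.getDetails()); // one HTTP request per place
            } catch (GooglePlacesException ex) {
                System.out.println("Skipping place: " + ex.getMessage());
            }
            try {
                Thread.sleep(pauseMillis); // crude rate limiting between requests
            } catch (InterruptedException ie) {
                Thread.currentThread().interrupt();
                break;
            }
        }
        return detailed;
    }
}
Inside exploreGrid, the commented "for" loop would become lugares.addAll(DetailFetcher.fetchDetails(places, 200)); if the hang is a rate-limit block inside the HTTP client, the pause should make that visible.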
This is the entire source code for the Java file.
package gephifyer;
import java.awt.Color;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import org.gephi.data.attributes.api.AttributeColumn;
import org.gephi.data.attributes.api.AttributeController;
import org.gephi.data.attributes.api.AttributeModel;
import org.gephi.graph.api.DirectedGraph;
import org.gephi.graph.api.GraphController;
import org.gephi.graph.api.GraphModel;
import org.gephi.io.exporter.api.ExportController;
import org.gephi.io.importer.api.Container;
import org.gephi.io.importer.api.EdgeDefault;
import org.gephi.io.importer.api.ImportController;
import org.gephi.io.importer.spi.FileImporter;
import org.gephi.io.processor.plugin.DefaultProcessor;
import org.gephi.partition.api.Partition;
import org.gephi.partition.api.PartitionController;
import org.gephi.partition.plugin.NodeColorTransformer;
import org.gephi.preview.api.PreviewController;
import org.gephi.preview.api.PreviewModel;
import org.gephi.preview.api.PreviewProperty;
import org.gephi.preview.types.DependantOriginalColor;
import org.gephi.project.api.ProjectController;
import org.gephi.project.api.Workspace;
import org.gephi.ranking.api.Ranking;
import org.gephi.ranking.api.RankingController;
import org.gephi.ranking.plugin.transformer.AbstractSizeTransformer;
import org.gephi.statistics.plugin.Modularity;
import org.openide.util.Lookup;
import org.gephi.layout.plugin.force.StepDisplacement;
import org.gephi.layout.plugin.force.yifanHu.YifanHu;
import org.gephi.layout.plugin.force.yifanHu.YifanHuLayout;
import org.gephi.layout.plugin.openord.*;
public class Gephifyer {
public void doStuff(String[] args)
{
String filename = new String();
try{
filename = args[0];
} catch (ArrayIndexOutOfBoundsException ex) {
System.out.println("Supply the subreddit name as the argument.");
System.exit(0);
}
ProjectController pc = Lookup.getDefault().lookup(ProjectController.class);
pc.newProject();
Workspace workspace = pc.getCurrentWorkspace();
ImportController importController = Lookup.getDefault().lookup(ImportController.class);
Container container;
try{
File file = new File(filename + ".csv");
//File file = new File(getClass().getResource("askscience.csv").toURI());
container = importController.importFile(file);
container.getLoader().setEdgeDefault(EdgeDefault.DIRECTED);
container.setAllowAutoNode(false); // don't create missing nodes
} catch (Exception ex) {
ex.printStackTrace();
return;
}
// Append imported data to graph api
importController.process(container, new DefaultProcessor(), workspace);
GraphModel graphModel = Lookup.getDefault().lookup(GraphController.class).getModel();
DirectedGraph directedGraph = graphModel.getDirectedGraph();
// Now let's manipulate the graph api, which stores / serves graphs
System.out.println("Nodes: " + directedGraph.getNodeCount() + "\nEdges: " + directedGraph.getEdgeCount());
//Run OpenOrd.
//OpenOrdLayout layout = new OpenOrdLayout(null);
YifanHuLayout layout = new YifanHuLayout(null, new StepDisplacement(0.95f));
layout.setGraphModel(graphModel);
layout.resetPropertiesValues();
layout.initAlgo();
layout.goAlgo();
while (layout.canAlgo()) // This is only possible because OpenOrd has a finite number of iterations.
{
layout.goAlgo();
}
AttributeModel attributemodel = Lookup.getDefault().lookup(AttributeController.class).getModel();
// Get modularity for coloring
Modularity modularity = new Modularity();
modularity.setUseWeight(true);
modularity.setRandom(true);
modularity.setResolution(1.0);
modularity.execute(graphModel, attributemodel);
// Partition with modularity
AttributeColumn modcol = attributemodel.getNodeTable().getColumn(Modularity.MODULARITY_CLASS);
PartitionController partitionController = Lookup.getDefault().lookup(PartitionController.class);
Partition p = partitionController.buildPartition(modcol, directedGraph);
NodeColorTransformer nodeColorTransformer = new NodeColorTransformer();
nodeColorTransformer.randomizeColors(p);
partitionController.transform(p, nodeColorTransformer);
// Ranking
RankingController rankingController = Lookup.getDefault().lookup(RankingController.class);
Ranking degreeRanking = rankingController.getModel().getRanking(Ranking.NODE_ELEMENT, Ranking.INDEGREE_RANKING);
AbstractSizeTransformer sizeTransformer = (AbstractSizeTransformer) rankingController.getModel().getTransformer(Ranking.NODE_ELEMENT, org.gephi.ranking.api.Transformer.RENDERABLE_SIZE);
sizeTransformer.setMinSize(5.0f);
sizeTransformer.setMaxSize(40.0f);
rankingController.transform(degreeRanking,sizeTransformer);
// Finally, the preview model
PreviewController previewController = Lookup.getDefault().lookup(PreviewController.class);
PreviewModel previewModel = previewController.getModel();
previewModel.getProperties().putValue(PreviewProperty.SHOW_NODE_LABELS, Boolean.TRUE);
previewModel.getProperties().putValue(PreviewProperty.NODE_LABEL_COLOR, new DependantOriginalColor(Color.BLACK));
previewModel.getProperties().putValue(PreviewProperty.NODE_LABEL_FONT, previewModel.getProperties().getFontValue(PreviewProperty.NODE_LABEL_FONT).deriveFont(8f)); // 8f derives an 8pt font; an int argument would change the style instead
previewModel.getProperties().putValue(PreviewProperty.EDGE_CURVED, Boolean.FALSE);
previewModel.getProperties().putValue(PreviewProperty.EDGE_OPACITY, 50);
previewModel.getProperties().putValue(PreviewProperty.EDGE_RADIUS, 10f);
previewModel.getProperties().putValue(PreviewProperty.BACKGROUND_COLOR, Color.TRANSLUCENT);
previewController.refreshPreview();
System.out.println("starting export");
ExportController ec = Lookup.getDefault().lookup(ExportController.class);
try{
ec.exportFile(new File(filename + ".svg"));
}
catch (IOException ex){
ex.printStackTrace();
return;
}
System.out.println("Done.");
}
public static void main(String[] args)
{
Gephifyer g = new Gephifyer();
g.doStuff(args);
}
}
At its heart, it's the various demos' code cobbled together to do what I want it to do.
I expect a graph that looks like this SVG file, but the result is this SVG file. That is, the problem is that the above code yields a graph where the arrows aren't fully connected to the nodes, making it look a bit messy. I can't for the life of me tell where in the code that is happening, though I guess it would be in the preview model part.
previewModel.getProperties().putValue(PreviewProperty.EDGE_RADIUS, 10f); sets the distance of the arrows from the node.
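So if the arrows should touch the nodes again, the direct fix suggested by that property is to shrink or remove the offset; a one-line change to the preview block above:
// No offset between edges/arrows and the node border.
previewModel.getProperties().putValue(PreviewProperty.EDGE_RADIUS, 0f);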
I have two classes, ExistInsert.java and TryExist.java. The complete code for ExistInsert is given below:
package tryexist;
import java.util.ArrayList;
import java.util.List;
import org.exist.xmldb.XQueryService;
import org.jfree.chart.ChartFactory;
import org.jfree.chart.ChartFrame;
import org.jfree.chart.JFreeChart;
import org.jfree.chart.plot.PlotOrientation;
import org.jfree.data.category.DefaultCategoryDataset;
import org.xmldb.api.DatabaseManager;
import org.xmldb.api.base.Collection;
import org.xmldb.api.base.Database;
import org.xmldb.api.base.Resource;
import org.xmldb.api.base.ResourceIterator;
import org.xmldb.api.base.ResourceSet;
public class ExistInsert {
public static String URI = "xmldb:exist://localhost:8899/exist/xmlrpc";
public static String driver = "org.exist.xmldb.DatabaseImpl";
public static List mylist = new ArrayList();
public List insert_data(String xquery){
try{
Class c1 = Class.forName(driver);
Database database=(Database) c1.newInstance();
String collectionPath= "/db";
DatabaseManager.registerDatabase(database);
Collection col=DatabaseManager.getCollection(URI+collectionPath);
XQueryService service = (XQueryService) col.getService("XQueryService","1.0");
service.setProperty("indent", "yes");
ResourceSet result = service.query(xquery);
ResourceIterator i = result.getIterator();
while(i.hasMoreResources()){
Resource r =i.nextResource();
mylist.add(((String)r.getContent()));
}
}
catch(Exception e){
System.out.println(e);
}
return mylist;
}
public void draw_bar(List values, List years ){
try{
//DefaultPieDataset data = new DefaultPieDataset();
DefaultCategoryDataset dataset = new DefaultCategoryDataset();
for(int j=0;j<values.size();j++){
dataset.addValue(); // incomplete: addValue needs (value, rowKey, columnKey)
}
//JFreeChart chart = ChartFactory.createPieChart("TEST PEICHART", data, true, true, Locale.ENGLISH);
JFreeChart chart2 = ChartFactory.createLineChart("Assets", "X","Y",dataset , PlotOrientation.VERTICAL, true, true, true);
ChartFrame frame = new ChartFrame("TEST", chart2);
frame.setVisible(true);
frame.setSize(500, 500);
}
catch(Exception e){
System.out.println(e);
}
}
}
Here the method insert_data executes an XQuery and returns the result as a list of strings. The method draw_bar draws a bar chart using the values and years lists passed as arguments. The main problem I faced was converting the List elements into the Comparable type that dataset.addValue() requires. In my main program TryExist.java I have:
package tryexist;
import java.util.ArrayList;
import java.util.List;
public class Tryexist {
public static void main(String[] args) throws Exception{
ExistInsert exist = new ExistInsert();
String query = "Some query Here"
List resp = exist.insert_data(query);
List years = new ArrayList();
for (int i=2060;i<=2064;i++){
years.add(i);
}
System.out.println(years);
System.out.println(resp);
exist.draw_bar(resp,years);
}
}
Executing the query returns years and resp as [2060, 2061, 2062, 2063, 2064] and [32905657, 3091102752, 4756935449, 7954664475, 11668355950] respectively. How, then, do I write the dataset.addValue() call in ExistInsert.java so that I can pass the resp and years values obtained above into draw_bar and make a bar chart from the data?
A complete example using DefaultCategoryDataset, BarChartDemo1, is included in the JFreeChart distribution. The example uses instances of String as column and row keys, but any Comparable can be used.
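Applied to the lists above, the loop in draw_bar could look like the sketch below; it assumes every element of values parses as a number, pairs each value with the year at the same index, and uses "Assets" as an arbitrary row key:
// DefaultCategoryDataset.addValue(Number value, Comparable rowKey, Comparable columnKey)
for (int j = 0; j < values.size() && j < years.size(); j++) {
    Double value = Double.parseDouble(values.get(j).toString());
    dataset.addValue(value, "Assets", years.get(j).toString());
}
Both the row key ("Assets") and the column key (the year rendered as a String) satisfy the Comparable requirement, which is all addValue needs.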