Is this possible with Hibernate Criteria? - java

Well, I have the following question: to perform the join between the tables I set an alias, and I need a decode on the joined column. I used that alias in the SQL fragment, but it fails because the generated query does not use the alias I set, only the one Hibernate generates itself, since Criteria does not otherwise recognize pure SQL.
How do I get the name that Criteria defines for the tables? I'm using sqlGroupProjection; feel free to suggest another way.
Criteria criteria = dao.getSessao().createCriteria(Chamado.class, "c");
criteria.createAlias("c.tramites", "t").setFetchMode("t", FetchMode.JOIN);
// projection list that will receive the aggregated columns
ProjectionList projetos = Projections.projectionList();
projetos.add(Projections.rowCount(), "qtd");
criteria.add(Restrictions.between("t.dataAbertura",
        Formata.getDataD(dataInicio, "dd/MM/yyyy"),
        Formata.getDataD(dataFim, "dd/MM/yyyy")));
projetos.add(Projections.sqlGroupProjection(
        "decode(t.cod_estado, 0, 0, 1, 1, 2, 1, 3, 2, 4, 1, 5, 3) as COD_ESTADO",
        "decode(t.cod_estado, 0, 0, 1, 1, 2, 1, 3, 2, 4, 1, 5, 3)",
        new String[]{"COD_ESTADO"},
        new Type[]{Hibernate.INTEGER}));
criteria.setProjection(projetos);
List<Relatorio> relatorios = criteria
        .setResultTransformer(Transformers.aliasToBean(Relatorio.class))
        .list();
SQL generated by the Criteria (note that the decode fragment still references t, while the alias the query actually uses for the join is t1_):
select count(*) as y0_,
decode(t.cod_estado, 0, 0, 1, 1, 2, 1, 3, 2, 4, 1, 5, 3) as COD_ESTADO
from CHAMADOS this_
inner join TRAMITES t1_ on this_.COD_CHAMADO = t1_.COD_CHAMADO
where t1_.DT_ABERTURA between ? and ?
group by decode(t.cod_estado, 0, 0, 1, 1, 2, 1, 3, 2, 4, 1, 5, 3)
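For what it's worth, a hedged sketch of the usual workaround: Hibernate substitutes only the root entity's SQL alias, via the {alias} placeholder, in sqlProjection/sqlGroupProjection; there is no supported placeholder for a joined association, so the fragment has to hard-code the alias Hibernate generates for the join (t1_ in the SQL above), which is brittle:
// Hedged sketch: "t1_" is the alias Hibernate happened to generate for the
// TRAMITES join above; hard-coding it is a common but fragile workaround,
// not a supported API.
projetos.add(Projections.sqlGroupProjection(
        "decode(t1_.cod_estado, 0, 0, 1, 1, 2, 1, 3, 2, 4, 1, 5, 3) as COD_ESTADO",
        "decode(t1_.cod_estado, 0, 0, 1, 1, 2, 1, 3, 2, 4, 1, 5, 3)",
        new String[]{"COD_ESTADO"},
        new Type[]{Hibernate.INTEGER}));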

Related

How to remove curly braces before writing JSON object in java

I am trying to make a simple JSON-DB in Java, since the current library on Maven is horrendously overcomplicated. I have this method that takes a key and a value, puts them into a JSONObject, and writes it to my database.json file.
public static void put(String path, String key, Object[] value) {
    // creates new JSONObject where data will be stored
    JSONObject jsondb = new JSONObject();
    try {
        // adds key and value to JSONObject
        jsondb.put(key, value);
    } catch (JSONException e) {
        e.printStackTrace();
    } // end try-catch
    try (PrintWriter out = new PrintWriter(new FileWriter(path, true))) {
        out.write(jsondb.toString());
        out.write(',');
    } catch (Exception e) {
        e.printStackTrace();
    } // end try-catch
} // end put()
Here is my main method where I write some data to the file
public static void main(String[] args) throws Exception {
    String path = "app.json";
    Object[] amogus = {"Hello", 1, 2, 3, 4, 5};
    Object[] amogus1 = {"Hello", 1, 2, 3, 4, 5, 6, 7, 8, 9, 0};
    JsonDB db = new JsonDB(path);
    db.put(path, "arr", amogus);
    db.put(path, "arr1", amogus1);
}
What happens is that it saves each entry in its own set of curly braces. So when I write to the file more than once, as I do in my main method, it saves it like this:
{"arr": ["Hello", 1, 2, 3, 4, 5]}{"arr1": ["Hello", 1, 2, 3, 4, 5, 6, 7, 8, 9, 0]}
This causes VSCode to flag an error, since this isn't valid JSON. How would I make the method remove the curly braces and add commas to make the above valid JSON? I can't seem to find the documentation for this library (the library is org.json on Maven).
This is a syntax issue. Your JSON is not valid, because JSON syntax wants all of your data inside one set of curly braces, like this:
{
"arr": ["Hello", 1, 2, 3, 4, 5],
"arr1": ["Hello", 1, 2, 3, 4, 5, 6, 7, 8, 9, 0]
}
instead of this, which is not valid JSON:
{"arr": ["Hello", 1, 2, 3, 4, 5]}
{"arr1": ["Hello", 1, 2, 3, 4, 5, 6, 7, 8, 9, 0]}
Build your data in a single map object and let your JSON library convert (serialize) that object into valid JSON.
I solved this problem by reading all of the content in the file, appending the JSON object that I wanted to write, and then writing it all back at once.
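A minimal sketch of that read-modify-write approach, assuming org.json and Java 11+ file APIs (the class name and method signature here are illustrative):
import org.json.JSONArray;
import org.json.JSONObject;

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class JsonDb {
    public static void put(String path, String key, Object[] value) throws IOException {
        Path file = Path.of(path);
        // Load the existing document, or start fresh if the file is absent or empty.
        JSONObject root = (Files.exists(file) && Files.size(file) > 0)
                ? new JSONObject(Files.readString(file))
                : new JSONObject();
        // Merge the new entry into the single top-level object.
        root.put(key, new JSONArray(value));
        // Rewrite the whole file so it always remains one valid JSON object.
        Files.writeString(file, root.toString(2));
    }
}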

How to insert data into excel table using Apache Calcite?

I am using Apache Calcite to read data from Excel.
The Excel file has a 'salary' table with the following fields:
Integer id
Integer emp_id
Integer salary
I have the following model.json:
{
    "version": "1.0",
    "defaultSchema": "excelSchema",
    "schemas": [{
        "name": "excelSchema",
        "type": "custom",
        "factory": "com.syncnicia.testbais.excel.ExcelSchemaFactory",
        "operand": {
            "directory": "sheets/"
        }
    }]
}
This is my Calcite connection code:
Connection connection = DriverManager.getConnection("jdbc:calcite:model=src/main/resources/model.json");
CalciteConnection calciteConnection = connection.unwrap(CalciteConnection.class);
I am able to get data from the above connection using the following code:
Statement st1 = calciteConnection.createStatement();
ResultSet resultSet = st1.executeQuery("select * from \"excelSchema\".\"salary\"");
System.out.println("SALARY DATA IS");
while (resultSet.next()) {
    System.out.println("SALARY data is : ");
    for (int i2 = 1; i2 <= resultSet.getMetaData().getColumnCount(); i2++) {
        System.out.print(resultSet.getMetaData().getColumnLabel(i2) + " = " + resultSet.getObject(i2) + ", ");
    }
}
The above code works fine and shows all the entries from the salary table. But when I try to insert into the same table (i.e. the Excel sheet) using the following code:
String insertSql = "INSERT INTO \"excelSchema\".\"salary\" values(5,345,0909944)";
Statement insertSt = calciteConnection.createStatement();
boolean insertResult = insertSt.execute(insertSql);
System.out.println("InsertResult is "+insertResult);
I get the following exception:
Exception in execute qry Error while executing SQL "INSERT INTO "employeeSchema"."salary" values(5,345,0909944)": There are not enough rules to produce a node with desired properties: convention=ENUMERABLE, sort=[].
Missing conversion is LogicalTableModify[convention: NONE -> ENUMERABLE]
There is 1 empty subset: rel#302:Subset#1.ENUMERABLE.[], the relevant part of the original plan is as follows
299:LogicalTableModify(table=[[employeeSchema, salary]], operation=[INSERT], flattened=[false])
293:LogicalValues(subset=[rel#298:Subset#0.NONE.[]], tuples=[[{ 5, 345, 909944 }]])
Root: rel#302:Subset#1.ENUMERABLE.[]
Original rel:
LogicalTableModify(table=[[employeeSchema, salary]], operation=[INSERT], flattened=[false]): rowcount = 1.0, cumulative cost = {2.0 rows, 1.0 cpu, 0.0 io}, id = 296
LogicalValues(tuples=[[{ 5, 345, 909944 }]]): rowcount = 1.0, cumulative cost = {1.0 rows, 1.0 cpu, 0.0 io}, id = 293
Sets:
Set#0, type: RecordType(INTEGER id, INTEGER emp_id, INTEGER salary)
rel#298:Subset#0.NONE.[], best=null, importance=0.81
rel#293:LogicalValues.NONE.[[0, 1, 2], [1, 2], [2]](type=RecordType(INTEGER id, INTEGER emp_id, INTEGER salary),tuples=[{ 5, 345, 909944 }]), rowcount=1.0, cumulative cost={inf}
rel#305:Subset#0.ENUMERABLE.[], best=rel#304, importance=0.405
rel#304:EnumerableValues.ENUMERABLE.[[0, 1, 2], [1, 2], [2]](type=RecordType(INTEGER id, INTEGER emp_id, INTEGER salary),tuples=[{ 5, 345, 909944 }]), rowcount=1.0, cumulative cost={1.0 rows, 1.0 cpu, 0.0 io}
Set#1, type: RecordType(BIGINT ROWCOUNT)
rel#300:Subset#1.NONE.[], best=null, importance=0.9
rel#299:LogicalTableModify.NONE.[](input=RelSubset#298,table=[employeeSchema, salary],operation=INSERT,flattened=false), rowcount=1.0, cumulative cost={inf}
rel#302:Subset#1.ENUMERABLE.[], best=null, importance=1.0
rel#303:AbstractConverter.ENUMERABLE.[](input=RelSubset#300,convention=ENUMERABLE,sort=[]), rowcount=1.0, cumulative cost={inf}
Graphviz:
digraph G {
root [style=filled,label="Root"];
subgraph cluster0{
label="Set 0 RecordType(INTEGER id, INTEGER emp_id, INTEGER salary)";
rel293 [label="rel#293:LogicalValues.NONE.[[0, 1, 2], [1, 2], [2]]\ntype=RecordType(INTEGER id, INTEGER emp_id, INTEGER salary),tuples=[{ 5, 345, 909944 }]\nrows=1.0, cost={inf}",shape=box]
rel304 [label="rel#304:EnumerableValues.ENUMERABLE.[[0, 1, 2], [1, 2], [2]]\ntype=RecordType(INTEGER id, INTEGER emp_id, INTEGER salary),tuples=[{ 5, 345, 909944 }]\nrows=1.0, cost={1.0 rows, 1.0 cpu, 0.0 io}",color=blue,shape=box]
subset298 [label="rel#298:Subset#0.NONE.[]"]
subset305 [label="rel#305:Subset#0.ENUMERABLE.[]"]
}
subgraph cluster1{
label="Set 1 RecordType(BIGINT ROWCOUNT)";
rel299 [label="rel#299:LogicalTableModify\ninput=RelSubset#298,table=[employeeSchema, salary],operation=INSERT,flattened=false\nrows=1.0, cost={inf}",shape=box]
rel303 [label="rel#303:AbstractConverter\ninput=RelSubset#300,convention=ENUMERABLE,sort=[]\nrows=1.0, cost={inf}",shape=box]
subset300 [label="rel#300:Subset#1.NONE.[]"]
subset302 [label="rel#302:Subset#1.ENUMERABLE.[]",color=red]
}
root -> subset302;
subset298 -> rel293;
subset305 -> rel304[color=blue];
subset300 -> rel299; rel299 -> subset298;
subset302 -> rel303; rel303 -> subset300;
} caused by org.apache.calcite.plan.RelOptPlanner$CannotPlanException: There are not enough rules to produce a node with desired properties: convention=ENUMERABLE, sort=[].
Missing conversion is LogicalTableModify[convention: NONE -> ENUMERABLE]
[the exception's cause repeats the same plan dump and Graphviz output shown above]
Please help me with how to insert data into Excel using Apache Calcite.
Unfortunately Calcite doesn't support insertion for most of the available adapters. (I believe only for JDBC data sources at the moment.)
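For reference, a hedged sketch of the case that does work: when the schema is backed by Calcite's JDBC adapter, an INSERT can be planned (as a JdbcTableModify) and pushed down to the underlying database. The model file name and schema name below are illustrative assumptions:
// Hedged sketch: JDBC-backed tables are modifiable, unlike the Excel ones.
// "jdbc-model.json" and the "db" schema are hypothetical names.
Connection connection = DriverManager.getConnection(
        "jdbc:calcite:model=src/main/resources/jdbc-model.json");
try (Statement st = connection.createStatement()) {
    // Translated to a JdbcTableModify and executed by the backing database.
    st.executeUpdate("INSERT INTO \"db\".\"salary\" VALUES (5, 345, 909944)");
}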

Trouble deserializing Json from Google Analytics

Hi, I'm attempting to deserializer.deserialize this data from Google Analytics:
[[/s417, 14945, 93.17823577906019], [/s413, 5996, 72.57178438000356],
[/s417/, 3157, 25.690567351200837], [/s420, 2985, 44.12472727272727],
[/s418, 2540, 64.60275150472916], [/s416, 2504, 69.72643979057591],
[/s415, 2379, 44.69660861594867], [/s422, 2164, 57.33786505538772],
[/s421, 2053, 48.18852894317578], [/s414, 1839, 93.22588376273218],
[/s412, 1731, 54.8431860609832], [/s411, 1462, 71.26186830015314],
[/s419, 1423, 51.88551401869159], [/, 63, 11.303571428571429],
[/s420/, 22, 0.3333333333333333], [/s413/, 21, 7.947368421052632],
[/s416/, 16, 96.0], [/s421/, 15, 0.06666666666666667], [/s411/, 13,
111.66666666666667], [/s422/, 13, 0.07692307692307693], [/g150, 11, 0.09090909090909091], [/s414/, 10, 2.0], [/s418/, 10, 0.4444444444444444], [/s415/, 9, 0.2222222222222222], [/s412/, 8, 0.6666666666666666], [/s45, 6, 81.0], [/s164, 5, 45.25], [/s28, 5, 16.2], [/s39, 5, 25.2], [/s27, 4, 59.5], [/s29, 4, 26.5], [/s365, 3, 31.666666666666668], [/s506, 3, 23.333333333333332], [/s1139, 2, 30.5], [/s296, 2, 11.0], [/s311, 2, 13.5], [/s35, 2, 55.0], [/s363, 2, 15.5], [/s364, 2, 17.5], [/s419/, 2, 0.0], [/s44, 2, 85.5], [/s482, 2, 28.5], [/s49, 2, 29.5], [/s9, 2, 77.0], [/s146, 1, 13.0], [/s228, 1, 223.0], [/s229, 1, 54.0], [/s231, 1, 0.0], [/s30, 1, 83.0], [/s312, 1, 15.0], [/s313, 1, 155.0], [/s316, 1, 14.0], [/s340, 1, 22.0], [/s350, 1, 0.0], [/s362, 1, 24.0], [/s43, 1, 54.0], [/s442, 1, 87.0], [/s465,
1, 14.0], [/s468, 1, 67.0], [/s47, 1, 41.0], [/s71, 1, 16.0], [/s72,
1, 16.0], [/s87, 1, 48.0], [/s147, 0, 0.0], [/s417, 0, 0.0]]
With this:
@Immutable
private static JSONDeserializer<List<List<String>>> deserializer = new JSONDeserializer<List<List<String>>>();
And it's failing silently on the deserialization.
The only error I'm getting is from the XHTML:
com.sun.faces.context.PartialViewContextImpl$PhaseAwareVisitCallback
visit
SEVERE: javax.el.ELException: /views/guide/edit.xhtml #257,102 value="#{GuideEditController.visitsByScene}": flexjson.JSONException:
Missing value at character 2
Any clues?
marekful had the right idea:
replaceAll("[^\\d,.\\[\\]]+", "") to remove the offending characters (everything except digits, dots, commas, and square brackets) did the trick.
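As a sketch of an alternative cleanup, assuming flexjson: instead of stripping the unquoted path tokens, you can quote them, which turns the payload into valid JSON before deserializing. The regex and class below are illustrative:
// Hedged sketch: wrap bare path tokens such as /s417 or /s420/ in quotes so
// the Google Analytics rows parse as valid JSON. The regex is an assumption.
import flexjson.JSONDeserializer;
import java.util.List;

public class GaRowsParser {
    public static List<List<Object>> parse(String raw) {
        String cleaned = raw.replaceAll("(/[\\w/]*)", "\"$1\"");
        return new JSONDeserializer<List<List<Object>>>().deserialize(cleaned);
    }
}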

How to add categories in ScatterWithSmoothLines type chart into java aspose.slides api?

Hi, I am trying Aspose.Slides for Java for the first time, to create a chart in a PPTX file. I need some double values on the Y axis and some String values on the X axis, but I am not able to do this because I am very confused by this library, even though I have studied it a lot.
My code is:
Presentation pres = new Presentation();
ISlide slide = pres.getSlides().get_Item(0);
//Creating the default chart
IChart chart = slide.getShapes().addChart(ChartType.ScatterWithSmoothLines, 0, 0, 400, 400);
//Getting the default chart data worksheet index
int defaultWorksheetIndex = 0;
//Getting the chart data worksheet
IChartDataWorkbook fact = chart.getChartData().getChartDataWorkbook();
//Delete demo series
chart.getChartData().getSeries().clear();
//Add new series
chart.getChartData().getSeries().add(fact.getCell(defaultWorksheetIndex, 1, 1, "Series 1"), chart.getType());
chart.getChartData().getSeries().add(fact.getCell(defaultWorksheetIndex, 1, 3, "Series 2"), chart.getType());
//Adding new categories
chart.getChartData().getCategories().add(fact.getCell(defaultWorksheetIndex, 1, 0, "Category 1"));
chart.getChartData().getCategories().add(fact.getCell(defaultWorksheetIndex, 2, 0, "Category 2"));
chart.getChartData().getCategories().add(fact.getCell(defaultWorksheetIndex, 3, 0, "Category 3"));
chart.getChartData().getCategories().add(fact.getCell(defaultWorksheetIndex, 4, 0, "Category 4"));
chart.getChartData().getCategories().add(fact.getCell(defaultWorksheetIndex, 5, 0, "Category 5"));
//Take first chart series
IChartSeries series = chart.getChartData().getSeries().get_Item(0);
//Add new point (1:3) there.
series.getDataPoints().addDataPointForScatterSeries(fact.getCell(defaultWorksheetIndex, 2, 1, 1), fact.getCell(defaultWorksheetIndex, 2, 2, 3));
//Add new point (2:10)
series.getDataPoints().addDataPointForScatterSeries(fact.getCell(defaultWorksheetIndex, 3, 1, 2), fact.getCell(defaultWorksheetIndex, 3, 2, 10));
//Edit the type of series
series.setType (ChartType.ScatterWithStraightLinesAndMarkers);
//Changing the chart series marker
series.getMarker().setSize(10);
series.getMarker().setSymbol(MarkerStyleType.Star);
//Take second chart series
series = chart.getChartData().getSeries().get_Item(1);
//Add new point (5:2) there.
series.getDataPoints().addDataPointForScatterSeries(fact.getCell(defaultWorksheetIndex, 2, 3, 5), fact.getCell(defaultWorksheetIndex, 2, 4, 2));
//Add new point (3:1)
series.getDataPoints().addDataPointForScatterSeries(fact.getCell(defaultWorksheetIndex, 3, 3, 3), fact.getCell(defaultWorksheetIndex, 3, 4, 1));
//Add new point (2:2)
series.getDataPoints().addDataPointForScatterSeries(fact.getCell(defaultWorksheetIndex, 4, 3, 2), fact.getCell(defaultWorksheetIndex, 4, 4, 2));
//Add new point (5:1)
series.getDataPoints().addDataPointForScatterSeries(fact.getCell(defaultWorksheetIndex, 5, 3, 5), fact.getCell(defaultWorksheetIndex, 5, 4, 1));
//Changing the chart series marker
series.getMarker().setSize(10);
series.getMarker().setSymbol(MarkerStyleType.Circle);
pres.save("/home/echasro/Desktop/TODAY/AsposeScatterChart.pptx", SaveFormat.Pptx);
The slide I am creating looks like this: http://i.stack.imgur.com/IFfIQ.jpg
But I need to add categories like 12/12/2014, 12/13/2014, 14/12/2014, etc. on the X axis.
I need this type of chart in the PPTX file: link of required format file.
Please suggest any ideas for how to accomplish this.
Thanks for reading.
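A hedged sketch of one possible direction, not a confirmed answer: scatter charts plot numeric X values, so date labels on the X axis are normally achieved with a category-based chart type such as a line chart, where the category cells hold the date strings. The cell positions and Y values below are illustrative:
// Hedged sketch: a category-based chart shows text labels on the X axis.
IChart chart = slide.getShapes().addChart(ChartType.LineWithMarkers, 0, 0, 400, 400);
IChartDataWorkbook wb = chart.getChartData().getChartDataWorkbook();
chart.getChartData().getSeries().clear();
chart.getChartData().getCategories().clear();
// Date strings become the X-axis category labels.
chart.getChartData().getCategories().add(wb.getCell(0, 1, 0, "12/12/2014"));
chart.getChartData().getCategories().add(wb.getCell(0, 2, 0, "12/13/2014"));
chart.getChartData().getCategories().add(wb.getCell(0, 3, 0, "12/14/2014"));
IChartSeries series = chart.getChartData().getSeries()
        .add(wb.getCell(0, 0, 1, "Series 1"), chart.getType());
// One Y value per category, matched by row.
series.getDataPoints().addDataPointForLineSeries(wb.getCell(0, 1, 1, 20.5));
series.getDataPoints().addDataPointForLineSeries(wb.getCell(0, 2, 1, 72.3));
series.getDataPoints().addDataPointForLineSeries(wb.getCell(0, 3, 1, 44.1));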

I get different results reading the same file from the file system and from inside a jar

I have a file that my Java application takes as input, which I read 6 bytes at a time. When I read it off the file system everything works fine. If I build everything into a jar, the first 4868 reads work fine, but after that it starts returning the byte arrays in the wrong order and also ends up having read more data by the end.
Here is a simplified version of my code which reproduces the problem:
InputStream inputStream = this.getClass().getResourceAsStream(filePath);
byte[] byteArray = new byte[6];
int counter = 0;
while (inputStream.read(byteArray) != -1)
{
    counter++;
    System.out.println("Read #" + counter + ": " + Arrays.toString(byteArray));
}
System.out.println("Done.");
This is the [abbreviated] output I get when reading off of the file system:
...
Read #4867: [5, 0, 57, 7, 113, -26]
Read #4868: [2, 0, 62, 7, 114, -26]
Read #4869: [2, 0, 68, 7, 115, -26]
Read #4870: [3, 0, 75, 7, 116, -26]
Read #4871: [2, 0, 83, 7, 117, -26]
...
Read #219687: [1, 0, 4, -8, 67, 33]
Read #219688: [1, 0, 2, -8, 68, 33]
Read #219689: [5, 0, 1, -8, 67, 33]
Done.
And here is what I get reading from a jar:
...
Read #4867: [5, 0, 57, 7, 113, -26]
Read #4868: [2, 0, 62, 7, 113, -26] //everything is fine up to this point
Read #4869: [7, 114, -26, 2, 0, 68]
Read #4870: [7, 115, -26, 3, 0, 75]
Read #4871: [7, 116, -26, 2, 0, 83]
...
Read #219687: [95, 33, 1, 0, 78, -8]
Read #219688: [94, 33, 1, 0, 76, -8]
Read #219689: [95, 33, 1, 0, 74, -8]
...
Read #219723: [67, 33, 1, 0, 2, -8]
Read #219724: [68, 33, 5, 0, 1, -8]
Read #219725: [67, 33, 5, 0, 1, -8]
Done.
I unzipped the jar and confirmed that the files being read are identical, so what could cause the reader to return different results?
Your reading loop is wrong.
The inputStream.read() method returns the number of bytes it actually read. You have to check this number before processing the buffer.
When you read from a stream, the bytes do not necessarily all arrive at once. On some iterations of your loop you probably read only 4 of the expected 6 bytes, so the tail of the buffer still holds bytes from the previous read and the records come out shifted. This is also why the behavior differs: a resource inside a jar is read through a decompressing stream that frequently returns short reads, while a plain FileInputStream happens to fill the buffer every time.
If you are reading integers, I'd recommend wrapping your raw input stream in a Scanner or good old DataInputStream and reading the integers directly.
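A minimal sketch of a short-read-safe version of the loop, assuming the same 6-byte record layout; DataInputStream.readFully either fills the whole buffer or throws EOFException when the stream ends mid-record:
DataInputStream in = new DataInputStream(this.getClass().getResourceAsStream(filePath));
byte[] byteArray = new byte[6];
int counter = 0;
try {
    while (true) {
        in.readFully(byteArray);   // blocks until all 6 bytes are in the buffer
        counter++;
        System.out.println("Read #" + counter + ": " + Arrays.toString(byteArray));
    }
} catch (EOFException end) {
    // no complete 6-byte record left in the stream
}
System.out.println("Done.");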
