I need to add a field to the database which will record a sequence number related to that (foreign) id.
Example table data (current):
ID  ACCOUNT  some_other_stuff
1   1        ...
2   1        ...
3   1        ...
4   2        ...
5   2        ...
6   1        ...
I need to add a sequenceid column which increments separately for each account, achieving:
ID  ACCOUNT  SEQ  some_other_stuff
1   1        1    ...
2   1        2    ...
3   1        3    ...
4   2        1    ...
5   2        2    ...
6   1        4    ...
Note that the sequence is related to account.
Unfortunately this cannot be done with JPA and Hibernate out of the box. The only solution is to do it manually in the service layer. You can use @GeneratedValue on a column, but that relies on the database to provide the value, and you cannot create a custom sequence implementation and plug it into @GeneratedValue, because that annotation works only for the ID column.
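A minimal sketch of the manual, service-side approach, in plain Java. The in-memory counter here stands in for what a real service would do: run SELECT MAX(seq) FROM t WHERE account = ? inside the same transaction (ideally under a pessimistic lock so concurrent inserts for the same account cannot get duplicate numbers). Class and method names are illustrative.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Sketch of a per-account sequence assigned in the service layer.
// The counters map simulates the "SELECT MAX(seq) WHERE account = ?" lookup.
public class PerAccountSequence {
    private static final Map<Long, AtomicLong> counters = new ConcurrentHashMap<>();

    // Returns the next sequence number for the given account id,
    // starting at 1 for each account.
    public static long next(long accountId) {
        return counters.computeIfAbsent(accountId, k -> new AtomicLong(0))
                       .incrementAndGet();
    }

    public static void main(String[] args) {
        // Reproduces the example table: rows arrive for accounts 1,1,1,2,2,1,
        // so account 1 gets seq 1,2,3 then 4, and account 2 gets seq 1,2.
        long[] accounts = {1, 1, 1, 2, 2, 1};
        for (long acc : accounts) {
            System.out.println("account " + acc + " -> seq " + next(acc));
        }
    }
}
```

The same logic would sit in the service method that persists the entity (or in a @PrePersist callback), with the counter read from the database rather than memory.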
I am very new to the Cypher query language and I am working on relationships between nodes.
I have a CSV file containing multiple columns and 1000 rows.
The template of my table is:
cdrType  ANUMBER  BNUMBER  DURATION
2        123      456      10
2        890      456      5
2        123      666      2
2        123      709      7
2        345      789      20
I have used these commands to create nodes and property keys.
LOAD CSV WITH HEADERS FROM "file:///2.csv" AS ROW
CREATE (:ANUMBER {aNumber:ROW.aNumber} )
CREATE (:BNUMBER {bNumber:ROW.bNumber} )
Now I need to create relations between all rows in the table, and I think a FOREACH loop is best in my case. I created this query but it gives me an error. The query is:
MATCH (a:ANUMBER),(b:BNUMBER)
FOREACH(i in RANGE(0, length(ANUMBER)) |
CREATE UNIQUE (ANUMBER[i])-[s:CALLED]->(BNUMBER[i]))
and the error is :
Invalid input '[': expected an identifier character, whitespace,
NodeLabel, a property map, ')' or a relationship pattern (line 3,
column 29 (offset: 100)) " CREATE UNIQUE
(a:ANUMBER[i])-[s:CALLED]->(b:BNUMBER[i]))"
I need a relation for every row, as in my case: 123 -CALLED-> 456, 890 -CALLED-> 456. I need a visual representation of this calling data showing which number called which one, and for that I need to create relations between all the rows.
Does anyone have an idea how to solve this?
What about:
LOAD CSV WITH HEADERS FROM "file:///2.csv" AS ROW
CREATE (a:ANUMBER {aNumber:ROW.aNumber} )
CREATE (b:BNUMBER {bNumber:ROW.bNumber} )
MERGE (a)-[:CALLED]->(b);
It's not more complex than that, IMO.
Hope this helps!
Regards,
Tom
I am very new to Spark. Below is the requirement I have been given:
1st RDD:
empno  first-name  last-name
0      fname       lname
1      fname1      lname1
2nd RDD:
empno  dept-no  dept-code
0      1        a
0      1        b
1      1        a
1      2        a
3rd RDD:
empno  history-no  address
0      1           xyz
0      2           abc
1      1           123
1      2           456
1      3           a12
I have to generate a file combining all the RDDs for each employee; the average employee count is 200k.
Desired output:
seg-start emp-0
seg-emp 0-fname-lname
seg-dept 0-1-a
seg-dept 0-1-b
seg-his 0-1-xyz
seg-his 0-2-abc
seg-end emp-0
seg-start emp-1
......
seg-end emp-1
How can I achieve this by combining the RDDs? Please note that the data is not written out as straightforwardly as shown here; we convert it to a business-valid format (e.g. e0xx5fname5lname becomes 0-fname-lname). The current batch program runs for hours to write the data, so I am thinking of using Spark to process this efficiently, and would appreciate help from the experts here.
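One common approach is to key each RDD by empno, cogroup the three of them, and format each employee's segments inside a map over the grouped values. The formatting step can be sketched in plain Java, with simple collections standing in for cogroup's iterables (names and the pre-formatted row strings here are illustrative):

```java
import java.util.Arrays;
import java.util.List;

// Sketch of the per-employee segment layout described in the question.
// In Spark, this logic would run per key after cogroup-ing the three RDDs
// keyed by empno.
public class EmployeeSegments {
    // emp is the formatted employee row; depts and hist are the formatted
    // department and history rows for the same empno.
    public static String format(int empno, String emp,
                                List<String> depts, List<String> hist) {
        StringBuilder sb = new StringBuilder();
        sb.append("seg-start emp-").append(empno).append('\n');
        sb.append("seg-emp ").append(emp).append('\n');
        for (String d : depts) sb.append("seg-dept ").append(d).append('\n');
        for (String h : hist) sb.append("seg-his ").append(h).append('\n');
        sb.append("seg-end emp-").append(empno);
        return sb.toString();
    }

    public static void main(String[] args) {
        // Employee 0 from the example tables.
        System.out.println(format(0, "0-fname-lname",
                Arrays.asList("0-1-a", "0-1-b"),
                Arrays.asList("0-1-xyz", "0-2-abc")));
    }
}
```

Because cogroup shuffles each RDD only once by empno, this avoids the row-by-row lookups that typically make such batch writers slow.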
I have a Spark DataFrame (oldDF) that looks like:
Id     | Category | Count
898989 | 5        | 12
676767 | 12       | 1
334344 | 3        | 2
676767 | 13       | 3
And I want to create a new DataFrame whose column names come from Category and whose values come from Count, grouped by Id.
The reason I can't (or would rather not) specify a schema is that the categories change a lot. Is there any way to do it dynamically?
An output I would like to see as Dataframe from the one above:
Id     | V3 | V5 | V12 | V13
898989 | 0  | 12 | 0   | 0
676767 | 0  | 0  | 1   | 3
334344 | 2  | 0  | 0   | 0
With Spark 1.6:
oldDf.groupBy("Id").pivot("Category").sum("Count")
You need to do your groupBy operation first; then you can apply a pivot operation as explained here.
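To make the dynamic behavior concrete, here is a plain-Java sketch of what groupBy("Id").pivot("Category").sum("Count") computes on the example data: one row per Id, one column per distinct category discovered in the data, and 0 for missing combinations. This is illustrative only; Spark does the same thing in a distributed way and names the output columns after the category values.

```java
import java.util.*;

// Plain-Java model of a groupBy + pivot + sum over (id, category, count)
// triples. The set of categories is discovered from the data, which is why
// no schema needs to be specified up front.
public class PivotSketch {
    // rows: each entry is {id, category, count}
    public static Map<Long, Map<Long, Long>> pivot(List<long[]> rows) {
        // First pass: collect the distinct categories (the dynamic columns).
        TreeSet<Long> categories = new TreeSet<>();
        for (long[] r : rows) categories.add(r[1]);
        // Second pass: one output row per id, zero-filled, then summed.
        Map<Long, Map<Long, Long>> out = new LinkedHashMap<>();
        for (long[] r : rows) {
            Map<Long, Long> cols = out.computeIfAbsent(r[0], k -> {
                Map<Long, Long> m = new LinkedHashMap<>();
                for (long c : categories) m.put(c, 0L);
                return m;
            });
            cols.merge(r[1], r[2], Long::sum);
        }
        return out;
    }

    // The example table from the question.
    public static Map<Long, Map<Long, Long>> example() {
        return pivot(Arrays.asList(
            new long[]{898989, 5, 12}, new long[]{676767, 12, 1},
            new long[]{334344, 3, 2}, new long[]{676767, 13, 3}));
    }
}
```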
I am starting to work with Weka in R and I got stuck at the first step. I converted my CSV file into an ARFF file using an online converter, but when I tried to read it into R I got the following error message.
require(RWeka)
A <- read.arff("Environmental variables all overviewxlsx.arff")
Error in .jnew("weka/core/Instances", .jcast(reader, "java/io/Reader")) :
java.io.IOException: no valid attribute type or invalid enumeration, read Token[[°C]], line 6
Does anyone have an idea that could help me?
Thanks!
P.S. The proper package (RWeka) is already installed.
Because read.arff() returns a data frame, you can skip the conversion step and use read.csv() directly.
train_arff<-read.arff(file.choose())
str(train_arff)
'data.frame': 14 obs. of 5 variables:
$ outlook : Factor w/ 3 levels "sunny","overcast",..: 1 1 2 3 3 3 2 1 1 3 ...
$ temperature: Factor w/ 3 levels "hot","mild","cool": 1 1 1 2 3 3 3 2 3 2 ...
$ humidity : Factor w/ 2 levels "high","normal": 1 1 1 1 2 2 2 1 2 2 ...
$ windy : logi FALSE TRUE FALSE FALSE FALSE TRUE ...
$ play : Factor w/ 2 levels "yes","no": 2 2 1 1 1 2 1 2 1 1 ...
train_csv<-read.csv(file.choose())
str(train_csv)
'data.frame': 14 obs. of 5 variables:
$ outlook : Factor w/ 3 levels "overcast","rainy",..: 3 3 1 2 2 2 1 3 3 2 ...
$ temperature: Factor w/ 3 levels "cool","hot","mild": 2 2 2 3 1 1 1 3 1 3 ...
$ humidity : Factor w/ 2 levels "high","normal": 1 1 1 1 2 2 2 1 2 2 ...
$ windy : logi FALSE TRUE FALSE FALSE FALSE TRUE ...
$ play : Factor w/ 2 levels "no","yes": 1 1 2 2 2 1 2 1 2 2 ...
Otherwise your .arff file should have this format
Here is sample table data which is dynamic.
ColId  Name       JobId  Instance
1      aaaaaaaaa  1      2dc757b
2      bbbbbbbbb  1      2dc757b
3      aaaaaaaaa  1      010dbb8
4      bbbbbbbbb  1      010dbb8
5      bbbbbbbbb  1      faa2733
6      aaaaaaaaa  1      faa2733
7      aaaaaaaaa  1      bc13d69
8      aaaaaaaaa  1      9428f4d
I want output like
ColId  Name       JobId  Instance
1      aaaaaaaaa  1      2dc757b
3      aaaaaaaaa  1      010dbb8
5      bbbbbbbbb  1      faa2733
7      aaaaaaaaa  1      bc13d69
8      aaaaaaaaa  1      9428f4d
What should the JPA query be so that I can retrieve the entire row for each distinct 'Instance' value (there is no max/min condition involved)?
I need one row for each 'Instance' value.
FROM table t GROUP BY t.instance should suit your needs.
Something like this JPQL: select entity from Entity entity where entity.id in (select min(subinstance.id) from Entity subinstance group by subinstance.instance)
Aggregate functions like count, min, and avg are allowed over columns not included in the GROUP BY clause, so any of them should work as long as it returns a single id value from the grouping.
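As a sanity check of what that subquery keeps, here is a plain-Java sketch of the same set logic applied to the example table: for each distinct Instance value, retain the row with the minimum ColId. This is not JPA code; the class and field names are illustrative, and the JobId column is omitted for brevity.

```java
import java.util.*;

// Models "id in (select min(id) ... group by instance)": one surviving row
// per distinct instance, chosen by smallest id.
public class OneRowPerInstance {
    public static class Row {
        public final int id;
        public final String name, instance;
        public Row(int id, String name, String instance) {
            this.id = id; this.name = name; this.instance = instance;
        }
    }

    // Keeps the lowest-id row per instance, in first-seen instance order.
    public static List<Row> pick(List<Row> rows) {
        Map<String, Row> byInstance = new LinkedHashMap<>();
        for (Row r : rows) {
            byInstance.merge(r.instance, r, (a, b) -> a.id <= b.id ? a : b);
        }
        return new ArrayList<>(byInstance.values());
    }

    // The example table from the question, already filtered.
    public static List<Row> example() {
        return pick(Arrays.asList(
            new Row(1, "aaaaaaaaa", "2dc757b"), new Row(2, "bbbbbbbbb", "2dc757b"),
            new Row(3, "aaaaaaaaa", "010dbb8"), new Row(4, "bbbbbbbbb", "010dbb8"),
            new Row(5, "bbbbbbbbb", "faa2733"), new Row(6, "aaaaaaaaa", "faa2733"),
            new Row(7, "aaaaaaaaa", "bc13d69"), new Row(8, "aaaaaaaaa", "9428f4d")));
    }
}
```

On the example data this yields the rows with ColId 1, 3, 5, 7, 8, matching the desired output above.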