what is the unique id of informix temp table? - java

I create a temp table from several tables with a UNION ALL statement, as below. I later want to map this table to an entity for a repository in Spring; in other words, I want to map the temp table to an entity in Spring JPA or Hibernate.
select * from name UNION ALL
select * from soft where id > 3
into temp namesoft_tmp
I tried the following:
select * from namesoft_tmp
but I can't see which column would lead me to conclude that it is a primary key.
What is the unique id (primary key) of the table namesoft_tmp?
How can I add an auto-generated id to the temp table?
How can I execute a select statement based on a unique id?

In general, the result of a UNION ALL query does not have a primary key; there is no guarantee that there are no duplicate rows in the result set.
Imagine a table describing the periodic table of elements, called elements.
SELECT * FROM elements WHERE atomic_number < 10
UNION ALL
SELECT * FROM elements WHERE symbol MATCHES '[A-F]*'
INTO TEMP union_all;
Here, the elements Boron (B), Carbon (C), Beryllium (Be) and Fluorine (F) are all listed twice.
However, you can use:
SELECT ROWID, * FROM union_all ORDER BY atomic_number;
to get a unique identifier, the ROWID, in the result set. Note that this identifier is unique at any given time, but it is not guaranteed to be stable: if you delete rows and add them again, the ROWID of the replaced rows may differ from before. The ROWID remains unique only until you modify the table.
+-------+--------+--------+--------------+-----------+--------+-------+
| rowid | atomic | symbol | name | atomic | period | group |
| | number | | | weight | | |
+-------+--------+--------+--------------+-----------+--------+-------+
| 257 | 1 | H | Hydrogen | 1.0079 | 1 | 1 |
| 258 | 2 | He | Helium | 4.0026 | 1 | 18 |
| 259 | 3 | Li | Lithium | 6.9410 | 2 | 1 |
| 260 | 4 | Be | Beryllium | 9.0122 | 2 | 2 |
| 266 | 4 | Be | Beryllium | 9.0122 | 2 | 2 |
| 267 | 5 | B | Boron | 10.8110 | 2 | 13 |
| 261 | 5 | B | Boron | 10.8110 | 2 | 13 |
| 268 | 6 | C | Carbon | 12.0110 | 2 | 14 |
| 262 | 6 | C | Carbon | 12.0110 | 2 | 14 |
| 263 | 7 | N | Nitrogen | 14.0070 | 2 | 15 |
| 264 | 8 | O | Oxygen | 15.9990 | 2 | 16 |
| 265 | 9 | F | Fluorine | 18.9980 | 2 | 17 |
| 269 | 9 | F | Fluorine | 18.9980 | 2 | 17 |
| 270 | 13 | Al | Aluminium | 26.9820 | 3 | 13 |
| 271 | 17 | Cl | Chlorine | 35.4530 | 3 | 17 |
| 272 | 18 | Ar | Argon | 39.9480 | 3 | 18 |
| 273 | 20 | Ca | Calcium | 40.0780 | 4 | 2 |
| 274 | 24 | Cr | Chromium | 51.9960 | 4 | 6 |
| 275 | 26 | Fe | Iron | 55.8450 | 4 | 8 |
| 276 | 27 | Co | Cobalt | 58.9330 | 4 | 9 |
| 277 | 29 | Cu | Copper | 63.5460 | 4 | 11 |
| 278 | 33 | As | Arsenic | 74.9220 | 4 | 15 |
| 279 | 35 | Br | Bromine | 79.9040 | 4 | 17 |
| 280 | 47 | Ag | Silver | 107.8700 | 5 | 11 |
| 281 | 48 | Cd | Cadmium | 112.4100 | 5 | 12 |
| 282 | 55 | Cs | Caesium | 132.9100 | 6 | 1 |
| 283 | 56 | Ba | Barium | 137.3300 | 6 | 2 |
| 284 | 58 | Ce | Cerium | 140.1200 | 6 | L |
| 285 | 63 | Eu | Europium | 151.9600 | 6 | L |
| 286 | 66 | Dy | Dysprosium | 162.5000 | 6 | L |
| 287 | 68 | Er | Erbium | 167.2600 | 6 | L |
| 288 | 79 | Au | Gold | 196.9700 | 6 | 11 |
| 289 | 83 | Bi | Bismuth | 208.9800 | 6 | 15 |
| 290 | 85 | At | Astatine | 209.9900 | 6 | 17 |
| 291 | 87 | Fr | Francium | 223.0200 | 7 | 1 |
| 292 | 89 | Ac | Actinium | 227.0300 | 7 | A |
| 293 | 95 | Am | Americium | 243.0600 | 7 | A |
| 294 | 96 | Cm | Curium | 247.0700 | 7 | A |
| 295 | 97 | Bk | Berkelium | 247.0700 | 7 | A |
| 296 | 98 | Cf | Californium | 251.0800 | 7 | A |
| 297 | 99 | Es | Einsteinium | 252.0800 | 7 | A |
| 298 | 100 | Fm | Fermium | 257.1000 | 7 | A |
| 299 | 105 | Db | Dubnium | 270.1300 | 7 | 5 |
| 300 | 107 | Bh | Bohrium | 270.1300 | 7 | 7 |
| 301 | 110 | Ds | Darmstadtium | 281.1700 | 7 | 10 |
| 302 | 112 | Cn | Copernicium | 285.1800 | 7 | 12 |
| 303 | 114 | Fl | Flerovium | 289.1900 | 7 | 14 |
+-------+--------+--------+--------------+-----------+--------+-------+
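The two properties at play here, that UNION ALL keeps duplicates and that a row id is merely a positional surrogate with no stable meaning, can be sketched outside the database. A minimal plain-Java illustration (class and method names are made up for the sketch):

```java
import java.util.*;
import java.util.stream.*;

public class UnionAllDemo {
    // UNION ALL keeps duplicates: mimic it by concatenating two result lists.
    // Pairing each row with a running index mimics a ROWID-style surrogate key:
    // unique right now, but carrying no meaning beyond the current contents.
    public static List<Map.Entry<Integer, String>> withRowIds(List<String> a, List<String> b) {
        List<String> unionAll = new ArrayList<>(a);
        unionAll.addAll(b);                       // duplicates survive, unlike UNION
        return IntStream.range(0, unionAll.size())
                .mapToObj(i -> Map.entry(i, unionAll.get(i)))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Map.Entry<Integer, String>> rows =
                withRowIds(List.of("H", "He", "B"), List.of("B", "Be"));
        System.out.println(rows); // "B" appears twice, but each copy has its own id
    }
}
```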


Hierarchical data manipulation in Apache Spark

I have a Dataset in Spark (v2.1.1) with 3 columns (shown below) containing hierarchical data.
My objective is to assign an incremental number to each row based on the parent-child hierarchy; graphically, the hierarchical data is a collection of trees.
As per the table below, I already have the rows grouped by 'Global_ID'. Now I would like to generate the 'Value' column in incremental order, based on the hierarchy of data in the 'Parent' and 'Child' columns.
Tabular Representation (Value is the desired output):
+-----------+--------+-------+ +-----------+--------+-------+-------+
| Current Dataset | | Desired Dataset (Output) |
+-----------+--------+-------+ +-----------+--------+-------+-------+
| Global_ID | Parent | Child | | Global_ID | Parent | Child | Value |
+-----------+--------+-------+ +-----------+--------+-------+-------+
| 111 | 111 | 123 | | 111 | 111 | 111 | 1 |
| 111 | 135 | 246 | | 111 | 111 | 123 | 2 |
| 111 | 123 | 456 | | 111 | 123 | 789 | 3 |
| 111 | 123 | 789 | | 111 | 123 | 456 | 4 |
| 111 | 111 | 111 | | 111 | 111 | 135 | 5 |
| 111 | 135 | 468 | | 111 | 135 | 246 | 6 |
| 111 | 135 | 268 | | 111 | 135 | 468 | 7 |
| 111 | 268 | 321 | | 111 | 135 | 268 | 8 |
| 111 | 138 | 139 | | 111 | 268 | 321 | 9 |
| 111 | 111 | 135 | | 111 | 111 | 138 | 10 |
| 111 | 111 | 138 | | 111 | 138 | 139 | 11 |
| 222 | 222 | 654 | | 222 | 222 | 222 | 12 |
| 222 | 654 | 721 | | 222 | 222 | 987 | 13 |
| 222 | 222 | 222 | | 222 | 222 | 654 | 14 |
| 222 | 721 | 127 | | 222 | 654 | 721 | 15 |
| 222 | 222 | 987 | | 222 | 721 | 127 | 16 |
| 333 | 333 | 398 | | 333 | 333 | 333 | 17 |
| 333 | 333 | 498 | | 333 | 333 | 398 | 18 |
| 333 | 333 | 333 | | 333 | 333 | 498 | 19 |
| 333 | 333 | 598 | | 333 | 333 | 598 | 20 |
+-----------+--------+-------+ +-----------+--------+-------+-------+
Tree Representation (Desired value is represented next to each node):
+-----+ +-----+
1 | 111 | 17 | 333 |
+--+--+ +--+--+
| |
+---------------+--------+-----------------+ +----------+----------+
| | | | | |
+--v--+ +--v--+ +--v--+ +--v--+ +--v--+ +--v--+
2 | 123 | 5 | 135 | 10 | 138 | | 398 | | 498 | | 598 |
+--+--+ +--+--+ +--+--+ +--+--+ +--+--+ +--+--+
+-----+-----+ +--------+--------+ | 18 19 20
| | | | | |
+--v--+ +--v--+ +--v--+ +--v--+ +--v--+ +--v--+
| 789 | | 456 | | 246 | | 468 | | 268 | | 139 | +-----+
+-----+ +-----+ +-----+ +-----+ +--+--+ +-----+ 12 | 222 |
3 4 6 7 8 | 11 +--+--+
+--v--+ |
| 321 | +------+-------+
+--+--+ | |
9 +--v--+ +--v--+
13 | 987 | 14 | 654 |
+--+--+ +--+--+
|
+--v--+
15 | 721 |
+--+--+
|
+--v--+
16 | 127 |
+--+--+
Code Snippet:
Dataset<Row> myDataset = spark
.sql("select Global_ID, Parent, Child from RECORDS");
JavaPairRDD<Row,Long> finalDataset = myDataset.groupBy(new Column("Global_ID"))
.agg(functions.sort_array(functions.collect_list(new Column("Parent").as("parent_col"))),
functions.sort_array(functions.collect_list(new Column("Child").as("child_col"))))
.orderBy(new Column("Global_ID"))
.withColumn("vars", functions.explode(<Spark UDF>))
.select(new Column("vars"),new Column("parent_col"),new Column("child_col"))
.javaRDD().zipWithIndex();
// Sample UDF (TODO: Actual Implementation)
spark.udf().register("computeValue",
(<Column Names>) -> <functionality & implementation>,
DataTypes.<xxx>);
After a lot of research and going through many suggestions in blogs, I have tried the approaches below, but to no avail for my scenario.
Tech Stack :
Apache Spark (v2.1.1)
Java 8
AWS EMR Cluster (Spark App Deployment)
Data Volume:
Approximately ~20 million rows in the Dataset
Approaches Tried:
Spark GraphX + GraphFrames:
Using this combination, I could only achieve the relation between vertices and edges but it doesn't fit for my use case.
Reference: https://graphframes.github.io/user-guide.html
Spark GraphX Pregel API:
This is the closest I could get to the expected result, but unfortunately I could not find a Java code snippet for it. The example provided in one of the blogs is in Scala, which I am not well versed in.
Reference: https://dzone.com/articles/processing-hierarchical-data-using-spark-graphx-pr
Any suggestions for alternatives or modifications to the current approaches would be really helpful, as I am totally lost trying to figure out a solution for this use case.
Appreciate your help! Thank you!
Note: The solution below is in Scala Spark; you can easily translate it to Java.
Check this out. I tried doing it using Spark SQL so you can get the idea. Basically, the idea is to sort the Child, Parent and Global_ID columns while aggregating and grouping them. Once grouped and sorted by Global_ID, explode the zipped columns. You will get an ordered result table, to which you can later apply zipWithIndex to add the rank (Value).
import org.apache.spark.sql.SQLContext
import org.apache.spark.sql.functions._
import org.apache.spark.sql.expressions.UserDefinedFunction
import org.apache.spark.sql.functions.udf
val sqlContext = new SQLContext(sc)
import sqlContext.implicits._
val t = Seq((111,111,123), (111,111,111), (111,123,789), (111,268,321), (222,222,654), (222,222,222), (222,721,127), (333,333,398), (333,333,333), (333,333,598))
val ddd = sc.parallelize(t).toDF
val zip = udf((xs: Seq[Int], ys: Seq[Int]) => xs zip ys)
val dd1 = ddd
.groupBy($"_1")
.agg(sort_array(collect_list($"_2")).as("v"),
sort_array(collect_list($"_3")).as("w"))
.orderBy(asc("_1"))
.withColumn("vars", explode(zip($"v", $"w")))
.select($"_1", $"vars._1", $"vars._2").rdd.zipWithIndex
dd1.collect
Output
res24: Array[(org.apache.spark.sql.Row, Long)] = Array(([111,111,111],0), ([111,111,123],1), ([111,123,321],2),
([111,268,789],3), ([222,222,127],4), ([222,222,222],5), ([222,721,654],6),([333,333,333],7), ([333,333,398],8), ([333,333,598],9))
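The note above says the Scala can be translated to Java; as a stepping stone, the core idea (group by Global_ID, sort the Parent and Child columns independently, zip them back together, then number the rows) can be sketched with plain Java collections, without Spark. This is an illustrative sketch of the logic only, not a distributed implementation, and the names are mine:

```java
import java.util.*;
import java.util.stream.*;

public class HierarchyOrder {
    // Each input row is {Global_ID, Parent, Child}; each output row is
    // {Global_ID, Parent, Child, Value}. TreeMap keeps the Global_ID groups
    // in ascending order, mirroring the orderBy in the Spark version.
    public static List<int[]> orderRows(List<int[]> rows) {
        Map<Integer, List<int[]>> byId = new TreeMap<>();
        for (int[] r : rows) byId.computeIfAbsent(r[0], k -> new ArrayList<>()).add(r);
        List<int[]> out = new ArrayList<>();
        int idx = 0;
        for (Map.Entry<Integer, List<int[]>> e : byId.entrySet()) {
            // sort_array(collect_list(...)) on each column, then zip
            List<Integer> parents = e.getValue().stream().map(r -> r[1]).sorted().collect(Collectors.toList());
            List<Integer> children = e.getValue().stream().map(r -> r[2]).sorted().collect(Collectors.toList());
            for (int i = 0; i < parents.size(); i++) {
                out.add(new int[]{e.getKey(), parents.get(i), children.get(i), idx++});
            }
        }
        return out;
    }

    public static void main(String[] args) {
        List<int[]> rows = List.of(
                new int[]{111,111,123}, new int[]{111,111,111}, new int[]{111,123,789},
                new int[]{111,268,321}, new int[]{222,222,654}, new int[]{222,222,222},
                new int[]{222,721,127}, new int[]{333,333,398}, new int[]{333,333,333},
                new int[]{333,333,598});
        for (int[] r : orderRows(rows)) System.out.println(Arrays.toString(r));
    }
}
```

On the sample data this reproduces the ordering shown in the Scala output above.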

SQL query performance REQUIRES use of IS NULL condition (in JAVA)

I have a query with quite strange behavior (I cannot show it all; it's quite long, with various joins).
Inside the query I've a test over a value (a number) like this:
AND (
R.CAPACITY IS NULL
OR
R.CAPACITY = 0
OR
(R.CAPACITY BETWEEN :minSeats AND :maxSeats)
)
It works without any problem and is fast, both when I run it directly from the IDE (I use IDEA) and when I run it from code (in tests, or triggered from the interface).
What I don't understand is that, if the first two conditions
R.CAPACITY IS NULL
OR
R.CAPACITY = 0
are removed, the query becomes extremely slow. Not everywhere: in IDEA it is still quite fast, but from code it is impossible to use.
I've tried changing the data in the database, creating indexes, and replacing the WHERE condition with a JOIN in which the data is filtered by these conditions. Nothing works. It is always really fast (as expected, by the way) in IDEA, but far too slow from code. I run it against an Oracle DB (version 9).
It is also strange that the field has no NULL values, yet the query requires a check against them to be fast.
Does anyone have an idea why this happens?
EDIT (sorry, I've switched the plans)
Explain plan of FAST query is:
Plan hash value: 3802388129
---------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 7 | 1463 | 1887 (8)| 00:00:23 |
| 1 | SORT UNIQUE | | 7 | 1463 | 1886 (8)| 00:00:23 |
|* 2 | HASH JOIN OUTER | | 7 | 1463 | 1885 (8)| 00:00:23 |
| 3 | VIEW | | 7 | 1288 | 754 (10)| 00:00:10 |
|* 4 | HASH JOIN | | 7 | 2891 | 754 (10)| 00:00:10 |
|* 5 | HASH JOIN OUTER | | 7 | 2688 | 750 (10)| 00:00:10 |
| 6 | NESTED LOOPS | | 7 | 476 | 750 (10)| 00:00:09 |
| 7 | NESTED LOOPS | | 7 | 147 | 743 (10)| 00:00:09 |
|* 8 | INDEX UNIQUE SCAN | DATUM_U1 | 1 | 8 | 1 (0)| 00:00:01 |
| 9 | VIEW | VW_NSO_1 | 7 | 91 | 742 (10)| 00:00:09 |
|* 10 | FILTER | | | | | |
| 11 | SORT GROUP BY | | 7 | 189 | 742 (10)| 00:00:09 |
| 12 | MERGE JOIN CARTESIAN | | 1170K| 30M| 679 (2)| 00:00:09 |
| 13 | MERGE JOIN CARTESIAN | | 2808 | 53352 | 54 (0)| 00:00:01 |
|* 14 | TABLE ACCESS FULL | TABLEA | 108 | 1728 | 29 (0)| 00:00:01 |
| 15 | BUFFER SORT | | 26 | 78 | 25 (0)| 00:00:01 |
| 16 | INDEX FAST FULL SCAN| SYS_C0012272 | 26 | 78 | 0 (0)| 00:00:01 |
| 17 | BUFFER SORT | | 417 | 3336 | 742 (10)| 00:00:09 |
| 18 | INDEX FAST FULL SCAN | SYS_C0012270 | 417 | 3336 | 0 (0)| 00:00:01 |
|* 19 | TABLE ACCESS BY INDEX ROWID| TABLEA | 1 | 47 | 1 (0)| 00:00:01 |
|* 20 | INDEX UNIQUE SCAN | SYS_C0012267 | 1 | | 0 (0)| 00:00:01 |
| 21 | VIEW | | 1 | 316 | 5 (100)| 00:00:01 |
|* 22 | FILTER | | | | | |
| 23 | MERGE JOIN CARTESIAN | | 595 | 98K| 10 (0)| 00:00:01 |
| 24 | VIEW | index$_join$_006 | 5 | 445 | 3 (0)| 00:00:01 |
|* 25 | HASH JOIN | | | | | |
| 26 | INDEX FAST FULL SCAN | TABLEB | 5 | 445 | 1 (0)| 00:00:01 |
| 27 | INDEX FAST FULL SCAN | TABLEC | 5 | 445 | 1 (0)| 00:00:01 |
| 28 | BUFFER SORT | | 119 | 9520 | 7 (0)| 00:00:01 |
| 29 | TABLE ACCESS FULL | TABLED | 119 | 9520 | 1 (0)| 00:00:01 |
| 30 | TABLE ACCESS FULL | TABLEE | 28 | 812 | 3 (0)| 00:00:01 |
| 31 | VIEW | | 1636 | 40900 | 1130 (7)| 00:00:14 |
|* 32 | FILTER | | | | | |
|* 33 | HASH JOIN | | 3845K| 245M| 1130 (7)| 00:00:14 |
|* 34 | TABLE ACCESS FULL | TABLEF | 106 | 1378 | 12 (9)| 00:00:01 |
| 35 | NESTED LOOPS | | 290K| 14M| 1096 (5)| 00:00:14 |
|* 36 | TABLE ACCESS FULL | TABLEG | 290K| 13M| 1064 (2)| 00:00:13 |
|* 37 | INDEX UNIQUE SCAN | PK_TABLEH | 1 | 5 | 0 (0)| 00:00:01 |
|* 38 | TABLE ACCESS BY INDEX ROWID | TABLEG | 1 | 36 | 5 (0)| 00:00:01 |
|* 39 | INDEX RANGE SCAN | TABLEI | 3 | | 3 (0)| 00:00:01 |
|* 40 | TABLE ACCESS BY INDEX ROWID | TABLEG | 1 | 36 | 5 (0)| 00:00:01 |
|* 41 | INDEX RANGE SCAN | TABLEI | 3 | | 3 (0)| 00:00:01 |
|* 42 | TABLE ACCESS FULL | TABLEL | 1 | 23 | 3 (0)| 00:00:01 |
---------------------------------------------------------------------------------------------------------
Plan of SLOW query:
Plan hash value: 453029505
---------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 2 | 434 | 1348 (7)| 00:00:17 |
| 1 | SORT UNIQUE | | 2 | 434 | 1347 (7)| 00:00:17 |
|* 2 | HASH JOIN OUTER | | 2 | 434 | 1346 (7)| 00:00:17 |
| 3 | VIEW | | 2 | 384 | 215 (9)| 00:00:03 |
| 4 | NESTED LOOPS | | 2 | 826 | 215 (9)| 00:00:03 |
|* 5 | HASH JOIN OUTER | | 2 | 768 | 213 (9)| 00:00:03 |
| 6 | NESTED LOOPS | | 2 | 136 | 212 (9)| 00:00:03 |
| 7 | NESTED LOOPS | | 2 | 42 | 210 (9)| 00:00:03 |
|* 8 | INDEX UNIQUE SCAN | DATUM_U1 | 1 | 8 | 1 (0)| 00:00:01 |
| 9 | VIEW | VW_NSO_1 | 2 | 26 | 209 (9)| 00:00:03 |
|* 10 | FILTER | | | | | |
| 11 | SORT GROUP BY | | 2 | 58 | 209 (9)| 00:00:03 |
| 12 | MERGE JOIN CARTESIAN | | 291K| 8252K| 194 (2)| 00:00:03 |
| 13 | MERGE JOIN CARTESIAN | | 699 | 14679 | 37 (0)| 00:00:01 |
|* 14 | TABLE ACCESS FULL | TABLEA | 27 | 486 | 29 (0)| 00:00:01 |
| 15 | BUFFER SORT | | 26 | 78 | 8 (0)| 00:00:01 |
| 16 | INDEX FAST FULL SCAN| SYS_C0012272 | 26 | 78 | 0 (0)| 00:00:01 |
| 17 | BUFFER SORT | | 417 | 3336 | 209 (9)| 00:00:03 |
| 18 | INDEX FAST FULL SCAN | SYS_C0012270 | 417 | 3336 | 0 (0)| 00:00:01 |
|* 19 | TABLE ACCESS BY INDEX ROWID| TABLEA | 1 | 47 | 1 (0)| 00:00:01 |
|* 20 | INDEX UNIQUE SCAN | SYS_C0012267 | 1 | | 0 (0)| 00:00:01 |
| 21 | VIEW | | 1 | 316 | 5 (100)| 00:00:01 |
|* 22 | FILTER | | | | | |
| 23 | MERGE JOIN CARTESIAN | | 595 | 98K| 10 (0)| 00:00:01 |
| 24 | VIEW | index$_join$_006 | 5 | 445 | 3 (0)| 00:00:01 |
|* 25 | HASH JOIN | | | | | |
| 26 | INDEX FAST FULL SCAN | TABLEB | 5 | 445 | 1 (0)| 00:00:01 |
| 27 | INDEX FAST FULL SCAN | TABLEC | 5 | 445 | 1 (0)| 00:00:01 |
| 28 | BUFFER SORT | | 119 | 9520 | 7 (0)| 00:00:01 |
| 29 | TABLE ACCESS FULL | TABLED | 119 | 9520 | 1 (0)| 00:00:01 |
| 30 | TABLE ACCESS BY INDEX ROWID | TABLEE | 1 | 29 | 1 (0)| 00:00:01 |
|* 31 | INDEX UNIQUE SCAN | SYS_C0012290 | 1 | | 0 (0)| 00:00:01 |
| 32 | VIEW | | 1636 | 40900 | 1130 (7)| 00:00:14 |
|* 33 | FILTER | | | | | |
|* 34 | HASH JOIN | | 3845K| 245M| 1130 (7)| 00:00:14 |
|* 35 | TABLE ACCESS FULL | TABLEF | 106 | 1378 | 12 (9)| 00:00:01 |
| 36 | NESTED LOOPS | | 290K| 14M| 1096 (5)| 00:00:14 |
|* 37 | TABLE ACCESS FULL | TABLEG | 290K| 13M| 1064 (2)| 00:00:13 |
|* 38 | INDEX UNIQUE SCAN | PK_TABLEH | 1 | 5 | 0 (0)| 00:00:01 |
|* 39 | TABLE ACCESS BY INDEX ROWID | TABLEG | 1 | 36 | 5 (0)| 00:00:01 |
|* 40 | INDEX RANGE SCAN | TABLEI | 3 | | 3 (0)| 00:00:01 |
|* 41 | TABLE ACCESS BY INDEX ROWID | TABLEG | 1 | 36 | 5 (0)| 00:00:01 |
|* 42 | INDEX RANGE SCAN | TABLEI | 3 | | 3 (0)| 00:00:01 |
|* 43 | TABLE ACCESS FULL | TABLEL | 1 | 23 | 3 (0)| 00:00:01 |
---------------------------------------------------------------------------------------------------------

Gametree not working correctly Java. Is the addChild() function ok?

import java.util.ArrayList;
import java.util.List;

public class Tree
{
    private Board board;
    private List<Tree> children;
    private Tree parent;

    public Tree(Board board1)
    {
        this.board = board1;
        this.children = new ArrayList<Tree>();
    }

    public Tree(Tree t1)
    {
    }

    public Tree createTree(Tree tree, boolean isHuman, int depth)
    {
        Player play1 = new Player();
        ArrayList<Board> potBoards = new ArrayList<Board>(play1.potentialMoves(tree.board, isHuman));
        if (board.gameEnd() || depth == 0)
        {
            return null;
        }
        //Tree oldTree = new Tree(board);
        for (int i = 0; i < potBoards.size() - 1; i++)
        {
            Tree newTree = new Tree(potBoards.get(i));
            createTree(newTree, !isHuman, depth - 1);
            tree.addChild(newTree);
        }
        return tree;
    }

    private Tree addChild(Tree child)
    {
        Tree childNode = new Tree(child);
        childNode.parent = this;
        this.children.add(childNode);
        return childNode;
    }
}
Hi there. I'm trying to make a game tree that will be handled by minimax in the future. I think the error is either in the addChild() function or in potentialMoves(). potentialMoves() returns all the potential moves a player or the computer can make. For example, in Othello a player can make any of the following moves
0 1 2 3 4 5 6 7
+-+-+-+-+-+-+-+-+
0| | | | | | | | |
+-+-+-+-+-+-+-+-+
1| | | | | | | | |
+-+-+-+-+-+-+-+-+
2| | | |b| | | | |
+-+-+-+-+-+-+-+-+
3| | | |b|b| | | |
+-+-+-+-+-+-+-+-+
4| | | |b|w| | | |
+-+-+-+-+-+-+-+-+
5| | | | | | | | |
+-+-+-+-+-+-+-+-+
6| | | | | | | | |
+-+-+-+-+-+-+-+-+
7| | | | | | | | |
+-+-+-+-+-+-+-+-+
 0 1 2 3 4 5 6 7
+-+-+-+-+-+-+-+-+
0| | | | | | | | |
+-+-+-+-+-+-+-+-+
1| | | | | | | | |
+-+-+-+-+-+-+-+-+
2| | | | | | | | |
+-+-+-+-+-+-+-+-+
3| | |b|b|b| | | |
+-+-+-+-+-+-+-+-+
4| | | |b|w| | | |
+-+-+-+-+-+-+-+-+
5| | | | | | | | |
+-+-+-+-+-+-+-+-+
6| | | | | | | | |
+-+-+-+-+-+-+-+-+
7| | | | | | | | |
+-+-+-+-+-+-+-+-+
0 1 2 3 4 5 6 7
+-+-+-+-+-+-+-+-+
0| | | | | | | | |
+-+-+-+-+-+-+-+-+
1| | | | | | | | |
+-+-+-+-+-+-+-+-+
2| | | | | | | | |
+-+-+-+-+-+-+-+-+
3| | | |w|b| | | |
+-+-+-+-+-+-+-+-+
4| | | |b|b|b| | |
+-+-+-+-+-+-+-+-+
5| | | | | | | | |
+-+-+-+-+-+-+-+-+
6| | | | | | | | |
+-+-+-+-+-+-+-+-+
7| | | | | | | | |
+-+-+-+-+-+-+-+-+
0 1 2 3 4 5 6 7
+-+-+-+-+-+-+-+-+
0| | | | | | | | |
+-+-+-+-+-+-+-+-+
1| | | | | | | | |
+-+-+-+-+-+-+-+-+
2| | | | | | | | |
+-+-+-+-+-+-+-+-+
3| | | |w|b| | | |
+-+-+-+-+-+-+-+-+
4| | | |b|b| | | |
+-+-+-+-+-+-+-+-+
5| | | | |b| | | |
+-+-+-+-+-+-+-+-+
6| | | | | | | | |
+-+-+-+-+-+-+-+-+
7| | | | | | | | |
+-+-+-+-+-+-+-+-+
for the first turn. potentialMoves() does not permanently change the board being played on; it returns an ArrayList.
I have this in my main:
Tree gameTree = new Tree(boardOthello);
Tree pickTree = gameTree.createTree(gameTree, true, 2);
Does the addChild() function look OK, or is there something else I'm missing in my code?
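For what it's worth, one visible problem in the posted code is that addChild() passes the child through the empty Tree(Tree) constructor, so the node that actually ends up in children has a null board and a null children list; also, the loop condition i < potBoards.size() - 1 skips the last potential move. A minimal sketch of a direct-linking addChild (Board replaced by a String stand-in, names mine):

```java
import java.util.*;

public class TreeSketch {
    // Simplified stand-in for the Tree in the question: the key change is that
    // addChild links the existing node instead of copying it through an empty
    // copy constructor, so board, children and parent all stay populated.
    static class Node {
        final String board;                       // stands in for the Board field
        final List<Node> children = new ArrayList<>();
        Node parent;

        Node(String board) { this.board = board; }

        Node addChild(Node child) {
            child.parent = this;                  // link the node itself; do not copy it
            children.add(child);
            return child;
        }
    }

    public static void main(String[] args) {
        Node root = new Node("start");
        Node move = root.addChild(new Node("after-move"));
        System.out.println(move.parent == root);        // true
        System.out.println(root.children.get(0).board); // after-move
    }
}
```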

How to get an encrypted token minimized?

I have generated an encrypted token with Blowfish, e.g.: 7$127O$137kI$137mK$07WK$01$26m$05zYbJmCmUw$16nF$11G$27A2Gv$19Jm8$26eJ9kUv$07$118q$05$02$24KP8j$208$16$06$100P$11
Just out of curiosity, can I get this token minimized/simplified/stripped down somehow? Can anything be done to make it shorter, maybe some encoding? Please help me out with this.
Thanks
Huffman has a static cost, the Huffman table. On the other hand, algorithms of the Lempel-Ziv family have very good compression and performance.
If your string is going to be English text, you are better off using Smaz: https://github.com/antirez/smaz/tree/master
Try Huffman encoding with your string. For the string 7$127O$137kI$137mK$07WK$01$26m$05zYbJmCmUw$16nF$11G$27A2Gv$19Jm8$26eJ9kUv$07$118q$05$02$24KP8j$208$16$06$100P$11
the compression ratio is 0.536830357143.
Analysis
Memory requirements: ASCII: 896 bit , Huffman: 593 bit
Entropy: ASCII: 1.01833029706, Huffman: 1.05223637886
Average code length: ASCII: 8 bit , Huffman: 4.29464285714 bit
Compression rate: 0.536830357143
+---------+-------+-------+-----------+---------+---------+
| Seq.no. | Chars | ASCII | Frequency | Huffman | ASCII |
+---------+-------+-------+-----------+---------+---------+
| 0 | '$' | 36 | 22 | 0 | 100100 |
| 1 | '0' | 48 | 10 | 1111 | 110000 |
| 2 | '1' | 49 | 14 | 11 | 110001 |
| 3 | '2' | 50 | 8 | 1011 | 110010 |
| 4 | '3' | 51 | 2 | 110100 | 110011 |
| 5 | '4' | 52 | 1 | 101111 | 110100 |
| 6 | '5' | 53 | 2 | 111001 | 110101 |
| 7 | '6' | 54 | 5 | 100 | 110110 |
| 8 | '7' | 55 | 7 | 1000 | 110111 |
| 9 | '8' | 56 | 4 | 11011 | 111000 |
| 10 | '9' | 57 | 2 | 110011 | 111001 |
| 11 | 'A' | 65 | 1 | 1100001 | 1000001 |
| 12 | 'C' | 67 | 1 | 1100000 | 1000011 |
| 13 | 'F' | 70 | 1 | 1001100 | 1000110 |
| 14 | 'G' | 71 | 2 | 110101 | 1000111 |
| 15 | 'I' | 73 | 1 | 101110 | 1001001 |
| 16 | 'J' | 74 | 3 | 1010 | 1001010 |
| 17 | 'K' | 75 | 3 | 10010 | 1001011 |
| 18 | 'O' | 79 | 1 | 1010001 | 1001111 |
| 19 | 'P' | 80 | 2 | 110010 | 1010000 |
| 20 | 'U' | 85 | 2 | 101010 | 1010101 |
| 21 | 'W' | 87 | 1 | 1100010 | 1010111 |
| 22 | 'Y' | 89 | 1 | 1001101 | 1011001 |
| 23 | 'b' | 98 | 1 | 1010000 | 1100010 |
| 24 | 'e' | 101 | 1 | 1100011 | 1100101 |
| 25 | 'j' | 106 | 1 | 10110 | 1101010 |
| 26 | 'k' | 107 | 2 | 101001 | 1101011 |
| 27 | 'm' | 109 | 5 | 11101 | 1101101 |
| 28 | 'n' | 110 | 1 | 1001110 | 1101110 |
| 29 | 'q' | 113 | 1 | 1001111 | 1110001 |
| 30 | 'v' | 118 | 2 | 111000 | 1110110 |
| 31 | 'w' | 119 | 1 | 1010110 | 1110111 |
| 32 | 'z' | 122 | 1 | 1010111 | 1111010 |
+---------+-------+-------+-----------+---------+---------+
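The total cost of an optimal Huffman code can be computed without building the codes at all: repeatedly merging the two smallest frequencies and summing the merged weights yields the weighted sum of code lengths. A small sketch (class and method names are mine) that checks the token from the question:

```java
import java.util.*;

public class HuffmanCost {
    // Total encoded length (in bits) of s under an optimal Huffman code:
    // the sum of all merged weights in the two-smallest-merge procedure
    // equals the frequency-weighted sum of code lengths.
    static long huffmanBits(String s) {
        Map<Character, Long> freq = new HashMap<>();
        for (char c : s.toCharArray()) freq.merge(c, 1L, Long::sum);
        PriorityQueue<Long> q = new PriorityQueue<>(freq.values());
        if (q.size() == 1) return s.length();   // degenerate one-symbol input
        long total = 0;
        while (q.size() > 1) {
            long merged = q.poll() + q.poll();
            total += merged;
            q.add(merged);
        }
        return total;
    }

    public static void main(String[] args) {
        String token = "7$127O$137kI$137mK$07WK$01$26m$05zYbJmCmUw$16nF$11G"
                     + "$27A2Gv$19Jm8$26eJ9kUv$07$118q$05$02$24KP8j$208$16$06$100P$11";
        long bits = huffmanBits(token);
        System.out.println(bits + " Huffman bits vs " + token.length() * 8 + " ASCII bits");
    }
}
```

Note that this counts only the encoded payload, not the table that the answer above calls the static cost; the table must still be shipped (or agreed on) for decoding.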

Generate Excel having Formula using Apache POI

In the example below, how do I add the formula for each individual user's totals, given that the number of rows per user can vary?
+------+--------------+---------+--------------+-------+
| Name | Date | Billable| Non-Billable | Total |
+------+--------------+---------+--------------+-------+
| abc | 06/23/2012 | 860 | 10 | 870 |
| | User Totals: | 860 | 10 | 870 |
| xyz | 07/12/2012 | 45 | 0 | 45 |
| | User Totals: | 45 | 0 | 45 |
| ccc | 09/19/2013 | 165 | 35 | 200 |
| | 10/15/2013 | 240 | 0 | 240 |
| | User Totals: | 405 | 35 | 440 |
| | Grand Totals | 1310 | 45 | 1355 |
+------+--------------+---------+--------------+-------+
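One hedged sketch of an approach: since only the row ranges vary per user, build each SUM formula string from the first and last data row of that user and hand it to Cell.setCellFormula (a real POI method; the helper below is hypothetical and shows only the formula-string construction, which needs no POI dependency):

```java
public class TotalsFormula {
    // Excel ranges are 1-based while POI row indexes are 0-based, hence the +1.
    // The returned string is what you would pass to POI's
    // Cell.setCellFormula(...), e.g. totalCell.setCellFormula(sum("C", 1, 2)).
    static String sum(String column, int firstRowIdx, int lastRowIdx) {
        return "SUM(" + column + (firstRowIdx + 1) + ":" + column + (lastRowIdx + 1) + ")";
    }

    public static void main(String[] args) {
        // abc has one data row (sheet row index 1); ccc has two (indexes 5..6)
        System.out.println(sum("C", 1, 1)); // SUM(C2:C2)
        System.out.println(sum("C", 5, 6)); // SUM(C6:C7)
    }
}
```

Track the first data row index as you write each user's rows; when you reach the "User Totals:" row, emit one formula per numeric column over that range, and do the same over the totals rows for the grand total.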
