I am running a Pig script through the Java API (pigServer.registerScript), and I need to find out the number of records processed and the number of output records, also using the Java API. How can I implement this?
I have a simple Pig script as follows:
A = LOAD '$input_file_path' USING PigStorage('$') as (id:double,name:chararray,code:int);
Dump A;
B = FOREACH A GENERATE PigUdf2(id),name;
Dump B;
and the Java code is:
PigStats ps;
HashMap<String, String> m = new HashMap<>();
Path p = new Path("/home/shweta/Desktop/debugging_pig_udf/pig_in");
m.put("input_file_path", p.toString());
PigServer pigServer = new PigServer(ExecType.LOCAL);
pigServer.registerScript("/home/shweta/Desktop/debugging_pig_udf/pig_script.pig", m);
How can I get these input and output record counts through the Java API?
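One possible approach, sketched below and not tested: turn on batch mode before registering the script, run it with executeBatch(), and read the counts from the PigStats attached to each job (classes from org.apache.pig.tools.pigstats and org.apache.pig.backend.executionengine). Note that the script may need a STORE statement (rather than only DUMP) for output statistics to be reported.
// Hedged sketch: record counts via PigStats after a batch run.
pigServer.setBatchOn();
pigServer.registerScript("/home/shweta/Desktop/debugging_pig_udf/pig_script.pig", m);
List<ExecJob> jobs = pigServer.executeBatch();
for (ExecJob job : jobs) {
    PigStats stats = job.getStatistics();
    for (InputStats in : stats.getInputStats()) {
        System.out.println("Input records: " + in.getNumberRecords());
    }
    for (OutputStats out : stats.getOutputStats()) {
        System.out.println("Output records: " + out.getNumberRecords());
    }
}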
I'm working on a heatmap project for my university; we have to get some data (212 GB) from a txt file (coordinates, height), then put it into HBase to retrieve it on a web client with Express.
I practiced using a 144 MB file, and this works:
SparkConf conf = new SparkConf().setAppName("PLE");
JavaSparkContext context = new JavaSparkContext(conf);
JavaRDD<String> data = context.textFile(args[0]);
Connection co = ConnectionFactory.createConnection(getConf());
createTable(co);
Table table = co.getTable(TableName.valueOf(TABLE_NAME));
Put put = new Put(Bytes.toBytes("KEY"));
for (String s : data.collect()) {
    String[] tmp = s.split(",");
    put.addImmutable(FAMILY,
        Bytes.toBytes(tmp[2]),
        Bytes.toBytes(tmp[0] + "," + tmp[1]));
}
table.put(put);
But now that I use the 212 GB file, I get memory errors. I guess the collect method gathers all the data in memory, so 212 GB is too much.
So now I'm trying this:
SparkConf conf = new SparkConf().setAppName("PLE");
JavaSparkContext context = new JavaSparkContext(conf);
JavaRDD<String> data = context.textFile(args[0]);
Connection co = ConnectionFactory.createConnection(getConf());
createTable(co);
Table table = co.getTable(TableName.valueOf(TABLE_NAME));
Put put = new Put(Bytes.toBytes("KEY"));
data.foreach(line -> {
    String[] tmp = line.split(",");
    put.addImmutable(FAMILY,
        Bytes.toBytes(tmp[2]),
        Bytes.toBytes(tmp[0] + "," + tmp[1]));
});
table.put(put);
And I'm getting "org.apache.spark.SparkException: Task not serializable". I searched about it and tried some fixes, without success, based on what I read here: Task not serializable: java.io.NotSerializableException when calling function outside closure only on classes not objects
Actually I don't understand everything in that topic; I'm just a student. Maybe the answer to my problem is obvious, maybe not. Anyway, thanks in advance!
As a rule of thumb, serializing database connections (of any type) doesn't make sense. They are not designed to be serialized and deserialized, Spark or not.
Create the connection inside each partition instead:
data.foreachPartition(partition -> {
    Connection co = ConnectionFactory.createConnection(getConf());
    ... // All required setup
    Table table = co.getTable(TableName.valueOf(TABLE_NAME));
    Put put = new Put(Bytes.toBytes("KEY"));
    while (partition.hasNext()) {
        String line = partition.next();
        String[] tmp = line.split(",");
        put.addImmutable(FAMILY,
            Bytes.toBytes(tmp[2]),
            Bytes.toBytes(tmp[0] + "," + tmp[1]));
    }
    table.put(put); // write the accumulated Put before cleaning up
    ... // Clean up connections
});
I also recommend reading Design Patterns for using foreachRDD from the official Spark Streaming programming guide.
I have a list of lists which I want to process with Akka, and I want to perform an operation once all of the child lists are done processing. But "Complete" is printed before all children have completed.
Basically I am trying to read all the sheets in an Excel workbook and then read each row from each sheet. For this I am looking to use Akka to process each sheet separately, and within each sheet to process each row separately.
Sample Code:
List<List<String>> workbook = new ArrayList<List<String>>();
List<String> Sheet1 = new ArrayList<String>();
Sheet1.add("S");
Sheet1.add("a");
Sheet1.add("d");
List<String> Sheet2 = new ArrayList<String>();
Sheet2.add("S");
Sheet2.add("a1");
Sheet2.add("d");
workbook.add(Sheet1);
workbook.add(Sheet2);
final ActorSystem system = ActorSystem.create("Sys");
final ActorMaterializer materializer = ActorMaterializer.create(system);
Source.from(workbook).map(sheet -> {
    return Source.from(sheet).runWith(Sink.foreach(data -> {
        System.out.println(data);
        Thread.sleep(1000);
    }), materializer).toCompletableFuture();
}).runWith(Sink.ignore(), materializer).whenComplete((a, b) -> {
    System.out.println("Complete");
});
system.terminate();
The current output is:
S
S
Complete
a
a1
d
d
The expected output is:
S
S
a
a1
d
d
Complete
Could anyone please help?
Your use of a "stream within a stream" may be overcomplicating the process.
You could instead use Flow.flatMapConcat. I can only provide an example in Scala, but hopefully it translates easily to Java:
val flattenFlow: Flow[List[String], String, NotUsed] =
  Flow[List[String]].flatMapConcat(sheet => Source(sheet))
val flattenedSource: Source[String, NotUsed] = Source(workbook).via(flattenFlow)
There is a blog post with an example of using flatMapConcat in Java, but I don't know if my guessed type Flow.of(List<String>.class) is valid code.
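For reference, here is a rough Java translation as a sketch (it reuses the workbook, materializer, and system from the question; I have not compiled it against your Akka version). It flattens the sheets into a single stream, so there is only one completion to wait for, and terminates the actor system only after that completion:
// Flatten the outer source of sheets into a single source of cell values.
Source.from(workbook)
    .flatMapConcat(sheet -> Source.from(sheet))
    .runWith(Sink.foreach(data -> {
        System.out.println(data);
        Thread.sleep(1000);
    }), materializer)
    .whenComplete((done, failure) -> {
        System.out.println("Complete");
        system.terminate(); // terminate only after the stream has finished
    });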
I am using a simple UDF in Pig Latin / MapReduce. The Pig Latin query is:
REGISTER \PigStringOperations.jar
sensitive = LOAD '/mdsba/sample2.csv' using PigStorage(',') as (AGE:int,EDU:chararray,SEX:chararray,SALARY:chararray);
BV= group sensitive by (EDU,SEX) ;
BVA= foreach BV generate sensitive.AGE as AGE;
anon = FOREACH BVA GENERATE PigStringOperations.StringSplit(sensitive.AGE);
DUMP anon;
The UDF is a simple Java program, as shown below:
public String exec(Tuple input) throws IOException {
    String data = (String) input.get(0);
    if (data.contains(" ")) {
        this.data2 = data.split(" ");
        return this.data2[0].toString();
    }
    return data;
}
This is taken from the Adult database sample.
The AGE output from grouping (EDU,SEX) varies from one tuple to another, as shown below
AGE(12,10,35,20)
AGE(4,56,10)
AGE(70)
Each time I run the program I receive the following error:
ERROR 1066: Unable to open iterator for alias anon. Backend error : org.apache.pig.backend.executionengine.ExecException: ERROR 0: Scalar has more than one row in the output. 1st : (,EDU,SEX,SALARY), 2nd :(39,Bachelors,Male,<=50K)
After grouping the data, sensitive.AGE will be a bag! Consider this when you plan your UDF.
Running DESCRIBE on your projections, like:
DESCRIBE BVA;
would help you understand the data structure and plan your processing accordingly.
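For illustration only, a minimal sketch of an exec() that accepts such a bag (the class name and the comma-joining logic are assumptions; adapt them to whatever StringSplit is actually supposed to do with the grouped ages):
// Sketch of a UDF that receives a bag of (AGE) tuples after the GROUP BY
// and returns the ages as a comma-separated string.
import java.io.IOException;
import org.apache.pig.EvalFunc;
import org.apache.pig.data.DataBag;
import org.apache.pig.data.Tuple;

public class BagAges extends EvalFunc<String> {
    @Override
    public String exec(Tuple input) throws IOException {
        if (input == null || input.size() == 0 || input.get(0) == null) {
            return null;
        }
        DataBag bag = (DataBag) input.get(0); // sensitive.AGE is a bag, not a scalar
        StringBuilder sb = new StringBuilder();
        for (Tuple t : bag) {                 // iterate over the grouped AGE values
            if (sb.length() > 0) {
                sb.append(",");
            }
            sb.append(t.get(0));              // the single AGE field of each inner tuple
        }
        return sb.toString();
    }
}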
Using test cases, I was able to see how ELKI can be used directly from Java, but now I want to read my data from MongoDB and then use ELKI to cluster geographic (long, lat) data.
So far I can only cluster data from a CSV file using ELKI. Is it possible to connect de.lmu.ifi.dbs.elki.database.Database with MongoDB? I can see from the Java debugger that there is a databaseconnection field in de.lmu.ifi.dbs.elki.database.Database.
I query MongoDB, creating a POJO for each row, and now I want to cluster these objects using ELKI.
It is possible to read the data from MongoDB, write it to a CSV file and then have ELKI read that CSV file, but I would like to know if there is a simpler solution.
---------FINDINGS_1:
From ELKI - Use List<String> of objects to populate the Database, I found that I need to implement de.lmu.ifi.dbs.elki.datasource.DatabaseConnection and specifically override the loadData() method, which returns an instance of MultipleObjectsBundle.
So I think I should wrap a list of POJOs in a MultipleObjectsBundle. Now I'm looking at MultipleObjectsBundle, and it looks like the data should be held in columns. Why is the columns datatype a list of lists? Shouldn't it be just a list of the items you want to cluster?
I'm a little confused. How is ELKI going to know that it should look at the long and lat of each POJO? Where do I tell ELKI to do this? Using de.lmu.ifi.dbs.elki.data.type.SimpleTypeInformation?
---------FINDINGS_2:
I have tried to use ArrayAdapterDatabaseConnection and I have tried implementing DatabaseConnection. Sorry, I need things in very simple terms for me to understand.
This is my code for clustering:
int minPts=3;
double eps=0.08;
double[][] data1 = {{-0.197574246, 51.49960695}, {-0.084605692, 51.52128377}, {-0.120973687, 51.53005939}, {-0.156876, 51.49313},
{-0.144228881, 51.51811784}, {-0.1680743, 51.53430039}, {-0.170134484,51.52834133}, { -0.096440751, 51.5073853},
{-0.092754157, 51.50597426}, {-0.122502346, 51.52395143}, {-0.136039674, 51.51991453}, {-0.123616824, 51.52994371},
{-0.127854211, 51.51772703}, {-0.125979294, 51.52635795}, {-0.109006325, 51.5216612}, {-0.12221963, 51.51477076}, {-0.131161087, 51.52505093} };
// ArrayAdapterDatabaseConnection dbcon = new ArrayAdapterDatabaseConnection(data1);
DatabaseConnection dbcon = new MyDBConnection();
ListParameterization params = new ListParameterization();
params.addParameter(de.lmu.ifi.dbs.elki.algorithm.clustering.DBSCAN.Parameterizer.MINPTS_ID, minPts);
params.addParameter(de.lmu.ifi.dbs.elki.algorithm.clustering.DBSCAN.Parameterizer.EPSILON_ID, eps);
params.addParameter(DBSCAN.DISTANCE_FUNCTION_ID, EuclideanDistanceFunction.class);
params.addParameter(AbstractDatabase.Parameterizer.DATABASE_CONNECTION_ID, dbcon);
params.addParameter(AbstractDatabase.Parameterizer.INDEX_ID,
RStarTreeFactory.class);
params.addParameter(RStarTreeFactory.Parameterizer.BULK_SPLIT_ID,
SortTileRecursiveBulkSplit.class);
params.addParameter(AbstractPageFileFactory.Parameterizer.PAGE_SIZE_ID, 1000);
Database db = ClassGenericsUtil.parameterizeOrAbort(StaticArrayDatabase.class, params);
db.initialize();
GeneralizedDBSCAN dbscan = ClassGenericsUtil.parameterizeOrAbort(GeneralizedDBSCAN.class, params);
Relation<DoubleVector> rel = db.getRelation(TypeUtil.DOUBLE_VECTOR_FIELD);
Relation<ExternalID> relID = db.getRelation(TypeUtil.EXTERNALID);
DBIDRange ids = (DBIDRange) rel.getDBIDs();
Clustering<Model> result = dbscan.run(db);
int i =0;
for(Cluster<Model> clu : result.getAllClusters()) {
System.out.println("#" + i + ": " + clu.getNameAutomatic());
System.out.println("Size: " + clu.size());
System.out.print("Objects: ");
for(DBIDIter it = clu.getIDs().iter(); it.valid(); it.advance()) {
DoubleVector v = rel.get(it);
ExternalID exID = relID.get(it);
System.out.print("DoubleVec: ["+v+"]");
System.out.print("ExID: ["+exID+"]");
final int offset = ids.getOffset(it);
System.out.print(" " + offset);
}
System.out.println();
++i;
}
The ArrayAdapterDatabaseConnection produces two clusters; I just had to play around with the value of epsilon. When I set epsilon=0.008, DBSCAN started creating clusters; when I set epsilon=0.04, all the items ended up in one cluster.
I have also tried to implement DatabaseConnection:
@Override
public MultipleObjectsBundle loadData() {
MultipleObjectsBundle bundle = new MultipleObjectsBundle();
List<Station> stations = getStations();
List<DoubleVector> vecs = new ArrayList<DoubleVector>();
List<ExternalID> ids = new ArrayList<ExternalID>();
for (Station s : stations){
String strID = Integer.toString(s.getId());
ExternalID i = new ExternalID(strID);
ids.add(i);
double[] st = {s.getLongitude(), s.getLatitude()};
DoubleVector dv = new DoubleVector(st);
vecs.add(dv);
}
SimpleTypeInformation<DoubleVector> type = new VectorFieldTypeInformation<>(DoubleVector.FACTORY, 2, 2, DoubleVector.FACTORY.getDefaultSerializer());
bundle.appendColumn(type, vecs);
bundle.appendColumn(TypeUtil.EXTERNALID, ids);
return bundle;
}
These long/lat values are associated with an ID, and I need to link the clustered values back to that ID. Is the only way to do that using the ID offset (as in the code above)? I have tried to add an ExternalID column, but I don't know how to retrieve the ExternalID for a particular NumberVector.
Also, after seeing Using ELKI's Distance Function, I tried to use ELKI's longLatDistance, but it doesn't work and I could not find any examples of how to implement it.
The interface for data sources is called DatabaseConnection.
JavaDoc of DatabaseConnection
You can implement a MongoDB-based DatabaseConnection to get the data.
It is not a complicated interface; it has a single method.
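For example, here is a minimal sketch of such an implementation, assuming the MongoDB Java driver 3.x and a hypothetical stations collection with longitude and latitude fields (adjust the database, collection, and field names to your schema); the ELKI side mirrors the loadData() shown in the question:
// Hedged sketch: a MongoDB-backed DatabaseConnection for ELKI.
import java.util.ArrayList;
import java.util.List;

import org.bson.Document;

import com.mongodb.MongoClient;
import com.mongodb.client.MongoCollection;

import de.lmu.ifi.dbs.elki.data.DoubleVector;
import de.lmu.ifi.dbs.elki.data.ExternalID;
import de.lmu.ifi.dbs.elki.data.type.TypeUtil;
import de.lmu.ifi.dbs.elki.data.type.VectorFieldTypeInformation;
import de.lmu.ifi.dbs.elki.datasource.DatabaseConnection;
import de.lmu.ifi.dbs.elki.datasource.bundle.MultipleObjectsBundle;

public class MongoDBConnection implements DatabaseConnection {
    @Override
    public MultipleObjectsBundle loadData() {
        List<DoubleVector> vecs = new ArrayList<>();
        List<ExternalID> ids = new ArrayList<>();
        MongoClient client = new MongoClient("localhost", 27017);
        try {
            MongoCollection<Document> coll = client.getDatabase("mydb").getCollection("stations");
            for (Document d : coll.find()) {
                ids.add(new ExternalID(d.getObjectId("_id").toHexString()));
                vecs.add(new DoubleVector(new double[] { d.getDouble("longitude"), d.getDouble("latitude") }));
            }
        } finally {
            client.close();
        }
        MultipleObjectsBundle bundle = new MultipleObjectsBundle();
        // One column per attribute: the 2-d vectors and their external IDs.
        bundle.appendColumn(new VectorFieldTypeInformation<>(DoubleVector.FACTORY, 2, 2,
            DoubleVector.FACTORY.getDefaultSerializer()), vecs);
        bundle.appendColumn(TypeUtil.EXTERNALID, ids);
        return bundle;
    }
}
Passing an instance of this class as the DATABASE_CONNECTION_ID parameter, as already done with MyDBConnection in the question's code, should then work unchanged.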
I am using a 3-node standalone Spark cluster with 1 master and 2 workers, along with a 2-node Cassandra ring. Here is a sample of what I am trying to do:
SparkConf conf = new SparkConf(true);
SparkContext sc = new SparkContext(HOST, APP_NAME, conf);
String query = "Select address from " + CASSANDRA_KEYSPACE + "." + CASSANDRA_COLUMN_FAMILY + " where ras_ = '01'";
CassandraSQLContext sqlContext = new CassandraSQLContext(sc);
DataFrame resultsFrame = sqlContext.sql(query);
JavaRDD<Row> resultsRDD = resultsFrame.javaRDD();
JavaRDD<String> dataRDD = resultsRDD.map(row -> row.getString(0));
dataRDD.saveAsTextFile("output");
From the System.out.println output, I know I have some data as a result of my query, but in my project home, in the output directory, the only files I am getting are _SUCCESS and ._SUCCESS.crc and none of the part-* files. Is this expected behavior? If not, where am I going wrong?
Well, it looks like we have the same situation here, since we both use more than one node: the file is not guaranteed to be saved on any particular node.
In my case, it was not saved on the master where I run the script, but on one of the slaves.
Hope this helps.
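If you want the output in a single predictable place, here is a small sketch under the assumption that your cluster has a shared filesystem such as HDFS (the namenode address and output path below are placeholders):
// Write to a path visible from every node instead of a node-local directory;
// all part-* files then land under the same HDFS directory.
dataRDD.saveAsTextFile("hdfs://namenode:8020/user/spark/output");
// For small result sets, an alternative is to collect to the driver and write there.
List<String> rows = dataRDD.collect();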