I need to upload a .csv file and save the records in Bigtable.
My application successfully parses 200 records from the CSV file and saves them to the table.
Here is my code to save the data:
for (int i = 0; i < lines.length - 1; i++) // lines holds all the records in the CSV file
{
    String line = lines[i];
    // Each record has 3 columns: integer, integer, Text
    if (line.length() > 15)
    {
        int n = line.indexOf(",");
        if (n > 0)
        {
            int ID = Integer.parseInt(line.substring(0, n));
            int n1 = line.indexOf(",", n + 2);
            if (n1 > n)
            {
                int Col1 = Integer.parseInt(line.substring(n + 1, n1));
                String Col2 = line.substring(n1 + 1);
                myTable uu = new myTable();
                uu.setId(ID);
                uu.setCol1(Col1);
                Text t = new Text(Col2);
                uu.setCol2(t);
                // A new PersistenceManager is opened and closed for every record
                PersistenceManager pm = PMF.get().getPersistenceManager();
                pm.makePersistent(uu);
                pm.close();
            }
        }
    }
}
But when the number of records grows, it gives a timeout error.
The CSV file may have up to 800 records.
Is it possible to do that in App Engine (something like a batch update)?
GAE limits your app's requests to 30 seconds, so you can't run a long task in a single request.
The best approach is to split the CSV into smaller chunks and process them individually, one after another. If you can only upload it as one big file, you can store it as binary data and then process it (split and parse) using the Task Queue (note that a task is also limited to 10 minutes per request, but you can always build a chain of tasks). Or you can use a backend to do the processing.
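A minimal sketch of that chunked, task-chained idea (the handler URL, the csvKey identifying where the raw CSV was stored, and the chunk size are assumptions for illustration, not part of the original post):

import com.google.appengine.api.taskqueue.Queue;
import com.google.appengine.api.taskqueue.QueueFactory;
import com.google.appengine.api.taskqueue.TaskOptions;

// Enqueue one task per slice of the uploaded CSV; each task parses and
// persists only its own slice, staying well inside the request deadline.
Queue queue = QueueFactory.getDefaultQueue();
int chunkSize = 100;                                   // illustrative chunk size
for (int offset = 0; offset < totalRecords; offset += chunkSize) {
    queue.add(TaskOptions.Builder
            .withUrl("/tasks/processCsvChunk")         // hypothetical task handler
            .param("csvKey", csvKey)                   // hypothetical key of the stored CSV
            .param("offset", Integer.toString(offset))
            .param("limit", Integer.toString(chunkSize)));
}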
You could store your CSV file in the Blobstore (gzipped or not) and use a MapReduce job to read and persist each line in the Datastore.
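Whichever route you take, batching the datastore writes also helps; opening a PersistenceManager per record, as in the question's loop, is expensive. A rough sketch of the same loop with batched persistence (assuming the same myTable and PMF classes from the question, and a chunkOfLines slice as discussed above):

import java.util.ArrayList;
import java.util.List;
import javax.jdo.PersistenceManager;

List<myTable> batch = new ArrayList<myTable>();
for (String line : chunkOfLines) {        // one chunk of the CSV, not the whole file
    // ... same parsing as in the question ...
    myTable uu = new myTable();
    // uu.setId(...); uu.setCol1(...); uu.setCol2(...);
    batch.add(uu);
}
PersistenceManager pm = PMF.get().getPersistenceManager();
try {
    pm.makePersistentAll(batch);          // one batched write instead of one per record
} finally {
    pm.close();
}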
I am trying to iterate the rows from TableResult using getValues() as below.
If I use getValues(), it retrieves only the first page of rows. I want to iterate all the rows using getValues() and NOT using iterateAll().
In the code below, the problem is that it runs forever; while (results.hasNextPage()) never ends. What is the problem in the code below?
{
    query = "select from aa.bb.cc";
    QueryJobConfiguration queryConfig =
            QueryJobConfiguration.newBuilder(query)
                    .setPriority(QueryJobConfiguration.Priority.BATCH)
                    .build();
    TableResult results = bigquery.query(queryConfig);
    int i = 0;
    int j = 0;
    while (results.hasNextPage()) {
        j++;
        System.out.println("page " + j);
        System.out.println("Data Extracted::" + i + " records");
        for (FieldValueList row : results.getNextPage().getValues()) {
            i++;
        }
    }
    System.out.println("Total Count::" + results.getTotalRows());
    System.out.println("Data Extracted::" + i + " records");
}
I have only 200,000 records in the source table. Below is the output; I forcefully stopped the process.
page 1
Data Extracted::0 records
page 2
Data Extracted::85242 records
page 3
Data Extracted::170484 records
page 4
Data Extracted::255726 records
page 5
Data Extracted::340968 records
page 6
Data Extracted::426210 records
page 7
Data Extracted::511452 records
page 8
Data Extracted::596694 records
.......
.......
.......
.......
In short, you need to update your TableResult variable with the value returned by getNextPage(). If you don't update it, you will keep looping over the same page again and again; that's why you are getting far more records than the table contains in your output.
If you check the following samples, Bigquery Pagination and Using Java Client Library, there are ways to deal with paginated results, although they are not specific to single-run queries.
As shown in the code below, which is partially based on the pagination sample, you need to use the output of getNextPage() to update the results variable and then perform the next iteration of the while loop, until all pages but the last have been iterated.
QueryRun.Java
package com.projects;
// [START bigquery_query]
import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryException;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.QueryJobConfiguration;
import com.google.cloud.bigquery.TableResult;
import com.google.cloud.bigquery.Job;
import com.google.cloud.bigquery.JobId;
import com.google.cloud.bigquery.FieldValueList;
import com.google.cloud.bigquery.JobInfo;
import com.google.cloud.bigquery.BigQuery.QueryResultsOption;
import java.util.UUID;
public class QueryRun {

    public static void main(String[] args) {
        String projectId = "bigquery-public-data";
        String datasetName = "covid19_ecdc_eu";
        String tableName = "covid_19_geographic_distribution_worldwide";
        String query =
                "SELECT * "
                        + " FROM `"
                        + projectId
                        + "."
                        + datasetName
                        + "."
                        + tableName
                        + "`"
                        + " LIMIT 100";
        System.out.println(query);
        query(query);
    }

    public static void query(String query) {
        try {
            BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();
            QueryJobConfiguration queryConfig = QueryJobConfiguration.newBuilder(query).build();

            // Create a job ID so that we can safely retry.
            JobId jobId = JobId.of(UUID.randomUUID().toString());
            Job queryJob = bigquery.create(JobInfo.newBuilder(queryConfig).setJobId(jobId).build());

            TableResult results = queryJob.getQueryResults(QueryResultsOption.pageSize(10));
            int i = 0;
            int j = 0;

            // get all paged data except last line
            while (results.hasNextPage()) {
                j++;
                for (FieldValueList row : results.getValues()) {
                    i++;
                }
                results = results.getNextPage();
                print_msg(i, j);
            }

            // last line run
            j++;
            for (FieldValueList row : results.getValues()) {
                i++;
            }
            print_msg(i, j);

            System.out.println("Query performed successfully.");
        } catch (BigQueryException | InterruptedException e) {
            System.out.println("Query not performed \n" + e.toString());
        }
    }

    public static void print_msg(int i, int j) {
        System.out.println("page " + j);
        System.out.println("Data Extracted::" + i + " records");
    }
}
// [END bigquery_query]
output:
SELECT * FROM `bigquery-public-data.covid19_ecdc_eu.covid_19_geographic_distribution_worldwide` LIMIT 100
page 1
Data Extracted::10 records
page 2
Data Extracted::20 records
page 3
Data Extracted::30 records
page 4
Data Extracted::40 records
page 5
Data Extracted::50 records
page 6
Data Extracted::60 records
page 7
Data Extracted::70 records
page 8
Data Extracted::80 records
page 9
Data Extracted::90 records
page 10
Data Extracted::100 records
Query performed successfully.
As a final note, there is no official sample about pagination for queries, so I'm not totally sure of the recommended way to handle pagination in Java; it isn't quite clear on the BigQuery for Java documentation page. If you can update your question with your approach to pagination, I would appreciate it.
If you have issues running the attached sample, please see the Using the BigQuery Java client sample, its GitHub page and the pom.xml file inside it, and check that you are in compliance with it.
I am probably late with this response, but reading the Java client guide (https://cloud.google.com/bigquery/docs/quickstarts/quickstart-client-libraries#complete_source_code), it says:
Iterate over the QueryResponse to get all the rows in the results. The iterator automatically handles pagination. Each FieldList exposes the columns by numeric index or column name.
That said, it should be easier to simply use the iterateAll() method.
Let me know if I am wrong.
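For reference, a minimal sketch of that iterateAll() approach (the method name and the record counting are just illustrative; it assumes the same client library as the answers above):

import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.FieldValueList;
import com.google.cloud.bigquery.QueryJobConfiguration;
import com.google.cloud.bigquery.TableResult;

public static void queryWithIterateAll(String query) throws InterruptedException {
    BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();
    QueryJobConfiguration queryConfig = QueryJobConfiguration.newBuilder(query).build();
    TableResult result = bigquery.query(queryConfig);

    long count = 0;
    for (FieldValueList row : result.iterateAll()) {   // iterateAll() fetches further pages transparently
        count++;
    }
    System.out.println("Data Extracted::" + count + " records");
}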
I am trying to execute a query over a table in BigQuery using its Java client libraries. I create a Job and then get the result of the Job using the job.getQueryResults().iterateAll() method.
This works, but for large results, around 600k rows, it takes about 80-120 seconds. I see that BigQuery fetches the data in batches of 40-45k rows, each of which takes around 5-7 seconds.
I want to get the results faster. I found on the internet that if we can get the temporary table created by BigQuery from the Job and read the data from that table in Avro or some other format, it will be really fast, but in the BigQuery API (version 1.124.7) I don't see a way to do that.
Does anyone know how to do that in Java, or how to get the data faster for a large number of records?
Any help is appreciated.
Code to read the table (takes 20 sec):
Table table = bigQueryHelper.getBigQueryClient().getTable(TableId.of("project", "dataset", "table"));
String format = "CSV";
String gcsUrl = "gs://name/test.csv";
Job job = table.extract(format, gcsUrl);
// Wait for the job to complete
try {
    Job completedJob = job.waitFor(RetryOption.initialRetryDelay(Duration.ofSeconds(1)),
            RetryOption.totalTimeout(Duration.ofMinutes(3)));
    if (completedJob != null && completedJob.getStatus().getError() == null) {
        log.info("job done");
        // Job completed successfully
    } else {
        log.info("job has error");
        // Handle error case
    }
} catch (InterruptedException e) {
    // Handle interrupted wait
}
Code to read the same table using a query (takes 90 sec):
Job job = bigQueryHelper.getBigQueryClient().getJob(JobId.of(jobId));
for (FieldValueList row : job.getQueryResults().iterateAll()) {
    System.out.println(row);
}
I tried several ways and, based on that, found the best way of doing it. I thought I would post it here to help someone in the future.
1: If we use job.getQueryResults().iterateAll() on the job, or directly on the table, it takes the same time. If we don't give a batch size, BigQuery uses a batch size of around 35-45k and fetches the data, so for 600k rows (180 MB) it takes 70-100 sec.
2: We can take the temporary table details from the created job and use the table's extract-job feature to write the result to GCS. This is faster and takes around 30-35 sec. This approach does not download the data locally; for that we again need to use iterateAll() on the temp table, which takes the same time as option 1.
Example pseudo code:
try {
    Job job = getBigQueryClient().getJob(JobId.of(jobId));
    long start = System.currentTimeMillis();
    // FieldList list = getFields(job);

    Job completedJob =
            job.waitFor(
                    RetryOption.initialRetryDelay(Duration.ofSeconds(1)),
                    RetryOption.totalTimeout(Duration.ofMinutes(3)));

    if (completedJob != null && completedJob.getStatus().getError() == null) {
        log.info("job done");
        String gcsUrl = "gs://bucketname/test";

        // getting the temp table information of the Job
        TableId destinationTableInfo =
                ((QueryJobConfiguration) job.getConfiguration()).getDestinationTable();
        log.info("Total time taken in getting schema ::{}", (System.currentTimeMillis() - start));

        Table table = bigQueryHelper.getBigQueryClient().getTable(destinationTableInfo);

        // Using extract job to write the data in GCS
        Job newJob1 =
                table.extract(
                        CsvOptions.newBuilder().setFieldDelimiter("\t").build().toString(), gcsUrl);
        System.out.println("DestinationInfo::" + destinationTableInfo);

        Job completedJob1 =
                newJob1.waitFor(
                        RetryOption.initialRetryDelay(Duration.ofSeconds(1)),
                        RetryOption.totalTimeout(Duration.ofMinutes(3)));

        if (completedJob1 != null && completedJob1.getStatus().getError() == null) {
            log.info("job done");
        } else {
            log.info("job has error");
        }
    } else {
        log.info("job has error");
    }
} catch (InterruptedException e) {
    e.printStackTrace();
}
3: This is the best way, and the one I wanted. It downloads/writes the result to a local file faster, in around 20 sec. This is the newer mechanism BigQuery provides (the Storage Read API) and can be checked using the links below:
https://cloud.google.com/bigquery/docs/reference/storage#background
https://cloud.google.com/bigquery/docs/reference/storage/libraries#client-libraries-install-java
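To round out option 3, a rough sketch of reading a table through the Storage Read API Java client (a sketch under the assumption of the google-cloud-bigquerystorage dependency; the project, dataset and table names are placeholders, and the Avro decoding step is only indicated in comments):

import com.google.api.gax.rpc.ServerStream;
import com.google.cloud.bigquery.storage.v1.BigQueryReadClient;
import com.google.cloud.bigquery.storage.v1.CreateReadSessionRequest;
import com.google.cloud.bigquery.storage.v1.DataFormat;
import com.google.cloud.bigquery.storage.v1.ReadRowsRequest;
import com.google.cloud.bigquery.storage.v1.ReadRowsResponse;
import com.google.cloud.bigquery.storage.v1.ReadSession;
import java.io.IOException;

public static void readWithStorageApi() throws IOException {
    try (BigQueryReadClient client = BigQueryReadClient.create()) {
        String parent = "projects/my-project";                                        // placeholder
        String srcTable = "projects/my-project/datasets/my_dataset/tables/my_table";  // placeholder
        ReadSession.Builder sessionBuilder =
                ReadSession.newBuilder().setTable(srcTable).setDataFormat(DataFormat.AVRO);
        ReadSession session = client.createReadSession(
                CreateReadSessionRequest.newBuilder()
                        .setParent(parent)
                        .setReadSession(sessionBuilder)
                        .setMaxStreamCount(1)
                        .build());

        ReadRowsRequest readRowsRequest = ReadRowsRequest.newBuilder()
                .setReadStream(session.getStreams(0).getName())
                .build();
        ServerStream<ReadRowsResponse> stream = client.readRowsCallable().call(readRowsRequest);
        for (ReadRowsResponse response : stream) {
            // Each response carries a block of Avro-encoded rows; decode
            // response.getAvroRows().getSerializedBinaryRows() with the Avro library,
            // using the writer schema from session.getAvroSchema().getSchema().
        }
    }
}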
I have a CSV file, basically a list of cities with some codes.
In my app, users enter their city of birth and a list of cities appears suggesting matches; when one is chosen, the city's code is used for other things.
Can I just move the .csv file into an Android Studio folder and use it as a database made with SQLite?
If not, should I make the SQLite database in Android Studio (a DatabaseManager class with SQLiteOpenHelper and some queries, if I got it right), and then copy the .csv into it? How can I just "copy" it?
EDIT: Sorry, but I realized that my CSV file had too many columns and that it would be ugly and tiring to add them manually. So I used DB Browser for SQLite, and now I have a .db file. Can I just put it in a specific database folder and query it in my app?
Can I just move the .csv file in an Android Studio folder and just use
it as a database made with sql lite?
No.
An SQLite database, i.e. the file, has to be formatted so that the SQLite routines can access the data enclosed therein; e.g. the first 16 bytes of the file MUST BE SQLite format 3\000, and so on, as per Database File Format.
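Purely to illustrate that point, a small sketch of checking the header (a raw CSV file will fail it, which is why SQLite cannot open one directly; the path handling is up to you):

import java.io.FileInputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

// Illustrative check only: does this file start with the 16-byte SQLite 3 header?
static boolean looksLikeSqlite3(String path) throws IOException {
    try (FileInputStream in = new FileInputStream(path)) {
        byte[] header = new byte[16];
        return in.read(header) == 16
                && new String(header, 0, 15, StandardCharsets.US_ASCII).equals("SQLite format 3")
                && header[15] == 0;   // the trailing \000 of "SQLite format 3\000"
    }
}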
If no, should I make the sql lite database in Android Studio (a
DatabaseManager class with SqlOpenHelper and some queries if i got
it), then copy the .csv?
You have various options e.g. :-
1. You could copy the csv file into an appropriate location so that it will be part of the package (e.g. the assets folder) and then have a routine to generate the appropriate rows in the appropriate table(s). This would require creating the database within the App.
2. You could simply hard code the inserts within the App. Again this would require creating the database within the App.
3. You could use an SQLite tool to create a pre-populated database, copy this into the assets folder (assets/databases if using SQLiteAssetHelper) and copy the database from the assets folder. No need to have a csv file in this case.
Example of option 1
As an example that is close to option 1 (albeit that the data isn't stored in the database), the following code extracts data from a CSV file in the assets folder.
This option was used in this case because the file changes on an annual basis, so changing the file and then distributing the App applies the changes.
The file looks like :-
# This file contains annual figures
# 5 figures are required for each year and are comma seperated
# 1) The year to which the figures are relevant
# 2) The annualised MTAWE (Male Total Average Weekly Earnings)
# 3) The annual Parenting Payment Single (used to determine fixed assessment)
# 4) The fixed assessment annual rate
# 5) The Child Support Minimum Annual Rate
# Lines starting with # are comments and are ignored
2006,50648,13040,1040,320
2007,52073,13315,1102,330
2008,54756,13980,1122,339
2009,56425,13980,1178,356
2010,58854,14615,1193,360
2011,61781,15909,1226,370
2012,64865,16679,1269,383
2013,67137,17256,1294,391
2014,70569,18197,1322,399
2015,70829,18728,1352,408
2016,71256,19011,1373,414
2017,72462,19201,1390,420
2018,73606,19568,1416,427
It is stored in the assets folder of the App as annual_changes.txt. The following code is used to obtain the values (which could easily be added to a table):
private void BuildFormulaValues() {
    mFormulaValues = new ArrayList<>();
    mYears = new ArrayList<>();
    StringBuilder errors = new StringBuilder();
    try {
        InputStream is = getAssets().open(formula_values_file);
        BufferedReader bf = new BufferedReader(new InputStreamReader(is));
        String line;
        while ((line = bf.readLine()) != null) {
            // Skip comment lines
            if (line.startsWith("#")) {
                continue;
            }
            String[] values = line.split(",");
            if (values.length == 5) {
                try {
                    mFormulaValues.add(
                            new FormulaValues(
                                    this,
                                    Long.parseLong(values[0]),
                                    Long.parseLong(values[1]),
                                    Long.parseLong(values[2]),
                                    Long.parseLong(values[3]),
                                    Long.parseLong(values[4])
                            )
                    );
                } catch (NumberFormatException e) {
                    if (errors.length() > 0) {
                        errors.append("\n");
                    }
                    errors.append(
                            this.getResources().getString(
                                    R.string.invalid_formula_value_notnumeric)
                    );
                    continue;
                }
                mYears.add(values[0]);
            } else {
                if (errors.length() > 0) {
                    errors.append("\n");
                }
                errors.append(
                        getResources().getString(
                                R.string.invalid_formula_value_line)
                );
            }
        }
    } catch (IOException ioe) {
        ioe.printStackTrace();
    }
    if (errors.length() > 0) {
        String emsg = "Note CS CareCalculations may be inaccurate due to the following issues:-\n\n"
                + errors.toString();
        Toast.makeText(
                this,
                emsg,
                Toast.LENGTH_SHORT
        ).show();
    }
}
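For option 3 (shipping the .db file produced by DB Browser for SQLite), a minimal sketch of copying a pre-built database out of the assets folder on first run; the assets/databases folder and the file name are assumptions:

import android.content.Context;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Copies assets/databases/<dbName> into the app's database directory once,
// after which it can be opened with SQLiteDatabase.openDatabase(...).
private void copyDatabaseFromAssets(Context context, String dbName) throws IOException {
    File dbFile = context.getDatabasePath(dbName);
    if (dbFile.exists()) {
        return;                                   // already copied on a previous run
    }
    dbFile.getParentFile().mkdirs();
    try (InputStream in = context.getAssets().open("databases/" + dbName);
         OutputStream out = new FileOutputStream(dbFile)) {
        byte[] buffer = new byte[8192];
        int read;
        while ((read = in.read(buffer)) > 0) {
            out.write(buffer, 0, read);
        }
    }
}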
Try this for adding the .csv info to your DB:
FileReader file = new FileReader(fileName);
BufferedReader buffer = new BufferedReader(file);
String line = "";
String tableName = "TABLE_NAME";
String columns = "_id, name, dt1, dt2, dt3";
String str1 = "INSERT INTO " + tableName + " (" + columns + ") values(";
String str2 = ");";

db.beginTransaction();
while ((line = buffer.readLine()) != null) {
    StringBuilder sb = new StringBuilder(str1);
    String[] str = line.split(",");
    // Quote every value and separate the values with commas
    sb.append("'" + str[0] + "',");
    sb.append("'" + str[1] + "',");
    sb.append("'" + str[2] + "',");
    sb.append("'" + str[3] + "',");
    sb.append("'" + str[4] + "'");
    sb.append(str2);
    db.execSQL(sb.toString());
}
db.setTransactionSuccessful();
db.endTransaction();
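Building SQL strings by hand is easy to get wrong (quoting) and open to injection; a sketch of the same loop using ContentValues instead, with the column names taken from the snippet above:

import android.content.ContentValues;
import java.io.BufferedReader;
import java.io.FileReader;

BufferedReader reader = new BufferedReader(new FileReader(fileName));
String line;
db.beginTransaction();
try {
    while ((line = reader.readLine()) != null) {
        String[] str = line.split(",");
        ContentValues values = new ContentValues();
        values.put("_id", str[0]);
        values.put("name", str[1]);
        values.put("dt1", str[2]);
        values.put("dt2", str[3]);
        values.put("dt3", str[4]);
        db.insert("TABLE_NAME", null, values);   // SQLite handles the quoting
    }
    db.setTransactionSuccessful();
} finally {
    db.endTransaction();
    reader.close();
}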
For my current project I'm using Cassandra for fetching data frequently. At least 30 DB requests hit every second, and for each request at least 40,000 rows need to be fetched from the DB. The following is my current code; this method returns a HashMap.
public Map<String, String> loadObject(ArrayList<Integer> tradigAccountList) {
    com.datastax.driver.core.Session session;
    Map<String, String> orderListMap = new HashMap<>();
    List<ResultSetFuture> futures = new ArrayList<>();
    List<ListenableFuture<ResultSet>> Future;
    try {
        session = jdbcUtils.getCassandraSession();
        PreparedStatement statement = jdbcUtils.getCassandraPS(CassandraPS.LOAD_ORDER_LIST);
        for (Integer tradingAccount : tradigAccountList) {
            futures.add(session.executeAsync(statement.bind(tradingAccount).setFetchSize(3000)));
        }
        Future = Futures.inCompletionOrder(futures);
        for (ListenableFuture<ResultSet> future : Future) {
            for (Row row : future.get()) {
                orderListMap.put(row.getString("cliordid"), row.getString("ordermsg"));
            }
        }
    } catch (Exception e) {
    } finally {
    }
    return orderListMap;
}
My data request query is something like this,
"SELECT cliordid,ordermsg FROM omsks_v1.ordersStringV1 WHERE tradacntid = ?".
My Cassandra cluster has 2 nodes, with 32 concurrent read and 32 concurrent write threads each, and my DB schema is as follows:
CREATE TABLE omsks_v1.ordersstringv1_copy1 (
tradacntid int,
cliordid text,
ordermsg text,
PRIMARY KEY (tradacntid, cliordid)
) WITH bloom_filter_fp_chance = 0.01
AND comment = ''
AND dclocal_read_repair_chance = 0.1
AND default_time_to_live = 0
AND gc_grace_seconds = 864000
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.0
AND speculative_retry = '99.0PERCENTILE'
AND caching = {
'keys' : 'ALL',
'rows_per_partition' : 'NONE'
}
AND compression = {
'sstable_compression' : 'LZ4Compressor'
}
AND compaction = {
'class' : 'SizeTieredCompactionStrategy'
};
My problem is that I am getting a Cassandra timeout exception. How can I optimize my code to handle all these requests?
It would be better if you attached a snippet of that exception (read/write exception). I assume you are getting a read timeout, since you are trying to fetch a large data set in a single request.
For each request at least 40000 rows needed to fetch from Db
If you have a large record set and the result set is too big, an exception is thrown when the results cannot be returned within the time limit set in cassandra.yaml:
read_request_timeout_in_ms
You can increase the timeout, but this is not a good option. It may resolve the issue (it may not throw the exception, but it will take more time to return the result).
Solution: for a big data set you can get the result using manual pagination (a range query) with a limit:
SELECT cliordid, ordermsg FROM omsks_v1.ordersStringV1
WHERE tradacntid = ? AND cliordid > ? LIMIT ?;
Or use a range query:
SELECT cliordid, ordermsg FROM omsks_v1.ordersStringV1
WHERE tradacntid = ? AND cliordid >= ? AND cliordid <= ?;
This will be much faster than fetching the whole result set at once.
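A driver-side sketch of that manual pagination loop (assumptions: the DataStax 3.x driver Session, tradingAccount and orderListMap from the question, the schema above, and an illustrative page size; cliordid is text, so the empty string is used as the starting lower bound):

import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Row;

PreparedStatement ps = session.prepare(
        "SELECT cliordid, ordermsg FROM omsks_v1.ordersstringv1 "
      + "WHERE tradacntid = ? AND cliordid > ? LIMIT ?");

String lastSeen = "";          // lower bound for the first page
int pageSize = 3000;           // illustrative page size
while (true) {
    ResultSet rs = session.execute(ps.bind(tradingAccount, lastSeen, pageSize));
    int fetched = 0;
    for (Row row : rs) {
        orderListMap.put(row.getString("cliordid"), row.getString("ordermsg"));
        lastSeen = row.getString("cliordid");   // remember where this page ended
        fetched++;
    }
    if (fetched < pageSize) {
        break;                 // fewer rows than the limit means the last page
    }
}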
You can also try reducing the fetch size via public Statement setFetchSize(int fetchSize) to check whether the exception is still thrown, although it will still return the whole result set eventually.
setFetchSize controls the page size, but it doesn't control the
maximum rows returned in a ResultSet.
Another point to be noted:
What's the size of tradigAccountList?
Too many requests at a time may also lead to a timeout. A large tradigAccountList combined with many read requests at the same time (load balancing of requests is handled by Cassandra, and how many requests can be handled depends on cluster size and some other factors) may cause this exception.
Some related Links:
Cassandra read timeout
NoHostAvailableException With Cassandra & DataStax Java Driver If Large ResultSet
Cassandra .setFetchSize() on statement is not honoured
I run the following code on my local (Mac) machine and on a remote Unix server:
public void deleteValue(final String id, final String value) {
    log.info("Removing value " + value);
    final Collection<String> valuesBeforeRemoval = getValues(id);
    final MutationBatch m = keyspace.prepareMutationBatch();
    m.withRow(VALUES_CF, id).deleteColumn(value);
    try {
        m.execute();
    } catch (final ConnectionException e) {
        log.error("Unable to delete location " + value, e);
    }
    final Collection<String> valuesAfterRemoval = getValues(id);
    if (valuesAfterRemoval.size() != (valuesBeforeRemoval.size() - 1)) {
        log.error("value " + value + " was supposed to be removed from list "
                + valuesBeforeRemoval + " but it wasn't: " + valuesAfterRemoval);
    }
    ...
}

protected Collection<String> getValues(final String id) {
    try {
        final OperationResult<ColumnList<String>> operationResult = keyspace
                .prepareQuery(VALUES_CF).getKey(id).execute();
        final ColumnList<String> result = operationResult.getResult();
        if (result.isEmpty()) {
            log.info("No value found for id: " + id);
            return new ArrayList<String>();
        }
        return result.getColumnNames();
    } catch (final ConnectionException e) {
        log.error("Unable to retrieve session " + id, e);
    }
    return new ArrayList<String>();
}
Locally, that line is never executed, which makes sense:
log.error("value " + value + " was supposed to be removed from list " + valuesBeforeRemoval + " but it wasn't: " + valuesAfterRemoval);
but that line is executed on my dev server:
[ERROR] [main] [n.o.w.s.d.SessionDaoCassandraImpl] [2013-03-08 13:12:24,801]
[] - value 3 was supposed to be removed from list [3, 2, 1, 0, 7, 6, 5, 4, 9, 8] but it wasn't: [3, 2, 1, 0, 7, 6, 5, 4, 9, 8]
I am using com.netflix.astyanax.
Both my local machine and the remote dev server connect to the very same Cassandra instance.
Both my local machine and the remote dev server run the very same test, creating a new row family and adding 10 records before one is deleted.
When the error occurs on dev, log.error("Unable to delete location " + value, e); was not executed (i.e. running the deletion command didn't produce any exception).
I am 100% positive that no other code is affecting the content of the database while I am running the test on dev, so this isn't some strange concurrency issue.
What could possibly explain that the deleteColumn(value) request runs without producing any error but still does not remove the column from the database?
ADDITIONAL INFO
Here is how I created the keyspace:
create keyspace sessiondata
with placement_strategy = 'org.apache.cassandra.locator.SimpleStrategy'
and strategy_options = {replication_factor:1};
Here is how I created the column family values, referenced as VALUES_CF in the code above:
create column family values
with comparator = UTF8Type
;
Here is how the keyspace referenced in the java code above is defined:
final AstyanaxContext.Builder contextBuilder = getBuilder();
final AstyanaxContext<Keyspace> keyspaceContext = contextBuilder
.forKeyspace(keyspaceName).buildKeyspace(
ThriftFamilyFactory.getInstance());
keyspaceContext.start();
keyspace = keyspaceContext.getEntity();
where getBuilder is:
private Builder getBuilder() {
final AstyanaxConfigurationImpl conf = new AstyanaxConfigurationImpl()
.setDiscoveryType(NodeDiscoveryType.NONE)
.setRetryPolicy(new RunOnce());
final ConnectionPoolConfigurationImpl poolConf = new ConnectionPoolConfigurationImpl("MyPool")
.setPort(port)
.setMaxConnsPerHost(1)
.setSeeds(value);
return new AstyanaxContext.Builder()
.forCluster(cluster)
.withAstyanaxConfiguration(conf)
.withConnectionPoolConfiguration(poolConf)
.withConnectionPoolMonitor(new CountingConnectionPoolMonitor());
}
SECOND UPDATE
First, the issues are not solely related to deletes. I observe similar problems when updating records in the database, reading them, and not being able to read the updates I just wrote
Second, I created a test that performs the following operations 100 times:
write a row into Cassandra
update that row in Cassandra
read back that row from Cassandra and check whether it was indeed updated, checking again regularly after delays if it wasn't
What I observe from that test is that:
again, when I run that code locally, all 100 iterations pass right away (no retry ever needed)
when I run that code on the remote server, some of the iterations pass and some fail; when they fail, no matter how large the delay (I wait up to 10 seconds), the test always fails.
At this point, I am really not sure how any Cassandra setup could explain this behavior, since I connect to the very same server for my tests and since the delays I insert are much larger than any additional latency I may incur when connecting from my local machine.
The only relevant difference seems to be which machine the code is running on.
THIRD UPDATE
If, in the test mentioned in the previous update, I insert a delay between the two writes, the code starts passing once the delay is >= 1,000 ms; a delay of, say, 100 ms doesn't help. I also modified the builder to set the default read and write consistency levels to the most demanding, ALL, and that had no impact on the results of the test (it still fails about half of the time unless the delay between writes is > 1 s):
final AstyanaxConfigurationImpl conf = new AstyanaxConfigurationImpl()
        .setDiscoveryType(NodeDiscoveryType.NONE)
        .setRetryPolicy(new RunOnce())
        .setDefaultReadConsistencyLevel(ConsistencyLevel.CL_ALL)
        .setDefaultWriteConsistencyLevel(ConsistencyLevel.CL_ALL);
To debug, try printing the full row instead of just the column names; by the full row I mean the column name, the column value and the timestamp. A long shot is that the clocks are wrong on one of your test machines and this is throwing out your tests on the other.
Another thing to double-check is that the ip is indeed what you think it is, in both your application and Cassandra. When you retrieve it, print it between delimiters, like println("-" + ip + "-"). Before and after the try block for the execute in deleteSecureLocation, do a get for only that column, not the entire row. I'm not too sure how to do that in Astyanax; on the cli it would be get[id][ip].
Something to keep in mind is that a delete won't fail even if there's nothing to delete. To Cassandra it's a write; the only thing that makes it a delete is that, on read, it's the latest timestamped entry against that row/column name.
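A rough Astyanax sketch of that suggested debugging read, fetching only the one column and printing its name, value and timestamp (a sketch assuming the same keyspace, VALUES_CF, id and value objects as in the question, and that the column value is a string):

import com.netflix.astyanax.connectionpool.OperationResult;
import com.netflix.astyanax.connectionpool.exceptions.ConnectionException;
import com.netflix.astyanax.model.Column;

try {
    OperationResult<Column<String>> opResult = keyspace
            .prepareQuery(VALUES_CF)
            .getKey(id)
            .getColumn(value)            // fetch just the column being deleted
            .execute();
    Column<String> column = opResult.getResult();
    System.out.println("-" + column.getName() + "-"
            + " value=" + column.getStringValue()
            + " timestamp=" + column.getTimestamp());
} catch (ConnectionException e) {
    // A NotFoundException (a ConnectionException subclass) here would mean
    // the column is really gone after the delete.
    e.printStackTrace();
}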