I'm trying to get all items from an Apache Ignite cache.
Currently I can get an individual item using
ClientCache<Integer, BinaryObject> cache = igniteClient.cache("myCache").withKeepBinary();
BinaryObject temp = cache.get(1);
To get all keys, I've tried the following:
try (QueryCursor<Entry<Integer, BinaryObject>> cursor = cache.query(new ScanQuery<Integer, BinaryObject>(null))) {
    for (Object p : cursor)
        System.out.println(p.toString());
}
This returns a list of org.apache.ignite.internal.client.thin.ClientCacheEntry which is internal, and I cannot call getValue.
How can I get all items for this cache?
By using an Iterator you can get all keys and values from the cache. Below is sample code that retrieves all values from the cache.
Iterator<Entry<Integer, BinaryObject>> itr = cache.iterator();
while (itr.hasNext()) {
    BinaryObject object = itr.next().getValue();
    System.out.println(object);
}
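Alternatively, the ScanQuery from the question works as-is if the loop variable is typed as javax.cache.Cache.Entry rather than Object; the cursor's entries implement that interface even though the concrete ClientCacheEntry class is internal, so getKey() and getValue() become callable. A sketch based on the question's own code:

try (QueryCursor<Entry<Integer, BinaryObject>> cursor =
        cache.query(new ScanQuery<Integer, BinaryObject>(null))) {
    // Typing the entries as Cache.Entry exposes getKey()/getValue().
    for (Entry<Integer, BinaryObject> entry : cursor) {
        System.out.println(entry.getKey() + " -> " + entry.getValue());
    }
}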
The following may help you to iterate over all the records in the cache.
import javax.cache.Cache.Entry;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.binary.BinaryObject;

public class App5BinaryObject {

    public static void main(String[] args) {
        Ignition.setClientMode(true);
        try (Ignite client = Ignition
                .start("/Users/amritharajherle/git_new/ignite-learning-by-examples/complete/cfg/ignite-config.xml")) {
            IgniteCache<BinaryObject, BinaryObject> cities = client.cache("City").withKeepBinary();
            int count = 0;
            for (Entry<BinaryObject, BinaryObject> entry : cities) {
                count++;
                BinaryObject key = entry.getKey();
                BinaryObject value = entry.getValue();
                System.out.println("CountyCode=" + key.field("COUNTRYCODE") + ", DISTRICT = " + value.field("DISTRICT")
                        + ", POPULATION = " + value.field("POPULATION") + ", NAME = " + value.field("NAME"));
            }
            System.out.println("total cities count = " + count);
        }
    }
}
Using the Ignite REST API we can fetch a certain number of records (pageSize). I searched a lot and finally found the API:
http://<Server_IP>:8080/ignite?cmd=qryscanexe&pageSize=10&cacheName=
Add an Authorization header for your Ignite cluster user.
I am trying to restrict the results of my BabelNet query to a specific (Babel)domain. To do that, I'm trying to find a way to compare the synsets' domains with the domain I need (Geographical). However, I'm having trouble getting the right output: although the two strings match, it still gives me the wrong output. I'm surely doing something wrong here, but I'm out of ideas.
After many trials, the following code was the one that gave me the nearest result to the desired output:
public class GeoRestrict {
    public static void main(String[] args) throws IOException {
        String file = "/path/to/file/testdata.txt";
        BabelNet bn = BabelNet.getInstance();
        BufferedReader br = new BufferedReader(new FileReader(file));
        String word = null;
        while ((word = br.readLine()) != null) {
            BabelNetQuery query = new BabelNetQuery.Builder(word)
                    .build();
            List<BabelSynset> wordSynset = bn.getSynsets(query);
            for (BabelSynset synset : wordSynset) {
                BabelSynsetID id = synset.getID();
                System.out.println("\n" + "Synset ID for " + word.toUpperCase() + " is: " + id);
                HashMap<Domain, Double> domains = synset.getDomains();
                Set<Domain> keys = domains.keySet();
                String keyString = domains.keySet().toString();
                List<String> categories = synset.getDomains().keySet().stream()
                        .map(domain -> ((BabelDomain) domain).getDomainString())
                        .collect(Collectors.toList());
                for (String category : categories) {
                    if (keyString.equals(category)) {
                        System.out.println("The word " + word + " has the domain " + category);
                    } else {
                        System.out.println("Nada! " + category);
                    }
                }
            }
        }
        br.close();
    }
}
The output looks like this:
Synset ID for TURIN is: bn:00077665n
Nada! Geography and places
Any ideas on how to solve this issue?
I found my own error. For the sake of completeness I'm posting it.
The BabelDomain needs to be declared and specified (before the while-loop), like this:
BabelDomain domain = BabelDomain.GEOGRAPHY_AND_PLACES;
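With that in place, the comparison can be a direct key lookup instead of comparing keySet().toString() (which includes the surrounding brackets, so it never equals a single category string). A minimal sketch of the fixed loop body, reusing the question's variables:

BabelDomain domain = BabelDomain.GEOGRAPHY_AND_PLACES;
for (BabelSynset synset : wordSynset) {
    // getDomains() is keyed by Domain values, so a containsKey check is enough.
    if (synset.getDomains().containsKey(domain)) {
        System.out.println("The word " + word + " has the domain " + domain.getDomainString());
    }
}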
I'm trying to delete all items in my DynamoDB table, but it does not work.
try {
    ScanRequest scanRequest = new ScanRequest().withTableName(table);
    ScanResult scanResult = null;
    do {
        if (Check.nonNull(scanResult)) {
            scanRequest.setExclusiveStartKey(scanResult.getLastEvaluatedKey());
        }
        scanResult = client.scan(scanRequest);
        scanResult.getItems().forEach((item) -> {
            String n1 = item.get("n1").toString();
            String n2 = item.get("n2").toString();
            DeleteItemSpec spec = new DeleteItemSpec().withPrimaryKey("n1", n1, "n2", n2);
            dynamodb.getTable(table).deleteItem(spec);
        });
    } while (Check.nonNull(scanResult.getLastEvaluatedKey()));
} catch (Exception e) {
    throw new BadRequestException(e);
}
n1 is my Primary partition key
n2 is my Primary sort key
The best approach to delete all the items from DynamoDB is to drop the table and recreate it.
Otherwise, scanning and deleting item by item consumes a lot of read and write capacity units, which will cost you.
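A minimal sketch of that approach, reusing the client and dynamodb (Document API) objects from the question; exception handling is omitted and the throughput settings are simply copied over:

// Capture the table's key schema and throughput, drop the table, then recreate it.
Table t = dynamodb.getTable(table);
TableDescription desc = t.describe();
t.delete();
t.waitForDelete(); // blocks until the table is actually gone
client.createTable(new CreateTableRequest()
        .withTableName(table)
        .withKeySchema(desc.getKeySchema())
        .withAttributeDefinitions(desc.getAttributeDefinitions())
        .withProvisionedThroughput(new ProvisionedThroughput(
                desc.getProvisionedThroughput().getReadCapacityUnits(),
                desc.getProvisionedThroughput().getWriteCapacityUnits())));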
PREAMBLE: While a scan operation is expensive, I needed this answer for initialising a table for a test scenario (low volume). The table was being created by another process and I needed the test scenario on that table, so I could not delete and recreate it.
ANSWER:
given:
DynamoDbClient db
static String TABLE_NAME
static String HASH_KEY
static String SORT_KEY
ScanIterable scanIterable = db.scanPaginator(ScanRequest.builder()
        .tableName(TABLE_NAME)
        .build());
for (ScanResponse scanResponse : scanIterable) {
    for (Map<String, AttributeValue> item : scanResponse.items()) {
        Map<String, AttributeValue> deleteKey = new HashMap<>();
        deleteKey.put(HASH_KEY, item.get(HASH_KEY));
        deleteKey.put(SORT_KEY, item.get(SORT_KEY));
        db.deleteItem(DeleteItemRequest.builder()
                .tableName(TABLE_NAME)
                .key(deleteKey)
                .build());
    }
}
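If the table holds more than a handful of items, issuing one DeleteItemRequest per item is slow. A hedged variant of the inner loop using BatchWriteItem (AWS SDK v2, at most 25 deletes per request; a sketch that omits retry handling for unprocessed items):

List<WriteRequest> writeRequests = new ArrayList<>();
for (Map<String, AttributeValue> item : scanResponse.items()) {
    Map<String, AttributeValue> deleteKey = new HashMap<>();
    deleteKey.put(HASH_KEY, item.get(HASH_KEY));
    deleteKey.put(SORT_KEY, item.get(SORT_KEY));
    writeRequests.add(WriteRequest.builder()
            .deleteRequest(DeleteRequest.builder().key(deleteKey).build())
            .build());
    if (writeRequests.size() == 25) { // BatchWriteItem caps at 25 write requests
        db.batchWriteItem(BatchWriteItemRequest.builder()
                .requestItems(Collections.singletonMap(TABLE_NAME, writeRequests))
                .build());
        writeRequests = new ArrayList<>();
    }
}
if (!writeRequests.isEmpty()) { // flush the remainder
    db.batchWriteItem(BatchWriteItemRequest.builder()
            .requestItems(Collections.singletonMap(TABLE_NAME, writeRequests))
            .build());
}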
To delete all the items from the table, first perform a scan operation over the table, which gives you a ScanOutcome. Then use an iterator to loop over the ScanOutcome, deleting each item by its primary key name and value. This is one approach to delete all the items from the table. I hope this code works for you. Thanks.
Table table = dynamoDB.getTable(your_table);
ItemCollection<ScanOutcome> deleteoutcome = table.scan();
Iterator<Item> iterator = deleteoutcome.iterator();
while (iterator.hasNext()) {
    // "PrimaryKey" is a placeholder for your partition key attribute name.
    table.deleteItem("PrimaryKey", iterator.next().get("PrimaryKey"));
}
// Maybe we can make it generic by reading the key schema first, as below:
String strPartitionKey = null;
String strSortKey = null;
TableDescription description = table.describe();
List<KeySchemaElement> schema = description.getKeySchema();
for (KeySchemaElement element : schema) {
    if (element.getKeyType().equalsIgnoreCase("HASH"))
        strPartitionKey = element.getAttributeName();
    if (element.getKeyType().equalsIgnoreCase("RANGE"))
        strSortKey = element.getAttributeName();
}
ItemCollection<ScanOutcome> deleteoutcome = table.scan();
Iterator<Item> iterator = deleteoutcome.iterator();
while (iterator.hasNext()) {
    Item next = iterator.next();
    if (strSortKey == null && strPartitionKey != null)
        table.deleteItem(strPartitionKey, next.get(strPartitionKey));
    else if (strPartitionKey != null && strSortKey != null)
        table.deleteItem(strPartitionKey, next.get(strPartitionKey), strSortKey, next.get(strSortKey));
}
I have created an Apache Ignite application with Spark.
Ignite version - 1.6.0
Spark version - 1.5.2 (built on Scala 2.11)
The application stores two tuples to an IgniteRDD.
When retrieve is called, the collect function takes more than 3 minutes.
More than 1000 jobs are submitted.
Code snippet:
import java.util.HashMap;
import java.util.LinkedList;
import java.util.List;
import java.util.Map;

import org.apache.ignite.spark.IgniteContext;
import org.apache.ignite.spark.IgniteRDD;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

import scala.Tuple2;

public class CopyOfMainIgnite {

    public static void main(String args[]) {
        SparkConf conf = new SparkConf().setAppName("Demo").setMaster(
                "spark://169.254.228.183:7077");
        System.out.println("Spark conf initialized.");
        JavaSparkContext sc = new JavaSparkContext(conf);
        sc.addJar("./target/IgnitePOC-0.0.1-SNAPSHOT-jar-with-dependencies.jar");
        System.out.println("Spark context initialized.");
        IgniteContext ic = new IgniteContext(sc.sc(),
                "ignite/client-default-config.xml");
        System.out.println("Ignite Context initialized.");
        String cacheName = "demo6";
        save(sc, ic, cacheName);
        retrieve(ic, cacheName);
        ic.close(false);
        sc.close();
    }

    private static void retrieve(IgniteContext ic, String cacheName) {
        System.out.println("Getting IgniteRDD saved.");
        IgniteRDD<String, String> javaIRDDRet = ic.fromCache(cacheName);
        long temp1 = System.currentTimeMillis();
        JavaRDD<Tuple2<String, String>> javardd = javaIRDDRet.toJavaRDD();
        System.out.println("Is empty Start Time: " + System.currentTimeMillis());
        System.out.println("javaIRDDRet.isEmpty(): " + javardd.isEmpty());
        System.out.println("Is empty End Time: " + System.currentTimeMillis());
        long temp2 = System.currentTimeMillis();
        long temp3 = System.currentTimeMillis();
        System.out.println("collect and println Start Time: " + System.currentTimeMillis());
        javardd.collect().forEach(System.out::println);
        System.out.println("collect and println End Time: " + System.currentTimeMillis());
        long temp4 = System.currentTimeMillis();
        System.out.println("Is empty : " + temp1 + " " + temp2
                + " Collect and print: " + temp3 + " " + temp4);
    }

    private static void save(JavaSparkContext sc, IgniteContext ic, String cacheName) {
        IgniteRDD<String, String> igniteRDD = ic.fromCache(cacheName);
        System.out.println("IgniteRDD from cache initialized.");
        Map<String, String> tempMap = new HashMap<String, String>();
        tempMap.put("Aditya", "Jain");
        tempMap.put("Pranjal", "Jaju");
        Tuple2<String, String> tempTuple1 = new Tuple2<String, String>("Aditya", "Jain");
        Tuple2<String, String> tempTuple2 = new Tuple2<String, String>("Pranjal", "Jaju");
        List<Tuple2<String, String>> list = new LinkedList<Tuple2<String, String>>();
        list.add(tempTuple1);
        list.add(tempTuple2);
        JavaPairRDD<String, String> jpr = sc.parallelizePairs(list, 4);
        System.out.println("Random RDD saved.");
        igniteRDD.savePairs(jpr.rdd(), false);
        System.out.println("IgniteRDD saved.");
    }
}
So my question: is it going to take 3-4 minutes to fetch two RDD tuples from Ignite and collect them in my process?
Or is my expectation wrong that it will respond in milliseconds?
After debugging I found that it creates 1024 partitions in the IgniteRDD, which causes it to fire 1024 jobs, and I am not finding any way to control the number of partitions.
You can decrease the number of partitions in the CacheConfiguration:
<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="affinity">
        <bean class="org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction">
            <property name="partitions" value="32"/>
        </bean>
    </property>
</bean>
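The same thing can be done programmatically when building the cache configuration; a minimal sketch (the cache name is taken from the question, the partition count is illustrative):

// 32 partitions instead of the default 1024 used by RendezvousAffinityFunction.
CacheConfiguration<String, String> ccfg = new CacheConfiguration<>("demo6");
ccfg.setAffinity(new RendezvousAffinityFunction(false, 32));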
You can also use the IgniteRDD.sql(..) and IgniteRDD.objectSql(..) methods to retrieve data directly from Ignite, utilizing its fast indexed search. For details on how to configure SQL in Ignite, refer to this page: https://apacheignite.readme.io/docs/sql-queries
I need to delete several test cases I have in Rally. The Rally website says that the only way around this problem is to communicate with the Rally API and write a small bulk-deletion script.
E.g. I need to delete TC100 - TC150.
Can anyone help me with this? I am using Java.
Thanks.
Per the Rally REST Toolkit for Java documentation, there is a delete method.
Here is a code example that queries test cases by a tag name and then bulk-deletes those test cases. Your query criteria will be different, but if you choose to identify test cases by tag, note that Tags.Name contains "tag1" returns test cases that may have more than one tag applied, and not only those that have a single "tag1" tag.
import com.google.gson.JsonArray;
import com.google.gson.JsonObject;
import com.rallydev.rest.RallyRestApi;
import com.rallydev.rest.request.DeleteRequest;
import com.rallydev.rest.request.QueryRequest;
import com.rallydev.rest.response.DeleteResponse;
import com.rallydev.rest.response.QueryResponse;
import com.rallydev.rest.util.Fetch;
import com.rallydev.rest.util.QueryFilter;

import java.net.URI;

public class GetTestCasesByTagAndBulkDelete {
    public static void main(String[] args) throws Exception {
        String host = "https://rally1.rallydev.com";
        String apiKey = "_abc123"; // use your api key
        String applicationName = "Find TestCases by Tag and bulk delete";
        String workspaceRef = "/workspace/12345";
        RallyRestApi restApi = null;
        try {
            restApi = new RallyRestApi(new URI(host), apiKey);
            restApi.setApplicationName(applicationName);
            QueryRequest request = new QueryRequest("TestCase");
            request.setWorkspace(workspaceRef);
            request.setFetch(new Fetch(new String[] {"Name", "FormattedID", "Tags"}));
            request.setLimit(1000);
            request.setScopedDown(false);
            request.setScopedUp(false);
            request.setQueryFilter(new QueryFilter("Tags.Name", "contains", "\"tag1\""));
            QueryResponse response = restApi.query(request);
            System.out.println("Successful: " + response.wasSuccessful());
            System.out.println("Results Size: " + response.getResults().size());
            for (int i = 0; i < response.getResults().size(); i++) {
                JsonObject tcJsonObject = response.getResults().get(i).getAsJsonObject();
                System.out.println("Name: " + tcJsonObject.get("Name") + " FormattedID: " + tcJsonObject.get("FormattedID"));
                int numberOfTags = tcJsonObject.getAsJsonObject("Tags").get("Count").getAsInt();
                QueryRequest tagRequest = new QueryRequest(tcJsonObject.getAsJsonObject("Tags"));
                tagRequest.setFetch(new Fetch("Name", "FormattedID"));
                // load the collection
                JsonArray tags = restApi.query(tagRequest).getResults();
                for (int j = 0; j < numberOfTags; j++) {
                    System.out.println("Tag Name: " + tags.get(j).getAsJsonObject().get("Name"));
                }
                System.out.println("deleting " + tcJsonObject.get("FormattedID"));
                DeleteRequest deleteRequest = new DeleteRequest(tcJsonObject.get("_ref").getAsString());
                DeleteResponse deleteResponse = restApi.delete(deleteRequest);
            }
        } finally {
            if (restApi != null) {
                restApi.close();
            }
        }
    }
}
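Since the question asks for a FormattedID range (TC100 - TC150) rather than a tag, the query filter could instead combine two conditions with QueryFilter.and(). This is a hedged sketch; verify how FormattedID range comparisons behave against your WSAPI version before relying on it:

// Hypothetical filter for test cases TC100 through TC150.
request.setQueryFilter(new QueryFilter("FormattedID", ">=", "TC100")
        .and(new QueryFilter("FormattedID", "<=", "TC150")));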
Below is my current code:
private void processData(StringBuffer reportBuffer) {
    String tableName = "db2_table_name";
    ArrayList<HashMap<String, String>> tableValue = FileData.get(tableName);
    String no = "No";
    String versionNumber = "VersionNumber";
    for (HashMap<String, String> fieldValue : tableValue) {
        String No = fieldValue.get(no);
        String Version = fieldValue.get(versionNumber);
        reportBuffer.append("No is: " + No + " and Version is: " + Version + NL);
    }
}
The current output of this is:
No is: 1. and Version is: 1.
No is: 1. and Version is: 2.
No is: 3. and Version is: 1.
No is: 3. and Version is: 2.
No is: 3. and Version is: 3.
What I am looking to do is keep only the latest version of each No while removing the elements for the older versions. So within my new ArrayList I would ideally want to have only:
No is: 1. and Version is: 2.
No is: 3. and Version is: 3.
Let me know if you need any clarifications!
Thanks!
You could create a new HashMap and save the latest version for each No inside that map.
You should be able to do that by saving each No and Version in the map as you go through the for loop you have there. However, every time before you save a value, check whether the map already contains the current No; if it does, don't save it directly, but check whether the stored version is smaller than the current one and replace it if so.
At the end, loop over the map and use it together with reportBuffer.append().
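For instance, a minimal sketch of this approach (field names are taken from the question; it assumes the version strings parse cleanly as integers):

Map<String, Integer> latestVersions = new HashMap<String, Integer>();
for (HashMap<String, String> fieldValue : tableValue) {
    String no = fieldValue.get("No");
    int version = Integer.parseInt(fieldValue.get("VersionNumber"));
    Integer stored = latestVersions.get(no);
    // Keep only the highest version seen so far for this No.
    if (stored == null || stored < version) {
        latestVersions.put(no, version);
    }
}
for (Map.Entry<String, Integer> entry : latestVersions.entrySet()) {
    reportBuffer.append("No is: " + entry.getKey() + " and Version is: " + entry.getValue() + NL);
}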
Here's one way.
Keep a Map<String, Integer> lowestAmongMaps = new HashMap<String, Integer>();
In your for loop, add the lowest value for each key (each No) to this map.
Basically, something like this:
Integer currentLowest = lowestAmongMaps.get(No);
if (currentLowest == null) {
    currentLowest = Integer.MAX_VALUE;
}
int currentVal = Integer.parseInt(Version);
if (currentVal < currentLowest.intValue()) {
    lowestAmongMaps.put(No, currentVal);
}
Then, in a second pass, remove all keys which are not the lowest. So basically you have two iterations over
for (HashMap<String, String> fieldValue : tableValue)
If it's a database, the best way would be to modify the query to return only the max version from FileData.get(). But I suppose it's a file, so a simple solution would be to keep a third map to store the report values. E.g.:
private void processData(StringBuffer reportBuffer) {
    String tableName = "db2_table_name";
    ArrayList<HashMap<String, String>> tableValue = FileData.get(tableName);
    String no = "No";
    String versionNumber = "VersionNumber";
    Map<String, String> mergedMap = new HashMap<String, String>();
    for (HashMap<String, String> fieldValue : tableValue) {
        String No = fieldValue.get(no);
        String Version = fieldValue.get(versionNumber);
        if (mergedMap.containsKey(No)) {
            Integer previousVersion = Integer.valueOf(mergedMap.get(No));
            Integer currentVersion = Integer.valueOf(Version);
            if (currentVersion > previousVersion) {
                mergedMap.put(No, Version);
            }
        } else {
            mergedMap.put(No, Version);
        }
    }
    for (Entry<String, String> entry : mergedMap.entrySet()) {
        reportBuffer.append("No is: " + entry.getKey() + " and Version is: " + entry.getValue() + NL);
    }
}
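For what it's worth, on Java 8 the merge can also be written compactly with streams; a sketch under the same assumption that the version strings parse as integers:

Map<String, String> mergedMap = tableValue.stream()
        .collect(Collectors.toMap(
                row -> row.get("No"),            // key: the No field
                row -> row.get("VersionNumber"), // value: the version string
                (v1, v2) -> Integer.parseInt(v1) >= Integer.parseInt(v2) ? v1 : v2)); // keep the higher version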