SimpleKMeans doesn't handle iris.arff - Java

I have the class below, which I built following the examples on the wiki and in a thesis. Why can't SimpleKMeans handle the data? The class can print the DataSource dados, so there is nothing wrong with reading the file; the error occurs in the build.
package slcct;
import weka.clusterers.ClusterEvaluation;
import weka.clusterers.SimpleKMeans;
import weka.core.Instance;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;
public class Cluster {

    public String path;
    public Instances dados;
    public String[] options = new String[2];

    public Cluster(String caminho, int nclusters, int seed) {
        this.path = caminho;
        this.options[0] = String.valueOf(nclusters);
        this.options[1] = String.valueOf(seed);
    }

    public void ledados() throws Exception {
        DataSource source = new DataSource(path);
        dados = source.getDataSet();
        System.out.println(dados);
        if (dados.classIndex() == -1) {
            dados.setClassIndex(dados.numAttributes() - 1);
        }
    }

    public void imprimedados() {
        for (int i = 0; i < dados.numInstances(); i++) {
            Instance actual = dados.instance(i);
            System.out.println((i + 1) + " : " + actual);
        }
    }

    public void clustering() throws Exception {
        SimpleKMeans cluster = new SimpleKMeans();
        cluster.setOptions(options);
        cluster.setDisplayStdDevs(true);
        cluster.getMaxIterations();
        cluster.buildClusterer(dados);
        Instances ClusterCenter = cluster.getClusterCentroids();
        Instances SDev = cluster.getClusterStandardDevs();
        int[] ClusterSize = cluster.getClusterSizes();
        ClusterEvaluation eval = new ClusterEvaluation();
        eval.setClusterer(cluster);
        eval.evaluateClusterer(dados);
        for (int i = 0; i < ClusterCenter.numInstances(); i++) {
            System.out.println("Cluster#" + (i + 1) + ": " + ClusterSize[i] + " instances.");
            System.out.println("Centroid: " + ClusterCenter.instance(i));
            System.out.println("STDDEV: " + SDev.instance(i));
            System.out.println("Cluster Evaluation: " + eval.clusterResultsToString());
        }
    }
}
The error:
weka.core.WekaException: weka.clusterers.SimpleKMeans: Cannot handle any class attribute!
at weka.core.Capabilities.test(Capabilities.java:1097)
at weka.core.Capabilities.test(Capabilities.java:1018)
at weka.core.Capabilities.testWithFail(Capabilities.java:1297)
at weka.clusterers.SimpleKMeans.buildClusterer(SimpleKMeans.java:228)
at slcct.Cluster.clustering(Cluster.java:53)//Here.
at slcct.Clustering.jButton1ActionPerformed(Clustering.java:104)

I believe you do not need to set the class index, as you are doing clustering and not classification. Try following this guide for programmatic Java clustering.

In your ledados() function, just remove the code block shown below. It will work, because your data has no defined class attribute.
if (dados.classIndex() == -1) {
    dados.setClassIndex(dados.numAttributes() - 1);
}
Your new function:
public void ledados() throws Exception {
    DataSource source = new DataSource(path);
    dados = source.getDataSet();
    System.out.println(dados);
}

You do not need a class attribute in the data when doing k-means clustering.
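If your ARFF file does come with a class attribute (in iris.arff the class is the last attribute), another common approach is to strip that attribute before clustering instead of setting it. Here is a minimal sketch, placed in clustering() before buildClusterer(); the use of Weka's Remove filter and the dadosSemClasse name are illustrative, not part of the original code:

import weka.filters.Filter;
import weka.filters.unsupervised.attribute.Remove;

// Drop the class column so SimpleKMeans never sees a class attribute.
Remove remove = new Remove();
remove.setAttributeIndices("last"); // iris.arff stores the class as the last attribute
remove.setInputFormat(dados);
Instances dadosSemClasse = Filter.useFilter(dados, remove);
cluster.buildClusterer(dadosSemClasse);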

Related

Looking for sample code to read a parameter value from AWS Parameter Store

I am looking for sample Java code to read Parameter Store values, such as an RDS connection string, from the AWS Parameter Store. I would appreciate code or any reference links. Thanks.
Here is an AWS SDK for Java V2 (not V1) example that reads a specific parameter value from the AWS Parameter Store:
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.ssm.SsmClient;
import software.amazon.awssdk.services.ssm.model.GetParameterRequest;
import software.amazon.awssdk.services.ssm.model.GetParameterResponse;
import software.amazon.awssdk.services.ssm.model.SsmException;
public class GetParameter {

    public static void main(String[] args) {

        final String USAGE = "\n" +
                "Usage:\n" +
                "    GetParameter <paraName>\n\n" +
                "Where:\n" +
                "    paraName - the name of the parameter\n";

        if (args.length < 1) {
            System.out.println(USAGE);
            System.exit(1);
        }

        // Get args
        String paraName = args[0];

        Region region = Region.US_EAST_1;
        SsmClient ssmClient = SsmClient.builder()
                .region(region)
                .build();

        try {
            GetParameterRequest parameterRequest = GetParameterRequest.builder()
                    .name(paraName)
                    .build();

            GetParameterResponse parameterResponse = ssmClient.getParameter(parameterRequest);
            System.out.println("The parameter value is " + parameterResponse.parameter().value());

        } catch (SsmException e) {
            System.err.println(e.getMessage());
            System.exit(1);
        }
    }
}
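If the value you need (for example an RDS connection string) is stored as a SecureString, the request also has to ask SSM to decrypt it. A minimal V2 sketch of just that request, reusing the paraName variable from the example above:

GetParameterRequest parameterRequest = GetParameterRequest.builder()
        .name(paraName)
        .withDecryption(true) // required for SecureString parameters
        .build();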
If you are on the AWS SDK for Java V1, the equivalent uses the AWSSimpleSystemsManagement client:
import com.amazonaws.services.simplesystemsmanagement.AWSSimpleSystemsManagement;
import com.amazonaws.services.simplesystemsmanagement.AWSSimpleSystemsManagementClientBuilder;
import com.amazonaws.services.simplesystemsmanagement.model.GetParametersRequest;
import com.amazonaws.services.simplesystemsmanagement.model.GetParametersResult;
...
private static AWSSimpleSystemsManagement ssmclient = AWSSimpleSystemsManagementClientBuilder
        .standard().withRegion(System.getProperty("SystemsManagerRegion")).build();
...
GetParametersRequest paramRequest = new GetParametersRequest()
        .withNames(parameterName).withWithDecryption(encrypted);
GetParametersResult paramResult = ssmclient.getParameters(paramRequest);
I think GitHub may be of help. I searched for SsmClient getParameter language:java and some of the results seem promising.
This one for example:
public static String getDiscordToken(SsmClient ssmClient) {
GetParameterRequest request = GetParameterRequest.builder().
name("/discord/token").
withDecryption(Boolean.TRUE).
build();
GetParameterResponse response = ssmClient.getParameter(request);
return response.parameter().value();
}

How to invoke model from TensorFlow Java?

The following python code passes ["hello", "world"] into the universal sentence encoder and returns an array of floats denoting their encoded representation.
import tensorflow as tf
import tensorflow_hub as hub
module = hub.KerasLayer("https://tfhub.dev/google/universal-sentence-encoder/4")
model = tf.keras.Sequential(module)
print("model: ", model(["hello", "world"]))
This code works but I'd now like to do the same thing using the Java API. I've successfully loaded the module, but I am unable to pass inputs into the model and extract the output. Here is what I've got so far:
import org.tensorflow.Graph;
import org.tensorflow.SavedModelBundle;
import org.tensorflow.Session;
import org.tensorflow.Tensor;
import org.tensorflow.Tensors;
import org.tensorflow.framework.ConfigProto;
import org.tensorflow.framework.GPUOptions;
import org.tensorflow.framework.GraphDef;
import org.tensorflow.framework.MetaGraphDef;
import org.tensorflow.framework.NodeDef;
import org.tensorflow.util.SaverDef;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;
public final class NaiveBayesClassifier
{
public static void main(String[] args)
{
new NaiveBayesClassifier().run();
}
protected SavedModelBundle loadModule(Path source, String... tags) throws IOException
{
return SavedModelBundle.load(source.toAbsolutePath().normalize().toString(), tags);
}
public void run()
{
try (SavedModelBundle module = loadModule(Paths.get("universal-sentence-encoder"), "serve"))
{
Graph graph = module.graph();
try (Session session = new Session(graph, ConfigProto.newBuilder().
setGpuOptions(GPUOptions.newBuilder().setAllowGrowth(true)).
setAllowSoftPlacement(true).
build().toByteArray()))
{
Tensor<String> input = Tensors.create(new byte[][]
{
"hello".getBytes(StandardCharsets.UTF_8),
"world".getBytes(StandardCharsets.UTF_8)
});
List<Tensor<?>> result = session.runner().feed("serving_default_inputs", input).
addTarget("???").run();
}
}
catch (IOException e)
{
e.printStackTrace();
}
}
}
I used https://stackoverflow.com/a/51952478/14731 to scan the model for possible input/output nodes. I believe the input node is "serving_default_inputs" but I can't figure out the output node. More importantly, I don't have to specify any of these values when invoking the code in python through Keras so is there a way to do the same using the Java API?
UPDATE: Thanks to roywei I can now confirm that the input node is serving_default_input and the output node is StatefulPartitionedCall_1, but when I plug these names into the aforementioned code I get:
2020-05-22 22:13:52.266287: W tensorflow/core/framework/op_kernel.cc:1651] OP_REQUIRES failed at lookup_table_op.cc:809 : Failed precondition: Table not initialized.
Exception in thread "main" java.lang.IllegalStateException: [_Derived_]{{function_node __inference_pruned_6741}} {{function_node __inference_pruned_6741}} Error while reading resource variable EncoderDNN/DNN/ResidualHidden_0/dense/kernel/part_25 from Container: localhost. This could mean that the variable was uninitialized. Not found: Resource localhost/EncoderDNN/DNN/ResidualHidden_0/dense/kernel/part_25/class tensorflow::Var does not exist.
[[{{node EncoderDNN/DNN/ResidualHidden_0/dense/kernel/ConcatPartitions/concat/ReadVariableOp_25}}]]
[[StatefulPartitionedCall_1/StatefulPartitionedCall]]
at libtensorflow#1.15.0/org.tensorflow.Session.run(Native Method)
at libtensorflow#1.15.0/org.tensorflow.Session.access$100(Session.java:48)
at libtensorflow#1.15.0/org.tensorflow.Session$Runner.runHelper(Session.java:326)
at libtensorflow#1.15.0/org.tensorflow.Session$Runner.run(Session.java:276)
Meaning, I still cannot invoke the model. What am I missing?
I figured it out after roywei pointed me in the right direction.
I needed to use SavedModelBundle.session() instead of constructing my own instance. This is because the loader initializes the graph variables.
Instead of passing a ConfigProto to the Session constructor, I passed it into the SavedModelBundle loader instead.
I needed to use fetch() instead of addTarget() to retrieve the output tensor.
Here is the working code:
public final class NaiveBayesClassifier
{
public static void main(String[] args)
{
new NaiveBayesClassifier().run();
}
public void run()
{
try (SavedModelBundle module = loadModule(Paths.get("universal-sentence-encoder"), "serve"))
{
try (Tensor<String> input = Tensors.create(new byte[][]
{
"hello".getBytes(StandardCharsets.UTF_8),
"world".getBytes(StandardCharsets.UTF_8)
}))
{
MetaGraphDef metadata = MetaGraphDef.parseFrom(module.metaGraphDef());
Map<String, Shape> nameToInput = getInputToShape(metadata);
String firstInput = nameToInput.keySet().iterator().next();
Map<String, Shape> nameToOutput = getOutputToShape(metadata);
String firstOutput = nameToOutput.keySet().iterator().next();
System.out.println("input: " + firstInput);
System.out.println("output: " + firstOutput);
System.out.println();
List<Tensor<?>> result = module.session().runner().feed(firstInput, input).
fetch(firstOutput).run();
for (Tensor<?> tensor : result)
{
float[][] array = new float[tensor.numDimensions()][tensor.numElements() /
tensor.numDimensions()];
tensor.copyTo(array);
System.out.println(Arrays.deepToString(array));
}
}
}
catch (IOException e)
{
e.printStackTrace();
}
}
/**
 * Loads a graph from a file.
 *
 * @param source the directory to load from
 * @param tags the model variant(s) to load
 * @return the graph
 * @throws NullPointerException if any of the arguments are null
 * @throws IOException if an error occurs while reading the file
 */
protected SavedModelBundle loadModule(Path source, String... tags) throws IOException
{
// https://stackoverflow.com/a/43526228/14731
try
{
return SavedModelBundle.loader(source.toAbsolutePath().normalize().toString()).
withTags(tags).
withConfigProto(ConfigProto.newBuilder().
setGpuOptions(GPUOptions.newBuilder().setAllowGrowth(true)).
setAllowSoftPlacement(true).
build().toByteArray()).
load();
}
catch (TensorFlowException e)
{
throw new IOException(e);
}
}
/**
 * @param metadata the graph metadata
 * @return the first signature, or null
 */
private SignatureDef getFirstSignature(MetaGraphDef metadata)
{
Map<String, SignatureDef> nameToSignature = metadata.getSignatureDefMap();
if (nameToSignature.isEmpty())
return null;
return nameToSignature.get(nameToSignature.keySet().iterator().next());
}
/**
 * @param metadata the graph metadata
 * @return the output signature
 */
private SignatureDef getServingSignature(MetaGraphDef metadata)
{
return metadata.getSignatureDefOrDefault("serving_default", getFirstSignature(metadata));
}
/**
 * @param metadata the graph metadata
 * @return a map from an output name to its shape
 */
protected Map<String, Shape> getOutputToShape(MetaGraphDef metadata)
{
Map<String, Shape> result = new HashMap<>();
SignatureDef servingDefault = getServingSignature(metadata);
for (Map.Entry<String, TensorInfo> entry : servingDefault.getOutputsMap().entrySet())
{
TensorShapeProto shapeProto = entry.getValue().getTensorShape();
List<Dim> dimensions = shapeProto.getDimList();
long firstDimension = dimensions.get(0).getSize();
long[] remainingDimensions = dimensions.stream().skip(1).mapToLong(Dim::getSize).toArray();
Shape shape = Shape.make(firstDimension, remainingDimensions);
result.put(entry.getValue().getName(), shape);
}
return result;
}
/**
 * @param metadata the graph metadata
 * @return a map from an input name to its shape
 */
protected Map<String, Shape> getInputToShape(MetaGraphDef metadata)
{
Map<String, Shape> result = new HashMap<>();
SignatureDef servingDefault = getServingSignature(metadata);
for (Map.Entry<String, TensorInfo> entry : servingDefault.getInputsMap().entrySet())
{
TensorShapeProto shapeProto = entry.getValue().getTensorShape();
List<Dim> dimensions = shapeProto.getDimList();
long firstDimension = dimensions.get(0).getSize();
long[] remainingDimensions = dimensions.stream().skip(1).mapToLong(Dim::getSize).toArray();
Shape shape = Shape.make(firstDimension, remainingDimensions);
result.put(entry.getValue().getName(), shape);
}
return result;
}
}
There are two ways to get the names:
1) Using Java:
You can read the input and output names from the org.tensorflow.proto.framework.MetaGraphDef stored in the saved model bundle.
Here is an example on how to extract the information:
https://github.com/awslabs/djl/blob/master/tensorflow/tensorflow-engine/src/main/java/ai/djl/tensorflow/engine/TfSymbolBlock.java#L149
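For reference, a minimal sketch of that idea against the TF 1.15 Java API used in the question (the module variable is the SavedModelBundle loaded above; package names differ in newer bindings, so treat this as illustrative):

// Print the tensor names behind the "serving_default" signature
MetaGraphDef metadata = MetaGraphDef.parseFrom(module.metaGraphDef());
SignatureDef serving = metadata.getSignatureDefOrThrow("serving_default");
serving.getInputsMap().forEach((name, info) ->
        System.out.println("input  " + name + " -> " + info.getName()));
serving.getOutputsMap().forEach((name, info) ->
        System.out.println("output " + name + " -> " + info.getName()));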
2) Using Python:
Load the saved model in TensorFlow (Python) and print the names:
loaded = tf.saved_model.load("path/to/model/")
print(list(loaded.signatures.keys()))
infer = loaded.signatures["serving_default"]
print(infer.structured_outputs)
I recommend taking a look at Deep Java Library; it automatically handles the input and output names.
It supports TensorFlow 2.1.0 and allows you to load Keras models as well as TF Hub SavedModels. Take a look at the documentation here and here.
Feel free to open an issue if you have problem loading your model.
You can load a TF model with Deep Java Library:
System.setProperty("ai.djl.repository.zoo.location", "https://storage.googleapis.com/tfhub-modules/google/universal-sentence-encoder/1.tar.gz?artifact_id=encoder");
Criteria<NDList, NDList> criteria =
        Criteria.builder()
                .setTypes(NDList.class, NDList.class)
                .optArtifactId("ai.djl.localmodelzoo:encoder")
                .build();
ZooModel<NDList, NDList> model = ModelZoo.loadModel(criteria);
See https://github.com/awslabs/djl/blob/master/docs/load_model.md#load-model-from-a-url for detail
I need to do the same, but there still seem to be a lot of missing pieces regarding DJL usage. For example, what do I do after this?
ZooModel<NDList, NDList> model = ModelZoo.loadModel(criteria);
I finally found an example in the DJL source code. The key takeaway is not to use NDList for the input/output at all:
Criteria<String[], float[][]> criteria =
        Criteria.builder()
                .optApplication(Application.NLP.TEXT_EMBEDDING)
                .setTypes(String[].class, float[][].class)
                .optModelUrls(modelUrl)
                .build();

try (ZooModel<String[], float[][]> model = ModelZoo.loadModel(criteria);
     Predictor<String[], float[][]> predictor = model.newPredictor()) {
    return predictor.predict(inputs.toArray(new String[0]));
}
See https://github.com/awslabs/djl/blob/master/examples/src/main/java/ai/djl/examples/inference/UniversalSentenceEncoder.java for the complete example.

How do I create a very simple rule using Apache Calcite and use it on Apache Flink?

I have this application in Flink which uses the Table API to print data from a source. The official documentation of Flink says that the Table API uses Calcite at its core to translate and optimize query plans. It doesn't describe this in much depth, so I went to the source code and tried to copy some of the code from there. But, as far as I saw, they use Calcite rules as well.
What if I want to implement my own rule? Is it possible? How do I implement a simple rule in Calcite to change the parameter of a filter, for example?
Here is my code
public class HelloWorldCalcitePlanTableAPI {
private static final Logger logger = LoggerFactory.getLogger(HelloWorldCalcitePlanTableAPI.class);
private static final String TICKETS_STATION_01_PLATFORM_01 = "TicketsStation01Plat01";
public static void main(String[] args) throws Exception {
new HelloWorldCalcitePlanTableAPI("127.0.0.1", "127.0.0.1");
}
public HelloWorldCalcitePlanTableAPI(String ipAddressSource01, String ipAddressSink) throws Exception {
// Start streaming from fake data source sensors
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);
// StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env, tableConfig);
StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);
// Calcite configuration file to change the query execution plan
// CalciteConfig cc = tableEnv.getConfig().getCalciteConfig();
CalciteConfig cc = new CalciteConfigBuilder()
.addNormRuleSet(RuleSets.ofList(MyFilterReduceExpressionRule.FILTER_INSTANCE))
.replaceDecoRuleSet(RuleSets.ofList(MyDataStreamRule.INSTANCE))
.build();
tableEnv.getConfig().setCalciteConfig(cc);
// obtain query configuration from TableEnvironment
StreamQueryConfig qConfig = tableEnv.queryConfig();
qConfig.withIdleStateRetentionTime(Time.minutes(30), Time.hours(2));
// Register Data Source Stream tables in the table environment
tableEnv.registerTableSource(TICKETS_STATION_01_PLATFORM_01,
new MqttSensorTableSource(ipAddressSource01, TOPIC_STATION_01_PLAT_01_TICKETS));
Table result = tableEnv.scan(TICKETS_STATION_01_PLATFORM_01)
.filter(VALUE + " >= 50 && " + VALUE + " <= 100 && " + VALUE + " >= 50")
;
tableEnv.toAppendStream(result, Row.class).print();
System.out.println("Execution plan ........................ ");
System.out.println(env.getExecutionPlan());
System.out.println("Plan explanation ........................ ");
System.out.println(tableEnv.explain(result));
System.out.println("........................ ");
System.out.println("NormRuleSet: " + cc.getNormRuleSet().isDefined());
System.out.println("LogicalOptRuleSet: " + cc.getLogicalOptRuleSet().isDefined());
System.out.println("PhysicalOptRuleSet: " + cc.getPhysicalOptRuleSet().isDefined());
System.out.println("DecoRuleSet: " + cc.getDecoRuleSet().isDefined());
// @formatter:on
env.execute("HelloWorldCalcitePlanTableAPI");
}
}
public class MyDataStreamRule extends RelOptRule {
public static final MyDataStreamRule INSTANCE = new MyDataStreamRule(operand(DataStreamRel.class, none()), "MyDataStreamRule");
public MyDataStreamRule(RelOptRuleOperand operand, String description) {
super(operand, "MyDataStreamRule:" + description);
}
public MyDataStreamRule(RelBuilderFactory relBuilderFactory) {
super(operand(DataStreamRel.class, any()), relBuilderFactory, null);
}
public void onMatch(RelOptRuleCall call) {
DataStreamRel dataStreamRel = (DataStreamRel) call.rel(0);
System.out.println("======================= MyDataStreamRule.onMatch ====================");
}
}
public class MyFilterReduceExpressionRule extends RelOptRule {
public static final MyFilterReduceExpressionRule FILTER_INSTANCE = new MyFilterReduceExpressionRule(
operand(LogicalFilter.class, none()), "MyFilterReduceExpressionRule");
public MyFilterReduceExpressionRule(RelOptRuleOperand operand, String description) {
super(operand, "MyFilterReduceExpressionRule:" + description);
}
public MyFilterReduceExpressionRule(RelBuilderFactory relBuilderFactory) {
super(operand(LogicalFilter.class, any()), relBuilderFactory, null);
}
public MyFilterReduceExpressionRule(RelOptRuleOperand operand) {
super(operand);
}
@Override
public void onMatch(RelOptRuleCall arg0) {
System.out.println("======================= MyFilterReduceExpressionRule.onMatch ====================");
}
}
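For what it's worth, here is a minimal sketch of what a rewriting onMatch could look like in MyFilterReduceExpressionRule above: match the LogicalFilter, rebuild it with Calcite's RelBuilder, and hand the replacement back via transformTo. The actual condition rewrite is left as a placeholder because it is application-specific; this is illustrative Calcite usage, not the Flink-specific wiring (imports for RexNode, RelNode and RelBuilder are assumed):

@Override
public void onMatch(RelOptRuleCall call) {
    LogicalFilter filter = call.rel(0);
    RexNode condition = filter.getCondition();

    // Placeholder: derive the new condition from the old one
    // (e.g. simplify it or change a literal bound).
    RexNode newCondition = condition;

    RelBuilder relBuilder = call.builder();
    RelNode newFilter = relBuilder
            .push(filter.getInput())
            .filter(newCondition)
            .build();
    call.transformTo(newFilter);
}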

Upsert nested objects in mongodb 3.2 using Java driver

I'm trying to use the upsert feature of MongoDB v3.2 using Java, so any solution not including a Java answer will not be accepted.
My problem is that the upsert command overrides nested objects instead of adding new ones. I have tried to use '$addToSet' and '$push', but without success, and I get an error message indicating that the storage engine does not support this command.
I want to update the client's document as well as their inner objects such as checks and checks's values.
the global structure of the client doc is as below.
Client
|
|__Checks // array of checks , update or insert operation
|
|__values // array of values, every check has its own values (20 max)
// update using index(id)
Link to the example's source code
My intention is to update the client's document with only one query, without using many queries.
I'm not a specialist in MongoDB, so any advice or criticism would be appreciated.
Even if I'm doing this all wrong, feel free to tell me, and please use Java for MongoDB 3.2.
Here is the source code used to generate the last result.
package org.egale.core;
import com.mongodb.MongoClient;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.MongoDatabase;
import com.mongodb.client.model.UpdateOptions;
import java.util.ArrayList;
import java.util.List;
import org.bson.Document;
/**
 *
 * @author Zied
 */
public class MongoTest {
/**
* Pojo used to populate data
*/
static class CheckModel {
public String client;
public String checkId;
public String name;
public String command;
public String description;
public String topic;
public int refresh = 60;
public int status;
public String output;
}
static MongoClient mongoClient = new MongoClient();
static String dbName = "eagle";
private static List<Document> getCheckValues(CheckModel checkModel, int index) {
final List<Document> checkValues = new ArrayList<>();
final Document val = new Document()
.append("id", index)
.append("output", checkModel.output)
.append("status", checkModel.status);
checkValues.add(val); // the second execution should not override the existing values but add a new one
return checkValues;
}
private static void insertCheck(MongoDatabase db, CheckModel checkModel) {
int idx = ++index % 20;
final List<Document> checks = new ArrayList<>();
final Document check = new Document()
.append("name", checkModel.name)
.append("command", checkModel.command)
.append("id", checkModel.checkId)
.append("description", checkModel.description)
.append("topic", checkModel.topic)
.append("last_output", checkModel.output)
.append("index", index)
.append("last_status", checkModel.status)
.append("values", getCheckValues(checkModel,idx))
.append("refresh", checkModel.refresh);
checks.add(check);
Document client = new Document()
.append("name", checkModel.client)
.append("checks", checks);
//.append("$addToSet" , new Document("checks", checks)); // <<- error here: '$addToSet' is not recognized
db.getCollection("clients") // execute client insert or update
.updateOne(
new Document().append("_id", checkModel.client), new Document("$set", client), new UpdateOptions().upsert(true)
);
}
static int index = 0;
// Name of the topic from which we will receive messages from = " testt"
public static void main(String[] args) {
MongoDatabase db = mongoClient.getDatabase(dbName);
CheckModel checkModel = new CheckModel();
checkModel.command = "ls -lA";
checkModel.client = "client_001";
checkModel.description = "ls -l command";
checkModel.checkId = "lsl_command";
checkModel.name = "client 001";
checkModel.output = "result of ls -l";
checkModel.status = 0;
checkModel.topic = "basic_checks";
checkModel.refresh = 5000;
initDB(db);
// insert the first check
insertCheck(db, checkModel);
// insert the second check after some modification
// insertCheck(db, modifyData(checkModel));
}
// modify data to test the check
private static CheckModel modifyData(CheckModel checkModel){
checkModel.status = 1;
checkModel.output = "ls commadn not found";
return checkModel;
}
private static void initDB(MongoDatabase db) {
MongoCollection<Document> collection = db.getCollection("configuration");
if (collection.count() == 0) {
Document b = new Document()
.append("_id", "app_config")
.append("historical_data", 20)
.append("current_index", 0);
collection.insertOne(b);
}
Document b = new Document().append("none", "none");
MongoCollection<Document> clients = db.getCollection("clients");
clients.insertOne(b);
clients.deleteOne(b);
MongoCollection<Document> topics = db.getCollection("topics");
topics.insertOne(b);
topics.deleteOne(b);
}
}
You may use $push with $each and $slice to solve your problem; see also https://docs.mongodb.org/manual/reference/operator/update/slice/.
Suppose db.students has the following document:
{ "_id" : 10, "scores" : [ 1, 2, 3 ] }
db.students.update(
    { _id: 10 },
    {
        $push: {
            scores: {
                $each: [ 4 ],
                $slice: -3
            }
        }
    }
)
The result is:
{ "_id" : 10, "scores" : [ 2, 3, 4] }
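Since the question asks for Java, here is a minimal sketch of the same $push / $each / $slice idea with the MongoDB 3.2 Java driver, combined with an upsert so the client document is created if it does not exist. The field names follow the question's schema, but treat the exact update as illustrative rather than a drop-in solution:

import static com.mongodb.client.model.Filters.eq;
import com.mongodb.client.model.PushOptions;
import com.mongodb.client.model.UpdateOptions;
import com.mongodb.client.model.Updates;
import java.util.Arrays;

// Upsert the client document: $set the scalar fields and $push the new check
// onto the "checks" array, keeping only the last 20 entries via $slice.
db.getCollection("clients").updateOne(
        eq("_id", checkModel.client),
        Updates.combine(
                Updates.set("name", checkModel.client),
                Updates.pushEach("checks", Arrays.asList(check), new PushOptions().slice(-20))),
        new UpdateOptions().upsert(true));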

Java file encoding magic

A strange thing happened in the Java kingdom...
Long story short: I use the Java API V3 to connect to QuickBooks and fetch data from there (services, for example).
Everything goes fine except when a service contains Russian symbols (or probably any non-Latin symbols).
Here is the Java code that does it (I know it's far from perfect):
package com.mde.test;
import static com.intuit.ipp.query.GenerateQuery.$;
import static com.intuit.ipp.query.GenerateQuery.select;
import java.util.LinkedList;
import java.util.List;
import com.intuit.ipp.core.Context;
import com.intuit.ipp.core.ServiceType;
import com.intuit.ipp.data.Item;
import com.intuit.ipp.exception.FMSException;
import com.intuit.ipp.query.GenerateQuery;
import com.intuit.ipp.security.OAuthAuthorizer;
import com.intuit.ipp.services.DataService;
import com.intuit.ipp.util.Config;
public class TestEncoding {
public static final String QBO_BASE_URL_SANDBOX = "https://sandbox-quickbooks.api.intuit.com/v3/company";
private static String consumerKey = "consumerkeycode";
private static String consumerSecret = "consumersecretcode";
private static String accessToken = "accesstokencode";
private static String accessTokenSecret = "accesstokensecretcode";
private static String appToken = "apptokencode";
private static String companyId = "companyidcode";
private static OAuthAuthorizer oauth = new OAuthAuthorizer(consumerKey, consumerSecret, accessToken, accessTokenSecret);
private static final int PAGING_STEP = 500;
public static void main(String[] args) throws FMSException {
List<Item> res = findAllServices(getDataService());
System.out.println(res.get(1).getName());
}
public static List<Item> findAllServices(DataService service) throws FMSException {
Item item = GenerateQuery.createQueryEntity(Item.class);
List<Item> res = new LinkedList<>();
for (int skip = 0; ; skip += PAGING_STEP) {
String query = select($(item)).skip(skip).take(PAGING_STEP).generate();
List<Item> items = (List<Item>)service.executeQuery(query).getEntities();
if (items.size() > 0)
res.addAll(items);
else
break;
}
System.out.println("All services fetched");
return res;
}
public static DataService getDataService() throws FMSException {
Context context = getContext();
if (context == null) {
System.out.println("Context is null, something wrong, dataService also will null.");
return null;
}
return getDataService(context);
}
private static Context getContext() {
try {
return new Context(oauth, appToken, ServiceType.QBO, companyId);
} catch (FMSException e) {
System.out.println("Context is not loaded");
return null;
}
}
protected static DataService getDataService(Context context) throws FMSException {
    Config.setProperty(Config.BASE_URL_QBO, QBO_BASE_URL_SANDBOX);
    return new DataService(context);
}
}
This file is saved in UTF-8, and it prints something like:
All services fetched
Сэрвыс, отнюдь
But! When I save this file as UTF-8 with a BOM... I get the correct data!
All services fetched
Сэрвыс, отнюдь
Can anybody explain what is happening? :)
// I use Eclipse to run the code
You are fetching data from a system that doesn't share the same byte ordering as you, so when you save the file with BOM, it adds enough information in the file that future programs will read it in the remote system's byte ordering.
When you save it without BOM, the file is written in the remote system's byte ordering without any indication of the stored byte order, so when you read it, it is read with the local system's (different) byte order. This jumbles the bytes within the multi-byte characters, making the output appear as nonsense.
