Set default bin value if not present in aerospike - java

Suppose I have 2 bins in an Aerospike set:
1. number (key) 2. timeLeft
I want to get the timeLeft value from Aerospike for a given number.
But if the particular record is not present, I want to create the record, set a default value of 6000 for timeLeft, and then get that value, all in a single transaction.
public Record someMethod(String num) {
    WritePolicy writePolicy = aerospikeRepo.getWritePolicy(null, ttl, true);
    return aerospikeRepo.operate(writePolicy, namespace, set, num, Operation.get());
}
Personally, I think the .operate() method of the Aerospike client will be involved somehow, but I did not find a relevant Operation for setting the default value when the bin is not present.

You can do it using Expressions. Here is sample code:
import com.aerospike.client.AerospikeClient;
import com.aerospike.client.policy.WritePolicy;
import com.aerospike.client.Bin;
import com.aerospike.client.Key;
import com.aerospike.client.Record;
import com.aerospike.client.Value;
import com.aerospike.client.policy.RecordExistsAction;
import com.aerospike.client.AerospikeException;
import com.aerospike.client.ResultCode;
import com.aerospike.client.Operation;
import com.aerospike.client.exp.Exp;
import com.aerospike.client.exp.ExpOperation;
import com.aerospike.client.exp.ExpWriteFlags;
import com.aerospike.client.exp.Expression;
import java.util.List;

System.out.println("Client modules imported.");

AerospikeClient client = new AerospikeClient("localhost", 3000);
WritePolicy wP = new WritePolicy();
wP.respondAllOps = true;

int iNumber = 11;
int iTimeLeft = 6000;

for (int i = 0; i < 5; i++) {
    Key key = new Key("test", "testset", iNumber);
    Expression tlExp = Exp.build(Exp.val(iTimeLeft));
    Record record = client.operate(wP, key,
        ExpOperation.write("timeLeft", tlExp, ExpWriteFlags.CREATE_ONLY | ExpWriteFlags.POLICY_NO_FAIL),
        //ExpOperation.write("timeLeft", tlExp, ExpWriteFlags.DEFAULT),
        Operation.get("timeLeft"));
    List<?> list = record.getList("timeLeft");
    System.out.println(list.get(1)); // index 0 is the expression-write result, index 1 is the get result
    iTimeLeft = iTimeLeft - 1000; // should not alter the stored record value
}
This gives the following output:
Client modules imported.
6000
6000
6000
6000
6000
However, if I use DEFAULT instead, the stored value is modified on every iteration, which is what you don't want. The flags above (CREATE_ONLY | POLICY_NO_FAIL) are the correct choice when you want to write the bin only if it does not already exist and otherwise silently continue with the next operation. With DEFAULT the output becomes:
Client modules imported.
6000
5000
4000
3000
2000
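For reference, here is a minimal sketch of how the same operate() call might be folded back into the asker's someMethod. The field names (client, namespace, set), the use of the raw client instead of the repository wrapper, and the omission of the TTL handling are assumptions, since that wrapper's API isn't shown:
// Hypothetical adaptation; `client`, `namespace`, and `set` are assumed fields of the class.
public Record someMethod(String num) {
    WritePolicy writePolicy = new WritePolicy();
    writePolicy.respondAllOps = true;
    Key key = new Key(namespace, set, num);
    Expression defaultTimeLeft = Exp.build(Exp.val(6000));
    return client.operate(writePolicy, key,
        // write 6000 only if the bin does not exist yet; otherwise skip silently
        ExpOperation.write("timeLeft", defaultTimeLeft,
            ExpWriteFlags.CREATE_ONLY | ExpWriteFlags.POLICY_NO_FAIL),
        Operation.get("timeLeft"));
}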

Related

Iceberg table does not see the generated Parquet file

In my use case, a table in Iceberg format is created. It only receives APPEND operations, since it records events from a time-series stream. To evaluate the Iceberg format for this use case, I created a simple Java program that writes a set of 27,600 rows. Both the metadata and the Parquet file were created, but I can't access them via the Java API (https://iceberg.apache.org/docs/latest/java-api-quickstart/). I'm using HadoopCatalog and FileAppender<GenericRecord>. It is important to say that I can read the generated Parquet file using the pyarrow and datafusion modules from a Python 3 script, and its contents are correct.
I believe my program must be missing a call to some method that links the generated Parquet file to the table created in the catalog.
NOTE: I'm only using Apache Iceberg's Java API in version 1.0.0
There is an org.apache.iceberg.Transaction object in the API that accepts an org.apache.iceberg.DataFile, but I haven't seen examples of how to use it, and I don't know whether it is useful for solving this problem either.
See the program below:
import org.apache.hadoop.conf.Configuration;
import org.apache.iceberg.*;
import org.apache.iceberg.catalog.Catalog;
import org.apache.iceberg.catalog.TableIdentifier;
import org.apache.iceberg.data.GenericRecord;
import org.apache.iceberg.data.parquet.GenericParquetWriter;
import org.apache.iceberg.hadoop.HadoopCatalog;
import org.apache.iceberg.io.FileAppender;
import org.apache.iceberg.parquet.Parquet;
import org.apache.iceberg.relocated.com.google.common.collect.Lists;
import org.apache.iceberg.types.Types;

import java.io.File;
import java.io.IOException;
import java.time.LocalDate;
import java.time.temporal.ChronoUnit;
import java.util.List;

import static org.apache.iceberg.types.Types.NestedField.optional;
import static org.apache.iceberg.types.Types.NestedField.required;

public class IcebergTableAppend {
    public static void main(String[] args) {
        System.out.println("Appending records ");
        Configuration conf = new Configuration();
        String lakehouse = "/tmp/iceberg-test";
        conf.set(CatalogProperties.WAREHOUSE_LOCATION, lakehouse);
        Schema schema = new Schema(
                required(1, "hotel_id", Types.LongType.get()),
                optional(2, "hotel_name", Types.StringType.get()),
                required(3, "customer_id", Types.LongType.get()),
                required(4, "arrival_date", Types.DateType.get()),
                required(5, "departure_date", Types.DateType.get()),
                required(6, "value", Types.DoubleType.get()));
        PartitionSpec spec = PartitionSpec.builderFor(schema)
                .month("arrival_date")
                .build();
        TableIdentifier id = TableIdentifier.parse("bookings.rome_hotels");
        String warehousePath = "file://" + lakehouse;
        Catalog catalog = new HadoopCatalog(conf, warehousePath);
        // rm -rf /tmp/iceberg-test/bookings
        Table table = catalog.createTable(id, schema, spec);
        List<GenericRecord> records = Lists.newArrayList();
        // generating a bunch of records
        for (int j = 1; j <= 12; j++) {
            int NUM_ROWS_PER_MONTH = 2300;
            for (int i = 0; i < NUM_ROWS_PER_MONTH; i++) {
                GenericRecord rec = GenericRecord.create(schema);
                rec.setField("hotel_id", (long) (i * 2) + 10000);
                rec.setField("hotel_name", "hotel_name-" + i + 1000);
                rec.setField("customer_id", (long) (i * 2) + 20000);
                rec.setField("arrival_date",
                        LocalDate.of(2022, j, (i % 23) + 1)
                                .plus(1, ChronoUnit.DAYS));
                rec.setField("departure_date",
                        LocalDate.of(2022, j, (i % 23) + 5));
                rec.setField("value", (double) i * 4.13);
                records.add(rec);
            }
        }
        File parquetFile = new File(
                lakehouse + "/bookings/rome_hotels/arq_001.parquet");
        FileAppender<GenericRecord> appender = null;
        try {
            appender = Parquet.write(Files.localOutput(parquetFile))
                    .schema(table.schema())
                    .createWriterFunc(GenericParquetWriter::buildWriter)
                    .build();
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
        try {
            appender.addAll(records);
        } finally {
            try {
                appender.close();
            } catch (IOException e) {
                throw new RuntimeException(e);
            }
        }
    }
}
I found out how to fix the Java program.
Just add the lines below to the end of the main method
PartitionKey partitionKey = new PartitionKey(table.spec(), table.schema());
// localInput comes from: import static org.apache.iceberg.Files.localInput;
DataFile dataFile = DataFiles.builder(table.spec())
        .withPartition(partitionKey)
        .withInputFile(localInput(parquetFile))
        .withMetrics(appender.metrics())
        .withFormat(FileFormat.PARQUET)
        .build();
Transaction t = table.newTransaction();
t.newAppend().appendFile(dataFile).commit();
// commit all changes to the table
t.commitTransaction();
Also add the dependency below to your POM file:
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-mapreduce-client-core</artifactId>
    <version>3.3.4</version>
</dependency>
This avoids the runtime error shown below:
java.lang.ClassNotFoundException: org.apache.hadoop.mapreduce.lib.input.FileInputFormat
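As a side note, for a single append the explicit Transaction isn't strictly necessary; committing the append directly on the table should work as well (a sketch equivalent to the transaction above):
// Sketch: a single append can also be committed directly, without a Transaction
table.newAppend()
        .appendFile(dataFile)
        .commit();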

How to pass input data to an existing tensorflow 2.x model in Java?

I'm taking my first steps with TensorFlow. After creating a simple model for the MNIST data in Python, I now want to import this model into Java and use it for classification. However, I can't manage to pass the input data to the model.
Here is the Python code for model creation:
from tensorflow.keras.datasets import mnist
from tensorflow.keras.utils import to_categorical

(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
train_images = train_images.reshape((60000, 28, 28, 1))
train_images = train_images.astype('float32')
train_images /= 255
test_images = test_images.reshape((10000, 28, 28, 1))
test_images = test_images.astype('float32')
test_images /= 255
train_labels = to_categorical(train_labels)
test_labels = to_categorical(test_labels)
NrTrainimages = train_images.shape[0]
NrTestimages = test_images.shape[0]

import os
import numpy as np
from tensorflow.keras.callbacks import TensorBoard
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, Flatten
from tensorflow.keras.layers import Conv2D, MaxPooling2D
from tensorflow.keras import backend as K

# Network architecture
model = Sequential()
mnist_inputshape = train_images.shape[1:4]
# Convolutional block 1
model.add(Conv2D(32, kernel_size=(5, 5),
                 activation='relu',
                 input_shape=mnist_inputshape,
                 name='Input_Layer'))
model.add(MaxPooling2D(pool_size=(2, 2)))
# Convolutional block 2
model.add(Conv2D(64, kernel_size=(5, 5), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.5))
# Prediction block
model.add(Flatten())
model.add(Dense(128, activation='relu', name='features'))
model.add(Dropout(0.5))
model.add(Dense(64, activation='relu'))
model.add(Dense(10, activation='softmax', name='Output_Layer'))
model.compile(loss='categorical_crossentropy',
              optimizer='Adam',
              metrics=['accuracy'])

LOGDIR = "logs"
my_tensorboard = TensorBoard(log_dir=LOGDIR,
                             histogram_freq=0,
                             write_graph=True,
                             write_images=True)

my_batch_size = 128
my_num_classes = 10
my_epochs = 5

history = model.fit(train_images, train_labels,
                    batch_size=my_batch_size,
                    callbacks=[my_tensorboard],
                    epochs=my_epochs,
                    use_multiprocessing=False,
                    verbose=1,
                    validation_data=(test_images, test_labels))
score = model.evaluate(test_images, test_labels)

modeldir = 'models'
model.save(modeldir, save_format='tf')
For Java, I am trying to adapt the App.java code published here.
I am struggling with replacing this snippet:
Tensor result = s.runner()
        .feed("input_tensor", inputTensor)
        .feed("dropout/keep_prob", keep_prob)
        .fetch("output_tensor")
        .run().get(0);
While in that code a specifically named input tensor is used to pass the data, my model only has layers and no individually named tensors. Thus, the following doesn't work:
Tensor<?> result = s.runner()
        .feed("Input_Layer/kernel", inputTensor)
        .fetch("Output_Layer/kernel")
        .run().get(0);
How do I pass the data to and get the output from my model in Java?
With the newest version of TensorFlow Java, you don't need to look up the names of the input/output tensors yourself from the model signature or from the graph. You can simply call the following:
try (SavedModelBundle model = SavedModelBundle.load("./model", "serve");
     Tensor<TFloat32> image = TFloat32.tensorOf(...); // There are many ways to pass your image bytes here
     Tensor<TFloat32> result = model.call(image).expect(TFloat32.DTYPE)) {
    System.out.println("Result is " + result.data().getFloat());
}
TensorFlow Java will automatically take care of mapping your input/output tensors to the right nodes.
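One possible way to fill in the `...` above is sketched below. It assumes the image is already available as a normalized float[1][28][28][1] array and that org.tensorflow.tools.ndarray.StdArrays is on the classpath; neither assumption comes from the original answer:
// Assumed sketch: wrap a batch of one 28x28x1 image in a TFloat32 tensor
float[][][][] imagePixels = new float[1][28][28][1]; // fill with pixel values scaled to [0, 1]
Tensor<TFloat32> image = TFloat32.tensorOf(StdArrays.ndCopyOf(imagePixels));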
I finally managed to find a solution. To get all the tensor names in the graph, I used the following code:
for (Iterator it = smb.graph().operations(); it.hasNext();) {
    Operation op = (Operation) it.next();
    System.out.println("Operation name: " + op.name());
}
From this, I figured out that the following works:
SavedModelBundle smb = SavedModelBundle.load("./model", "serve");
Session s = smb.session();
Tensor<Float> inputTensor = Tensor.<Float>create(imagesArray, Float.class);
Tensor<Float> result = s.runner()
        .feed("serving_default_Input_Layer_input", inputTensor)
        .fetch("StatefulPartitionedCall")
        .run().get(0).expect(Float.class);

Determine a YouTube channel's upload rate using YouTube Data API v3

I am writing a Java application that uses YouTube Data API v3. I want to be able to determine a channel's upload rate. For example, if a channel is one week old, and has published 2 videos, I want some way to determine that the channel's upload rate is 2 videos/week. How would I do this using the YouTube API?
import com.google.api.client.googleapis.json.GoogleJsonResponseException;
import com.google.api.client.http.HttpRequest;
import com.google.api.client.http.HttpRequestInitializer;
import com.google.api.client.http.javanet.NetHttpTransport;
import com.google.api.client.json.JsonFactory;
import com.google.api.client.json.jackson2.JacksonFactory;
import com.google.api.services.youtube.YouTube;
import com.google.api.services.youtube.model.Channel;
import com.google.api.services.youtube.model.ChannelListResponse;

import java.io.IOException;
import java.io.InputStream;
import java.security.GeneralSecurityException;
import java.util.Collection;
import java.util.Collections;
import java.util.Properties;

public class ApiExample {
    public static void main(String[] args)
            throws GeneralSecurityException, IOException, GoogleJsonResponseException {
        Properties properties = new Properties();
        try {
            InputStream in = ApiExample.class.getResourceAsStream("/" + "youtube.properties");
            properties.load(in);
        } catch (IOException e) {
            System.err.println("There was an error reading " + "youtube.properties" + ": " + e.getCause()
                    + " : " + e.getMessage());
            System.exit(1);
        }
        YouTube youtubeService = new YouTube.Builder(new NetHttpTransport(), new JacksonFactory(), new HttpRequestInitializer() {
            public void initialize(HttpRequest request) throws IOException {
            }
        }).setApplicationName("API Demo").build();
        // Define and execute the API request
        YouTube.Channels.List request = youtubeService.channels()
                .list("snippet,contentDetails,statistics");
        String apiKey = properties.getProperty("youtube.apikey");
        request.setKey(apiKey);
        ChannelListResponse response = request.setId("UC_x5XG1OV2P6uZZ5FSM9Ttw").execute();
        for (Channel channel : response.getItems()) {
            /* What do I do here to get the individual channel's upload rate? */
        }
    }
}
The above example uses the YouTube Developers channel, but I want to be able to do this with any channel.
According to the official docs, once you invoke the Channels.list API endpoint -- which returns the specified channel's metadata as a Channels resource -- you have at your disposal the following property:
statistics.videoCount (unsigned long)
The number of public videos uploaded to the channel.
Therefore, things are almost obvious: persist the value returned by this property (e.g. save it to a file) and arrange for your program to run weekly so that it can compute your desired upload rate.
Now, as far as your code above is concerned, you should first get rid of:
for (Channel channel : response.getItems()) {
    /* What do I do here to get the individual channel's upload rate? */
}
since the items property will contain at most one item. A good practice would be to assert this condition:
assert response.getItems().size() <= 1;
The value of the needed videoCount property is accessible via the getVideoCount method of the ChannelStatistics class:
response.getItems().get(0).getStatistics().getVideoCount().
Of course, since it is always good to request from the API only the information that is really of use, I would also recommend using the fields parameter (the setFields method) in the form of:
request.setFields("items(statistics(videoCount))"),
inserted, for example, after request.setKey(apiKey).
This way the API will send back to you only the property that you need.
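Putting the pieces together, the removed loop could be replaced with something along these lines (a sketch only; persisting last week's count and the weekly scheduling are left to the surrounding application):
// Sketch: read the current public video count as described above.
assert response.getItems().size() <= 1;
if (!response.getItems().isEmpty()) {
    java.math.BigInteger videoCount =
            response.getItems().get(0).getStatistics().getVideoCount();
    System.out.println("Current public video count: " + videoCount);
    // upload rate per week = videoCount minus the value persisted one week ago
}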
Addendum
I also have to mention that the assertion above is correct only when you pass one channel ID to the API endpoint (as you currently do in your code above). If in the future you want to compute the upload rate of N channels (with N <= 50) in one go, then the condition above becomes size() <= N.
Calling Channels.list on multiple channels in one go is possible, since the endpoint's id parameter may be specified as a comma-separated list of channel IDs.
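For example (the second channel ID below is purely hypothetical):
request.setId("UC_x5XG1OV2P6uZZ5FSM9Ttw,UC_SOME_OTHER_CHANNEL_ID");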

Tensorflow 2.0 & Java API

(Note: I've resolved my problem and posted the code at the bottom.)
I'm playing around with TensorFlow, and the backend processing must take place in Java. I've taken one of the models from https://developers.google.com/machine-learning/crash-course and saved it with tf.saved_model.save(my_model,"house_price_median_income") (using a Docker container). I copied the model off and loaded it into Java (using the 2.0 stuff built from source because I'm on Windows).
I can load the model and run it:
try (SavedModelBundle model = SavedModelBundle.load("./house_price_median_income", "serve")) {
    try (Session session = model.session()) {
        Session.Runner runner = session.runner();
        float[][] in = new float[][]{ {2.1518f} };
        Tensor<?> jack = Tensor.create(in);
        runner.feed("serving_default_layer1_input", jack);
        float[][] probabilities = runner.fetch("StatefulPartitionedCall").run().get(0).copyTo(new float[1][1]);
        for (int i = 0; i < probabilities.length; ++i) {
            System.out.println(String.format("-- Input #%d", i));
            for (int j = 0; j < probabilities[i].length; ++j) {
                System.out.println(String.format("Class %d - %f", i, probabilities[i][j]));
            }
        }
    }
}
The above is hardcoded to a particular input and output, but I want to be able to read the model and provide some information so the end user can select the input and output, etc.
I can get the inputs and outputs with the Python command: saved_model_cli show --dir ./house_price_median_income --all
What I want to do is get the inputs and outputs via Java so my code doesn't need to execute a Python script to get them. I can get the operations via:
Graph graph = model.graph();
Iterator<Operation> itr = graph.operations();
while (itr.hasNext()) {
    GraphOperation e = (GraphOperation) itr.next();
    System.out.println(e);
}
And this outputs both the inputs and outputs as "operations", BUT how do I know whether an operation is an input and/or an output? The Python tool uses the SignatureDef, but that doesn't seem to appear in the TensorFlow 2.0 Java stuff at all. Am I missing something obvious, or is it just missing from the TensorFlow 2.0 Java library?
NOTE: I've sorted out my issue with the help of the answer below. Here is my full bit of code in case somebody would like it in the future. Note this is TF 2.0 and uses the SNAPSHOT mentioned below. It makes a few assumptions, but it shows how to pull the input and output names and then use them to run a model.
import org.tensorflow.SavedModelBundle;
import org.tensorflow.Session;
import org.tensorflow.Tensor;
import org.tensorflow.exceptions.TensorFlowException;
import org.tensorflow.Session.Run;
import org.tensorflow.Graph;
import org.tensorflow.Operation;
import org.tensorflow.Output;
import org.tensorflow.GraphOperation;
import org.tensorflow.proto.framework.SignatureDef;
import org.tensorflow.proto.framework.MetaGraphDef;
import org.tensorflow.proto.framework.TensorInfo;
import org.tensorflow.types.TFloat32;
import org.tensorflow.tools.Shape;
import org.tensorflow.tools.buffer.DataBuffers;
import org.tensorflow.tools.ndarray.FloatNdArray;
import org.tensorflow.tools.ndarray.StdArrays;

import java.nio.FloatBuffer;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.Map;

public class v2tensor {
    public static void main(String[] args) {
        try (SavedModelBundle savedModel = SavedModelBundle.load("./house_price_median_income", "serve")) {
            SignatureDef modelInfo = savedModel.metaGraphDef().getSignatureDefMap().get("serving_default");
            TensorInfo input1 = null;
            TensorInfo output1 = null;
            Map<String, TensorInfo> inputs = modelInfo.getInputsMap();
            for (Map.Entry<String, TensorInfo> input : inputs.entrySet()) {
                if (input1 == null) {
                    input1 = input.getValue();
                    System.out.println(input1.getName());
                }
                System.out.println(input);
            }
            Map<String, TensorInfo> outputs = modelInfo.getOutputsMap();
            for (Map.Entry<String, TensorInfo> output : outputs.entrySet()) {
                if (output1 == null) {
                    output1 = output.getValue();
                }
                System.out.println(output);
            }
            try (Session session = savedModel.session()) {
                Session.Runner runner = session.runner();
                FloatNdArray matrix = StdArrays.ndCopyOf(new float[][]{ { 2.1518f } });
                try (Tensor<TFloat32> jack = TFloat32.tensorOf(matrix)) {
                    runner.feed(input1.getName(), jack);
                    try (Tensor<TFloat32> rezz = runner.fetch(output1.getName()).run().get(0).expect(TFloat32.DTYPE)) {
                        TFloat32 data = rezz.data();
                        data.scalars().forEachIndexed((i, s) -> {
                            System.out.println(s.getFloat());
                        });
                    }
                }
            }
        } catch (TensorFlowException ex) {
            ex.printStackTrace();
        }
    }
}
What you need to do is read the SavedModelBundle metadata as a MetaGraphDef; from there you can retrieve the input and output names from the SignatureDef, just like in Python.
In TF Java 1.* (i.e. the client you are using in your example), the proto definitions are not available out of the box from the tensorflow artifact; you need to add a dependency on org.tensorflow:proto as well and deserialize the result of SavedModelBundle.metaGraphDef() into a MetaGraphDef proto.
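For the 1.x client, a minimal sketch of that deserialization could look like the following (an assumption on my part; the proto classes come from the org.tensorflow:proto artifact):
// Sketch for TF Java 1.x (assumed): metaGraphDef() returns the serialized proto bytes.
static void printServingSignature(org.tensorflow.SavedModelBundle model)
        throws com.google.protobuf.InvalidProtocolBufferException {
    org.tensorflow.framework.MetaGraphDef metaGraph =
            org.tensorflow.framework.MetaGraphDef.parseFrom(model.metaGraphDef());
    org.tensorflow.framework.SignatureDef sig =
            metaGraph.getSignatureDefMap().get("serving_default");
    System.out.println("Inputs:  " + sig.getInputsMap().keySet());
    System.out.println("Outputs: " + sig.getOutputsMap().keySet());
}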
In TF Java 2.* (the new client, currently only available as snapshots from here), the protos are available right away, so you can simply call this line to retrieve the right SignatureDef:
savedModel.metaGraphDef().getSignatureDefMap().get("serving_default")

MongoDB ACKNOWLEDGED write concern faster than UNACKNOWLEDGED?

I've got a very simple test program that performs faster with ACKNOWLEDGED bulk inserts than with UNACKNOWLEDGED. And it's not just a little faster - I'm seeing a factor of nearly 100!
My understanding of the difference between these two write concerns is solely that with ACKNOWLEDGED the client waits for confirmation from the server that the operation has been executed (but not necessarily made durable), while with UNACKNOWLEDGED the client only knows that the request made it out onto the wire. So it would seem preposterous that the former could actually perform at a higher speed, yet that's what I'm seeing.
I'm using the Java driver (v2.12.0) with Oracle's Java JDK v1.7.0_71, and mongo version 3.0.0 on 64-bit Windows 7. I'm running mongod, completely out-of-the-box (fresh install), no sharding or anything. And before each test I ensure that the collection is empty and has no non-default indexes.
I would appreciate any insight into why I'm consistently seeing the opposite of what I'd expect.
Thanks.
Here's my code:
package test;

import com.mongodb.BasicDBObject;
import com.mongodb.BulkWriteOperation;
import com.mongodb.BulkWriteResult;
import com.mongodb.DBCollection;
import com.mongodb.DBObject;
import com.mongodb.MongoClient;
import com.mongodb.ServerAddress;
import com.mongodb.WriteConcern;
import java.util.Arrays;

public class Test {
    private static final int BATCHES = 100;
    private static final int BATCH_SIZE = 1000;
    private static final int COUNT = BATCHES * BATCH_SIZE;

    public static void main(String[] argv) throws Exception {
        DBCollection coll = new MongoClient(new ServerAddress()).getDB("test").getCollection("test");
        for (String wcName : Arrays.asList("UNACKNOWLEDGED", "ACKNOWLEDGED")) {
            WriteConcern wc = (WriteConcern) WriteConcern.class.getField(wcName).get(null);
            coll.dropIndexes();
            coll.remove(new BasicDBObject());
            long start = System.currentTimeMillis();
            BulkWriteOperation bulkOp = coll.initializeUnorderedBulkOperation();
            for (int i = 1; i <= COUNT; i++) { // <= so the final batch is executed as well
                DBObject doc = new BasicDBObject().append("int", i).append("string", Integer.toString(i));
                bulkOp.insert(doc);
                if (i % BATCH_SIZE == 0) {
                    BulkWriteResult results = bulkOp.execute(wc);
                    if (wc == WriteConcern.ACKNOWLEDGED && results.getInsertedCount() != 1000) {
                        throw new RuntimeException("Bogus insert count: " + results.getInsertedCount());
                    }
                    bulkOp = coll.initializeUnorderedBulkOperation();
                }
            }
            long time = System.currentTimeMillis() - start;
            double rate = COUNT / (time / 1000.0);
            System.out.printf("%s[w=%s,j=%s]: Inserted %d documents in %s # %f/sec\n",
                    wcName, wc.getW(), wc.getJ(), COUNT, duration(time), rate);
        }
    }

    private static String duration(long msec) {
        return String.format("%d:%02d:%02d.%03d",
                msec / (60 * 60 * 1000),
                (msec % (60 * 60 * 1000)) / (60 * 1000),
                (msec % (60 * 1000)) / 1000,
                msec % 1000);
    }
}
And here's typical output:
UNACKNOWLEDGED[w=0,j=false]: Inserted 100000 documents in 0:01:27.025 # 1149.095088/sec
ACKNOWLEDGED[w=1,j=false]: Inserted 100000 documents in 0:00:00.927 # 107874.865156/sec
EDIT
I ran more extensive tests, per request from Markus W. Mahlberg. For these tests, I ran the code with four write concerns: UNACKNOWLEDGED, ACKNOWLEDGED, JOURNALED, and FSYNCED. (I would expect this order to show decreasing speed.) I ran 112 repetitions, each of which performed 100 batches of 1000 inserts under each of the four write concerns, each time into an empty collection with no indexes. The code was identical to the original post but with the two additional write concerns, and with output in CSV format for easy analysis.
Results summary:
UNACKNOWLEDGED: 1147.105004 docs/sec avg, std dev 27.88577035
ACKNOWLEDGED: 77539.27653 docs/sec avg, std dev 1567.520303
JOURNALED: 29574.45243 docs/sec avg, std dev 123.9927554
FSYNCED: 29567.02467 docs/sec avg, std dev 147.6150994
The huge inverted performance difference between UNACKNOWLEDGED and ACKNOWLEDGED is what's got me baffled.
Here's the raw data if anyone cares for it ("time" is elapsed msec for 100*1000 insertions; "rate" is docs/second):
"UNACK time","UNACK rate","ACK time","ACK rate","JRNL time","JRNL rate","FSYNC time","FSYNC rate"
92815,1077.4120562409094,1348,74183.9762611276,3380,29585.798816568047,3378,29603.31557134399
90209,1108.5368422219512,1303,76745.97083653108,3377,29612.081729345577,3375,29629.62962962963
91089,1097.8273995762386,1319,75815.01137225171,3382,29568.30277942046,3413,29299.73630237328
90159,1109.1516099335618,1320,75757.57575757576,3375,29629.62962962963,3377,29612.081729345577
89922,1112.0749093658949,1315,76045.62737642587,3380,29585.798816568047,3376,29620.853080568722
89997,1111.1481493827573,1306,76569.67840735069,3381,29577.048210588586,3379,29594.55460195324
90141,1109.373093264996,1319,75815.01137225171,3386,29533.372711163614,3378,29603.31557134399
89771,1113.9454835080371,1325,75471.69811320755,3387,29524.65308532625,3521,28401.022436807725
89716,1114.6283828971423,1325,75471.69811320755,3379,29594.55460195324,3379,29594.55460195324
90205,1108.5859985588381,1323,75585.78987150417,3377,29612.081729345577,3376,29620.853080568722
90092,1109.976468498868,1328,75301.2048192771,3382,29568.30277942046,3379,29594.55460195324
89822,1113.3129968159249,1322,75642.965204236,3385,29542.097488921714,3383,29559.562518474726
89821,1113.3253916122064,1310,76335.87786259541,3380,29585.798816568047,3383,29559.562518474726
89945,1111.7905386625162,1318,75872.53414264036,3379,29594.55460195324,3379,29594.55460195324
89917,1112.1367483345753,1352,73964.49704142011,3381,29577.048210588586,3377,29612.081729345577
90358,1106.7088691648773,1303,76745.97083653108,3377,29612.081729345577,3380,29585.798816568047
90187,1108.8072560346836,1348,74183.9762611276,3387,29524.65308532625,3395,29455.081001472754
90634,1103.3387029150208,1322,75642.965204236,3384,29550.827423167848,3381,29577.048210588586
90148,1109.2869503483162,1331,75131.48009015778,3389,29507.22927117144,3381,29577.048210588586
89767,1113.9951207013714,1321,75700.22710068131,3380,29585.798816568047,3382,29568.30277942046
89910,1112.2233344455567,1321,75700.22710068131,3381,29577.048210588586,3385,29542.097488921714
89852,1112.9412812180028,1316,75987.84194528875,3381,29577.048210588586,3401,29403.116730373422
89537,1116.8567184515898,1319,75815.01137225171,3380,29585.798816568047,3380,29585.798816568047
89763,1114.0447623185498,1331,75131.48009015778,3380,29585.798816568047,3382,29568.30277942046
90070,1110.2475852115022,1325,75471.69811320755,3383,29559.562518474726,3378,29603.31557134399
89771,1113.9454835080371,1302,76804.91551459293,3389,29507.22927117144,3378,29603.31557134399
90518,1104.7526458825869,1325,75471.69811320755,3383,29559.562518474726,3380,29585.798816568047
90314,1107.2480457071995,1322,75642.965204236,3380,29585.798816568047,3384,29550.827423167848
89874,1112.6688474976079,1329,75244.54477050414,3386,29533.372711163614,3379,29594.55460195324
89954,1111.6793027547415,1318,75872.53414264036,3381,29577.048210588586,3381,29577.048210588586
89903,1112.3099340400208,1325,75471.69811320755,3379,29594.55460195324,3388,29515.9386068477
89842,1113.0651588343983,1314,76103.500761035,3382,29568.30277942046,3377,29612.081729345577
89746,1114.2557885588217,1325,75471.69811320755,3378,29603.31557134399,3385,29542.097488921714
93249,1072.3975592231552,1327,75357.95026375283,3381,29577.048210588586,3377,29612.081729345577
93638,1067.9425019756936,1331,75131.48009015778,3377,29612.081729345577,3392,29481.132075471698
87775,1139.2765593847905,1340,74626.86567164179,3379,29594.55460195324,3378,29603.31557134399
86495,1156.136192843517,1271,78678.20613690009,3375,29629.62962962963,3376,29620.853080568722
85584,1168.442699570013,1276,78369.90595611285,3432,29137.529137529138,3376,29620.853080568722
86648,1154.094728095282,1278,78247.2613458529,3382,29568.30277942046,3411,29316.91586045148
85745,1166.2487608606916,1274,78492.93563579278,3380,29585.798816568047,3363,29735.355337496283
85813,1165.3246011676551,1279,78186.08287724786,3375,29629.62962962963,3376,29620.853080568722
85831,1165.0802157728558,1288,77639.75155279503,3376,29620.853080568722,3377,29612.081729345577
85807,1165.4060857505797,1259,79428.11755361399,3466,28851.702250432772,3375,29629.62962962963
85964,1163.2776511097668,1258,79491.2559618442,3378,29603.31557134399,3378,29603.31557134399
85854,1164.7680946723508,1257,79554.49482895785,3382,29568.30277942046,3375,29629.62962962963
85787,1165.6777833471272,1257,79554.49482895785,3377,29612.081729345577,3377,29612.081729345577
85537,1169.084723569917,1272,78616.35220125786,3377,29612.081729345577,3377,29612.081729345577
85408,1170.8505058074186,1271,78678.20613690009,3375,29629.62962962963,3425,29197.080291970804
85577,1168.5382754712132,1261,79302.14115781126,3378,29603.31557134399,3375,29629.62962962963
85663,1167.365140142185,1261,79302.14115781126,3377,29612.081729345577,3378,29603.31557134399
85812,1165.3381811401669,1273,78554.59544383347,3377,29612.081729345577,3378,29603.31557134399
85783,1165.7321380693145,1273,78554.59544383347,3377,29612.081729345577,3376,29620.853080568722
85682,1167.106276697556,1280,78125.0,3381,29577.048210588586,3376,29620.853080568722
85753,1166.1399601180133,1260,79365.07936507936,3379,29594.55460195324,3377,29612.081729345577
85573,1168.5928972923703,1332,75075.07507507507,3377,29612.081729345577,3377,29612.081729345577
86206,1160.0120641254668,1263,79176.56373713381,3376,29620.853080568722,3383,29559.562518474726
85593,1168.31983923919,1264,79113.92405063291,3380,29585.798816568047,3378,29603.31557134399
85903,1164.1036983574495,1261,79302.14115781126,3378,29603.31557134399,3377,29612.081729345577
85516,1169.3718134618082,1277,78308.53563038372,3375,29629.62962962963,3376,29620.853080568722
85553,1168.8660830128692,1291,77459.3338497289,3490,28653.295128939826,3377,29612.081729345577
85550,1168.907071887785,1293,77339.52049497294,3379,29594.55460195324,3379,29594.55460195324
85610,1168.0878402055835,1298,77041.60246533128,3384,29550.827423167848,3378,29603.31557134399
85522,1169.2897733916418,1267,78926.59826361484,3379,29594.55460195324,3379,29594.55460195324
85595,1168.2925404521293,1276,78369.90595611285,3379,29594.55460195324,3376,29620.853080568722
85451,1170.2613193526115,1286,77760.49766718507,3376,29620.853080568722,3391,29489.82601002654
85792,1165.609847071988,1252,79872.20447284346,3382,29568.30277942046,3376,29620.853080568722
86501,1156.0559993526085,1255,79681.2749003984,3379,29594.55460195324,3379,29594.55460195324
85718,1166.616113301757,1269,78802.20646178094,3382,29568.30277942046,3376,29620.853080568722
85605,1168.156065650371,1265,79051.38339920949,3378,29603.31557134399,3380,29585.798816568047
85398,1170.9876109510762,1274,78492.93563579278,3377,29612.081729345577,3395,29455.081001472754
86370,1157.809424568716,1273,78554.59544383347,3376,29620.853080568722,3376,29620.853080568722
85905,1164.0765962400326,1280,78125.0,3379,29594.55460195324,3379,29594.55460195324
86020,1162.5203441060219,1285,77821.01167315176,3375,29629.62962962963,3376,29620.853080568722
85726,1166.5072440099852,1272,78616.35220125786,3380,29585.798816568047,3380,29585.798816568047
85628,1167.8422945765403,1270,78740.15748031496,3379,29594.55460195324,3376,29620.853080568722
85989,1162.93944574306,1258,79491.2559618442,3376,29620.853080568722,3378,29603.31557134399
85981,1163.047650062223,1276,78369.90595611285,3376,29620.853080568722,3376,29620.853080568722
86558,1155.2947156819703,1269,78802.20646178094,3385,29542.097488921714,3378,29603.31557134399
85745,1166.2487608606916,1293,77339.52049497294,3378,29603.31557134399,3375,29629.62962962963
85544,1168.9890582624148,1266,78988.94154818325,3376,29620.853080568722,3377,29612.081729345577
85536,1169.0983913206135,1268,78864.35331230283,3380,29585.798816568047,3380,29585.798816568047
85477,1169.9053546568082,1278,78247.2613458529,3388,29515.9386068477,3377,29612.081729345577
85434,1170.4941826439124,1253,79808.45969672786,3378,29603.31557134399,3375,29629.62962962963
85609,1168.1014846569872,1276,78369.90595611285,3364,29726.516052318668,3376,29620.853080568722
85740,1166.316771635176,1258,79491.2559618442,3377,29612.081729345577,3377,29612.081729345577
85640,1167.6786548341897,1266,78988.94154818325,3378,29603.31557134399,3377,29612.081729345577
85648,1167.569587147394,1281,78064.012490242,3378,29603.31557134399,3376,29620.853080568722
85697,1166.9019919017,1287,77700.0777000777,3377,29612.081729345577,3378,29603.31557134399
85696,1166.9156086631815,1256,79617.83439490446,3379,29594.55460195324,3376,29620.853080568722
85782,1165.7457275419085,1258,79491.2559618442,3379,29594.55460195324,3379,29594.55460195324
85837,1164.9987767512844,1264,79113.92405063291,3379,29594.55460195324,3376,29620.853080568722
85632,1167.7877428998504,1278,78247.2613458529,3380,29585.798816568047,3459,28910.089621277824
85517,1169.3581393173288,1256,79617.83439490446,3379,29594.55460195324,3380,29585.798816568047
85990,1162.925921618793,1302,76804.91551459293,3380,29585.798816568047,3377,29612.081729345577
86690,1153.535586572846,1281,78064.012490242,3375,29629.62962962963,3381,29577.048210588586
86045,1162.1825788831425,1274,78492.93563579278,3380,29585.798816568047,3383,29559.562518474726
86146,1160.820003250296,1274,78492.93563579278,3382,29568.30277942046,3418,29256.87536571094
86027,1162.4257500552153,1280,78125.0,3382,29568.30277942046,3381,29577.048210588586
85992,1162.8988743138896,1281,78064.012490242,3376,29620.853080568722,3380,29585.798816568047
85857,1164.727395553071,1288,77639.75155279503,3382,29568.30277942046,3376,29620.853080568722
85853,1164.7816616775185,1284,77881.6199376947,3375,29629.62962962963,3374,29638.41138114997
86069,1161.8585088707896,1295,77220.07722007722,3378,29603.31557134399,3378,29603.31557134399
85842,1164.930919596468,1296,77160.49382716049,3378,29603.31557134399,3376,29620.853080568722
86195,1160.160102094089,1301,76863.95080707148,3376,29620.853080568722,3379,29594.55460195324
85523,1169.2761011657683,1305,76628.35249042146,3376,29620.853080568722,3378,29603.31557134399
85752,1166.1535591006625,1275,78431.37254901961,3374,29638.41138114997,3377,29612.081729345577
85441,1170.3982865369085,1286,77760.49766718507,3377,29612.081729345577,3380,29585.798816568047
85566,1168.6884977678048,1265,79051.38339920949,3377,29612.081729345577,3380,29585.798816568047
85523,1169.2761011657683,1267,78926.59826361484,3377,29612.081729345577,3376,29620.853080568722
86152,1160.7391586962578,1285,77821.01167315176,3374,29638.41138114997,3378,29603.31557134399
85684,1167.0790345922226,1272,78616.35220125786,3378,29603.31557134399,3384,29550.827423167848
86252,1159.3934053703103,1271,78678.20613690009,3376,29620.853080568722,3377,29612.081729345577
