OpenLDAP 2.3/2.4 concurrency issue - java

We are experiencing concurrency issues with authentication (bind) and other read requests against OpenLDAP (versions tested: 2.3.43 and 2.4.39).
When making 100 concurrent bind requests the test code takes around 150 milliseconds. Increasing this to 1000 concurrent requests sees the time taken increase to 9303 milliseconds.
So a 10x increase in concurrent requests produces roughly a 62x increase in time taken.
Is this expected behaviour, or is there something missing in our OpenLDAP server configuration or Linux host configuration?
NOTE: For comparison we have run the same test code against a Windows-based Apache DS 2.0.0 server (same tree structure, etc.), and there the results were what we would normally expect (i.e. 100 requests take ~80 ms, 1,000 take ~400 ms, 10,000 take ~2,700 ms).
Settings in slapd.conf:
cachesize 100000
idlcachesize 300000
database bdb
suffix "dc=company,dc=com"
rootdn "uid=admin,ou=system"
rootpw secret
directory /var/lib/ldap
index objectClass eq,pres
index ou,cn,mail,surname,givenname eq,pres,sub
index uidNumber,gidNumber,loginShell eq,pres
index uid,memberUid eq,pres,sub
index nisMapName,nisMapEntry eq,pres,sub
sizelimit 100000
loglevel 256
Test code:
import java.util.ArrayList;
import javax.naming.NamingException;
import javax.naming.directory.DirContext;
import org.springframework.ldap.core.LdapTemplate;
import org.springframework.ldap.core.support.LdapContextSource;

public class DirectoryServiceMain {
    public static void main(String[] args) {
        int concurrentThreadCount = 100;
        LdapContextSource ctx = new LdapContextSource();
        ctx.setUrls(new String[] { "ldap://ldap1.dev.company.com:389/", "ldap://ldap1.dev.company.com:389/" });
        ctx.setBase("dc=company,dc=com");
        ctx.setUserDn("uid=admin,ou=system");
        ctx.setPassword("secret");
        ctx.setPooled(true);
        ctx.setCacheEnvironmentProperties(false);
        LdapTemplate template = new LdapTemplate();
        template.setContextSource(ctx);

        long startTime = System.currentTimeMillis();
        ArrayList<Thread> threads = new ArrayList<>();
        for (int i = 0; i < concurrentThreadCount; i++) {
            Thread t = new Thread(
                () -> {
                    DirContext context = template.getContextSource().getContext(
                        "uid=username,dc=users,uid=office,dc=suborganisations,uid=ABC,dc=organisations,dc=company,dc=com",
                        "password");
                    try {
                        context.close();
                    } catch (NamingException e) {}
                });
            t.start();
            threads.add(t);
        }
        boolean alive = true;
        while (alive) {
            alive = false;
            for (Thread t : threads) {
                if (t.isAlive()) {
                    alive = true;
                    try { Thread.sleep(10); } catch (InterruptedException e) {}
                }
            }
        }
        long endTime = System.currentTimeMillis();
        System.out.println("Total time: " + (endTime - startTime));
    }
}
ulimit -n
131072
* UPDATE *
If a slight delay (e.g. Thread.sleep(1)) is added after each t.start(), then the processing time for n concurrent threads drops considerably.
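For reference, a minimal sketch of that modification to the start-up loop in the test code above (the 1 ms delay is arbitrary):

for (int i = 0; i < concurrentThreadCount; i++) {
    Thread t = new Thread(() -> {
        DirContext context = template.getContextSource().getContext(
            "uid=username,dc=users,uid=office,dc=suborganisations,uid=ABC,dc=organisations,dc=company,dc=com",
            "password");
        try {
            context.close();
        } catch (NamingException e) {}
    });
    t.start();
    threads.add(t);
    // Staggering thread start-up slightly avoids hitting slapd with every bind at the same instant
    try { Thread.sleep(1); } catch (InterruptedException e) {}
}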

The longer answer is that if you are using BDB as the database, you will likely see scaling problems above a certain number of concurrent requests. BDB has its own DB_CONFIG file that you can tune to get better performance characteristics. You could also consider changing to MDB (back-mdb), which was written specifically for OpenLDAP and scales much more linearly with minimal configuration.
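For illustration, a rough sketch of what the database section above could look like with back-mdb; the maxsize value (the memory map size in bytes) is only an example and must be sized larger than your expected data, and the cachesize/idlcachesize directives are not needed for mdb:

database mdb
maxsize 1073741824
suffix "dc=company,dc=com"
rootdn "uid=admin,ou=system"
rootpw secret
directory /var/lib/ldap
index objectClass eq,pres
index uid,memberUid eq,pres,sub
# remaining index directives unchanged from the bdb configuration above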
You should also consider limiting the number of concurrent connections made by setting the JNDI LDAP connection pool sizes on the LdapContextSource:
Map<String, Object> map = new HashMap<>();
map.put("com.sun.jndi.ldap.connect.pool.initsize", "2");
map.put("com.sun.jndi.ldap.connect.pool.maxsize", "2");
map.put("com.sun.jndi.ldap.connect.pool.prefsize", "2");
ctx.setBaseEnvironmentProperties(map);
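Note that the JDK documents these com.sun.jndi.ldap.connect.pool.* sizing properties as system properties rather than context environment properties, so depending on your setup you may need to set them that way instead, for example:

// Alternative (assumption: your provider reads these as JVM system properties)
System.setProperty("com.sun.jndi.ldap.connect.pool.initsize", "2");
System.setProperty("com.sun.jndi.ldap.connect.pool.maxsize", "2");
System.setProperty("com.sun.jndi.ldap.connect.pool.prefsize", "2");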

Related

JedisCluster configurations and how it maintains the pool of connections

I have recently started using JedisCluster in my application. There is little to no documentation or examples for it. I tested a use case and the results are not what I expected.
import java.util.HashSet;
import java.util.Map;
import org.apache.commons.pool2.impl.GenericObjectPoolConfig;
import redis.clients.jedis.HostAndPort;
import redis.clients.jedis.JedisCluster;
import redis.clients.jedis.JedisPool;

public class test {
    private static JedisCluster setConnection(HashSet<HostAndPort> IP) {
        JedisCluster jediscluster = new JedisCluster(IP, 30000, 3,
            new GenericObjectPoolConfig() {{
                setMaxTotal(500);
                setMinIdle(1);
                setMaxIdle(500);
                setBlockWhenExhausted(true);
                setMaxWaitMillis(30000);
            }});
        return jediscluster;
    }

    public static int getIdleconn(Map<String, JedisPool> nodes) {
        int i = 0;
        for (String k : nodes.keySet()) {
            i += nodes.get(k).getNumIdle();
        }
        return i;
    }

    public static void main(String[] args) {
        HashSet<HostAndPort> IP = new HashSet<HostAndPort>() {
            {
                add(new HostAndPort("host1", port1));
                add(new HostAndPort("host2", port2));
            }};
        JedisCluster cluster = setConnection(IP);
        System.out.println(getIdleconn(cluster.getClusterNodes()));
        cluster.set("Dummy", "0");
        cluster.set("Dummy1", "0");
        cluster.set("Dummy3", "0");
        System.out.println(getIdleconn(cluster.getClusterNodes()));
        try {
            Thread.sleep(60000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        System.out.println(getIdleconn(cluster.getClusterNodes()));
    }
}
The output for this snippet is:
0
3
3
Questions:
1. I have set the timeout to 30000 in JedisCluster(IP, 30000, 3, new GenericObjectPoolConfig()). I believe this is the connection timeout, which would mean idle connections are closed after 30 seconds. That doesn't seem to be happening: after sleeping for 60 seconds, the number of idle connections is still 3. What am I doing or understanding wrong here? I want the pool to close a connection that has not been used for more than 30 seconds.
2. setMinIdle(1): does this mean that, regardless of the connection timeout, the pool will always maintain one connection?
3. I prefer availability over throughput for my app. What should the value of setMaxWaitMillis be if the connection timeout is 30 seconds?
4. Though rare, the app fails with redis.clients.jedis.exceptions.JedisNoReachableClusterNodeException: No reachable node in cluster. I think this is connected to 1. How can I prevent this?
1. The 30000 (30 seconds) here is the (socket) timeout: the timeout for a single socket (read) operation. It is not related to closing idle connections. Closing idle connections is controlled by GenericObjectPoolConfig, so check the eviction parameters there (see the sketch after this list).
2. Yes (mostly).
3. setMaxWaitMillis is the timeout for getting a connection object from the connection pool. It is not related to the 30-second socket timeout and does not really buy you anything in terms of availability.
4. Keep your cluster nodes available. There have been changes in Jedis related to this; you can try a recent version (4.x, even better 4.2.x).
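A minimal sketch of pool settings that actively evict idle connections, assuming Apache Commons Pool 2 (the 30-second values are examples, not recommendations):

GenericObjectPoolConfig poolConfig = new GenericObjectPoolConfig();
poolConfig.setMaxTotal(500);
poolConfig.setMinIdle(1);
poolConfig.setMaxIdle(500);
poolConfig.setBlockWhenExhausted(true);
poolConfig.setMaxWaitMillis(30000);
// Run the idle-object evictor every 30 s; connections idle for more than 30 s become
// eligible for eviction (use setSoftMinEvictableIdleTimeMillis instead to always keep minIdle).
poolConfig.setTimeBetweenEvictionRunsMillis(30000);
poolConfig.setMinEvictableIdleTimeMillis(30000);
JedisCluster cluster = new JedisCluster(IP, 30000, 3, poolConfig);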

Java gRPC server inbound vs outbound threads

So I have this grpc Java server:
@Bean(initMethod = "start", destroyMethod = "shutdown")
public Server bodyShopGrpcServer(@Autowired BodyShopServiceInt bodyShopServiceInt) {
    return ServerBuilder.forPort(bodyShopGrpcServerPort)
        .executor(Executors.newFixedThreadPool(12))
        .addService(new BodyShopServiceGrpcGw(bodyShopServiceInt))
        .build();
}
..and this client:
long overallStart = System.nanoTime();
int iterations = 10000;
List<Long> results = new CopyOnWriteArrayList<>();
ExecutorService executorService = Executors.newFixedThreadPool(bodyShopGrpcThreadPoolSize);
ManagedChannel channel =
    InProcessChannelBuilder.forName("bodyShopGrpcInProcessServer")
        .executor(executorService)
        .build();
BodyShopServiceGrpc.BodyShopServiceStub bodyShopServiceStub =
    BodyShopServiceGrpc.newStub(channel);
for (int i = 0; i < iterations; i++) {
    long start = System.nanoTime();
    StreamObserver<MakeBodyResponse> responseObserver =
        new StreamObserver<>() {
            @Override
            public void onNext(MakeBodyResponse makeBodyResponse) {
                long stop = System.nanoTime();
                results.add(stop - start);
            }

            @Override
            public void onError(Throwable throwable) {
                Status status = Status.fromThrowable(throwable);
                logger.error("Error status: {}", status);
            }

            @Override
            public void onCompleted() {}
        };
    bodyShopServiceStub.makeBody(
        MakeBodyRequest.newBuilder()
            .setBody(CarBody.values()[random.nextInt(CarBody.values().length)].toString())
            .build(),
        responseObserver);
}
channel
    .shutdown()
    .awaitTermination(10, TimeUnit.SECONDS);
long sum = results.stream().reduce(0L, Math::addExact);
BigDecimal avg =
    BigDecimal.valueOf(sum).divide(BigDecimal.valueOf(iterations), RoundingMode.HALF_DOWN);
long overallStop = System.nanoTime();
This gives me the average round-trip latency and the overall time for a batch of 10,000 calls.
Now, what bothers me is that the average latency is ~30-50% of the overall batch time.
I assume this is because all of the server threads are being assigned to serve client requests, leaving no thread in the pool to serve callbacks.
Is there a way to tune this? I mean, it doesn't seem possible to set different thread pools for requests and callbacks.
I know there's a streaming API in gRPC; is that the preferred/only way to reduce round-trip latency?
Thanks @Eric Anderson, it did not occur to me.
I plotted the results and you're absolutely right:
latency plot
My assumption that the callback was waiting for an available thread was wrong; it's just that all the requests enter the system at the same time and so start measuring at the same time. In fact I was comparing sync vs. async values: while that works for sync calls, for async calls it's clearly wrong.
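For what it's worth, one way to look at per-request latency without that queueing effect is to cap the number of RPCs in flight, e.g. with a java.util.concurrent.Semaphore; a rough sketch of the client loop above (the limit of 12 is arbitrary and simply mirrors the server pool size here):

Semaphore inFlight = new Semaphore(12);
for (int i = 0; i < iterations; i++) {
    inFlight.acquireUninterruptibly(); // wait until one of the 12 slots is free
    long start = System.nanoTime();
    bodyShopServiceStub.makeBody(
        MakeBodyRequest.newBuilder()
            .setBody(CarBody.values()[random.nextInt(CarBody.values().length)].toString())
            .build(),
        new StreamObserver<MakeBodyResponse>() {
            @Override
            public void onNext(MakeBodyResponse makeBodyResponse) {
                results.add(System.nanoTime() - start);
            }
            @Override
            public void onError(Throwable throwable) {
                inFlight.release();
            }
            @Override
            public void onCompleted() {
                inFlight.release();
            }
        });
}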

How to get optimal bulk insertion rate in DynamoDb through Executor Framework in Java?

I'm doing a POC on bulk writes (around 5.5k items) into local DynamoDB using the DynamoDB SDK for Java. I'm aware that each bulk write cannot have more than 25 write operations, so I am dividing the whole dataset into chunks of 25 items each. Then I'm submitting these chunks as callable actions via the Executor framework. Still, I'm not getting a satisfactory result, as the 5.5k records take more than 100 seconds to insert.
I'm not sure how else I can optimize this. While creating the table I provisioned the WriteCapacityUnits as 400 (not sure what the maximum value I can give is) and experimented with it a bit, but it never made any difference. I have also tried changing the number of threads in the executor.
This is the main code to perform the bulk write operation:
public static void main(String[] args) throws Exception {
    AmazonDynamoDBClient client = new AmazonDynamoDBClient().withEndpoint("http://localhost:8000");
    final AmazonDynamoDB aws = new AmazonDynamoDBClient(new BasicAWSCredentials("x", "y"));
    aws.setEndpoint("http://localhost:8000");
    JSONArray employees = readFromFile();
    Iterator<JSONObject> iterator = employees.iterator();
    List<WriteRequest> batchList = new ArrayList<WriteRequest>();
    ExecutorService service = Executors.newFixedThreadPool(20);
    List<BatchWriteItemRequest> listOfBatchItemsRequest = new ArrayList<>();
    while (iterator.hasNext()) {
        if (batchList.size() == 25) {
            Map<String, List<WriteRequest>> batchTableRequests = new HashMap<String, List<WriteRequest>>();
            batchTableRequests.put("Employee", batchList);
            BatchWriteItemRequest batchWriteItemRequest = new BatchWriteItemRequest();
            batchWriteItemRequest.setRequestItems(batchTableRequests);
            listOfBatchItemsRequest.add(batchWriteItemRequest);
            batchList = new ArrayList<WriteRequest>();
        }
        PutRequest putRequest = new PutRequest();
        putRequest.setItem(ItemUtils.fromSimpleMap((Map) iterator.next()));
        WriteRequest writeRequest = new WriteRequest();
        writeRequest.setPutRequest(putRequest);
        batchList.add(writeRequest);
    }
    StopWatch watch = new StopWatch();
    watch.start();
    List<Future<BatchWriteItemResult>> futureListOfResults = listOfBatchItemsRequest.stream()
        .map(batchItemsRequest -> service.submit(() -> aws.batchWriteItem(batchItemsRequest)))
        .collect(Collectors.toList());
    service.shutdown();
    while (!service.isTerminated());
    watch.stop();
    System.out.println("Total time taken : " + watch.getTotalTimeSeconds());
}
This is the code used to create the dynamoDB table:
public static void main(String[] args) throws Exception {
    AmazonDynamoDBClient client = new AmazonDynamoDBClient().withEndpoint("http://localhost:8000");
    DynamoDB dynamoDB = new DynamoDB(client);
    String tableName = "Employee";
    try {
        System.out.println("Creating the table, wait...");
        Table table = dynamoDB.createTable(tableName,
            Arrays.asList(new KeySchemaElement("ID", KeyType.HASH)),
            Arrays.asList(new AttributeDefinition("ID", ScalarAttributeType.S)),
            new ProvisionedThroughput(1000L, 1000L));
        table.waitForActive();
        System.out.println("Table created successfully. Status: " + table.getDescription().getTableStatus());
    } catch (Exception e) {
        System.err.println("Cannot create the table: ");
        System.err.println(e.getMessage());
    }
}
DynamoDB Local is provided as a tool for developers who need to develop offline against DynamoDB; it is not designed for scale or performance. If you need to test bulk loads or other high-velocity workloads it is best to use a real table. The actual cost incurred from dev testing on a live table is usually quite minimal, as the table only needs to be provisioned for high capacity during the test runs.
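If you do move the test to a real table, it is also worth draining each future and re-submitting anything returned in BatchWriteItemResult.getUnprocessedItems(), since throttled writes come back as unprocessed rather than as errors; a rough sketch that would fit at the end of the main method above (still covered by its throws Exception):

for (Future<BatchWriteItemResult> future : futureListOfResults) {
    Map<String, List<WriteRequest>> unprocessed = future.get().getUnprocessedItems();
    while (!unprocessed.isEmpty()) {
        // Retry items DynamoDB did not accept (e.g. due to throttling); consider adding backoff here
        BatchWriteItemRequest retry = new BatchWriteItemRequest().withRequestItems(unprocessed);
        unprocessed = aws.batchWriteItem(retry).getUnprocessedItems();
    }
}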

Vert.x performance drop when starting with -cluster option

I'm wondering if anyone has experienced the same problem.
We have a Vert.x application whose ultimate purpose is to insert 600 million rows into a Cassandra cluster. We are testing the speed of Vert.x in combination with Cassandra by running tests with smaller amounts.
If we run the fat jar (built with the Shade plugin) without the -cluster option, we are able to insert 10 million records in about a minute. When we add the -cluster option (eventually we will run the Vert.x application in a cluster) it takes about 5 minutes to insert 10 million records.
Does anyone know why?
We know that the Hazelcast config adds some overhead, but we never thought it would be 5 times slower. This implies we would need 5 EC2 instances in a cluster to get the same result as 1 EC2 instance without the cluster option.
As mentioned, everything runs on EC2 instances:
2 Cassandra servers on t2.small
1 Vert.x server on t2.2xlarge
You are actually running into corner cases of the Vert.x Hazelcast cluster manager.
First of all, you are using a worker verticle to send your messages (30 million of them). Under the hood Hazelcast is blocking, and when you send a message from a worker, version 3.3.3 does not take that into account. We recently added a fix for this, https://github.com/vert-x3/issues/issues/75 (not present in 3.4.0.Beta1 but present in 3.4.0-SNAPSHOTS), which improves this case.
Second, when you send all your messages at the same time, you run into another corner case that prevents the Hazelcast cluster manager from using its cache of the cluster topology. This topology cache is usually updated after the first message has been sent, and sending all the messages in one shot prevents the cache from being used (short explanation: HazelcastAsyncMultiMap#getInProgressCount will be > 0 and prevents the cache from being used), so you pay the penalty of an expensive lookup every time (hence the cache).
If I use Bertjan's reproducer with 3.4.0-SNAPSHOT plus Hazelcast and the following change - send one message to the destination, wait for the reply, and only then send all the remaining messages - I get a big improvement:
Without clustering: 5852 ms
With clustering with HZ 3.3.3: 16745 ms
With clustering with HZ 3.4.0-SNAPSHOT + initial message: 8609 ms
I also believe you should not use a worker verticle to send that many messages, and should instead send them from an event loop verticle in batches. Perhaps you should explain your use case and we can think about the best way to solve it.
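A rough sketch of the "initial message, then send the rest" change applied to the sender verticle shown further down in this thread; it assumes the message.reply("OK") line in the receiver is uncommented so that the first send actually gets a reply:

@Override
public void start() throws Exception {
    // Warm up the cluster manager's topology cache with a single message,
    // then send the remaining messages once the reply arrives.
    vertx.eventBus().send("clustertest1",
        Json.encode(new TestCluster1(1, "abc", LocalDateTime.now())),
        reply -> {
            for (int i = 2; i <= 30000000; i++) {
                vertx.eventBus().send("clustertest1", Json.encode(new TestCluster1(i, "abc", LocalDateTime.now())));
            }
        });
}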
When you enable clustering (of any kind) in an application you make it more resilient to failures, but you also add a performance penalty.
For example your current flow (without clustering) is something like:
client ->
vert.x app ->
in memory same process eventbus (negligible) ->
handler -> cassandra
<- vert.x app
<- client
Once you enable clustering:
client ->
vert.x app ->
serialize request ->
network request cluster member ->
deserialize request ->
handler -> cassandra
<- serialize response
<- network reply
<- deserialize response
<- vert.x app
<- client
As you can see, there are many encode/decode operations required, plus several network calls, and this all gets added to your total request time.
In order to achieve the best performance you need to take advantage of locality: the closer you are to your data store, the faster things usually are.
Just to add the code of the project. I guess that would help.
Sender verticle:
public class ProviderVerticle extends AbstractVerticle {

    @Override
    public void start() throws Exception {
        IntStream.range(1, 30000001).parallel().forEach(i -> {
            vertx.eventBus().send("clustertest1", Json.encode(new TestCluster1(i, "abc", LocalDateTime.now())));
        });
    }

    @Override
    public void stop() throws Exception {
        super.stop();
    }
}
And the inserter verticle
public class ReceiverVerticle extends AbstractVerticle {

    private int messagesReceived = 1;
    private Session cassandraSession;

    @Override
    public void start() throws Exception {
        PoolingOptions poolingOptions = new PoolingOptions()
            .setCoreConnectionsPerHost(HostDistance.LOCAL, 2)
            .setMaxConnectionsPerHost(HostDistance.LOCAL, 3)
            .setCoreConnectionsPerHost(HostDistance.REMOTE, 1)
            .setMaxConnectionsPerHost(HostDistance.REMOTE, 3)
            .setMaxRequestsPerConnection(HostDistance.LOCAL, 20)
            .setMaxQueueSize(32768)
            .setMaxRequestsPerConnection(HostDistance.REMOTE, 20);
        Cluster cluster = Cluster.builder()
            .withPoolingOptions(poolingOptions)
            .addContactPoints(ClusterSetup.SEEDS)
            .build();
        System.out.println("Connecting session");
        cassandraSession = cluster.connect("kiespees");
        System.out.println("Session connected:\n\tcluster [" + cassandraSession.getCluster().getClusterName() + "]");
        System.out.println("Connected hosts: ");
        cassandraSession.getState().getConnectedHosts().forEach(host -> System.out.println(host.getAddress()));

        PreparedStatement prepared = cassandraSession.prepare(
            "insert into clustertest1 (id, value, created) " +
            "values (:id, :value, :created)");
        PreparedStatement preparedTimer = cassandraSession.prepare(
            "insert into timer (name, created_on, amount) " +
            "values (:name, :createdOn, :amount)");

        BoundStatement timerStart = preparedTimer.bind()
            .setString("name", "clusterteststart")
            .setInt("amount", 0)
            .setTimestamp("createdOn", new Timestamp(new Date().getTime()));
        cassandraSession.executeAsync(timerStart);

        EventBus bus = vertx.eventBus();
        System.out.println("Bus info: " + bus.toString());
        MessageConsumer<String> cons = bus.consumer("clustertest1");
        System.out.println("Consumer info: " + cons.address());
        System.out.println("Waiting for messages");
        cons.handler(message -> {
            TestCluster1 tc = Json.decodeValue(message.body(), TestCluster1.class);
            if (messagesReceived % 100000 == 0)
                System.out.println("Message received: " + messagesReceived);
            BoundStatement boundRecord = prepared.bind()
                .setInt("id", tc.getId())
                .setString("value", tc.getValue())
                .setTimestamp("created", new Timestamp(new Date().getTime()));
            cassandraSession.executeAsync(boundRecord);
            if (messagesReceived % 100000 == 0) {
                BoundStatement timerStop = preparedTimer.bind()
                    .setString("name", "clusterteststop")
                    .setInt("amount", messagesReceived)
                    .setTimestamp("createdOn", new Timestamp(new Date().getTime()));
                cassandraSession.executeAsync(timerStop);
            }
            messagesReceived++;
            //message.reply("OK");
        });
    }

    @Override
    public void stop() throws Exception {
        super.stop();
        cassandraSession.close();
    }
}

MongoDB ACKNOWLEDGED write concern faster than UNACKNOWLEDGED?

I've got a very simple test program that performs faster with ACKNOWLEDGED bulk inserts than with UNACKNOWLEDGED. And it's not just a little faster - I'm seeing a factor of nearly 100!
My understanding of the difference between these two write concerns is solely that with ACKNOWLEDGED the client waits for confirmation from the server that the operation has been executed (but not necessarily made durable), while with UNACKNOWLEDGED the client only knows that the request made it out onto the wire. So it would seem preposterous that the former could actually perform at a higher speed, yet that's what I'm seeing.
I'm using the Java driver (v2.12.0) with Oracle's Java JDK v1.7.0_71, and mongo version 3.0.0 on 64-bit Windows 7. I'm running mongod, completely out-of-the-box (fresh install), no sharding or anything. And before each test I ensure that the collection is empty and has no non-default indexes.
I would appreciate any insight into why I'm consistently seeing the opposite of what I'd expect.
Thanks.
Here's my code:
package test;

import com.mongodb.BasicDBObject;
import com.mongodb.BulkWriteOperation;
import com.mongodb.BulkWriteResult;
import com.mongodb.DBCollection;
import com.mongodb.DBObject;
import com.mongodb.MongoClient;
import com.mongodb.ServerAddress;
import com.mongodb.WriteConcern;
import java.util.Arrays;

public class Test {
    private static final int BATCHES = 100;
    private static final int BATCH_SIZE = 1000;
    private static final int COUNT = BATCHES * BATCH_SIZE;

    public static void main(String[] argv) throws Exception {
        DBCollection coll = new MongoClient(new ServerAddress()).getDB("test").getCollection("test");
        for (String wcName : Arrays.asList("UNACKNOWLEDGED", "ACKNOWLEDGED")) {
            WriteConcern wc = (WriteConcern) WriteConcern.class.getField(wcName).get(null);
            coll.dropIndexes();
            coll.remove(new BasicDBObject());
            long start = System.currentTimeMillis();
            BulkWriteOperation bulkOp = coll.initializeUnorderedBulkOperation();
            // <= so that the final batch is executed and exactly COUNT documents are inserted
            for (int i = 1; i <= COUNT; i++) {
                DBObject doc = new BasicDBObject().append("int", i).append("string", Integer.toString(i));
                bulkOp.insert(doc);
                if (i % BATCH_SIZE == 0) {
                    BulkWriteResult results = bulkOp.execute(wc);
                    if (wc == WriteConcern.ACKNOWLEDGED && results.getInsertedCount() != 1000) {
                        throw new RuntimeException("Bogus insert count: " + results.getInsertedCount());
                    }
                    bulkOp = coll.initializeUnorderedBulkOperation();
                }
            }
            long time = System.currentTimeMillis() - start;
            double rate = COUNT / (time / 1000.0);
            System.out.printf("%s[w=%s,j=%s]: Inserted %d documents in %s @ %f/sec\n",
                wcName, wc.getW(), wc.getJ(), COUNT, duration(time), rate);
        }
    }

    private static String duration(long msec) {
        return String.format("%d:%02d:%02d.%03d",
            msec / (60 * 60 * 1000),
            (msec % (60 * 60 * 1000)) / (60 * 1000),
            (msec % (60 * 1000)) / 1000,
            msec % 1000);
    }
}
And here's typical output:
UNACKNOWLEDGED[w=0,j=false]: Inserted 100000 documents in 0:01:27.025 @ 1149.095088/sec
ACKNOWLEDGED[w=1,j=false]: Inserted 100000 documents in 0:00:00.927 @ 107874.865156/sec
EDIT
Ran more extensive tests, per request from Markus W. Mahlberg. For these tests, I ran the code with four write concerns: UNACKNOWLEDGED, ACKNOWLEDGED, JOURNALED, and FSYNCED. (I would expect this order to show decreasing speed.) I ran 112 repetitions, each of which performed 100 batches of 1000 inserts under each of the four write concerns, each time into an empty collection with no indexes. Code was identical to original post but with two additional write concerns, and with output to CSV format for easy analysis.
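For reference, the only change to the loop in the code above would be something like the following (the JOURNALED and FSYNCED fields exist in the 2.x Java driver):

for (String wcName : Arrays.asList("UNACKNOWLEDGED", "ACKNOWLEDGED", "JOURNALED", "FSYNCED")) {
    // same body as in the original code
}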
Results summary:
UNACKNOWLEDGED: 1147.105004 docs/sec avg, std dev 27.88577035
ACKNOWLEDGED: 77539.27653 docs/sec avg, std dev 1567.520303
JOURNALED: 29574.45243 docs/sec avg, std dev 123.9927554
FSYNCED: 29567.02467 docs/sec avg, std dev 147.6150994
The huge inverted performance difference between UNACKNOWLEDGED and ACKNOWLEDGED is what's got me baffled.
Here's the raw data if anyone cares for it ("time" is elapsed msec for 100*1000 insertions; "rate" is docs/second):
"UNACK time","UNACK rate","ACK time","ACK rate","JRNL time","JRNL rate","FSYNC time","FSYNC rate"
92815,1077.4120562409094,1348,74183.9762611276,3380,29585.798816568047,3378,29603.31557134399
90209,1108.5368422219512,1303,76745.97083653108,3377,29612.081729345577,3375,29629.62962962963
91089,1097.8273995762386,1319,75815.01137225171,3382,29568.30277942046,3413,29299.73630237328
90159,1109.1516099335618,1320,75757.57575757576,3375,29629.62962962963,3377,29612.081729345577
89922,1112.0749093658949,1315,76045.62737642587,3380,29585.798816568047,3376,29620.853080568722
89997,1111.1481493827573,1306,76569.67840735069,3381,29577.048210588586,3379,29594.55460195324
90141,1109.373093264996,1319,75815.01137225171,3386,29533.372711163614,3378,29603.31557134399
89771,1113.9454835080371,1325,75471.69811320755,3387,29524.65308532625,3521,28401.022436807725
89716,1114.6283828971423,1325,75471.69811320755,3379,29594.55460195324,3379,29594.55460195324
90205,1108.5859985588381,1323,75585.78987150417,3377,29612.081729345577,3376,29620.853080568722
90092,1109.976468498868,1328,75301.2048192771,3382,29568.30277942046,3379,29594.55460195324
89822,1113.3129968159249,1322,75642.965204236,3385,29542.097488921714,3383,29559.562518474726
89821,1113.3253916122064,1310,76335.87786259541,3380,29585.798816568047,3383,29559.562518474726
89945,1111.7905386625162,1318,75872.53414264036,3379,29594.55460195324,3379,29594.55460195324
89917,1112.1367483345753,1352,73964.49704142011,3381,29577.048210588586,3377,29612.081729345577
90358,1106.7088691648773,1303,76745.97083653108,3377,29612.081729345577,3380,29585.798816568047
90187,1108.8072560346836,1348,74183.9762611276,3387,29524.65308532625,3395,29455.081001472754
90634,1103.3387029150208,1322,75642.965204236,3384,29550.827423167848,3381,29577.048210588586
90148,1109.2869503483162,1331,75131.48009015778,3389,29507.22927117144,3381,29577.048210588586
89767,1113.9951207013714,1321,75700.22710068131,3380,29585.798816568047,3382,29568.30277942046
89910,1112.2233344455567,1321,75700.22710068131,3381,29577.048210588586,3385,29542.097488921714
89852,1112.9412812180028,1316,75987.84194528875,3381,29577.048210588586,3401,29403.116730373422
89537,1116.8567184515898,1319,75815.01137225171,3380,29585.798816568047,3380,29585.798816568047
89763,1114.0447623185498,1331,75131.48009015778,3380,29585.798816568047,3382,29568.30277942046
90070,1110.2475852115022,1325,75471.69811320755,3383,29559.562518474726,3378,29603.31557134399
89771,1113.9454835080371,1302,76804.91551459293,3389,29507.22927117144,3378,29603.31557134399
90518,1104.7526458825869,1325,75471.69811320755,3383,29559.562518474726,3380,29585.798816568047
90314,1107.2480457071995,1322,75642.965204236,3380,29585.798816568047,3384,29550.827423167848
89874,1112.6688474976079,1329,75244.54477050414,3386,29533.372711163614,3379,29594.55460195324
89954,1111.6793027547415,1318,75872.53414264036,3381,29577.048210588586,3381,29577.048210588586
89903,1112.3099340400208,1325,75471.69811320755,3379,29594.55460195324,3388,29515.9386068477
89842,1113.0651588343983,1314,76103.500761035,3382,29568.30277942046,3377,29612.081729345577
89746,1114.2557885588217,1325,75471.69811320755,3378,29603.31557134399,3385,29542.097488921714
93249,1072.3975592231552,1327,75357.95026375283,3381,29577.048210588586,3377,29612.081729345577
93638,1067.9425019756936,1331,75131.48009015778,3377,29612.081729345577,3392,29481.132075471698
87775,1139.2765593847905,1340,74626.86567164179,3379,29594.55460195324,3378,29603.31557134399
86495,1156.136192843517,1271,78678.20613690009,3375,29629.62962962963,3376,29620.853080568722
85584,1168.442699570013,1276,78369.90595611285,3432,29137.529137529138,3376,29620.853080568722
86648,1154.094728095282,1278,78247.2613458529,3382,29568.30277942046,3411,29316.91586045148
85745,1166.2487608606916,1274,78492.93563579278,3380,29585.798816568047,3363,29735.355337496283
85813,1165.3246011676551,1279,78186.08287724786,3375,29629.62962962963,3376,29620.853080568722
85831,1165.0802157728558,1288,77639.75155279503,3376,29620.853080568722,3377,29612.081729345577
85807,1165.4060857505797,1259,79428.11755361399,3466,28851.702250432772,3375,29629.62962962963
85964,1163.2776511097668,1258,79491.2559618442,3378,29603.31557134399,3378,29603.31557134399
85854,1164.7680946723508,1257,79554.49482895785,3382,29568.30277942046,3375,29629.62962962963
85787,1165.6777833471272,1257,79554.49482895785,3377,29612.081729345577,3377,29612.081729345577
85537,1169.084723569917,1272,78616.35220125786,3377,29612.081729345577,3377,29612.081729345577
85408,1170.8505058074186,1271,78678.20613690009,3375,29629.62962962963,3425,29197.080291970804
85577,1168.5382754712132,1261,79302.14115781126,3378,29603.31557134399,3375,29629.62962962963
85663,1167.365140142185,1261,79302.14115781126,3377,29612.081729345577,3378,29603.31557134399
85812,1165.3381811401669,1273,78554.59544383347,3377,29612.081729345577,3378,29603.31557134399
85783,1165.7321380693145,1273,78554.59544383347,3377,29612.081729345577,3376,29620.853080568722
85682,1167.106276697556,1280,78125.0,3381,29577.048210588586,3376,29620.853080568722
85753,1166.1399601180133,1260,79365.07936507936,3379,29594.55460195324,3377,29612.081729345577
85573,1168.5928972923703,1332,75075.07507507507,3377,29612.081729345577,3377,29612.081729345577
86206,1160.0120641254668,1263,79176.56373713381,3376,29620.853080568722,3383,29559.562518474726
85593,1168.31983923919,1264,79113.92405063291,3380,29585.798816568047,3378,29603.31557134399
85903,1164.1036983574495,1261,79302.14115781126,3378,29603.31557134399,3377,29612.081729345577
85516,1169.3718134618082,1277,78308.53563038372,3375,29629.62962962963,3376,29620.853080568722
85553,1168.8660830128692,1291,77459.3338497289,3490,28653.295128939826,3377,29612.081729345577
85550,1168.907071887785,1293,77339.52049497294,3379,29594.55460195324,3379,29594.55460195324
85610,1168.0878402055835,1298,77041.60246533128,3384,29550.827423167848,3378,29603.31557134399
85522,1169.2897733916418,1267,78926.59826361484,3379,29594.55460195324,3379,29594.55460195324
85595,1168.2925404521293,1276,78369.90595611285,3379,29594.55460195324,3376,29620.853080568722
85451,1170.2613193526115,1286,77760.49766718507,3376,29620.853080568722,3391,29489.82601002654
85792,1165.609847071988,1252,79872.20447284346,3382,29568.30277942046,3376,29620.853080568722
86501,1156.0559993526085,1255,79681.2749003984,3379,29594.55460195324,3379,29594.55460195324
85718,1166.616113301757,1269,78802.20646178094,3382,29568.30277942046,3376,29620.853080568722
85605,1168.156065650371,1265,79051.38339920949,3378,29603.31557134399,3380,29585.798816568047
85398,1170.9876109510762,1274,78492.93563579278,3377,29612.081729345577,3395,29455.081001472754
86370,1157.809424568716,1273,78554.59544383347,3376,29620.853080568722,3376,29620.853080568722
85905,1164.0765962400326,1280,78125.0,3379,29594.55460195324,3379,29594.55460195324
86020,1162.5203441060219,1285,77821.01167315176,3375,29629.62962962963,3376,29620.853080568722
85726,1166.5072440099852,1272,78616.35220125786,3380,29585.798816568047,3380,29585.798816568047
85628,1167.8422945765403,1270,78740.15748031496,3379,29594.55460195324,3376,29620.853080568722
85989,1162.93944574306,1258,79491.2559618442,3376,29620.853080568722,3378,29603.31557134399
85981,1163.047650062223,1276,78369.90595611285,3376,29620.853080568722,3376,29620.853080568722
86558,1155.2947156819703,1269,78802.20646178094,3385,29542.097488921714,3378,29603.31557134399
85745,1166.2487608606916,1293,77339.52049497294,3378,29603.31557134399,3375,29629.62962962963
85544,1168.9890582624148,1266,78988.94154818325,3376,29620.853080568722,3377,29612.081729345577
85536,1169.0983913206135,1268,78864.35331230283,3380,29585.798816568047,3380,29585.798816568047
85477,1169.9053546568082,1278,78247.2613458529,3388,29515.9386068477,3377,29612.081729345577
85434,1170.4941826439124,1253,79808.45969672786,3378,29603.31557134399,3375,29629.62962962963
85609,1168.1014846569872,1276,78369.90595611285,3364,29726.516052318668,3376,29620.853080568722
85740,1166.316771635176,1258,79491.2559618442,3377,29612.081729345577,3377,29612.081729345577
85640,1167.6786548341897,1266,78988.94154818325,3378,29603.31557134399,3377,29612.081729345577
85648,1167.569587147394,1281,78064.012490242,3378,29603.31557134399,3376,29620.853080568722
85697,1166.9019919017,1287,77700.0777000777,3377,29612.081729345577,3378,29603.31557134399
85696,1166.9156086631815,1256,79617.83439490446,3379,29594.55460195324,3376,29620.853080568722
85782,1165.7457275419085,1258,79491.2559618442,3379,29594.55460195324,3379,29594.55460195324
85837,1164.9987767512844,1264,79113.92405063291,3379,29594.55460195324,3376,29620.853080568722
85632,1167.7877428998504,1278,78247.2613458529,3380,29585.798816568047,3459,28910.089621277824
85517,1169.3581393173288,1256,79617.83439490446,3379,29594.55460195324,3380,29585.798816568047
85990,1162.925921618793,1302,76804.91551459293,3380,29585.798816568047,3377,29612.081729345577
86690,1153.535586572846,1281,78064.012490242,3375,29629.62962962963,3381,29577.048210588586
86045,1162.1825788831425,1274,78492.93563579278,3380,29585.798816568047,3383,29559.562518474726
86146,1160.820003250296,1274,78492.93563579278,3382,29568.30277942046,3418,29256.87536571094
86027,1162.4257500552153,1280,78125.0,3382,29568.30277942046,3381,29577.048210588586
85992,1162.8988743138896,1281,78064.012490242,3376,29620.853080568722,3380,29585.798816568047
85857,1164.727395553071,1288,77639.75155279503,3382,29568.30277942046,3376,29620.853080568722
85853,1164.7816616775185,1284,77881.6199376947,3375,29629.62962962963,3374,29638.41138114997
86069,1161.8585088707896,1295,77220.07722007722,3378,29603.31557134399,3378,29603.31557134399
85842,1164.930919596468,1296,77160.49382716049,3378,29603.31557134399,3376,29620.853080568722
86195,1160.160102094089,1301,76863.95080707148,3376,29620.853080568722,3379,29594.55460195324
85523,1169.2761011657683,1305,76628.35249042146,3376,29620.853080568722,3378,29603.31557134399
85752,1166.1535591006625,1275,78431.37254901961,3374,29638.41138114997,3377,29612.081729345577
85441,1170.3982865369085,1286,77760.49766718507,3377,29612.081729345577,3380,29585.798816568047
85566,1168.6884977678048,1265,79051.38339920949,3377,29612.081729345577,3380,29585.798816568047
85523,1169.2761011657683,1267,78926.59826361484,3377,29612.081729345577,3376,29620.853080568722
86152,1160.7391586962578,1285,77821.01167315176,3374,29638.41138114997,3378,29603.31557134399
85684,1167.0790345922226,1272,78616.35220125786,3378,29603.31557134399,3384,29550.827423167848
86252,1159.3934053703103,1271,78678.20613690009,3376,29620.853080568722,3377,29612.081729345577
