Here is the gist for my test
I am trying to test the performance of Neo4j writes (embedded and REST) using JLBH. Unfortunately, I am not able to get anything beyond 50 node insertions per second (((
I am hoping that this has something to do with how I am configuring the multiple performance-tuning knobs.
Config details:
The JVM is Oracle JDK 1.8.0_77.
The server ulimit is set to 40000.
The server is an Intel Ubuntu 14.04 server with 24 GB of RAM.
uname is (Linux ymo-dt 3.19.0-58-generic #64~14.04.1-Ubuntu SMP Fri Mar 18 19:05:43 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux)
Both the client and the server are running on the same machine (obviously, since it is embedded).
I figured once I know how to tune the embedded version, I can go ahead and try the REST API with the client and server running separately.
I am trying to follow this SO question for the tuning.
As the gist (Gradle part) shows, the Neo4j version I am testing is 3.0.1 from Maven.
I am passing -Xmx and -Xms, both set to 4096m, as per the Gradle file.
I can't for the life of me understand how that person got 7,070 node insertions per second.
My questions:
What am I doing wrong?
What do I need to add to the perfTuning section to make this reach 7,000 node insertions per second?
I am hoping that I am not going to need to batch the inserts at all. So the actual question is: what is the maximum insert rate I can achieve without resorting to batching? Or how am I supposed to do it with one transaction per insertion? (See the sketch below.)
Once I move to the REST API, what is the most performance-tuned API I can use to access the server remotely? I tried Bolt and I am also not getting more than 38 nodes inserted per second.
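For reference, a minimal sketch of the one-transaction-per-insert loop I mean, assuming the Neo4j 3.0 embedded API (the store path, label, and count are illustrative):

import java.io.File;
import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.Label;
import org.neo4j.graphdb.Node;
import org.neo4j.graphdb.Transaction;
import org.neo4j.graphdb.factory.GraphDatabaseFactory;

public class OneTxPerInsert {
    public static void main(String[] args) {
        // Illustrative store path; in my test this would point at the tmpfs mount
        GraphDatabaseService db = new GraphDatabaseFactory()
                .newEmbeddedDatabase(new File("/mnt/data1/neo4j"));
        try {
            for (int i = 0; i < 10_000; i++) {
                // One transaction per insert: every commit flushes the transaction
                // log to disk, so throughput is bounded by fsync latency.
                try (Transaction tx = db.beginTx()) {
                    Node node = db.createNode(Label.label("Item"));
                    node.setProperty("id", i);
                    tx.success();
                }
            }
        } finally {
            db.shutdown();
        }
    }
}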
Edit:
Seems like ext4 is to blame here on Linux. Even after mounting a tmpfs and pointing Neo4j at it, I am still getting only around 300 TPS (((
I mounted a tmpfs in /etc/fstab:
tmpfs /mnt/data1 tmpfs defaults,noatime,nosuid,nodev,noexec,mode=1777,size=2G 0 0
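For the server/REST case, the store can be pointed at the tmpfs the same way; assuming Neo4j 3.0's neo4j.conf setting names, something like:

dbms.directories.data=/mnt/data1/neo4j/data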
Related
We have two WildFly 16 servers running on Linux: the first with JDK 11.0.2, the second with JDK 8.
WildFly 1 has a remote outbound connection to WildFly 2 which is used for HTTP remoting. This is necessary because it has to run with 32-bit Java 8.
When we perform a load test, the response time increases steadily after 100,000 requests from WildFly 1 to WildFly 2.
A heap dump analysis of WildFly 2 using MAT gives us some information about the problem. The heap dump shows a lot of io.netty.buffer.PoolChunk instances that use about 73% of the memory.
It seems the inbound buffers are not being cleaned up properly.
WildFly 2 does not recover when the load stops.
Is there any workaround or setting to avoid this?
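One hedged avenue, assuming the PoolChunk instances come from Netty's default PooledByteBufAllocator (WildFly's own remoting layer is XNIO-based, so the Netty buffers presumably belong to a library inside the deployment), is to shrink or disable the pooled allocator via Netty's standard system properties, e.g. in JAVA_OPTS:

# assumption: the chunks come from Netty's default pooled allocator
JAVA_OPTS="$JAVA_OPTS -Dio.netty.allocator.maxOrder=9"    # 4 MB chunks instead of the 16 MB default
# or disable pooling entirely:
JAVA_OPTS="$JAVA_OPTS -Dio.netty.allocator.type=unpooled"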
I have developed a REST API using the Spring Framework. When I deploy it in Tomcat 8 on RHEL, the response times for POST and PUT requests are very high compared to the deployment on my local machine (Windows 8.1). On the RHEL server it takes 7-9 seconds, whereas on the local machine it is less than 200 milliseconds.
The RHEL server has 4 times the RAM and CPU of the local machine. Default Tomcat configurations are used on both Windows and RHEL. Network latency is ruled out because GET requests take more or less the same time as on the local machine, whereas the time to first byte is higher for POST and PUT requests.
I even tried profiling the remote JVM using VisualVM. There are no major hotspots in my custom code.
I was able to reproduce the same issue on other RHEL servers. Is there any Tomcat setting which could help fix this performance issue?
The profiling log you have posted means little, more or less. It shows the following:
The blocking queue is blocking. Which is normal, because that is its purpose: to block. It means there is nothing to take from it.
It is waiting for a connection on the socket. Which is also normal.
You do not specify your RHEL physical/hardware setup. The operating system might not be the only factor here, and you still cannot eliminate network latency. If you have a SAN, the SAN may have latency of its own; if you are using an SSD drive and RHEL sits on a SAN with replication, you may experience network latency there too.
I am more inclined to first check the I/O on the disk than to focus on the operating system. If the server is shared, there might be other processes occupying the disk.
You are saying that latency is ruled out because the GET requests take the same time. That is not enough to rule it out: as I said, that is the latency between the client and the application server; it does not account for the latency between your app server machine and your SAN, disk, or whatever storage is there.
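To compare raw disk commit latency between the two machines independently of Tomcat, a quick probe like the following can help (a minimal sketch; the path and iteration count are arbitrary):

import java.io.RandomAccessFile;

public class FsyncProbe {
    public static void main(String[] args) throws Exception {
        try (RandomAccessFile f = new RandomAccessFile("/tmp/fsync-probe.bin", "rw")) {
            byte[] block = new byte[4096];
            int iterations = 100;
            long start = System.nanoTime();
            for (int i = 0; i < iterations; i++) {
                f.write(block);
                f.getFD().sync(); // force the write down to physical storage
            }
            long avgMicros = (System.nanoTime() - start) / iterations / 1000;
            System.out.println("avg write+sync latency: " + avgMicros + " us");
        }
    }
}

If the RHEL box shows latencies orders of magnitude above the local machine, the SAN/disk path is the place to dig.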
I am accessing MS SQL Server Express running in a VirtualBox image from a Java/Hibernate (with EhCache set up) application.
The VM is connected via NIC mode.
When the server starts up, it loads (most of) the DB's data into EhCache.
Right now it takes ~5 minutes to start up. If I switch to a dedicated machine hosting MS SQL Server (not Express), the startup takes ~1 minute.
Any suggestions as to what could be wrong here?
OK, found the problem.
VirtualBox in NIC mode somehow reduces the network speed between host and guest to 10 Mbps.
More can be found in this forum thread on the VirtualBox site.
For a ticket in this regard (which seems to be marked as fixed but isn't, at least not in version 4.3.10 r93012), see here.
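If anyone hits the same thing, switching the emulated adapter type is worth a try before anything else; a hedged example (the VM name is a placeholder, and virtio requires the guest drivers):

VBoxManage modifyvm "sqlserver-vm" --nictype1 virtio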
I am caching serialized POJOs (4 MB-8 MB objects) concurrently into Couchbase Server with the Couchbase client (couchbase-client-1.4.3).
for (int i = 0; i < 20; i++) {        // up to 20 iterations
    new Thread(cacheTask).start();    // cacheTask is the Runnable that caches the objects
    Thread.sleep(500);                // the less sleep time, the more cache failures :(
}
I have 2 replicated servers. The client can cache small objects, but when the object size increases, it throws exceptions:
Caused by: net.spy.memcached.internal.CheckedOperationTimeoutException: Timed out waiting for operation - failing node: 192.168.0.1/192.168.0.2:11210
at net.spy.memcached.internal.OperationFuture.get(OperationFuture.java:167)
at net.spy.memcached.internal.OperationFuture.get(OperationFuture.java:140)
I found similar questions and answers. However, I am not in a position to add more memory, as the applications which use the Couchbase client have their own memory constraints. I did try adding JVM arguments such as -XX:+UseConcMarkSweepGC -XX:MaxGCPauseMillis=500.
This is how I create the Couchbase client:
CouchbaseConnectionFactoryBuilder cfb = new CouchbaseConnectionFactoryBuilder();
cfb.setFailureMode(FailureMode.Retry);  // retry operations on a failed node
cfb.setMaxReconnectDelay(5000);         // ms
cfb.setOpTimeout(15000);                // per-operation timeout, ms
cfb.setOpQueueMaxBlockTime(10000);      // max wait to enqueue an operation, ms
client = new CouchbaseClient(cfb.buildCouchbaseConnection(uris, BUCKET_TYPE, PASSWORD));
I tried with maximum time gaps to make the caching succeed and avoid timeouts, but that doesn't work either. In our real live applications, 7 or 8 cache writes usually happen within a second. The applications cannot hold up processing until a cache write succeeds (if they wait, then caching is useless because of its time cost; going straight to the database is always cheaper!!!).
Please, can anyone tell me how I can improve my Couchbase client (since I have hardware and JVM limitations, I am looking for a way to improve the client) to avoid such timeouts and improve performance? Can't I take the serialization and compression out of the Couchbase client and do it myself?
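For what it's worth, what I mean by doing the serialization and compression myself is something like the following, storing the raw bytes instead of the POJO (the helper name is mine; spymemcached's default transcoder stores a byte[] as-is):

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.zip.GZIPOutputStream;

public final class PojoCodec {
    // Serialize and GZIP-compress the POJO ourselves so fewer bytes cross the wire.
    public static byte[] toCompressedBytes(Serializable pojo) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(new GZIPOutputStream(bytes))) {
            out.writeObject(pojo);
        }
        return bytes.toByteArray();
    }
}

Usage would then be client.set(key, 0, PojoCodec.toCompressedBytes(pojo)).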
Updated: my Couchbase setup.
- I am caching serialized objects of 5 to 10 MB.
- I have 2 nodes on different machines.
- Each PC has 4 GB RAM. CPUs: one PC has 2 cores, the other 4 cores. (Is that not enough?)
- The client application runs on the PC with 4 cores.
- I just configured a LAN for this testing.
- Both OSes are Ubuntu 14. One PC is 32-bit, the other 64-bit.
- The Couchbase version is the latest Community Edition, couchbase-server-community_2.2.0_x86_64. (Is this buggy? :( )
- The Couchbase client is Couchbase-Java-Client-1.4.3.
- There are 100 threads started with a 500 ms gap. Each thread caches into CB.
Also, I checked the system monitoring. The PC running both the CB node and the client shows high CPU and RAM usage, but the other, replicated PC (with lesser hardware) does not show much usage and looks normal.
EDIT: Could this be happening because of a client-side issue or because of the CB server? Any ideas, please?
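A hedged client-side sketch of bounding the in-flight writes instead of spawning raw threads, so the client's operation queue is not flooded (the pool size and the objectsToCache/keyFor names are illustrative):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

ExecutorService pool = Executors.newFixedThreadPool(4); // bound the in-flight writes
for (final Object pojo : objectsToCache) {
    pool.submit(new Runnable() {
        public void run() {
            try {
                client.set(keyFor(pojo), 0, pojo).get(); // block this worker, not the app
            } catch (Exception e) {
                // log and back off; retrying here beats timing out 100 queued ops at once
            }
        }
    });
}
pool.shutdown();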
I have a SOAP client that calls a web service over SSL. When I add this line:
System.setProperty("java.protocol.handler.pkgs","com.sun.net.ssl.internal.www.protocol");
the client works at a rate of 15 calls per second; when I remove it, the speed goes down to 1.5 per second (10 times slower). I am using Java 4 and Tomcat 6 on a Windows machine for my development environment.
I'd be happy with this, but when deploying the same code to Oracle Application Server 10g on a Unix machine, the speed is always 1.5 per second whether I set the property or not!
Can anyone figure out what is going on here?!
In Oracle App Server, try using oracle.mds.net.protocol instead of com.sun.net.ssl.internal.www.protocol. This is the value used by default in WebLogic; I have never used OAS, so I can't advise beyond that.
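Side note: the handler package can also be supplied as a JVM argument instead of being set in code, which avoids any ordering issues around when the first https URL is resolved:

-Djava.protocol.handler.pkgs=oracle.mds.net.protocol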