I am accessing MS SQL Server Express, running in a VirtualBox image, from a Java/Hibernate application (with an EhCache setup).
The VM is connected via NIC mode.
When the server starts up, it loads most of the DB's data into EhCache.
Right now it takes ~5 minutes to start up. If I switch to a dedicated machine hosting MS SQL Server (not Express), startup takes ~1 minute.
Any suggestions as to what could be wrong here?
OK, found the problem.
VirtualBox in NIC mode somehow reduces the network speed between host and guest to 10 Mbps.
More can be found in this forum thread on the VirtualBox site.
For a ticket in this regard (which seems to be marked as fixed, but isn't, at least not in version 4.3.10 r93012), see here.
I have a remote Linux server, and I want to connect to an Oracle database on another server using the ojdbc7 lib.
When I connect directly to the database from my Windows PC, using the same client and ojdbc7 lib, the connection time is reasonable.
Now, when I connect through my Linux server, I get extreme slowness, but only in the connection time. Once connected, execution is fine.
I have read about adding -Djava.security.egd=file:/dev/urandom, as in this post, but nothing changed.
What can I do to fix this delay in setting up a connection from Linux?
Close, but no cigar: it's "file:///dev/urandom", or one of the variations; see e.g. https://anirban-m.blogspot.com/2014/03/jdbc-connection-reset-error-java.html
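To make the distinction concrete, here is a small sketch (the class name is made up, not from the original post) that sets the property programmatically before any JDBC or SecureRandom code runs. The important detail is the "file:///dev/urandom" form; plain "file:/dev/urandom" is special-cased by some JDKs and can silently fall back to the blocking /dev/random:

```java
// Hypothetical helper: set the entropy-gathering device early, before
// any SecureRandom/JDBC code is touched. Equivalent to passing
// -Djava.security.egd=file:///dev/urandom on the command line.
public class EgdSetup {
    public static void main(String[] args) {
        System.setProperty("java.security.egd", "file:///dev/urandom");
        System.out.println(System.getProperty("java.security.egd"));
        // prints: file:///dev/urandom
    }
}
```

Setting it on the command line is usually preferable, since a programmatic set only works if it runs before the security provider first reads the property.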
I noticed you are using version 12.1.0.1.
There was an Oracle bug where JDBC connections could take an excessive time because the data being sent required the listener to perform a DNS lookup for each connection, and that could apparently be very slow for some reason.
The bug was fixed in 12.2, and there is a back-ported fix (patch) for 12.1.0.2.
In the meantime, ask your Linux admin to go through the process of tuning DNS lookups on that server, e.g. by adjusting /etc/resolv.conf or enabling the name service cache daemon. I'm not really an expert in Linux administration, so I can't walk you through it, but based on the problem and the version you are using, that's where I'd look.
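A rough way to check whether name resolution is actually the bottleneck is to time a forward and reverse lookup from the JVM's point of view. This is a hedged diagnostic sketch (the class name is made up, not part of any answer above); slow results here point at the resolver configuration rather than at Oracle itself:

```java
import java.net.InetAddress;

// Times one forward lookup (name -> address) and one reverse lookup
// (address -> canonical name). If either takes seconds, DNS tuning on
// this machine is worth pursuing before touching the database.
public class DnsTimer {
    public static void main(String[] args) throws Exception {
        String host = args.length > 0 ? args[0] : "localhost";
        long start = System.nanoTime();
        InetAddress addr = InetAddress.getByName(host);    // forward lookup
        String name = addr.getCanonicalHostName();         // reverse lookup
        long ms = (System.nanoTime() - start) / 1_000_000;
        System.out.println(host + " -> " + addr.getHostAddress()
                + " -> " + name + " in " + ms + " ms");
    }
}
```

Run it once with the database server's hostname as the argument; a large gap between this timing and a plain ping suggests the delay lives in DNS, not the network path.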
I have developed a REST API using the Spring Framework. When I deploy it in Tomcat 8 on RHEL, the response times for POST and PUT requests are very high compared to deployment on my local machine (Windows 8.1): on the RHEL server it takes 7-9 seconds, whereas on the local machine it is less than 200 milliseconds.
The RHEL server has 4 times the RAM and CPU of the local machine. Default Tomcat configurations are used on both Windows and RHEL. Network latency is ruled out because GET requests take more or less the same time as on the local machine, whereas the time to first byte is higher for POST and PUT requests.
I even tried profiling the remote JVM using VisualVM. There are no major hotspots in my custom code.
I was able to reproduce the same issue on other RHEL servers. Is there any Tomcat setting that could help fix this performance issue?
The profiling log you have posted means little, more or less. It shows the following:
The blocking queue is blocking. This is normal, because that is its purpose; it means there is nothing to take from it.
It is waiting for a connection on the socket, which is also normal.
You do not specify your RHEL server's physical/hardware setup. The operating system might not be the only factor here, and you still cannot rule out network latency. If you have a SAN, the SAN may have latency of its own; even with SSD drives, if the RHEL box uses a SAN with replication, you may experience network latency there.
I am more inclined to first check the I/O on the disk than to focus on the operating system. If the server is shared, other processes might be occupying the disk.
You say that latency is ruled out because the GET requests take the same time. That is not enough to rule it out: as I said, that is the latency between the client and the application server; it says nothing about the latency between your app server machine and your SAN, disk, or whatever storage is there.
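To check the disk-I/O theory before blaming the OS, a crude synchronous-write probe like the following can be run on the RHEL box. This is only a sketch (the class name and block counts are illustrative, not from the answer above); tools like iostat or sar give a fuller picture:

```java
import java.io.RandomAccessFile;
import java.nio.file.Files;
import java.nio.file.Path;

// Each write is forced to the device ("rwd" mode), so the elapsed time
// reflects storage latency (local disk or SAN) rather than page-cache speed.
public class IoProbe {
    public static void main(String[] args) throws Exception {
        Path tmp = Files.createTempFile("ioprobe", ".bin");
        byte[] block = new byte[4096];
        long start = System.nanoTime();
        try (RandomAccessFile raf = new RandomAccessFile(tmp.toFile(), "rwd")) {
            for (int i = 0; i < 100; i++) {
                raf.write(block); // synced on every write in "rwd" mode
            }
        }
        long ms = (System.nanoTime() - start) / 1_000_000;
        Files.delete(tmp);
        System.out.println("100 x 4 KiB synced writes in " + ms + " ms");
    }
}
```

If this number is an order of magnitude worse on the RHEL server than on the Windows machine, slow storage (or a busy shared disk) is a likelier culprit than any Tomcat setting.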
I have an application developed in Java, using MySQL and C3P0 for connection pooling. The application works perfectly on localhost, and connection management was superb. However, I uploaded this application to a Daily Razor "private JVM", and there were far more MySQL connections than the application should ever make! Normally the application makes at most 10 connections, but hosted there I can see 30 or more.
Apart from that, I always had a number of MySQL processes running on localhost, but on the online server I can see only 2. It is like everything is upside down. The application works fine, but there were a number of times I had to restart the server due to slow-connection issues.
What could be causing this? Please don't ask for code, because I don't know where the issue is.
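One thing worth ruling out without any code: whether the deployed host is actually picking up the same pool limits as localhost. A sketch of a c3p0.properties that caps the pool is shown below; the property names are from c3p0's documented configuration, but the values are purely illustrative. If the file deployed on the host differs, or if redeployments leave old pools running, that alone could explain seeing 30+ connections:

```properties
# Illustrative values only - compare against what is actually deployed.
c3p0.minPoolSize=3
c3p0.maxPoolSize=10
c3p0.maxIdleTime=300
c3p0.testConnectionOnCheckout=true
```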
I am experiencing a strange performance issue when accessing data in SQL Server from a Spring-based application. In my current setup, the Spring Java application runs on a separate machine, accessing data from a remote SQL Server DB. I am using Spring's NamedParameterJdbcTemplate, which I believe uses a PreparedStatement to execute the query. For some reason, some queries take a long time to complete (approx. 2 minutes). The Java app runs on a 64-bit machine with the 64-bit version of Java 1.6, and the database is MS SQL Server 2008 R2.
The strange part: if I run the same Java app from my laptop running 32-bit Windows XP, with the same version of Java 1.6, the query takes less than a second against the exact same remote DB server (in fact, I am connected through VPN).
This suggests the issue is not with the Spring framework but may be with the SQL Server JDBC driver. I am using Microsoft JDBC Driver 4.0 for SQL Server (sqljdbc.jar).
I am completely clueless as to what could possibly be wrong and not sure where to start my debugging process.
I understand there isn't much information in my question, so please let me know if you need any specific detail.
Thanks for any help/suggestions.
I think this may be due to the combination of your Java version and JDBC driver failing to handshake the connection with the server. See "Driver.getConnection hangs using SQLServer driver and Java 1.6.0_29" and http://blogs.msdn.com/b/jdbcteam/archive/2012/01/19/patch-available-for-sql-server-and-java-6-update-30.aspx
If so, switching to Java 1.6.0 update 30 or higher and applying KB 2653857 ought to fix it.
I actually have a big problem! I open a connection to an Oracle database with DriverManager.getConnection(url, properties).
On UNIX machines (currently on a VM), 99% of the time it takes minutes until the call returns a connection. I increased Oracle's connection timeout so that I don't get an SQLException, but it still takes up to 3 minutes to get a connection. On my Windows machine the connection is returned in under 1 second.
telnet to the server and port works, ping is successful, and traceroute looks good. I have also tried from several VMs and against different databases on different physical machines.
I am running the current JDBC driver, ojdbc6-11.2.0.2.0.jar.
Does anyone have a good idea?
After a long time we figured out the problem: the Oracle JDBC driver blocked at the point where a unique ID was read. After setting the VM argument
-Djava.security.egd=file:/dev/urandom
we could guarantee that a unique ID is always generated in adequate time. The default /dev/random, unfortunately, only produces output when the machine has gathered enough entropy, which is often lacking on virtual machines.
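The effect can be seen with a small probe (a sketch, not the application code; the class name is made up) that times how long seeding takes. On an entropy-starved VM using the default /dev/random this call can block for a long time; with the flag above it returns almost immediately:

```java
import java.security.SecureRandom;

// Times the kernel-backed seed generation that the Oracle driver
// triggers indirectly when building its unique connection ID.
public class SeedTimer {
    public static void main(String[] args) {
        long start = System.nanoTime();
        byte[] seed = SecureRandom.getSeed(16); // may block on /dev/random
        long ms = (System.nanoTime() - start) / 1_000_000;
        System.out.println(seed.length + "-byte seed in " + ms + " ms");
    }
}
```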
Maybe this helps some of you folks one day.
It's a little bit strange, but it could be a reverse DNS problem.
If your Oracle server is on Unix, try the following:
$ host IP_ADDRESS_OF_WIN_MACHINE
$ host IP_ADDRESS_OF_LINUX_MACHINE
See if there is anything different between the two name resolutions. If there is, it might be that a reverse DNS lookup on the Linux IP is taking too long.
It has happened to me.