My application runs on Tomcat and uses SQL Server 2008 on Windows Server 2008, hosted on a cloud-based server.
One of my queries takes exactly 48 seconds to run from the application. It pulls 425 records from the database.
I was not surprised by the length of time. Although a similar query on Oracle ran in 6 seconds, the Oracle setup had 2GB of memory and a quad processor, whereas my SQL Server setup had 1GB of memory and a single 2.66GHz processor.
Therefore I increased my server's memory to 2GB. There was no change.
I then upgraded my server from a single processor to a dual processor. I was surprised to find that the query now took longer: exactly 70 seconds!
I am aware that the conversion of the query from PL/SQL to T-SQL may have been done very inefficiently, and that there may be inherent speed differences between Oracle and SQL Server. However, my main question is really: why should the same query run slower on a dual-core system than on a single-core one?
Thank you!
Depending on the hypervisor used and the total load on the host, dual virtual CPUs can result in getting (significantly) less CPU time, because some hypervisors' schedulers insist on finding two physical CPUs that are schedulable at the same time (strict co-scheduling). If your query is single-threaded, the advantage of a second CPU is zero, but getting less CPU time makes it slower.
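You can check whether the query is single-threaded by capping it at one degree of parallelism and comparing timings. A minimal sketch in T-SQL; `dbo.YourTable` and the filter are placeholders for your actual query:

```sql
-- Report CPU and elapsed time for the statement below.
SET STATISTICS TIME ON;

SELECT *
FROM dbo.YourTable              -- placeholder for your actual query
WHERE SomeColumn = 'SomeValue'
OPTION (MAXDOP 1);              -- force a serial plan for this statement

SET STATISTICS TIME OFF;
```

If the timing with MAXDOP 1 matches the 70 seconds you already see, the plan was effectively serial anyway, and the second vCPU cannot help it.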
Exactly the opposite is another possibility: if your query is multithreaded and your storage is slow, the two threads can create a random I/O storm that decreases your storage throughput.
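The server's wait statistics can help distinguish the two cases. A rough check using the standard `sys.dm_os_wait_stats` DMV (available in SQL Server 2008): high CXPACKET waits point at parallelism coordination overhead, while high PAGEIOLATCH_SH/PAGEIOLATCH_EX waits point at threads stalled on data-file reads, i.e. slow storage:

```sql
-- Top waits accumulated since the last service restart.
SELECT TOP (10)
       wait_type,
       waiting_tasks_count,
       wait_time_ms,
       signal_wait_time_ms
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;
```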