Help please,
I have an Access database with about 40 users placed around Australia, connected to a SQL Server backend based in Sydney. Performance is good most of the time but occasionally it deteriorates. The biggest complaint from users is that after updating a record it takes a second or two to move to the next line in a datasheet subform. The setup is a master form with a datasheet-style subform for entering order lines.
I have noticed that when these complaints start, ping times from the local PC to the SQL Server can get above 150ms and up to 300ms, when the norm is around 30-50ms. Is this a problem? Is the ping time a good reference for speed? Should 150ms still be acceptable?
My next issue is that we want to move the SQL Server to the US. It appears the best ping times I get are around 220ms. I have tested the connection and the lag on my forms is really bad. Has anyone ever had to connect to a SQL Server in the US from Australia? Can it be done? Should I be looking at a different platform?
Any help appreciated. Thanks. CE.
Related
We've released a new game on Facebook that uses SQL Azure and we're getting intermittent connection timeouts.
I dealt with this earlier and implemented a 'retry' solution that seemed to have dealt with the transient connection issues.
However, now that the game is out I'm seeing it happen again. Not often, but it is happening. When it happens, I try logging into the SQL Azure Management web portal and I get a connection timeout there too. Same with trying SSMS.
The query itself is the first one of the game and it's a simple select on a table with 4 records.
After about 4 minutes, the timeouts stop and everything is good for a day or two.
Since these are players around the country, I don't have direct contact with the users.
I'm looking for any advice on how I can figure out what's going on.
Thanks,
Tim
FYI: http://apps.facebook.com/RelicBall/
Depending on how much compute you have in front of your database, I would put a limit on the number of pooled connections that can be created, via the connection string.
Try setting it like this if, for example, you have 2 compute instances in front of the database:
Max Pool Size=70;
SQL Database can only handle 180 concurrent connections; this is a hard limit. When you are hitting the connection limit, a retry framework will make matters worse, as it will keep trying to connect for a period of time, leading to further downtime. This might be why you see several minutes of timeouts, as that is how long the compute retry frameworks take to give up.
http://msdn.microsoft.com/en-us/library/windowsazure/ff394114.aspx
Have a look with the following:
-- monitor connections
SELECT
    e.connection_id,
    s.session_id,
    s.login_name,
    s.last_request_end_time,
    s.cpu_time
FROM
    sys.dm_exec_sessions s
    INNER JOIN sys.dm_exec_connections e
        ON s.session_id = e.session_id
GO
You should try to add caching to your application design; this can greatly reduce your application's overhead on the database and is recommended practice with SQL Azure, especially as you can have connection issues. I have seen this type of issue before and it was connection limits, so it may be worth investing a bit of time in that direction to see if that is the cause. If not, I would open a ticket with MS Support.
HTH, good luck.
EDIT: Premium databases raise the limits on connections, so they are also worth investigating as a quick fix to this issue, and potentially a long-term one.
http://blogs.technet.com/b/dataplatforminsider/archive/2013/07/23/premium-preview-for-windows-azure-sql-database-now-live.aspx
I have a .NET application (VB.NET) that runs against an MS Access database. Every data request opens a connection to the Access database, runs the query, returns the results, and closes the connection again.
I placed the database on a Windows XP 32-bit machine.
I have two clients on which I installed the .NET application. Both clients are running Windows 7 Professional 32-bit.
Now I have a performance problem with this.
When I use the first client, it runs fine; all data is shown very fast. When I then use the second client, it takes some 10 seconds to connect to the database, fetch the data, and close the database connection. When I ask for other data on that second client, it all runs fine, until I request data from the first client again. Then it takes 10 seconds on the first client before my data is fetched.
Can anybody please help me with that? I owe a Belgian beer to the solver of this issue ;-)
Thanks!
Tom Wickerath wrote a great article on improving multiuser performance for MS Access applications. While his article assumes an MS Access front-end, many of the tips should apply to a .NET application. I recall two points that might help you:
Keep a persistent connection to the back-end
Use (short) UNC paths instead of mapped drives
After a long search, I found it out... My virus scanner, NOD32, was causing this, most probably by excessively scanning inbound and outbound network traffic.
I'm not sure stackoverflow is the right place for questions like this, but ...
It sounds like the first process is locking the file, so the second process has to wait.
"Use SQL Server" isn't a completely flippant response - SQL Server is specifically designed to handle concurrency issues like this.
IMHO ...
PS:
This is a pretty lame link, but it might help:
http://office.microsoft.com/en-us/access-help/about-sharing-an-access-database-on-a-network-mdb-HP005240860.aspx
PPS:
Here's a somewhat better link, with some suggestions for things you can do to improve concurrency:
http://www.softcoded.com/web_design/upgrading_access.php
I have an application with a central data tier that can execute a query into a DataTable using a SqlDataAdapter. None of this code has changed, but now all queries are taking at least 10x as long to execute, even those returning a single record.
The only difference is that I have been using the app in a VM, but the issue started midway through using the application; i.e., the speed issue did not manifest itself from the start of using the VM, but rather halfway through.
Has anyone else had an issue with the SqlDataAdapter taking a long time to fill for no reason? Executing the query in Management Studio, it runs in less than a second.
Firewalls are disabled
OK, after another half day wasted on this, it seems to be an issue relating to networking on the Virtual PC.
I have seen a massive improvement by changing the network adapter in the VM to Shared NAT, and I no longer experience the long delay when populating data tables.
It was obviously having an issue resolving the SQL Server.
For anyone else who stumbles across this post, here are the settings.
I have read-only access on a SQL view. The query below times out. How can I avoid this?
select count(distinct Status)
from [MyTable] with (NOLOCK)
where MemberType = 6
The error message I get is:
Msg 121, Level 20, State 0, Line 0
A transport-level error has occurred when receiving results from the server (provider: TCP Provider, error: 0 - The semaphore timeout period has expired.)
Your query is probably fine. "The semaphore timeout period has expired" is a network error, not a SQL Server timeout.
There is apparently some sort of network problem between you and the SQL Server.
Edit: However, apparently the query runs for 15-20 minutes before giving the network error. That is a very long time, so perhaps the network error is related to the long execution time. Optimization of the underlying view might help.
If [MyTable] in your example is a View, can you post the View Definition so that we can have a go at optimizing it?
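If you have VIEW DEFINITION permission (read-only access may not include it), something like this should pull the definition out - 'dbo.MyTable' here is just the placeholder name from your example:

-- sp_helptext works on all versions; OBJECT_DEFINITION needs 2005+.
EXEC sp_helptext 'dbo.MyTable';
SELECT OBJECT_DEFINITION(OBJECT_ID('dbo.MyTable'));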
Although there is clearly some kind of network instability or something interfering with your connection (at 15 minutes, it's possible you are crossing a NAT boundary, or something in your network is dropping the session), I would think you want such a simple(?) query to return well within any anticipated timeout (like 1s).
I would talk to your DBA and get an index created on the underlying tables on (MemberType, Status). If there isn't a single underlying table, or the results are more complex and created by the view or a UDF, and you are running SQL Server 2005 or above, have them consider indexing the view (basically materializing the view in an indexed fashion).
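As a rough sketch of what I mean, assuming a single underlying table - dbo.MemberBase is a hypothetical name, substitute the real one - a covering index would let the COUNT(DISTINCT Status) query seek on MemberType and read Status straight out of the index:

-- Hypothetical table name; use the view's actual base table.
-- Equality column (MemberType) first, then Status, so the query
-- never has to touch the base table at all.
CREATE NONCLUSTERED INDEX IX_MemberBase_MemberType_Status
    ON dbo.MemberBase (MemberType, Status);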
You could put an index on MemberType.
Please check your Windows system event log for any errors, specifically from the event source "Dhcp". It's very likely a networking error related to DHCP: an address lease time expired, or similar. It shouldn't be a problem related to the SQL Server or the query itself.
Just search the internet for "The semaphore timeout period has expired" and you'll get plenty of suggestions for what might be a solution to your problem. Unfortunately, there doesn't seem to be a single definitive solution.
Do you have an index defined over the Status column and MemberType column?
How many records do you have? Are there any indexes on the table? Try this:
;with a as (
    select distinct Status
    from MyTable
    where MemberType = 6
)
select count(Status)
from a
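Either way, it's worth checking where the time and I/O actually go. SET STATISTICS IO/TIME is standard T-SQL, so a quick sketch like this should show whether the table is being scanned:

-- Shows logical/physical reads and CPU/elapsed time per statement.
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

SELECT COUNT(DISTINCT Status)
FROM MyTable
WHERE MemberType = 6;

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;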
My team was experiencing these issues intermittently with long-running SSIS packages. This had been happening since Windows Server patching.
Our SSIS and SQL servers are on separate VM servers.
Working with our Wintel servers team, we rebooted both servers, and for the moment the problem appears to have gone away.
The engineer has said that they're unsure if the issue is the patches or the new VMTools that they updated at the same time. We'll monitor for now; if the timeout problems recur, they'll try rolling back the VMXNET3 driver first, and if that doesn't work, take off the June rollup patches.
So for us, the issue was nothing to do with our SQL queries (we're loading billions of new rows, so the packages have to be long-running).
This happens because another instance of SQL Server is running, so you need to kill it first; then you will be able to log in to SQL Server.
To do that, go to Task Manager and end the SQL Server process, then go to services.msc and start the SQL Server service.
While I would be tempted to blame my issues on the network - I'm getting the same error with my query, which is much, much bigger and involves a lot of loops - I think this is not the case.
Unfortunately, it's not that simple. The query runs for 3+ hours before getting that error, and apparently it crashes at the same point whether it runs as a query in SSMS or as a job on SQL Server (I haven't looked into the details of that yet, so I'm not sure it's the same error; it's definitely the same spot, though).
So, just in case someone comes here with a similar problem, this thread:
https://www.sqlservercentral.com/Forums/569962/The-semaphore-timeout-period-has-expired
suggests that it may equally well be a hardware issue or an actual timeout.
My loop iterations aren't even in terms of the time required for each (they depend on the sales level in a given month), so a good month takes about 20 minutes to calculate (the query looks at 4 years).
So it's entirely possible I need to optimise my query. I would even say it's likely, as some of the changes I made introduced new tables which are heaps... So: another round of indexing my data before tearing into VM config and hardware tests.
Being aware that this is an old question: I'm on SQL Server 2012 SE, SSMS is the 2018 beta, and the VM the SQL Server runs on has exclusive use of 132GB of RAM (30% of the total), 8 cores, and 2TB of SSD SAN.
What is the optimal number of connections that can be open on a SQL Server 2000 DB? I know at the previous company I worked for, on a Tru64 box with Oracle 8i and 8 processors, we'd figured out that 8*12 = 96 connections seemed to be a good number. Is there any such calculation for SQL Server 2000? The DB runs on a 2-processor (hyper-threaded to 4) machine, and there are a lot of transactions that run against it. The reason I ask is that we have an app that typically tends to leave around 100 connections open even when it is not doing anything, and I am having difficulty explaining that this might be a cause of our performance issues. Maybe SQL Server does not have such a limitation... Can any of you pour forth some wisdom on this? Much appreciated. Thanks,
I should add that it is the Standard Edition.
If you don't know whether this is your performance bottleneck, then you should be trying to determine that, not trying to limit the connections or something.
If you haven't, you should:
Use SQL Profiler to find long-running queries.
Monitor your DB server's CPU load, memory/page file usage, and network usage.
Find one of your longest-running queries (see #1 above) and write a very lean test app that can throw this query at your DB server during peak load and record some response times.
If #1 and #2 don't uncover anything, and #3 shows your DB server has slow response times under load, then you know you have a problem like "too many connections". But if you haven't done #3, then it seems advisable to do that, as mucking with connection limits and such seems like it will just create artificial bottlenecks and not really get you to the root of your problem, IMO.
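As a starting point for seeing what those ~100 idle connections are actually doing: SQL Server 2000 has no DMVs, but you can query master..sysprocesses. A rough sketch, where 'YourDatabase' is a placeholder:

-- One row per connection: its state, owner, accumulated CPU and I/O,
-- and whether it is sitting on an open transaction.
SELECT spid, status, loginame, hostname, program_name,
       cpu, physical_io, open_tran, last_batch
FROM master..sysprocesses
WHERE dbid = DB_ID('YourDatabase')  -- placeholder name
ORDER BY cpu DESC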
Your performance issue will not be caused by the number of connections.
As well as sliderhouserules' answer: as a quick fix, I'd suggest switching off hyperthreading rather than limiting your connections.
link1, link2 (note: this guy worked on the MS SQL 2005 code)
Each connection takes a trivial amount of memory. A shared db lock is for stability only.
This blog post on MSDN indicates there is no limit - at least in the Express editions: http://blogs.msdn.com/euanga/archive/2006/03/09/545576.aspx
And this indicates that it might be 256, for lite editions - http://blogs.msdn.com/stevelasker/archive/2006/04/10/SqlEverywhereInfo.aspx
This also shows no limit: http://channel9.msdn.com/forums/TechOff/169030-The-difference-between-SQL-Server-2005-Express-and-Developer-Edition/?CommentID=299642
Addition - from a comment: http://msdn.microsoft.com/en-us/library/aa196730(SQL.80).aspx indicates the max is 32767, while there is no "ideal".
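If you want to check the ceiling on your own server, here's a quick sketch (on SQL Server 2000, 'user connections' is an advanced option, hence the first two statements):

-- 32767 is the theoretical hard maximum; a 'user connections' value
-- of 0 means SQL Server manages the number dynamically up to that cap.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'user connections';
SELECT @@MAX_CONNECTIONS AS max_connections;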
If the app is a long-running app on the same server, and it leaves open DB handles that have created locks, that is truly bad for performance. You can check something like select * from sys.dm_tran_locks, or sp_lock, to get an idea.
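Note that sys.dm_tran_locks only exists from SQL Server 2005 on; on 2000 it's sp_lock. A sketch of both:

-- SQL Server 2000 and later:
EXEC sp_lock;

-- SQL Server 2005+: which sessions hold which locks.
SELECT request_session_id, resource_type, resource_database_id,
       request_mode, request_status
FROM sys.dm_tran_locks
ORDER BY request_session_id;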