I am using IIS counters to monitor how busy IIS is. In particular, I like two of them:
Current Anonymous Users (The number of users who currently have an anonymous request pending with the WWW service. In IIS 6.0, Current Users (Anonymous or NonAnonymous) is the number of requests currently being worked on by the server)
Total Anonymous Users (The number of users who have established an anonymous request since the WWW service started. This counter does not increment when files are being served from the kernel cache.)
Because my bottleneck is the database, which happens to be Oracle 10g, I wonder whether similar counters could be obtained from the Oracle server (at the database level).
Basically speaking, I would like to know how many requests to database ABC are waiting to be served at the moment of my request, and how many requests have been served since some point (the last reset, the beginning of the day, ...).
How could I get this data from the Oracle server?
V$SESSION can be used to determine how many database sessions are active at the current instant in time. This query shows the number of user sessions (rather than the background sessions that the Oracle database itself creates) that are active at the current instant. You may want to further restrict this to the number of active sessions where the USERNAME is the user that your middle tier connects as, or where the MACHINE from which the session was created is one of your middle-tier servers.
SELECT COUNT(*)
FROM v$session
WHERE status = 'ACTIVE'
AND type = 'USER'
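If you want to narrow that down to sessions coming from your middle tier, as suggested above, a sketch along those lines (the USERNAME and MACHINE values are placeholders for your environment):
SELECT COUNT(*)
FROM v$session
WHERE status = 'ACTIVE'
AND type = 'USER'
AND (username = 'APP_USER'
     OR machine IN ('webserver1', 'webserver2'))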
There is no easy mapping in Oracle for the "number of requests served" of a web browser. From a database perspective, there aren't any markers of when a "request" begins and ends. You could potentially count transactions but the Oracle database itself is constantly issuing transactions in the background which is likely to cause problems if you wanted a measure that would map closely to the number of web pages served.
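If a rough transaction-based proxy is still wanted, the cumulative commit count since instance startup is exposed in V$SYSSTAT; you would need to diff two snapshots to get a "since the start of the day" figure, and the background-transaction caveat above still applies:
SELECT value
FROM v$sysstat
WHERE name = 'user commits'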
That said, however, using counters to diagnose and monitor Oracle database performance is not a particularly good idea. Oracle has much more sophisticated monitoring and tuning tools available. Depending on the edition (standard or enterprise) and whether you've licensed the Performance and Tuning Pack, you'd be much better served grabbing an AWR report from a time period when the database was the bottleneck and analyzing that to see what needs to be tuned.
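For reference, on a standard Oracle install an AWR report is generated from SQL*Plus with the supplied script (assuming the relevant pack licensing mentioned above; the ? is the usual shorthand for ORACLE_HOME):
@?/rdbms/admin/awrrpt.sql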
I have a performance issue with my .NET app hosted on an Azure web app, connecting to Azure SQL DB with a custom connection string.
The more users there are, the slower the app becomes. Therefore I am wondering if there are some improvements to make at the connection pool level.
How can I check the pool size currently set? How can I detect SQL issues when handling requests from different users? And how do I set the pool size?
Thank you for your help.
I think it's related to the resource limits for the Azure SQL Database server.
The more users there are, the slower the app gets; one of the most important reasons is that database resource limits are being reached.
Compute (DTUs and eDTUs / vCores)
When database compute utilization (measured by DTUs and eDTUs, or vCores) becomes high, query latency increases and can even time out.
Storage
When database space used reaches the max size limit, database inserts and updates that increase the data size fail and clients receive an error message. Database SELECTS and DELETES continue to succeed.
Sessions and workers (requests)
The maximum number of sessions and workers are determined by the service tier and compute size (DTUs and eDTUs). New requests are rejected when session or worker limits are reached, and clients receive an error message. While the number of connections available can be controlled by the application, the number of concurrent workers is often harder to estimate and control. This is especially true during peak load periods when database resource limits are reached and workers pile up due to longer running queries.
For details, please see: What happens when database resource limits are reached.
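To see how close the database is to these limits, you can query the resource-usage DMV from inside the database (the view and columns below are as documented for Azure SQL Database; VIEW DATABASE STATE permission is required):
SELECT TOP (10)
       end_time,
       avg_cpu_percent,
       avg_data_io_percent,
       avg_log_write_percent,
       max_session_percent,
       max_worker_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC;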
If your Azure SQL DB is a single database, you can refer to these documents:
Azure SQL Database vCore-based purchasing model limits for a single database.
Resource limits for single databases using the DTU-based purchasing model.
Choose the most appropriate service tier.
About the performance issue, you can also use Monitoring and performance tuning. It will help you troubleshoot the performance issue and improve performance.
Hope this helps.
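On the pool-size part of the question: in ADO.NET the pool is configured per connection string, typically with the Pooling / Min Pool Size / Max Pool Size keywords (the default maximum is 100). The server, database, and values below are only illustrative:
Server=tcp:yourserver.database.windows.net,1433;Initial Catalog=yourdb;User ID=youruser;Password=yourpassword;Pooling=true;Min Pool Size=5;Max Pool Size=200;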
I'm new to Azure SQL Database, as this is my first project migrating from an on-premises setup to everything on Azure. The first thing that got my attention is that there is a limit on concurrent logins to Azure SQL Database, and if that number is exceeded, subsequent requests start being dropped. For the current service tier (S0), it caps at 60 concurrent logins, which I have already reached multiple times, as I've encountered a few SQL failures when looking at my application log.
So my question is:
Is it normal to exceed that number of concurrent logins? I'd like to get an idea of whether my application has an issue, or whether my current service tier is simply too low.
I've looked into our database code to make sure we are not leaving database connections open. We use Enterprise Library, and every use of DBCommand and IDataReader is wrapped in a using block, so they get disposed once they go out of scope.
Note: my web application consists of a front-end app with multiple web services supporting the underlying features, and each service connects to the same database for a specific collection of data. That makes me think hitting 60 concurrent logins might be normal, since a page or an action can involve multiple calls behind the scenes, and therefore multiple connections to the database from a few APIs; with more than one user on the application, 60 is really easy to reach.
Again, with the on-premises setup in the past, I never noticed this kind of limitation.
Thanks.
To answer your question, the limit is 60 on an S(0)
http://gavinstevens.com/2016/11/30/sql-server-vs-azure-sql/
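To see what is actually consuming the sessions before deciding between a code issue and a tier that is too low, a query along these lines can be run against the database (sys.dm_exec_sessions is a standard DMV; the grouping columns are just a suggestion):
SELECT login_name, host_name, program_name, COUNT(*) AS session_count
FROM sys.dm_exec_sessions
WHERE is_user_process = 1
GROUP BY login_name, host_name, program_name
ORDER BY session_count DESC;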
There are many SQL Server instances hosted on different servers.
All of them use SQL Server Authentication, so the same login is shared by many people in the organization.
How can we trace who deleted some of the records in a particular table?
Do we need any additional coding, such as triggers, or is this a built-in feature of SQL Server that provides those details?
Please help me.
Thank You.
If the deletion has already occurred and you had nothing in place to track / log this, then the chances are going to be very low - they are not zero, but not far above it.
If you use the transaction log to identify the exact deletion and the session ID of the deletion (which we already know is the shared user login), and you have successful-login security auditing enabled, you would in theory be able to trace it back to the IP address that made the deletion.
However, that is a pretty slim chance. I would suspect that the login is from the actual application software, and you would have needed that to be running directly on the user's machine, i.e. not a 3-tier / web-based server of any flavor, but a good old thick client app making direct connections.
That gets you an IP address and a time, but not who was logged in on that machine at that time; if the machine is shared in any form, then you are having to get login records from the machine itself, etc.
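Going forward, a trigger-based audit is the simplest way to capture this. A minimal sketch (the table and column names dbo.YourTable / YourTableID are purely illustrative placeholders for your own schema):
CREATE TABLE dbo.YourTable_DeleteAudit
(
    AuditID     INT IDENTITY(1,1) PRIMARY KEY,
    YourTableID INT,
    DeletedAt   DATETIME      NOT NULL DEFAULT GETDATE(),
    LoginName   SYSNAME       NOT NULL DEFAULT SUSER_SNAME(), -- the shared SQL login
    HostName    NVARCHAR(128) NULL     DEFAULT HOST_NAME()    -- client machine name (can be spoofed)
);
GO
CREATE TRIGGER dbo.trg_YourTable_AuditDelete
ON dbo.YourTable
AFTER DELETE
AS
BEGIN
    SET NOCOUNT ON;
    -- Record one audit row per deleted row; the defaults capture when, by which login, and from which machine.
    INSERT INTO dbo.YourTable_DeleteAudit (YourTableID)
    SELECT d.YourTableID
    FROM deleted AS d;
END;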
I am responsible for a third-party application (no access to source) running on IIS and SQL Server 2005 (500 concurrent users, 1 TB of data, 8 IIS servers). We have recently started to see significant blocking on the database (after months of running this application in production with no problems). This occurs at random intervals during the day, approximately every 30 minutes, and affects between 20 and 100 sessions each time. All of the affected sessions eventually hit the application timeout and abort.
The problem disappears and then gradually re-emerges. The SPID responsible for the blocking always has the following features:
WAIT TYPE = ASYNC_NETWORK_IO
The SQL being run is "(@claimid varchar(15)) SELECT claimid, enrollid, status, orgclaimid, resubclaimid, primaryclaimid FROM claim WHERE primaryclaimid = @claimid AND primaryclaimid <> claimid". This is relatively innocuous SQL that should only return one or two records, not a large dataset.
NO OTHER SQL statements have been implicated in the blocking, only this SQL statement.
This is parameterized SQL for which an execution plan is cached in sys.dm_exec_cached_plans.
This SPID has an object-level S lock on the claim table, so all UPDATEs/INSERTs to the claim table are also blocked.
HOST ID varies. Different web servers are responsible for the blocking sessions. E.g., sometimes we trace back to web server 1, sometimes web server 2.
When we trace back to the web server implicated in the blocking, we see the following:
There is always some sort of application-related error in the Event Log on the web server, linked to the Host ID and Host Process ID from the SQL session.
The error messages vary, usually some sort of SystemOutOfMemory. (These error messages seem to be similar to error messages that we have seen in the past without such dramatic consequences. We think this was happening before, but didn't lead to blocking. Why now?)
No known problems with the network adapters on either the web servers or the SQL Server.
(In any event, the record set returned by the offending query would be small.)
Things ruled out:
Indexes are regularly defragmented.
Statistics regularly updated.
Increased sample size of statistics on claim.primaryclaimid.
Forced recompilation of the cached execution plan.
Created a compound index with primaryclaimid, claimid.
No networking problems.
No known issues on the web server.
No changes to application software on the web servers.
We hypothesize that the chain of events goes something like this:
Web server process submits the SQL above.
SQL Server executes the SQL, during which it acquires a lock on the claim table.
Web server process gets an error and dies.
SQL Server session is hung waiting for the web server process to read the data set.
SQL Server sessions that need to get X locks on parts of the claim table (anyone processing claims) are blocked by the lock on the claim table and remain blocked until they all hit the application timeout.
Any suggestions for troubleshooting while waiting for the vendor's assistance would be most welcome.
Is there a way to force SQL Server to lock at the row/page level for this particular SQL statement only?
Is there a way to set a threshold on ASYNC_NETWORK_IO waits only?
ASYNC_NETWORK_IO is caused by clients not able to receive data quick enough and filling network buffers (simply put). There is no magic SQL Server setting to fix it.
reboot the client (even if it's web server)
ensure NICs are set correctly (firmware, full duplex etc)
ensure physical cables are ok (any packet losses etc?)
etc
It is not a SQL Server issue, as such...
Blog article 1
BOL:
ASYNC_NETWORK_IO: Occurs on network writes when the task is blocked behind the network. Verify that the client is processing data from the server.
And another with link to MS whitepaper
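On the side question about forcing row/page-level locking for just this statement: the usual mechanism is a table hint, which only works if the statement text can be changed, something that is rarely possible with a vendor app, and it would not address the ASYNC_NETWORK_IO wait itself. Shown purely to illustrate the syntax, using the statement from the question:
SELECT claimid, enrollid, status, orgclaimid, resubclaimid, primaryclaimid
FROM claim WITH (ROWLOCK)
WHERE primaryclaimid = @claimid
  AND primaryclaimid <> claimid;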
I had the same problem and it got solved when I disabled the Kaspersky antivirus on the client.
I'm attempting to help our network engineers troubleshoot a situation for one of our clients. This client purchased a point-of-sale system from quite literally a "mom-and-pop" vendor, and said vendor recommended SQL Server Express 2005 as the back-end database to save the client from having to incur extra licensing fees. (Please don't get me started on that!)
We didn't write the app, and because it's a commercial app, we have no source code available. (Not that it would help us if we did; the thing was built in PowerBuilder, so we don't have tooling for it.) The app does none of its own logging, that we can ascertain. All we have to go on is SQL Server Express's own logging.
In the application, an end user swipes a membership card. Occasionally (a few times a day), the swipe will not return data from the database. The message on screen will say, "Member 123 not found." (The member numbers are actually six digits, "000123.") A rescan immediately afterward returns the member data correctly.
We've eliminated the scanner itself as a source of issues -- it routinely scans the full six-digit number. A scan of SQL Server Express's log indicates that it is coming back online from being idle, often at the point of the scan (but also at several other times per day). (Idle mode is explained here.)
I understand that allocating/deallocating RAM the way SQL Express does is a time-consuming process, especially if we're talking about hundreds of megabytes at a time -- which appears to be the case.
What we're not sure of is whether or not we're getting back partial data, or if the app is simply failing to connect to the database and displaying a generic error message. Since everything is so opaque, and the client is (for obvious reasons) unwilling to pay us to sit in their facility for 8 hours or so to physically see it happen (perhaps with network monitoring/packet sniffing tools), we're kind of at a loss.
At this point, our recommendation is that the client upgrade to SQL Server 2005 Workgroup Edition, with 5 CALs. But that doesn't completely sit well with me as the solution to this issue, because I'm reasonably certain that no SQL Server ever returns partial data -- if you can't connect, you can't connect. (That said, I still recommend it because it's a solution to a number of their other issues!)
I don't have much experience with Express. (I never use it for anything but local development, and there only at home; I certainly never recommend it to my clients.)
My question to those who might have experience with Express is, have you ever seen an instance of SQL Express return partial data, without the app itself being the cause of it? Specifically, have you seen this behavior when returning from idle mode?
(For what it's worth, we're inclined to believe that the app is failing to connect and merely displaying a generic error message, lopping off leading zeroes on the member ID when it does. That seems the most reasonable answer -- a third question might be, do you guys concur with that assessment?)
I've never heard of or experienced SQL Server Express returning partial data. It's essentially the same code base as the full SQL Server.
It is more likely that the application is experiencing a timeout (which defaults to 30 seconds) due to SQL Server Express going idle. The application probably receives a timeout that it does not expect and does not handle it well.
The problem and possible solutions are discussed in this forum thread: http://social.msdn.microsoft.com/forums/en-US/sqlexpress/thread/a8fbf8d6-9949-47a5-a32b-50f8131f1127/
I suspect you have a connection string that looks like this:
Data Source=.\SQLEXPRESS; Integrated Security=True;AttachDbFilename=|DataDirectory|\myDatabase.mdf;User Instance=True
From the referenced thread:
This connection string will cause an initial connection to the main instance (.\SQLEXPRESS) and then instruct the main instance to spawn a new instance of SQL Server under the user's context and attach the database specified to that new User Instance. The User Instance is a completely separate running instance of SQL Server from the main instance that is unique to the user and that will be shut down when there are no longer any connections to it.
This is totally different than attaching a database to the main instance, which stays running at all times, unless you've manually shut it down. If your question is about the main instance going into an Idle state, then your question is not unique to SQL Express and you should ask this question in the Database Engine forum. I believe all editions of SQL Server have an Idle state and the other forum would be where you can find out how to affect that behavior.
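If the idle shutdown of the User Instance turns out to be the trigger, one option consistent with the quoted explanation is to attach the database to the main instance once and then drop the User Instance / AttachDbFilename keywords from the connection string, along these lines (the database name is illustrative):
Data Source=.\SQLEXPRESS;Initial Catalog=myDatabase;Integrated Security=True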