I want to set up some monitoring software that will generate an SNMP trap if a database log file goes beyond about 95% usage. It can only look at the first result in the first column of an SQL query, so what I'm looking for is an SQL query that returns just the percentage figure in the result - eg, 95
I've found several different ways of doing similar things, but they all return table headings etc., whereas I just want the figure. It'll be running this query every hour, so nothing too intensive. I'm running SQL version 8.
Thanks, Mike
You could write a query against the OS DMVs to get just the single value you're looking for.
Not sure if this will work for SQL Server 2000, but I know it works as far back as SQL Server 2005. It also requires that performance counters are enabled on the host server (i.e. OS, not just SQL Server).
This query should do the trick:
SELECT cntr_value as PercentUsed
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Percent Log Used'
AND instance_name = 'your_database_name'
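If you want to be slightly more defensive, you can also filter on the counter's object name, since 'Percent Log Used' lives under the Databases counter object (the prefix is 'SQLServer:' on a default instance and 'MSSQL$InstanceName:' on a named one). A sketch of that variant, with the same placeholder database name:
SELECT cntr_value AS PercentUsed
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%:Databases%'
  AND counter_name = 'Percent Log Used'
  AND instance_name = 'your_database_name';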
We have a linked server (OraOLEDB.Oracle) defined in the SQL Server environment. Oracle 12c, SQL Server 2016. There is also an Oracle client (64 bit) installed on SQL Server.
When retrieving data from Oracle (a simple query, getting all columns from a 3M row, fairly narrow table, with varchars, dates and integers), we are seeing the following performance numbers:
sqlplus (select from Oracle, spooled to an OS file on the SQL Server itself): less than 2k rows/sec
SSMS (insert into a SQL Server table, select from Oracle using OPENQUERY, i.e. pass-through to Oracle, so remote execution): less than 2k rows/sec
SQL Export/Import tool (in essence, SSIS; insert into a SQL Server table, using the Oracle OLE DB provider for the source and the SQL Server OLE DB provider for the target): over 30k rows/sec
I'm looking for ways to improve throughput using OPENQUERY/OPENROWSET to match the SSIS throughput. There is probably some buffer/flag somewhere that allows achieving the same?
Please advise...
Thank you!
--Alex
There is probably some buffer/flag somewhere that allows achieving the same?
You're probably looking for the FetchSize parameter:
FetchSize - specifies the number of rows the provider will fetch at a time (fetch array). It must be set on the basis of data size and the response time of the network. If the value is set too high, then this could result in more wait time during the execution of the query. If the value is set too low, then this could result in many more round trips to the database. Valid values are 1 to 4,294,967,296. The default is 100.
e.g.
EXEC sp_addlinkedserver @server = N'MyOracle', @srvproduct = N'Oracle', @provider = N'ORAOLEDB.Oracle', @datasrc = N'//172.16.8.119/xe', @provstr = N'FetchSize=2000';
See, e.g., https://blogs.msdn.microsoft.com/dbrowne/2013/10/02/creating-a-linked-server-for-oracle-in-64bit-sql-server/
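One thing to watch: if the linked server already exists, it generally has to be dropped and re-created for a new provider string to take effect. You can check what the definition actually carries via sys.servers (MyOracle being the name from the example above):
SELECT name, provider, data_source, provider_string
FROM sys.servers
WHERE name = N'MyOracle';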
I think there are many ways to enhance the performance of the INSERT query; I suggest reading the following article to get more information about data loading performance.
The Data Loading Performance Guide
One method you can try is minimizing logging when loading into a table with a clustered index; check the link below for more information, and see the sketch after it:
New update on minimal logging for SQL Server 2008
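As an illustration of how the two answers combine (the table, column, and remote object names here are made up; MyOracle is the linked server from the earlier answer): with the database in SIMPLE or BULK_LOGGED recovery, a TABLOCK hint on an empty heap target allows the insert to be minimally logged, and on SQL Server 2008 a clustered-index target additionally needs trace flag 610, as the linked post describes.
-- sketch only: assumes dbo.OracleCopy exists, is empty, and the recovery model is not FULL
INSERT INTO dbo.OracleCopy WITH (TABLOCK)   -- TABLOCK is one of the prerequisites for minimal logging
SELECT col1, col2, col3
FROM OPENQUERY(MyOracle, 'SELECT col1, col2, col3 FROM remote_schema.big_table');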
I'm looking for a query to get the currently running queries in Azure SQL. None of the T-SQL I've found shows the running queries when I test it (for instance, running a query in one window, then looking at the running queries in another window). Also, I'm not looking for anything related to time, CPU, etc., only the actual running query text.
When I run ...
SELECT * FROM Table --(takes 2 minutes to load)
... and run a standard information query (like from Pinal Dave or this), I don't see the above query (I assume there's another way).
select * from sys.dm_exec_requests should give you what other sessions are doing. You can join this with sys.dm_exec_sql_text to get the text if needed. sys.dm_tran_locks gives the locks held/waited on. If this is a V12 server you can also use DBCC INPUTBUFFER. Make sure that the connection you are running under is dbo / server admin.
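A minimal version of that join, with the monitoring session itself filtered out so your own query doesn't show up in the list, could look like this:
SELECT r.session_id,
       r.status,
       t.text AS running_sql
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id <> @@SPID;   -- exclude the session running this monitoring query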
I've written a SQL query that looks like this:
SELECT * FROM MY_TABLE WHERE ID=123456789;
When I run it in the Query Analyzer in SQL Server Management Studio, the query never returns; instead, after about ten minutes, I get the following error: System.OutOfMemoryException
My server is Microsoft SQL Server (not sure what version).
SELECT * FROM MY_TABLE; -- returns 44258086 rows
SELECT * FROM MY_TABLE WHERE ID=123456789; -- returns 5 rows
The table has over forty million rows, but I only need to fetch five specific rows!
How can I work around this frustrating error?
Edit: The server suddenly started working fine for no discernible reason, but I'll leave this question open for anyone who wants to suggest troubleshooting steps for others with this problem.
According to http://support.microsoft.com/kb/2874903:
This issue occurs because SSMS has insufficient memory to allocate for large results.
Note: SSMS is a 32-bit process. Therefore, it is limited to 2 GB of memory.
The article suggests trying one of the following:
Output the results as text
Output the results to a file
Use sqlcmd
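For the sqlcmd route, a hypothetical invocation (server name, database name, and output path are placeholders; -E uses Windows authentication) that writes the result set to a file instead of rendering it in SSMS:
sqlcmd -S YourServer -d YourDatabase -E -Q "SELECT * FROM MY_TABLE WHERE ID = 123456789;" -o C:\temp\my_table_rows.txt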
You may also want to check the server to see if it needs a service restart; perhaps it has gobbled up all the available memory?
Another suggestion would be to select a smaller subset of columns (if the table has many columns or includes large blob columns).
If you need specific data, use an appropriate WHERE clause. Add more details if you are stuck with this.
Alternatively, write a small application that works through the results with a cursor and does not try to load them completely into memory.
I want a production-optimized query that returns the row count and size of each table in each database on an instance.
Like:
DATABASE/CATALOG_NAME TABLE_NAME RECORD_COUNT SIZE(Bytes/KB/MB)
What version of SQL Server are you using?
SSMS has a built-in report that will do just that. You can export the report to Excel.
What could be better than that?
Check out this post from Buck Woody. If you get to see him talk, please go. He is a very good presenter.
http://blogs.msdn.com/b/buckwoody/archive/2007/12/14/sql-server-management-studio-standard-reports-disk-usage-by-table-top-tables.aspx
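If you would rather have a query than the report, a sketch along these lines (per database; run it in each database or wrap it in sp_MSforeachdb to cover the instance, sizes come back in KB, and it assumes SQL Server 2005 or later) reads roughly the same data the report shows:
SELECT DB_NAME() AS catalog_name,
       s.name + '.' + t.name AS table_name,
       SUM(CASE WHEN ps.index_id IN (0, 1) THEN ps.row_count ELSE 0 END) AS record_count,
       SUM(ps.reserved_page_count) * 8 AS reserved_kb   -- pages are 8 KB
FROM sys.dm_db_partition_stats AS ps
JOIN sys.tables AS t ON t.object_id = ps.object_id
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
GROUP BY s.name, t.name
ORDER BY reserved_kb DESC;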
I have a view that uses a linked server to retrieve data from a remote server in SQL Server. Each time the view is queried, the results returned vary. For example, the first execution may return 100 rows, but the second returns 120. Any ideas what the cause is?
I have witnessed odd linked-server results that are a product of non-determinism written into the SQL itself, e.g. a TOP query written without an ORDER BY clause.
This problem, for example, was one where the chap had multiple non-unique foreign keys coming from a table source on the left-hand side of a linked-server INNER JOIN and wanted 10 rows from a remote sub-query on the right; the end result was itself restricted to 10 rows when it should have been more than 10.
It's definitely worth giving your SQL a quick once-over for such curiosities.
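As a contrived illustration of that first point (the linked server and remote table names here are invented), a query shaped like the one below can legitimately return a different set of rows on every execution, because nothing specifies which ten rows the remote side should hand back; adding an ORDER BY on a unique key pins it down:
SELECT TOP (10) q.id, q.amount
FROM OPENQUERY(RemoteOracle, 'SELECT id, amount FROM remote_schema.orders') AS q;
-- deterministic version: append ORDER BY q.id (or another unique key)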
Has the data on the linked server changed between executions?
Is your SQL Server fully patched? SQL Server 2008 and 2005 both have bug fixes out related to incorrect query results from linked servers.
Here is one example:
969997 FIX: You receive an incorrect result when you query data from a linked server that is created by using an index OLE DB provider in SQL Server 2005 or in SQL Server 2008
Is the linked server also a SQL Server? If not, perhaps a buggy driver? I've seen odd results, for example, due to an old Informix ODBC driver. Are you able to run something akin to SQL Profiler on the linked server to see what command it's receiving?
I'm not sure what the answer is, but (assuming that your counts of 100 and 120 are accurate) can you not capture the data from the two runs and compare it? That might give you some clues as to what's going on. For example, is it completely different data, or are there duplicate rows (in the 120-row batch)?