View specific executed queries within SQL Server Management Studio?

Continuing from this post - I have some more questions I hope someone can help me with:
Is there a way to select a specific database and/or table to get the queries from or add this as a column?
In my queries some of the values are shown as placeholders such as @P1 or @GUID. Is there a way to get the data that was inserted there?
I am only using Express, so I also have no access to SQL Profiler.

sys.dm_exec_sql_text has a dbid column, so you can filter on that. For example I took the query from the other answer and added a where clause filtering on queries against master:
SELECT deqs.last_execution_time AS [Time], dest.TEXT AS [Query]
FROM sys.dm_exec_query_stats AS deqs
CROSS APPLY sys.dm_exec_sql_text(deqs.sql_handle) AS dest
WHERE dest.dbid = DB_ID('master')
ORDER BY deqs.last_execution_time DESC
Note that not all queries have the right database context (or a database context at all). For example, if you have a query that joins two tables in different databases, you will only see one dbid: it will be the executing context, which may or may not be one of the databases referenced in the query. So applying the filter might actually hide queries you are interested in.
You may be able to obtain parameters by digging into the XML from other DMOs such as sys.dm_exec_cached_plans and sys.dm_exec_query_plan. If you have an execution plan already for a query that you've captured, it is going to be much easier to use a tool like SQL Sentry Plan Explorer than to wade through gobs of XML yourself.
Disclaimer: I work for SQL Sentry, who provides the free tool to the community.
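If you do want to dig manually, here is a minimal sketch of pulling compiled parameter values out of the cached plan XML via those DMOs. Note that ParameterCompiledValue reflects the values sniffed at compile time, not necessarily the values of every execution, and the LIKE filter is a hypothetical placeholder you should narrow to the query you captured:
;WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT st.text AS [Query],
       p.ref.value('@Column', 'nvarchar(128)') AS [Parameter],
       p.ref.value('@ParameterCompiledValue', 'nvarchar(max)') AS [CompiledValue]
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
CROSS APPLY qp.query_plan.nodes('//ParameterList/ColumnReference') AS p(ref)
WHERE st.text LIKE '%YourQueryText%'; -- hypothetical filter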

Just an FYI: even though SQL Server Express doesn't include Profiler, if you have access to a copy of Profiler from another edition, you can still use it against your Express instance.

Related

What is the difference between getting data from writing a SQL query and Getting Data from sql server import [PowerBI]

As the question states:
Within Power BI, from 'Get Data' -> 'SQL Server', when connecting to the SQL Server
there are two options, import and advanced. With Advanced, you can write a SQL query to get the data; the default is import, which shows all the tables on the server and lets you just ETL from a click.
What is the real difference?
If you are comfortable writing your own T-SQL select statement, you can use it to bypass the Power Query editor and send your desired statement straight to the SQL database. That is also handy if you have code already written out from a previous query or project, which you can just paste into the Advanced query window.
If you use the Power Query Editor to build your query step by step, you get a better view of what data is returned by the previous step(s), and you can apply data manipulations after sighting the data.
Power Query uses query folding, which means that your individual steps are analysed and then translated into the most efficient SQL code before it is sent to the server.
That means that even if you don't speak T-SQL very well, you can still build efficient queries with the Query Editor, and if you feel you are an accomplished T-SQL developer, you can shortcut the Query Editor steps altogether. Of course that means that it is also possible to use "Advanced" and write clunky, inefficient T-SQL that performs slower than going through the Query Editor steps would.
In the end, it comes down to preference and familiarity. A seasoned DBA might just quickly write out a Select statement, a SQL rookie might prefer to click a few ribbon commands instead. The result can be identical in returned data and performance.
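For illustration, here is the kind of statement you might paste into the Advanced window (table and column names here are hypothetical). A Query Editor session that removed columns and filtered rows would, thanks to query folding, send the server a very similar SELECT rather than pulling the whole table and filtering locally:
SELECT CustomerID, OrderDate, Amount
FROM dbo.Sales
WHERE OrderDate >= '2020-01-01';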

Measuring the Performance of SQL Queries

Let me say ahead of time, that I have very little understanding of the algorithms that SQL queries go through, so please excuse my ignorance.
My question is: How do you go about evaluating the performance of a particular SQL query? And what metrics are considered?
For example,
SELECT * FROM MyTable;
and
SELECT * FROM MyTable UNION SELECT * From MyTable;
I'm guessing the second one is a lot slower even though both queries return the same results. But, how could someone evaluate the two and decide which one is better and why?
Are there tools to do this? Is there any type of SQL stack trace? Etc...
Thanks.
Assuming you're talking about SQL Server (you didn't specify...):
You need to look into SQL Server Profiler, and the best intro around is a six-part webcast series called Mastering SQL Server Profiler, in which Brad McGehee walks you through how to start using Profiler and what to get from it.
Red Gate Software also publishes a free e-book, Mastering SQL Server Profiler (also by Brad).
Also assuming you are talking about SQL Server: if you are using SQL Server Management Studio, you can try 'Display Estimated Execution Plan' and/or 'Include Actual Execution Plan' (from the Query menu).
The difference is that the first one doesn't execute the query, while the second does. So the second is more accurate, but the first is useful when you don't want to actually run a heavy query.
Both of them display the execution tree, and you can hover over each node to see statistics.
The metric to compare is 'Estimated Subtree Cost' (the higher, the worse).
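If you want hard numbers as well, a simple approach (a minimal sketch, using the asker's own example queries) is to turn on the statistics output in SSMS and compare what gets printed to the Messages tab:
SET STATISTICS IO ON;   -- reports logical reads per table
SET STATISTICS TIME ON; -- reports CPU time and elapsed time

SELECT * FROM MyTable;

SELECT * FROM MyTable
UNION
SELECT * FROM MyTable;  -- UNION adds a duplicate-removal step, which costs extra

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;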
Hope this was helpful.

DBMS Query Log, is it possible?

We want to record every db query into a log table. Is it possible?
For SQL Server, see this answer about SQL Server Profiler. For MySQL, the query log is a solution. However, they both write to files, but you can always parse the log files and insert them into tables if you want to query the data.
Beware, however, that logging does not come free. You will see some performance degradation in both cases. If you only want to log the queries of a single application, you could opt to log the queries there (optionally, asynchronously). You'll have to test to see what's the best option.
EDIT: Also, depending on your amount of traffic, logging all queries can eat a large amount of disk space in a short time. If you log in the application, you could use a logging library like NLog that has a rollover system (i.e. if the log files reach > 100 MB, it starts deleting the oldest files). In all three cases, you could also set aside a partition meant only for logging so it doesn't fill up your main disks.
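For the MySQL case, enabling the general query log at runtime is a one-liner (a minimal sketch; the file path is hypothetical and you need the appropriate privileges):
SET GLOBAL general_log_file = '/var/log/mysql/query.log'; -- hypothetical path
SET GLOBAL general_log = 'ON'; -- every statement is now appended to that file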
From a SQL Server perspective...
As others have suggested, SQL Server Profiler is certainly one way to go but you're going to incur a resource hit from doing so. Should you choose this method you absolutely must implement it as a Server Side Trace rather than via the GUI.
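For reference, here is a minimal server-side trace sketch (the file path is hypothetical; in practice you would script the full definition from Profiler via File > Export > Script Trace Definition):
DECLARE @TraceID int, @maxfilesize bigint = 50;
-- Create the trace; SQL Server appends .trc to the file name
EXEC sp_trace_create @TraceID OUTPUT, 0, N'C:\Traces\QueryLog', @maxfilesize;

-- Event 12 = SQL:BatchCompleted; column 1 = TextData, column 13 = Duration
DECLARE @on bit = 1;
EXEC sp_trace_setevent @TraceID, 12, 1, @on;
EXEC sp_trace_setevent @TraceID, 12, 13, @on;

EXEC sp_trace_setstatus @TraceID, 1; -- 1 = start the trace
SELECT @TraceID AS TraceID;          -- keep this ID to stop/close the trace later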
You may also have some success monitoring and recording the contents of the Dynamic Management Views (DMVs) for things such as query execution statistics.
You'll want to look at DMVs such as:
sys.dm_exec_query_stats
sys.dm_exec_sql_text
sys.dm_exec_query_plan
For example, here is a query that can be used to identify the top 20 poorest-performing SQL queries by CPU consumption. It's not exactly what you are after, but it does demonstrate how to use the DMVs that you would be interested in.
SELECT TOP 20
    qs.sql_handle,
    qs.execution_count,
    qs.total_worker_time AS Total_CPU,
    total_CPU_inSeconds = qs.total_worker_time / 1000000,           -- converted from microseconds
    average_CPU_inSeconds = (qs.total_worker_time / 1000000) / qs.execution_count,
    qs.total_elapsed_time,
    total_elapsed_time_inSeconds = qs.total_elapsed_time / 1000000, -- converted from microseconds
    st.text,
    qp.query_plan
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
ORDER BY qs.total_worker_time DESC;
I don't think you can write them to a DB table directly. MySQL can write them to a file, and you can write a script to parse the file and insert the queries.

retrieve most recently executed SQL command (T-SQL)

One of my developers working on a trigger-based logging facility in SQL Server 2008 asked me if there was a command to retrieve the most recently executed SQL command within T-SQL. I thought there was a system stored procedure for just such a function, but it's possible I'm thinking of another product from a prior decade... online searches yielded us no results.
Does anyone have information on anything of the sort?
Sure, try this:
SELECT
    DMExQryStats.last_execution_time AS [Executed At],
    DMExSQLTxt.text AS [Query]
FROM sys.dm_exec_query_stats AS DMExQryStats
CROSS APPLY sys.dm_exec_sql_text(DMExQryStats.sql_handle) AS DMExSQLTxt
ORDER BY DMExQryStats.last_execution_time DESC;
It will return recently executed queries along with the date and time at which they were executed.
Well, the procedure that retrieves the most current SQL batch can safely return itself :)
On a serious note, you can look at sys.dm_exec_query_stats and sys.dm_exec_procedure_stats to see when a plan was last executed, based on the last_execution_time column. But note that the method is not reliable, because it does not account for plans that were evicted from the cache after execution.
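For the procedure side specifically, a minimal sketch along the same lines (SQL Server 2008 and later):
SELECT OBJECT_NAME(ps.object_id, ps.database_id) AS [Procedure],
       ps.last_execution_time,
       ps.execution_count
FROM sys.dm_exec_procedure_stats AS ps
ORDER BY ps.last_execution_time DESC;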
What does "most recent" mean in the context of a multi-core machine?
Also, does he mean the most recently started, or the most recently finished?
Finally, he should just open SSMS and look at Activity Monitor.

will the sql queries i run in ms-access also work on mysql without any changes?

will the sql queries i run in ms-access also work on mysql without any changes ?
It's possible, but it depends on what the queries use. Date and string functions are the most likely to cause problems when porting queries.
The DATEDIFF keyword is supported in both Access and MySQL, but the function takes different parameters:
Access: DateDiff("interval", date1, date2), where the first argument picks the unit (e.g. "d" for days)
MySQL: DATEDIFF(date1, date2), which takes no interval argument and always returns the difference in days
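As a quick illustration of the difference (table and column names are hypothetical):
-- Access: a VBA-style interval string comes first
SELECT DateDiff("d", OrderDate, ShipDate) AS DaysToShip FROM Orders;

-- MySQL: no interval argument; returns ShipDate - OrderDate in whole days
SELECT DATEDIFF(ShipDate, OrderDate) AS DaysToShip FROM Orders;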
Well, if the coder wrote the queries with portability in the forefront of their mind then there's a good chance that you will need to make only minimal changes. However, you could only expect the most simple queries to work with no changes, regardless of which SQL product were involved.
In an ideal world, all SQL products would comply with ISO/ANSI Standard SQL with vendor extensions. In reality, while mySQL generally has a good track record in Standard SQL compliance, the Access Database Engine's record is rather poor -- it still doesn't even conform to entry level SQL-92, which was a fairly fundamental requirement even a decade ago (and seemingly none too difficult to achieve either).
[Your question is in all lower case. I've assumed by 'queries' you mean SQL DML SELECT. If you use 'queries' to mean INSERT/UPDATE/DELETE SQL DML plus SQL DDL and SQL DCL, then this changes the answer. You should note that the Access Database Engine's UPDATE SQL DML is proprietary and non-deterministic; further, it does not support SQL-92's scalar subquery syntax. This is of major significance when porting to a SQL product.]
Thanks for your question. It just goes to show that it's worth considering portability from day one.
I would like to add one more point to OMG Ponies' answer:
TRANSFORM, which is used for crosstab queries in MS Access, cannot be used in MySQL.
e.g.
TRANSFORM Sum([M_Sales].[Amount]) AS SumOfAmount
SELECT [M_Sales].[Department]
FROM M_Sales
GROUP BY [M_Sales].[Department]
PIVOT Format([M_Sales].[Sale_date],"mmm") In ("Jan","Feb","Mar","Apr","May","Jun","Jul","Aug","Sep","Oct","Nov","Dec");
in MS Access.
In MySQL, something similar can be done with the techniques in Common MySQL Queries; just visit the Pivot table section.
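As a rough sketch of that technique, conditional aggregation can emulate the crosstab above (assuming the same M_Sales table):
SELECT Department,
       SUM(CASE WHEN DATE_FORMAT(Sale_date, '%b') = 'Jan' THEN Amount ELSE 0 END) AS Jan,
       SUM(CASE WHEN DATE_FORMAT(Sale_date, '%b') = 'Feb' THEN Amount ELSE 0 END) AS Feb,
       -- ...repeat for the remaining months...
       SUM(CASE WHEN DATE_FORMAT(Sale_date, '%b') = 'Dec' THEN Amount ELSE 0 END) AS `Dec`
FROM M_Sales
GROUP BY Department;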
Given some of your previous questions, you could save some time with MySQL, compared to Access: see '12.1.10. CREATE TABLE Syntax' in the MySQL manual.