Can we see a stored procedure call within another SP using SQL Server Profiler? - sql

I have a stored procedure named CreateUpdateNewOrder and I call another SP in it named CreateClinicalDocument. Now I want to see exactly what values my second SP is getting for execution. I can run the SQL Profiler tool to see what input values CreateUpdateNewOrder is getting, but I can't think of any way of getting the input values for the inner SP call other than printing them in the query. Does anyone have a better way to do it?

You can run SQL Profiler and select the TSQL_SPs template instead of the default one. This will show you every statement executed, even if it's inside a stored procedure. To use the TSQL_SPs template you need to do the following:
File -> New trace
In the dialog that opens, go to the "Use this template" combo box and select TSQL_SPs.
Now continue setting up your profiling session as you would normally.
Once you start the trace you will notice it's much more verbose. It will break down each procedure and show what's executed line by line. Please let me know if you need any other details.

It all depends on how you need to access and use the information, but it could be useful to log the values to a table. You could also try the Debug feature in SSMS and set appropriate breakpoints.
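For the table-logging idea, a minimal sketch (the log table and the @PatientId / @DocumentType parameter names are made-up placeholders, not your actual schema; CONCAT needs SQL Server 2012+):

-- One-off: a simple debug log table (hypothetical names)
CREATE TABLE dbo.SpCallLog
(
    LogId       INT IDENTITY(1,1) PRIMARY KEY,
    ProcName    SYSNAME NOT NULL,
    ParamValues NVARCHAR(MAX) NULL,
    LoggedAt    DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME()
);

-- Inside CreateUpdateNewOrder, just before the inner call:
INSERT INTO dbo.SpCallLog (ProcName, ParamValues)
VALUES ('CreateClinicalDocument',
        CONCAT('@PatientId=', @PatientId, '; @DocumentType=', @DocumentType));

EXEC dbo.CreateClinicalDocument @PatientId = @PatientId, @DocumentType = @DocumentType;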

If you look at the standard template in Profiler you'll see that on the Events Selection tab, under the Stored Procedures heading, it only includes "RPC:Completed". The TSQL_SPs template includes "SP:Completed", "SP:Starting" and "SP:StmtStarting". I believe you just need to include those events in whatever template you choose.

Related

Pentaho Execute SQL Statements variable conversion to null

I am using PDI to delete and insert some data from a DB, and I have the following issue. I create two variables called START_DATE and END_DATE that are used to select the data that will be deleted from my DB. I am able to get them and run my transformation with no errors in the log file, but when I checked whether data was deleted, I found it hadn't been. I then checked my "DeleteProcedure" step, and it says "Conversion error: null". I have tried different approaches to take the variables and pass them as Strings, but I haven't been able to solve this issue. It cannot be a SQL mistake, as I tested it with a constant and it works.
Any ideas? I attach some pics. Thanks!
As the documentation of the Execute SQL script step says:
Note: When you have an issue, that the SQL is started at the initialization phase of the transformation and not for each row, make sure to check the option "Execute for each row" (see description below).
In your case it executes during the initialization phase of the transformation; that's why it gets null values instead of the ones from the previous step.

SQL Parameters - where does expansion happen?

I'm getting a little confused about using parameters with SQL queries, and seeing some things that I can't immediately explain, so I'm just after some background info at this point.
First, is there a standard format for parameter names in queries, or is this database/middleware dependent? I've seen both this:
DELETE * FROM @tablename
and...
DELETE * FROM :tablename
Second - where (typically) does the parameter replacement happen? Are parameters replaced/expanded before the query is sent to the database, or does the database receive params and query separately, and perform the expansion itself?
Just as background, I'm using the DevArt UniDAC toolkit from a C++Builder app to connect via ODBC to an Excel spreadsheet. I know this is almost pessimal in a few ways... (I'm trying to understand why a particular command works only when it doesn't use parameters)
With data access libraries like UniDAC or FireDAC, you can use macros. They allow you to put special markers (called macros) in the places of a SQL command where parameters are disallowed. I don't know the UniDAC API, but here is a sample for FireDAC:
ADQuery1.SQL.Text := 'DELETE * FROM &tablename';
ADQuery1.MacroByName('tablename').AsRaw := 'MyTab';
ADQuery1.ExecSQL;
Second - where (typically) does the parameter replacement happen?
It doesn't. That's the whole point. Data elements in your query stay data items. Code elements stay code elements. The two never intersect, and thus there is never an opportunity for malicious data to be treated as code.
connect via ODBC to an Excel spreadsheet... I'm trying to understand why a particular command works only when it doesn't use parameters
Excel isn't really a database engine, but if it were, you still couldn't use a parameter for the name of a table.
SQL parameters are sent to the database. The database performs the expansion itself. That allows the database to set up a query plan that will work for different values of the parameters.
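As an illustration, in SQL Server the statement text and the parameter values travel to the server separately; a rough sp_executesql sketch (the table and column names are invented):

-- The text contains only a placeholder; the value is shipped separately,
-- so one cached plan can be reused for many different @id values.
EXEC sp_executesql
    N'DELETE FROM dbo.Orders WHERE OrderId = @id',
    N'@id INT',
    @id = 42;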
Microsoft always uses @parname for parameters. Oracle uses :parname. Other databases are different.
No database I know of allows you to specify the table name as a parameter. You have to expand that client side, like:
command.CommandText = string.Format("DELETE FROM {0}", tableName);
P.S. A * is not allowed after a DELETE. After all, you can only delete whole rows, not a set of columns.
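If you ever have to do that expansion on the server instead of the client, a hedged T-SQL sketch of the same idea (the table name is still spliced into the text, not parameterized; QUOTENAME only protects the identifier from being malformed):

DECLARE @tableName SYSNAME = N'MyTab';   -- would come from the caller
DECLARE @sql NVARCHAR(MAX) = N'DELETE FROM ' + QUOTENAME(@tableName) + N';';
EXEC sp_executesql @sql;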

How to call a big Stored procedure in Qlikview having 3 parameter

I have a stored procedure which contains 3 input parameters with multiple SELECTs and INNER JOINs. I want to call the stored procedure in QlikView. I followed lots of tutorials, but I can't make it work.
I am Using OLE DB and I'm trying to call as follows:
SQL CALL [DB NAME].[dbo].[ABC] @_End-Time = '2012-12-31 00:21:06.550', @_Start-time = '2012-12-31 00:21:06.550',
@_Username = 'XYZ';
Is this correct? If not, what are the ways to call stored procedures into Qlikview and what permission do I need for this?
I'm not sure whether you checked this thread (http://goo.gl/IiGD2), but it might be useful. A couple of things I noticed from it: there is an additional string that needs to be added to the connection string, "(mode is write)", and you also need to activate "Open Databases in Read and Write mode" in QV.
Also make sure that you have SQL rights to execute.
Regards!
Stefan
A workaround could be to retrieve the three input variables from a table instead, and update this table from QlikView using a SQL INSERT.
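A rough sketch of that workaround on the SQL Server side (all names here are invented for illustration; [ABC] stands in for the real procedure from the question):

-- Hypothetical parameter table that QlikView fills with a plain INSERT before the load
CREATE TABLE dbo.QvReportParameters
(
    StartTime DATETIME      NOT NULL,
    EndTime   DATETIME      NOT NULL,
    Username  NVARCHAR(128) NOT NULL
);
GO

-- Wrapper procedure without arguments that reads the stored values
CREATE PROCEDURE dbo.ABC_ForQlikView
AS
BEGIN
    DECLARE @Start DATETIME, @End DATETIME, @User NVARCHAR(128);

    SELECT TOP (1) @Start = StartTime, @End = EndTime, @User = Username
    FROM dbo.QvReportParameters;

    -- argument order must match the real procedure's parameter list
    EXEC [dbo].[ABC] @Start, @End, @User;
END;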
It may be possible to run a stored procedure from QlikView, but it is not possible to pull any output you get from it. You should convert it to a function if you want to retrieve any data in QlikView.
Creating an MV is your best course of action, and it will give you better performance.

SSRS Text Query: Variable names must be unique within a query batch or stored procedure

I am developing an SSRS 2008 report, but instead of using stored procedures, I want to use all text queries. This report was working with stored procedures, but when I changed it to use the same logic via text queries, I got the following error:
An error occurred during local report processing
    Query execution failed for dataset 'BRSR_Totals'
        The variable name '@END_yEAR' has already been declared. Variable names must be unique within a query batch or stored procedure.
Operation cancelled by user.
The problem is that some of my datasets (text queries) re-use the same parameters and END_YEAR is one of these parameters. How do I make this report run correctly?
One area that you might want to check is case sensitivity. SSRS is case-sensitive when considering parameter names but T-SQL does not have that case sensitivity. Take another look at your code and make sure that all parameters are using the same case.
I just resolved a similar issue using a text query to populate a dataset. It worked in SQL Server Management Studio and it worked in the Query Designer within BIDS, but failed at runtime.
The issue turned out to be BIDS helpfully adding parameters to the Dataset that this query was referencing. Switching to the Parameters tab of the Dataset Properties showed that BIDS had duplicated the parameters I had already added earlier. Deleting the duplicates resolved my problem.
To respond to the suggestion that the logic be off-loaded into a stored procedure: in this case, the report is a custom report for a single customer. The query will only ever be used in this report and makes a few assumptions about the customer's configuration that should not be available globally.
I also just fixed this same issue in one of my queries. I was using a text query and had datetime variables/parameters. SSRS added a second set into the parameters for the dataset properties. I deleted them and my query ran fine after that and my graph populated.
I ran into a similar issue on a report where I had declared a substantial number of parameters at the beginning that I didn't want the end user to see. The issue was that I was using a comma at the beginning of each continuation line, so I had:
DECLARE @Parameter1 VARCHAR(4) = 'text'
, @Parameter2 VARCHAR(5) = 'text2'
It worked just fine in SSMS, but when I ran it in Report Builder 3.0 it threw the error shown in this thread. I changed it to remove the commas and restate DECLARE at the beginning of each line, and it worked perfectly.
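In other words, the version that Report Builder accepted looked roughly like this (the names and values are placeholders):

-- Restating DECLARE on every line keeps Report Builder from mangling the parameter list
DECLARE @Parameter1 VARCHAR(4) = 'text';
DECLARE @Parameter2 VARCHAR(5) = 'text2';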
Check that you didn't declare it twice: once in the CREATE PROC statement you're writing and again in the actual body. I've seen this problem while testing changes to SP code.
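For example, this shape raises exactly that error, because the parameter list and the body both declare the same name (the procedure name is illustrative):

CREATE PROCEDURE dbo.MyReport
    @END_YEAR INT            -- declared once here as a parameter
AS
BEGIN
    DECLARE @END_YEAR INT;   -- declared again here: "Variable names must be unique..."
    SELECT @END_YEAR AS EndYear;
END;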

Getting the SQL from a Doctrine Migration

I have been researching a way to get the SQL statements that are built by a generated Migration file. These extend Doctrine_Migration_Base. Essentially I would like to save the SQL as change scripts.
The execution path leads me to Doctrine_Export, which has methods that build the SQL statements and execute them, but I have found no way of asking for just the SQL. The export methods in Doctrine_Export only operate on Doctrine_Record models, not Migration scripts.
From the command line './doctrine migrate version#' the path goes:
Doctrine_Cli::run(cmd)
Doctrine_Task_Migrate::setArguments(args)
Doctrine_Task_Migrate::execute()
Doctrine_Migration::migrate(to)
Doctrine_Migration_Process::Doctrine_Export::various create, drop, and alter methods with SQL equivalents
Has anyone tackled this before? I really would not like to change Doctrine base files. Any help is greatly appreciated.
Could you make a dev server, and do the migration on that, storing a SQL Trace as you go? You don't have to keep the results, but you would get a list of every command.
Taking into account Rob Farley's suggestion, I modified:
Doctrine_Core::migrate
Doctrine_Task_Migrate::execute
When the execute method is called, the optional argument 'dryRun' is checked. If true, a 'Doctrine_Connection_Profiler' instance is created and the 'dryRun' value is passed on to the 'Doctrine_Core::migrate' method. A 'dryRun' value of true allows the changes to roll back once the SQL statements have executed. When the method returns, the profiler is parsed, and non-empty SQL statements not containing 'migration_version' are saved and displayed to the terminal.