Can't set a column name equal to a variable determined by the user - SQL

I am creating a result set where I want the column name to be equal to a variable name that is set at run time. Is that possible? How do I do that?
In the example below the user chooses the date (myDate) before running the query (e.g. 2015-06-11). Then I want the column name to be that date (2015-06-11). How do I do that? FYI: I'm using Teradata.
SELECT
table_A.Cnt as ?myDate
/* I can't write ?myDate like that. I also tried to convert it to a string */
FROM
(
SELECT COUNT(*) AS Cnt FROM A
WHERE theDate=?myDate
) AS table_A

What you are trying to do is parameterize an object (or the name of an object) rather than parameterize a value, which seems straightforward when you think up the idea, but it's a bit more difficult to pull off.
First off, in Teradata only a stored procedure lets you write and execute SQL dynamically, which is what you need here. Second, it's a little verbose. Third, it opens you up to SQL injection issues since you are slipping a parameter from a user into SQL and then executing it, so proceed cautiously and do what you can to prevent a-holes from mucking up your system.
CREATE PROCEDURE paramMyField
(
IN myDate Date,
--This has to be less than 30 otherwise Teradata will be angry.
--I would set it low just to keep injection possibilities to minimum
IN fieldName VARCHAR(10)
)
--Tell it how many result sets this thing is going to return:
DYNAMIC RESULT SETS 1
--Set the security (using the security of the bloke that sets this thing off, if you don't trust them, neither do I)
SQL SECURITY INVOKER
BEGIN
--We'll need a variable to hold the dynamically generated sql statement
DECLARE dynSQL VARCHAR(5000);
--And we'll need a cursor and a statement
DECLARE dynCursor CURSOR WITH RETURN ONLY FOR dynStatement;
SET dynSQL = '
SELECT
table_A.Cnt as ' || fieldName || '
FROM
(
SELECT COUNT(*) AS Cnt FROM A
WHERE theDate = DATE ''' || myDate || '''
) AS table_A;';
--Now to prep the statement
PREPARE dynStatement FROM dynSQL;
--And open the cursor (we will open and not close it so it's sent back as a result set)
OPEN dynCursor;
END;
There's a lot happening there, but basically it's a stored procedure that takes in two parameters (the date and the name of the field) and spits back a result set produced by a SQL statement with a dynamically named column. It does this by using a dynamic SQL statement.
This is executed by running something like:
CALL paramMyField(DATE '2015-06-15', 'Whatever');
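On the injection point: as one hedged illustration (not part of the original answer; it assumes a Teradata release with REGEXP_SIMILAR, i.e. 14.0 or later), you could whitelist fieldName inside the procedure body and only build the real statement when it looks like a plain identifier:
--Hypothetical guard replacing the SET dynSQL step in the body above.
--REGEXP_SIMILAR returns 1 only when the whole string matches the pattern.
IF REGEXP_SIMILAR(fieldName, '[A-Za-z][A-Za-z0-9_]*') = 0 THEN
    --Refuse to run anything when the name isn't a plain identifier
    SET dynSQL = 'SELECT CAST(''invalid column name'' AS VARCHAR(30)) AS error_msg;';
ELSE
    SET dynSQL = 'SELECT table_A.Cnt AS ' || fieldName ||
                 ' FROM (SELECT COUNT(*) AS Cnt FROM A WHERE theDate = DATE ''' || myDate || ''') AS table_A;';
END IF;
PREPARE dynStatement FROM dynSQL;
OPEN dynCursor;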

Related

Test for a column within a Select statement

Is it possible to test for a column before selecting it within a select statement?
This may be rough for me to explain; I have actually had to teach myself dynamic SQL over the past 4 months. I am using a dynamically generated parameter (@TableName) to store individual table names within a loop (apologies for the vagueness, but the details aren't relevant).
I then want to be able to conditionally select a column from the table (I will not know whether each table has certain columns). I have figured out how to check for a column outside of a select statement...
SET @SQLQuery2 = 'Select @OPFolderIDColumnCheck = Column_Name From INFORMATION_SCHEMA.COLUMNS Where Table_Name = @TABLENAME And Column_Name = ''OP__FolderID'''
SET @ParameterDefinition2 = N'@TABLENAME VARCHAR(100), @OPFolderIDColumnCheck VARCHAR(100) OUTPUT'
EXECUTE SP_EXECUTESQL @SQLQuery2, @ParameterDefinition2, @TABLENAME, @OPFolderIDColumnCheck OUTPUT
IF @OPFolderIDColumnCheck IS NULL
BEGIN
SET @OP__FOLDERID = NULL
END
ELSE
IF @OPFolderIDColumnCheck IS NOT NULL
BEGIN
...etc
but I'd like to be able to do it inside of a select statement. Is there a way to check and see if OP__FOLDERID exists in the table?
I'd like to be able to do something like this:
SELECT IF 'OP__FOLDERID' EXISTS IN [TABLE] THEN 'OP__FOLDERID' FROM [TABLE]
Thank you for any help or direction you can offer.
I'm afraid there isn't any direct way to do this within a SELECT statement at all. You can determine if a column exists in a table, however, and construct your dynamic SQL accordingly. To do this, use something like this:
IF COL_LENGTH('schemaName.tableName', 'columnName') IS NOT NULL
BEGIN
-- Column Exists
END
You could then set a variable as a flag, and the code that constructs the dynamic SQL would build the expression with or without the column, as desired. Another approach is to use a string variable and set it to the column name if the column is present (perhaps with a leading or trailing comma, as appropriate to the expression). This saves writing conditionals in the expression building, and is particularly helpful where you have more than one or two of these maybe-columns in a dynamic expression.
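A minimal sketch of that string-based approach (the table name and key column here are hypothetical, not taken from the question):
DECLARE @TableName sysname = N'dbo.SomeTable'; -- hypothetical table being processed in the loop
DECLARE @OptionalCol nvarchar(200) = N'';
-- Append the column (with a leading comma) only when it actually exists in the target table
IF COL_LENGTH(@TableName, 'OP__FolderID') IS NOT NULL
    SET @OptionalCol = N', OP__FolderID';
DECLARE @SQL nvarchar(max) =
    N'SELECT SomeKeyColumn' + @OptionalCol + N' FROM ' + @TableName + N';';
EXEC sp_executesql @SQL;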

Why is this SQL injection not working?

DECLARE @a varchar(max);
set @a ='''a'' OR Name like ''%a'';';
--Why does the query below not work?
Select TOP 10 * FROM Member where Name = @a
-- The query below was executed to make sure that the query above
-- is being constructed properly
print 'SQL: Select TOP 10 * FROM Member where Name ='+ @a
--SQL: Select TOP 10 * FROM Member where Name ='a' OR Name like '%a';
Correct me if I'm wrong: SQL injection won't work in stored procedures due to some precompilation factor, but the scenario above was tested as a plain query statement instead of a stored procedure. Why does it still not work?
I'm not sure why you think that would work. @a is a varchar variable, so Select TOP 10 * FROM Member where Name = @a finds rows where Name is equal to the value of that variable.
If you want SQL Server to take the value of @a and insert it into the query as code, then you need to build the whole statement as a string and execute that string, for example with sp_executesql (analogous to eval in languages like Bash and Python and JavaScript):
DECLARE @sql nvarchar(max) = N'Select TOP 10 * FROM Member where Name = ' + @a;
EXECUTE sp_executesql @sql;
SQL Injection occurs when data is confused for and interpreted as code.
This does not happen in your scenario, since parameter or variable values are not directly interpreted as code. They are only at risk of being interpreted as code if you construct new code by combining strings with those parameter/variable values and then pass the entire constructed string to the system to interpret as code, via exec, sp_executesql or other such means.
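A small illustration of the difference, reusing @a as declared in the question (a sketch; the Member table is the poster's, not shown here):
-- Parameterized: the value of @a is only ever compared as data, so the injection text is harmless
EXEC sp_executesql N'SELECT TOP 10 * FROM Member WHERE Name = @name',
                   N'@name varchar(max)', @name = @a;
-- Concatenated: the value of @a becomes part of the statement text and is interpreted as code
EXEC ('SELECT TOP 10 * FROM Member WHERE Name = ' + @a);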
Look, there is no name ending with 'a'. Try:
Select TOP 10 * FROM Member where Name ='a' OR Name like '%a%'
Updated: Microsoft handles SQL injection for SQL parameters; parameterized values are treated as data, not as executable code.

Optimization of query in Oracle

Update a table from a statement, using a subquery. I need to write an update statement that uses the same table. I used a subquery to update multiple columns.
My example:
UPDATE USER_BANCU.REGISTRU_21052016_AE NEW
SET (
NEW.LIST_COND,
NEW.LISTA_FOND,
NEW.GEN_ACT_NE_LIC,
NEW.GEN_ACT_LIC
)
=
(
SELECT
OLD.LIST_COND,
OLD.LISTA_FOND,
OLD.GEN_ACT_NE_LIC,
OLD.GEN_ACT_LIC
FROM (
SELECT
VB.IDNO IDNO ,
trim_vb(VB.LIST_COND) LIST_COND,
trim_vb(VB.LISTA_FOND) LISTA_FOND,
REPLACE(VB.GEN_ACT_NE_LIC, ' ','' ) GEN_ACT_NE_LIC,
REPLACE(VB.GEN_ACT_LIC, ' ','' ) GEN_ACT_LIC
FROM USER_BANCU.REGISTRU_21052016_AE VB
) OLD
WHERE
OLD.IDNO=NEW.IDNO
)
WHERE EXISTS (
SELECT *
FROM (
SELECT
VB.IDNO IDNO ,
trim_vb(VB.LIST_COND) LIST_COND,
trim_vb(VB.LISTA_FOND) LISTA_FOND,
REPLACE(VB.GEN_ACT_NE_LIC, ' ','' ) GEN_ACT_NE_LIC,
REPLACE(VB.GEN_ACT_LIC, ' ','' ) GEN_ACT_LIC
FROM USER_BANCU.REGISTRU_21052016_AE VB
) OLD
WHERE
OLD.IDNO=NEW.IDNO
)
Updating a table from a statement, using a subquery.
Can this be optimized? Is it possible to create a procedure or a cursor in this case?
I get an error when I run the query: ORA-01427: single-row subquery returns more than one row.
It looks like you've massively overcomplicated the update. Since you're updating every single row in the table with values from the same table, I think you're just trying to do:
update user_bancu.registru_21052016_ae
set list_cond = trim_vb(list_cond),
    lista_fond = trim_vb(lista_fond),
    gen_act_ne_lic = replace(gen_act_ne_lic, ' '),
    gen_act_lic = replace(gen_act_lic, ' ');
N.B. I removed the '' from the replace parameters because in Oracle, there isn't such a thing as an empty string - it's treated the same as null. And as the default value of the string-to-replace-with parameter is null, you can just remove the parameter altogether.
Also, replacing the above statement with a procedure involving looping round a cursor is likely to be slower. If you have to have a procedure, just use the update statement directly in the procedure.
If you need to speed things up even further than the above update statement does, I suggest you take a look at the trim_vb function calls: if you can move that logic directly into the update statement, that should speed things up even more (certainly in pre-12c, user-defined function calls in DML statements involve context switching between the SQL and PL/SQL engines, which slows things down).
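For illustration, if trim_vb is essentially just a whitespace trim (an assumption; its actual logic isn't shown in the question), the calls could be inlined with the built-in TRIM and the context switch disappears entirely:
update user_bancu.registru_21052016_ae
set list_cond = trim(list_cond),
    lista_fond = trim(lista_fond),
    gen_act_ne_lic = replace(gen_act_ne_lic, ' '),
    gen_act_lic = replace(gen_act_lic, ' ');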

"Order by" works in Console but doesn't work in Stored Procedures

I have a problem executing a stored procedure in Informix. I'm doing a simple query that doesn't work. This is the query:
SELECT
first 1 field1,
date1
FROM
historia_t
WHERE
field3 = 1
AND field4 = 1
AND date1 BETWEEN (CURRENT - 1 UNITS YEAR) AND CURRENT
ORDER BY
field1 desc
If I execute the query in DbVisualizer I don't have any problem, but if I execute the query in Informix (within a stored procedure) I get a syntax error on the line with "AND date1 BETWEEN (CURRENT - 1 UNITS YEAR) AND CURRENT". But the real problem is in ORDER BY field1 desc;
I don't know why, but sometimes stored procedures return errors when you use ORDER BY in them.
Note: the field names are invented because I don't think they are important to the problem.
Thanks in advance!
When you run a SELECT statement via DB-Access or equivalent, the program takes care of creating a cursor, opening it, fetching the data, closing the cursor, and freeing up the resources used.
Inside a stored procedure, you have to manage this processing. The FOREACH loop does that automatically. If you're using dynamic SQL, there are other statements you can use.
If a SELECT statement may return more than one row, you need the cursor management. If the SELECT statement returns just a single row, you can specify which variable should receive the result. I observe that ORDER BY is immaterial when the SELECT returns a single row — if you have an ORDER BY, there'll be a strong presumption that the query might return more than one row.
For example, this stored procedure works (and returns syssynonyms):
create procedure fk2() returning varchar(128) as tabname;
define t varchar(128);
select tabname into t from informix.systables where tabid = 9;
return t;
end procedure;
But where there's more than one row, you need:
create procedure fk3() returning varchar(128) as tabname;
define t varchar(128);
foreach select tabname into t
from informix.systables
where tabid between 4 and 10
order by tabname -- No semicolon permitted (don't ask!)
return t with resume;
end foreach;
end procedure;
This returns:
syscolauth
sysdepend
syssynonyms
syssyntable
systabauth
sysusers
sysviews

Print Dynamic Parameter Values

I've used dynamic SQL for many tasks and continuously run into the same problem: Printing values of variables used inside the Dynamic T-SQL statement.
EG:
Declare @SQL nvarchar(max), @Params nvarchar(max), @DebugMode bit, @Foobar int
select @DebugMode=1,@Foobar=364556423
set @SQL='Select @Foobar'
set @Params=N'@Foobar int'
if @DebugMode=1 print @SQL
exec sp_executeSQL @SQL,@Params
,@Foobar=@Foobar
The print results of the above code are simply "Select @Foobar". Is there any way to dynamically print the values & variable names of the sql being executed? Or when doing the print, replace parameters with their actual values so the SQL is re-runnable?
I have played with creating a function or two to accomplish something similar, but ran into data type conversions, pattern-matching truncation issues, and non-dynamic solutions. I'm curious how other developers solve this issue without printing each and every variable manually.
I don't believe the evaluated statement is available, meaning your example query 'Select @Foobar' is never persisted anywhere as 'Select 364556423'.
Even in a profiler trace you would see the statement hit the cache as '(@Foobar int)select @Foobar'.
This makes sense, since a big benefit of using sp_executesql is that it can cache the statement in a reliable form without the variables evaluated; if it replaced the variables and executed that statement, we would just see the plan cache bloat.
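As a quick way to see this for yourself (a sketch using the standard DMVs, not part of the original answer), you can look up the cached text after running the sp_executesql call above; it still contains the parameter marker rather than the value:
-- Requires VIEW SERVER STATE permission; shows the statement text exactly as cached
SELECT TOP 10 st.text, qs.execution_count
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
WHERE st.text LIKE '%Select @Foobar%'
  AND st.text NOT LIKE '%dm_exec_query_stats%'; -- exclude this lookup query itself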
Updated: here's a step in the right direction.
All of this could be cleaned up and wrapped in a nice function, with inputs (@Statement, @ParamDef, @ParamVal), and would return the "prepared" statement. I'll leave some of that as an exercise for you, but please post back when you improve it!
Uses the split function from here (link).
set nocount on;
declare @Statement varchar(100), -- the raw sql statement
        @ParamDef varchar(100),  -- the raw param definition
        @ParamVal xml            -- the ParamName -to- ParamValue mapping as xml
-- the internal params:
declare @YakId int,
        @Date datetime
select @YakId = 99,
       @Date = getdate();
select @Statement = 'Select * from dbo.Yak where YakId = @YakId and CreatedOn > @Date;',
       @ParamDef = '@YakId int, @Date datetime';
-- you need to construct this xml manually... maybe use a table var to clean this up
set @ParamVal = ( select *
                  from ( select '@YakId', cast(@YakId as varchar(max)) union all
                         select '@Date', cast(@Date as varchar(max))
                       ) d (Name, Val)
                  for xml path('Parameter'), root('root')
                )
-- do the work
declare @pStage table (pName varchar(100), pType varchar(25), pVal varchar(100));
;with
c_p (p)
as ( select replace(ltrim(rtrim(s)), ' ', '.')
     from dbo.Split(',', @ParamDef) d
   ),
c_s (pName, pType)
as ( select parsename(p, 2), parsename(p, 1)
     from c_p
   ),
c_v (pName, pVal)
as ( select p.n.value('Name[1]', 'varchar(100)'),
            p.n.value('Val[1]', 'varchar(100)')
     from @ParamVal.nodes('root/Parameter') p(n)
   )
insert into @pStage
select s.pName, s.pType, case when s.pType = 'datetime' then quotename(v.pVal, '''') else v.pVal end -- expand this case to deal with other types
from c_s s
join c_v v on
     s.pName = v.pName
-- replace pName with pValue in statement
select @Statement = replace(@Statement, pName, isnull(pVal, 'null'))
from @pStage
where charindex(pName, @Statement) > 0;
print @Statement;
On the topic of how most people do it, I will only speak to what I do:
Create a test script that will run the procedure using a wide range of valid and invalid input. If the parameter is an integer, I will send it '4' (instead of 4), but I'll only try 1 oddball string value like 'agd'.
Run the values against a data set of representative size and data value distribution for what I'm doing. Use your favorite data generation tool (there are several good ones on the market) to speed this up.
I'm generally debugging like this on a more ad hoc basis, so collecting the results from the SSMS results window is as far as I need to take it.
The best way I can think of is to capture the query as it comes across the wire using a SQL Trace. If you place something unique in your query string (as a comment), it is very easy to apply a filter for it in the trace so that you don't capture more than you need.
However, it isn't all peaches & cream.
This is only suitable for a Dev environment, maybe QA, depending on how rigid your shop is.
If the query takes a long time to run, you can mitigate that by adding "TOP 1", "WHERE 1=2", or a similar limiting clause to the query string if #DebugMode = 1. Otherwise, you could end up waiting a while for it to finish each time.
For long queries where you can't add something the query string only for debug mode, you could capture the command text in a StmtStarted event, then cancel the query as soon as you have the command.
If the query is an INSERT/UPDATE/DELETE, you will need to force a rollback if #DebugMode = 1 and you don't want the change to occur. In the event you're not currently using an explicit transaction, doing that would be extra overhead.
Should you go this route, there is some automation you can achieve to make life easier. You can create a template for the trace creation and start/stop actions. You can log the results to a file or table and process the command text from there programmatically.
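For example, a sketch of the "something unique in your query string" idea (the marker value is made up): prepend a tag comment when @DebugMode is on, then filter the trace's TextData on that tag.
if @DebugMode = 1
    set @SQL = '/* DEBUG-TRACE-42 */ ' + @SQL; -- hypothetical marker to filter on
exec sp_executesql @SQL, @Params, @Foobar=@Foobar
-- In the trace definition, filter: TextData LIKE '%DEBUG-TRACE-42%'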