This question is unlikely to help any future visitors; it is only relevant to a small geographic area, a specific moment in time, or an extraordinarily narrow situation that is not generally applicable to the worldwide audience of the internet. For help making this question more broadly applicable, visit the help center.
Closed 10 years ago.
In searching for an answer to this question, I found this popular post on StackOverflow. Unfortunately, it doesn't completely work. The question is this:
Is there a way to check for the existence of a table (or another object) before performing modifications (e.g. an INSERT)? The aforementioned post suggests this:
IF EXISTS (SELECT *
           FROM INFORMATION_SCHEMA.TABLES
           WHERE TABLE_SCHEMA = 'dbo'
             AND TABLE_NAME = 'questionableTable')
BEGIN
    INSERT INTO dbo.questionableTable VALUES ('success!');
END
Error: Invalid object name 'dbo.questionableTable'.
The problem with this is that SQL Server fails when it parses the INSERT statement, stating that dbo.questionableTable doesn't exist. The preceding INFORMATION_SCHEMA check doesn't prevent this, because the error is raised at parse/compile time, before the IF condition is ever evaluated.
Is there a way to write this kind of query? I'm asking about SQL Server in particular, but I would also like to see how similar operations work in other database systems, if they exist.
The motivation behind this question is that we have multiple databases which contain subsets of each other's tables. What I would like is a single script that can be applied to all of the databases and that only modifies the tables that exist in each one (without erroring during execution).
Use dynamic SQL via the EXEC() function:
IF EXISTS (SELECT *
           FROM INFORMATION_SCHEMA.TABLES
           WHERE TABLE_SCHEMA = 'dbo'
             AND TABLE_NAME = 'questionableTable')
BEGIN
    EXEC('INSERT INTO dbo.questionableTable VALUES (''success!'')');
END
The EXEC() function executes a string as SQL. Because the statement is just a string, it isn't compiled until it is executed, so the tables mentioned in the string don't need to exist when the surrounding batch or procedure is parsed. This allows the stored procedure to be defined before the table is created.
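If the statement needs values from variables, sp_executesql is usually preferable to EXEC() because the values travel as real parameters instead of being concatenated into the string (object names still cannot be parameterized). A minimal sketch of the same check, using the table from the question:

```sql
IF EXISTS (SELECT *
           FROM INFORMATION_SCHEMA.TABLES
           WHERE TABLE_SCHEMA = 'dbo'
             AND TABLE_NAME = 'questionableTable')
BEGIN
    DECLARE @msg nvarchar(50) = N'success!';
    -- @p1 is bound as a parameter, so the value is never part of the SQL text
    EXEC sp_executesql
        N'INSERT INTO dbo.questionableTable VALUES (@p1)',
        N'@p1 nvarchar(50)',
        @p1 = @msg;
END
```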
I tested this on my local server and it seems to work:
if exists (select * from dbname.sys.tables where name = 'tablename')
begin
    select * from dbname.dbo.tablename
end
I'm currently working on a .NET application and want to make it as modular as possible. I've already created a basic SELECT procedure, which returns data by checking input parameters on the SQL Server side.
I want to create a procedure that parses structured data passed as a string and inserts its contents into the corresponding table in the database.
For example, I have a table as
CREATE TABLE ExampleTable (
id_exampleTable int IDENTITY (1, 1) NOT NULL,
exampleColumn1 nvarchar(200) NOT NULL,
exampleColumn2 int NULL,
exampleColumn3 int NOT NULL,
CONSTRAINT pk_exampleTable PRIMARY KEY ( id_exampleTable )
)
And my procedure starts as
CREATE PROCEDURE InsertDataIntoCorrespondingTable
    @dataTable nvarchar(max), -- name of a table in my DB
    @data nvarchar(max)       -- normalized string parameter as 'column1, column2, column3, etc.'
AS
BEGIN
    IF @dataTable = 'table'
    BEGIN
        /**Parse this string and execute insert command**/
    END
    ELSE IF /**Other statements**/
END
TL;DR
So basically, I'm looking for a solution that can help me achieve something like this
EXEC InsertDataIntoCorrespondingTable
    @dataTable = 'ExampleTable',
    @data = '''exampleColumn1'', 2, 3'
Which should be equal to just
INSERT INTO ExampleTable SELECT 'exampleColumn1', 2, 3
Sure, I can push data as INSERT statements (for each and every one of the 14 tables inside the DB...), generated inside an app, but I want to conquer T-SQL :)
This might be reasonable (to some degree) on an RDBMS that supports structured data like JSON or XML natively, but doing this the way you are planning is going to cause some real pain-in-the-rear support and, more importantly, a SQL injection attack vector. I would leave this in the realm of the web backend server, where it belongs.
You are likely going to end up inventing your own structured data markup language and a parser for it in SQL Server to solve this. That's a wheel that doesn't need to be reinvented. If you do end up building this, strongly consider going with JSON to avoid all the issues that a homegrown structured format inherently brings with it, assuming your version of SQL Server supports JSON parsing/packaging.
Your front end that packages your data into your SDML is going to have to assume column ordinals, but column ordinal is not something one should rely on in a database. SQL amateurs often do; I know this from years in the industry, dealing with end users who get upset when a new column is introduced in a position they don't want. Adding a column to a table shouldn't break an application; if it does, that application has bad code.
Regarding the SQL injection attack vector, your SP code is going to get ugly. You'll need to parse each item in @data out into a variable of its own in order to properly parameterize the dynamic SQL being built. See here under the "working with parameters" section for what that will look like. Failure to do this in your SP code means that values passed in via the @data SDML could become executable SQL instead of literals, and that would be very bad. This is not easy to solve in SP language. Where it IS easy to solve is in backend server code: every database library on the planet supports parameterized query building/execution natively.
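To make that last point concrete, here is what parameter binding looks like in backend code. This is a minimal sketch using Python's standard sqlite3 module (a SQL Server driver such as pyodbc has the same shape); the table and values are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE example (col1 TEXT, col2 INTEGER, col3 INTEGER)")

# The values never become part of the SQL text: the driver binds them,
# so a value like "'x'); DROP TABLE example;--" stays a harmless literal.
row = ("exampleColumn1", 2, 3)
conn.execute("INSERT INTO example (col1, col2, col3) VALUES (?, ?, ?)", row)
conn.commit()

rows = conn.execute("SELECT * FROM example").fetchall()
print(rows)
# -> [('exampleColumn1', 2, 3)]
```

The same pattern (placeholders plus a bound argument list) is what the SP would have to reproduce by hand, which is exactly the ugliness described above.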
Once you have this built you will be dynamically generating an INSERT statement and dynamically generating variables or an array or some data structure to pass in parameters to the INSERT statement to avoid sql injection attacks. It's going to be dynamic, on top of dynamic, on top of dynamic which leads to:
From a support context, imagine that your application just totally throws up one day. You have to dive in to investigate. You track down the SDML that your front end created that caused the failure, and you open up your SP code to troubleshoot. Imagine what that code ends up looking like:
It has to determine if the table exists
It has to parse the SDML to get each literal
It has to read DB metadata to get the column list
It has to dynamically write the insert statement, listing the columns from metadata and dynamically creating sql parameters for the VALUES() list.
It has to execute sending a dynamic number of variables into the dynamically generated sql.
My support staff would hang me out to dry if they had to deal with that, and I'm the one paying them.
All of this is solved by using a proper backend to handle communication, deeper validation, sql parameter binding, error catching and handling, and all the other things that backend servers are meant to do.
I believe that your back end web server should be VERY aware of the underlying data model. It should be the connection between your view, your data, and your model. Leave the database to the things it's good at (reading and writing data). Leave your front end to the things that it's good at (presenting a UI for the end user).
I suppose you could do something like this (it may need a little extra work):
declare @columns varchar(max);

select @columns = string_agg(name, ', ') within group (order by column_id)
from sys.all_columns
where object_id = object_id(@dataTable)
  and is_identity = 0; -- skip the identity column: it can't be listed in an ordinary INSERT

declare @sql nvarchar(max) = concat('INSERT INTO ', @dataTable, ' (', @columns, ') VALUES (', @data, ')');
exec sp_executesql @sql;
But please don't. If this were a good idea, there would be tons of examples of how to do it. There aren't so it's probably not a good idea.
There are however tons of examples of using ORMs or auto-generated code instead - because that way your code is maintainable, debuggable and performant.
I have a small doubt about an update code block that was written by someone else; I'll now be using it in my Java program.
Is it possible to update a column, commit, and afterwards use that same column as an input in another update statement inside the same block, as in the code listed below? I know the sub-query way of doing this, but I have never seen this approach before. It would be great if someone could confirm:
1) Whether it is correct?
2) If not, what can be changed to make it work, beyond using the sub-query format?
3) Also, bas_capital_calc_cd is a column of the same derivatives table that is being updated. Can we pass a column as an input to functions, such as bas2_rwa_calc here? More generally, can we pass a column name at all as input to a PL/SQL function?
Thanks in advance for help!
--BAS_EB_RWA_COMMT is being used in BAS_EB_TOTAL_CAPITAL calculation. similarly, BAS_AB_RWA_COMMT is being used in BAS_AB_TOTAL_CAPITAL calculation.
IF ID = 17 THEN
    UPDATE derivatives
       SET BAS_CAPITAL_CALC_CD = 'T',
           BAS_CATEGORY_CD = case when nvl(rec.ssfa_resecure_flag,'N') = 'Y' then 911 else 910 end,
           BAS_EB_RWA_COMMT = bas2_rwa_calc(bas_capital_calc_cd, v_SSFA_COMMT_AMT, v_BAS_CAP_FACTOR_K_COMMT, v_basel_min, v_bas_rwa_rate) + NVL(BAS_CVA_PORTFOLIO_RWA,0),
           BAS_AB_RWA_COMMT = bas2_rwa_calc(bas_capital_calc_cd, v_SSFA_COMMT_AMT, V_BAS_CAP_FACTOR_K_COMMT, v_basel_min, v_bas_rwa_rate) + NVL(BAS_CVA_PORTFOLIO_RWA,0),
           BAS_ICAAP_EB_RWA_COMMT = bas2_rwa_calc(bas_capital_calc_cd, bas_unused_commt, bas_icaap_factor_k_commt, v_basel_min, v_bas_rwa_rate)
     WHERE AS_OF_DATE = v_currect_DATE;
    COMMIT;

    UPDATE derivatives
       SET BAS_EB_TOTAL_CAPITAL = round(BAS2_MGRL_CAPITAL(v_date, BAS_EB_RWA, BAS_EB_RWA_COMMT),2),
           BAS_AB_TOTAL_CAPITAL = round(BAS2_MGRL_CAPITAL(v_date, BAS_AB_RWA, BAS_AB_RWA_COMMT),2)
     WHERE AS_OF_DATE = v_DATE
       AND ID_NUMBER = rec.ID_NUMBER
       AND IDENTITY_CODE = rec.IDENTITY_CODE;
    COMMIT;
END IF;
In DB2 and the SQL standard you use a feature called a data-change table reference (SELECT ... FROM FINAL TABLE (UPDATE ...)) to do this. In Oracle you use the RETURNING clause.
cf - https://blog.jooq.org/tag/final-table/
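For illustration, a minimal Oracle sketch of the RETURNING clause (the table and column names are invented):

```sql
DECLARE
    v_new_balance NUMBER;
BEGIN
    UPDATE accounts
       SET balance = balance + 100
     WHERE id = 1
    RETURNING balance INTO v_new_balance; -- read the updated value without a second query
    DBMS_OUTPUT.PUT_LINE('new balance: ' || v_new_balance);
END;
```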
As I understood from your question, you need help understanding PL/SQL. Hoping I got it correct.
To understand the concept let us first discuss what is a PL/SQL?
Theory Source: https://www.geeksforgeeks.org/plsql-introduction/
PL/SQL is a block-structured language that enables developers to combine the power of SQL with procedural statements. All the statements of a block are passed to the Oracle engine at once, which increases processing speed and decreases traffic.
Disadvantages of SQL:
SQL doesn't provide programmers with a technique for condition checking, looping, and branching.
SQL statements are passed to the Oracle engine one at a time, which increases traffic and decreases speed.
SQL has no facility for error checking during manipulation of data.
Features of PL/SQL:
PL/SQL is basically a procedural language, which provides the functionality of decision making, iteration and many more features of procedural programming languages.
PL/SQL can execute a number of queries in one block using a single command.
One can create a PL/SQL unit such as procedures, functions, packages, triggers, and types, which are stored in the database for reuse by applications.
PL/SQL provides a feature to handle the exception which occurs in PL/SQL block known as exception handling block.
Applications written in PL/SQL are portable to any computer hardware or operating system where Oracle is operational.
PL/SQL Offers extensive error checking.
Now please check the highlighted point: PL/SQL can execute a number of queries in one block using a single command.
Let us take an example of the situation you described.
create table test as select 0 as col1, 0 as col2 from dual;
declare
    v_col1 test.col1%type;
    v_col2 test.col2%type;
begin
    update test set col1 = col1 + 1;
    commit;

    -- read back the current values (the original snippet never assigned the
    -- variables, and PL/SQL concatenates strings with ||, not +)
    select col1, col2 into v_col1, v_col2 from test;
    dbms_output.put_line('col1=' || v_col1);
    dbms_output.put_line('col2=' || v_col2);

    update test set col2 = col1 + 1;
    commit;

    select col1, col2 into v_col1, v_col2 from test;
    dbms_output.put_line('col1=' || v_col1);
    dbms_output.put_line('col2=' || v_col2);
end;
Please run the above code; it is just a simple example for your question.
Ans Point 1: (Considering Oracle as the sample database) So, according to me, yes, it is possible. However, given the way you are writing these two updates, I am not sure that this is the best way, or the only way, to handle such situations.
Ans Point 3: You can use Dynamic SQL to achieve the same in Oracle.
Reference Link : https://docs.oracle.com/cd/B10500_01/appdev.920/a96590/adg09dyn.htm
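As a rough sketch of answer point 3 (the function name and table are hypothetical), a column name passed in as a string can only be used by splicing it into dynamic SQL, e.g. with EXECUTE IMMEDIATE:

```sql
CREATE OR REPLACE FUNCTION sum_of_column (p_column_name IN VARCHAR2)
    RETURN NUMBER
IS
    v_total NUMBER;
BEGIN
    -- DBMS_ASSERT.SIMPLE_SQL_NAME guards the identifier against SQL injection
    EXECUTE IMMEDIATE
        'SELECT SUM(' || DBMS_ASSERT.SIMPLE_SQL_NAME(p_column_name) || ') FROM derivatives'
        INTO v_total;
    RETURN v_total;
END;
```

Note that this passes the column's name, not the column itself; a plain PL/SQL function parameter can only receive a column's value, never a reference to the column.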
I am creating a new SQL database that needs to connect to several existing DB's, all of which have the same schema.
I would like to create stored procedures that can be used to access any of the existing DB's whilst re-using code as much as possible. My question is, what is the best way to write generic SP's that can be used to access any one of these existing databases?
Here are some options I have considered so far. Please note, these are just simple code snippets to illustrate the question, not real-world code. First I tried switching on the DB name:
IF @db = 'A'
BEGIN
    SELECT * FROM A.dbo.SERIES_DATA
END
IF @db = 'B'
BEGIN
    SELECT * FROM B.dbo.SERIES_DATA
END
This has the disadvantage that the same SQL statement is repeated several times. So then I thought of using dynamic SQL:
DECLARE @Command varchar(100)
SET @Command = 'select * from ' + @db + '.dbo.SERIES_DATA'
EXEC (@Command)
This solves the problem of duplicate statements, but has the risks of dynamic SQL (e.g. injection attacks etc). So finally I hit on the idea of creating a Table-valued Function:
CREATE FUNCTION [dbo].[ufnSeries_Data] (@db varchar(10))
RETURNS TABLE
AS
RETURN (
    SELECT * FROM A.dbo.Series_Data WHERE @db = 'A'
    UNION
    SELECT * FROM B.dbo.Series_Data WHERE @db = 'B'
)
GO
This gives me the ability to write single, generic code to access any of the DB's:
SELECT * FROM ufnSeries_Data(@db)
The problem with this approach is that I have to give all users of the system read-access to all of the databases in order for this query to work, otherwise I get an access-denied error. I'm also not certain of the performance impact of creating a TVF like this.
Does anyone have any other ideas for how I can structure my code to minimize duplication of statements? Maybe there is a much simpler approach that I'm missing. I would be very grateful for any suggestions.
You can do this by removing the reference to your database and schema
SELECT * FROM SERIES_DATA
It is a question of creating the schema in all your databases, and granting the login access to the databases and schemas (in this case you are using the default dbo schema).
If you are using the management studio, when you create logins, you can handle the rest with user mapping.
If you have the same schema (e.g. dbo), you can change your select to:
SELECT * FROM dbo.SERIES_DATA
You really don't want to include the database name unless you are joining tables across databases.
We have been asked to get a list of all tables within our SQL Server 2008 database, which have SELECT permission to a specific role. It would need to be in a query, because then we need to write a script that revokes that GRANT SELECT permission.
Thanks.
This isn't an entire answer (and can't be - I don't have a SQL Server installation to fiddle with), but hopefully it will get you on the right track. The approach is to execute a stored procedure:
USE yourDB;
GO
EXEC sp_table_privileges
    @table_name = '%';
Within the results set, you'll be interested primarily in the third column, table_name, and the sixth column, privileges.
Source: MSDN: sp_table_privileges
In my mind, the next step - assuming this is the data you are looking for - would be to wrap this statement with one that inserts the results of the EXEC into a temporary table, which you can then slice and dice as needed. That endeavour is left as an exercise for the reader (nudge nudge). The third answer on this Stack Overflow question may be of some use in extending the query given above, but as always, YMMV.
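A sketch of that wrapping step, assuming the documented result-set shape of sp_table_privileges and a hypothetical role name 'myRole':

```sql
-- Columns follow the documented result set of sp_table_privileges
CREATE TABLE #perms (
    TABLE_QUALIFIER sysname NULL,
    TABLE_OWNER     sysname NULL,
    TABLE_NAME      sysname NULL,
    GRANTOR         sysname NULL,
    GRANTEE         sysname NULL,
    PRIVILEGE       varchar(32) NULL,
    IS_GRANTABLE    varchar(3) NULL
);

INSERT INTO #perms
EXEC sp_table_privileges @table_name = '%';

-- Generate REVOKE statements for the role in question ('myRole' is hypothetical)
SELECT 'REVOKE SELECT ON ' + QUOTENAME(TABLE_OWNER) + '.' + QUOTENAME(TABLE_NAME)
       + ' FROM myRole;'
FROM #perms
WHERE GRANTEE = 'myRole'
  AND PRIVILEGE = 'SELECT';
```

The final SELECT emits the revoke script rather than executing it, so it can be reviewed before being run.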
Closed 11 years ago.
This is the darndest thing I've ever seen.
I created a proc from a template that uses SYSNAME as the parameter types. All portions of the proc that took the name from the parameter are throwing errors. Here is a sample:
IF EXISTS(select 1 from sysobjects where name=N'dbo.ms_lst_partner_break_types' and xtype='p')
BEGIN
PRINT 'DROP PROCEDURE dbo.ms_lst_partner_break_types'
DROP PROCEDURE dbo.ms_lst_partner_break_types
END
Here is the error:
Msg 102, Level 15, State 1, Line 4
Incorrect syntax near '_partner_break_types'.
The weirdest thing is that when I double click on dbo.ms_lst_partner_break_types SSMS highlights either ms_lst or _partner_break_types depending on where I click. Copy the script into Textpad and back, same problem. Remove _partner_break_types and suddenly it works.
Does anyone have any idea what gives?
I don't know why it happened, but the invisible Unicode character 0x1F (unit separator) was inserted into the script for some reason. It might be a bug in SSMS, but I don't think the question of how it got there will be answered that easily.
In SQL Server 2008, it's sys.objects. Also, the column to check for the "name" is different, as is other general syntax:
IF EXISTS (SELECT *
           FROM sys.objects
           WHERE object_id = OBJECT_ID(N'[dbo].[my_proc]')
             AND type IN (N'P', N'PC'))
BEGIN
    DROP PROCEDURE [dbo].[my_proc]
END
The easiest thing to do is to right-click on the SP and select "script as drop to new query window" via the context menu hierarchy.