Delete rows from tables with similar names. CASCADE not available - sql

We have a database that is managed by an external system, so we don't have access to create foreign keys for cascade deletes, etc.
The thing is we want to delete rows from tables that have a similar name. For example we can have 3 tables named like this:
dbo.[test$Sales Line$1]
dbo.[test$Sales Line$2]
dbo.[test$Sales Line$3]
What we do currently is we get the tables with a query like:
select t.name as table_name
from sys.tables t
where t.name like 'test$Sales Line$%'
Then we have to loop through each table, and delete the rows we need.
Wondering if there is a faster solution; something like
DELETE FROM dbo.[test$Sales Line$%]
WHERE DocNo = 'A1001'

Honestly, although you say you can't, fixing the design would be the real solution here. If you can't do that, then reaching out to whoever is responsible for being able to do so should be something you do. It appears that, for your database, you should have a single table called Sale and then a column called Line in the table which is used to denote the information currently represented in each table's name. As you have likely already found out, these designs don't scale (at all) and are a real pain to work with if you need to do something which affects all the tables sharing a name. The discussion of this type of design (problem) is outside the scope of the question, but there is no syntax like this:
DELETE
FROM dbo.TablePrefix%
WHERE ColumnName = <Some Value>;
An object's name must be a literal, not an expression, not a wildcard.
Using a loop is one solution; however, loops in T-SQL are pretty slow. You may, therefore, be better off creating one large batch containing all of your DELETE statements, rather than many batches with a single DELETE statement each.
This might look something like this (I assume a recent version of SQL Server):
DECLARE @SQL nvarchar(MAX),
        @CRLF nchar(2) = NCHAR(13) + NCHAR(10),
        @DocNo varchar(5) = 'A1001'; --@DocNo's datatype (and value here) is guessed.

SELECT @SQL = STRING_AGG(N'DELETE FROM ' + QUOTENAME(s.[name]) + N'.' + QUOTENAME(t.[name]) + @CRLF +
                         N'WHERE DocNo = @DocNo;', @CRLF)
FROM sys.schemas s
     JOIN sys.tables t ON s.schema_id = t.schema_id
WHERE t.[name] LIKE N'test$Sales Line$%';

--PRINT @SQL; --Your best friend
EXEC sys.sp_executesql @SQL, N'@DocNo varchar(5)', @DocNo;
If you are on an old version of SQL Server, you'll need to use a different method of string aggregation, such as the FOR XML PATH (and STUFF) method.
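On those older versions, the same statement can be built with FOR XML PATH and STUFF. A rough, untested sketch of that variant, using the same catalogs and guessed parameter as above:
DECLARE @SQL nvarchar(MAX),
        @CRLF nchar(2) = NCHAR(13) + NCHAR(10),
        @DocNo varchar(5) = 'A1001'; --datatype and value guessed, as above

SET @SQL = STUFF((SELECT @CRLF + N'DELETE FROM ' + QUOTENAME(s.[name]) + N'.' + QUOTENAME(t.[name]) + @CRLF +
                         N'WHERE DocNo = @DocNo;'
                  FROM sys.schemas s
                       JOIN sys.tables t ON s.schema_id = t.schema_id
                  WHERE t.[name] LIKE N'test$Sales Line$%'
                  FOR XML PATH(N''), TYPE).value('.', 'nvarchar(MAX)'),
                 1, 2, N''); --strip the leading CRLF

EXEC sys.sp_executesql @SQL, N'@DocNo varchar(5)', @DocNo;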


How to get column names one by one using a loop in SQL [duplicate]

I'm trying to fetch the column names of a table using a loop, so that I can create an automated INSERT (from one database to another) just like in the attached image.
The main purpose is that I want to enter a table name and the script should generate the INSERT, just like in the attached image.
This is the table structure:
My response is too long to be a comment, so I needed to post it as an answer.
Using a loop for this is actually the worst idea ever. Approaching SQL like a normal programming language can lead to bad habits.
There are system tables that store all of this information for you. You should look at sys.columns and sys.tables. There are obvious joins between these two tables that will get you 90% of the way there, if not all the way.
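For example, just listing a table's columns is a simple join between the two catalog views (a minimal sketch; dbo.Account is the table name used in the code further down):
SELECT t.[name] AS table_name,
       c.[name] AS column_name
FROM sys.tables t
     INNER JOIN sys.columns c ON c.object_id = t.object_id
WHERE t.[name] = N'Account'
ORDER BY c.column_id;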
You're working way too hard to achieve what is a pretty common task: simple ETL of one object to another destination. I'm pretty sure most of the work you're doing to create these inserts can just be plain INSERT statements, without having to check whether the values are null or not (NULLs can be inserted from SELECT statements). You're almost certainly creating unnecessary overhead with this approach (granted, I'm just guessing, but from the looks of it you have 110k rows with conditional logic running per row, so I don't think I'm wrong here). My suggestion is to read an article or two about standard ETL practices and learn what an "upsert" is (see the sketch at the end of this answer).
This should get you about 90% there...
Declare @sql varchar(max) = ''
select @sql = @sql + '(Case when ' + c.name + ' IS NULL then ''NULL''
Else Concat('''''''''+ ',' + c.name +''''''', '') end ), '
from sys.columns c
inner join sys.tables t on c.object_id = t.object_id
where c.object_id = OBJECT_ID('dbo.Account')
Select ' insert into dbo.Account (' + @sql + ')'
print @sql
--EXEC (@sql)
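As an aside, the "upsert" mentioned above would typically be a MERGE (or an UPDATE followed by an INSERT of the missing rows). A minimal, hypothetical sketch, assuming a staging copy of dbo.Account keyed on an AccountId column:
MERGE dbo.Account AS tgt
USING staging.Account AS src        --hypothetical source table
      ON tgt.AccountId = src.AccountId
WHEN MATCHED THEN
    UPDATE SET tgt.[Name] = src.[Name]
WHEN NOT MATCHED BY TARGET THEN
    INSERT (AccountId, [Name]) VALUES (src.AccountId, src.[Name]);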

SQL JOIN based on table contents

I have a single table that contains questions with corresponding references to another table and field that contain the answers. Something like:
I would like to query the questions table and return QID, QuestionText and the value contained in the [ResponseTable].[ResponseField] for each QID. The design seemed flexible at the time. However, the app developer is expecting a stored procedure and the SQL developer was counting on an in-app solution for this issue.
I am at the end of my rope trying to build this query. How would you suggest accomplishing this task?
I don't think you'll like hearing this answer because it will likely mean some major rework, but I think it's the right answer. Get rid of the questions table and put the questions into new Question fields in the Client1, Client9, and Jobs tables; one for each response.
For example the Client1 table will have these fields:
ColorPref
ColorPrefQuestion
Rating
RatingQuestion
...and so on
Working around that design will be manageable, whereas working around the design you have now will be a headache.
It sounds like a redesign should be considered (storing all responses in one table, for example), but if that's not a possibility then dynamic SQL (using sp_executesql) can be used. However, it can be dangerous to use as it is vulnerable to SQL injection. There are some precautions that can be taken, such as using QUOTENAME on table and column names. This is also a good read before using dynamic SQL: The Curse and Blessings of Dynamic SQL.
DECLARE @tableName NVARCHAR(50)
DECLARE @columnName NVARCHAR(50)
DECLARE @query NVARCHAR(MAX)
SET @tableName = 'Client1'
SET @columnName = 'ColorPref'
SET @query = 'SELECT ' + QUOTENAME(@columnName) + ' FROM ' + QUOTENAME(@tableName)
EXEC sp_executesql @query
Until you get to the rewrite you mentioned, consider the idea of using a view to bring these response tables together.
CREATE VIEW ClientResponses AS
SELECT QID, ResponseField FROM [Client1]
UNION
SELECT QID, ResponseField FROM [Jobs]
UNION
SELECT QID, ResponseField FROM [Client9]
-- ..... add the new tables as they are created
This will:
Avoid dynamic SQL
Give you a single place to maintain the querying
Provide a pretty simple, readable way to cobble this together
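For example, returning each question together with its response could then be as simple as the following (a sketch, assuming the questions table is named Questions and carries QID and QuestionText, as described above):
SELECT q.QID,
       q.QuestionText,
       r.ResponseField
FROM Questions q
     INNER JOIN ClientResponses r ON r.QID = q.QID;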

How to check if a view exists that uses a table

Is it possible to check if a table is part of a view in the same or a different database using SQL Server Management Studio?
If it can be done through some plugins, that would be fine too.
Like this:
SELECT *
FROM INFORMATION_SCHEMA.VIEW_TABLE_USAGE
WHERE TABLE_SCHEMA = 'dbo' --(or whatever your Schema name is)
AND TABLE_NAME = 'YourTableName'
Should work on any ISO SQL compliant database, not just SQL Server.
Note that cross-database dependencies are another matter. In theory, they should show up here however, in practice this may be inconsistent because SQL Server does allow deferred resolution, even for Views, when it comes to cross-database references.
SELECT QUOTENAME(OBJECT_SCHEMA_NAME([object_id]))
+ '.' + QUOTENAME(OBJECT_NAME([object_id]))
FROM sys.sql_dependencies
WHERE referenced_major_id = OBJECT_ID(N'dbo.your_table_name');
Or:
SELECT referencing_schema_name, referencing_entity_name
FROM sys.dm_sql_referencing_entities(N'dbo.your_table_name', N'OBJECT');
However, note that some of these methods (including sp_depends, INFORMATION_SCHEMA, sysdepends, etc.) are all prone to falling out of sync. More information here:
Keeping sysdepends up to date in SQL Server 2008
A quick example:
CREATE TABLE dbo.table1(id INT);
GO
CREATE VIEW dbo.view1
AS
SELECT id FROM dbo.table1;
GO
SELECT QUOTENAME(OBJECT_SCHEMA_NAME([object_id]))
+ '.' + QUOTENAME(OBJECT_NAME([object_id]))
FROM sys.sql_dependencies
WHERE referenced_major_id = OBJECT_ID('dbo.table1');
-- returns 1 row
GO
DROP TABLE dbo.table1;
GO
CREATE TABLE dbo.table1(id INT);
GO
SELECT QUOTENAME(OBJECT_SCHEMA_NAME([object_id]))
+ '.' + QUOTENAME(OBJECT_NAME([object_id]))
FROM sys.sql_dependencies
WHERE referenced_major_id = OBJECT_ID('dbo.table1');
-- returns 0 rows!!!!
If you execute the following, it will return rows again:
EXEC sp_refreshsqlmodule N'dbo.view1';
But who wants to be refreshing every view in the system, every time you want to check the metadata?
So you may want to combine this method with brute force parsing of the text for all your views:
SELECT name FROM sys.views
WHERE OBJECT_DEFINITION([object_id])
LIKE N'%your_table_name%';
That is liable to get some false positives depending on the name of your table, but it's probably a good cross-check.
To avoid this kind of issue, I've tried to get into the habit of creating my views WITH SCHEMABINDING (or just avoiding views as much as possible). Sure, that can become a pain when you need to change the table in a way that doesn't affect the view, but table changes should be taken seriously anyway.
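Continuing the quick example above, a schema-bound version might look like this (a sketch; the view name is made up). The dependency can then no longer silently break, because the table cannot be dropped while the view exists:
CREATE VIEW dbo.view1_sb
WITH SCHEMABINDING
AS
SELECT id FROM dbo.table1;
GO
DROP TABLE dbo.table1; -- now fails: the table is referenced by the schema-bound view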
For the same database, you can check the dependencies of that table and see what other objects use it.
EXEC sp_depends @objname = N'your_table_name';

How to use a view name stored in a field for an sql query?

I have a table with a view_name field (varchar(256)) and I would like to use that field in an sql query.
Example :
TABLE university_members
id | type | view_name | count
1 | professors | view_professors | 0
2 | students | view_students2 | 0
3 | staff | view_staff4 | 0
And I would like to update all rows with some aggregate calculated on the corresponding view (for instance ..SET count = SELECT count(*) FROM view_professors).
This is probably a newbie question; I'm guessing it's either obviously impossible or trivial. Comments on the design, i.e. the way metadata is handled here (explicitly storing DB object names as strings), would be appreciated. Although I have no control over that design (so I'll have to find out the answer anyway), I'm guessing it's not so clean, even if some external constraints dictated it, so I would really appreciate the community's view on this for my personal benefit.
I use SQL Server 2005 but cross-platform answers are welcome.
To do this you would have to use a bit of dynamic SQL. Something like this might work; obviously you would need to edit it to match what you are actually trying to do.
DECLARE @ViewName VARCHAR(500)
SELECT @ViewName = view_name
FROM University_Members
WHERE Id = 1

DECLARE @SQL VARCHAR(MAX)
SET @SQL = '
UPDATE YOURTABLE
SET YOURVALUE = (SELECT COUNT(*) FROM ' + @ViewName + ')
WHERE yourCriteria = YourValue'
EXEC(@SQL)
The way I see it, you could generate the SQL code in a VARCHAR(MAX) variable and then execute it using the EXEC keyword. I don't know of any way to do it directly, as you tried.
Example:
DECLARE @SQL VARCHAR(MAX)
SET @SQL = ''
SELECT @SQL = @SQL + 'UPDATE university_members SET count = (SELECT COUNT(*) FROM ' + view_name + ') WHERE id = ' + CAST(id AS VARCHAR(10)) + CHAR(13) + CHAR(10) FROM university_members
EXEC (@SQL)
Warning! This code is not tested. It's just a hint...
Dynamic SQL is the only way to do this, which is why this is a bad design choice. Please read the following article if you must use dynamic SQL, in order to protect your data:
http://www.sommarskog.se/dynamic_sql.html
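For instance, validating the stored name against the catalog and quoting it before execution limits the injection risk. A rough sketch using the question's table (untested; the id = 1 filter is just for illustration):
DECLARE @ViewName sysname,
        @SQL nvarchar(MAX);

SELECT @ViewName = view_name
FROM university_members
WHERE id = 1;

-- Only build the statement if the stored name really is a view in this database
IF OBJECT_ID(@ViewName, 'V') IS NOT NULL
BEGIN
    SET @SQL = N'UPDATE university_members SET [count] = (SELECT COUNT(*) FROM '
             + QUOTENAME(@ViewName) + N') WHERE id = 1;';
    EXEC sys.sp_executesql @SQL;
END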
As HLGEM wrote, the fact that you're being forced to use dynamic SQL is a sign that there is a problem with the design itself. I'll also point out that storing an aggregate in a table like that is most likely another bad design choice.
If you need to determine a value at some point, then do that when you need it. Trying to keep a calculated value like that synchronized with your data is almost always fraught with problems - inaccuracy, extra overhead, etc.
There are very rarely situations where storing a value like that is necessary or gives an advantage and those are typically in very large data warehouses or systems with EXTREMELY high throughput. It's nothing that a school or university is likely to encounter.

SQL clone record with a unique index

Is there a clean way of cloning a record in SQL that has an index (auto increment)? I want to clone all the fields except the index. I currently have to enumerate every field and use that in an INSERT ... SELECT, and I would rather not explicitly list all of the fields, as they may change over time.
Not unless you want to get into dynamic SQL. Since you wrote "clean", I'll assume not.
Edit: Since he asked for a dynamic SQL example, I'll take a stab at it. I'm not connected to any databases at the moment, so this is off the top of my head and will almost certainly need revision. But hopefully it captures the spirit of things:
-- Get list of columns in the table (INFORMATION_SCHEMA is simpler here than capturing sp_columns output)
SELECT COLUMN_NAME
INTO #t
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = N'TargetTable'

-- Create a comma-delimited string excluding the identity column
DECLARE @cols varchar(MAX)
SELECT @cols = COALESCE(@cols + ',', '') + COLUMN_NAME FROM #t WHERE COLUMN_NAME <> 'id'

-- Construct dynamic SQL statement
DECLARE @sql varchar(MAX)
SET @sql = 'INSERT INTO TargetTable (' + @cols + ') ' +
           'SELECT ' + @cols + ' FROM TargetTable WHERE SomeCondition'
PRINT @sql -- for debugging
EXEC(@sql)
There's no easy and clean way that I can think of off the top of my head, but from a few items in your question I'd be concerned about your underlying architecture. Maybe you have an absolutely legitimate reason for wanting to do this, but usually you want to try to avoid duplicates in a database, not make them easier to cause. Also, explicitly naming columns is usually a good idea. If you're linking to outside code, it makes sure that you don't break that link when you add a new column. If you're not (and it sounds like you probably aren't in this scenario) I still prefer to have the columns listed out because it forces me to review the effects of the change/new column - even if it's just to look at the code and decide that adding the new column is not a problem.
-- Clear out any leftover temp table from a previous run
DROP TABLE #tmp_MyTable

-- Copy the source row into a temp table
SELECT * INTO #tmp_MyTable
FROM MyTable
WHERE MyIndentID = 165

-- Drop the identity column so it isn't re-inserted
ALTER TABLE #tmp_MyTable
DROP COLUMN MyIndentID

-- Insert the clone; the identity column gets a new value automatically
INSERT INTO MyTable
SELECT *
FROM #tmp_MyTable
This also deals with a unique key (projectnum) as well as the primary key. Note that this example uses MySQL syntax (CREATE TEMPORARY TABLE ... SELECT):
CREATE TEMPORARY TABLE projecttemp SELECT * FROM project WHERE projectid='6';
ALTER TABLE projecttemp DROP COLUMN projectid;
UPDATE projecttemp SET projectnum = CONCAT(projectnum, ' CLONED');
INSERT INTO project SELECT NULL,projecttemp.* FROM projecttemp;
You could create an insert trigger to do this; however, you would lose the ability to do an insert with an explicit ID. It would, instead, always use the value from the sequence.
You could create a trigger to do it for you. To make sure that trigger only works for cloning, you could create a separate username CLONE and log in with it. Or, even better, if your DBMS supports it, create a role named CLONE and any user can log in using that role and do the cloning. The trigger code would be something like:
if (CURRENT_ROLE = 'CLONE') then
new.ID = assign new id from generator/sequence
Of course, you would grant that role only to the users who are allowed to clone records.
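A rough sketch of such a trigger in Firebird-style PSQL, reusing the project table from the earlier example (the trigger, generator, and role names are made up; other DBMSs would need their equivalent syntax):
CREATE TRIGGER bi_project_clone FOR project
ACTIVE BEFORE INSERT POSITION 0
AS
BEGIN
  /* Only override the ID when the connection is using the CLONE role */
  IF (CURRENT_ROLE = 'CLONE') THEN
    NEW.projectid = GEN_ID(gen_project_id, 1);
END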