SQL Server 2016: Replicating tables generated by sp_help - sql

I am in the process of writing a relatively large query which organizes a bunch of information on tables in my database. One thing I want to add is information on indexes and constraints on each table.
I found that sp_help 'tablename' generated two tables with basically exactly what I wanted, specifically the constraint and index tables, organized in an ideal way (all applicable keys grouped together as one bit of text, separated by commas).
Is there any simple way to either command sp_help to only pull these tables for easy access, or barring that any way to replicate the exact form of these tables with a SQL query?
It seems possible to brute force a replica of these tables without too much difficulty using a clunky mixture of sys and information_schema, but is there any minimal/elegant way to do it?

Try reading through the following blog post by Kimberly Tripp -
https://www.sqlskills.com/blogs/kimberly/sp_helpindex-v20170228/
You can download a procedure called [sp_SQLskills_helpindex], which you run like this:
sp_SQLskills_helpindex [TableName]
The result set can be stored in a temp table and used in whichever way you wish.
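As a sketch of that capture step (the table name is a placeholder, and the temp table's column list must match the procedure's actual result set, so inspect one run of it first):

```sql
-- Run the procedure for one table (name is a placeholder)
EXEC sp_SQLskills_helpindex 'dbo.MyTable';

-- To capture the output for reuse, create a temp table whose columns
-- match the procedure's result set exactly, then:
-- INSERT INTO #IndexInfo EXEC sp_SQLskills_helpindex 'dbo.MyTable';
```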

Related

How can I get the name of a modified table (any table in the db) and its modified values without using triggers in SQL Server?

I am creating a database snapshot for reporting (building a common denormalized table from normalized tables), for which I need to capture the normalized tables' modified values at the same time the tables change. I want a common solution for this. The same is achievable using triggers, but I am looking for some other generic solution that will work with any table change. Is there any other way in SQL Server to achieve this?

SQL Table Data Source

I've been tasked with creating a data dictionary for a DB that has 90 tables. Is there any way to identify by which procedure/task/job a table is populated? I need to source the data in each of the tables and I'm not quite sure how to do this.
Any tips would be greatly appreciated.
-T
You can search for which stored procedures use a given table with something like:
SELECT OBJECT_NAME(id) FROM SYSCOMMENTS WHERE text LIKE '%table_name%'
You'll then have to manually examine and analyse the code within those SPs to see what it actually does with that table. I expect you'll need to manually eyeball any SQL Agent tasks and SSIS packages you may have as well. This kind of work tends to be hard graft - there aren't many shortcuts to simply grinding over all the code by hand.
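On SQL Server 2005 and later, sys.sql_modules is the supported replacement for SYSCOMMENTS and avoids its 4000-character-per-row chunking, which can make SYSCOMMENTS miss matches that span a chunk boundary. A sketch (table_name is a placeholder):

```sql
-- Find modules (procs, views, functions, triggers) whose definition
-- mentions the table name
SELECT OBJECT_NAME(object_id) AS referencing_object
FROM sys.sql_modules
WHERE definition LIKE '%table_name%';
```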

SAS Enterprise Guide / SQL Performance

I'm looking for a little guidance on a SAS/SQL performance issue I'm having. In SAS Enterprise Guide, I've created a program that creates a table. This table has about 90k rows:
CREATE TABLE test AS (
SELECT id, SUM(myField) AS myField_sum
FROM table1
GROUP BY id
)
I have a much larger table with millions of rows. Each row has an id. I want to sum values on this table, using only id's present in the 'test' table. I tried this:
CREATE TABLE test2 AS(
SELECT big.id, SUM(big.myOtherField) AS myOtherField_sum
FROM big
INNER JOIN test
ON test.id = big.id
GROUP BY big.id
)
The problem I'm having is that it takes forever to run the second query against the big table with millions of records. I thought the inner join on the subset of id's would help (and maybe it is) but I wanted to make sure I was doing everything I could to speed it up.
I don't have any way to get information on the indexing of the underlying database. I'm more interested in getting the opinion of someone who has more SQL and SAS experience than me.
From what you show in your question, you are joining two SAS data sets, not two database objects. In any case, you can speed up the processing by defining indexes on the JOIN columns used in each table. Assuming you have permission to do so, here are examples:
proc sql;
create index id on big(id);
create index id on test(id);
quit;
Of course, you probably should first check the table definition before doing that. You can use the "describe" statement to see the structure:
proc sql;
describe table big;
quit;
Indexes improve access performance at the cost of disk space and update maintenance. Once created, the indexes will be a permanent part of the SAS data set and will be automatically updated if you use SQL INSERT or DELETE statements. But be aware that the indexes will be deleted if you recreate the data set with a simple data step.
On the other hand, if these tables really are in an external database (like Oracle, for example), you have a different challenge. If that's the case, I'd ask a new question and provide a complete example of the SAS code you are using (including any LIBNAME statements).
If you are working with non-SAS data, i.e., data that resides in a SQL database (or a NoSQL database, for that matter), you will see significant improvements in performance using pass-through SQL or, if supported and you have the licenses for it, in-database processing.
One important point about PROC SQL versus pass-through SQL: PROC SQL, by default, copies the original source data into SAS data sets before doing the work, whereas pass-through just requests the result set from the source data provider. In short, a table with 5 million rows will take much longer to process with PROC SQL (even if you are only interested in about 1% of the data) than if you just pull that 1% of the data across the network using the pass-through mechanism.
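As a sketch of the pass-through approach (the ODBC DSN and connection details are placeholders; use whatever engine and credentials your site provides):

```sas
proc sql;
  connect to odbc (dsn="mydb");   /* placeholder connection */
  create table test2 as
  select * from connection to odbc
    ( /* this inner query runs on the database server, not in SAS */
      select id, sum(myOtherField) as total
      from big
      group by id );
  disconnect from odbc;
quit;
```

Only the grouped result set crosses the network, not the millions of detail rows.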

Communicating with Informix from SQL Server

Right... I've got a program I'm doing some maintenance on.
Urgh. Even describing it makes me shudder... Right, okay.
Every night, a database running on what we think is SQL Server 2000 hooks up to an Informix database and copies it over into SQL Server.
The Informix/SQL data is accessed by the program I'm maintaining, which then stores some data in a different SQL Server 2000 database. This data should have foreign key constraints on the Informix data, but doesn't.
Further on down the line, data from the SQL database is put back into the Informix/SQL database, and later still, back into the actual Informix database.
Basically, the root of my problem is that there are no foreign or primary key constraints on the non-Informix SQL database. Well, some of the tables have a Primary key on a non-meaningful "ID" column, but those aren't FK'd to any other tables.
My question is: Is it possible to link SQL Server 2000 to the native Informix database in some way, so that I can add foreign key constraints within the SQL database so that SQL Server can only create rows when it can refer to existing rows within the Informix database?
I'll do my best to answer any questions anyone has, but as far as I can tell the reasoning behind these design decisions was genuine insanity, so reasons won't be particularly forthcoming, as I can't work them out, myself...
Yuck!
Bad Luck (on the mess you've inherited)!
Good Luck (with your work fixing the mess)!
Which version of Informix, and what platform (type of machine, o/s) is it running on?
Is there a reason (other than that it will break because the data is a mess) that you can't update the Informix schema to enforce the real RI constraints? You probably need to know how bad the mess is so that you can start the cleanup process. IDS (Informix Dynamic Server) does have 'violations tables' which can be used to track problematic rows of data; 'START VIOLATIONS' and 'STOP VIOLATIONS' are the statements to look for in the Informix Guide to SQL: Syntax manual. You might well need to unload and delete the data from a table before starting to reload it with violations checking enabled.
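The statements look roughly like this (the table and constraint names are illustrative; see the Syntax manual for the full options):

```sql
-- Informix: create violations/diagnostics tables for orders
START VIOLATIONS TABLE FOR orders;

-- Enable the constraint in filtering mode so offending rows are
-- diverted to the violations tables instead of failing the load
SET CONSTRAINTS fk_orders_customer FILTERING;

-- ... load or re-check the data, then inspect the violations tables ...

STOP VIOLATIONS TABLE FOR orders;
```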
After clarification, the question seems to be "Can I set up referential integrity constraints on tables in the SQL Server databases that are constrained by (refer to) tables in the Informix databases?"
The answer to that is (sadly):
No
Most DBMS are reluctant to have cross-database referential integrity constraints, let alone cross-DBMS constraints.
The closest approximation would be to have copies of the relevant Informix tables in the SQL Server databases, but that probably adds to the data transfer workload. OTOH, cleaning up the data probably requires that; it might be possible to relax the copying later, once the data is more nearly sane. It depends, in part, on the volatility of the referenced Informix data: how often rows are added to or deleted from the referenced tables.
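A sketch of that approximation on the SQL Server side (all names here are invented for illustration):

```sql
-- Local reference copy of an Informix table, refreshed by the nightly job
CREATE TABLE InformixCustomerCopy (
    CustomerID INT NOT NULL PRIMARY KEY
);

-- The local table can now carry a real FK against the copy, so rows can
-- only be created when a matching Informix row was present at last refresh
ALTER TABLE LocalOrders
    ADD CONSTRAINT FK_LocalOrders_Customer
    FOREIGN KEY (CustomerID) REFERENCES InformixCustomerCopy (CustomerID);
```

Note the guarantee is only as fresh as the copy: a row deleted in Informix after the nightly refresh would not be caught until the next one.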

Move Data from Oracle to SQL Server

I would like to copy parts of an Oracle DB to a SQL Server DB. I need to move the data because the Oracle box is being decommissioned. I only need the data for reference purposes, so I don't need indexes, stored procedures, constraints, etc. All I need is the data.
I have a link to the Oracle DB in SQL Server. I have tested the following query, which seemed to work just fine:
SELECT *
INTO NewTableName
FROM linkedserver.OracleTable
I was wondering if there are any potential issues with using this approach?
Using SSIS (SQL Server Integration Services) may be a good alternative, especially if your table names are the same on both servers. Use the Import/Export Wizard and it should create the destination tables for you and let you edit any mappings.
The only issue I see with that approach is that you will need to execute it for each and every table. Glad you are decommissioning the Oracle server :-). Otherwise, if you are not concerned with indexes or any of the existing sprocs, I don't see any issue with what you are doing.
The "SELECT ... INTO" approach could be very slow if the tables are large. Consider writing Pro*C in that case, or use FastReader: http://www.wisdomforce.com/products-FastReader.html
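If the four-part-name query is slow, pushing the work to Oracle with OPENQUERY is sometimes faster, since the remote server runs the inner query and only the result set crosses the link (the linked server, schema, and table names are placeholders):

```sql
-- The string literal is executed on the Oracle side as-is
SELECT *
INTO NewTableName
FROM OPENQUERY(linkedserver, 'SELECT * FROM ORACLE_SCHEMA.ORACLE_TABLE');
```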
A faster and easier approach might be to use Data Transformation Services (DTS), depending on the number of objects you're trying to copy over.