What is the best way to create and manage global Procs and UDFs in SQL Server?

Let's imagine that I have 2 separate Databases on the same instance of SQL Server.
DB1 is a Database relating to Trading and Position Keeping.
DB2 is a Database relating to market pricing variables.
Both databases have the concept of working with time/date objects, and I have created some convenience UDFs. Let's further imagine I build some convenient math functions that I would like to be callable from all databases. What is the best way of creating and organising them?
i.e. Should I create the UDFs and SPs in the Master database? How should I group all my CustomDateTime UDFs? Is it best to create a schema so that .dbo is replaced with .myDateTime or .myMath?
Suggestions appreciated, as I do not like how I have currently placed most of my functions in DB1 when I know that many will also be relevant to DB2.

Create a Database Project using Visual Studio (I believe 2008 and 2010 support it). Keep object scripts organized within the project. Check the project and related files into some form of version control software (SVN, git, hg, SourceSafe...).
You should have a DBA or a dedicated person who is responsible for deploying code changes to your production environments. You can configure the database project with pre- and post-deployment scripts that can make this easier to work with.
I can't recommend keeping user-defined objects in master; IMHO you're better off with copies in each database. If you're managing your code as above it won't matter that they're duplicated, since the source is in a single, managed location.
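To illustrate the schema idea from the question, here is a minimal sketch of the sort of script you would keep in such a project and deploy to every database (the myMath schema and RoundToTick function are invented examples, not anything prescribed):

    -- Hypothetical example: group math helpers under their own schema
    -- instead of dbo, then deploy the same script to DB1 and DB2.
    IF SCHEMA_ID('myMath') IS NULL
        EXEC('CREATE SCHEMA myMath');
    GO

    IF OBJECT_ID('myMath.RoundToTick', 'FN') IS NOT NULL
        DROP FUNCTION myMath.RoundToTick;
    GO

    CREATE FUNCTION myMath.RoundToTick (@price DECIMAL(18,6), @tick DECIMAL(18,6))
    RETURNS DECIMAL(18,6)
    AS
    BEGIN
        -- Round a price to the nearest tick size.
        RETURN ROUND(@price / @tick, 0) * @tick;
    END
    GO

Calling it is then SELECT myMath.RoundToTick(101.37, 0.25) from either database, and the schema name makes the grouping obvious without touching dbo.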

Related

Why would you create an SQL table via a query?

In most tutorials on database design, you are shown how to create and manipulate tables via queries. Sorry for a newbie question, but when using SQL Server Management Studio, why would you create a table using a query and not just use the built-in tools to create tables and add attributes to them? (e.g. right-click > Create Table, go to design view and add columns and specify domains, indexes, keys, etc.)
In any development, multiple environments are used: a development environment at the coding stage, then QA, then Model Office/UAT/production.
Using scripts ensures that changes can be promoted automatically. It also ensures that manual errors are either eliminated or kept to a minimum.
Hand-coding the changes in each environment would be expensive and error prone. Scripts make it possible to have the same table structure everywhere.
I create tables using queries (and I store them in .sql files) because that way I can re-run them at a later time to recreate the full database structure.
This is more useful in a development/testing environment than in production, where I guess you wouldn't drop and re-create the entire database that often.
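For what it's worth, a typical re-runnable .sql file of that kind looks something like the following sketch (the table and columns are invented for illustration):

    -- Re-runnable: drop the table if it exists, then recreate it.
    -- Fine for dev/test rebuilds; you would not do this to a production table with data.
    IF OBJECT_ID('dbo.Trade', 'U') IS NOT NULL
        DROP TABLE dbo.Trade;
    GO

    CREATE TABLE dbo.Trade
    (
        TradeId     INT IDENTITY(1,1) NOT NULL,
        Instrument  VARCHAR(20)       NOT NULL,
        Quantity    DECIMAL(18,4)     NOT NULL,
        TradeDate   DATETIME          NOT NULL,
        CONSTRAINT PK_Trade PRIMARY KEY (TradeId)
    );
    GO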
To add a reason not already mentioned: it allows the scripts to be audited or reviewed and potentially stored in a version control or issue tracking system. This will be necessary in complex or secure scenarios, especially in a fast-changing environment.
It looks more professional to write queries in tutorials :). In real life, it's simpler to alter a table through the UI, but then again, you forget the SQL syntax that way. If you're not a database admin, it's not that important to know the SQL syntax from A to Z, in my opinion.

Database Meta tools

I'm now working with a legacy database which is missing, among almost everything you'd expect from a decent SQL relational DB, any documentation or metadata. I can't make changes to the DB schema, except on my local test copy, as it exists at many client sites and there is no upgrade procedure. Are there any tools that I can use to build and keep my own metadata about the database? I'm looking to keep track of relationships, basic documentation about tables and columns, and references in stored procedures. There are 200+ tables and 3300+ SPs. A base autogeneration would be very helpful, particularly with the SPs. Preferably FOSS and Linux, but I will settle for Windows just to have something.
Not sure what you mean with "metadata", but I'm pretty happy with Liquibase.
It manages the schema in one (or more) XML files and can reverse engineer an existing database (all major ones supported).
It's Java/JDBC based and runs fine on Linux.
The main purpose of Liquibase is to handle upgrades (schema migration) smoothly, so I'm not sure if this is exactly what you are looking for.
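If the aim is mostly to bootstrap your own documentation rather than manage migrations, and assuming the legacy database is SQL Server, the catalog views can autogenerate a starting point. A rough sketch (the output format and the 'SomeTable' search term are placeholders):

    -- List every table and column as a starting point for documentation.
    SELECT  t.name  AS TableName,
            c.name  AS ColumnName,
            ty.name AS DataType,
            c.is_nullable
    FROM    sys.tables t
    JOIN    sys.columns c ON c.object_id = t.object_id
    JOIN    sys.types ty  ON ty.user_type_id = c.user_type_id
    ORDER BY t.name, c.column_id;

    -- Find which stored procedures mention a given table.
    -- Crude text search; sys.sql_expression_dependencies is more precise on newer versions.
    SELECT  OBJECT_NAME(m.object_id) AS ProcName
    FROM    sys.sql_modules m
    JOIN    sys.procedures p ON p.object_id = m.object_id
    WHERE   m.definition LIKE '%SomeTable%';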

Normalization of an existing SQL database

I have a single-table database I inherited and migrated to SQL Server, and then I normalized it by creating, linking, and filling a whole bunch of lookup-type tables that represented items in the main table. I now want to replace those items in the original table with their foreign keys. Am I stuck writing a bunch of queries or UDF's and then a giant INSERT statement to accomplish this, or is there a tool I can use to point at the various fields and have it handle the grunt work for me?
Redgate SQL Refactor comes with a 14-day evaluation period and has a "Split Table" refactoring which sounds like it might do what you need.
The feature is described thus:
Split Table splits a table into two tables, and automatically rewrites the referencing stored procedures, views, and so on. You can also use this refactoring to introduce referential integrity tables. You can select this feature from the context menu in Management Studio’s Object Explorer.
I have had similar experiences. I once inherited a fairly large database that required serious overhaul to the schema before I would look at it without scorn.
Because the upgrade was fairly significant, I designed an SSIS package to migrate data from the old schema to the new. Lookup activities were helpful to map old text values to the new keys. I kept a script of my old schema and data handy and would repeatedly restore the database in a sandbox and re-migrate until I could satisfy the powers-that-be that the migration was reliable.
I found there was only a moderate learning curve to getting started with SSIS. If the tool is available to you, I recommend giving it a try.
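Whether or not a tool is used, the grunt work the question describes usually comes down to one UPDATE ... FROM join per lookup table rather than a giant INSERT. A sketch with made-up table and column names:

    -- Add the new FK column, populate it from the lookup table, then drop the old text column.
    ALTER TABLE dbo.MainTable ADD CountryId INT NULL;
    GO

    UPDATE  m
    SET     m.CountryId = c.CountryId
    FROM    dbo.MainTable m
    JOIN    dbo.Country   c ON c.CountryName = m.CountryName;
    GO

    ALTER TABLE dbo.MainTable
        ADD CONSTRAINT FK_MainTable_Country FOREIGN KEY (CountryId) REFERENCES dbo.Country (CountryId);

    ALTER TABLE dbo.MainTable DROP COLUMN CountryName;
    GO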

Defining table structure for a database?

Up until now, my experience with databases has always been working with an intermediate definition layer that we have where I work. i.e. SQL wasn't directly written for the table definitions, but generated from an intermediate file which wrote out SQL scripts for creating the appropriate tables, upgrade scripts between schema changes, and helper functions for doing simple queries/updates/inserts/deletes from the database.
Now I'm in a situation where I don't have access to that, for reasons I won't get into, and I find myself somewhat lost at sea regarding what to do. I need to have a small number of tables in a database, and I'm unsure what's usually done to manage the table definitions.
Do people normally just use the SQL script that does the table creation as their definition, or does everyone just use an IDE that manages the definition in a separate file and regenerates the SQL script to create the tables?
I'd really prefer not to have to introduce a dependence on a specific IDE, because as we all know, developers are whiners that are prone to religious debates over small things.
Open your favorite text editor -> Start writing CREATE scripts -> Save -> Put in Source Control
That script now becomes the basis for your database. Any time there are schema changes, they get put back into the scripts so that they don't get lost.
These become your definition.
I find it more reliable than depending on any specific IDE/Platform generating those scripts for you.
We write the scripts ourselves and store them in source control like any other code. Then the scripts that are appropriate for a particular version are all grouped together and promoted to prod together. Make sure to use ALTER TABLE when changing existing tables, because you don't want to drop and recreate them if they have data! I use a drop and recreate for all other objects, though. If you need to add records to a particular table (usually a lookup of some type), we do that in scripts as well. Then that too gets promoted with the rest of the version's code.
For me, putting the scripts in source control, however they are generated, is the key step. This is how you know what you have changed for the next release. This is how you can see earlier versions and revert back easily if there is a problem. Treat database code the same way you treat all other code.
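A version script along those lines might look roughly like this sketch (the object names are illustrative only):

    -- Change an existing table in place so its data survives.
    ALTER TABLE dbo.Orders ADD ShippedDate DATETIME NULL;
    GO

    -- Procedures, views etc. are safe to drop and recreate.
    IF OBJECT_ID('dbo.GetOpenOrders', 'P') IS NOT NULL
        DROP PROCEDURE dbo.GetOpenOrders;
    GO
    CREATE PROCEDURE dbo.GetOpenOrders
    AS
        SELECT OrderId, CustomerId
        FROM   dbo.Orders
        WHERE  ShippedDate IS NULL;
    GO

    -- Lookup data that belongs to this version goes in the same script.
    IF NOT EXISTS (SELECT 1 FROM dbo.OrderStatus WHERE StatusCode = 'SHIP')
        INSERT INTO dbo.OrderStatus (StatusCode, Description)
        VALUES ('SHIP', 'Shipped');
    GO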
You could use one of the data modelling tools that create the scripts for you if you are starting out on a database design and eventually want it to generate the database for you. Some tools for that are Erwin, Fabforce, etc. (though not free).
If you have access to an IDE like SQL Server Management Studio, you can create them using a GUI that's pretty simple.
If you are writing your own code, it's always better to write your own scripts based on a good template so that you cover all the properties of the table definition, like the filegroup, collation and so on. Hope this helps.
Once you do create a base copy, generate scripts and keep a base reference copy of them so that you can make "incremental" changes on top and manage them in source control.
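To illustrate the point above about spelling out properties such as the filegroup and collation, a template-style sketch (the names, collation and filegroup here are assumptions):

    CREATE TABLE dbo.Customer
    (
        CustomerId INT IDENTITY(1,1) NOT NULL,
        -- Collation stated explicitly rather than inherited from the database default.
        Name       NVARCHAR(100) COLLATE Latin1_General_CI_AS NOT NULL,
        CONSTRAINT PK_Customer PRIMARY KEY CLUSTERED (CustomerId)
    )
    ON [PRIMARY];   -- Filegroup stated explicitly.
    GO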
Though I use TOAD for Oracle, I always write the scripts to create my database objects by hand. It gives you (and your DBAs) more control over and knowledge of what's being created and how.
If your schema is too difficult to describe in SQL, you probably have other issues more pressing than which IDE. Use modelling documentation if you need a graphical representation, but yeah, you don't need an IDE.
There are multiple ways out there to do what you are asking.
The traditional way is to ship a script file with your application that contains the CREATE TABLE statements.
If you are a developer, and a Java enterprise developer in particular, you could generate the complete schema using a persistence library such as Hibernate; there are how-tos available for this.
If you are a DBA-level user, you could take a schema export from one environment and import it into your current environment. This is a standard practice among DBAs, but it requires admin access, as you can see. The methods also depend on the database you are using (Oracle, DB2, etc.).

How is Database Migration done?

I remember in my previous job I needed to do a data migration. In that case, I needed to migrate to a new system I was to develop, so it had a different table schema. I think first I should know:
In general, how is data migrated (with the same schema) to a different DB engine, e.g. MySQL -> MSSQL? In my case, my destination DB was MySQL and I used the MySQL Migration Toolkit.
I am thinking that in an enterprise app there may also be stored procedures and triggers that need to be imported.
If the table schema is different, how will I then go about doing this? In my previous job, what I did was import the data (in my case, from Access) into my destination (MySQL), leaving the table structures as they were, then use SQL to select the data and manipulate it as required into the final destination tables.
In my case, where I don't have documentation for the old DB and the columns were not named meaningfully (e.g. 'field1', 'field2', etc.), I needed to trace what the columns meant from the application code. Is there any better way? And sometimes columns contain multiple values as delimited data; is reading the code the only way?
It really depends, but from your question I assume you want to hear what other people do.
So here is what I do in my current project.
I have to migrate from Oracle to Oracle but to a completely different schema.
The old system was 2-tier (old client, old database) the new system is 3-tier (new client, business logic, new database). We have more than 600 tables in the new schema.
After much pondering we scrapped the idea of doing a migration from the old database to the new database in SQL. We decided that in our case it would be much easier to go:
old database -> old client -> business logic -> new database
In the old database much of the data is stored in strange ways and the old client mangles it in complex ways. We have access to the source code of the old client, but it is a very large system.
We wrote a migration tool that sits above the old client and the business logic. We have some SQL before and some SQL after that, but the bulk of the data is migrated via the old client and business logic.
The downside is that it is slow, a complete migration taking more than 190 hours in our case but otherwise it works well.
UPDATE
As far as stored procedures and triggers are concerned:
Even though we use the same DBMS in the old and new systems (both Oracle), the procedures and triggers were written from scratch for the new system.
When I've performed database migrations, I've used the application instead of a general tool to migrate the database. The application connects to two databases and copies objects from one to the other. You don't have to worry about schema or permissions or whatnot, since all of that is handled in the application, just like what happens when you set up the application in the first place.
Of course, this may not help you if your application doesn't support this. But if you're writing an application, I strongly recommend doing it this way.
I recommend the Wikipedia article for a good overview and links to the main commercial tools (and some non-commercial ones). Stored procedures (and kin, e.g. user-defined functions), if abundant, are going to be the "hot spots" in the migration, requiring rare and costly human skills -- as soon as you get away from the "declarative" mood of mainstream SQL and into procedural code, you cannot expect automated tools to do a decent job (Turing's Theorem says that they actually can't, in a sufficiently general case;-). So, you need engineers with a good understanding of the procedural trappings of BOTH engines -- the one you're migrating from and the one you're migrating to. You can buy that -- it's one of the niches where consultants make REALLY good money!-)
If you are using MS SQL Server, you can use SSMS to script out the schema and all data in one go: SQL Server 2008: Script Data as Inserts.
If you are not using any/many non-standard SQL constructs, then you might be able to manually edit this script without too much effort.
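The generated file is ordinary DDL plus INSERT statements, so the hand edits are mostly around data types and bracketed identifiers. A simplified, hypothetical example of the kind of output you would be working with:

    -- Roughly what a "script schema and data" output looks like; the parts that
    -- usually need hand editing for another engine are the types and [brackets].
    CREATE TABLE [dbo].[Currency](
        [CurrencyCode] [char](3) NOT NULL,
        [Description]  [nvarchar](50) NOT NULL
    );
    GO
    INSERT [dbo].[Currency] ([CurrencyCode], [Description]) VALUES (N'USD', N'US Dollar');
    INSERT [dbo].[Currency] ([CurrencyCode], [Description]) VALUES (N'EUR', N'Euro');
    GO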