SQL Server: Why use SMO?

I have been working with SQL Server for a couple of years. I have heard about SMO but I don't know anything about it. What are the benefits of using it? Should I learn and start using SMO in my SQL Server projects (mainly data warehouse development)? Why?

From Microsoft:
Overview (SMO)
SQL Server Management Objects (SMO) are objects designed for programmatic management of Microsoft SQL Server. You can use SMO to build customized SQL Server management applications. Although SQL Server Management Studio is a powerful and extensive application for managing SQL Server, there might be times when you would be better served by an SMO application. For example, the user applications that control the SQL Server management tasks might have to be simplified to meet the needs of new users and to reduce training costs. You might have to create customized SQL Server databases, or create an application for creating and monitoring the efficiency of indexes. An SMO application might also be used to include third-party hardware or software seamlessly into the database management application.
The SMO object model extends and supersedes the Distributed Management Objects (SQL-DMO) object model. Compared to SQL-DMO, SMO increases performance, control, and ease of use. Most SQL-DMO functionality is included in SMO, and there are various new classes that support new features in SQL Server. The object model is intuitive and uses SQL-DMO terminology, where it is possible, to help transfer your skills.
You can download SMO here: Microsoft® SQL Server® 2008 R2 Feature Pack
And for getting started programming: Creating SMO Programs
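To give a feel for the API, here is a minimal sketch in C# (assuming the SMO assemblies from the Feature Pack are referenced; the server name is a placeholder) that connects to an instance and lists its databases:

using System;
using Microsoft.SqlServer.Management.Smo;

class SmoDemo
{
    static void Main()
    {
        // Connect to the local default instance via Windows authentication.
        // "localhost" is a placeholder -- substitute your own server name.
        var server = new Server("localhost");

        foreach (Database db in server.Databases)
        {
            Console.WriteLine("{0} ({1:F0} MB)", db.Name, db.Size);
        }
    }
}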

It depends on what you're trying to do. SMO is SQL Server Management Objects, a set of libraries for managing SQL Server programmatically. For example, if you're trying to build a clone of SQL Server Management Studio, then SMO is something you probably want to look into. Or if you're trying to manipulate the structure of your database programmatically, that's the place to look, as in the sketch below.
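For instance, a hedged sketch of the structure-manipulation case (the instance, database, table, and column names are made up for the example; assumes the SMO assemblies are referenced):

using Microsoft.SqlServer.Management.Smo;

class CreateTableDemo
{
    static void Main()
    {
        var server = new Server("localhost");        // placeholder instance name
        var db = server.Databases["MyDatabase"];     // hypothetical database

        var table = new Table(db, "Widgets");        // hypothetical table name
        table.Columns.Add(new Column(table, "Id", DataType.Int) { Nullable = false });
        table.Columns.Add(new Column(table, "Name", DataType.NVarChar(100)));
        table.Create();                              // SMO generates and executes the CREATE TABLE
    }
}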
Otherwise, I wouldn't bother.

I have used SMO to automatically script out object code and user permissions and add them to version control.
By doing this I can save privileges or object DDL as of a point in time for my auditing team or my own research or for cloning a server.
I also use it so I can quickly compare object code from specific dates without needing a snapshot / backup.
Recently I used SMO in a Disaster Recovery Project to script out all Server Permissions and System Database Object Permissions and run the script on the replacement server.
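As a rough sketch of that workflow (the server name, database name, and output folder are placeholders; assumes the SMO assemblies are referenced), the SMO Scripter class can emit object DDL together with permissions, one file per object, ready for version control:

using System.IO;
using Microsoft.SqlServer.Management.Smo;
using Microsoft.SqlServer.Management.Sdk.Sfc;

class ScriptOutDemo
{
    static void Main()
    {
        var server = new Server("localhost");         // placeholder
        var db = server.Databases["MyDatabase"];      // hypothetical database

        var scripter = new Scripter(server);
        scripter.Options.Permissions = true;          // include GRANT/DENY alongside the DDL

        foreach (StoredProcedure proc in db.StoredProcedures)
        {
            if (proc.IsSystemObject) continue;

            // One .sql file per procedure, suitable for committing to source control.
            var path = Path.Combine(@"C:\repo", proc.Schema + "." + proc.Name + ".sql");
            using (var writer = new StreamWriter(path))
            {
                foreach (string line in scripter.Script(new Urn[] { proc.Urn }))
                    writer.WriteLine(line);
            }
        }
    }
}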

I've recently created a POS/work order management application with a SQL Server database backend and the SMO library. SMO gave my application a lot of flexibility to control the database in terms of work order records, user records, and even my own set of user roles, helping me differentiate SQL users when managing a specific database. So my take is that it all depends on the extent of your use of SQL Server and how much you may need to automate and control certain aspects of your database.

Related

Versioning options for shared database development (schema and data)

We have multiple instances (application + database) of the application running (Dev, Test, Prod).
Our implementation process currently looks like this:
multiple database developers work on the same database (the application is developed by a separate team)
the database development process consists of adding/altering database objects and configuring application objects
the application objects are stored in database tables (we call them kernel tables) as data
the database is connected to an instance of the application; there is no possibility for developers to run their own instance of the application on their local machines (I know it is not the best approach, but currently we cannot do it differently for various reasons)
We'd like to figure out an approach and tools (preferably free, or at least not very expensive) to do the versioning at a feature level, since usually one developer works on one feature. In the end, a feature is usually some database objects (tables, views, stored procedures) and the data inserted into the kernel (application) tables. The developer does not control the data entered by the application.
The kernel tables usually have an auto-incremented primary key, plus some dependencies such as foreign keys on other kernel tables.
The ultimate goal is to have a continuous integration process in place, meaning we want an automated process that runs some tests and propagates our features to other environments.
We are using both SQL Server 2014 and Azure SQL Database.
Do you have any experience/ideas on how to handle such a situation?
Thanks
You can use Visual Studio Community (https://www.visualstudio.com/vs/community/) with the SSDT (SQL Server Data Tools) tools: https://learn.microsoft.com/en-us/sql/ssdt/download-sql-server-data-tools-ssdt
You can then create a SQL Database Project, which can hold all your database objects. In addition, you can use Git tooling (GitHub, or VSTS at https://www.visualstudio.com/) to store and share your code (privately).
With these tools you can manage both Azure SQL Database and SQL Server databases.
Here is a complete guide for these tools: https://msdn.microsoft.com/en-us/library/xee70aty.aspx

How to migrate SQL Server data into a new Microsoft Access database

We have a 3 GB file of data from our proprietary CRM system, which uses SQL Server as its database.
The CRM is not meeting our needs and we are thinking about moving to Microsoft Access and building our own system from scratch.
We were wondering if it is possible to easily migrate the SQL Server database into Access?
Thanks for your time.
First of all, it has been a long time since I've had to use MS-Access (thankfully) but I'm not sure Access is suitable for databases of that size. In my opinion, it's best suited to small, desktop-type applications with few concurrent users.
To answer your question, I believe Access offers a data import feature (see under the External Data ribbon in 2013), though I'd suspect it might balk at the idea of 3 GB of data. Edit: actually, this link suggests the maximum database size is 2 GB.
What might be more useful, however, is its linked table feature. If I remember correctly, this allows you to access data stored in SQL Server (or a similar RDBMS), which is better suited to large volumes of data, through an Access front end, complete with pre-canned forms, queries, reports, etc.
It is possible and fairly straightforward to move all of your data tables from SQL Server to Access; however, SQL Server is a much more robust database engine than Access, so I would highly recommend against that. I have, however, had very good success using Access (ADP project files) as a front end for the interface, with SQL Server as the database back end, for interfaces of simple to moderate complexity. If you are not getting the performance you desire from your SQL Server, you might want to consider query performance tuning and looking into memory and hardware upgrades first. I think you will get better and faster results from doing that.
The simple solution would be to "link" Access to SQL Server. That way you continue to use a robust data engine, but are free to use all the reporting and coding features of Access.
In this setup, Access simply becomes a "front end" to the existing SQL database.
And you do NOT want to use an ADP project in Access, since they are deprecated.
The process is thus to create a blank standard database and then use linked tables to SQL Server. This also eliminates the need to import data (which is likely changing all the time).

Best practices for continuous integration with a SQL Server project or a local .mdf file in the project

Today I maintain a project that has a really messy DB that needs a lot of refactoring and publishing to client machines.
I know that I could add a SQL Server Database Project that contains just the scripts of the database and creates a .dacpac file that allows me to change client databases automatically.
Also, I know that I could just add an .mdf file to the App_Data (or even a Solution_Data) folder and have my database there. I suppose the LocalDB that already exists allows me to start up my solution without SQL Server.
And at last, I know that Entity Framework exists with its own migrations. But I don't want to use it, because I can't add and change indexes with its migrations, and I don't have enough flexibility when I need to describe difficult migration scenarios.
My goals:
Generate migration scripts for client DBs automatically.
Make my solution self-contained, so that any new programmer who joins the project doesn't even need to install SQL Server on his machine.
Be able to update the local (development) database in 1-2 clicks.
Be able to move back in the history of DB changes (I have a TFS server).
Be able to have a clean DB (only with dictionaries or lookup tables) in the solution with an up-to-date DB schema.
Additionally, I want to be able to update my DB model (EF or .dbml) automatically, or at least very easily.
So what I want to ask:
What are the strengths and weaknesses of these two approaches if I want to achieve my goals?
Could it be that I should use some combination of these tools?
Or is there another existing tool from MS that I don't know about?
Is there a way to update my DAL model from this DB?
What are the strengths and weaknesses of these two approaches if I want to achieve my goals?
Using a database project allows you to version control all of the database objects. You can publish to various database instances and roll out changes incrementally, rather than having to drop and recreate the database, thus preserving data. These changes can be in the form of a dacpac, a SQL script, or done right through the VS interface. You gain a lot of control over deployments using pre- and post-deployment scripts and publishing profiles. Developers will be required to install SQL Server (the developer/express edition is usually good enough).
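For reference, the same incremental publish can also be driven from code through the DacFx API; a sketch (the connection string, dacpac path, and database name are placeholders):

using Microsoft.SqlServer.Dac;

class DeployDacpac
{
    static void Main()
    {
        // Placeholders: point these at your own server, dacpac, and database.
        var services = new DacServices("Server=.;Integrated Security=true");

        using (var package = DacPackage.Load(@"C:\build\MyDatabase.dacpac"))
        {
            // upgradeExisting: true diffs the target and applies only the
            // changes, preserving data instead of dropping and recreating.
            services.Deploy(package, "MyDatabase", upgradeExisting: true);
        }
    }
}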
LocalDB is a little easier to work with -- you can make your changes directly in the database without having to publish. LocalDB doesn't have a built-in publish process for pushing changes to other instances. No SQL Server installation required.
Use a database project if you need version control for your database objects, if you have multiple users concurrently making changes, or if you have multiple applications that use the same database. Use LocalDB if none of those conditions apply or for small apps that require their own standalone database.
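For context, pointing an application at LocalDB is just a connection-string change; a sketch (the instance name varies by version -- (localdb)\v11.0 for 2012, (localdb)\MSSQLLocalDB for 2014 and later -- and the .mdf path is a placeholder):

using System;
using System.Data.SqlClient;

class LocalDbDemo
{
    static void Main()
    {
        // AttachDbFilename attaches the .mdf on first use; |DataDirectory|
        // resolves to App_Data in a web app. Names and paths are placeholders.
        var connectionString =
            @"Server=(localdb)\MSSQLLocalDB;Integrated Security=true;" +
            @"AttachDbFilename=|DataDirectory|\MyDatabase.mdf";

        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();
            Console.WriteLine(connection.ServerVersion);
        }
    }
}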
Could it be that I should use some combination of these tools?
Yes. According to Kevin's comment below, "If the Database Project is set as your startup project, hitting F5 will automatically deploy it to LocalDB. You don't even need a publish profile in this case."
Or is there another existing tool from MS that I don't know about?
Entity Framework's Code First approach comes close.
Is there a way to update my DAL model from this DB?
Entity Framework's POCO generator works well, unless you make changes to your DAL classes; those changes get lost the next time you run the generator.
There is a new tool called SqlSharpener which can generate classes from the SQL files in a database project. I have not used it so I cannot vouch for it but it looks promising.
One way of generating client scripts for DB changes is to use a database modeling tool like ERwin, which has a free community edition. The best way to meet your database version control requirement with easy script generation is Redgate SQL Source Control. Using the Redgate tool you will meet the first five goals mentioned. Moreover, you can then update the EF model with a single click after changing the DB schema (i.e., the database-first approach), as required in goal 6.
I do not recommend using LocalDB at all. It always causes issues with source control, like "DB file is in use and can't commit...". In addition, the developers on the project will not have a common set of updated data to work with, unless a developer adds test data to the database and asks the others to get the latest version and overwrite their own databases, or generates an update script with the previously mentioned tool and asks every developer to run it against his LocalDB.
The best approach in your situation is to use a SQL Server instance on the network: a master version that all the developers use. Since you have version control on the database using the previously mentioned tool, you can roll back any buggy change on the database server.
If you think the Redgate tool is too expensive for your project's budget, a second approach is to generate a single SQL file from your database that contains all the database objects, and have the other developers update that SQL file in source control with their changes. This can be done easily by using the schema compare tool in Visual Studio and appending the generated script to the SQL file in source control. With the EF database-first approach, you will not have to add as many migration classes as in EF code-first.
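If you go the single-file route, one way to generate it programmatically is SMO's Transfer class; a sketch (server, database, and output path are placeholders; assumes the SMO assemblies are referenced):

using System.IO;
using Microsoft.SqlServer.Management.Smo;

class ScriptWholeDatabase
{
    static void Main()
    {
        var server = new Server("localhost");        // placeholder
        var db = server.Databases["MyDatabase"];     // hypothetical database

        var transfer = new Transfer(db) { CopyAllObjects = true };

        // ScriptTransfer returns the DDL for every object; write it out
        // as a single .sql file to keep in source control.
        using (var writer = new StreamWriter(@"C:\repo\MyDatabase.sql"))
        {
            foreach (string line in transfer.ScriptTransfer())
                writer.WriteLine(line);
        }
    }
}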

Is it possible to run SQL Express within an Azure Web Role?

I am working on a project which uses a relational database (SQL Server 2008). The local (on-premises) application both reads and writes to the database. I am working on a different front end for Azure (MVC2 Web Role), which will use the same data, but in a read only fashion. If I was deploying a traditional web app, I would use SQL Express to act as the local database, and deploy changes with updates to the application (the data changes very slowly) or via some sync system.
With Azure, the picture is a little cloudy (sorry, I had to). I can't seem to find any information to indicate if SQL Express will work inside of Web Roles, and if so, how to do it. Does anyone know if using SQL Express in an Azure web role is possible?
Other options I could do if forced: SQL CE or use SQL Azure. Both have a number of downsides, and are definitely less than perfect.
Thanks,
Erick
Edit
I think my scenario may not have been clear enough.
This data won't change between deployments, and is only accessed from within the Web Role; it is basically a static cache. The on-premises part is kind of a red herring, as it doesn't impact the data on the web role (aside from being its source). Basically, what I want to do is have a local data store/cache that I use existing T-SQL/DAL code with.
While I could use SQL Azure, it doesn't add anything, and if anything only adds additional overhead and failure points. I could also use a VM Role, but that is way too costly/complex.
In a perfect world, I would package the MDF into the cspkg (so it gets deployed with the app) and then use it locally from within the role. If there is no way to do this, then that is ok and I need to figure out the pros and cons of other solutions. We don't live in a perfect world. :)
You might be able to run SQL Express using a custom VHD, but you won't be able to rely on any data ever being present on that VHD. The VMs are completely reset when they reboot - there is no physical persistence across reboots.
If you wanted to, you might be able to locate your entire SQL Server installation in Azure blob storage.
However, in doing all of this, you'll only be able to have one worker/web role that can use that database. Remember: a SQL Server database can only be attached to one SQL Server at a time. If you want to scale out, you'll have to create new SQL Server instances for every web/worker role.
Outside of cost concerns, I can't think of anything that is in SQL Express that should be a show stopper for 99.9% of applications out there.
Adding to Jeremiah's answer: SQL Azure should give you nearly everything SQL Express does today, and you can use the Sync service to synchronize on-premises SQL Server with SQL Azure.
If you installed SQL Express into a VM role, you'd be consuming around $90 monthly just for that instance, plus blob storage (you'd want a Cloud Drive for durability). By definition, a VM Role (or any role) must support scale-out; if you were to scale to 2 instances for whatever reason, both instances would need their own copy of the database, so you'd need to create a blob snapshot for each instance.
Keep in mind, though, if you choose to install SQL Express in a VM: once you're at 2 instances, along with, say, 20GB per instance of blob storage, you're nearing $200 monthly and you're maintaining your VM's OS patches, SQL Express configuration and updates, failure recovery procedures, etc. In contrast, SQL Azure at 20GB, while costing the same $200, will offer better performance and works with the sync service, while completely removing any OS or database server management tasks from you.
To add to the already existing answers, and for anyone wondering if it's a good idea to run SQL Express in the cloud:
it does make sense as a temporary storage area. Consider this architectural approach:
say you're spinning up nodes to run jobs. Storing a gazillion calculation results inside a local SQL Express on each node might be a good idea, letting the node provide the aggregated responses immediately when a job finishes. Transfer of the no-longer-hot results to an off-prem SQL Server for future reporting etc. can be done afterwards. SQL Azure may not be optimal from a volume/latency/cost perspective for storing a gazillion results, and ATS will not always fit the bill, especially when relational data, performance, or existing code are involved.
To expand on what David mentioned you can register for SQL Azure Data Sync CTP2 that would allow sync from SQL Server to SQL Azure here: http://www.microsoft.com/en-us/SQLAzure/datasync.aspx
Make sure to use CTP2 though since CTP1 did not support SQL Server.
If it's a read-only local cache - SQL CE 4 or SQLite.
Both have Entity Framework providers.
If you're writing to it - SQL Azure

How can I synchronize views and stored procedures between SQL Server databases?

I have a 'reference' SQL Server 2005 database that is used as our global standard. We're all set up for keeping general table schema and data properly synchronized, but don't yet have a good solution for other objects like views, stored procedures, and user-defined functions.
I'm aware of products like Redgate's SQL Compare, but we don't really want to rely on (any further) 3rd-party tools right now.
Is there a way to ensure that a given stored procedure or view on the reference database, for example, is up to date on the target databases? Can this be scripted?
Edit for clarification: when I say 'scripted', I mean running a script that pushes out any changes to the target servers. Not running the same CREATE/ALTER script multiple times on multiple servers.
Any advice/experience on how to approach this would be much appreciated.
1) Keep all your views, triggers, functions, stored procedures, table schemas etc in Source Control and use that as the master.
2) Failing that, use your reference DB as the master and script out views and stored procedures etc: Right click DB, Tasks->Generate Scripts and choose your objects.
3) You could even use transactional replication between Reference and Target DBs.
I strongly believe the best way is to have everything scripted and placed in Source Control.
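To the "can this be scripted" point, a rough SMO sketch (server, database, and view names are placeholders; a real sync loop would add existence checks and error handling) that scripts a view from the reference server and replays it on a target:

using System.Collections.Specialized;
using Microsoft.SqlServer.Management.Smo;

class PushViewDemo
{
    static void Main()
    {
        var reference = new Server("RefServer");      // placeholder names
        var target = new Server("TargetServer");

        View view = reference.Databases["Standard"].Views["MyView", "dbo"];

        // Script a guarded DROP followed by the CREATE so the push works
        // whether or not the view already exists on the target.
        var batch = new StringCollection();
        var dropOptions = new ScriptingOptions { ScriptDrops = true, IncludeIfNotExists = true };
        foreach (string s in view.Script(dropOptions)) batch.Add(s);
        foreach (string s in view.Script()) batch.Add(s);

        target.Databases["Standard"].ExecuteNonQuery(batch);
    }
}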
You can use the system catalog views to do this.
For example,
select * from sys.syscomments
The "text" column will give you the code for stored procedures (plus other data). Note that sys.syscomments is a deprecated compatibility view and splits long definitions across multiple rows; on SQL Server 2005 and later, sys.sql_modules and its "definition" column are the preferred source.
It is well worth looking at all the system views and procedures. In fact, I suspect this is what Redgate's software and other tools do under the hood.
I have just begun experimenting with this myself, so I can't really be specific about all the gotchas and which other system views you need to query, but this should get you started.
Also see:
Query to list SQL Server stored procedures along with lines of code for each procedure
which is slightly different question than yours, but related.
I use (and love) the RedGate tools, but when Microsoft announced Visual Studio 2010, they decided to allow MSDN subscribers who get Visual Studio 2008 Team System to also get Visual Studio 2008 Database Edition (which has a schema compare tool).
So if you or your organization has an MSDN subscription, you might want to consider downloading and installing the Database Edition over your Team System to get all the features now.
More details at http://msdn.microsoft.com/en-us/vsts2008/products/cc990295.aspx
Take a look at ScriptDB on Codeplex (http://www.codeplex.com/ScriptDB)
It is a console C# app that creates scripts of the SQL database objects using SMO. You can use it to compare scripts generated on two servers. Since it's open source, adjust it if you need to.
Timur