Rename Tool for SQL Server

I have a database that had 15 years of cruft stuffed into it by multiple teams and people in multiple languages. I am looking to rename tables/columns/constraints to match some type of a standard.
The problem is that a column may be referenced in a ton of stored procs, and there is no way to find those references other than searching each sproc with a tool like SQL Digger. Since I want to rename a massive number of entities, doing that manually for each one sounds painful.
I've been looking for a tool that helps with name refactoring and can't find anything. Some tools [here] vaguely claim to do that, but don't really (to be fair, I haven't looked at all the ones listed).
Has anyone had experience with such a tool?

I've been using ApexSQL Refactor for some time. It is a freeware tool and so far it has worked very well.
There's an article "How to change an object name without breaking your SQL database" in their solution center.
I've noticed a new version announcement (2013), but I'm not sure whether it will remain free.

Be careful when updating objects using any of the tools mentioned above, because they only search for references in your database. An important thing to keep in mind is the code you have in the data access layers of your applications.
Another tool you might want to check out is ApexSQL Clean. It can find all unused database objects and show all references visually, but it also searches .NET solutions and finds references there.
Again, considering that your database is 15 years old, you probably have code in various legacy applications, not only .NET. Anyway, good luck. Hope this is all done by now :)

I have used database projects (in Visual Studio 2010) for refactoring activities in the past with a good deal of success. Database projects definitely have a number of quirks but nothing you can't work around.
You can find more details about it here: http://msdn.microsoft.com/en-us/library/dd193420

I have not used this, but I have used others of their SQL Server tools, and they did what they claimed.
http://www.red-gate.com/products/sql-development/sql-prompt/

A couple of options:
SQL Server Data Tools (review here)
Red-Gate SQL Refactor (well, now part of SQL Prompt)
Note that these tools can only be smart enough to find references that are exposed through direct reference or through proper dependencies. If you construct a table or column name using dynamic SQL, you're out of luck.
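As a low-tech supplement to those tools, you can search the module definitions yourself. Below is a minimal sketch, assuming a hypothetical table dbo.Customer with a column CustNmbr being renamed; it lists every procedure, view, function, or trigger whose text mentions the old name (including names embedded in dynamic SQL string literals), then performs the rename. Note that sp_rename does not touch the referencing code, so every hit still has to be edited by hand or with one of the tools above.

    -- Find every module whose definition mentions the old name (names are hypothetical).
    SELECT OBJECT_SCHEMA_NAME(m.object_id) AS schema_name,
           OBJECT_NAME(m.object_id)        AS object_name,
           o.type_desc
    FROM   sys.sql_modules AS m
    JOIN   sys.objects     AS o ON o.object_id = m.object_id
    WHERE  m.definition LIKE '%CustNmbr%';

    -- Rename the column; referencing modules are NOT updated automatically.
    EXEC sp_rename 'dbo.Customer.CustNmbr', 'CustomerNumber', 'COLUMN';

This only sees objects inside the database itself; as noted in another answer, references in application code and data access layers still have to be hunted down separately.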
I also blogged about keeping sysdepends up to date a few years ago, however I'm not sure how useful it will be with columns in particular:
https://sqlblog.org/2008/09/09/keeping-sysdepends-up-to-date-in-sql-server-2008


Best way to save SQL versions while working on Tableau?

I am working with Tableau and have to write multiple different SQL queries each time I create new data sources.
I have to save all the SQL changes for every data source.
Currently I paste the SQL into Notepad and save the files in a separate folder on my computer, along with a description of the changes.
Is there any better way to do this?
Assuming you have permission to create objects in the database, begin by creating database views, as @Nick.McDermaid commented.
Then, instead of using Custom SQL data source in Tableau, just connect to the View as if it were a table.
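For example, a minimal sketch with hypothetical table and column names; the custom SQL that would otherwise be pasted into Tableau becomes a view, and Tableau connects to the view like any other table:

    -- Hypothetical example: wrap the custom SQL in a view instead of pasting it into Tableau.
    CREATE VIEW dbo.vw_MonthlySales
    AS
    SELECT  s.Region,
            DATEFROMPARTS(YEAR(s.SaleDate), MONTH(s.SaleDate), 1) AS SaleMonth,
            SUM(s.Amount)                                         AS TotalAmount
    FROM    dbo.Sales AS s
    GROUP BY s.Region,
             DATEFROMPARTS(YEAR(s.SaleDate), MONTH(s.SaleDate), 1);
    GO

The view definition can then be scripted to a .sql file, which is what you would keep under source control as described below.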
If you need to track the changes to these SQL views of your data, you will need to learn how to use source control for the .sql files that can be scripted from within SQL Server Management Studio:
Your company or school may have a preferred source control system already in use, in which case you should use that. If they don't, or if you are learning at home, then Git and Subversion are popular open source choices.
There are many courses available on learning platforms like Coursera that will teach you how to use those systems.
I had a similar problem.
We ended up writing the queries in the SQL Workbench editor (https://www.sql-workbench.eu/), then managed the code history and performed peer review of the code (logic, error checks, etc.) in a shared team space (like Confluence).
The reasons we did that are:
1) SQL queries are much easier to write in SQL Workbench.
2) Code review is a must! Implementing a review process will surface more mistakes than you could ever imagine.
3) The shared space is really convenient because it is accessible to everyone, and all errors are documented. After some time you accumulate a lot of visible knowledge.
I also totally agree with Nick that this is one step towards a reporting solution. But developing a whole reporting server is heavy, costly and takes time. Unless management is really convinced of the importance of a reporting solution, you may have to make do with a workaround of queries and Tableau (at least that was the case for us).
A little late to the party, but I would suggest you simply version the tableau workbook. The contents of the workbook are XML, so perfect for versioning using file based tools (Dropbox, One Drive, etc.) or source control (git, etc.). The workbooks themselves are usually quite small, so just make sure to keep the extract data separate if you use it.

Writing Reports with SSDT Visual Studio - Guidance

Broadly speaking, can someone tell me if I'm headed in the right direction?
I now know how to write SQL Queries pretty well.
I would like to start aggregating multiple queries onto one "form"/template (not sure if that's the correct terminology).
I have access to lots of clean data in the form of Excel Files.
I should be able to load the excel files into Visual Studio and then write reports that refer to those excel files as databases, am I right?
I haven't been able to find a great SSDT tutorial yet, but I'll keep looking. Thanks for any feedback.
First off, I apologize that I'm writing a bit of a novel here. My understanding of your question is that you're looking for architectural guidance on the best way to go, and that's not a quick answer.
If, however, I've misinterpreted your intent and you are actually just looking for how to code up an Excel file as a database, there are already a lot of articles online that you can Google.
Regarding your architectural question...it is really going to come down to choosing the best trade-offs for what you're building. I will give you some pointers that I have learned and hopefully it is helpful to you and others in the community.
I would be very hard pressed to advise that you use an Excel file as a database.
While it might seem like the most straightforward solution, the trade-offs are very costly: debugging file-locking issues and dealing with Excel-specific errors becomes death by a thousand cuts. It is certainly possible, but this is a trap that I personally fell into early in my career.
Here is a link to some descriptions of the problems that you'd have with an Excel-file database, and here is a second link.
To paraphrase your question, it sounds like you're developing a personal ETL application for improving your productivity and your company's metrics. Spreadsheets come into your e-mail inbox and transformed versions of the spreadsheets go out of your e-mail outbox. You are wanting to look at the departments' data from a historical and comparative perspective. I have done this many times in the past as well and it is a very reasonable goal.
The best way that I have found to do this is to use a SQL Server database. You can start this out in minimal-viable-product phases and tackle it in small, easy chunks.
Phase I:
1) Download and install SQL Server 2016 Express (free). Make sure to install LocalDB when you install SQL Server 2016. See the LocalDB instructions for more information.
2) Create the LocalDB instance on the command line.
3) Connect to the new LocalDB instance in SQL Server Management Studio.
4) Create a new database that you'll use for importing the data. Give it a name like "ReportData".
5) Import the Excel files received from the various businesses into the new database. This Stack Overflow answer gives a great description of how to do it. Here is an alternate example.
6) If you get any error messages about drivers, you may need to download the correct drivers.
7) Develop the SQL queries that you want to use. For simplicity, I'm just showing a basic select statement here, but you can build some sophisticated SQL queries for aggregating the data in this step.
8) Export the query results into a CSV file or an Excel file. You do this by right-clicking in the "Results" area and selecting "Save Results As...".
9) Manually copy and paste the resulting values into the Excel templates that you would like.
Note: step 9 will be automated soon, but for now it is better to understand your domain objects and to think about the business logic you're building in a quick, iterative manner. (A rough T-SQL sketch of steps 4, 5 and 7 follows below.)
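Here is a rough, hedged sketch of steps 4, 5 and 7. All paths, sheet names and column names are hypothetical, and the OPENROWSET import assumes the Microsoft ACE OLE DB provider is installed and that you are allowed to enable ad hoc distributed queries; the links in step 5 show alternative import routes (such as the Import Wizard) if that isn't the case.

    -- Step 4: create the reporting database (name is hypothetical).
    CREATE DATABASE ReportData;
    GO
    USE ReportData;
    GO

    -- One-time server settings required before OPENROWSET can read an Excel file.
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'Ad Hoc Distributed Queries', 1;
    RECONFIGURE;

    -- Step 5: pull a worksheet into a staging table (path and sheet are hypothetical).
    SELECT *
    INTO   dbo.DeptSales_Staging
    FROM   OPENROWSET('Microsoft.ACE.OLEDB.12.0',
                      'Excel 12.0;Database=C:\temp\DeptSales.xlsx;HDR=YES',
                      'SELECT * FROM [Sheet1$]');

    -- Step 7: a simple aggregate over the imported data (columns are hypothetical).
    SELECT Department,
           SUM(CAST(Amount AS decimal(18, 2))) AS TotalAmount
    FROM   dbo.DeptSales_Staging
    GROUP  BY Department;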
Phase II:
Create a new Console application in Visual Studio that will transform the data from the database into an Excel file output. The most powerful way to do this is to use EPPlus. Here is a detailed explanation on how to do this.
Note, when you run the source code from the detailed explanation link, you need to change the output path first, for example to c:\temp. Note also that there are plenty of other Excel spreadsheet helper packages out there, so feel free to look around at other packages as well. EPPlus is simply one that I have been successful with in my projects. This Console application will be querying your SQL Server database using the queries that you built in step 7 above.
Phase III:
In time, you may find that co-workers and managers within your company want to start accessing your data directly through a web page...
At a high level, the steps you would take are:
1) Back up the database and restore it onto a server (a brief sketch follows this list).
2) Implement a simple MVC application.
3) Perhaps even build web pages to allow users to import Excel files so that they don't need to e-mail them to you any longer.
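A minimal sketch of the backup/restore step, with hypothetical paths; the logical file names in the MOVE clauses depend on how the database was created, so check them first with RESTORE FILELISTONLY.

    -- On the development machine: take a full backup (path is hypothetical).
    BACKUP DATABASE ReportData
    TO DISK = 'C:\temp\ReportData.bak'
    WITH INIT;

    -- On the target server, after copying the .bak file across (paths are hypothetical):
    RESTORE DATABASE ReportData
    FROM DISK = 'D:\Backups\ReportData.bak'
    WITH MOVE 'ReportData'     TO 'D:\Data\ReportData.mdf',
         MOVE 'ReportData_log' TO 'D:\Data\ReportData_log.ldf';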
An additional note, there are Enterprise level ETL and reporting tools out there as well, such as SSIS/SSRS, etc that you could look into if you're looking for a more sophisticated tool set, but I didn't get that impression with your question.
I hope that this answer helps and isn't too long winded. Please let me know if any of the steps are unclear, I know it's a lot of information in one post.

Best way to migrate data from Access to SQL Server

The problem
Ok, sorry that my question is somewhat abstract and subjective, but I will try to make it as specific as possible. The situation I am in is simple: I am remaking a very old MS Access application as a new website using ASP.NET MVC. Since the MVC site is using SQL Server 2008 (for many well-known reasons), I need to find a way to migrate the tables AND the data, because the information in the old database will be used in the new application.
Alright, so far so good, but there are a few problems. The old application is written in a different language, meaning that I want to translate table names, field names, and all other names to English. Furthermore, I will be making some changes to the models themselves (change the type of some fields, add additional fields to some tables, remove old unnecessary ones, and more). So technically I'll be 'having my way' with everything.
Researched solutions
With those things in mind I researched ways to migrate data from an Access database to SQL Server. Of course, there is a lot of information on the matter; on Stack Overflow alone there are more than a few questions and solutions. So why am I struggling to find the answer? Well, I found a few solutions that will be sufficient to some extent (actually they will definitely solve my problems), but I am writing to ask if someone experienced has a better perspective on it than I do. Alright, the solutions and why I am still looking for advice (I'll be listing just a couple of the most common and popular ones that I found; many of the others share the same capabilities and/or results):
Upsize Wizard (Access) - this is a tool devised specifically for migrating tables and data from Access. It is my favourite for the moment, as I find it fairly straightforward to work with and it provides good overall results. I was able to migrate the tables to SQL Server (along with the data, of course), which more or less is what I am intending to do. It is fast, and it allows you to migrate indexes, primary keys and, to my knowledge, even foreign keys (table relationships). The downsides of this tool, however, are that it ignores your queries (which I don't really need, honestly) and that it doesn't provide a way to change the model, names or types of the properties of the tables you migrate - which is the thing I would prefer, because I will have to make more than a few changes (adding, renaming, deleting, etc.) and then continue with the development process of the application, which will lead to a few additional minor changes. And finally, I would need to apply everything (migration + all changes) on the production server, which overall is prone to mistakes, as I will be doing it by hand (and there are more than a few tables).
SQL Server Migration Assistant (SSMA) - this is a separate tool (not included in Access) with the same idea: to migrate data from Access to... possibly anywhere; I haven't researched that. Overall it offers more functionality and customization than the Upsize Wizard, but of course it does so in a more complicated way. I haven't put in enough effort to do a migration with this tool yet, as it involves a lot of installation and additional work, but according to my research it provides almost all (if not all) of the functionality I require. The downside, however, comes with the naming. As I mentioned, it allows you to apply changes to the tables, schema, fields, indexes, keys and probably everything, but the articles advise changing the names in Access first, as that is easier and makes the migration run more smoothly. I am not allowed to make changes to the original Access database, as it will remain functional until the publication of the 'renewed' project and the data inside it is being used, so a mere copy of the file is a solution I am not particularly fond of, because I might lose new records. Also, I can't predict the changes I will want to make during development (as I said, I believe I will want/need to apply some additional changes later on when I find 'weaknesses' in my data design), so I find it a somewhat half-baked solution.
Conclusion
The options presented, the way I see them, are two:
Use the Upsize Wizard to migrate the access tables, then write a script that applies the changes I want to make. Then in the development process add any additional changes to the script. When ready to publish on the production server, reapply the migration with the wizard, run the changes script and pray everything is fine.
Get more involved with the SSMA tool and try producing an updated version of the tables with the migration process. (See how efficient the renaming is and decide whether to use copied file to rename and then find a way to migrate only new records or do it all in the SSMA). Then again write a script for the changes that occur in the development process and re-do and apply it all on the production server when ready and then pray everything is fine.
An option I have not yet seen: apply it, then pray everything is fine.
I have researched the matter for a couple of days now and found a few more solutions that I do not believe are better than those mentioned. However, I allow for the possibility of missing the 'big red X on the map': a practical and easy solution that seems like it was designed specifically for me (though I doubt that a little). Anyway, reducing all the madness that I have written so far to a few simple questions:
Does anyone know whether my conclusions are correct? I am leaning towards option one, as it is easier to accomplish.
Has anyone experienced or found a better way to do this, or spotted some 'logic leaps' in my writing? I am overthinking the entire thing a little and may be making some obvious miscalculation.
Very sorry for asking a trivial question, and one whose answer may require deeper understanding of my project and situation, yet I am working with rather sensitive data and would appreciate feedback, even if only to improve my confidence in the chosen approach.
There is one other tool/method you might want to consider that seems to cater to your specific needs better. That is to use the data import/export tool that ships with SQL Server to do a complete copy of all data into a temporary location within SQL Server, and then write custom queries to reorganize the names and make the other changes you want. It is a bit more work, but you could use the end product as a seed method for your migrations ;) (if you are doing code-first anyway).
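For example, a rough sketch of what those custom queries could look like once the wizard has copied the Access tables verbatim; every table, column and type here is hypothetical, and the same script can be re-run against production as part of the deployment:

    -- Translate names (hypothetical Access names on the left, English names on the right).
    EXEC sp_rename 'dbo.Klanten', 'Customer';                   -- rename a table
    EXEC sp_rename 'dbo.Customer.KlantNaam', 'Name', 'COLUMN';  -- rename a column

    -- Reshape the model: change a type, add a new field, drop an obsolete one.
    ALTER TABLE dbo.Customer ALTER COLUMN Name nvarchar(200) NULL;
    ALTER TABLE dbo.Customer ADD CreatedOn datetime2 NOT NULL
        CONSTRAINT DF_Customer_CreatedOn DEFAULT (SYSUTCDATETIME());
    ALTER TABLE dbo.Customer DROP COLUMN LegacyFlag;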

Alternatives to Tarantino for database Continuous Integration (CI)?

We're currently using VincentVega (now rolled into Tarantino) for our database CI. We're using CruiseControl.Net for our web app (C# using TFS).
VincentVega has worked out relatively well, since it's very explicit and handles the two scenarios of create and update (while preserving existing data) equally well. I'm looking into upgrading to Tarantino, but I'd like to know if anyone can suggest some alternatives I should look into. Tools like SQL Compare that "automagically" produce delta scripts are out of the question, unfortunately, since our database is highly normalized, with over 500 tables.
Thanks
Eric Tarasoff
There is another project by Rob Reynolds that may be worth looking at: RoundHousE
http://code.google.com/p/roundhouse/
The wiki is at https://github.com/chucknorris/roundhouse/wiki
There's a similar tool by Paul Stovell and friends called DbUp.
One notable difference between Tarantino and DbUp is that while Tarantino is typically called from a build script (like Nant or msbuild), DbUp has .NET classes you use within your application. This potentially allows for better fallback handling in case a script doesn't go as planned.
http://code.google.com/p/dbup/
Here's the original announcement of DbUp from Paul Stovell's blog:
http://www.paulstovell.com/dbup
I think it might be of interest to post another answer since Redgate now has a new offering, ReadyRoll, that satisfies your key concerns.
"it [SQL Compare] just doesn't put together a synch script correctly"
Yes, diffing tools can sometimes get the script wrong. Often it's not that the script doesn't work, but it doesn't apply the change in the desired way. ReadyRoll's best-of-both-worlds approach uses SQL Compare under the hood to create each migration script, but crucially it allows the developer to customize the script afterwards.
"RoundHousE and tools like it already operate in a model similar to what we're doing now"
ReadyRoll's approach is, like RoundHousE, migrations-based, managing the upgrade process by running a series of consecutive scripts. This tool was built in recognition that many development teams prefer working this way.
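To make the migrations-based model concrete, here is a generic illustration (not actual ReadyRoll or RoundHousE output; the script names and the journal table are hypothetical) of what a series of consecutive scripts looks like; each tool keeps its own record of which scripts have already run so they are applied exactly once and in order:

    -- 0001_add_customer_email.sql (hypothetical)
    ALTER TABLE dbo.Customer ADD Email nvarchar(256) NULL;

    -- 0002_backfill_customer_email.sql (hypothetical)
    UPDATE dbo.Customer
    SET    Email = LegacyEmail
    WHERE  Email IS NULL;

    -- Conceptually, the tool then journals each applied script, something like:
    INSERT INTO dbo.SchemaVersions (ScriptName, AppliedOn)
    VALUES ('0002_backfill_customer_email.sql', SYSUTCDATETIME());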
"One last reason for choosing RoundHousE: Chuck Norris"
I will have to concede defeat on this point...

Database schemas WAY out of sync - need to get up to date without losing data

The problem: we have one application, a portion of which is used by a very small subset of the total users, and that part of the application runs off a separate database as well. In a perfect world, the schemas of the two databases would be in sync, but such is not the case. Some migrations have been run on the smaller database, most haven't, and furthermore there is no revision number or anything similar to easily identify which have and which haven't. We would like to solve this quandary for future projects. During a discussion we came up with the following possible plan of action, and I am wondering if anyone knows of any project which has already solved this problem:
What we would like to do is create an empty database from the schema of the large fully-migrated database, and then move all of the data from the smaller non-migrated database into that empty one. If it makes things easier, it can probably be assumed for the sake of this problem specifically that no migrations have ever removed anything, only added.
Else, if there are other known solutions, I'd like to hear them as well.
You could use a schema comparison tool like Red-Gate's SQL Compare. You can synchronize the changes and not lose any data. I wrote about this and many alternative tools ranging widely in price here:
http://bertrandaaron.wordpress.com/2012/04/20/re-blog-the-cost-of-reinventing-the-wheel/
The nice thing is that most tools have trial versions, so you can try them out for 14 days (fully functional) and only buy one if it meets your expectations. I can't speak for the other tools, but I've been using RG for years and it is a very capable and reliable tool.
(Updated 2012-06-23 to help prevent link-rot.)
Red-Gate's SQL Compare as Aaron Bertrand mentions in his answer is a very good option. However, if you are not permitted to purchase something, an option is to try something like:
1) For each database, script out all the tables, constraints, indexes, views, procedures, etc.
2) Run a DIFF, go through all the differences, and make sure that the small DB can accept them. If not, implement any changes (including data) necessary on the small DB so it can accept them.
3) Create a new empty database from the schema of the large DB.
4) Import the data from the small DB into the new DB (a rough sketch of this step follows below).
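A minimal sketch of step 4, assuming both databases sit on the same instance and using hypothetical database and table names; columns are listed explicitly so that columns added by later migrations simply stay NULL or pick up their defaults, and IDENTITY_INSERT preserves existing key values (only needed when the key is an IDENTITY column):

    -- Repeat per table (names are hypothetical).
    SET IDENTITY_INSERT NewDB.dbo.Orders ON;

    INSERT INTO NewDB.dbo.Orders (OrderId, CustomerId, OrderDate)
    SELECT OrderId, CustomerId, OrderDate
    FROM   SmallDB.dbo.Orders;

    SET IDENTITY_INSERT NewDB.dbo.Orders OFF;

If the databases live on different instances, the same statements work over a linked server, or you can fall back to the import/export wizard for the data copy.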
You could also reverse engineer your database into Visual Studio as a database project. Visual Studio Team Suite Database Edition GDR R2 (I know long name) has the capability to do a schema comparison and data comparison, but the beauty of this approach is that you get all of your database into a nice database project where you can manage change and integrate with source control. This would allow you to build from a common source and deploy consistent changes.