ALTER TABLE DROP COLUMN fails in SSDT because of dependency on a non-clustered index - sql

I have created an SSDT project for a SQL Server 2012 database. Since the database was already present in the SQL Server database engine, I used the import feature to import all the objects into SSDT. Everything works fine, but I am now facing two problems.
1) One of the tables uses the HIERARCHYID data type for a column (Col1), and there is a computed column based on that column. The definition of the computed column is something like CASE WHEN Col1 = hierarchyid::GetRoot() THEN NULL ELSE someexpression END. After importing the table script into SSDT, unresolved reference errors started coming up.
If I change the definition to something like CASE WHEN hierarchyid::GetRoot() = Col1 THEN NULL ELSE someexpression END (note that Col1 is now at the end), it works fine.
2) If I keep the above workaround (i.e. Col1 after the =), then at publish time SSDT has to drop the column on the production server and then recreate it. Since an index depends on this column, the deployment fails every time with an error like "ALTER TABLE DROP COLUMN failed because one or more objects access this column". I have no control over how SSDT designs the publish script, and if I have to keep an eye on dropping every dependent object before publishing the database project, then I see little point in using it.
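For reference, a minimal repro of the computed column might look like this (the table name and the ELSE expression here are illustrative):

    CREATE TABLE dbo.Node (
        Col1 hierarchyid NOT NULL,
        -- This operand order triggers the unresolved reference error on import:
        -- ComputedCol AS (CASE WHEN Col1 = hierarchyid::GetRoot() THEN NULL ELSE Col1.GetLevel() END)
        -- This order imports cleanly:
        ComputedCol AS (CASE WHEN hierarchyid::GetRoot() = Col1 THEN NULL ELSE Col1.GetLevel() END)
    );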
Please suggest how I can resolve this.
Thanks
Atul

I was able to reproduce the reference resolution problem you described. I would suggest submitting that issue to Microsoft via Connect here: https://connect.microsoft.com/SQLServer/feedback/CreateFeedback.aspx
I was not able to reproduce the publish failure. Which version of SSDT does the Visual Studio Help > About dialog show is installed? The most recent version ends with 40403.0. If you're not using the most recent version, I would suggest installing it to see if that fixes the publish failure. You can use Tools > Extensions and Updates to download SSDT updates.
If you do have the most recent version, could you provide an example schema that demonstrates the problem?

Compare your project to a production dacpac and have it generate scripts to make the changes. Then if need be, you can edit the scripts before they get applied to production. This is how my dev teams do it.

I had been running into the same issue for a number of days. After finding your post, which confirmed the issue was in SSDT, I realized that it might be fixed in a later version than the one we are currently using: 12.0.50730.0 (VS 2013, the version this project uses).
I also have version 14.0.3917.1 installed from VS 2017. I just attempted the deployment with that version, with no issues. So the solution is to upgrade your SSDT version.
Please ignore that solution; it appears my success last night was anomalous. While attempting to repeat it today after restoring a database with the issue, the deployment again failed to account for at least one index.
EDIT:
I have posted about this on User Voice: https://feedback.azure.com/forums/908035-sql-server/suggestions/33850309-computed-column-indexes-are-ignored-with-dacpac-de
Also, to keep this a workable answer of sorts: the workaround I am implementing involves dropping and recreating the missed indexes myself using pre- and post-deployment scripts.
This is not an ideal solution if the dacpac is meant to update various versions of the database that could have different levels of drift from the model; however, it works for us, as we have tight control over all instances and can expect roughly the same delta per release for each database instance.
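For what it's worth, a sketch of that pre/post-deployment workaround, assuming a hypothetical index IX_MyTable_Computed on dbo.MyTable:

    -- Pre-deployment script: drop the dependent index if it exists.
    IF EXISTS (SELECT 1 FROM sys.indexes
               WHERE name = N'IX_MyTable_Computed'
                 AND object_id = OBJECT_ID(N'dbo.MyTable'))
        DROP INDEX IX_MyTable_Computed ON dbo.MyTable;

    -- Post-deployment script: recreate it after the column change goes through.
    IF NOT EXISTS (SELECT 1 FROM sys.indexes
                   WHERE name = N'IX_MyTable_Computed'
                     AND object_id = OBJECT_ID(N'dbo.MyTable'))
        CREATE NONCLUSTERED INDEX IX_MyTable_Computed ON dbo.MyTable (ComputedCol);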

Related

SSIS Error: VS_NEEDSNEWMETADATA

I'm currently updating all of our ETLs using Visual Studio 2015 (they were made in BIDS 2008) and redeploying them to a new reporting server running SQL Server 2016 (originally 2008 R2).
While updating one of the ETLs and trying to run it on the new server, I got this error:
The package execution failed. The step failed.
Sometimes it also produces this error:
Source: Load Fact Table SSIS.Pipeline Description: "Copy To Fact Table" failed validation and returned validation status "VS_NEEDSNEWMETADATA".
I've tried deleting and re-adding the OLE DB Destination and the connection strings, and opened up the column mappings to refresh the metadata. I also recreated the whole data flow task, but I'm still getting the same error.
The package runs fine on my local machine.
UPDATE:
I started taking the package apart and running only pieces of it to try and narrow down which part was failing. It seemed to be failing on loading into the staging table but I couldn't find out why.
I eventually decided to just try and re-create the whole thing. After re-creating the entire package, still no luck. The picture below is from the event viewer on the server itself but it didn't give me any new information.
[Image: package error from the Event Viewer]
I have tried all the solutions provided above and on other sites. Nothing worked.
I got a suggestion from my friend which worked for me.
Here are the steps:
Right-click the source/target data flow component.
Go to Advanced Editor -> Component Properties.
Find ValidateExternalMetadata and set it to False.
Try your luck. This is a painful issue that left me clueless for 2 days.
I finally found the issue and here's how I did it.
Because the error messages I was getting from SSMS weren't very insightful, I first opened up Remote Desktop and logged into the server. Then I went to Administrative Tools > Event Viewer and then Windows Logs > Application to see if the failed event would provide greater detail.
It still didn't give me much.
The next step I took was to run the package from the command line, because the messages there should be more verbose. I opened up cmd, changed directory to the one my package was in, and then ran:
DTEXEC /FILE YourPackageName.dtsx
Finally, the error message here showed missing columns in the tables the package was trying to write to. I added those columns and voila!
As stated in the comments,
if it runs OK in your development environment, then the problem isn't with the package; it's with the scheduled job on the server. Try recreating that.
If that doesn't work,
it seems like the server has a cached instance of the package that it's using instead of the updated one. Try renaming your package and creating a new job with the new package name, and see if that works.
If that doesn't work,
all I can recommend at that point is to cut the package down until it succeeds, then add back steps one at a time until you find the one that fails.
It sounds from your solution like the development environment is more forgiving of schema updates than the deployed solution. Glad you were able to resolve it; eliminating clutter helps.
I had the same problem, and my issue was a difference between two environments: the same field in the same table was written once with a capital letter and once without. So the name was the same, but with this small difference (e.g. isActive vs IsActive).
This came from a refactoring effort, where we used the VS database publish feature, which did not update the field name.
Have you tried deleting and re-creating the source? When I get this error, I can generally modify any object that has it, but I have to delete and rebuild the paths between the objects; sometimes I have to delete everything in the data flow and re-create it.
A proxy for SSIS package execution should be created under SQL Server Agent. You should then change your job step (or steps) to run as the proxy you've created.
I had your same problem some time ago and the proxy fixed it.
Forgive me if you've already tried this.
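In case it helps, a minimal T-SQL sketch of setting one up (the credential, account, and proxy names are illustrative):

    -- Create a credential tied to a Windows account, then a proxy that uses it,
    -- and grant the proxy access to the SSIS subsystem.
    USE master;
    CREATE CREDENTIAL SsisCredential
        WITH IDENTITY = N'DOMAIN\SsisRunner', SECRET = N'<password>';

    EXEC msdb.dbo.sp_add_proxy
        @proxy_name = N'SsisProxy',
        @credential_name = N'SsisCredential';

    EXEC msdb.dbo.sp_grant_proxy_to_subsystem
        @proxy_name = N'SsisProxy',
        @subsystem_name = N'SSIS';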
It is very common to get that message when two columns in the source file are being mapped to the same field of the table.
For example: my text file has "neighborhood" twice (the same label for two different columns) and my table has "neighborhood" and "neighborhoodb" (notice the "b" at the end). The import will try to load both text columns into the field "neighborhood", ignore the "neighborhoodb" field, and fail with the "VS_NEEDSNEWMETADATA" error.
Re-creating the job worked for me. Some cached version of the job may have been causing the VS_NEEDSNEWMETADATA error. The package executed correctly on its own, but it failed when it was executed by an agent job.
This ended up being a permissions issue for me. The OLE DB Source was using a stored procedure that selected from a SQL view. This view joined to a table in another database, and unfortunately the proxy account the SQL Agent job step was running the package under did not have SELECT permission on the table in that database. This is why the package ran fine in Visual Studio but not from a job when deployed to the server.
I found the root cause by taking the SELECT statement out of the stored procedure and putting it directly in the source query box of the OLE DB Source, which caused it to finally return the 'SELECT permission denied' error message. This error was apparently hidden from SSIS because the proxy account DID have EXECUTE permission on the stored procedure.
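A hedged sketch of the eventual fix, assuming the job step runs as DOMAIN\SsisProxy and the view joins to dbo.JoinedTable in OtherDb:

    USE OtherDb;
    -- Map the proxy's login into the database if it isn't already there.
    CREATE USER [DOMAIN\SsisProxy] FOR LOGIN [DOMAIN\SsisProxy];
    GRANT SELECT ON dbo.JoinedTable TO [DOMAIN\SsisProxy];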
Changing ValidateExternalMetadata to False worked for me too. I was transferring data from an MSSQL database to a MySQL database, and I changed it on the "ADO NET Destination".
You may need to strongly type your Source Query.
Example:
If your destination DB has a FullName field of type NVARCHAR(255),
and in your source query you have

    SELECT firstname + lastname AS FullName FROM ...

try this:

    SELECT CONVERT(NVARCHAR(255), firstname + lastname) AS FullName FROM ...

So if you are going from DB to DB and both columns are NVARCHAR(255), you won't hit this issue; but if you are concatenating fields in your query, specify the data type and length.
This error can also occur when an entire SSIS project needs to be redeployed rather than just one of the packages (for VS versions that allow deployment of a single package in a multi-package project), particularly when a project connection has been changed or added, for example when you've added or removed columns from a flat-file project connection. In that case, you need to deploy the entire project to push out the updated project connection properties. This can be true even if the project only has one package in it. In VS Solution Explorer, rather than clicking the package name to deploy, select the bolded project name at the top, and then click Deploy.

Visual Studio 2013 SQL Server Project Deployment/Publish

I am looking for information about Visual Studio 2013 and working with SQL Server projects in VS 2013. We are currently working on a project where we're using a database that already exists and is used by an ERP application. We're creating SQL scripts that alter and create fields on a table in the target database.
Now, we're not looking to "publish" those scripts, but to create post-deploy scripts instead, which contain all the necessary SQL scripts in the order they need to be run. Everything is working fine. When we build the project, we get a fresh copy of the PostDeploy.sql script file that we run against a target database.
At the moment, the script looks at a table and, if the column that needs to be added exists, DROPs it and then recreates it. This is fine for the testing phase, but once we go live there will be several stages of the database that the code needs to be tested on. The column may already exist from before, and in that case we wouldn't want to DROP it; instead, we want to do a schema- and data-level compare and only bring over the objects that are different, so that the column doesn't need to be dropped but just "updated". I hope I am not being vague when I ask this question.
I found this video: https://www.youtube.com/watch?v=AuVpmu9CKRY and I am not sure if that is what I need to do. I would love any suggestions from you guys.
Have a wonderful day!
Well, this isn't really the best use for SSDT/DB projects. Ideally, you'd want to pull the schema into a project and tweak that project to look the way you want: rename columns, change types, and so on. Because it sounds like this is a third-party app, you'd want some environment that can serve as your baseline; when you run whatever upgrade script is sent by the vendor, it goes against that environment. You'd then pull the appropriate changes into your project.
Once you have a project that looks the way you want, you use the publish option against your target database. In your case, I'd likely recommend generating a script. If you're in the VS environment, you can take a look at both the script and a summary of what will be changed.
For data compares, I'd really consider something like Red Gate's SQL Data Compare (pro edition if you can). You can set up a data compare against your baseline and automate pushing the data changes. You can do that through post-deploy scripts, but you'll need to hand-code the data inserts, updates, and deletes yourself.
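If you do hand-code it, a common post-deploy pattern for reference data is a MERGE per table, something like this sketch (the table and values are illustrative):

    MERGE dbo.OrderStatus AS target
    USING (VALUES (1, N'Open'), (2, N'Closed')) AS source (StatusId, StatusName)
        ON target.StatusId = source.StatusId
    WHEN MATCHED THEN
        UPDATE SET StatusName = source.StatusName
    WHEN NOT MATCHED BY TARGET THEN
        INSERT (StatusId, StatusName) VALUES (source.StatusId, source.StatusName)
    WHEN NOT MATCHED BY SOURCE THEN
        DELETE;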
I've blogged about SSDT before and that may give you some ideas. Jamie Thomson has also written quite a bit about Database/SQL Projects and inspired quite a bit of what I've done.
http://schottsql.blogspot.com/2013/10/all-ssdt-articles.html

Recover failed sonar upgrade (missing table)

Due to a backup issue, the migration from Sonar 3.5.1 to 3.7 got messed up a bit. Now some tables are missing, but the migration is done.
Is there some way I can rerun the DB migration to create the missing tables?
Note that I have so far only seen one problem, and it shows in the log as:
MySQLSyntaxErrorException: Table 'sonar.issue_filters' doesn't exist
when I view the issues page or the issues drilldown. I see that the table is created in war/sonar-server/WEB-INF/db/migrate/411_create_issue_filters.rb.
So based on the info there, it seems I could create that one manually directly in SQL, but is there a better, safer way to recover this migration? (I suspect issue_filters is not the only problem.)
Using MySQL for the db.

Branching strategy for a release-based db project

We have an ASP.NET VB.NET 2008 project on TFS 2010. The project has one main branch, and for every release we create a new feature branch, which is what finally gets deployed. After the production deployment, we merge the branch back into the main branch.
We are now also adding a DB project to manage our SQL. The question is how to version-control the differential scripts. The DB project contains all the create scripts, which would be fine if we had to deploy the project from scratch, but the project is already live. So in practice any new release or hotfix would contain alter or change scripts.
Any ideas how to best manage both the create scripts and the per-release change scripts?
The way that we have been doing this for years is through the use of update scripts that can move the database from one specific version to another.
There are two types of update scripts that we apply: table changes and data changes.
The table changes are recorded by hand as they are made and are designed in such a way that the script can be safely run multiple times against the same database without error; for example, we only add a column if it doesn't already exist in the table. This approach allows this version-specific script to be used for applying hotfixes as well as upgrading from one version to the next. The hotfixes are simply applied as additional entries at the end of the file.
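As a rough sketch of that guard pattern (the table and column names are illustrative):

    IF NOT EXISTS (SELECT 1 FROM sys.columns
                   WHERE object_id = OBJECT_ID(N'dbo.Customer')
                     AND name = N'MiddleName')
    BEGIN
        ALTER TABLE dbo.Customer ADD MiddleName NVARCHAR(50) NULL;
    END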
This approach requires developer discipline, but when implemented correctly, we have been able to update databases that are 4 major revisions and 4 years out of date to the current version.
For the data changes, we use tools from Red Gate, specifically SQL Data Compare.
As far as database programmability goes (stored procedures, triggers, etc.), we keep one script that, when executed, drops all of the current items and then re-adds the current versions. This process is enabled by a strict naming convention for all programmability elements (stored procedures are named starting with s_prefix_, functions with fn_prefix_, etc.).
To ensure the correct script versions are applied, we added a small versions table (usually one row) to the database to record its current version. This table is updated by the table update script when it is applied, and also by the script that creates the database from scratch.
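A sketch of what that might look like (the names here are hypothetical):

    CREATE TABLE dbo.DatabaseVersion (
        MajorVersion INT NOT NULL,
        MinorVersion INT NOT NULL,
        AppliedOn    DATETIME NOT NULL DEFAULT GETDATE()
    );
    INSERT INTO dbo.DatabaseVersion (MajorVersion, MinorVersion) VALUES (0, 0);

    -- At the end of each update script, e.g. update_v4.sql:
    UPDATE dbo.DatabaseVersion
    SET MajorVersion = 4, MinorVersion = 0, AppliedOn = GETDATE();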
Finally, in order to apply the scripts, we created a small tool that reads the current version of the database and a manifest that specifies which scripts to apply based on that version.
As an example:
Assume that we have issued two major versions, 3 and 4, so there are two update scripts, update_v3.sql and update_v4.sql. We also have one initial structure script, tables.sql, and one programmability script, stored_procs.sql. Given those assumptions, the manifest would look something like:
tables.sql > when version = 0
update_v3.sql > when major_version <= 3
update_v4.sql > when major_version <= 4
stored_procs.sql > always
The tool evaluates the current version and applies the scripts in the order specified in the manifest to ensure that the database is always updated in a known manner.
Hopefully this helps give you some ideas.

Backup only new or edited records

I have built a SQL Server Express database that is going to be housed on an external hard drive. I need to be able to add and update data in the database from my system as well as other systems, and then back up or transfer only the data that has been added or edited to the external hard drive. What is the best way to accomplish this?
You would probably use replication for this, but as you're using SQL Server Express, that isn't an option.
You'll need some sort of mechanism to determine what has changed between backups. So each table will need a timestamp or last-updated datetime column that's updated every time a record is inserted or updated. It's probably easier to update this column from a trigger than from your application.
Once you know which records were inserted or updated, it's just a matter of selecting the ones that changed since the last time the backup was performed.
An alternative is to add a bit column that gets set on each change, but this seems less flexible.
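A minimal sketch of the timestamp-plus-trigger approach, assuming a hypothetical table dbo.Orders keyed on OrderId:

    -- The default covers inserts; the trigger covers updates.
    ALTER TABLE dbo.Orders
        ADD LastUpdated DATETIME NOT NULL DEFAULT GETDATE();
    GO
    CREATE TRIGGER trg_Orders_LastUpdated ON dbo.Orders
    AFTER UPDATE
    AS
    BEGIN
        SET NOCOUNT ON;
        UPDATE o
        SET LastUpdated = GETDATE()
        FROM dbo.Orders o
        INNER JOIN inserted i ON o.OrderId = i.OrderId;
    END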
Sherry, please explain the application and the rationale for your design. The database does not have any built-in mechanism to do this; you'll have to track changes yourself and then do whatever you need to do. SQL Server 2008 has a change tracking feature built in, but I don't think that will help you with Express.
Also, take a look at the Sync Framework. Adding this into your platform is a major payload, but if keeping data in sync is one of the main objectives of your app, it may pay off for you.
In an application
If you are doing this from an application, then every time a row is updated or inserted, set a bit/bool column called dirty to true. When you select the rows to be exported, select only the rows that have dirty set to true. After exporting, set all dirty flags back to false.
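As an illustrative sketch (the table and column names are hypothetical, and @id/@newValue stand in for application parameters):

    -- On every insert or update made by the application:
    UPDATE dbo.Records SET Payload = @newValue, IsDirty = 1 WHERE RecordId = @id;

    -- Export step: pick up only the changed rows.
    SELECT * FROM dbo.Records WHERE IsDirty = 1;

    -- After a successful export:
    UPDATE dbo.Records SET IsDirty = 0 WHERE IsDirty = 1;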
Outside an application
DTS Wizard
If you are doing this outside of an application, then run this at the command line:
Run "C:\Program Files\Microsoft SQL Server\90\DTS\Binn\DTSWizard.exe"
This article explains how to get the DTS Wizard (it is not included by default).
It is included in the SQL Server Express Edition Toolkit, and only that. If you have installed another version of SSE, it works fine to install this package afterwards without uninstalling the others. Get it here: http://go.microsoft.com/fwlink/?LinkId=65111
The DTS Wizard is included in the option "Business Intelligence Development Studio", so be sure to select that for install.
If you have installed another version of SSE, the installer might report that there is nothing to install. Override this by checking the checkbox that displays the version number (in the installer wizard).
After the install has finished, the DTS Wizard is available at C:\Program Files\Microsoft SQL Server\90\DTS\Binn\dtswizard.exe; you might want to make a shortcut, or even include it on the Tools menu of SQL Studio.
bcp Utility
The bcp utility bulk copies data between an instance of Microsoft SQL Server and a data file in a user-specified format. The bcp utility can be used to import large numbers of new rows into SQL Server tables or to export data out of tables into data files. Except when used with the queryout option, the utility requires no knowledge of Transact-SQL.
To import data into a table, you must either use a format file created for that table or understand the structure of the table and the types of data that are valid for its columns.
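For the incremental-export scenario above, a hedged example using the queryout option (the server, database, table, and column names are illustrative):

    bcp "SELECT * FROM MyDb.dbo.Orders WHERE LastUpdated > '2012-01-01'" queryout changed_orders.dat -S .\SQLEXPRESS -T -c

Here -S names the server instance, -T uses a trusted (Windows) connection, and -c exports in character format.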