Microsoft.Crm.CrmException: Database having version 6.0.0.809 is not supported for upgraded - dynamics-crm-2013

Getting the following error when migrating from CRM 2011 Update Rollup 14 to CRM 2013 SP1:
"Microsoft.Crm.CrmException: Database having version 6.0.0.809 is not supported for upgraded."
Before installing SP1 on CRM 2013, the CRM database imported without any issue. The only change is the SP1 installation, and the database I am importing is actually another copy of the same CRM 2011 organization.
Does anyone know what's causing the issue?

The error is misleading, as it has nothing to do with the version of the database. If you are trying to import an organization database that has already been imported into an organization on that CRM server, you will get this error, because the organization has the same ID.
Deleting the other org will fix it. If you need both orgs, you can delete the previously imported one and re-import the already upgraded organization; this will assign it a new organization ID. Then you can proceed with upgrading the second copy.
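To confirm that a clashing organization ID is really the cause, you can compare the ID recorded in the deployment's configuration database with the one inside the database you are trying to import. A minimal sketch, assuming a default on-premise install where the configuration database is MSCRM_CONFIG and the organization database name ends in _MSCRM (YourOrg_MSCRM below is a placeholder):

    -- Organization IDs already registered with this deployment
    SELECT Id, UniqueName, DatabaseName, State
    FROM MSCRM_CONFIG.dbo.Organization;

    -- ID stored inside the database you are about to import
    SELECT OrganizationId, Name
    FROM YourOrg_MSCRM.dbo.OrganizationBase;

If the OrganizationId from the second query matches the Id of an organization already listed by the first, you have the duplicate that triggers the error.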

I want to add to georged's answer:
The solution for me was to delete the old org, then upgrade the second copy first, and only after the upgrade re-import the old one. The other way round didn't work for me.

If georged's solution doesn't work for you then try Tim's. I was going to post this as a comment, but it's too long for a comment.
I'm using a later version of CRM (CRM 9.0.4.5) but otherwise had the same problem. The only difference was that I was importing an organization that needed upgrading - the existing organization was a copy that I had previously upgraded.
From my understanding, when you import an organization, Deployment Manager is supposed to assign a new organization ID if it detects a clash. While that does seem to be how Deployment Manager works, the check seems to be bypassed if it also needs to upgrade the organization. (Perhaps a later version of Deployment Manager will fix that.)
So, using Deployment Manager:
1) Delete the existing organization. The database will be preserved so it can be re-imported later.
2) Import the new organization; it should now import successfully because there shouldn't be a clash.
3) Re-import the original organization that you just dropped.
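If you want to double-check that the delete only removed the deployment record and left the SQL database behind for the later re-import, a quick sketch (same MSCRM_CONFIG / *_MSCRM naming assumptions as above):

    -- The deleted organization should no longer be listed here...
    SELECT UniqueName, DatabaseName, State
    FROM MSCRM_CONFIG.dbo.Organization;

    -- ...but its database should still be present on the SQL instance
    SELECT name
    FROM sys.databases
    WHERE name LIKE '%[_]MSCRM';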

Related

VS2022 Database Project References Keep Breaking

In VS2022, I have a database project. Within that project are some views and functions which refer to a system database, so I have added those system database references, both master and msdb, to the project. The references work, all is good.
I close the solution and reopen it, and now the project shows two references to each database, and a bunch of script errors because an unresolved reference exists.
So the fix is again to remove these 4 references and add the database references back to master and msdb, and then all is good - until I reopen the solution again!
One side note, this solution was originally created in VS2019. Also, this happens on 2 separate machines. I'm running VS 17.3.3 64-bit.
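For context, the kind of object that forces these system database references is anything using a three-part name into master or msdb. A hypothetical example (msdb.dbo.sysjobs is a real system table, the view name is made up); without the msdb reference, SSDT flags the SELECT as an unresolved reference:

    -- Hypothetical view that requires a database reference to msdb
    CREATE VIEW dbo.vw_AgentJobs
    AS
    SELECT j.job_id,
           j.name,
           j.enabled
    FROM msdb.dbo.sysjobs AS j;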
For anyone facing the same problem: VS 2022 adds two references to the DBs, one from the VS extensions folder and the other from the SQL Server folder. It's definitely a bug and happens often when updating VS 2022.
The solution is to delete the second one from the project references (the SQL Server folder reference), then click on the project and explicitly save it with Ctrl+S. Otherwise the change will not be saved, and whenever you close and reopen the solution the project will show invalid references again.

Cognos Framework Removed Field Preventing Validation

I am attempting to fix a Cognos Framework package which has not been working due to a field it was referencing no longer being available in the data. The reason for this was that the field had been renamed.
I have updated the fields in the foundation and presentation layers and rebuilt the relationships which use the expired field, but when trying to validate the package it shows an error saying that it is still using the field which doesn't exist.
Any help would be great thanks.
Please disregard. I think this was a caching issue. I had the server this is on restarted and the issue resolved itself.

Not able to make any changes in my CRM 2013 on-premise (Customization)(Customize the System)

I'm using MS CRM 2013 on-premise. In the trace file I found the line below in the error message attribute when trying to delete any field or entity, and even when trying to add a new entity or field. In short, at this time I am not able to make any changes in my CRM.
"Microsoft Dynamics CRM has experienced an error. Reference number for administrators or support: #0494EF01"
How can I analyze this code? Is there any entity where I can find all the error codes?
Please suggest; I'm totally confused.
@BharatPrajapati alex is right, you probably broke the environment.
One should never change fields and entities directly in SQL; these are highly unsupported customizations.
If you had set up daily backups for the CRM database, you can restore a backup to the same CRM organization and it will start working fine. You will have to map the security roles etc. again.
If you want to delete the entity which you deleted from SQL, you can/should delete it in CRM instead.
P.S. If you do not have a daily backup schedule, then there is no solution other than a new CRM installation.
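For what it's worth, the restore itself is ordinary SQL Server work. A minimal sketch, assuming a full backup of a hypothetical OrgName_MSCRM organization database (names and paths are placeholders; re-map the security roles in CRM afterwards as described above):

    -- Kick out existing connections, restore the most recent good backup,
    -- then reopen the database for normal use.
    ALTER DATABASE OrgName_MSCRM SET SINGLE_USER WITH ROLLBACK IMMEDIATE;

    RESTORE DATABASE OrgName_MSCRM
    FROM DISK = N'D:\Backups\OrgName_MSCRM_daily.bak'
    WITH REPLACE, RECOVERY;

    ALTER DATABASE OrgName_MSCRM SET MULTI_USER;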

VS 2013 Opening SSIS Solution with multiple projects that have protection level set

We are in the process of converting all of our old SQL 2008 SSIS packages to VS 2013/SQL 2014. I've done the conversion process in VS 2013 and also started converting from Package Deployment to Project Deployment. We currently use Encrypt All With Password as the protection level in the packages. However, I now notice that for the ones I've converted to Project Deployment, when I open the solution each of those projects asks me for the password, even though I haven't started any work in those projects. This is different from the remaining packages I have not converted, which do not ask me for the password yet. With well over 30 projects in the solution, I can't see having to type in 30 passwords for all the projects when I know I'm only going to be working on one.
Is there a setting that I am missing? Or a workaround?
Thanks.
A workaround to this is to disconnect your network connection, then open solution/projects. Then go back online. This may be faster if you have several projects.
You'll just have to enter the password for the package/project you're working on. On a related note, once you have the solution open you can "Work Offline" under the SSIS menu to open a package and skip validation. This is useful for very large packages where SSIS might normally take a long time to validate everything in the package.
I would also like to hear the non-workaround answer to this question, if one exists.

ALTER TABLE DROP COLUMN fails in SSDT because of a dependency on a nonclustered index

I have created an SSDT project for a SQL Server 2012 database. Since the database was already present in the SQL Server database engine, I used the import feature to import all the objects into SSDT. Everything works fine, but I am now facing 2 problems:
1) One of the tables uses a HIERARCHYID column (Col1) as a datatype, and there is a computed column based on that HIERARCHYID column. The definition of the computed column is something like CASE WHEN Col1 = hierarchyid::GetRoot() THEN NULL ELSE someexpression END. After importing the table script into SSDT, unresolved reference errors start coming up.
If I change the definition to something like CASE WHEN hierarchyid::GetRoot() = Col1 THEN NULL ELSE someexpression END (note that Col1 is now at the end), it works fine (a minimal sketch of both forms follows after this list).
2) If I keep the above workaround (i.e. keeping Col1 after the =), then at the time of publishing the project SSDT has to drop the column on the production server and then recreate it. Since there is an index that depends on this column, the deployment fails every time with an error like "ALTER TABLE DROP COLUMN failed because one or more objects access this column". I have no control over how SSDT designs / publishes the script, and if I have to keep an eye on dropping every dependent object before publishing the database project, then I think there is no point in using it.
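For reference, here is a minimal sketch of the two computed-column forms described in problem 1 (table, column, and expression names are made up; the behaviour difference is as reported above, not something guaranteed by the syntax):

    -- Form 1: as imported; this is the version that reportedly produces
    -- unresolved reference errors in SSDT
    CREATE TABLE dbo.DemoHierarchy
    (
        Col1      hierarchyid NOT NULL,
        SomeValue int         NULL,
        Computed1 AS CASE WHEN Col1 = hierarchyid::GetRoot()
                          THEN NULL ELSE SomeValue END
    );

    -- Form 2: the same logic with the operands swapped, which reportedly
    -- builds cleanly in the project
    CREATE TABLE dbo.DemoHierarchy2
    (
        Col1      hierarchyid NOT NULL,
        SomeValue int         NULL,
        Computed1 AS CASE WHEN hierarchyid::GetRoot() = Col1
                          THEN NULL ELSE SomeValue END
    );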
Please suggest how I can resolve this.
Thanks
Atul
I was able to reproduce the reference resolution problem you described. I would suggest submitting that issue to Microsoft via Connect here: https://connect.microsoft.com/SQLServer/feedback/CreateFeedback.aspx
I was not able to reproduce the publish failure. Which version of SSDT does the Visual Studio Help > About dialog show is installed? The most recent version ends with 40403.0. If you're not using the most recent version, I would suggest installing it to see if that fixes the publish failure. You can use Tools > Extensions and Updates to download SSDT updates.
If you do have the most recent version, could you provide an example schema that demonstrates the problem?
Compare your project to a production dacpac and have it generate scripts to make the changes. Then if need be, you can edit the scripts before they get applied to production. This is how my dev teams do it.
I had been running into the same issue for a number of days. After finding your post confirming the issue was in SSDT, I realized it might be fixed in a later version than the one we are currently using: 12.0.50730.0 (VS 2013, the version this project uses).
I also have version 14.0.3917.1 installed from VS 2017. I just attempted with that, no issues. So the solution is to upgrade your SSDT version.
Please ignore that solution, it appears my success last night was anomalous. While attempting to repeat it today after restoring a database with the issue, the deployment failed to account for at least one index again.
EDIT:
I have posted about this on User Voice: https://feedback.azure.com/forums/908035-sql-server/suggestions/33850309-computed-column-indexes-are-ignored-with-dacpac-de
Also, to keep this at least a workable answer of sorts: the workaround I am implementing involves dropping and recreating the missed indexes myself using pre- and post-deployment scripts.
It is not an ideal solution if the dacpac was meant to update various versions of the database that could have different levels of drift from the model; however, it works for us, as we have tight control over all instances and can expect about the same delta to be generated each release for each DB instance.
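For anyone wanting the shape of that workaround, a rough sketch of the pre- and post-deployment scripts (index, table, and column names are placeholders; the existence checks keep the scripts re-runnable against databases at different drift levels):

    /* Pre-deployment script: drop the index that would otherwise block
       ALTER TABLE ... DROP COLUMN on the computed column. */
    IF EXISTS (SELECT 1 FROM sys.indexes
               WHERE name = N'IX_YourTable_YourComputedCol'
                 AND object_id = OBJECT_ID(N'dbo.YourTable'))
    BEGIN
        DROP INDEX IX_YourTable_YourComputedCol ON dbo.YourTable;
    END
    GO

    /* Post-deployment script: recreate the index after the dacpac
       has rebuilt the computed column. */
    IF NOT EXISTS (SELECT 1 FROM sys.indexes
                   WHERE name = N'IX_YourTable_YourComputedCol'
                     AND object_id = OBJECT_ID(N'dbo.YourTable'))
    BEGIN
        CREATE NONCLUSTERED INDEX IX_YourTable_YourComputedCol
            ON dbo.YourTable (YourComputedCol);
    END
    GO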