I have been developing against SQL Server 2012 Enterprise, and came to migrate to production, where I found our hosting provider had installed Standard. I didn't think it would be a problem, as I hadn't implemented any Enterprise-specific features. However, when I restored the DB it failed to activate, and in the Event Log I found a message indicating the database couldn't be activated because it contained features not supported by the edition. When I dug deeper, it appeared that FTS or some other function had automatically created 5 partition functions and schemes.
I then went through a time-consuming process to remove the partition functions and schemes, and could then successfully restore the database on the Standard edition.
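Something along these lines can script that cleanup - a sketch only; each DROP succeeds only if nothing references the scheme or function, and a scheme must be dropped before the function it is built on:
-- Generate DROP statements for all partition schemes, then all partition functions
SELECT 'DROP PARTITION SCHEME ' + QUOTENAME(ps.name) + ';' FROM sys.partition_schemes AS ps;
SELECT 'DROP PARTITION FUNCTION ' + QUOTENAME(pf.name) + ';' FROM sys.partition_functions AS pf;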
After a while I backed up the DB (with no PFs or PSs), transferred it to my dev environment, restored it (on SQL Enterprise), and after some time I found that a single partition function and scheme had been created. When I next came to back up and restore to prod, the database activated OK without error - even though there were partition functions and schemes.
I have just run the following:
SELECT feature_name FROM sys.dm_db_persisted_sku_features ;
from here
http://msdn.microsoft.com/en-us/library/cc280724.aspx
and found that for the DB with 5 partition functions/schemes, Partitioning is listed as an edition-specific feature. When running the same against the DB with 1 function/scheme, it's not listed.
Is it the case that auto-created, FTS-related partition schemes are OK on Standard edition, but manually created/other types are not? (Keep in mind I never manually implemented partitioning.)
Based on the MSDN article Features Supported by the Editions of SQL Server 2012:
Only the Enterprise edition supports partitioning of tables and indexes. This means that if any table or index is partitioned, the database cannot be imported into any other edition. However, a partition scheme and partition function can exist without being used by any table or index. In that case the import succeeds, as there are no partitioned tables or indexes.
Moreover, the RDBMS Manageability section states that distributed partitioned views are supported by all editions to some extent, thus permitting partition schemes and functions to exist as definitions in all editions.
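You can verify which case a database falls into with a quick check like this sketch, which lists any index actually placed on a partition scheme (no rows means nothing is really partitioned):
-- An index is partitioned when its data space is a partition scheme
SELECT OBJECT_NAME(i.object_id) AS table_name, i.name AS index_name, ps.name AS scheme_name
FROM sys.indexes AS i
JOIN sys.partition_schemes AS ps ON i.data_space_id = ps.data_space_id;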
I've made a database project which was originally supposed to be deployed to SQL Server 2014 Enterprise Edition.
Some tables in the project have clustered columnstore indices.
As far as I know, Clustered Columnstore Indices are not supported in SQL Server Standard Edition 2014.
My question is: What will happen if someone tries to deploy this database project with CCIs to a Standard Edition?
Will the tables still be created, but without the indices, or will the whole project deployment fail?
Unfortunately, I can't test this myself because I only have the Developer edition of SQL Server, which includes all Enterprise features.
As Gordon suggested above, it all depends on the deployment mechanism, but there's a very high chance it'll fail (as it should, IMO). More so if the deployment is handled within a transaction that is rolled back in the event of a failure.
Few deployment systems will go the extra mile to ascertain what functionality is supported by the destination RDBMS and most should follow the rule of 'This is in my code repository, deploy it'. To allow it to do otherwise could introduce breaking changes.
TBH your best bet is to keep two branches of the codeset in your source repository - one with the xVelocity indexes, one without. Either that or deploy first to a Developer instance, then run a script to strip out the Enterprise features you know about, then script those changes out to deploy to a downstream SKU.
Yes, that is more work and will require regular code merges with changes, or work creating the downstream code, but it will at least allow you to deploy from a well-defined codeset.
If you absolutely must deploy from an Enterprise derived script then you're going to be looking at rolling your own deployment script.
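Rolling your own would start with finding the offending objects; a sketch along these lines lists the columnstore indexes in the source database:
-- Index types 5 and 6 are clustered and nonclustered columnstore respectively
SELECT OBJECT_NAME(i.object_id) AS table_name, i.name AS index_name, i.type_desc
FROM sys.indexes AS i
WHERE i.type IN (5, 6);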
As an aside (and not that it will help you with 2014), as of SQL Server 2016 SP1 Microsoft changed the playing field and allowed 99% of Enterprise functionality in all SKUs - this includes CCIs.
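For targets older than 2016 SP1, a pre-deployment guard like this sketch fails fast (EngineEdition 3 covers Enterprise, Developer and Evaluation):
-- Abort deployment when the target SKU lacks Enterprise features
IF CAST(SERVERPROPERTY('EngineEdition') AS int) <> 3
    RAISERROR('Target edition does not support clustered columnstore indexes.', 16, 1);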
I have 2 different systems with SAP installed on them: the first installation runs on SQL Server, and the other runs on Oracle.
In the first installation of SAP, running on SQL Server, when I run the DBACOCKPIT tcode I get the following subfolders:
Performance, Space, Backup And Recovery, Configuration, Jobs, Alerts, Diagnostics, Download.
However, on the second installation of SAP, running on Oracle, I only get the following subfolders: Performance, Space, Jobs, Diagnostics.
Why don't I get the other folders?
Both systems run ECC 6.0
SAP Basis components of both are a bit different:
Despite the difference in DBACOCKPIT layouts between the MS SQL system (A) and the Oracle system (B), I think it is normal. Possible reasons for this are:
lack of necessary user authorizations on system B
different set of authorizations of user A and user B
Oracle database was configured incorrectly or was not configured at all
SAP Note 1028624 regarding the Oracle DBACOCKPIT says:
Some performance monitors within the DBA Cockpit require special database objects. These objects are created using an SQL script. See Note 706927 for this script and more information.
Some functions in the DBA Cockpit require the Oracle Active Workload Repository. This Oracle option must therefore be available for selection (see Note 1028068).
This note also specifies the exact set of functions available in DBACOCKPIT for Oracle, and this set fully corresponds to your screenshot.
The DBA Cockpit has a navigation area that is visible in all the functions of the DBA Cockpit. This area contains a menu tree with the following access points:
- Performance (corresponds to the old transaction ST04)
- Space (corresponds to the old transaction DB02)
- Jobs (corresponds to the old transactions DB13, DB12, DB14, DB13C)
- Diagnostics
At the same time, the MS SQL DBACOCKPIT also corresponds to the SAP standard, as confirmed by SAP Note 1027512 and by this datasheet, for example.
Possible steps for research:
check whether the authorizations S_RZL_ADMIN and S_BTCH_ALL are assigned to you, as stated in the detailed DBACOCKPIT description in Note 1028624.
check the SAP Database Oracle Guide for compliance with your system B setup
This is an absolutely standard situation.
I also have two systems with the same version of ECC:
one on MS SQL Server 2014,
the other on SAP HANA SPS03,
and the DBACOCKPIT t-code looks different in these two systems.
All OK!
Some background:
A customer has asked a certified SQL Server consultant for his opinion on migrating from SQL Server 2005 to SQL Server 2008.
One of his most important recommendations was not to use backup/restore but instead use the migration wizard to copy all the data into a new database.
He said that this would ensure that the inner structure of the database would be in SQL 2008 format, and would ultimately result in better performance.
The customer is skeptical about this because they can't find anything in writing, in white papers or otherwise, to corroborate the consultant's statement.
So they posed me this question:
Given a SQL database which originally started out on SQL Server 2000 and has been migrated to newer versions of SQL Server using backup/restore (finally ending up on SQL Server 2005):
Would migrating to SQL Server 2008 using the migration wizard, in effect copying all the raw data into a new database, result in better performance characteristics than using the backup/restore method again?
I'll repeat what I posted on Twitter, "your consultant is an idiot".
Doing a backup and restore will be much easier, and require a much shorter downtime. Also it will ensure that the data is consistent and that no objects are missed.
So long as you are doing index maintenance (rebuilding or reorganizing/defragmenting indexes), any page splits which have happened are fixed and there will be no performance problems.
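For reference, that maintenance is a plain rebuild; a minimal sketch (the table name is illustrative):
-- Rebuild every index on the table, rewriting the pages and removing fragmentation
ALTER INDEX ALL ON dbo.YourTable REBUILD;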
When the database is moved from one version to another, the physical database file is upgraded to the new version. You'll notice when you restore the database that the compatibility level is set to the old version's number; this has nothing to do with the physical structure of the database file. You can change the compatibility level at any time to a lower or higher version. You can see this if you restore the database using T-SQL, as after the database is restored you'll see the specific upgrade steps that are performed.
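For example, raising the restored database to the SQL Server 2008 behaviour is a one-liner (the database name is illustrative; 100 is the 2008 compatibility level):
-- Raise the compatibility level after restoring from the older version
ALTER DATABASE YourDb SET COMPATIBILITY_LEVEL = 100;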
In response to qwerty13579's comment, when the indexes are rebuilt each index is written to new physical database pages, so exporting and importing the data in a SQL Server database isn't needed.
For the record, the migration wizard is about the worst possible option for moving data from database to database.
I agree with Denny.
Backup/restore is the easiest way to upgrade.
For a no-downtime upgrade you can use database mirroring to the new server and fail over to the new version.
One important task that improves performance is refreshing all statistics when you upgrade to a new version.
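A minimal sketch of that refresh (the per-table form is illustrative; FULLSCAN is thorough but slower):
-- Refresh statistics across the database after the upgrade
EXEC sp_updatestats;
-- or per table, with a full scan of the data
UPDATE STATISTICS dbo.YourTable WITH FULLSCAN;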
Oracle and SQL Server have a database change notification feature that notifies registered clients of table/row-level changes in a database. The feature is mostly used for synchronization of data with other data sources.
I've been looking for this feature in DB2 but so far, no luck. Does DB2 not provide this feature at all or am I missing something?
There is no such feature out of the box, not in the LUW version anyway (since you reference Oracle and MS SQL Server, I guess that's what you're interested in). You can easily roll your own using Q Replication event publishing, InfoSphere Change Data Capture, or plain old triggers and MQ functions.
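For illustration, here is a minimal sketch of the trigger-based approach on DB2 for LUW; the ORDERS table and the CHANGE_LOG table it writes to are hypothetical, and a client would poll CHANGE_LOG (or the trigger could call the MQ functions instead):
-- Hypothetical log table that registered clients poll for changes
CREATE TABLE change_log (
    table_name VARCHAR(128) NOT NULL,
    row_key    INTEGER      NOT NULL,
    changed_at TIMESTAMP    NOT NULL DEFAULT CURRENT TIMESTAMP
);

-- Record every update to ORDERS; similar triggers would cover INSERT and DELETE
CREATE TRIGGER orders_upd
    AFTER UPDATE ON orders
    REFERENCING NEW AS n
    FOR EACH ROW MODE DB2SQL
    INSERT INTO change_log (table_name, row_key)
    VALUES ('ORDERS', n.order_id);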
Simple situation. I've created an application which uses SQL Server as its database. I could add a table to this database containing the version number of my application, so my application can check that it's talking to the correct version of the database. But since there are no other settings I store inside the database, this would mean adding a single table with a single field, containing only one record.
What a waste of a good resource...
Is there another way that I can tell the SQL Server database about the product version it's linked to?
I'm not interested in the version of SQL Server itself but of the database that it's using.
(Btw, this applies to both SQL Server 2000 and 2005.)
If you're using SQL 2005 and up, you can store version info as an extended property of the database itself and query the sys.extended_properties view to get the info, e.g.:
EXEC sys.sp_addextendedproperty @name = N'CurrentDBVersion', @value = N'1.4.2';
SELECT Value FROM sys.extended_properties WHERE name = 'CurrentDBVersion' AND class_desc = 'DATABASE'
If SQL 2000, I think your only option is your own table with one row. The overhead is almost non-existent.
I'd go with the massive overhead of a varchar(5) field with a tinyint PK. It makes the most sense if you're talking about a product that already uses the SQL Server database.
You're worried about overhead on such a small part of the system, that it becomes negligible.
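If you do go the table route, a common trick is to constrain the table so it can never hold more than one row; a sketch (names illustrative):
-- The CHECK plus PRIMARY KEY on a constant column forces exactly one row
CREATE TABLE dbo.AppVersion (
    LockId  tinyint    NOT NULL DEFAULT 1 PRIMARY KEY CHECK (LockId = 1),
    Version varchar(5) NOT NULL
);
INSERT INTO dbo.AppVersion (Version) VALUES ('1.4.2');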
I would put the connection settings in the application or a config file that the application reads. Have the app check the version number in the connection settings.
Even if there was such a feature in SQL Server, I wouldn't use it. Why?
Adding a new table to store the information is negligible to both the size and speed of the application and database
A new table could store other configuration data related to the application, and you've already got a mechanism in place for it (and if your application is that large, you will have other configuration data)
Coupling the application to a specific database engine (especially this way) is very rarely a good thing
Not standard practice, and not obvious to someone new looking at the system for the first time
I highly recommend writing the database version into the database.
In an application we maintained over a decade or so, we had updates to the database schema in every release.
When the user started the application after installing an update, it could detect whether the database was too old and convert it to the newer schema. We actually did an incremental update: in order to get from 7 to 10 we did 7 -> 8, 8 -> 9, 9 -> 10.
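In T-SQL terms the incremental walk amounts to something like this sketch (the version table and the upgrade procedures are hypothetical):
-- Read the stored schema version, then apply each missing step in order
DECLARE @ver int;
SELECT @ver = SchemaVersion FROM dbo.SchemaVersion;
IF @ver < 8  BEGIN EXEC dbo.Upgrade_7_to_8;  SET @ver = 8;  END
IF @ver < 9  BEGIN EXEC dbo.Upgrade_8_to_9;  SET @ver = 9;  END
IF @ver < 10 BEGIN EXEC dbo.Upgrade_9_to_10; SET @ver = 10; END
UPDATE dbo.SchemaVersion SET SchemaVersion = @ver;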
Also imagine the scenario when somebody restores the database to an older state from a backup.
Don't even think about adding a single table, just do it (and think about the use cases).