I'm getting the following error when trying to publish my database to my Azure site:
17:13:29: Could not publish the site. Unable to publish the database. For more information, see "http://go.microsoft.com/fwlink/?LinkId=205387"
17:13:29: Error detail:
17:13:29: An error occurred during execution of the database script. The error occurred between the following lines of the script: "332" and "334". The verbose log might have more information about the error. The command started with the following:
17:13:29: "INSERT [dbo].[DBStatus] ([LastIndex], [LastUpdated"
17:13:29: Tables without a clustered index are not supported in this version of SQL Server. Please create a clustered index and try again. http://go.microsoft.com/fwlink/?LinkId=178587
17:13:29: Learn more at: http://go.microsoft.com/fwlink/?LinkId=221672#ERROR_SQL_EXECUTION_FAILURE.
This link explains the error: http://blogs.msdn.com/b/sqlazure/archive/2010/04/29/10004618.aspx
Is there an easy way to convert my database to have clustered indexes or should I just choose a host with SQL Server 2012 hosting?
I wouldn't change your hosting option simply due to this. Add the clustered indexes.
There is a tool called the SQL Database Migration Wizard which can analyze a SQL database, generate a migration script, and even move the data. One of the things it will do is suggest a clustered index on tables that don't have one. However, as with any tool, review what it suggests and check whether it makes sense in your scenario.
My suggestion is to look at the tables that do not have a clustered index and determine a reasonable index to create. A tool like the one above can make suggestions, but they are just suggestions and may not be exactly what your table would benefit from.
The requirement for clustered indexes in Azure SQL Database comes from the fact that it replicates the data in triplicate, and a clustered index makes that a faster process.
I've done these steps:
1- Get a list of tables without a primary key by running this query:
USE DatabaseName;
GO
SELECT SCHEMA_NAME(schema_id) AS SchemaName,name AS TableName
FROM sys.tables
WHERE OBJECTPROPERTY(OBJECT_ID,'TableHasPrimaryKey') = 0
ORDER BY SchemaName, TableName;
GO
2- Create ALTER TABLE scripts for all of those tables, adding an identity field to each and making it the primary key, with scripts like this:
ALTER TABLE [dbo].[tableName] ADD identityFieldName BIGINT IDENTITY;
ALTER TABLE [dbo].[tableName] ADD CONSTRAINT PK_Table_identityFieldName_1 PRIMARY KEY CLUSTERED (identityFieldName);
Repeat the script above for every table that does not have a primary key.
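Rather than repeating the script by hand, the two steps above can be combined: use the `sys.tables` query to generate the ALTER statements for every heap table in one pass. This is a sketch; the `RowId` column name and the `PK_<table>_RowId` naming pattern are just example conventions, and you should review the generated output before running it:

```sql
-- Sketch: generate the ALTER statements for every table without a primary key.
-- Column name [RowId] and the PK naming pattern are hypothetical examples;
-- review the generated script before executing it.
SELECT 'ALTER TABLE ' + QUOTENAME(SCHEMA_NAME(schema_id)) + '.' + QUOTENAME(name)
     + ' ADD [RowId] BIGINT IDENTITY(1,1) NOT NULL;' + CHAR(13)
     + 'ALTER TABLE ' + QUOTENAME(SCHEMA_NAME(schema_id)) + '.' + QUOTENAME(name)
     + ' ADD CONSTRAINT ' + QUOTENAME('PK_' + name + '_RowId')
     + ' PRIMARY KEY CLUSTERED ([RowId]);'
FROM sys.tables
WHERE OBJECTPROPERTY(object_id, 'TableHasPrimaryKey') = 0;
```

Copy the resulting statements into a new query window and run them once you are satisfied with the names.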
Related
I have a SQL upgrade script containing many SQL statements (DDL and DML). When I run this upgrade script in SQL Developer, it runs successfully. I also include a COMMIT at the bottom of the script. After running it I can see all the changes in the database except the unique index constraints. When I insert a few duplicate records, it says the unique constraint was violated, which means the table does have unique constraints. But I don't know why I can't view these constraints in Oracle SQL Developer, even though the other DDL changes are visible. Is there some setting in Oracle SQL Developer needed to view them?
CREATE UNIQUE INDEX "RATOR_MONITORING"."CAPTURING_UK1" ON "RATOR_MONITORING"."CAPTURING" ("DB_TABLE");
CREATE UNIQUE INDEX "RATOR_MONITORING_CONFIGURATION"."BRAND_UK1" ON "RATOR_MONITORING_CONFIGURATION"."BRAND" ("NAME");
CREATE UNIQUE INDEX "RATOR_MONITORING_CONFIGURATION"."BRAND_BUSINESS_PROCESS_UK1" ON "RATOR_MONITORING_CONFIGURATION"."BRAND_BUSINESS_PROCESS" ("BRAND_ID", "BP_ID");
CREATE UNIQUE INDEX "RATOR_MONITORING_CONFIGURATION"."BRAND_ENGINE_UK1" ON "RATOR_MONITORING_CONFIGURATION"."BRAND_ENGINE" ("BRAND_ID", "ENGINE_ID");
As A Hocevar noted, if you create an index
create unique index test_ux on test(id);
you see it in the Indexes tab of the table properties (not in the Constraints tab).
Please note that COMMIT is not required here; it is done implicitly by each DDL statement. A more common source of problems is stale metadata in SQL Developer, i.e. a missing refresh (Ctrl+R on the user or table node).
If you want to define the constraint as well, add the following statement, which will reuse the previously created index:
alter table test add constraint test_unique unique(id) using index test_ux;
See the documentation for further discussion of this option.
I am assuming you are looking for the index on the table in the correct tab in SQL Developer. If you cannot see the index there, one reason could be that the user you are logged in as doesn't have the rights to see it.
If you don't get any error, the solution is simple but tedious: SQL Developer doesn't refresh its fetched structures automatically. Click the blue Refresh icon (or press Ctrl+R) in the Connections view, or disconnect and reconnect (or restart SQL Developer), to see your structural changes.
Using SSIS - VS 2008
Microsoft SQL Server 2008 R2 (SP2) - 10.50.4000.0 (X64)
I am trying to do a bulk update using a staging table. The staging table is an exact schema copy of my destination table. I have read that Indexes can hamper the performance of uploads to a staging table using the fastload option. So I disable the index before the data flow task and rebuild the index after.
However, my SSIS package fails runtime validation. It seems I cannot do a fast load to the staging table with the indexes disabled. This is the error message I receive: "The query processor is unable to produce a plan because the index 'PK_StagingTable' on table or view 'StagingTable' is disabled."
If I remove the command where the index is disabled (Step 3 becomes just truncate table StagingTable) then the SSIS package works.
The question is should this have worked with the index disabled or is that just bad advice? Is there something missing from the instructions that would allow the insert to work with indexes disabled?
The destination table (in this case a staging table) you want to use SSIS's fast-load option on can have its indexes disabled beforehand, but this only works if the indexes are non-clustered. In my specific situation, the table I made a schema copy of had a primary key, which means it had a clustered index. I removed the primary key on the staging table and created a non-clustered index using the same columns.
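The disable/load/rebuild pattern described above can be sketched as follows. The table and index names are hypothetical; the key point is that only a non-clustered index can be disabled around the load:

```sql
-- Sketch of the pattern above; [dbo].[StagingTable] and the index name
-- are hypothetical examples.

-- 1. Disable only the NON-clustered index, then clear the staging table.
ALTER INDEX [IX_StagingTable_Lookup] ON [dbo].[StagingTable] DISABLE;
TRUNCATE TABLE [dbo].[StagingTable];

-- 2. The SSIS fast-load data flow runs here.

-- 3. Rebuild after the load; REBUILD also re-enables the index.
ALTER INDEX [IX_StagingTable_Lookup] ON [dbo].[StagingTable] REBUILD;
```

Disabling a clustered index, by contrast, makes the whole table inaccessible (the clustered index *is* the table's data), which is why the fast load fails when `PK_StagingTable` is disabled.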
I have a large (~33 million records) on-premise SQL Server database which must be replicated to a SQL Azure database (near-realtime replication is required).
I'm trying to use Attunity Replicate software to achieve this.
I created a task with Full Load option specified which successfully uploaded initial data to Azure.
After that I created another Task with Apply Changes option specified, but this task ends with errors:
Failed to execute statement: 'INSERT INTO [attrep_apply_exceptions] values ( ...'
RetCode: SQL_ERROR SqlState: 42000 NativeError: 40054 Message: [Microsoft][SQL Server Native Client 11.0][SQL Server] Tables without a clustered index are not supported in this version of SQL Server. Please create a clustered index and try again. Line: 1 Column: -1
Attunity created [attrep_apply_exceptions] table in the Azure database which doesn't have any clustered index, so insert fails (Azure doesn't allow tables without clustered index).
Why is it happening? Should I add an index myself?
All SQL Azure tables must have a clustered index. You will be able to create a table without one, but as soon as you insert the first record you will see the message: "Tables without a clustered index are not supported in this version of SQL Server."
There is a list of Azure SQL limitations/differences here.
To answer your question, yes you must add a clustered index yourself.
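The behavior described above (DDL succeeds, first INSERT fails) can be demonstrated with a throwaway table; the table and column names here are hypothetical, and the fix is simply adding any clustered index before inserting:

```sql
-- Sketch: on Azure SQL Database (the V11-era service with this limitation),
-- a heap can be created but rejects the first INSERT.
CREATE TABLE [dbo].[demo_heap] (id INT, note NVARCHAR(100));  -- allowed

-- INSERT INTO [dbo].[demo_heap] VALUES (1, N'x');
-- would fail here: "Tables without a clustered index are not supported..."

-- Adding any clustered index makes inserts work:
CREATE CLUSTERED INDEX [CIX_demo_heap] ON [dbo].[demo_heap] (id);
INSERT INTO [dbo].[demo_heap] VALUES (1, N'x');  -- succeeds
```

The same applies to `[attrep_apply_exceptions]`: add a clustered index to it (on whichever column suits its schema) and the replication task's inserts should go through.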
I saw this command in the book I am working through (Teach Yourself SQL in 10 Minutes, 2004):
ALTER TABLE Customers WITH NOCHECK
ADD CONSTRAINT PK_Customers PRIMARY KEY CLUSTERED (cust_id);
Can you tell me what these commands mean (or point me to simple tutorials for them)?
WITH NOCHECK
CLUSTERED
Are there any alternatives to these commands? Can I remove them?
I am using the free edition of SQL Server 2008 R2 with latest updates.
You can download the book from MSDN or use it online:
Microsoft SQL Server 2008 Books Online
SQL Server Books Online
WITH NOCHECK tells SQL Server not to validate the constraint against existing data. CLUSTERED tells SQL Server to create a clustered index with cust_id as the key, which turns the table from a heap into a clustered table.
There are plenty of resources online; it should have most of the basics covered. Start from the basics, such as creating a database, creating tables, and selecting data from tables. More advanced topics such as clustered indexes and the WITH NOCHECK option will only make it more confusing at first.
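To see the two keywords in isolation, consider the following sketch, which reuses the book's Customers table. Note that CLUSTERED is the default for a PRIMARY KEY anyway, and that WITH NOCHECK has no effect on primary keys (it only applies to CHECK and FOREIGN KEY constraints), so in the book's statement both could be removed without changing the result:

```sql
-- CLUSTERED stores the table in primary-key order. It is the default
-- for a PRIMARY KEY constraint, so the keyword can be omitted here:
ALTER TABLE Customers
ADD CONSTRAINT PK_Customers PRIMARY KEY CLUSTERED (cust_id);

-- WITH NOCHECK matters only for CHECK and FOREIGN KEY constraints:
-- it skips validating rows that already exist when the constraint is added.
-- (The CK_Customers_Id constraint below is a made-up example.)
ALTER TABLE Customers WITH NOCHECK
ADD CONSTRAINT CK_Customers_Id CHECK (cust_id > 0);
```

So yes, in the book's example you can safely drop both keywords; the resulting primary key is identical.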
In an effort to get rid of some fragmentation left over from rebuilding and defragmenting, we thought we would drop and recreate the indexes, so I went to write a script. It identifies a clustered index that needs work, drops the indexes and primary keys for a table, and then rebuilds those indexes and primary keys.
Here is the problem I ran into: SQL Server creates quite a few indexes of its own, based on statistics, with its own naming system.
Question: should I drop and recreate only the indexes we made, drop all indexes but recreate only ours, or drop and recreate all indexes, including the ones SQL Server made?
You can always defragment your indexes in place, which is easier than dropping and recreating them. There's a decent explanation of how to do it in this article.
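The in-place approach can be sketched as follows. The table and index names are hypothetical, and the 5%/30% thresholds are the commonly cited rule of thumb, not hard limits:

```sql
-- Sketch: measure fragmentation, then fix it in place instead of
-- dropping and recreating. Names below are hypothetical examples.
SELECT i.name, s.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(
         DB_ID(), OBJECT_ID('dbo.MyTable'), NULL, NULL, 'LIMITED') AS s
JOIN sys.indexes AS i
  ON i.object_id = s.object_id AND i.index_id = s.index_id;

-- Light fragmentation (roughly 5-30%): REORGANIZE is online and incremental.
ALTER INDEX [IX_MyTable_Col] ON [dbo].[MyTable] REORGANIZE;

-- Heavy fragmentation (roughly >30%): REBUILD the index.
ALTER INDEX [IX_MyTable_Col] ON [dbo].[MyTable] REBUILD;
```

This also sidesteps the naming question: REORGANIZE and REBUILD keep every existing index, including any auto-generated names, so nothing has to be dropped or recreated at all.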