Attunity Replicate - tables in Azure are created without clustered indexes - replication

I have a large (~33 million records) on-premises SQL Server database which must be replicated to a SQL Azure database (near-real-time replication is required).
I'm trying to use Attunity Replicate software to achieve this.
I created a task with the Full Load option specified, which successfully uploaded the initial data to Azure.
After that I created another task with the Apply Changes option specified, but this task ends with errors:
Failed to execute statement: 'INSERT INTO [attrep_apply_exceptions] values ( ...'
RetCode: SQL_ERROR SqlState: 42000 NativeError: 40054 Message: [Microsoft][SQL Server Native Client 11.0][SQL Server] Tables without a clustered index are not supported in this version of SQL Server. Please create a clustered index and try again. Line: 1 Column: -1
Attunity created the [attrep_apply_exceptions] table in the Azure database without a clustered index, so the insert fails (Azure doesn't allow tables without a clustered index).
Why is this happening? Should I add the index myself?

All SQL Azure tables must have a clustered index. You will be able to create a table without one, but once you insert your first record you will see the message: "Tables without a clustered index are not supported in this version of SQL Server".
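You can reproduce the behaviour with a trivial throwaway table (the names here are just an illustration):
CREATE TABLE dbo.HeapDemo (Id INT NOT NULL);                 -- succeeds even without a clustered index
INSERT INTO dbo.HeapDemo VALUES (1);                         -- fails on SQL Azure with error 40054
CREATE CLUSTERED INDEX CIX_HeapDemo_Id ON dbo.HeapDemo (Id);
INSERT INTO dbo.HeapDemo VALUES (1);                         -- succeeds once the clustered index exists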
There is a list of Azure SQL limitations/differences here.
To answer your question: yes, you must add a clustered index yourself.
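A minimal sketch of doing that for the control table, assuming you are happy to add a surrogate key column (the rowId column and the constraint name below are just examples, not part of Attunity's schema):
ALTER TABLE [attrep_apply_exceptions] ADD rowId BIGINT IDENTITY;
ALTER TABLE [attrep_apply_exceptions]
    ADD CONSTRAINT PK_attrep_apply_exceptions PRIMARY KEY CLUSTERED (rowId);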

Related

After altering column in SQL Server, Access linked tables will not work

After running this SQL script in SQL Server 2012:
ALTER TABLE table1
ALTER COLUMN name NVARCHAR(50)
I am unable to create a linked table from Access 2013.
An error is displayed when I attempt to link to the SQL Server table.
The issue in this case ended up being a spatial index. Access would not add the linked table while the spatial index existed. I simply removed the spatial index temporarily, added the linked table in Access, and then recreated the spatial index on the column.
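In case it helps, the round trip looks roughly like this (index and column names are placeholders; the column is assumed to be a geography column, since a geometry column would also need a BOUNDING_BOX):
-- temporarily drop the spatial index so Access will link the table
DROP INDEX SIX_table1_location ON dbo.table1;
-- ... add the linked table in Access ...
-- then recreate the spatial index
CREATE SPATIAL INDEX SIX_table1_location ON dbo.table1 (location);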

Alter Memory Optimized SQL Server 2014 Table

Can I alter my memory-optimized table, for example to add a column or change a data type? If yes, how do I do it?
I am using SQL Server 2014.
Thanks
According to Altering Memory-Optimized Tables (SQL Server 2014):
Performing ALTER operations on memory-optimized tables is not
supported. This includes such operations as changing the bucket_count,
adding or removing an index, and adding or removing a column. This
topic provides guidelines on how to update memory-optimized tables.
Updating the definition of a memory-optimized table requires you to create a new table with the updated table definition, copy the data to the new table, and start using the new table.
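On SQL Server 2014 that rebuild looks roughly like this (a sketch only; the table, columns and bucket count are made up):
CREATE TABLE dbo.Session_v2
(
    SessionId INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    UserName  NVARCHAR(100) NOT NULL,
    Notes     NVARCHAR(400) NULL   -- the newly added column
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

INSERT INTO dbo.Session_v2 (SessionId, UserName, Notes)
SELECT SessionId, UserName, NULL
FROM dbo.Session;

DROP TABLE dbo.Session;
-- then point the application (or views) at dbo.Session_v2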
But it will be possible with SQL Server 2016:
In SQL Server 2016 Community Technology Preview 2 (CTP2) you can
perform ALTER operations on memory-optimized tables by using the ALTER
TABLE statement. The database application can continue to run, and any
operation that is accessing the table is blocked until the alteration
process is completed.
In the previous release of SQL Server, you had to manually complete
several steps to update memory-optimized tables.
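So on SQL Server 2016 and later the same change can be a single statement, e.g. (hypothetical names again):
ALTER TABLE dbo.Session ADD Notes NVARCHAR(400) NULL;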

SSIS fastload with indexes disabled on target does not work

Using SSIS - VS 2008
Microsoft SQL Server 2008 R2 (SP2) - 10.50.4000.0 (X64)
I am trying to do a bulk update using a staging table. The staging table is an exact schema copy of my destination table. I have read that indexes can hamper the performance of uploads to a staging table using the fast load option, so I disable the index before the data flow task and rebuild the index after.
However, my SSIS package fails runtime validation. It seems I cannot do a fast load to the staging table with the indexes disabled. This is the error message I receive: "The query processor is unable to produce a plan because the index 'PK_StagingTable' on table or view 'StagingTable' is disabled."
If I remove the command where the index is disabled (step 3 becomes just TRUNCATE TABLE StagingTable), then the SSIS package works.
The question is: should this have worked with the index disabled, or is that just bad advice? Is there something missing from the instructions that would allow the insert to work with indexes disabled?
The destination table (in this case a staging table) you want to use SSIS's fast load option on can have its indexes disabled beforehand, but it will only work if the indexes are non-clustered. In my specific situation, the table I made a schema copy of had a primary key, which means it had a clustered index. I removed the primary key on the staging table and created a non-clustered index using the same columns.
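Putting that together, the Execute SQL tasks around the data flow can look something like this (object names other than those in the error message are illustrative):
-- one-time change: replace the clustered primary key with a non-clustered index
ALTER TABLE dbo.StagingTable DROP CONSTRAINT PK_StagingTable;
CREATE NONCLUSTERED INDEX IX_StagingTable_Key ON dbo.StagingTable (KeyColumn);

-- before the data flow task
TRUNCATE TABLE dbo.StagingTable;
ALTER INDEX IX_StagingTable_Key ON dbo.StagingTable DISABLE;

-- after the data flow task
ALTER INDEX IX_StagingTable_Key ON dbo.StagingTable REBUILD;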

Error when trying to publish database to Azure without clustered indexes

I'm getting the following error when trying to publish my database to my Azure site:
17:13:29: Could not publish the site. Unable to publish the database. For more information, see "http://go.microsoft.com/fwlink/?LinkId=205387"
17:13:29: Error detail:
17:13:29: An error occurred during execution of the database script. The error occurred between the following lines of the script: "332" and "334". The verbose log might have more information about the error. The command started with the following:
17:13:29: "INSERT [dbo].[DBStatus] ([LastIndex], [LastUpdated"
17:13:29: Tables without a clustered index are not supported in this version of SQL Server. Please create a clustered index and try again. http://go.microsoft.com/fwlink/?LinkId=178587
17:13:29: Learn more at: http://go.microsoft.com/fwlink/?LinkId=221672#ERROR_SQL_EXECUTION_FAILURE.
This link explains the error: http://blogs.msdn.com/b/sqlazure/archive/2010/04/29/10004618.aspx
Is there an easy way to convert my database to have clustered indexes or should I just choose a host with SQL Server 2012 hosting?
I wouldn't change your hosting option simply due to this. Add the clustered indexes.
There is a tool called SQL Database Migration Wizard which can analyze a SQL database, generate a migration script, and even move the data. One of the things it will do is suggest a clustered index on tables that don't have one. However, like any tool, make sure you look at what it is suggesting and see if it makes sense in your scenario.
My suggestion is to look at the tables that do not have a clustered index and determine a reasonable index to create. A tool like the one above can make suggestions, but they are just suggestions and may not be exactly what your table would benefit from.
The requirement for clustered indexes in Azure SQL Database comes from the fact that the service replicates the data in triplicate, and a clustered index makes that a faster process.
I've done these steps:
1- Get a list of tables without a primary key by running this query:
USE DatabaseName;
GO
SELECT SCHEMA_NAME(schema_id) AS SchemaName, name AS TableName
FROM sys.tables
WHERE OBJECTPROPERTY(OBJECT_ID, 'TableHasPrimaryKey') = 0
ORDER BY SchemaName, TableName;
GO
2- Create ALTER TABLE scripts for those tables that add an identity field and make it the clustered primary key, like this:
ALTER TABLE [dbo].[tableName] ADD identityFieldName BIGINT IDENTITY;
ALTER TABLE [dbo].[tableName] ADD CONSTRAINT PK_Table_identityFieldName_1 PRIMARY KEY CLUSTERED (identityFieldName);
Repeat the scripts above for every table that does not have a primary key.
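If there are many such tables, you can also generate step 2 from step 1 (a sketch; review the generated script before running it):
SELECT 'ALTER TABLE ' + QUOTENAME(SCHEMA_NAME(schema_id)) + '.' + QUOTENAME(name)
     + ' ADD identityFieldName BIGINT IDENTITY;'
     + ' ALTER TABLE ' + QUOTENAME(SCHEMA_NAME(schema_id)) + '.' + QUOTENAME(name)
     + ' ADD CONSTRAINT ' + QUOTENAME('PK_' + name + '_identityFieldName')
     + ' PRIMARY KEY CLUSTERED (identityFieldName);'
FROM sys.tables
WHERE OBJECTPROPERTY(object_id, 'TableHasPrimaryKey') = 0
ORDER BY SCHEMA_NAME(schema_id), name;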

Upsizing Access to SQL Server

I use Access 2010 and SQL Server 2005. I am new to the process of "upsizing", which I understand is a legacy term. When I make changes to published tables, I like to localize them back into Access, alter them with the Access interface, and then "re-upsize" them to SQL Server. When I "re-upsize" an altered table, Access warns me:
"A table named xxxx already exists. Do you want to overwrite it?"
I choose yes. Then Access reports an error:
"Server Error 3726: Could not drop object 'xxxx' because it is
referenced by a FOREIGN KEY constraint."
I understand the importance of foreign key constraints. I have encountered this same trouble using MySQL. In MySQL I would simply set Foreign_Key_Checks = 0; before the import, then set Foreign_Key_Checks = 1; when finished.
Unfortunately, in SQL Server a table cannot be dropped while its keys are merely disabled; they must be deleted. I don't want to delete and recreate foreign keys every time I alter a table. Do I need to start altering my tables in the SQL Server environment? Is there a way to easily "re-upsize" a table and ignore foreign key constraints?
If you need to use Access as a front end, instead of keeping an Access DB locally and dealing with the issues of moving back and forth, try connecting Access directly to a SQL Server database that you can develop against. You will probably want to look into using linked tables in Access against SQL Server.
Connecting SQL Server to an Access Database
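For reference, SQL Server's closest equivalent to MySQL's Foreign_Key_Checks only disables checking; as the question notes, the referenced table still cannot be dropped until the constraint itself is dropped and recreated (table and constraint names below are hypothetical):
ALTER TABLE dbo.Orders NOCHECK CONSTRAINT FK_Orders_Customers;  -- checking disabled, but DROP TABLE dbo.Customers still fails
ALTER TABLE dbo.Orders DROP CONSTRAINT FK_Orders_Customers;     -- now dbo.Customers can be dropped and re-upsized
-- ... re-upsize the table, then restore the constraint ...
ALTER TABLE dbo.Orders WITH CHECK ADD CONSTRAINT FK_Orders_Customers
    FOREIGN KEY (CustomerId) REFERENCES dbo.Customers (CustomerId);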