SSIS fastload with indexes disabled on target does not work - sql

Using SSIS - VS 2008
Microsoft SQL Server 2008 R2 (SP2) - 10.50.4000.0 (X64)
I am trying to do a bulk update using a staging table. The staging table is an exact schema copy of my destination table. I have read that indexes can hamper the performance of loads into a staging table when using the fast load option, so I disable the index before the data flow task and rebuild it afterwards.
However my SSIS package fails during runtime validation. It seems I cannot do a fast load into the staging table with the indexes disabled. This is the error message I receive: "The query processor is unable to produce a plan because the index 'PK_StagingTable' on table or view 'StagingTable' is disabled."
If I remove the command that disables the index (Step 3 becomes just truncate table StagingTable), then the SSIS package works.
The question is: should this have worked with the index disabled, or is that just bad advice? Is there something missing from the instructions that would allow the insert to work with indexes disabled?

The destination table (in this case a staging table) you want to use SSIS's fast load option on can have its indexes disabled beforehand, but only if those indexes are non-clustered. Disabling a clustered index makes the entire table inaccessible for reads and writes, which is why the package fails validation. In my specific situation, the table I made a schema copy of had a primary key, which means it had a clustered index. I removed the primary key on the staging table and created a non-clustered index on the same columns.
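As a minimal sketch of the working pattern (the index and column names here are hypothetical, not the actual package's):
-- One-time change: replace the clustered primary key with a non-clustered index.
ALTER TABLE dbo.StagingTable DROP CONSTRAINT PK_StagingTable;
CREATE NONCLUSTERED INDEX IX_StagingTable_Key ON dbo.StagingTable (KeyColumn);
-- Execute SQL Task before the data flow:
TRUNCATE TABLE dbo.StagingTable;
ALTER INDEX IX_StagingTable_Key ON dbo.StagingTable DISABLE;
-- Execute SQL Task after the data flow:
ALTER INDEX IX_StagingTable_Key ON dbo.StagingTable REBUILD;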

Related

Alter Memory Optimized SQL Server 2014 Table

Can I alter my memory-optimized table, for example to add a column or change a data type? If yes, how do I do it?
I am using SQL Server 2014
Thanks
According to Altering Memory-Optimized Tables (SQL Server 2014):
Performing ALTER operations on memory-optimized tables is not supported. This includes such operations as changing the bucket_count, adding or removing an index, and adding or removing a column. This topic provides guidelines on how to update memory-optimized tables.
Updating the definition of a memory-optimized table requires you to create a new table with the updated table definition, copy the data to the new table, and start using the new table.
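A minimal sketch of that approach, assuming a hypothetical memory-optimized table dbo.Session to which a LastSeen column should be added:
-- New table with the updated definition (the added LastSeen column).
CREATE TABLE dbo.Session_New
(
    SessionId INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    UserName  NVARCHAR(100) NOT NULL,
    LastSeen  DATETIME2 NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
-- Copy the existing data across.
INSERT INTO dbo.Session_New (SessionId, UserName, LastSeen)
SELECT SessionId, UserName, NULL FROM dbo.Session;
-- Drop the old table and switch the application to dbo.Session_New
-- (or repeat the copy into a recreated dbo.Session).
DROP TABLE dbo.Session;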
But it will be possible with SQL Server 2016:
In SQL Server 2016 Community Technology Preview 2 (CTP2) you can perform ALTER operations on memory-optimized tables by using the ALTER TABLE statement. The database application can continue to run, and any operation that is accessing the table is blocked until the alteration process is completed. In the previous release of SQL Server, you had to manually complete several steps to update memory-optimized tables.
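For illustration, the same hypothetical change on SQL Server 2016 becomes a single statement per alteration:
ALTER TABLE dbo.Session ADD LastSeen DATETIME2 NULL;
-- Indexes can also be added in place in 2016:
ALTER TABLE dbo.Session ADD INDEX IX_Session_LastSeen NONCLUSTERED (LastSeen);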

DDL changes not showing in Oracle SQL Developer

I have a SQL upgrade script which has many SQL statements (DDL, DML). When I run this upgrade script in SQL Developer, it runs successfully. I also include a COMMIT at the bottom of the script. I can see all the changes in the database after running this upgrade script except the unique index constraints. When I insert a few duplicate records it says the unique constraint is violated, which means the table does have the unique constraints. But I don't know why I can't view these constraints in Oracle SQL Developer; the other DDL changes I can view. Is there some setting needed to view them in Oracle SQL Developer?
CREATE UNIQUE INDEX "RATOR_MONITORING"."CAPTURING_UK1" ON "RATOR_MONITORING"."CAPTURING" ("DB_TABLE");
CREATE UNIQUE INDEX "RATOR_MONITORING_CONFIGURATION"."BRAND_UK1" ON "RATOR_MONITORING_CONFIGURATION"."BRAND" ("NAME");
CREATE UNIQUE INDEX "RATOR_MONITORING_CONFIGURATION"."BRAND_BUSINESS_PROCESS_UK1" ON "RATOR_MONITORING_CONFIGURATION"."BRAND_BUSINESS_PROCESS" ("BRAND_ID", "BP_ID");
CREATE UNIQUE INDEX "RATOR_MONITORING_CONFIGURATION"."BRAND_ENGINE_UK1" ON "RATOR_MONITORING_CONFIGURATION"."BRAND_ENGINE" ("BRAND_ID", "ENGINE_ID");
As A Hocevar noted, if you create an index
create unique index test_ux on test(id);
you see it in the Indexes tab of the table properties (not in the Constraints tab).
Please note that COMMIT is not required here; it is done implicitly by each DDL statement. A more usual source of problems is stale metadata in SQL Developer, i.e. a missing refresh (Ctrl+R on the user or table node).
If you want to define the constraint, add the following statement, which will reuse the index defined previously:
alter table test add constraint test_unique unique(id) using index test_ux;
See further discussion of this option in the documentation.
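Independently of the GUI, the data dictionary can confirm the index exists and is unique, for example (using the owner and table name from the script above):
SELECT index_name, uniqueness, status
FROM all_indexes
WHERE owner = 'RATOR_MONITORING_CONFIGURATION'
AND table_name = 'BRAND';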
I am assuming you are looking for the index on the table in the correct tab in SQL Developer. If you are not able to see the index there, one reason could be that your user (the one you are logged in with) doesn't have the privileges to see the index.
If you do not get any error, the solution is simple (if tedious): SQL Developer doesn't refresh its fetched structures. Click the blue Refresh icon (or press Ctrl+R) in the Connections view, or disconnect and connect again (or restart SQL Developer), to see your changes in the structures.

Attunity Replicate - tables in Azure are created without clustered indexes

I have a large (~33 million records) on-premises SQL Server database which must be replicated to a SQL Azure database (near-real-time replication is required).
I'm trying to use Attunity Replicate software to achieve this.
I created a task with Full Load option specified which successfully uploaded initial data to Azure.
After that I created another Task with Apply Changes option specified, but this task ends with errors:
Failed to execute statement: 'INSERT INTO [attrep_apply_exceptions] values ( ...'
RetCode: SQL_ERROR SqlState: 42000 NativeError: 40054 Message: [Microsoft][SQL Server Native Client 11.0][SQL Server] Tables without a clustered index are not supported in this version of SQL Server. Please create a clustered index and try again. Line: 1 Column: -1
Attunity created [attrep_apply_exceptions] table in the Azure database which doesn't have any clustered index, so insert fails (Azure doesn't allow tables without clustered index).
Why is it happening? Should I add an index myself?
All SQL Azure tables must have a clustered index. You will be able to create a table without one, but once you insert your first record you will see the message: "Tables without a clustered index are not supported in this version of SQL Server".
There is a list of Azure SQL limitations/differences here.
To answer your question, yes you must add a clustered index yourself.
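For example, a non-unique clustered index on one of the control table's existing columns is enough to satisfy Azure. The ERROR_TIME column below is an assumption about Attunity's schema; adding a brand-new column instead could break the tool's positional INSERT statements:
-- Cluster the Attunity control table on an existing column (assumed name).
CREATE CLUSTERED INDEX [CIX_attrep_apply_exceptions]
ON [attrep_apply_exceptions] ([ERROR_TIME]);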

Error when trying to publish database to Azure without clustered indexes

I'm getting the following error when trying to publish my database to my Azure site:
17:13:29: Could not publish the site. Unable to publish the database. For more information, see "http://go.microsoft.com/fwlink/?LinkId=205387"
17:13:29: Error detail:
17:13:29: An error occurred during execution of the database script. The error occurred between the following lines of the script: "332" and "334". The verbose log might have more information about the error. The command started with the following:
17:13:29: "INSERT [dbo].[DBStatus] ([LastIndex], [LastUpdated"
17:13:29: Tables without a clustered index are not supported in this version of SQL Server. Please create a clustered index and try again. http://go.microsoft.com/fwlink/?LinkId=178587
17:13:29: Learn more at: http://go.microsoft.com/fwlink/?LinkId=221672#ERROR_SQL_EXECUTION_FAILURE.
This link explains the error: http://blogs.msdn.com/b/sqlazure/archive/2010/04/29/10004618.aspx
Is there an easy way to convert my database to have clustered indexes or should I just choose a host with SQL Server 2012 hosting?
I wouldn't change your hosting option simply due to this. Add the clustered indexes.
There is a tool called SQL Database Migration Wizard which can analyze a SQL database and make a migration script and even move the data. One of the things it will do is suggest a clustered index on tables that don't have them. However, like any tool, make sure you look at what it is suggesting and see if it makes sense in your scenario.
My suggestion is to look at the tables that do not have a clustered index and determine a reasonable index to create. A tool like the one above can make suggestions, but they are just suggestions and may not be exactly what your table would benefit from.
The requirement for clustered indexes in Azure SQL Database comes from the fact that the data is replicated in triplicate, and the clustered index makes that a faster process.
I've done these steps:
1- Get a list of tables without a primary key by running this query:
USE DatabaseName;
GO
SELECT SCHEMA_NAME(schema_id) AS SchemaName,name AS TableName
FROM sys.tables
WHERE OBJECTPROPERTY(OBJECT_ID,'TableHasPrimaryKey') = 0
ORDER BY SchemaName, TableName;
GO
2- Create ALTER TABLE scripts for all of those tables, adding an identity field to each and making it the primary key with scripts like this:
ALTER TABLE [dbo].[tableName] ADD identityFieldName BIGINT IDENTITY;
ALTER TABLE [dbo].[tableName] ADD CONSTRAINT PK_Table_identityFieldName_1 PRIMARY KEY CLUSTERED (identityFieldName);
Repeat the statements above for every table that does not have a primary key.

Replication in SQL Server 2008 R2 if Source and Destination Tables indexes are different

To fine-tune the performance of the overall system, I was checking the existing tables' indexes and found that we are using an ErrorLog table which is hit (for writing warnings and errors) by millions of transactions every day. As we have an index (on a datetime column) on this kind of table, I think this logging will definitely take longer than logging into a table without any indexes.
The index on CreateDateTime is used only by developers for querying the table when troubleshooting in the production environment. Is it possible to remove the index on the primary production server and have the index only on the table in the secondary (backup) DB server? As we are doing replication to the secondary server, the data is always in sync.
To sync both tables via replication, do we need to have the same indexes on both tables?
Assuming transactional replication, there is nothing saying that the indexes have to be the same between the publisher and the subscriber. The only thing that you need to keep is the primary key as that's how replication identifies what row(s) on the subscriber need to be affected by a given statement at the publisher.
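As an illustration (the index name is hypothetical; the table and column come from the question), the datetime index could be dropped on the publisher and kept only on the subscriber:
-- On the publisher: drop the troubleshooting index so log inserts stay cheap.
DROP INDEX IX_ErrorLog_CreateDateTime ON dbo.ErrorLog;
-- On the subscriber: create (or keep) the same index for developer queries.
CREATE NONCLUSTERED INDEX IX_ErrorLog_CreateDateTime
ON dbo.ErrorLog (CreateDateTime);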