EDIT :
Well, it looks like the only permanent part is the last part; every time a new session is opened, the other parts also change.
Is there any way to control this?
------------------------------------------------
I created a uniqueidentifier primary key column on SQL Server 2012.
I noticed that every time a new record is created, the only part that changes is the first part. The other parts, marked as XXX below, remain the same. Will they change in the future? Maybe after some millions of records?
(e.g. 85BD420D-XXXX-XXXX-XXXX-XXXXXXXXXXXX)
The table was created using MVC Code First; however, please find the generated script below:
/****** Object: Table [dbo].[Transactions] Script Date: 17-01-2014 11:25:20 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[Transactions](
[TransactionId] [uniqueidentifier] NOT NULL,
CONSTRAINT [PK_dbo.Transactions] PRIMARY KEY CLUSTERED
(
[TransactionId] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
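For what it's worth, the pattern described (only the leading block changing between inserts, the rest staying fixed until a new session) is what sequential GUIDs look like. Below is a minimal sketch to reproduce it server-side with NEWSEQUENTIALID(); this default is an assumption for illustration only, since the posted table has no default and Code First may be generating the GUIDs on the client.
-- Hypothetical demo table; not part of the original schema.
CREATE TABLE dbo.SeqGuidDemo (
    Id uniqueidentifier NOT NULL
        CONSTRAINT DF_SeqGuidDemo_Id DEFAULT NEWSEQUENTIALID()
        CONSTRAINT PK_SeqGuidDemo PRIMARY KEY CLUSTERED,
    InsertedAt datetime2 NOT NULL
        CONSTRAINT DF_SeqGuidDemo_InsertedAt DEFAULT SYSDATETIME()
);
GO
-- Insert a few rows and compare the generated values: typically only the first
-- block differs between rows, while the remaining blocks stay constant until
-- the SQL Server service is restarted.
INSERT INTO dbo.SeqGuidDemo DEFAULT VALUES;
INSERT INTO dbo.SeqGuidDemo DEFAULT VALUES;
INSERT INTO dbo.SeqGuidDemo DEFAULT VALUES;
SELECT Id, InsertedAt FROM dbo.SeqGuidDemo ORDER BY InsertedAt;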
Thanks
Related
I am using SQL Server 2008 R2.
I encountered this issue when re-creating an index.
As I needed to alter a column, I dropped the constraint/index first and then re-created the constraint/index.
However, it shows an error message saying:
The operation failed because an index or statistics with name 'ABC' already exists on table 'test_table'
I wonder why this error message is shown, since I have already dropped the constraint.
I wrote this to drop the index:
DROP INDEX [ABC] ON [dbo].[test_table] WITH ( ONLINE = OFF )
I then re-created the index:
CREATE NONCLUSTERED INDEX [ABC] ON [dbo].[test_table]
(
[col_1] ASC,
[col_2] ASC,
[col_3] ASC
)
INCLUDE ( [col_4],
[col_5]) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
GO
Does anyone have any idea what's wrong here?
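One thing worth checking (a hedged suggestion, not a confirmed diagnosis): the error text says "index or statistics", so a statistics object named ABC may still exist on the table even though the index itself was dropped. Something like the following can confirm it:
-- Look for any leftover index or statistics object named ABC on the table.
SELECT name, 'index' AS object_type
FROM sys.indexes
WHERE object_id = OBJECT_ID(N'dbo.test_table') AND name = N'ABC'
UNION ALL
SELECT name, 'statistics' AS object_type
FROM sys.stats
WHERE object_id = OBJECT_ID(N'dbo.test_table') AND name = N'ABC';
-- If only a statistics object remains, drop it before re-creating the index.
DROP STATISTICS [dbo].[test_table].[ABC];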
I am very new to Ruby on Rails development. Recently, there was a request to integrate an external database as part of our application. However, we were only given SQL statements copied and pasted into a .docx file.
Example:
USE [portal]
GO
/****** Object: User [admin] Script Date: 9/11/2018 11:28:54 AM ******/
CREATE USER [admin] WITHOUT LOGIN WITH DEFAULT_SCHEMA=[admin]
GO
/****** Object: Schema [admin] Script Date: 9/11/2018 11:28:54 AM ******/
CREATE SCHEMA [admin]
GO
/****** Object: Table [admin].[ApplicationEnterprise] Script Date: 9/11/2018 11:28:54 AM ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [admin].[ApplicationEnterprise](
[ApplicationID] [varchar](15) NOT NULL,
[ApplicationType] [varchar](50) NULL,
[Level] [varchar](15) NULL,
CONSTRAINT [PK_Enterprise] PRIMARY KEY CLUSTERED
(
[ApplicationID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
.....
What is the best way for me to integrate this database into my application? I have cleaned up the SQL statements by removing "GO", the square brackets, etc. So far, my approach is to rewrite these statements as generator commands to create the models.
Example:
rails g model ApplicationEnterprise ApplicationID:string ApplicationType:string Level:string
Is this the right approach?
I would suggest going with the schema option. I mean you first need to convert your SQL statements into the required format of schema.rb.
Then generate the models from the schema. The following article may help you to proceed:
https://codeburst.io/how-to-build-a-rails-app-on-top-of-an-existing-database-baa3fe6384a0
I have a customized application in my company where I can create a place for users to enter their values into a database.
The table where I am submitting the data has 5 columns; its SQL CREATE query is below:
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[Log_Ongoing](
[ID] [int] IDENTITY(1,1) NOT NULL,
[LogType] [int] NULL,
[ActivityDate] [datetime] NOT NULL,
[ActivityDescription] [text] NULL,
[Train] [int] NULL,
CONSTRAINT [PK_Log_Ongoing] PRIMARY KEY CLUSTERED
(
[ID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO
ALTER TABLE [dbo].[Log_Ongoing] WITH CHECK ADD CONSTRAINT [FK_Log_Ongoing_Trains] FOREIGN KEY([Train])
REFERENCES [dbo].[Trains] ([Id])
GO
ALTER TABLE [dbo].[Log_Ongoing] CHECK CONSTRAINT [FK_Log_Ongoing_Trains]
GO
The purpose of this table is to record the ongoing activities in the plant.
The user can come back later and modify those activities, adding, updating or deleting them through the application by choosing the report date and then modifying the data.
My thinking was that before the user submits the data, I would first delete the old data with the same report date and then insert the new data again.
Unfortunately the data is inserted successfully, but the old data is not deleted.
I ran a SQL trace to check the queries that the application sends to the database, and I found the two statements below:
exec sp_executesql N'DELETE FROM Log_Ongoing WHERE ActivityDate = @StartDate',N'@startDate datetimeoffset(7)',@startDate='2017-02-12 07:00:00 +02:00'
exec sp_executesql N'INSERT INTO Log_Ongoing (LogType, ActivityDate, ActivityDescription, Train ) VALUES (1,@StartDate, @Activity, @Train)',N'@Train int,@Activity nvarchar(2),@startDate datetimeoffset(7)',@Train=1,@Activity=N'11',@startDate='2017-02-12 07:00:00 +02:00'
When I tested the INSERT statement in SSMS, it worked fine, but when I tested the DELETE statement, it didn't work. What is wrong with this query?
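A small repro sketch for SSMS, running the traced DELETE exactly as the application sends it (with a datetimeoffset(7) parameter against the datetime column), so the affected row counts can be compared side by side; the literal date is the one captured in the trace:
-- Re-run the traced DELETE with the same parameter type the application uses.
EXEC sp_executesql
    N'DELETE FROM Log_Ongoing WHERE ActivityDate = @StartDate',
    N'@StartDate datetimeoffset(7)',
    @StartDate = '2017-02-12 07:00:00 +02:00';
-- For comparison, count the rows that match the same moment as a plain datetime.
SELECT COUNT(*) AS MatchingRows
FROM Log_Ongoing
WHERE ActivityDate = CONVERT(datetime, '2017-02-12 07:00:00', 120);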
As part of a migration project, we have imported data from a JDE iSeries DB2 database. An SSIS package was created to create the destination tables and import the data. The import completed successfully.
Now comes the problem: the customer wants primary keys created in the destination DB (SQL Server 2008 R2). The problem table in this case is one table that has 104 columns and 7.5 million rows of data. The PK required for this table is composite, with 7 columns.
We are considering this:
BEGIN TRANSACTION
GO
ALTER TABLE [dbo].[F0911] ADD CONSTRAINT [F0911_PK] PRIMARY KEY CLUSTERED
(
[GLDCT] ASC,
[GLDOC] ASC,
[GLKCO] ASC,
[GLDGJ] ASC,
[GLJELN] ASC,
[GLLT] ASC,
[GLEXTL] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
GO
COMMIT
or this:
-- Rename existing tables
EXEC sp_rename 'F0911', 'F0911_old' -- note: brackets must not be included in the new name, or they become part of it
GO
-- Create new table
SELECT * INTO F0911 FROM F0911_old WHERE 1=0
GO
--Create PK constraints
ALTER TABLE [dbo].[F0911] ADD CONSTRAINT [F0911_PK] PRIMARY KEY CLUSTERED
(
[GLDCT] ASC,
[GLDOC] ASC,
[GLKCO] ASC,
[GLDGJ] ASC,
[GLJELN] ASC,
[GLLT] ASC,
[GLEXTL] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
GO
--Insert data into new tables
INSERT INTO F0911
SELECT * FROM F0911_old
GO
-- Drop old tables
DROP TABLE F0911_old
GO
Which would be the more efficient approach, performance-wise? I have a gut feeling that both are the same and that even the first approach implicitly does the same thing as the second one does. Is this understanding correct?
Please note that all these columns already exist in the table and we cannot modify the table definition.
Thanks,
Raj
They're the same. The effect of creating a clustered index is to rearrange the pages, which will happen in both cases. For non-clustered indexes it helps to disable the index and then turn it back on by rebuilding it.
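To illustrate the disable/rebuild pattern mentioned above, a short sketch follows; the non-clustered index name is hypothetical, since the posted script only defines the clustered primary key.
-- Disable a (hypothetical) non-clustered index before the bulk load...
ALTER INDEX [IX_F0911_GLDGJ] ON [dbo].[F0911] DISABLE;
GO
-- ...load the data here...
-- ...then bring the index back by rebuilding it.
ALTER INDEX [IX_F0911_GLDGJ] ON [dbo].[F0911] REBUILD;
GO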
I think the first approach is right, but I don't understand the reason for BEGIN TRANSACTION and COMMIT. I don't think the transaction is necessary, because you are not modifying the data in the table. A transaction is used where we have to lock data while modifying real-time data, so that the old data is not used.
Should we use a flag for soft deletes, or a separate joiner table? Which is more efficient? Database is SQL Server.
Background Information
A while back we had a DB consultant come in and look at our database schema. When we soft delete a record, we update an IsDeleted flag on the appropriate table(s). It was suggested that instead of using a flag, we store the deleted records in a separate table and use a join, as that would be better. I've put that suggestion to the test, but at least on the surface, the extra table and join look to be more expensive than using a flag.
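For clarity, these are roughly the two query shapes being compared, written against the test tables defined further below:
-- Flag-based soft delete: filter the live rows directly.
SELECT e.ID, e.Column1
FROM dbo.Example AS e
WHERE e.IsDeleted = 0;
-- Joiner-table soft delete: live rows are those with no matching row in DeletedExample.
SELECT e.ID, e.Column1
FROM dbo.Example AS e
WHERE NOT EXISTS (SELECT 1 FROM dbo.DeletedExample AS d WHERE d.ID = e.ID);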
Initial Testing
I've set up this test.
Two tables, Example and DeletedExample. I added a nonclustered index on the IsDeleted column.
I did three tests, loading a million records with the following deleted/non-deleted ratios:
Deleted/NonDeleted
50/50
10/90
1/99
Results - 50/50
Results - 10/90
Results - 1/99
Database scripts, for reference: Example, DeletedExample, and the index on Example.IsDeleted
CREATE TABLE [dbo].[Example](
[ID] [int] NOT NULL,
[Column1] [nvarchar](50) NULL,
[IsDeleted] [bit] NOT NULL,
CONSTRAINT [PK_Example] PRIMARY KEY CLUSTERED
(
[ID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[Example] ADD CONSTRAINT [DF_Example_IsDeleted] DEFAULT ((0)) FOR [IsDeleted]
GO
CREATE TABLE [dbo].[DeletedExample](
[ID] [int] NOT NULL,
CONSTRAINT [PK_DeletedExample] PRIMARY KEY CLUSTERED
(
[ID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[DeletedExample] WITH CHECK ADD CONSTRAINT [FK_DeletedExample_Example] FOREIGN KEY([ID])
REFERENCES [dbo].[Example] ([ID])
GO
ALTER TABLE [dbo].[DeletedExample] CHECK CONSTRAINT [FK_DeletedExample_Example]
GO
CREATE NONCLUSTERED INDEX [IX_IsDeleted] ON [dbo].[Example]
(
[IsDeleted] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
GO
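For reference, a sketch of one way the million test rows could be loaded (the exact loader used is not shown; the modulo split below produces the 50/50 case):
-- Generate 1,000,000 rows; even IDs are marked deleted (50/50 split).
;WITH n AS (
    SELECT TOP (1000000)
           ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS i
    FROM sys.all_objects AS a
    CROSS JOIN sys.all_objects AS b
)
INSERT INTO dbo.Example (ID, Column1, IsDeleted)
SELECT i,
       N'Row ' + CAST(i AS nvarchar(10)),
       CASE WHEN i % 2 = 0 THEN 1 ELSE 0 END
FROM n;
-- Mirror the deleted rows into the joiner table for the second model under test.
INSERT INTO dbo.DeletedExample (ID)
SELECT ID FROM dbo.Example WHERE IsDeleted = 1;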
The numbers you have seem to indicate that my initial impression was correct: if your most common query against this database is to filter on IsDeleted = 0, then performance will be better with a simple bit flag, especially if you make wise use of indexes.
If you often query for deleted and undeleted data separately, then you could see a performance gain by having a table for deleted items and another for undeleted items, with identical fields. But denormalizing your data like this is rarely a good idea, as it will most often cost you far more in code maintenance costs than it will gain you in performance increases.
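As one concrete way to "make wise use of indexes" for the IsDeleted = 0 case, a filtered index (available since SQL Server 2008) could be added; the index name below is made up for the example:
-- Index only the live rows, so queries filtering on IsDeleted = 0 touch a smaller structure.
CREATE NONCLUSTERED INDEX [IX_Example_Live]
ON [dbo].[Example] ([ID])
INCLUDE ([Column1])
WHERE [IsDeleted] = 0;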
I'm not a SQL expert, but in my opinion it all depends on the usage frequency of the database. If the database is accessed by a large number of users and needs to be efficient, then using a separate table for deleted records will be good. A better option would be to use a flag during production time and, as part of daily/weekly/monthly maintenance, move all the soft-deleted records to the deleted-records table and clear the production table of soft-deleted records. A mixture of both options would be a good one.