For a specific task I need to store the identity of a row in a table so I can access it later. Most of these tables do NOT have a numeric ID, and the primary key sometimes consists of multiple fields, VARCHAR and INT combined.
Background info:
The participating tables have a trigger storing delete, update and insert events in a general 'sync' table (Oracle v11). Every 15 minutes a script is then launched to update the corresponding tables in a remote database (SQL Server 2012).
One solution I came up with was to use multiple key columns in this 'sync' table: 3 INT columns and 3 VARCHAR columns. A table whose primary key is 2 VARCHAR columns would then use 2 of the VARCHAR columns in the 'sync' table.
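Roughly, that 'sync' table would look like this (Oracle syntax; all names here are made up):
CREATE TABLE sync_log (
    table_name  VARCHAR2(128)  NOT NULL,
    event_type  VARCHAR2(10)   NOT NULL,  -- 'INSERT', 'UPDATE' or 'DELETE'
    key_int_1   NUMBER         NULL,
    key_int_2   NUMBER         NULL,
    key_int_3   NUMBER         NULL,
    key_str_1   VARCHAR2(400)  NULL,
    key_str_2   VARCHAR2(400)  NULL,
    key_str_3   VARCHAR2(400)  NULL,
    logged_at   DATE           DEFAULT SYSDATE NOT NULL
);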
A better/nicer solution would be to 'select' the value of the primary key as a single value and store that in the 'sync' table.
Example:
CREATE TABLE [dbo].[Workers](
[company] [nvarchar](50) NOT NULL,
[number] [int] NOT NULL,
[name] [nvarchar](50) NOT NULL,
CONSTRAINT [PK_Workers] PRIMARY KEY CLUSTERED ( [company] ASC, [number] ASC )
)
-- Fails:
SELECT [PK_Workers], [name] FROM [dbo].[Workers]
UPDATE [dbo].[Workers] SET [name]='new name' WHERE [PK_Workers]=#PKWorkers
-- Bad (?) but works:
SELECT ([company] + CAST([number] AS NVARCHAR)) PK, [name] FROM [dbo].[Workers];
UPDATE [dbo].[Workers] SET [name]='newname' WHERE ([company] + CAST([number] AS NVARCHAR))=#PK
Referencing [PK_Workers] fails in these queries. Is there another way to get this value without manually concatenating and casting the key columns?
Or is there some other way to do this that I don't know about?
For each table, create a function that returns the concatenated primary key. Create a function-based index on this function too, then use the function in your SELECT and WHERE clauses.
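A rough sketch of that suggestion in Oracle syntax (the side where the triggers run); the function and index names, and the worker_no column, are made up here:
CREATE OR REPLACE FUNCTION workers_pk (p_company IN VARCHAR2, p_worker_no IN NUMBER)
    RETURN VARCHAR2 DETERMINISTIC
IS
BEGIN
    -- Concatenate the key parts with a separator that cannot appear in the data.
    RETURN p_company || '|' || TO_CHAR(p_worker_no);
END workers_pk;
/

-- Function-based index so lookups on the concatenated key can still use an index.
CREATE INDEX workers_pk_fbi ON workers (workers_pk(company, worker_no));

-- Usage:
SELECT workers_pk(company, worker_no) AS pk, name FROM workers;
UPDATE workers SET name = 'new name' WHERE workers_pk(company, worker_no) = :pk;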
We have electronic forms that filers fill out online, and we store the data in SQL Server. We want to provide a search feature that allows us to search inside each electronic filing for matching keywords. We don't need to know which word matched or where in the form it matched; we just need a ranked list of forms that match our keywords. We think SQL Full-Text Search would be our best option because we are already using SQL Server 2016. We have just started implementing a solution and would like some guidance, since this is new territory for us.
Here is an example of how our tables are structured.
Filing is our top-level table for all electronic forms. We have sub tables that are all related through the FilingId. The Form Six Published Filings table has child tables to store information like Assets. The Form One Published Filings table has child tables to store information like Liabilities.
CREATE SCHEMA [Forms]
GO
CREATE SCHEMA [Form6]
GO
CREATE SCHEMA [Form1]
GO
CREATE TABLE [Forms].[Filing](
[FilingId] INT NOT NULL IDENTITY(1,1)
CONSTRAINT [PK_Forms_Filing_FilingId] PRIMARY KEY CLUSTERED,
[FilerUserId] [int] NOT NULL,
[FormYear] [int] NOT NULL,
[FormTypeId] [int] NOT NULL,
[FilingStatusId] [int] NOT NULL,
[FilerSignatureId] INT NULL,
[SubmissionDate] DATETIME2(0) NULL,
[IsScannedForm] BIT NOT NULL
CONSTRAINT [DF_Forms_Filing_IsScannedForm] DEFAULT(0)
)
GO
CREATE TABLE [Form6].[FormSixPublishedFilings](
[FormSixPublishedFilingId] INT NOT NULL IDENTITY(1,1)
CONSTRAINT [PK_Form6_FormSixPublishedFilings_FormSixPublishedFilingId] PRIMARY KEY CLUSTERED,
[FilingId] INT NOT NULL
CONSTRAINT [FK_Form6_FormSixPublishedFilings_Filings] FOREIGN KEY ([FilingId]) REFERENCES [Forms].[Filing] ([FilingId]),
[LastDateOfEmployment] DATE NULL,
[NetWorthDate] DATE NULL,
[NetWorth] MONEY NULL
)
GO
CREATE TABLE [Form6].[FormSixPublishedAssets](
[FormSixPublishedAssetId] INT NOT NULL IDENTITY(1,1)
CONSTRAINT [PK_Form6_FormSixPublishedAssets_FormSixPublishedAssetId] PRIMARY KEY CLUSTERED,
[FormSixPublishedFilingId] INT NOT NULL
CONSTRAINT [FK_Form6_FormSixPublishedAssets_FormSixPublishedFilings] FOREIGN KEY ([FormSixPublishedFilingId]) REFERENCES [Form6].[FormSixPublishedFilings] ([FormSixPublishedFilingId]),
[Name] VARCHAR(8000) NOT NULL,
[Amount] MONEY NOT NULL
)
GO
CREATE TABLE [Form1].[FormOnePublishedFilings]
(
[FormOnePublishedFilingId] INT NOT NULL IDENTITY(1,1)
CONSTRAINT [PK_Form1_FormOnePublishedFilings_FormOnePublishedFilingId] PRIMARY KEY CLUSTERED,
[FilingId] INT NOT NULL,
CONSTRAINT [FK_Form1_FormOnePublishedFilings_Filing] FOREIGN KEY ([FilingId]) REFERENCES [Forms].[Filing] ([FilingId]),
[HasServedAsAgent] BIT NULL,
[LastDateOfEmployment] DATE NULL,
[AmendmentReason] VARCHAR(1024) NULL
)
GO
CREATE TABLE [Form1].[FormOnePublishedLiabilities]
(
[FormOnePublishedLiabilityId] INT NOT NULL IDENTITY(1,1)
CONSTRAINT [PK_Form1_FormOnePublishedLiabilities_FormOnePublishedLiabilityId] PRIMARY KEY CLUSTERED,
[FormOnePublishedFilingId] INT NOT NULL,
CONSTRAINT [FK_Form1_FormOnePublishedLiabilities_FormOnePublishedFilings] FOREIGN KEY ([FormOnePublishedFilingId]) REFERENCES [Form1].[FormOnePublishedFilings] ([FormOnePublishedFilingId]),
[NameOfCreditor] VARCHAR(8000) NOT NULL,
[AddressOfCreditor] VARCHAR(8000) NOT NULL
)
GO
In order to be able to search through all the forms, I think we need to create a view that has just two columns: one for the FilingId, and the other an XML column holding an XML representation of all the data in each electronic filing. This XML column is what we will use to set up our full-text index. I think we will use FREETEXTTABLE for the search because we would like the results ranked and the search terms will be entered by end users.
create view Forms.ViewForFullTextSearching with schemabinding as
select f.FilingId,
(select
filing.FilingId
,filing.FormYear
,filing.FormTypeId
,filing.FilingStatusId
,filing.FilerSignatureId
,filing.SubmissionDate
,filing.IsScannedForm
,form6Filing.LastDateOfEmployment 'Form6LastDateOfEmployment'
,form6Filing.NetWorthDate
,form6Filing.NetWorth
,form6Asset.Name
,form6Asset.Amount
,form1Filing.HasServedAsAgent
,form1Filing.LastDateOfEmployment 'Form1LastDateOfEmployment'
,form1Filing.AmendmentReason
,form1Liability.NameOfCreditor
,form1Liability.AddressOfCreditor
from Forms.Filing filing
left join Form6.FormSixPublishedFilings form6Filing on filing.FilingId = form6Filing.FilingId
left join Form6.FormSixPublishedAssets form6Asset on form6Filing.FormSixPublishedFilingId = form6Asset.FormSixPublishedFilingId
left join Form1.FormOnePublishedFilings form1Filing on filing.FilingId = form1Filing.FilingId
left join Form1.FormOnePublishedLiabilities form1Liability on form1Liability.FormOnePublishedFilingId = form1Filing.FormOnePublishedFilingId
where filing.FilingId = f.FilingId
for xml auto, type
) as 'Filing'
from Forms.Filing f
GO
create unique clustered index [IX_ViewForFullTextSearching_FilingId] ON [Forms].[ViewForFullTextSearching] ([FilingId])
GO
The above SQL does not actually work; I get this error:
Cannot create an index on view "EthicsFdms.Forms.ViewForFullTextSearching" because it contains one or more subqueries. Consider changing the view to use only joins instead of subqueries. Alternatively, consider not indexing this view.
So, I’m a bit lost on how to create a view with XML to search over if I’m not allowed to create a materialized view that has subqueries.
The view's results look like this: one row per FilingId, with the whole filing as an XML document in the Filing column.
Next we set up our full-text catalog and index on this view:
CREATE FULLTEXT CATALOG [FtcFilings];
GO
CREATE FULLTEXT INDEX ON [Forms].[ViewForFullTextSearching] ([Filing] language 1033) key index [IX_ViewForFullTextSearching_FilingId] on [FtcFilings];
GO
Then I was hoping we could search the filings like so:
select ftt.*
from [Forms].[Filing] filing
inner join freetexttable(Forms.ViewForFullTextSearching, Filing, 'APPLE') as ftt on filing.FilingId = ftt.[KEY]
order by ftt.[RANK] desc
Right now my challenges are: is it possible to create a materialized view like this? It seems I can't, because materialized views can't have subqueries, and I'm not sure how to build the XML column without subqueries.
If I'm not able to create a materialized view, how else can I create a full-text index that can search the electronic forms?
You can create an indexed view (which is what a synchronous materialized view is called in SQL Server) only if there is a mathematical surjection between the view and the base rows and all scalar computations are deterministic and precise. Also, OUTER JOINs, subqueries and set operators (UNION, EXCEPT, INTERSECT) cannot be used...
The best way to design your system is to do it the reverse way...
Create a persisted computed column using CONCAT over all the columns you want to full-text index.
Create full-text indexes on those computed columns.
Create a UDF that searches the full-text index on each table, combines the results with UNION, and then aggregates them to compute the rank (a sketch follows).
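A minimal sketch of that approach for one of the tables; the column, function and index names are made up, and it reuses the [FtcFilings] catalog from the question:
-- 1) Persisted computed column concatenating the searchable text.
ALTER TABLE [Form1].[FormOnePublishedLiabilities]
    ADD [SearchText] AS CONCAT([NameOfCreditor], ' ', [AddressOfCreditor]) PERSISTED;
GO
-- 2) Full-text index on the computed column; the key index is the PK's unique index.
CREATE FULLTEXT INDEX ON [Form1].[FormOnePublishedLiabilities] ([SearchText] LANGUAGE 1033)
    KEY INDEX [PK_Form1_FormOnePublishedLiabilities_FormOnePublishedLiabilityId]
    ON [FtcFilings];
GO
-- 3) A function that searches each full-text indexed table, UNIONs the hits and
--    aggregates a rank per filing (only one table shown here).
CREATE FUNCTION [Forms].[SearchFilings] (@terms NVARCHAR(4000))
RETURNS TABLE
AS RETURN
(
    SELECT x.FilingId, SUM(x.[RANK]) AS TotalRank
    FROM (
        SELECT f1.FilingId, ftt.[RANK]
        FROM FREETEXTTABLE([Form1].[FormOnePublishedLiabilities], [SearchText], @terms) ftt
        JOIN [Form1].[FormOnePublishedLiabilities] l
            ON l.FormOnePublishedLiabilityId = ftt.[KEY]
        JOIN [Form1].[FormOnePublishedFilings] f1
            ON f1.FormOnePublishedFilingId = l.FormOnePublishedFilingId
        -- UNION ALL ... repeat the pattern for the other indexed tables
    ) x
    GROUP BY x.FilingId
);
GO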
Let me know if you want more assistance to do so...
If the form filing data seldom change once created, and it makes business sense to store the form1 and form6 data together with the filing, you may consider going with a document-oriented design.
SQL Server has good JSON support now. You can save all the filing and form info as JSON, do full-text search against it, and create views to simulate your current design if needed.
Here is an example -
create table tst.form (
form_id int not null identity primary key
,content_json nvarchar(max)
)
-- inside content_json, the json may look like -
{
"filler_user_id": 111,
"filler_type_id": 1,
"is_scanned_form": 1,
"form1": [
{
"form1_filling_id": 101,
"has_served_as_agent":0,
"liabilities": [{"name_of_creditor": "abc"}]
}
]
}
I only modelled the form1-related info. You can add the form6-related info as needed.
Then you can do full text search against this content_json column.
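For instance (names here are made up; the primary key needs an explicit name so it can be used as the full-text key index, e.g. declare it as form_id int not null identity constraint pk_form primary key):
CREATE FULLTEXT CATALOG ftc_form;
GO
CREATE FULLTEXT INDEX ON tst.form (content_json LANGUAGE 1033)
    KEY INDEX pk_form ON ftc_form;
GO
-- Ranked search over the whole JSON document:
SELECT f.form_id, ftt.[RANK]
FROM FREETEXTTABLE(tst.form, content_json, 'abc') ftt
JOIN tst.form f ON f.form_id = ftt.[KEY]
ORDER BY ftt.[RANK] DESC;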
Then create views to simulate your current design if needed -
create or alter view tst.form_base WITH SCHEMABINDING as
select form_id
,convert(int, JSON_VALUE(content_json, '$.filler_user_id')) filler_user_id
,convert(int, JSON_VALUE(content_json, '$.filler_type_id')) filler_type_id
,convert(bit, JSON_VALUE(content_json, '$.is_scanned_form')) is_scanned_form
,JSON_QUERY(content_json, '$.form1') form1_json
from tst.form
create unique clustered index idx_form_base_form_id on tst.form_base(form_id);
-- you can create index as needed
create index idx_form_base_filler_user_id on tst.form_base(filler_user_id);
create or alter view tst.form1 as
select form_id
,a.form1_filling_id
,a.has_served_as_agent
,a.liabilities liabilities_json
from tst.form_base cross apply OPENJSON(form1_json) WITH (
form1_filling_id int '$.form1_filling_id',
has_served_as_agent int '$.has_served_as_agent',
liabilities nvarchar(max) '$.liabilities' as json) a
create or alter view tst.form1_liabilities as
select form_id
,form1_filling_id
,a.name_of_creditor
from tst.form1 cross apply OPENJSON(liabilities_json) WITH (
name_of_creditor nvarchar(max) '$.name_of_creditor') a
Then create some test data -
insert into tst.form (content_json) values ('{
"filler_user_id": 111,
"filler_type_id": 1,
"is_scanned_form": 1,
"form1": [
{
"form1_filling_id": 101,
"has_served_as_agent":0,
"liabilities": [{"name_of_creditor": "abc"}]
}
]
}');
insert into tst.form (content_json) values ('{
"filler_user_id": 222,
"filler_type_id": 1,
"is_scanned_form": 0,
"form1": [
{
"form1_filling_id": 102,
"has_served_as_agent":1,
"liabilities": [{"name_of_creditor": "def"}]
}
]
}');
Try it -
select *
from tst.form1_liabilities
I have a table Values with 3 columns:
CREATE TABLE [dbo].[Values]
(
[Id] [uniqueidentifier] NOT NULL,
[Value] [nvarchar](150) NOT NULL,
[CreatedOnUtc] [datetime2](7) NOT NULL
)
I want SQL Server to set the value of CreatedOnUtc to UTC-Now whenever a new entry is created, and not allow an external command to set this value.
Is this possible?
This is sort of two questions. For the first:
CREATE TABLE [dbo].[Values] (
[Id] [uniqueidentifier] NOT NULL,
[Value] [nvarchar](150) NOT NULL,
[CreatedOnUtc] [datetime2](7) NOT NULL DEFAULT SYSUTCDATETIME()
);
The canonical way to prevent changes to the column is to use a trigger that rejects any statement that tries to update or explicitly insert the value.
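A sketch of such a trigger (the trigger name is made up; it simply rolls back any UPDATE that touches the column):
CREATE TRIGGER [dbo].[TR_Values_ProtectCreatedOnUtc]
ON [dbo].[Values]
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- UPDATE() is true when the column appears in the SET list of the statement.
    IF UPDATE([CreatedOnUtc])
    BEGIN
        RAISERROR('CreatedOnUtc cannot be modified.', 16, 1);
        ROLLBACK TRANSACTION;
    END
END;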
Note that Values is a really bad name for a table because it is a SQL keyword and SQL Server reserved word. Choose identifiers that do not need to be escaped.
There are other ways. For instance, you could turn off DML access to the table. Then create a view without CreatedOnUtc and only allow inserts and updates through the view.
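A sketch of that approach (the view name is made up): revoke INSERT/UPDATE on the table and expose a view that simply omits the column, so inserts through it fall back to the default.
CREATE VIEW [dbo].[ValuesEditable]
AS
SELECT [Id], [Value]
FROM [dbo].[Values];
GO
-- CreatedOnUtc is filled in by its DEFAULT:
INSERT INTO [dbo].[ValuesEditable] ([Id], [Value])
VALUES (NEWID(), N'some value');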
I need to create an index on two columns (within a table variable) which do not form a unique key.
Table structure is shown below -
DECLARE @Sample TABLE (
[AssetSk] [int] NOT NULL,
[DateSk] [int] NOT NULL,
[Count] [numeric](38, 2) NULL
)
I am trying to add an index as shown below -
INDEX AD1 CLUSTERED([AssetSk],[DateSk])
However, it gives me the following error when running it on SQL Server 2012:
" Incorrect syntax near 'INDEX'. If this is intended as a part of a table hint, A WITH keyword and parenthesis are now required. See SQL Server Books Online for proper syntax."
However, this runs perfectly on SQL Server 2014. Is there any way that I could run it on SQL Server 2012?
You can't build an index other than through a unique key on a table variable in SQL Server versions prior to 2014.
However, there is a trick: add one more column with an auto-incremented value and create a unique index that includes the columns you need plus this new one.
DECLARE @Sample TABLE (
[ID] bigint identity(1, 1),
[AssetSk] [int] NOT NULL,
[DateSk] [int] NOT NULL,
[Count] [numeric](38, 2) NULL,
UNIQUE NONCLUSTERED ([AssetSk],[DateSk], ID)
)
Update: in fact, creating such an index on a table variable can be useless. SQL Server normally estimates that a table variable holds a single row, so with relatively high probability it will not use this index.
As far as I know, in SQL Server 2012 and below you cannot add indexes to table variables. To add an index you must declare a temporary table instead, like this:
CREATE TABLE #Sample (
[AssetSk] [int] NOT NULL,
[DateSk] [int] NOT NULL,
[Count] [numeric](38, 2) NULL
)
And after you can create the index you need like this
CREATE CLUSTERED INDEX IX_MyIndex
ON #Sample ([AssetSk],[DateSk])
Of course, after you're done with the table in your function you can call
DROP TABLE #Sample
I am using SQL Server 2012 and I have the following User Defined Table Type
CREATE TYPE [dbo].[IdentifierCodeTable] AS TABLE(
[Id] [dbo].[Identifier] NULL,
[Code] [dbo].[Code] NULL
)
I am trying to enforce that Id must be unique, except for NULL values.
The following definition works fine for non-NULL values, but when I try to insert two NULL values it does not allow me to do it.
CREATE TYPE [dbo].[IdentifierCodeTable] AS TABLE(
[Id] [dbo].[Identifier] NULL,
[Code] [dbo].[Code] NULL,
UNIQUE(Id)
)
Is there any way to exclude the NULL values from that UNIQUE constraint, like I can do with a filter on a regular index?
I think this is all you need to know (the documentation is for SQL Server 2008, but I think it applies to SQL Server 2012 as well).
A nonclustered index cannot be created on a user-defined table type unless the index is the result of creating a PRIMARY KEY or UNIQUE constraint on the user-defined table type. (SQL Server enforces any UNIQUE or PRIMARY KEY constraint by using an index.)
Source: https://technet.microsoft.com/en-us/library/bb522526%28v=sql.105%29.aspx
I have a SQL Server 2012 database in which I have a ChangeLog table with
TableName, ColumnName, FromValue and ToValue columns, which will be used to keep track of modified columns and data.
So if any update occurs through the application, only the modified columns should be inserted into this table with their new and old values.
Can anyone help me with this?
For Example:
If a procedure updates all columns of the Property table (PropertyName, Address),
and the user changes only PropertyName (the update also sets the Address column, but with no data change), then only PropertyName and its data should be inserted into the ChangeLog table, not the Address column and its data, because the Address data does not contain any change.
If there is no other auditing requirement at all (you would not be thinking about auditing in any way without this), then OK, go for it. However, this is a very limited use of auditing: user X changed this field at time Y. Generally this is interesting as part of a wider question: what did user X do? What happened to that customer data in the database to end up the way it is now?
Questions like that are harder to answer with the data structure you propose, and would be quite onerous to reconstruct. My usual approach is as follows, starting from a base table like this (from one of my current projects):
CREATE TABLE [de].[Generation](
[Id] [int] IDENTITY(1,1) NOT NULL,
[LocalTime] [datetime] NOT NULL,
[EntityId] [int] NOT NULL,
[Generation] [decimal](18, 4) NOT NULL,
[UpdatedAt] [datetime] NOT NULL CONSTRAINT [DF_Generation_UpdatedAt] DEFAULT (getdate()),
CONSTRAINT [PK_Generation] PRIMARY KEY CLUSTERED
(
[Id] ASC
)
)
(I've excluded FK definitions as they aren't relevant here.)
First create an Audit table for this table:
CREATE TABLE [de].[GenerationAudit](
[AuditId] int identity(1, 1) not null,
[Id] [int] NOT NULL,
[LocalTimeOld] [datetime] NULL,
[EntityIdOld] [int] NULL,
[GenerationOld] [decimal](18, 4) null,
[UpdatedAtOld] [datetime] null,
[LocalTimeNew] [datetime] null,
[EntityIdNew] [int] null,
[GenerationNew] [decimal](18, 4) null,
[UpdatedAtNew] [datetime] NOT NULL CONSTRAINT [DF_GenerationAudit_UpdatedAt] DEFAULT (getdate()),
[UpdatedBy] varchar(60) not null,
CONSTRAINT [PK_GenerationAudit] PRIMARY KEY CLUSTERED
(
[AuditId] ASC
)
)
This table has an *Old and a *New version of each column that can change. The Id, being an IDENTITY PK, can't change, so there is no need for an old/new pair. I've also added an UpdatedBy column. It also has a new AuditId IDENTITY PK.
Next create three triggers on the base table: one for INSERT, one for UPDATE and one for DELETE. In the INSERT trigger, insert a row into the audit table with the New columns selected from the inserted table and the Old values null. In the UPDATE one, the Old values come from deleted and the New from inserted. In the DELETE trigger, the Old values come from deleted and the New are all null.
The UPDATE trigger would look like this:
CREATE TRIGGER GenerationAuditUpdate
ON de.Generation
AFTER UPDATE
AS
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON;
insert into de.GenerationAudit (Id, LocalTimeOld, EntityIdOld, GenerationOld, UpdatedAtOld,
LocalTimeNew, EntityIdNew, GenerationNew, UpdatedAtNew,
UpdatedBy)
select isnull(i.Id, d.Id), d.LocalTime, d.EntityId, d.Generation, d.UpdatedAt,
i.LocalTime, i.EntityId, i.Generation, getdate(),
SYSTEM_USER
from inserted i
full outer join deleted d on d.Id = i.Id;
END
GO
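For completeness, the INSERT trigger described above could look like this (same pattern; the DELETE trigger is the mirror image, with the Old columns filled from deleted and the New ones all null):
CREATE TRIGGER GenerationAuditInsert
ON de.Generation
AFTER INSERT
AS
BEGIN
SET NOCOUNT ON;
insert into de.GenerationAudit (Id, LocalTimeOld, EntityIdOld, GenerationOld, UpdatedAtOld,
LocalTimeNew, EntityIdNew, GenerationNew, UpdatedAtNew,
UpdatedBy)
select i.Id, null, null, null, null,
i.LocalTime, i.EntityId, i.Generation, getdate(),
SYSTEM_USER
from inserted i;
END
GO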
You then have a full before/after picture of each change (and it'll be faster than separating out diffs column by column). You can create views over the audit table to get entries where the Old value is different from the New, and include the base table Id (which you will also need in your structures!), the user who did it, and the time they did it (UpdatedAtNew).
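For example, a view over the audit table that keeps only rows where the value actually changed might look like this (a sketch; the NULL comparisons also pick up the insert and delete rows):
create view de.GenerationAuditChanges
as
select AuditId, Id, UpdatedBy, UpdatedAtNew, GenerationOld, GenerationNew
from de.GenerationAudit
where GenerationOld <> GenerationNew
or (GenerationOld is null and GenerationNew is not null)
or (GenerationOld is not null and GenerationNew is null);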
That's my version of Auditing and it's mine!