SQL Server triggers and SQLAlchemy interference problem - help needed

I need to have a kind of 'versioning' for some critical tables, and tried to implement it in a rather simple way:
CREATE TABLE [dbo].[Address] (
[id] bigint IDENTITY(1, 1) NOT NULL,
[post_code] bigint NULL,
...
)
CREATE TABLE [dbo].[Address_History] (
[id] bigint NOT NULL,
[id_revision] bigint NOT NULL,
[post_code] bigint NULL,
...
CONSTRAINT [PK_Address_History] PRIMARY KEY CLUSTERED ([id], [id_revision]),
CONSTRAINT [FK_Address_History_Address]...
CONSTRAINT [FK_Address_History_Revision]...
)
CREATE TABLE [dbo].[Revision] (
[id] bigint IDENTITY(1, 1) NOT NULL,
[id_revision_operation] bigint NULL,
[id_document_info] bigint NULL,
[description] varchar(255) COLLATE Cyrillic_General_CI_AS NULL,
[date_revision] datetime NULL,
...
)
and a bunch of insert/update/delete triggers on each table whose changes are meant to be stored.
My application is based on PyQt + SQLAlchemy, and when I try to insert an entity that is stored in a versioned table, SQLAlchemy raises an error:
The target table 'Heritage' of the DML statement cannot have
any enabled triggers if the statement contains
an OUTPUT clause without INTO clause.
(334) (SQLExecDirectW); [42000]
[Microsoft][ODBC SQL Server Driver]
[SQL Server]Statement(s) could not be prepared. (8180)")
What should I do? I must use SQLAlchemy.
If anyone can advise how to implement versioning without triggers, that would be great too.

You should set 'implicit_returning' to False to avoid the OUTPUT clause in the queries SQLAlchemy generates (this should resolve your issue):
class Company(sqla.Model):
    __bind_key__ = 'dbnamere'
    __tablename__ = 'tblnamehere'
    __table_args__ = {'implicit_returning': False}  # http://docs.sqlalchemy.org/en/latest/dialects/mssql.html#triggers

    id = sqla.Column('ncompany_id', sqla.Integer, primary_key=True)
    ...

I can't seem to add a comment, so I'm adding another answer.
It's not that complicated, and I would suggest it's less fragile than putting half your business logic in your domain model and the other half in database triggers.
Personally, I would write my own list object with a reference to the history list for the some_list_of_other_entities, and maintain the history records in the Add and Remove methods.
This way your objects are automatically up to date before even saving them into your ORM.
public class ListOfOtherEntities : System.Collections.IEnumerable
{
    // Add list stuff here...

    public void Add(MyEntity obj)
    {
        this.List.Add(obj);
        this.History.Add(new History("Added an object!"));
    }

    public void Remove(MyEntity obj)
    {
        this.List.Remove(obj);
        this.History.Add(new History("Removed an object!"));
    }
}
It also means another developer looking at the code can see quite easily what you have done.

This won't answer your question directly, but in my experience using triggers leads to endless pain, so avoid them at all costs. If you manage all of the data yourself, then the simple answer is to populate the version history tables yourself. It also means you have all of your business logic in one place, which is a bonus!
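For example, here is a minimal sketch of application-side versioning using the tables from the question (the WHERE clause and the revision description are made up; note that OUTPUT ... INTO cannot target Address_History directly because that table participates in foreign key relationships, hence the table variable):
BEGIN TRANSACTION;

-- create the revision row and remember its id
DECLARE @RevisionId bigint;
INSERT INTO dbo.Revision (description, date_revision)
VALUES ('post_code correction', GETDATE());
SET @RevisionId = SCOPE_IDENTITY();

-- capture the old column values while updating
DECLARE @Old TABLE (id bigint, post_code bigint);
UPDATE dbo.Address
SET post_code = 77777
OUTPUT deleted.id, deleted.post_code INTO @Old (id, post_code)
WHERE id = 42;

-- copy the captured values into the history table
INSERT INTO dbo.Address_History (id, id_revision, post_code)
SELECT id, @RevisionId, post_code
FROM @Old;

COMMIT;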


SQL Full Text Index on multiple tables and columns

We have electronic forms that filers fill out online, and we store the data in SQL Server. We want to provide a search feature that allows us to search inside each electronic filing for matching keywords. We don't need to know what word matched or where in the form it matched; we just need a ranked list of forms that match our keywords. We think SQL full-text search would be our best option because we are already using SQL Server 2016. We just started implementing a solution but would like some guidance, since this is new territory for us.
Here is an example of how our tables are structured.
Filing is our top-level table for all electronic forms. We have sub tables that are all related through the FilingId. The Form Six Published Filings table has child tables to store information like Assets. The Form One Published Filings table has child tables to store information like Liabilities.
CREATE SCHEMA [Forms]
GO
CREATE SCHEMA [Form6]
GO
CREATE SCHEMA [Form1]
GO
CREATE TABLE [Forms].[Filing](
[FilingId] INT NOT NULL IDENTITY(1,1)
CONSTRAINT [PK_Forms_Filing_FilingId] PRIMARY KEY CLUSTERED,
[FilerUserId] [int] NOT NULL,
[FormYear] [int] NOT NULL,
[FormTypeId] [int] NOT NULL,
[FilingStatusId] [int] NOT NULL,
[FilerSignatureId] INT NULL,
[SubmissionDate] DATETIME2(0) NULL,
[IsScannedForm] BIT NOT NULL
CONSTRAINT [DF_Forms_Filing_IsScannedForm] DEFAULT(0)
)
GO
CREATE TABLE [Form6].[FormSixPublishedFilings](
[FormSixPublishedFilingId] INT NOT NULL IDENTITY(1,1)
CONSTRAINT [PK_Form6_FormSixPublishedFilings_FormSixPublishedFilingId] PRIMARY KEY CLUSTERED,
[FilingId] INT NOT NULL
CONSTRAINT [FK_Form6_FormSixPublishedFilings_Filings] FOREIGN KEY ([FilingId]) REFERENCES [Forms].[Filing] ([FilingId]),
[LastDateOfEmployment] DATE NULL,
[NetWorthDate] DATE NULL,
[NetWorth] MONEY NULL
)
GO
CREATE TABLE [Form6].[FormSixPublishedAssets](
[FormSixPublishedAssetId] INT NOT NULL IDENTITY(1,1)
CONSTRAINT [PK_Form6_FormSixPublishedAssets_FormSixPublishedAssetId] PRIMARY KEY CLUSTERED,
[FormSixPublishedFilingId] INT NOT NULL
CONSTRAINT [FK_Form6_FormSixPublishedAssets_FormSixPublishedFilings] FOREIGN KEY ([FormSixPublishedFilingId]) REFERENCES [Form6].[FormSixPublishedFilings] ([FormSixPublishedFilingId]),
[Name] VARCHAR(8000) NOT NULL,
[Amount] MONEY NOT NULL
)
GO
CREATE TABLE [Form1].[FormOnePublishedFilings]
(
[FormOnePublishedFilingId] INT NOT NULL IDENTITY(1,1)
CONSTRAINT [PK_Form1_FormOnePublishedFilings_FormOnePublishedFilingId] PRIMARY KEY CLUSTERED,
[FilingId] INT NOT NULL,
CONSTRAINT [FK_Form1_FormOnePublishedFilings_Filing] FOREIGN KEY ([FilingId]) REFERENCES [Forms].[Filing] ([FilingId]),
[HasServedAsAgent] BIT NULL,
[LastDateOfEmployment] DATE NULL,
[AmendmentReason] VARCHAR(1024) NULL,
)
GO
CREATE TABLE [Form1].[FormOnePublishedLiabilities]
(
[FormOnePublishedLiabilityId] INT NOT NULL IDENTITY(1,1)
CONSTRAINT [PK_Form1_FormOnePublishedLiabilities_FormOnePublishedLiabilityId] PRIMARY KEY CLUSTERED,
[FormOnePublishedFilingId] INT NOT NULL,
CONSTRAINT [FK_Form1_FormOnePublishedLiabilities_FormOnePublishedFilings] FOREIGN KEY ([FormOnePublishedFilingId]) REFERENCES [Form1].[FormOnePublishedFilings] ([FormOnePublishedFilingId]),
[NameOfCreditor] VARCHAR(8000) NOT NULL,
[AddressOfCreditor] VARCHAR(8000) NOT NULL
)
GO
In order to be able to search through all the forms, I think we need to create a view that has just two columns: one for the FilingId, and an XML column containing an XML representation of all the data in each electronic filing. This XML column is what we will use to set up our full-text index. I think we will be using FREETEXTTABLE because we would like to have the results ranked, and the search terms will be entered by end users.
create view Forms.ViewForFullTextSearching with schemabinding as
select f.FilingId,
(select
filing.FilingId
,filing.FormYear
,filing.FormTypeId
,filing.FilingStatusId
,filing.FilerSignatureId
,filing.SubmissionDate
,filing.IsScannedForm
,form6Filing.LastDateOfEmployment 'Form6LastDateOfEmployment'
,form6Filing.NetWorthDate
,form6Filing.NetWorth
,form6Asset.Name
,form6Asset.Amount
,form1Filing.HasServedAsAgent
,form1Filing.LastDateOfEmployment 'Form1LastDateOfEmployment'
,form1Filing.AmendmentReason
,form1Liability.NameOfCreditor
,form1Liability.AddressOfCreditor
from Forms.Filing filing
left join Form6.FormSixPublishedFilings form6Filing on filing.FilingId = form6Filing.FilingId
left join Form6.FormSixPublishedAssets form6Asset on form6Filing.FormSixPublishedFilingId = form6Asset.FormSixPublishedFilingId
left join Form1.FormOnePublishedFilings form1Filing on filing.FilingId = form1Filing.FilingId
left join Form1.FormOnePublishedLiabilities form1Liability on form1Liability.FormOnePublishedFilingId = form1Filing.FormOnePublishedFilingId
where filing.FilingId = f.FilingId
for xml auto, type
) as 'Filing'
from Forms.Filing f
GO
create unique clustered index [IX_ViewForFullTextSearching_FilingId] ON [Forms].[ViewForFullTextSearching] ([FilingId])
GO
The above SQL does not actually work; I get this error:
Cannot create an index on view "EthicsFdms.Forms.ViewForFullTextSearching" because it contains one or more subqueries. Consider changing the view to use only joins instead of subqueries. Alternatively, consider not indexing this view.
So, I’m a bit lost on how to create a view with XML to search over if I’m not allowed to create a materialized view that has subqueries.
Next we set up our full-text catalog and index on this view:
CREATE FULLTEXT CATALOG [FtcFilings];
GO
CREATE FULLTEXT INDEX ON [Forms].[ViewForFullTextSearching] ([Filing] language 1033) key index [IX_ViewForFullTextSearching_FilingId] on [FtcFilings];
GO
Then I was hoping we could search the filings like so:
select ftt.*
from [Forms].[Filing] filing
inner join freetexttable(Forms.ViewForFullTextSearching, Filing, 'APPLE') as ftt on filing.FilingId = ftt.[KEY]
order by ftt.[RANK] desc
Right now my challenges are: is it possible to create a materialized view like this? It seems I can't, because materialized views can't have subqueries, and I'm not sure how to build the XML column without subqueries.
If I’m not able to create a materialized view then how else can I create a full-text index that can search electronic Forms?
You can create an indexed view (which is a synchronous materialized view in SQL Server) only if there is a mathematical surjection between the rows of the base tables and the rows of the view, and all scalar computations are deterministic and precise. Among other restrictions, OUTER JOINs, subqueries, and set operators (UNION, EXCEPT, INTERSECT) cannot be used.
The best way to design your system is to do it the other way around:
Create a persisted computed column using the CONCAT function over all the columns you want to full-text index.
Create full-text indexes on the computed columns.
Create a UDF that searches the full-text index on each table, concatenates the results with UNION, and then aggregates them to compute the rank.
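For illustration, steps 1 and 2 might look like this for one of the tables (a sketch only: it assumes the FtcFilings catalog from the question, the constraint name from the DDL above, and that your SQL Server version allows full-text indexing of a persisted computed column):
ALTER TABLE Form1.FormOnePublishedLiabilities
    ADD SearchText AS CONCAT(NameOfCreditor, ' ', AddressOfCreditor) PERSISTED;
GO
CREATE FULLTEXT INDEX ON Form1.FormOnePublishedLiabilities (SearchText)
    KEY INDEX PK_Form1_FormOnePublishedLiabilities_FormOnePublishedLiabilityId
    ON [FtcFilings];
GO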
Let me know if you want more assistance to do so...
If the form filing data seldom change once created, and it makes business sense to store the form1 and form6 data together with their filing, you may consider going with a document-oriented design.
SQL Server has good JSON support now. You can save all the filing and form info as JSON, run full-text search against it, and create views to simulate your current design if needed.
Here is an example -
create table tst.form (
form_id int not null identity primary key
,content_json nvarchar(max)
)
-- inside content_json, the json may look like -
{
"filler_user_id": 111,
"filler_type_id": 1,
"is_scanned_form": 1,
"form1": [
{
"form1_filling_id": 101,
"has_served_as_agent":0,
"liabilities": [{"name_of_creditor": "abc"}]
}
]
}
I only modelled form1 related info. You can add form6 related info as needed.
Then you can do full text search against this content_json column.
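A rough sketch of wiring that up (it assumes the FtcFilings catalog from the question, and that the primary key was created with an explicit name, say pk_form, because CREATE FULLTEXT INDEX must reference a named unique key index):
-- assumes: form_id int not null identity constraint pk_form primary key
CREATE FULLTEXT INDEX ON tst.form (content_json)
    KEY INDEX pk_form ON FtcFilings;

-- ranked search across all of the JSON content
SELECT f.form_id, ftt.[RANK]
FROM FREETEXTTABLE(tst.form, content_json, 'abc') AS ftt
JOIN tst.form AS f ON f.form_id = ftt.[KEY]
ORDER BY ftt.[RANK] DESC;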
Then create views to simulate your current design if needed -
create or alter view tst.form_base WITH SCHEMABINDING as
select form_id
,convert(int, JSON_VALUE(content_json, '$.filler_user_id')) filler_user_id
,convert(int, JSON_VALUE(content_json, '$.filler_type_id')) filler_type_id
,convert(bit, JSON_VALUE(content_json, '$.is_scanned_form')) is_scanned_form
,JSON_QUERY(content_json, '$.form1') form1_json
from tst.form
go
create unique clustered index idx_form_base_form_id on tst.form_base(form_id);
-- you can create more indexes as needed
create index idx_form_base_filler_user_id on tst.form_base(filler_user_id);
go
create or alter view tst.form1 as
select form_id
,a.form1_filling_id
,a.has_served_as_agent
,a.liabilities liabilities_json
from tst.form_base cross apply OPENJSON(form1_json) WITH (
form1_filling_id int '$.form1_filling_id',
has_served_as_agent int '$.has_served_as_agent',
liabilities nvarchar(max) '$.liabilities' as json) a
go
create or alter view tst.form1_liabilities as
select form_id
,form1_filling_id
,a.name_of_creditor
from tst.form1 cross apply OPENJSON(liabilities_json) WITH (
name_of_creditor nvarchar(max) '$.name_of_creditor') a
Then create some test data -
insert into tst.form (content_json) values ('{
"filler_user_id": 111,
"filler_type_id": 1,
"is_scanned_form": 1,
"form1": [
{
"form1_filling_id": 101,
"has_served_as_agent":0,
"liabilities": [{"name_of_creditor": "abc"}]
}
]
}');
insert into tst.form (content_json) values ('{
"filler_user_id": 222,
"filler_type_id": 1,
"is_scanned_form": 0,
"form1": [
{
"form1_filling_id": 102,
"has_served_as_agent":1,
"liabilities": [{"name_of_creditor": "def"}]
}
]
}');
Try it -
select *
from tst.form1_liabilities

SQL Server update takes a long time to execute

I have two tables (UserTable and UserProfile) with the following structure:
create table userTable(
id_user int identity(1,1) primary key ,
Name varchar(300) not null ,
Email varchar(500) not null ,
PasswordUser varchar(700) not null,
userType int ,
constraint usertype_fk foreign key(userType) REFERENCES userType(id_type)
on delete set null
)
and UserProfile:
create table UserProfile(
id_profile int identity(1,1) primary key ,
ClientCmpName varchar(300) null,
Clientaddress varchar(500) null,
phone varchar(50) null,
descriptionClient varchar(400) null,
img image null,
messageClient text ,
fk_user int ,
constraint fkuser foreign key(fk_user) references userTable(id_user)
on delete cascade
)
I am using SQL Server 2008.
The problem is that when I update records, the query keeps loading and never finishes executing.
This is a sample query:
update UserProfile set messageClient=N'010383772' where fk_user=2;
If your concern is performance for this query:
update UserProfile
set messageClient = N'010383772'
where fk_user = 2;
Then an index will be very helpful:
create index idx_UserProfile_fkuser on UserProfile(fk_user);
This should make the query almost instantaneous.
Note: indexes can slow down inserts and other operations. This is usually not a big issue, and having indexes on foreign key columns is common.
Dumb question, but why are you trying to do an update based on a [fk_user] value?
update UserProfile set messageClient=N'010383772' where fk_user=2;
Don't you want to update this value on one specific [UserProfile] based on its ID (which is a primary key, so it would be much faster)?
UPDATE [UserProfile]
SET [messageClient]='010383772'
WHERE id_profile=2;
Perhaps the performance problem is due to your UPDATE attempting to update all of your [UserProfile] records that have this particular fk_user value...?
Or I'm missing the point of what you're trying to do (and how many records you're attempting to update).
Maybe you have already started a transaction (BEGIN TRANSACTION) on the table in another process (perhaps another query editor page), and until you end that transaction the table will not be available for updates.
Check @@TRANCOUNT (SELECT @@TRANCOUNT), or try rolling back the updates you have already made (ROLLBACK TRANSACTION).
Also check whether other tables can be updated without issues.
Is the query ever executed? It rather looks like your query is blocked. You should
open Activity Monitor and check whether your query is blocked by some process.
In that case, you should kill the blocking session.
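If you prefer a query over Activity Monitor, something along these lines shows who is blocking whom (sys.dm_exec_requests exists on SQL Server 2005 and later):
-- list requests that are waiting on another session
SELECT session_id, blocking_session_id, wait_type, wait_time, command
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0;
-- then, if appropriate: KILL <blocking_session_id>;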
Thank you for trying to help me.
My problem is fixed. The cause was another query editor page: I am working with ASP.NET, and another page had an open transaction updating the same record. When I stopped the ASP.NET project, the query succeeded.

Operand type clash using user defined table type on SQL Server

I have created a user-defined table type (CREATE script given below):
CREATE TYPE [dbo].[udt_ProductID_SellingPrice] AS TABLE(
[Product_Id] [int] NOT NULL,
[Selling_Price] [money] NULL,
PRIMARY KEY CLUSTERED
(
[Product_Id] ASC
)WITH (IGNORE_DUP_KEY = OFF)
)
On the same database I have created a function that uses this type:
CREATE FUNCTION [dbo].[fn_NetGP_tbl](@InputSKUs udt_ProductID_SellingPrice READONLY, @Currency VARCHAR(3), @SiteName VARCHAR(255), @CatalogueSite VARCHAR(5), @SiteGroup VARCHAR(255))
RETURNS @Results TABLE
(
Product_Id INT NOT NULL,
Selling_Price MONEY NULL,
Net_GP MONEY NULL
)
AS
BEGIN
I declare the input SKUs as a table variable and call the function as follows:
DECLARE @InputSKUs dbo.udt_ProductId_SellingPrice
INSERT INTO @InputSKUs VALUES(10352316, 500.00)
SELECT * FROM Sensu_Staging.dbo.fn_NetGP_tbl(@InputSKUs, 'GBP', 'Site_Name', 'SITE', 'Site_Group')
And get this (to me, fairly confusing) error message:
Operand type clash: udt_ProductID_SellingPrice is incompatible with udt_ProductID_SellingPrice
I can't really work out what I'm doing wrong; this has worked on previous databases, and the only change I've made is adding a primary key/clustered index to the ProductID_SellingPrice type, which seems to have thrown everything off.
Do I need to change the function? Or is it not possible to create an index on an in-memory custom table type? We were hoping to speed the process up by indexing that table, but if it's not possible then I guess we'll have to find other ways.
Cheers,
Matt
I had previously defined the user-defined type in a different database and had referenced it explicitly in the functions I was calling (the Net GP function in this case).
I had then changed the data type to include a primary key (both versions having the same name), yet I was still referencing the previous data type in the body of my function, hence the confusing but entirely accurate error message.
In short, make sure that you change all references at every point in your function or, more sensibly, give your data types different names so as to be able to troubleshoot this more easily!
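A hypothetical sketch of the cleanup, since a table type cannot be altered in place; every change means dropping and recreating the type together with each module that references it:
-- drop dependent modules first, then the type itself
DROP FUNCTION [dbo].[fn_NetGP_tbl];
DROP TYPE [dbo].[udt_ProductID_SellingPrice];

-- recreate the type (now with its primary key)...
CREATE TYPE [dbo].[udt_ProductID_SellingPrice] AS TABLE(
    [Product_Id] [int] NOT NULL PRIMARY KEY CLUSTERED,
    [Selling_Price] [money] NULL
);
-- ...then recreate fn_NetGP_tbl so that it binds to this definition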

Selecting from a table and inserting into another table's column of a different type using query in ms access

I have some txt files that contain tables with a mix of different record types, each with different value types and column definitions. I was thinking of importing them into a table and running a query to separate the different record types, since an identifier for this is listed in the first column. Is there a way to change the value type of a column in a query? It will be a pain to treat all of them as text. If you have any other suggestions on how to solve this, please let me know as well.
Here is an example of the tables for two record types, provided by the website where I got the data from:
create table dbo.PUBACC_A2
(
Record_Type char(2) null,
unique_system_identifier numeric(9,0) not null,
ULS_File_Number char(14) null,
EBF_Number varchar(30) null,
spectrum_manager_leasing char(1) null,
defacto_transfer_leasing char(1) null,
new_spectrum_leasing char(1) null,
spectrum_subleasing char(1) null,
xfer_control_lessee char(1) null,
revision_spectrum_lease char(1) null,
assignment_spectrum_lease char(1) null,
pfr_status char(1) null
)
go
create table dbo.PUBACC_AC
(
record_type char(2) null,
unique_system_identifier numeric(9,0) not null,
uls_file_number char(14) null,
ebf_number varchar(30) null,
call_sign char(10) null,
aircraft_count int null,
type_of_carrier char(1) null,
portable_indicator char(1) null,
fleet_indicator char(1) null,
n_number char(10) null
)
Yes, you can do what you want. In MS Access you can use VBA functions in queries, with conditional conversions like these:
IIF(FirstColumn="value1", CDate(SecondColumn), NULL) as DateValue,
IIF(FirstColumn="value2", CDec(SecondColumn), NULL) as DecimalValue,
IIF(FirstColumn="value3", CStr(SecondColumn), NULL) as StringValue
You can use all/any of the above in your SELECT.
EDIT:
From your comments it seems that you want to split them into different tables - importing as text should not be a problem in that case.
a)
After you import the data into the initial table, create the proper tables manually (with correct column types); then you can INSERT into the proper table.
b)
You could even do a make-table query, but it might be faster to create the table manually. If you do a make-table query, be sure to cast the data to the proper types in your SELECT.
EDIT2:
As you updated the question showing the structure, it becomes obvious that my suggestion above will not help directly.
If this is one time process you can follow HLGEM's solution. Here are some more details.
1) Import into a table with two columns - RecordType char(2), Rest memo
2) Now you can split the data (make two queries that select based on RecordType) and re-export it (to be able to use Access's import wizard)
3) Now you have two text files with proper structure which can be easily imported
I did this in my last job. You start with a staging table that has one column, or two columns if your identifier is always the same length.
Then, using the record identifier, you move the data to another set of staging tables, one for each record type. These have separate columns for the data and can use the correct data types. Then you do any data cleaning you need, and finally you insert into the real production tables.
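A rough T-SQL sketch of that second step, typed per the PUBACC_AC table from the question (dbo.Staging_AC is a hypothetical all-text staging table already split out of the raw import; the same idea works in Access as an append query):
-- move 'AC' rows into the typed table, casting the text columns as we go
INSERT INTO dbo.PUBACC_AC (record_type, unique_system_identifier, call_sign, aircraft_count)
SELECT record_type,
       CAST(unique_system_identifier AS numeric(9,0)),
       call_sign,
       CAST(aircraft_count AS int)
FROM dbo.Staging_AC
WHERE record_type = 'AC';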
If you have a column defined as text, because it has both alphas and numbers, you'll only be able to query it as if it were text. Once you've separated out the different "types" of data into their own tables, you should be able to change the schema definition. Please comment here if I'm misunderstanding what you're trying to do.

Sql Server high performance insert in two related tables

Sirs,
I have the following physical model, resembling class table inheritance like the pattern from Fowler (http://martinfowler.com/eaaCatalog/classTableInheritance.html):
CREATE TABLE [dbo].[ProductItem] (
[IdProductItem] INT IDENTITY (1, 1) NOT NULL,
[IdPointOfSale] INT NOT NULL,
[IdDiscountRules] INT NOT NULL,
[IdProductPrice] INT NULL);
CREATE TABLE [dbo].[Cellphone] (
[IdCellphone] INT IDENTITY (1, 1) NOT NULL,
[IdModel] INT NOT NULL,
[IMEI] NVARCHAR (150) NOT NULL,
[IdProductItem] INT NULL
);
ProductItem is my base class. It handles all actions related to sales. Cellphone is a subclass of ProductItem. It adds the specific attributes and behavior I need when I sell a cellphone (IMEI number, activating the cellphone, etc.).
I need to track each inventory item individually. When I receive a batch of 10,000 cellphones, I need to load all this information into my system, creating both the ProductItem and the Cellphone rows in my database.
If it were only one table, it would be easy to use bulk insert. But in my case I have a base class and several subclasses represented by tables. What is the best approach to handle this task?
Regards
Camilo
If you're OK with bulk inserts, it's still easiest to build a little script that loads the tables in an appropriate sequence for referential integrity - in your case probably products first, then instances of products (cellphones).
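One hedged sketch of that sequence: bulk load the batch into a staging table, insert the parent rows while capturing the generated identities, then insert the children. MERGE is used here because, unlike INSERT, its OUTPUT clause can refer to source columns, which gives the IMEI-to-identity mapping directly (the staging table and file name are assumptions):
-- 1) stage the batch (e.g. BULK INSERT #CellphoneStage FROM 'batch.csv' ...)
CREATE TABLE #CellphoneStage (
    IMEI nvarchar(150) NOT NULL,
    IdModel int NOT NULL,
    IdPointOfSale int NOT NULL,
    IdDiscountRules int NOT NULL
);

-- 2) insert parents, capturing the new identity for each staged row
DECLARE @NewItems TABLE (IdProductItem int, IMEI nvarchar(150));
MERGE dbo.ProductItem AS t
USING #CellphoneStage AS s ON 1 = 0            -- never matches: every row is inserted
WHEN NOT MATCHED THEN
    INSERT (IdPointOfSale, IdDiscountRules)
    VALUES (s.IdPointOfSale, s.IdDiscountRules)
OUTPUT inserted.IdProductItem, s.IMEI INTO @NewItems (IdProductItem, IMEI);

-- 3) insert children using the captured mapping
INSERT INTO dbo.Cellphone (IdModel, IMEI, IdProductItem)
SELECT s.IdModel, s.IMEI, n.IdProductItem
FROM #CellphoneStage AS s
JOIN @NewItems AS n ON n.IMEI = s.IMEI;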