Changing CTE for Microsoft Access - SQL

I've tried searching for an answer to this but can't find one.
I have a CTE I use for SQL queries relating to two data tables in a database. The primary key of one table is a foreign key in the other and can appear numerous times in the second table. I want to count the number of times each foreign key appears in the second table, and list this count as a total field in my search results along with details from the first table. As CTEs don't work in Access, I've adjusted this to use a sub-select in the join, but Access still doesn't like it.
Here are the basic parts of the tables
CREATE TABLE [dbo].[Clients](
[ClientRef] [int] NOT NULL,
[Surname] [varchar](40) NULL,
[Forenames] [varchar](50) NULL,
[Title] [varchar](40) NULL,
CONSTRAINT [CLIE_ClientRef_PK] PRIMARY KEY CLUSTERED
(
[ClientRef] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
CREATE TABLE [dbo].[Policies](
[PolicyRef] [int] NOT NULL,
[ClientRef] [int] NULL,
CONSTRAINT [POLI_PolicyRef_PK] PRIMARY KEY CLUSTERED
(
[PolicyRef] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
Here's my CTE
WITH CliPol (ClientRef, Plans) AS (SELECT ClientRef, COUNT(ClientRef) AS Plans FROM Policies GROUP BY ClientRef)
SELECT Clients.Surname, Clients.Forenames, Clients.Title, CliPol.Plans AS [No. of plans]
FROM Clients LEFT JOIN CliPol ON Clients.ClientRef = CliPol.ClientRef
ORDER BY Surname, Forenames;
And here's my adjusted query.
SELECT Clients.ClientRef, Clients.Surname, Clients.Forenames, Clients.Title , Plans.NoPlans
FROM Clients
LEFT JOIN
(SELECT ClientRef, COUNT(ClientRef) AS NoPlans FROM Policies GROUP BY ClientRef)
AS Plans ON Plans.ClientRef = Clients.ClientRef
ORDER BY Clients.Surname, Clients.Forenames
Unfortunately Access throws error #3131, "Syntax error in FROM clause", when I try to run that query.
Does anybody know how I make this work in Access?

One alternate approach would be to use the DCount() domain aggregate function:
SELECT
Clients.ClientRef,
Clients.Surname,
Clients.Forenames,
Clients.Title ,
DCount("ClientRef", "Plans", "ClientRef=" & Clients.ClientRef) AS NoPlans
FROM Clients
ORDER BY Clients.Surname, Clients.Forenames
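Another workaround that is often reported to work in Access is the bracket-and-dot syntax for derived tables, which replaces the parentheses that the Jet/ACE parser can choke on; alternatively, save the aggregate SELECT as a named query in Access and join to it like a table. A sketch of the bracket-and-dot form (untested; the Plans alias is illustrative):
SELECT Clients.ClientRef, Clients.Surname, Clients.Forenames, Clients.Title, Plans.NoPlans
FROM Clients
LEFT JOIN [SELECT ClientRef, COUNT(ClientRef) AS NoPlans FROM Policies GROUP BY ClientRef]. AS Plans
ON Plans.ClientRef = Clients.ClientRef
ORDER BY Clients.Surname, Clients.Forenames;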


SQL Server update via "WITH" statement and join

I would like to be able to update a table in a single statement, instead of multiple statements, and I don't want to make a temporary table.
To test this, I made this table:
USE [SomeSchema]
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [SomeSchema].[TestTable](
[Id] [int] IDENTITY(1,1) NOT NULL,
[TextField] [varchar](250) NULL,
[updateField] [varchar](20) NULL,
CONSTRAINT [Pk_TestTable_Id] PRIMARY KEY CLUSTERED
(
[Id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY],
CONSTRAINT [Idx_TestTable] UNIQUE NONCLUSTERED
(
[TextField] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
I tried to combine these two answers (https://stackoverflow.com/a/57965771 and https://stackoverflow.com/a/32431922/369122) into a working statement.
My last try:
WITH NewData AS
(
SELECT * FROM ( VALUES ('abc', 'A'),('def','d'),('ghi','g'))
x
(TextField, updateField)
)
update [SomeSchema].[TestTable]
set _a.updateField= _b.updateField
from
[SomeSchema].[TestTable] _a,
NewData _b
where _a.TextField=_b.TextField
Gave this error: Msg 4104, Level 16, State 1, Line 22
The multi-part identifier "_a.updateField" could not be bound.
Any suggestions? For the record: this is just a test. In practice I need to be able to join on multiple columns to update one or more columns.
thanks,
Matthijs
#larnu's answer did the job:
"As for the problem, replace update [SomeSchema].[TestTable] with update _a. You're referencing a table in your FROM as the column to UPDATE, but the table you're updating is defined as a different instance of [TestTable]."
WITH NewData AS
(
SELECT * FROM ( VALUES ('abc', 'a'),('def','d'),('ghi','g'))
x
(TextField, updateField)
)
update _tt
set _tt.updateField= _nd.updateField
from
[SomeSchema].[TestTable] _tt
left join
NewData _nd
on _tt.TextField=_nd.TextField
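One caveat with the LEFT JOIN version: rows of [TestTable] with no match in NewData are still updated, with updateField set to NULL. For the multi-column joins mentioned above, a MERGE is one way to touch only matching rows; a sketch (the extra join column in the comment is hypothetical):
WITH NewData AS
(
SELECT * FROM ( VALUES ('abc', 'a'),('def','d'),('ghi','g'))
x
(TextField, updateField)
)
MERGE [SomeSchema].[TestTable] AS _tt
USING NewData AS _nd
ON _tt.TextField = _nd.TextField -- add AND _tt.OtherKey = _nd.OtherKey to match on more columns
WHEN MATCHED THEN
UPDATE SET _tt.updateField = _nd.updateField;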

SQL Server identity column values jumping by millions

I have a table with an int identity column and it skips IDs by thousands at times. Searches suggest it is normal for SQL Server to skip by 1000 or 1001, but mine increases by 20,000 or more on occasion, and last time it jumped by 95,216,000.
I have been unable to find a reason why this is happening; I checked SQL Server for crash logs and any other suspicious events, but no luck.
The table has replication on it; could that be related?
The CREATE TABLE script is like this:
CREATE TABLE [dbo].[Table](
[CId] [int] IDENTITY(1,1) NOT FOR REPLICATION NOT NULL,
.
.
.
CONSTRAINT [PK_Table] PRIMARY KEY CLUSTERED
(
[CId] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 80) ON [PRIMARY]
) ON [PRIMARY]
GO
Did you check the values returned by this?
SELECT IDENT_SEED(TABLE_NAME) AS Seed,
IDENT_INCR(TABLE_NAME) AS Increment,
IDENT_CURRENT(TABLE_NAME) AS Current_Identity,
TABLE_NAME
FROM INFORMATION_SCHEMA.TABLES
WHERE OBJECTPROPERTY(OBJECT_ID(TABLE_NAME), 'TableHasIdentity') = 1
AND TABLE_TYPE = 'BASE TABLE'
Also, did you by any chance truncate the table?
Please refer to this link to find the reason for the gaps.
http://sqlity.net/en/792/the-gap-in-the-identity-value-sequence/
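If a jump has already happened and the skipped range is genuinely unused, DBCC CHECKIDENT can report and reseed the current identity value; on SQL Server 2017+ you can also disable the identity cache, a common cause of gaps after unexpected restarts. A sketch with placeholder values:
-- Report the current identity value without changing anything
DBCC CHECKIDENT ('dbo.[Table]', NORESEED);
-- Reseed so the next insert uses 12345 + increment (choose a known-safe value)
DBCC CHECKIDENT ('dbo.[Table]', RESEED, 12345);
-- SQL Server 2017+: disable identity caching for the current database
ALTER DATABASE SCOPED CONFIGURATION SET IDENTITY_CACHE = OFF;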

How can I get a list of users and computers they have used from these tables?

I have a weird anomaly happening in my program. I have a piece of software that ties users and computers together, then sends user information to a server in the cloud.
The software that ties users and computers together stores the user / computer information in a local DB in these table structures:
CREATE TABLE [dbo].[Computers](
[ID] [uniqueidentifier] NOT NULL,
[Name] [nvarchar](50) NOT NULL,
CONSTRAINT [PK_Computers] PRIMARY KEY CLUSTERED
(
[ID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
CREATE TABLE [dbo].[User_Computer](
[ID] [uniqueidentifier] NOT NULL,
[ID_User] [uniqueidentifier] NOT NULL,
[ID_Computer] [uniqueidentifier] NOT NULL,
CONSTRAINT [PK_User_Computer] PRIMARY KEY CLUSTERED
(
[ID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[User_Computer] WITH CHECK ADD CONSTRAINT [FK_User_Computer_Computers] FOREIGN KEY([ID_Computer])
REFERENCES [dbo].[Computers] ([ID])
GO
ALTER TABLE [dbo].[User_Computer] CHECK CONSTRAINT [FK_User_Computer_Computers]
GO
ALTER TABLE [dbo].[User_Computer] WITH CHECK ADD CONSTRAINT [FK_User_Computer_Users] FOREIGN KEY([ID_User])
REFERENCES [dbo].[Users] ([ID])
GO
ALTER TABLE [dbo].[User_Computer] CHECK CONSTRAINT [FK_User_Computer_Users]
GO
CREATE TABLE [dbo].[Users](
[ID] [uniqueidentifier] NOT NULL,
[Name] [nchar](50) NULL,
CONSTRAINT [PK_Users] PRIMARY KEY CLUSTERED
(
[ID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
After the user / computer is stored in the local DB, the user GUID is used to transmit data to the server in the cloud. What is interesting is that the server in the cloud has 200 user GUIDs but my local DB only has 177 when I run this script.
SELECT
Users.ID AS ID_User,
Users.Name AS UserName,
User_Computer.ID AS Association,
User_Computer.ID_Computer,
Computers.Name AS ComputerName
FROM Users INNER JOIN User_Computer ON Users.ID = User_Computer.ID_User INNER JOIN
Computers ON User_Computer.ID_Computer = Computers.ID
My question: Is there any way my script is not collecting all of the user information correctly?
NOTE: I am putting data in both the user and computer tables correctly. What I am interested in is whether the script is using the correct join. Should it be inner, outer, or something entirely different?
NOTE: (again) When I run this on my local computer I get this:
5FCD88C8-04B5-494C-88C8-85BCD08CBBB5 Fred B945300D-7CED-42FC-8A79-4FDBB54F6B69 29CAD425-42A0-478F-8966-1448144EB90E Comp1
E357B7E7-7328-4D2D-9A3E-20FC388C5781 Joe 7BB73859-8CE3-4383-BAFF-504DBF719524 182627C5-F91D-4C88-9AE3-A527E55A5A41 Comp2
F8C2DB79-85AC-408A-A858-B0C0FA6862F7 Moe 23267708-2A5E-497A-B91A-937D28983832 B518614E-A243-47D1-B642-D24B434D7683 Comp3
237947BC-C26D-44D0-9AF7-F231D98F1BF3 Curly F89411A9-787B-4A2A-AA1E-B56455B781E8 29CAD425-42A0-478F-8966-1448144EB90E Comp4
3C8DCE89-6764-4D57-B0AD-2CF988EADB35 Steve 1D464AB1-DA70-4ACC-8D00-ED7F47D35413 9446327A-30BA-492F-9A28-3AB132C31988 Comp5
F8C2DB79-85AC-408A-A858-B0C0FA6862F7 Moe A32FD03E-B777-4D74-8702-F58D71B53E8B 82A39A46-269B-43D5-B7A6-B14A9D5FBBD4 Comp6
So... what I really want to know with the stored proc is a listing of all of the users and what computers they are associated with.
What I want to do is combine these two stored procedures into one:
SELECT
Users.ID AS ID_User,
Users.Name AS UserName,
User_Computer.ID AS Association,
User_Computer.ID_Computer
FROM Users INNER JOIN User_Computer ON Users.ID = User_Computer.ID_User
SELECT Computers.ID,
Computers.Name,
User_Computer.ID AS Association,
User_Computer.ID_User
FROM Computers INNER JOIN User_Computer ON Computers.ID = User_Computer.ID_Computer
This query:
SELECT u.ID AS ID_User, u.Name AS UserName,
uc.ID AS Association, uc.ID_Computer,
c.Name AS ComputerName
FROM Users u INNER JOIN
User_Computer uc
ON u.ID = uc.ID_User INNER JOIN
Computers c
ON uc.ID_Computer = c.ID;
Returns one row for every user/computer pair in the data. If your foreign keys are set up, then this should be the same number of rows as in User_Computer.
If some users do not have computers (which a comment states is not possible), then you can use a LEFT JOIN to get all users and information about associated computers, if any:
SELECT u.ID AS ID_User, u.Name AS UserName,
uc.ID AS Association, uc.ID_Computer,
c.Name AS ComputerName
FROM Users u LEFT JOIN
User_Computer uc
ON u.ID = uc.ID_User LEFT JOIN
Computers c
ON uc.ID_Computer = c.ID;
If your query and the other stored procedure are returning different results then either (1) they are running on different tables (perhaps in different databases) or (2) they are implementing different logic. Without seeing the other logic, it isn't possible to say what the difference is.
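If you can load the cloud-side user IDs into a local table, a set difference will show exactly which GUIDs are missing; a sketch (CloudUsers is a hypothetical staging table with one ID column):
-- GUIDs present in the cloud but absent from the local Users table
SELECT ID FROM CloudUsers
EXCEPT
SELECT ID FROM Users;
-- Users with no row in User_Computer (these vanish from an INNER JOIN)
SELECT u.ID, u.Name
FROM Users u
LEFT JOIN User_Computer uc ON u.ID = uc.ID_User
WHERE uc.ID IS NULL;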

How to optimize a SELECT TOP N query

I have a very large table, consisting of 40 million rows, in a SQL Server 2008 database.
CREATE TABLE [dbo].[myTable](
[ID] [bigint] NOT NULL,
[CONTRACT_NUMBER] [varchar](50) NULL,
[CUSTOMER_NAME] [varchar](200) NULL,
[INVOICE_NUMBER] [varchar](50) NULL,
[AGENCY] [varchar](50) NULL,
[AMOUNT] [varchar](50) NULL,
[INVOICE_MONTH] [int] NULL,
[INVOICE_YEAR] [int] NULL,
[Unique_ID] [bigint] NULL,
[bar_code] [varchar](50) NOT NULL,
CONSTRAINT [PK_MyTable] PRIMARY KEY CLUSTERED
(
[ID] ASC,
[bar_code] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
I am trying to optimize performance for the following query:
SELECT top 35 ID,
CONTRACT_NR,
CUSTOMER_NAME,
INVOICE_NUMBER,
AMOUNT,
AGENCY,
CONTRACT_NUMBER,
ISNULL([INVOICE_MONTH], 1) as [INVOICE_MONTH],
ISNULL([INVOICE_YEAR], 1) as [INVOICE_YEAR],
bar_code,
Unique_ID
from MyTable
WHERE
CONTRACT_NUMBER like @CONTRACT_NUMBER and
INVOICE_NUMBER like @INVOICE_NUMBER and
CUSTOMER_NAME like @CUSTOMER_NAME
ORDER BY Unique_ID desc
To do that, I built a covering index (with included columns) on the columns CONTRACT_NUMBER, INVOICE_NUMBER, and CUSTOMER_NAME.
CREATE NONCLUSTERED INDEX [ix_search_columns_without_uniqueid] ON [dbo].[MyTable]
(
[CONTRACT_NUMBER] ASC,
[CUSTOMER_NAME] ASC,
[INVOICE_NUMBER] ASC
)
INCLUDE ( [ID],
[AGENCY],
[AMOUNT],
[INVOICE_MONTH],
[INVOICE_YEAR],
[Unique_ID],
[Contract_nr],
[bar_code]) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
Still the query takes from 3 to 10 seconds to execute. From the query execution plan I see that an index seek operation takes place, consuming about 30% of the total workload, and then a Sort (Top N) operation consumes the other 70%. Any idea how I can optimize this query? A response time of less than 1 second is preferred.
Note: I also tried to include the column [Unique_ID] in the index key columns. In that case the query execution plan does an index scan instead, but with many users querying the database I am having the same problem.
Check this page for more detail.
Update the statistics with a full scan to make the optimizer's work easier.
UPDATE STATISTICS tablename WITH fullscan
GO
Set statistics time on and execute the following query
SET STATISTICS time ON
GO
SELECT num_of_reads, num_of_bytes_read,
num_of_writes, num_of_bytes_written
FROM sys.dm_io_virtual_file_stats(DB_ID('tempdb'), 1)
GO
SELECT TOP 100 c1, c2,c3
FROM yourtablename
WHERE c1<30000
ORDER BY c2
GO
SELECT num_of_reads, num_of_bytes_read,
num_of_writes, num_of_bytes_written
FROM sys.dm_io_virtual_file_stats(DB_ID('tempdb'), 1)
GO
Result
CPU time = 124 ms, elapsed time = 91 ms
Before Query execution
num_of_reads num_of_bytes_read num_of_writes num_of_bytes_written
-------------------- -------------------- -------------------- --------------------
725864 46824931328 793589 51814416384
After Query execution
num_of_reads num_of_bytes_read num_of_writes num_of_bytes_written
-------------------- -------------------- -------------------- --------------------
725864 46824931328 793589 51814416384
Source : https://www.mssqltips.com/sqlservertip/2053/trick-to-optimize-top-clause-in-sql-server/
Try replacing your clustered index (currently on two columns) with one solely on Unique_ID (assuming that it really is unique). This will aid your sorting. Then add a second covering index - as you have tried - on the three columns used in the WHERE. Check that your statistics are up to date. I have a feeling that the column bar_code in your PK is preventing your sort from running as quickly as it could.
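If you try that, the restructuring would look something like this sketch (it assumes Unique_ID really is unique and that nothing references PK_MyTable):
-- Drop the two-column clustered PK, cluster on the sort column instead,
-- and keep the primary key as a nonclustered constraint
ALTER TABLE dbo.MyTable DROP CONSTRAINT PK_MyTable;
CREATE UNIQUE CLUSTERED INDEX IX_MyTable_UniqueID
ON dbo.MyTable (Unique_ID DESC);
ALTER TABLE dbo.MyTable
ADD CONSTRAINT PK_MyTable PRIMARY KEY NONCLUSTERED (ID, bar_code);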
Do your variables contain wildcards? If they do, and they are leading wildcards, the index on the WHERE columns cannot be used. If they are not wildcarded, try a direct "=", assuming case-sensitivity is not an issue.
UPDATE: since you have leading wildcards, you will not be able to take advantage of an index on CONTRACT_NUMBER, INVOICE_NUMBER, or CUSTOMER_NAME. As GriGrim suggested, the only alternative here is to use full-text searches (the CONTAINS keyword etc.).
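A minimal full-text setup would be along these lines (names are illustrative; full-text indexes need a single-column unique key, and CONTAINS matches word prefixes rather than arbitrary substrings, so it is not a drop-in replacement for a double-wildcard LIKE):
CREATE FULLTEXT CATALOG ftCatalog AS DEFAULT;
-- Assumes a unique single-column index on Unique_ID exists, e.g. IX_MyTable_UniqueID
CREATE FULLTEXT INDEX ON dbo.MyTable (CONTRACT_NUMBER, INVOICE_NUMBER, CUSTOMER_NAME)
KEY INDEX IX_MyTable_UniqueID;
SELECT TOP 35 ID, CONTRACT_NUMBER, CUSTOMER_NAME, INVOICE_NUMBER
FROM dbo.MyTable
WHERE CONTAINS(CUSTOMER_NAME, '"smith*"')
ORDER BY Unique_ID DESC;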

Soft Delete - Use IsDeleted flag or separate joiner table?

Should we use a flag for soft deletes, or a separate joiner table? Which is more efficient? The database is SQL Server.
Background Information
A while back we had a DB consultant come in and look at our database schema. When we soft delete a record, we update an IsDeleted flag on the appropriate table(s). It was suggested that instead of using a flag, we store the deleted records in a separate table and use a join, as that would be better. I've put that suggestion to the test, but at least on the surface, the extra table and join look to be more expensive than using a flag.
Initial Testing
I've set up this test.
Two tables, Example and DeletedExample. I added a nonclustered index on the IsDeleted column.
I did three tests, loading a million records with the following deleted/non-deleted ratios:
Deleted/NonDeleted
50/50
10/90
1/99
Results (execution plan screenshots omitted) for the 50/50, 10/90, and 1/99 ratios.
Database scripts, for reference: Example, DeletedExample, and the index on Example.IsDeleted.
CREATE TABLE [dbo].[Example](
[ID] [int] NOT NULL,
[Column1] [nvarchar](50) NULL,
[IsDeleted] [bit] NOT NULL,
CONSTRAINT [PK_Example] PRIMARY KEY CLUSTERED
(
[ID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[Example] ADD CONSTRAINT [DF_Example_IsDeleted] DEFAULT ((0)) FOR [IsDeleted]
GO
CREATE TABLE [dbo].[DeletedExample](
[ID] [int] NOT NULL,
CONSTRAINT [PK_DeletedExample] PRIMARY KEY CLUSTERED
(
[ID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[DeletedExample] WITH CHECK ADD CONSTRAINT [FK_DeletedExample_Example] FOREIGN KEY([ID])
REFERENCES [dbo].[Example] ([ID])
GO
ALTER TABLE [dbo].[DeletedExample] CHECK CONSTRAINT [FK_DeletedExample_Example]
GO
CREATE NONCLUSTERED INDEX [IX_IsDeleted] ON [dbo].[Example]
(
[IsDeleted] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
GO
The numbers you have seem to indicate that my initial impression was correct: if your most common query against this database is to filter on IsDeleted = 0, then performance will be better with a simple bit flag, especially if you make wise use of indexes.
If you often query for deleted and undeleted data separately, then you could see a performance gain by having a table for deleted items and another for undeleted items, with identical fields. But denormalizing your data like this is rarely a good idea, as it will most often cost you far more in code maintenance costs than it will gain you in performance increases.
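Concretely, the two shapes of "live rows" query being compared are along these lines (sketched from the tables in the question):
-- Flag approach: filter on the bit column
SELECT ID, Column1
FROM dbo.Example
WHERE IsDeleted = 0;
-- Joiner-table approach: anti-join against DeletedExample
SELECT e.ID, e.Column1
FROM dbo.Example e
LEFT JOIN dbo.DeletedExample d ON e.ID = d.ID
WHERE d.ID IS NULL;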
I'm not a SQL expert, but in my opinion it all depends on the usage frequency of the database. If the database is accessed by a large number of users and needs to be efficient, then a separate deleted-records table is the good choice. The better option would be to use a flag during production time, and as part of daily/weekly/monthly maintenance move all the soft-deleted records to the deleted-records table and clear the production table of them. A mixture of both options would be a good one.