RawMaterialID: 393
Level01: POLYBAGS
Level02: PB.HGR
Level03: 33x46cm 30.5μm
Level04: HANGER HOLE
Level05: HANGER.HOLE1.5"
Level06: LDPE
Description60Digit: PB.HGR 33x46cm 30.5μm HANGER HOLE HANGER.HOLE1.5" LDPE
Description30Digit: PB.HGR 33x46cm 30.5μm
I am trying to retrieve the row above using the following SQL query:
SELECT *
FROM [dbo].[RawMaterial]
WHERE [Level01] = 'POLYBAGS'
AND [Level02] = 'PB.HGR'
AND [Level03] = '33x46cm 30.5μm'
AND [Level04] = 'HANGER HOLE'
AND [Level05] = 'HANGER.HOLE1.5"'
AND [Level06] = 'LDPE'
The query fails because of the 'μ' character in the Level03 column. Is there any workaround for this?
Table design:
[RawMaterialID] [int] IDENTITY(1,1) NOT NULL,
[RMProcurementGroupID] [int] NULL,
[Level01] [nvarchar](255) NULL,
[Level02] [nvarchar](255) NULL,
[Level03] [nvarchar](255) NULL,
[Level04] [nvarchar](255) NULL,
[Level05] [nvarchar](255) NULL,
[Level06] [nvarchar](255) NULL,
[Description60Digit] [nvarchar](255) NULL,
[Description30Digit] [nvarchar](255) NULL,
[RawMaterialTypeID] [int] NULL
Make a test using:
AND [Level03] = N'33x46cm 30.5μm'
The N prefix tells SQL Server that the string literal to compare is NVARCHAR (Unicode); without it, the literal is treated as VARCHAR and the 'μ' character can be altered by conversion to the database's default code page, so it no longer matches the stored value.
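For completeness, the same query with Unicode literals on every nvarchar comparison would look something like this (only the N prefixes differ from the query above):
SELECT *
FROM [dbo].[RawMaterial]
WHERE [Level01] = N'POLYBAGS'
AND [Level02] = N'PB.HGR'
AND [Level03] = N'33x46cm 30.5μm'
AND [Level04] = N'HANGER HOLE'
AND [Level05] = N'HANGER.HOLE1.5"'
AND [Level06] = N'LDPE'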
I have a large SaaS system database that has been running for 8+ years with no problems. It is an Azure SQL database, and we host the corresponding web application through Azure too.
Suddenly, in the early hours of this morning, some of the C# web app reports started failing because duplicate table records were being detected. I checked the table in question and yes, there are duplicate identical records, with clashing unique keys, in the table.
I've never seen this before. How can a unique key fail to enforce itself during inserts/updates?
EDIT:
Here's the schema:
CREATE TABLE [tenant_clientnamehere].[tbl_cachedstock](
[clusteringkey] [bigint] IDENTITY(1,1) NOT NULL,
[islivecache] [bit] NOT NULL,
[id] [uniqueidentifier] NOT NULL,
[stocklocation_id] [uniqueidentifier] NOT NULL,
[stocklocation_referencecode] [nvarchar](50) NOT NULL,
[stocklocation_description] [nvarchar](max) NOT NULL,
[productreferencecode] [nvarchar](50) NOT NULL,
[productdescription] [nvarchar](max) NOT NULL,
[unitofmeasurename] [nvarchar](50) NOT NULL,
[targetstocklevel] [decimal](12, 3) NULL,
[minimumreplenishmentquantity] [decimal](12, 3) NULL,
[minimumstocklevel] [decimal](12, 3) NULL,
[packsize] [int] NOT NULL,
[isbuffermanageddynamically] [bit] NOT NULL,
[dbmcheckperioddays] [int] NULL,
[dbmcheckperiodbuffergroup_id] [uniqueidentifier] NULL,
[ignoredbmuntildate] [datetime2](7) NULL,
[notes1] [nvarchar](100) NOT NULL,
[notes2] [nvarchar](100) NOT NULL,
[notes3] [nvarchar](100) NOT NULL,
[notes4] [nvarchar](100) NOT NULL,
[notes5] [nvarchar](100) NOT NULL,
[notes6] [nvarchar](100) NOT NULL,
[notes7] [nvarchar](100) NOT NULL,
[notes8] [nvarchar](100) NOT NULL,
[notes9] [nvarchar](100) NOT NULL,
[notes10] [nvarchar](100) NOT NULL,
[seasonaleventreferencecode] [nvarchar](50) NULL,
[seasonaleventtargetstocklevel] [decimal](12, 3) NULL,
[isarchived] [bit] NOT NULL,
[isobsolete] [bit] NOT NULL,
[currentstocklevel] [decimal](12, 3) NULL,
[quantityenroute] [decimal](12, 3) NULL,
[recommendedreplenishmentquantity] [decimal](12, 3) NULL,
[bufferpenetrationpercentage] [int] NOT NULL,
[bufferzone] [nvarchar](10) NOT NULL,
[bufferpenetrationpercentagereplenishment] [int] NOT NULL,
[bufferzonereplenishment] [nvarchar](10) NOT NULL,
CONSTRAINT [PK_tbl_cachedstock] PRIMARY KEY CLUSTERED
(
[clusteringkey] ASC
)WITH (STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, OPTIMIZE_FOR_SEQUENTIAL_KEY = OFF) ON [PRIMARY],
CONSTRAINT [UK_tbl_cachedstock_1] UNIQUE NONCLUSTERED
(
[islivecache] ASC,
[id] ASC
)WITH (STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, OPTIMIZE_FOR_SEQUENTIAL_KEY = OFF) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO
ALTER TABLE [tenant_clientnamehere].[tbl_cachedstock] ADD CONSTRAINT [DF__tbl_cache__isarc__1A200257] DEFAULT ((0)) FOR [isarchived]
GO
ALTER TABLE [tenant_clientnamehere].[tbl_cachedstock] ADD CONSTRAINT [DF__tbl_cache__isobs__1B142690] DEFAULT ((0)) FOR [isobsolete]
GO
And the clashing key values (two rows with these values are still in the table) are:
islivecache = 1
id = BA7AD2FD-EFAA-485C-A200-095626C583A3
The cause of this turns out to be very simple - but troubling: every single unique key, in every single table, in every single schema, was simultaneously set to "Is Disabled", so while the constraints still exist they are not being applied.
I've manually cleared out the duplicate records and rebuilt all the indexes to re-enable the checks, and everything is fine again, but I have no idea how this can suddenly happen.
I'm currently working with Azure support to get to the bottom of it.
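For anyone who hits the same thing, this is roughly how you can list the disabled unique indexes and re-enable one by rebuilding it (a sketch; any duplicate rows have to be removed first, otherwise the rebuild fails):
-- Find unique indexes/constraints that are currently disabled
SELECT OBJECT_SCHEMA_NAME(i.object_id) AS schema_name,
       OBJECT_NAME(i.object_id) AS table_name,
       i.name AS index_name
FROM sys.indexes AS i
WHERE i.is_disabled = 1
  AND i.is_unique = 1;

-- Rebuilding a disabled index re-enables it
ALTER INDEX [UK_tbl_cachedstock_1]
ON [tenant_clientnamehere].[tbl_cachedstock]
REBUILD;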
SQL Server 2016
I was given an interesting table structure and am being asked to build some meaningful reporting off of it to show growth over time. I need help figuring out how to pivot the data into a result set that will be easier to work with in SSRS.
Table Structure:
CREATE TABLE [dbo].[Person_Order_ETL_Delay](
[ID] [int] IDENTITY(1,1) NOT NULL,
[Person_Name] [varchar](255) NULL,
[Order_DATE] [date] NOT NULL,
[Order_INTERVAL] [char](5) NOT NULL,
[00:00-00:30] [int] NOT NULL,
[00:30-01:00] [int] NOT NULL,
[01:00-01:30] [int] NOT NULL,
[01:30-02:00] [int] NOT NULL,
[02:00-02:30] [int] NOT NULL,
[02:30-03:00] [int] NOT NULL,
[03:00-03:30] [int] NOT NULL,
[03:30-04:00] [int] NOT NULL,
[04:00-04:30] [int] NOT NULL,
[04:30-05:00] [int] NOT NULL,
[05:00-05:30] [int] NOT NULL,
[05:30-06:00] [int] NOT NULL,
[06:00-06:30] [int] NOT NULL,
[06:30-07:00] [int] NOT NULL,
[07:00-07:30] [int] NOT NULL,
[07:30-08:00] [int] NOT NULL,
[08:00-08:30] [int] NOT NULL,
[08:30-09:00] [int] NOT NULL,
[09:00-09:30] [int] NOT NULL,
[09:30-10:00] [int] NOT NULL,
[10:00-10:30] [int] NOT NULL,
[10:30-11:00] [int] NOT NULL,
[11:00-11:30] [int] NOT NULL,
[11:30-12:00] [int] NOT NULL,
[12:00-12:30] [int] NOT NULL,
[12:30-13:00] [int] NOT NULL,
[13:00-13:30] [int] NOT NULL,
[13:30-14:00] [int] NOT NULL,
[14:00-14:30] [int] NOT NULL,
[14:30-15:00] [int] NOT NULL,
[15:00-15:30] [int] NOT NULL,
[15:30-16:00] [int] NOT NULL,
[16:00-16:30] [int] NOT NULL,
[16:30-17:00] [int] NOT NULL,
[17:00-17:30] [int] NOT NULL,
[17:30-18:00] [int] NOT NULL,
[18:00-18:30] [int] NOT NULL,
[18:30-19:00] [int] NOT NULL,
[19:00-19:30] [int] NOT NULL,
[19:30-20:00] [int] NOT NULL,
[20:00-20:30] [int] NOT NULL,
[20:30-21:00] [int] NOT NULL,
[21:00-21:30] [int] NOT NULL,
[21:30-22:00] [int] NOT NULL,
[22:00-22:30] [int] NOT NULL,
[22:30-23:00] [int] NOT NULL,
[23:00-23:30] [int] NOT NULL,
[23:30-00:00] [int] NOT NULL
) ON [PRIMARY]
GO
Table logic:
This table represents ETL activity in the target application versus data availability in the source application.
Let's take ID = 12 as an example.
On 4/1/2020 the source application is taking Bilbo's orders and processing them in the 05:30 order interval (the system time in [Order_INTERVAL]). What we are seeing is that the data was made available to the target reporting application (via ETL time stamps) between [06:00-06:30] (6 volumes finally able to be captured) and [06:30-07:00] (3 more volumes processed and available for ETL).
So we have a source to target delay.
The source system attributes these volumes (6 + 3 = 9 total orders) to the 05:30 interval; however, the target system was not able to capture them until the 06:00 interval, with the remainder arriving in the 06:30 interval for the target reporting application.
So this shows that there is some delay between transactions completing and their becoming available to the reporting application.
What I would like to do is produce a result set from this that represents the data more like this:
Pivot the pertinent ETL interval data, displaying the volume and its percentage of the total volume. "Pertinent" means the floor should be the source application's interval bucket for the volume, and the ceiling should be the last ETL bucket in the series that contains a volume greater than 0.
Any help with this would be outstanding.
You need UNPIVOT to do that.
Please see the db<>fiddle example here.
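The shape of the query is roughly this (a sketch against the table in the question; the IN list is abbreviated and would need to name all 48 interval columns):
SELECT u.ID,
       u.Person_Name,
       u.Order_DATE,
       u.Order_INTERVAL,
       u.ETL_Interval,
       u.Volume,
       CAST(100.0 * u.Volume / SUM(u.Volume) OVER (PARTITION BY u.ID) AS decimal(5, 2)) AS PctOfTotal
FROM [dbo].[Person_Order_ETL_Delay]
UNPIVOT (Volume FOR ETL_Interval IN
            ([05:30-06:00], [06:00-06:30], [06:30-07:00]) -- list all 48 interval columns here
        ) AS u
WHERE u.Volume > 0;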
I've just created an item dimension in SSAS. I have not added any hierarchies, have confirmed there are no null values in my table, and have confirmed there are no duplicate keys in my table, yet I still get the duplicate key error. I have the primary key set on the item key field. Any help would be greatly appreciated.
CREATE TABLE [dbo].[Dim_Items](
[item_key] [int] IDENTITY(1,1) NOT NULL,
[item_no] [varchar](30) NOT NULL,
[item_pref] [varchar](40) NOT NULL,
[item_div] [varchar](20) NOT NULL,
[item_cus] [varchar](50) NOT NULL,
[item_desc_1] [varchar](30) NOT NULL,
[item_desc_2] [varchar](30) NOT NULL,
[cus_part_no] [varchar](75) NOT NULL,
[item_loc] [char](3) NOT NULL,
[stk_uom] [char](2) NOT NULL,
[pur_uom] [char](2) NOT NULL,
[pur_to_stk_ratio] [decimal](11, 6) NOT NULL,
[mat_cost_type] [char](3) NOT NULL,
[mat_cost_desc] [char](15) NOT NULL,
[inv_category] [varchar](13) NOT NULL,
[stocked] [char](1) NOT NULL,
[controlled] [char](1) NOT NULL,
[pur_or_mfg] [varchar](20) NOT NULL,
[comm_cd] [char](4) NOT NULL,
[comm_desc] [char](30) NOT NULL,
[byr_plnr_cd] [int] NOT NULL,
[byr_plnr_name] [char](64) NOT NULL,
[min_ord_qty] [decimal](13, 4) NOT NULL,
[item_saftey_stk] [decimal](13, 4) NOT NULL,
[mrp_ord_up_to] [decimal](13, 4) NOT NULL,
[lead_time] [decimal](4, 1) NOT NULL,
[last_MPN] [varchar](50) NOT NULL,
[last_mfg] [varchar](40) NOT NULL,
[aml_list] [varchar](1100) NOT NULL,
[where_used] [varchar](6000) NOT NULL,
-- the primary key is on item_key, as described above (constraint name assumed)
CONSTRAINT [PK_Dim_Items] PRIMARY KEY CLUSTERED ([item_key] ASC)
) ON [PRIMARY]
Relationships
Error
This problem was caused by a TAB at the end of the field value. When I changed the dimension attribute key trimming property from "RIGHT" to "NONE" it fixed the issue. The better solution would be to remove tabs from the original data source using something like:
replace(item_no, char(9), '')
I was able to identify the issue when I noticed that the following query returned 2 rows, instead of the 1 row it returned without the wildcard:
SELECT * FROM Dim_Items WHERE item_pref LIKE '{value from the error message}' + '%'
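If it helps anyone else, this is roughly how you could find and clean the offending values in the source table (a sketch assuming the trailing tab is on item_pref, as it was in my case):
-- Find values that end in a TAB character
SELECT item_key, item_pref
FROM dbo.Dim_Items
WHERE item_pref LIKE '%' + CHAR(9);

-- Strip the TABs from the source data
UPDATE dbo.Dim_Items
SET item_pref = REPLACE(item_pref, CHAR(9), '')
WHERE item_pref LIKE '%' + CHAR(9);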
I have a table called UserCredential and it has a uniqueidentifier column [UserCredentialId]. When I try to create a new user, I get 00000000-0000-0000-0000-000000000000 on my first try; then when I try adding another user, it says the PK cannot be duplicated. At first I had a hard time guessing what this means, but I think it's because my uniqueidentifier column is not generating a random id.
What to do?
EDIT
Here is my SQL table structure:
CREATE TABLE [dbo].[UserCredential](
[UserCredentialId] [uniqueidentifier] NOT NULL,
[UserRoleId] [int] NOT NULL,
[Username] [varchar](25) NOT NULL,
[Password] [varchar](50) NOT NULL,
[PasswordSalt] [varchar](max) NOT NULL,
[FirstName] [varchar](50) NOT NULL,
[LastName] [varchar](50) NOT NULL,
[PayorCode] [varchar](20) NOT NULL,
[ProviderCode] [varchar](50) NULL,
[CorporationCode] [varchar](50) NULL,
[Department] [varchar](50) NULL,
[Status] [varchar](1) NOT NULL,
[DateCreated] [datetime] NOT NULL,
[DateActivated] [datetime] NULL,
[Deactivated] [datetime] NULL,
[DateUpdated] [datetime] NULL,
[CreatedBy] [varchar](50) NOT NULL,
[UpdatedBy] [varchar](50) NOT NULL,
[EmailAddress] [varchar](50) NULL,
[ContactNumber] [int] NULL,
[Picture] [varbinary](max) NULL,
CONSTRAINT [PK_UserCredential_1] PRIMARY KEY CLUSTERED
(
[UserCredentialId] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
I already set the default to (newid()) but it's still not working.
Set the Id of your user instance to Guid.NewGuid() in the application before saving:
user.Id = Guid.NewGuid();
Change your table definition to
[UserCredentialId] UNIQUEIDENTIFIER DEFAULT (NEWSEQUENTIALID()) NOT NULL
See why NEWSEQUENTIALID is preferred over NEWID at http://technet.microsoft.com/en-us/library/ms189786.aspx:
When a GUID column is used as a row identifier, using NEWSEQUENTIALID can be faster than using the NEWID function. This is because the NEWID function causes random activity and uses fewer cached data pages. Using NEWSEQUENTIALID also helps to completely fill the data and index pages.
You can have all values in the table default to a new value during insert, much like an IDENTITY column.
Run this to set the default for inserts to newid():
ALTER TABLE [dbo].[UserCredential] ADD CONSTRAINT [DF_UserCredential_UserCredentialID] DEFAULT (newid()) FOR [UserCredentialId]
GO
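Keep in mind that a default only fires when the INSERT omits the column; if the application explicitly sends Guid.Empty, you will still get the all-zero value. A small throwaway example (the table DefaultGuidDemo is hypothetical, just to illustrate the behaviour):
-- Hypothetical table showing how a NEWID() default behaves
CREATE TABLE dbo.DefaultGuidDemo
(
    Id uniqueidentifier NOT NULL
        CONSTRAINT DF_DefaultGuidDemo_Id DEFAULT (NEWID())
        CONSTRAINT PK_DefaultGuidDemo PRIMARY KEY,
    Name varchar(50) NOT NULL
);

-- Omitting the column lets the default fire and generates a new GUID
INSERT INTO dbo.DefaultGuidDemo (Name) VALUES ('first');

-- Explicitly supplying the empty GUID bypasses the default,
-- which is what produces 00000000-0000-0000-0000-000000000000 rows
INSERT INTO dbo.DefaultGuidDemo (Id, Name)
VALUES ('00000000-0000-0000-0000-000000000000', 'second');

SELECT * FROM dbo.DefaultGuidDemo;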
I have a program that uses MSSQL 2005. My problem is that this app is written in VB6, and when I get the customer list on one computer it returns 6000 rows, which is correct. But when I get the customer list on another computer with the same MSSQL (2005) and the same OS (Windows XP), I do not get the same result. What can I do to solve this problem?
Thanks in advance.
EDIT
The query is simple and it is:
SELECT * FROM Buyer
I think maybe the problem is in the indexing, the clustered index, the SATA3 HDD, or something else.
This is the design of the table I was speaking about:
CREATE TABLE [dbo].[Buyer](
[BuyerCode] [nvarchar](10) COLLATE Arabic_CI_AS NOT NULL,
[Atbar] [money] NULL,
[AddB] [nvarchar](100) COLLATE Arabic_CI_AS NULL,
[Tel] [nvarchar](200) COLLATE Arabic_CI_AS NULL,
[CityCode] [nvarchar](6) COLLATE Arabic_CI_AS NOT NULL,
[CityName] [nvarchar](35) COLLATE Arabic_CI_AS NULL,
[TBLO] [nvarchar](150) COLLATE Arabic_CI_AS NULL,
[SKH] [nvarchar](15) COLLATE Arabic_CI_AS NULL,
[NP] [nvarchar](50) COLLATE Arabic_CI_AS NULL,
[CodeAG] [nvarchar](20) COLLATE Arabic_CI_AS NULL,
[CodeSF] [nvarchar](2) COLLATE Arabic_CI_AS NOT NULL,
[NameSF] [nvarchar](70) COLLATE Arabic_CI_AS NULL,
[KindM] [nvarchar](15) COLLATE Arabic_CI_AS NULL,
[VAZ] [bit] NOT NULL,
[name] [nvarchar](250) COLLATE Arabic_CI_AS NULL,
[vazk] [bit] NULL,
[Tozeh] [nvarchar](350) COLLATE Arabic_CI_AS NOT NULL CONSTRAINT [DF_Buyer_Tozeh] DEFAULT (N''),
[Tozehp] [nvarchar](350) COLLATE Arabic_CI_AS NOT NULL CONSTRAINT [DF_Buyer_Tozehp] DEFAULT (N''),
[Onvan] [nvarchar](50) COLLATE Arabic_CI_AS NULL,
[GhK] [smallint] NULL,
[AutoFCode] [bit] NOT NULL CONSTRAINT [DF_Buyer_AutoFCode] DEFAULT ((1)),
[CodeF] [numeric](18, 0) NULL,
[NameF] [nvarchar](100) COLLATE Arabic_CI_AS NULL,
[DateF] [char](10) COLLATE Arabic_CI_AS NULL,
CONSTRAINT [PK_Buyer] PRIMARY KEY CLUSTERED
(
[BuyerCode] ASC
)WITH (IGNORE_DUP_KEY = OFF) ON [PRIMARY]
) ON [PRIMARY];
I just recently did a VB6 update where the control couldn't handle more than 6000 entries. Very likely the same reason here; it's probably a maximum for that control. Check to see if you can get an updated version (if it's third party), or maybe use a different control.
Ensure your connection string is the same on both installs of your app (recompile if necessary)
Ensure you're connecting to the same database on both machines (that is, not using localhost)
Ensure your VB code isn't modifying the Recordset
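One quick way to rule the database in or out is to compare the raw row count on both machines outside the VB6 app (for example from Management Studio), before the control ever sees the data; a sketch:
-- Run this against the database each machine actually connects to;
-- if both return the same count, the limit is in the client control,
-- not in SQL Server
SELECT COUNT(*) AS BuyerRowCount
FROM [dbo].[Buyer];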