How to validate a date in a varchar column in SQL Server
I have a staging table in which every column is varchar, so all CSV records load into it without type errors. After loading, I need to validate a specific date column to check whether each value is actually a proper date. Any row whose value is a non-date string should be eliminated from the staging table.
Building on @Larnu's comment, you can use TRY_CONVERT to select only the records that contain proper dates, then use those records for further action. Consider the following example:
-- Using a table variable as an example of the source data
DECLARE @SampleTable TABLE
(
    Id int,
    SomePossibleDateField varchar(20)
);

-- Now insert some sample data into the table variable, just for illustration
INSERT INTO @SampleTable
VALUES (1, '2021-05-04'),
       (2, '2021-05-05'),
       (3, 'not a date'),
       (4, NULL),
       (5, ''),
       (6, '2021-05-06');

-- Now select all the records that contain proper dates:
SELECT * FROM @SampleTable WHERE TRY_CONVERT(DATE, [SomePossibleDateField], 120) > '1900-01-01';
The results of the final select statement above are
Id SomePossibleDateField
1 2021-05-04
2 2021-05-05
6 2021-05-06
Some things to note:
First, in this sample, for simplicity, all the dates are expressed as format 120 (ODBC Canonical). So you may need to try different formats depending on your data. See the date formats listed on the CAST page for the different format values.
Second, that select statement tests for dates greater than the year 1900, but you can change that to any other date that makes sense for your data.
Finally, in case you are looking specifically for records that only contain bad data, you can do that by changing the select statement to something like:
SELECT * FROM @SampleTable
WHERE TRY_CONVERT(DATE, [SomePossibleDateField], 120) = ''
   OR TRY_CONVERT(DATE, [SomePossibleDateField], 120) IS NULL;
Which results in:
Id SomePossibleDateField
3 not a date
4 NULL
5
Unfortunately, an empty string does not convert to NULL the way bad data does; SQL Server converts '' to the default date of 1900-01-01 instead. So, if you are specifically looking for bad records, you will need to check both for IS NULL and for '' as shown in the example above.
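To actually eliminate the bad rows from the staging table, as the question asks, the same predicate can drive a DELETE. A minimal sketch, where Staging and SomePossibleDateField are hypothetical names standing in for your actual staging table and date column:

```sql
-- Remove rows whose date column does not hold a valid date.
-- Staging and SomePossibleDateField are placeholder names; substitute your own.
DELETE FROM Staging
WHERE TRY_CONVERT(DATE, SomePossibleDateField, 120) IS NULL  -- bad data or NULL
   OR SomePossibleDateField = '';                            -- empty string
```

Running the SELECT version first to review what would be deleted is a sensible precaution before issuing the DELETE.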
Related
Dates inserting incorrectly - SQL
Dates are not inserting correctly to the table, any explanation / solution?

create table test ( ID bigint, MarketOpen datetime );
insert into test (ID, MarketOpen) values (1, 2019-01-19-11-40-00);
select * from test;

Fiddle
That's entirely the wrong way to enter a date. SQL Server is treating your current syntax as an arithmetic expression, e.g. 2019-01-19-11-40-00 = 1948, and then converting the number 1948 to a datetime. You need to use a formatted string, e.g.:

insert into test (ID, MarketOpen) values (1, '2019-01-19 11:40:00');

Note: as mentioned by seanb, it is best practice to use a non-ambiguous format when specifying dates, and the ISO format (yyyymmdd) is probably the best of these.
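The arithmetic interpretation is easy to verify directly, and an ISO-style literal avoids any dependence on language or DATEFORMAT settings. A small sketch:

```sql
-- The unquoted value is evaluated as integer subtraction:
-- 2019 - 01 - 19 - 11 - 40 - 00 = 1948,
-- and the integer 1948 is then read as a day offset from the datetime epoch 1900-01-01.
SELECT CAST(2019-01-19-11-40-00 AS datetime);   -- 1905-05-03 00:00:00.000

-- An ISO 8601 string literal is unambiguous under any SET DATEFORMAT setting:
SELECT CAST('20190119 11:40:00' AS datetime);   -- 2019-01-19 11:40:00.000
```

The first SELECT reproduces exactly what the failing INSERT silently stored.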
Insert current date and time in a column and keep it constant
I am using SQL Server for my project with JavaFX. I have purchase and sale tables, and one column in each is a date column that records the current date and time at which the transaction was saved. Currently that date column has the varchar datatype and uses a computed column specification with the following expression:

(CONVERT([varchar](25),getdate(),(120)))

But when I select records from that table using the query

SELECT pr.Date, p.Name, pr.Quantity, s.Name, p.Pur_Price
FROM (([Product] AS p
INNER JOIN [Purchase] AS pr ON pr.Product_id=p.Product_id)
INNER JOIN [Supplier] AS s ON s.Supplier_Id=p.Supplier_Id)
WHERE pr.Date >= dateadd(dd, 0, datediff(dd, 0, getdate()-30))

it selects all the records, with every date set to the current date and time. Thanks in advance; looking forward to your replies.
The problem is that your Date column is computed on the fly and not actually stored in the table. Each time you SELECT from that table, the computed column's expression, (CONVERT([varchar](25),getdate(),(120))), is evaluated again, resulting in the same value for all rows.

One fix would be a PERSISTED computed column, so that values are actually stored with the table when inserting or updating:

CREATE TABLE Product (
    OtherColumns INT,
    [Date] AS (CONVERT([varchar](25), getdate(), 120)) PERSISTED)

The problem with this is that non-deterministic expressions can't be persisted, as this error message shows:

Msg 4936, Level 16, State 1, Line 1
Computed column 'Date' in table 'Product' cannot be persisted because the column is non-deterministic.

You have several other options. Please use DATE or DATETIME columns to store and handle dates, and avoid using VARCHAR for this, as it brings many problems. The following examples use DATETIME.

Use a DEFAULT constraint on the column with the expression you want:

CREATE TABLE Product (
    OtherColumns INT,
    [Date] DATETIME DEFAULT GETDATE())

INSERT INTO Product (OtherColumns) -- Skip the Date column on the INSERT
VALUES (1)

SELECT * FROM Product

OtherColumns Date
1            2018-12-14 08:49:08.347

INSERT INTO Product (OtherColumns, Date)
VALUES (2, DEFAULT) -- Or use the keyword DEFAULT to get the default value

SELECT * FROM Product

OtherColumns Date
1            2018-12-14 08:49:08.347
2            2018-12-14 08:50:10.070

Or use a trigger to set the value. This will override any value that the original INSERT or UPDATE set (it executes after the operation, as stated in its definition):

CREATE TRIGGER utrProductSetDate ON Product
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON

    UPDATE P
    SET Date = GETDATE()
    FROM inserted AS I
    INNER JOIN Product AS P ON I.OtherColumns = P.OtherColumns -- Assuming PK or unique columns join
END
Thanks, all of you. I solved my problem by changing the date column of that table to the datetime data type, and in my query I enter the date using getdate(). That worked for saving the current date and time in my purchase and sale tables.
SQL Pivot table - filter timestamp
I have a logger table with timestamp, tagname and tagvalue fields. Every time a tag value changes, the control system writes a record to the table with those 3 parameters. The timestamps of the records are not synchronized. I want to run a pivot query that returns all the data for 3 different tags, showing the values of those 3 tags only. When I run the query below, I get back a dataset with every timestamp in the table and lots of NULL values in the value fields (SQL Server returns all timestamp values). The query:

SELECT *
FROM (
    SELECT [timestamp], [_VAL] AS '_VAL', [point_id]
    FROM DATA_LOG) p
PIVOT (SUM([_VAL]) FOR point_id IN ([GG02.PV_CURNT], [GG02.PV_JACKT], [GG02.PV_SPEED], [GG02.PV_TEMP])
) AS tagvalue
ORDER BY timestamp ASC

Results example: (screenshot not reproduced)

Can anybody help me limit the timestamps SQL Server returns to only those relevant to these tags, rather than every timestamp in the table? (The returned list should include a record when at least one of the tag values is not NULL.) Ideas that produce the format shown above without using PIVOT are also welcome.
I think you simply want:

WHERE [GG02.PV_CURNT] IS NOT NULL OR
      [GG02.PV_JACKT] IS NOT NULL OR
      [GG02.PV_SPEED] IS NOT NULL OR
      [GG02.PV_TEMP] IS NOT NULL

applied to the outer query, after the PIVOT, since that is where the pivoted columns exist.
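Putting that together with the question's query, a sketch of the full statement (using the table and tag names from the question):

```sql
SELECT *
FROM (
    SELECT [timestamp], [_VAL], [point_id]
    FROM DATA_LOG
) p
PIVOT (
    SUM([_VAL])
    FOR point_id IN ([GG02.PV_CURNT], [GG02.PV_JACKT], [GG02.PV_SPEED], [GG02.PV_TEMP])
) AS tagvalue
-- Filter after the PIVOT, where the per-tag columns exist;
-- keep a row only when at least one tag value is present.
WHERE [GG02.PV_CURNT] IS NOT NULL
   OR [GG02.PV_JACKT] IS NOT NULL
   OR [GG02.PV_SPEED] IS NOT NULL
   OR [GG02.PV_TEMP]  IS NOT NULL
ORDER BY [timestamp] ASC;
```

This drops the timestamps that belong only to other tags, which is what produced the all-NULL rows.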
Postgres Data type conversion
I have this dataset in SQL format, but the DATE values need to be converted to a different format because I get the following error:

CREATE TABLE
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
ERROR:  date/time field value out of range: "28-10-96"
LINE 58: ...040','2','10','','P13-00206','','','','','1-3-95','28-10-96'...
HINT:  Perhaps you need a different "datestyle" setting.

I've read the documentation on date/time formats: http://www.postgresql.org/docs/current/static/datatype-datetime.html. My question is how to convert all of the dates into a proper format without going through all 500 or so data rows and checking each one before inserting into the DB. The backend is handled by Rails, but I figured cleaning this up in SQL would be best. I have a CREATE TABLE statement above this dataset; the dataset was given to me via a DBF converter/external source. Here's part of my dataset:
APPT','0','3','156','0','0','156','1','10','x','','','','','x','1-4-86','1-6-86','2500','','0','Napa Valley','3'); INSERT INTO winery_attributes (ID,NAME,STATUS,BLDSZ_ORIG,BLDSZ_CURR,HAS_CAVE,CAVESIZE,PROD_ORIG,PROD_CURR,TOUR_TASTG,VISIT_DAY,VISIT_WEEK,VISIT_YR,VISIT_MKTG,VISIT_NMEV,VISIT_ALL,EMPLYEENUM,PARKINGNUM,WDO,LAST_UP,IN_CITYBDY,IN_AIASP,NOTES,SMLWNRYEXM,APPRV_DATE,ESTAB_DATE,TOTAL_SIZE,SUBJ_TO_75,GPY_AT_75,AVA,SUP_DIST) VALUES ('3','ALTA VINEYARD CELLAR','PROD','480','480','','0','5000','5000','NO','0','4','208','0','0','208','4','6','x','003_U-387879','','','','','2-5-79','1-9-80','480','','0','Diamond Mountain District','3'); INSERT INTO winery_attributes (ID,NAME,STATUS,BLDSZ_ORIG,BLDSZ_CURR,HAS_CAVE,CAVESIZE,PROD_ORIG,PROD_CURR,TOUR_TASTG,VISIT_DAY,VISIT_WEEK,VISIT_YR,VISIT_MKTG,VISIT_NMEV,VISIT_ALL,EMPLYEENUM,PARKINGNUM,WDO,LAST_UP,IN_CITYBDY,IN_AIASP,NOTES,SMLWNRYEXM,APPRV_DATE,ESTAB_DATE,TOTAL_SIZE,SUBJ_TO_75,GPY_AT_75,AVA,SUP_DIST) VALUES ('4','BLACK STALLION','PROD','43600','43600','','0','100000','100000','PUB','50','350','18200','0','0','18200','2','45','x','P13-00391','','','','','1-5-80','1-9-85','43600','','0','Oak Knoll District of Napa Valley','3'); INSERT INTO winery_attributes (ID,NAME,STATUS,BLDSZ_ORIG,BLDSZ_CURR,HAS_CAVE,CAVESIZE,PROD_ORIG,PROD_CURR,TOUR_TASTG,VISIT_DAY,VISIT_WEEK,VISIT_YR,VISIT_MKTG,VISIT_NMEV,VISIT_ALL,EMPLYEENUM,PARKINGNUM,WDO,LAST_UP,IN_CITYBDY,IN_AIASP,NOTES,SMLWNRYEXM,APPRV_DATE,ESTAB_DATE,TOTAL_SIZE,SUBJ_TO_75,GPY_AT_75,AVA,SUP_DIST) VALUES ('5','ALTAMURA WINERY','PROD','11800','11800','x','3115','50000','50000','APPT','0','20','1040','0','0','1040','2','10','','P13-00206','','','','','1-3-95','28-10-96','14915','x','50000','Napa Valley','4');
The dates in your data set are strings. Since they are not in the default datestyle (which is YYYY-MM-DD), you should explicitly convert them to a date as follows:

to_date('1-5-80', 'DD-MM-YY')

If you store the data in a timestamp instead, use:

to_timestamp('1-5-80', 'DD-MM-YY')

If you are given the data set in the form of the INSERT statements you show, then first load all the data as simple strings into varchar columns, then add date columns and do an UPDATE (and similarly for integer and boolean columns):

UPDATE my_table
SET estab = to_date(ESTAB_DATE, 'DD-MM-YY'),  -- column estab of type date
    apprv = to_date(APPRV_DATE, 'DD-MM-YY'),  -- etc
    ...

When the update is done you can ALTER TABLE to drop the text columns holding dates (and integers, booleans).
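Alternatively, Postgres can convert a text column in place with ALTER TABLE ... ALTER COLUMN ... TYPE ... USING, which avoids adding a second column and dropping the old one. A sketch, assuming the winery_attributes table and the date columns shown in the question:

```sql
-- Rewrite the varchar date columns as real date columns in place.
-- The USING clause tells Postgres how to convert each existing value;
-- NULLIF guards against empty strings, which to_date would reject.
ALTER TABLE winery_attributes
    ALTER COLUMN apprv_date TYPE date USING to_date(NULLIF(apprv_date, ''), 'DD-MM-YY'),
    ALTER COLUMN estab_date TYPE date USING to_date(NULLIF(estab_date, ''), 'DD-MM-YY');
```

If any value fails to parse, the whole ALTER rolls back, so it is safe to attempt on the loaded staging data.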
Order by on Two Datetime Fields .If one field is Null How to avoid Sybase from populating it with default datetime
I am using a SELECT query to insert data into a temp table. In the SELECT I order by two columns, something like this:

insert into #temp
select accnt_no, acct_name, start_date, end_date
from table
order by start_date DESC, end_date DESC

select * from #temp

When there is an entry in the start_date field and a NULL in the end_date field, Sybase fills end_date with a default date (Jan 1 1900) during this operation. I don't want that to happen: if the end_date field is NULL, the data should be written as NULL. Any suggestion on how to keep it NULL while fetching the data from the table?
The 1/1/1900 usually comes from trying to cast an empty string into a datetime. Is your 'date' source column an actual datetime datatype or a string-ish varchar or char?
Sounds like the table definition requires that end_date not be null, with default values inserted automatically to enforce that. Are you sure there are even nulls when you do a straight select on the table, without the confusion of ordering and inserting?
I created a temp table with a nullable datetime column that has a default value. Defaults are not there to handle nulls per se; they are there to handle values missing from an insert. If I run an insert without a column list (just as you have done), the default does not apply and a NULL is still inserted. I suggest adding the column list to your insert statement. This might prevent the problem (or expose a different problem, such as the columns being in the wrong order):

insert into #temp (accnt_no, accnt_name, start_date, end_date)
select accnt_no, acct_name, start_date, end_date
from ...

Here's a query that should help you find the actual defaults on any of the columns, if you don't have access to the create script:

select c.name, object_name(c.cdefault), cm.text
from tempdb..syscolumns c, tempdb..syscomments cm
where c.id = object_id('tempdb..#temp')
  and cm.id = c.cdefault