How to use sum, replace, and case in the same setting? - sql

I have a table where numbers are stored as strings and some bigger numbers look like 1,634.5 so I have to replace the "," with blanks to get accurate data.
I have been using this, which works, but I can't figure out how to combine it with SUM and CASE, as I need to sum up the revenue when it equals xyz
code that works
replace(revenue, ',', '')
code that doesn't work
sum(replace(revenue, ',', '') case WHEN food in ('burgers', 'fries') then revenue else 0 end)) as foodsum
group by foodcategory
I also tried using a nested query, but that didn't work. Does anyone have any recommendations?
thanks!

Here's the correct way to aggregate your revenue by foodcategory: put the CASE expression inside SUM and cast the cleaned-up string to a numeric type.
select foodcategory,
       sum(case when food in ('burgers', 'fries')
                then cast(replace(revenue, ',', '') as decimal(12, 2))
                else 0 end) as foodsum
from table1
group by foodcategory
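For illustration, here is a minimal, self-contained sketch of that conditional aggregation against made-up sample rows (the inline table and literal values are assumptions, not the poster's data):
-- Made-up sample data; REPLACE strips the thousands separator before casting.
select foodcategory,
       sum(case when food in ('burgers', 'fries')
                then cast(replace(revenue, ',', '') as decimal(12, 2))
                else 0 end) as foodsum
from (values ('fast food', 'burgers', '1,634.5'),
             ('fast food', 'fries',   '210.00'),
             ('fast food', 'salad',   '99.00')) as sample_sales(foodcategory, food, revenue)
group by foodcategory;
-- Expected result: fast food | 1844.50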

Related

Split a String based on a specific pattern of characters

Basically, I would like to be able to split a long text field after each date into unique rows that correspond to the dates. The source field "Notes" is just a long running text field with multiple comments over time, each with a distinct date ... Initially, I tried splitting on the '-' after the date, which works to some degree, except where there are dashes elsewhere in the text. So I'm thinking of something where I could split on each unique instance of a date (mm/dd/yy) ... One issue is that the length is not consistent, meaning it could be:
'm/d/yy-' or 'mm/dd/yy-' or 'mm/d/yy-'
Example Data > 'Notes' Column:
3/30/16-Had a meeting 2/5/16-LVM 10/5/15-Spoke to customer
*A single cell could have multiple dates and comments in it
Looking for end result like this:
Date Value
3/30/16 Had a meeting
2/5/16 LVM
10/5/15 Spoke to customer
I am using something basic like the below, but wondering if I can get a little more sophisticated with the STRING_SPLIT
SELECT NOTES, VALUE
FROM SRC_TABLE
CROSS APPLY STRING_SPLIT(NOTES, '-')
Appreciate any insights or ideas!
Hmmm. I think this does what you want:
select t.*, s.*
from src_table t outer apply
(select value as value_date
from string_split(t.notes, '-') s
where s.value like '%/%/%' and s.value not like '%/%/%/%'
) s;
EDIT:
If you just want to split on the first -, you can use:
select left(notes, charindex('-', notes + '-') - 1),
stuff(notes, 1, charindex('-', notes + '-'), '')
from src_table;
Here is a db<>fiddle.
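For illustration only, here is what the split-on-the-first-dash expressions return against the question's sample string (the one-row derived table is made up for the demo):
select left(notes, charindex('-', notes + '-') - 1)     as first_date,
       stuff(notes, 1, charindex('-', notes + '-'), '') as rest
from (values ('3/30/16-Had a meeting 2/5/16-LVM 10/5/15-Spoke to customer')) as src_table(notes);
-- first_date = '3/30/16'
-- rest       = 'Had a meeting 2/5/16-LVM 10/5/15-Spoke to customer'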

Use a "case when in" expression where the list values are in a single field?

I'm using SQL Server and would like to check if today's day name is in a list of values in a single field/column.
An example of the column "start_days" contents is:
'Monday','Tuesday','Sunday'
'Thursday'
'Friday','Sunday'
'Tuesday','Sunday'
'Tuesday','Wednesday','Thursday','Friday'
The code I am trying to run on this is:
case
when datename(weekday,getdate()) in (start_days) then 1
else 0
end as today_flag
And the result is 0 for every row.
Am I doing something wrong here or is it just not possible to use a single field as a list of values in the statement?
As a starter: you should fix your data model and not store multiple values in a single column. Storing lists of values in a database column basically defeats the purpose of a relational database. Here is a related reading on that topic.
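For concreteness, a sketch of what a normalized shape could look like (table and column names are assumptions, not an existing schema):
-- One row per schedule per start day, instead of a delimited list in one column.
create table schedule_start_days (
    schedule_id int not null,
    start_day   varchar(10) not null,   -- e.g. 'Monday'
    primary key (schedule_id, start_day)
);
-- The check then becomes a plain comparison:
--   ... where start_day = datename(weekday, getdate())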
That said, here is one option using pattern matching:
case
when ',' + start_days + ',' like '%,' + datename(weekday,getdate()) + ',%' then 1
else 0
end as today_flag
If you really have single quotes around values within the list, then we need to include them in the match:
case
when ',' + start_days + ',' like '%,''' + datename(weekday,getdate()) + ''',%' then 1
else 0
end as today_flag
If the values always are weekdays, this can be simplified since there is no risk of overlapping values:
case
when start_days like '%''' + datename(weekday,getdate()) + '''%' then 1
else 0
end as today_flag
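As a quick self-contained check of the delimiter-wrapping idea (this matches the first, unquoted variant above; the literals stand in for start_days and for datename(weekday, getdate()) and are not the poster's data):
select case when ',' + 'Monday,Friday,Sunday' + ',' like '%,' + 'Friday' + ',%'
            then 1
            else 0 end as today_flag;   -- returns 1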
The right answer to the question is fixing your data model. Storing multiple values like that will lead you to many issues, and you're stuck on one right now.
Until that is done, you could use the LIKE operator to get the desired results:
SELECT *,
       CASE WHEN Value LIKE CONCAT('%', QUOTENAME(DATENAME(WEEKDAY, GETDATE()), ''''), '%')
            THEN 1
            ELSE 0
       END
FROM
(
VALUES
('''Monday'',''Tuesday'',''Sunday'''),
('''Thursday'''),
('''Friday'',''Sunday'''),
('''Tuesday'',''Sunday'''),
('''Tuesday'',''Wednesday'',''Thursday'',''Friday''')
) T(Value)
Here is a db<>fiddle where you can see how it's working online.

SQL: How to make a replace on the field ''

I have a simple but tricky question for you guys. So, listen: I have a field with spaces and numbers in one of my table columns. The key part is transforming the content into a decimal value. The drawback is basically that for some rows I could get something like:
' 1584.00 '
' 156546'
'545.00 '
' '
So, to clean up my column, I have done an LTRIM and RTRIM, so the spaces are gone. Now, for a couple of records that were just spaces, the new content is ''. Finally, I need to convert this result to a decimal.
Issue: for the fields that contained just spaces, the new result is '' and I'm not able to apply a REPLACE to it because it's blank, so the code below doesn't work:
SELECT REPLACE('','','0')
-- Final current version
SELECT CAST(COALESCE(REPLACE(REPLACE([Gross_Weight],' ','0'),',',''),'0') AS DECIMAL(13,3))
How could I figure it out?
thanks so much
SELECT COALESCE(NULLIF(MyColumn, ''), 0)
This has the side-effect that you will also turn NULL values into 0, which you might not want. If that's a problem then a simple CASE statement should do the trick:
SELECT CASE WHEN MyColumn = '' THEN 0 ELSE CAST(MyColumn AS DECIMAL(10, 4)) END
Obviously you'll also have to incorporate any other manipulations that you're already doing.
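For example, one way to combine the pieces from the question and this answer into a single expression (a sketch, not the poster's final code; my_table is a placeholder name):
-- Trim, strip the thousands separator, turn '' into NULL, default to '0', then cast.
select cast(coalesce(nullif(replace(ltrim(rtrim([Gross_Weight])), ',', ''), ''), '0') as decimal(13, 3)) as gross_weight_dec
from my_table;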
No need for replace, just concatenate a zero to your column, like
SELECT RTRIM('0' + LTRIM(column))
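A quick sketch of that trick against the sample values from the question (the inline table is made up for the demo):
select cast(rtrim('0' + ltrim(v)) as decimal(13, 3)) as as_decimal
from (values (' 1584.00 '), (' 156546'), ('545.00 '), ('    ')) as t(v);
-- returns 1584.000, 156546.000, 545.000 and 0.000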
I presume your data is in a table.
Let's call this table 'DATA' and the column 'VALUE'.
Then you might use the query below:
UPDATE DATA SET VALUE = 0 where VALUE = ''
To select the value do the below
select case ltrim(rtrim([Gross_Weight])) when ''
THEN 0
ELSE ltrim(rtrim([Gross_Weight])) END
Let me know if I got the requirement wrong.

TSQL - Remove Everything after the last period

I have a string column whose values may look like one of the following.
10.0.2531.0
10.50.2500
10.0.2531.60
My requirement is, if there are 3 periods/decimal points, remove the last period/decimal and everything after that.
If I use the following, this will take care of the first row where there is only ".0", however, it does not work for the third row.
select
case
when right(column_1,2) = '.0' then left(column_1,len(column_1)-2)
else column_1 end
FROM
table_1
I also tried the following but that didn't work.
select
case
when right(column_1,2) = '.' then left(column_1,len(column_1)-2)
when right(column_1,3) = '.' then left(column_1,len(column_1)-3)
else column_1 end,
FROM
table_1
The number after the third period/decimal may be a 0 or another number.
The following works, under the assumption that there are never five periods:
select (case when ip like '%.%.%.%'
             then left(ip, len(ip) - charindex('.', reverse(ip)))
             else ip
        end) as firstThree
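For illustration, the same expression run over the question's sample values, with ip renamed to column_1 to match the question (the inline table is made up for the demo):
select column_1,
       case when column_1 like '%.%.%.%'
            then left(column_1, len(column_1) - charindex('.', reverse(column_1)))
            else column_1 end as first_three
from (values ('10.0.2531.0'), ('10.50.2500'), ('10.0.2531.60')) as t(column_1);
-- '10.0.2531.0'  -> '10.0.2531'
-- '10.50.2500'   -> '10.50.2500' (only two periods, so it is left untouched)
-- '10.0.2531.60' -> '10.0.2531'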
Use a combination of reverse, charindex, and substring.
SQL Fiddle
select reverse(substring(reverse(column_1), charindex('.',reverse(column_1))+1, len(column_1)))
from table_1
where len(column_1) - len(replace(column_1,'.','')) = 3
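For illustration, the same approach against the question's sample values (the inline table stands in for table_1; only rows with exactly three periods are returned):
select reverse(substring(reverse(column_1), charindex('.', reverse(column_1)) + 1, len(column_1))) as first_three
from (values ('10.0.2531.0'), ('10.50.2500'), ('10.0.2531.60')) as table_1(column_1)
where len(column_1) - len(replace(column_1, '.', '')) = 3;
-- returns '10.0.2531' for the first and third rows; '10.50.2500' is filtered out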

How to quickly compare many strings?

In SQL Server, I have a string column that contains numbers. Each entry I need is only one number so no parsing is needed. I need some way to find all rows that contain numbers from 400 to 450. Instead of doing:
...where my stringcolumn like '%400%' or stringcolumn like '%401%' or stringcolumn like '%402%' or ...
is there a better way that can save on some typing?
There are also other values in these rows such as: '5335154', test4559#me.com', '555-555-5555'. Filtering those out will need to be taken into account.
...where stringcolumn like '4[0-4][0-9]' OR stringcolumn = '450'
You don't need the wildcard if you want to restrict to 3 digits.
Use LIKE pattern matching with character ranges to accomplish this.
...where stringcolumn like '4[0-4][0-9]' OR stringcolumn like '450'
one way
WHERE Column like '%4[0-4][0-9]%'
OR Column LIKE '%450%'
keep in mind that this will pick up anything with a matching number in it, so 4500 will be returned as well
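To illustrate that caveat, a quick run over the question's example values plus two made-up ones ('402' and '4500'):
select v
from (values ('5335154'), ('test4559#me.com'), ('555-555-5555'), ('402'), ('4500')) as t(v)
where v like '%4[0-4][0-9]%' or v like '%450%';
-- returns '402' and '4500' (the latter only because it contains the substring '450')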
I would do the following:
select t.*
from (select t.*,
(case when charindex('4', col) > 0
then substring(col, charindex('4', col), 3)
end) as col4xx
from t
) t
where (case when isnumeric(col4xx) = 1
then (case when cast(col4xx as int) between 400 and 450 then 'true'
end)
end) = 'true'
I'm not a fan of having case statements in WHERE clauses. However, to ensure conversion to a number, this is needed (or the conversion could become a column in another subquery). Note that the following is not equivalent:
where col4xx between '400' and '450'
Since the string '44A' would match.
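As a quick sanity check of the col4xx idea on the question's example values plus one made-up value ('abc412xyz'); note that only the first '4' in each string is examined:
select col,
       case when charindex('4', col) > 0
            then substring(col, charindex('4', col), 3)
       end as col4xx
from (values ('5335154'), ('test4559#me.com'), ('555-555-5555'), ('abc412xyz')) as t(col);
-- col4xx: '4', '455', NULL, '412' -- only '412' falls between 400 and 450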