I have the following encoding problem: two databases should hold the same data (the 2nd database is a newer version of the 1st). In some tables, characters are not displaying correctly, such as a currency table that holds the name and symbols for different currencies.
I use SSMS to query both databases.
Id Name R7 R8 Name Different
-------------------------------------
148 DZD DZD 0
37 EGP £ £ EGP 1
149 ERN ERN 0
150 ETB ETB 0
1 EUR € € EUR 1
40 FJD $ $ FJD 0
39 FKP £ £ FKP 1
2 GBP £ £ GBP 1
151 GEL GEL 0
46 GGP £ £ GGP 1
42 GHC ¢ ¢ GHC 1
152 GHS GHS 0
Both tables (Currency) have the same structure, and the symbol columns (R7 & R8) have the same collation: SQL_Latin1_General_CP1_CI_AS. I have tried to look up encoding solutions, but have run out of ideas for what to search for on Google.
Does anyone know what might cause R7 to display incorrectly while R8 displays correctly?
Column definition for R7 (ShortName):
Column definition for R8 (ShortName):
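One way to narrow this down is to compare the raw bytes stored in the two databases rather than what SSMS renders. A minimal sketch, assuming the symbol column is ShortName (as in the definitions above) and using OldDb / NewDb / dbo.Currency as placeholder names:

-- Sketch only: OldDb, NewDb and dbo.Currency are placeholder names;
-- ShortName is assumed to be the symbol column shown as R7 / R8 above.
SELECT  o.Id,
        o.ShortName                        AS R7,
        n.ShortName                        AS R8,
        CAST(o.ShortName AS varbinary(20)) AS R7_Bytes,
        CAST(n.ShortName AS varbinary(20)) AS R8_Bytes
FROM    OldDb.dbo.Currency AS o
JOIN    NewDb.dbo.Currency AS n ON n.Id = o.Id
WHERE   o.ShortName <> n.ShortName;

If R7_Bytes shows the two-byte pair 0xC2A3 where R8_Bytes shows the single byte 0xA3 (£), or 0xE282AC where R8_Bytes shows 0x80 (€), then the older table is storing UTF-8 byte sequences in a varchar column, i.e. the values themselves are double-encoded and it is not just an SSMS display issue.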
I'm having a difficult time writing the SQL for what I'm able to accomplish with an Excel SUMIFS. The ultimate goal is to create a share-of-requirement (loyalty) metric for each Customer. In the table below, I'm trying to create the Category_Sales column; all of the other columns I already have in my SQL.
Here is what my Excel SUMIFS looks like.
=SUMIFS(Brand_Sales (range), Cust_ID (range), Cust_ID (row), First_Buy (range), ">=" & Brand_Str (row), Last_Buy (range), "<=" & Brand_End (row))
The 268 value for Innocent comes from the sum of Innocent, Cresco, Supply, and PTS, since their First_Buy and Last_Buy dates all fall inside the range of the Innocent Brand Start & End (a SQL sketch follows the table below).
State  Brand     Cust_ID  First_Buy  Last_Buy   Brand_Str  Brand_End  Brand_Sales  Category_Sales
IL     Innocent  xyz      4/9/2022   4/9/2022   4/7/2022   5/29/2022  64           268
IL     Cresco    xyz      4/15/2022  4/15/2022  1/1/2022   5/30/2022  57           446
IL     Supply    xyz      4/15/2022  4/15/2022  1/1/2022   5/30/2022  45           446
IL     Rythm     xyz      1/3/2022   1/13/2022  1/1/2022   5/30/2022  121          446
IL     Natures   xyz      1/22/2022  1/22/2022  1/1/2022   5/30/2022  57           446
IL     PTS       xyz      4/26/2022  4/26/2022  1/1/2022   5/30/2022  102          446
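In SQL, that SUMIFS maps naturally to a correlated subquery that, for each row, sums Brand_Sales over the same customer's rows whose First_Buy and Last_Buy fall inside that row's Brand_Str/Brand_End window. A sketch only, assuming everything sits in a single table here called Sales with the columns shown above:

-- Sketch only: Sales is a placeholder name for the table holding the rows above.
SELECT  s.State, s.Brand, s.Cust_ID,
        s.First_Buy, s.Last_Buy, s.Brand_Str, s.Brand_End,
        s.Brand_Sales,
        (SELECT SUM(x.Brand_Sales)           -- same customer ...
         FROM   Sales AS x
         WHERE  x.Cust_ID   = s.Cust_ID
           AND  x.First_Buy >= s.Brand_Str   -- ... first buy on/after this brand's start
           AND  x.Last_Buy  <= s.Brand_End   -- ... last buy on/before this brand's end
        ) AS Category_Sales
FROM    Sales AS s;

For the Innocent row this picks up Innocent, Cresco, Supply and PTS (64 + 57 + 45 + 102 = 268), matching the Excel result; the other rows sum all six brands to 446.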
route_number source_id latitude_value longitude_value no_of_stores
r1 676 28.15085 32.66055 23
r2 715 28.2160253 32.5214831 23
r3 345 28.2123115 32.537211 22
r4 150 28.23009 32.50323 23
r5 534 28.0949248 32.8075467 21
r6 1789 28.2204214 32.5035782 22
r7 647 28.21548 32.50238 23
r8 667 28.21132 32.51481 22
r9 2242 28.2389 32.5 19
r10 797 28.161657 32.8416816 20
r11 1097 28.1792849 32.8255522 19
r12 591 28.2513623 32.7638247 22
r13 1091 28.251208 32.7808329 21
r14 1267 28.2102213 32.8129836 21
r15 1016 28.1654648 32.8350845 19
r16 785 28.0786012 32.9513468 4
r17 1072 28.1701673 32.8382309 1
The dataframe above is what I am dealing with.
As you can see, the number of stores (no_of_stores) differs from one route_number to another.
mean(no_of_stores) ≈ 19 in this case
What I am looking for is this:
depending on the geo-locations (latitude_value and longitude_value) of my source_id, I want to combine multiple routes that lie close to each other into one, such that the no_of_stores in the new groups are divided roughly equally.
The proximity condition can be dropped; simply merging the routes with fewer stores into one would also work.
That is, for routes that lie close to each other (and whose no_of_stores is below mean(no_of_stores)), combine them into one bigger route, so that the no_of_stores of each newly formed route is around the mean of the no_of_stores column, which in this case is about 19.
Expected final output, something like this (not actual):
route_number new_route_no
r1 A1 #since it already has more stores than the mean
r2 A2
r3 A3
r4 A4
....................
r9 A9 #(19 stores)
r17 A9 #(1 store) total 20
....................
r11 A11
r16 A11
r15 A15 #19 stores; since it cannot be combined further, keep as is
I have tried using pandas groupby and aggregate methods, but couldn't find a way to transform this dataframe.
Any leads would be helpful.
SQL Server 2008 - This is my standard export
OrderDetailID - OrderID - ProductName - TotalPrice - ShipDate
34 16 Green... 5.00 4/9/16
35 16 Green... 3.00 4/9/16
36 16 Blue... 8.00 4/9/16
37 17 Green... 9.00 4/11/16
38 17 Red... 3.00 4/11/16
39 18 Blue... 5.00 4/11/16
40 19 Green... 4.00 4/11/16
41 19 Red... 6.00 4/11/16
42 20 Green... 3.00 4/11/16
43 20 Green... 3.00 4/11/16
I need an output of all OrderIDs that contain a total sum >= 5.00 for green products bought today. (Think of it as a Saint Patrick's Day sale: buy $5.00 worth of green items and you qualify for the output.)
End result would be:
OrderID
17
20
I know I can do this in Excel, but having me do it every day is not something I want. Luckily, I have access to a built-in API which allows me to set stored SQL queries, so if I can figure out how to word this, theoretically anyone should be able to click one button and get the results they desire (with me editing the criteria as needed, i.e. green, >5, etc.).
So far I'm at something like this:
SELECT table.OrderID
WHERE table.ProductName LIKE '%green%'
AND SUM(table.TotalPrice) > 5
GROUP BY table.OrderID
FROM table
It just keeps coming back with:
Incorrect syntax near the keyword 'FROM'.
Maybe someone will answer; hopefully someone can point me in the right direction, and if I figure this out I'll make sure to update.
SELECT table.OrderID
FROM table
WHERE table.ProductName LIKE '%green%'
GROUP BY
table.OrderID
HAVING SUM(table.TotalPrice) >= 5
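Since the requirement is green products bought today, the stored query would also need a date filter on ShipDate. A sketch only, assuming the export above comes from a table here called OrderDetails and that ShipDate is a date or datetime column:

-- Sketch only: OrderDetails is a placeholder name for the exported table.
-- The half-open range covers "today" whether ShipDate is a date or a datetime.
SELECT  OrderID
FROM    OrderDetails
WHERE   ProductName LIKE '%green%'
  AND   ShipDate >= CAST(GETDATE() AS date)
  AND   ShipDate <  DATEADD(day, 1, CAST(GETDATE() AS date))
GROUP BY OrderID
HAVING  SUM(TotalPrice) >= 5;

Changing the criteria later (the 'green' keyword, the 5.00 threshold, the date window) then only means editing those lines of the stored query.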
I have two tables which contain orders, and I need to check that the tables match up. The tables look something like this:
tblOrders_Sent
Side Tag FxBuy BuyAmount FxCost CostAmount
B CHFEUR CHF 50 EUR 0
B EURSEK SEK 75 EUR 0
B EURNOK NOK 35 EUR 0
B DKKEUR DKK 20 EUR 0
S CHFEUR EUR 0 CHF 10
S EURSEK EUR 0 SEK 20
S EURNOK EUR 0 NOK 40
tblOrders_Recieved
Tag FxBuy MktBuy FxCost MktCost
EURNOK NOK 35 EUR 0
DKKEUR DKK 20 EUR 0
CHFEUR CHF 50 EUR 0
EURSEK SEK 75 EUR 0
EURNOK EUR 0 NOK 40
EURSEK EUR 0 SEK 20
CHFEUR EUR 0 CHF 10
When the Side is "B" in the Orders_Sent table, I need to check the BuyAmount against the MktBuy amount in the Orders_Recieved table. If the Side is "S", though, I need to check the CostAmount vs the MktCost in the Orders_Recieved table. Is this possible, or do I need to split the query into two separate queries?
I'm using SQL Server 2012
Desired Output
FxBuy FxCost SentAmt RecievedAmt Diff
CHF EUR 50 50 0
SEK EUR 75 75 0
NOK EUR 35 35 0
DKK EUR 20 20 0
EUR CHF 10 10 0
EUR SEK 20 20 0
EUR NOK 40 40 0
The CASE expression helps here. In your case, as far as I understood the example, it could look like this:
select
    tblorders_sent.fxbuy  as FxBuy,
    tblorders_sent.fxcost as FxCost,
    case tblorders_sent.side
        when 'B' then tblorders_sent.buyamount
        else          tblorders_sent.costamount
    end as SentAmt,
    case tblorders_sent.side
        when 'B' then tblorders_received.mktbuy
        else          tblorders_received.mktcost
    end as ReceivedAmt,
    case tblorders_sent.side
        when 'B' then tblorders_sent.buyamount  - tblorders_received.mktbuy
        else          tblorders_sent.costamount - tblorders_received.mktcost
    end as Diff
from tblorders_sent
inner join tblorders_received
    on tblorders_sent.tag = tblorders_received.tag
I have the following Excel file:
W1000x554 1032 408 52.1 29.5 70700 12300
W1000x539 1030 407 51.1 28.4 68700 12000
W1000x483 1020 404 46 25.4 61500 10700
W1000x443 1012 402 41.9 23.6 56400 9670
W1000x412 1008 402 40 21.1 52500 9100
W1000x371 1000 400 36.1 19 47300 8140
W1000x321 990 400 31 16.5 40900 6960
W1000x296 982 400 27.1 16.5 37800 6200
W1000x584 1056 314 64 36.1 74500 12500
I want to define a function that asks the user for one of the names in the first column and then reads all the relevant data of that row.
For example, if the user enters W1000x412, then read: 1008 402 40 21.1 52500 9100.
Any ideas?
I suspect what @Marc means is that a formula such as the one in J2 below (copied across and down as necessary) will 'pick out' the values you want. It is not clear to me from your question whether these should be kept separate (as in Row2 of the example) or strung together (CONCATENATE [&] as in J7 of the example, where they are space [" "] delimited):
I am also not entirely sure about your 'define a function' but have assumed you do not require a UDF.
I have used Row1 to provide the offset for VLOOKUP, to save manually adjusting the formula for each column.
ColumnI is the expected user input, which might best be provided by selection from a Data Validation list with Source $A$2:$A$10.