Consider the following recordset:
1 1000 -1
2 500 2
3 1000 -1
4 500 3
5 500 2
6 1000 -1
7 500 1
So there are 3 rows with code 1000, each with value -1, totalling -3,
and 4 rows with code 500 with different values.
Now I need a query that divides the sum of the code 1000 rows over the 4 code 500 rows and removes the code 1000 rows.
So the end result would look like:
1 500 1.25
2 500 2.25
3 500 1.25
4 500 0.25
The sum of code 1000 = -3
There's 4 times code 500 in the table over which -3 has to be divided.
-3/4 = -0.75
so the record "2 500 2" becomes "2 500 (2+ -0.75)" = 1.25
etc
As an SQL newbie I have no clue how to get this done, can anyone help?
You can use CTEs to do it "step-wise" and build up your solution, like this:
with sumup as
(
select sum(colb) as s
from table
where cola = 1000
), countup as
(
select count(*) as c
from table
where cola = 500
), change as
(
select cast(s as float) / c as v  -- cast avoids integer division (-3 / 4 would otherwise truncate to 0)
from sumup, countup
)
select cola, colb + v  -- v is negative (-0.75), so 2 becomes 1.25, etc.
from table, change
where cola = 500
Two things to note:
This might not be the fastest solution, but it is often close.
You can test this code easily: just change the final select statement to select from one of the CTEs and see what it contains. For example, this would be a good check if you are getting a bad result:
with sumup as
(
select sum(colb) as s
from table
where cola = 1000
), countup as
(
select count(*) as c
from table
where cola = 500
), change as
(
select cast(s as float) / c as v
from sumup, countup
)
select * from change
Select col1,
       ((Select sum(col2) from tab where col1 = 1000)
        /
        (Select count(*) from tab where col1 = 500)) + col2 as new_value
From tab
Where col1 = 500
Here tab is the table name, col1 is the column holding the 1000/500 codes, and col2 is the column holding the values (1, 2, 3, ...).
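One caveat, and this is an assumption on my part since the data types are not shown: if col2 is an integer column, some engines (SQL Server, for example) will perform integer division above. Casting the sum avoids that:
Select col1,
       ((Select cast(sum(col2) as decimal(10, 2)) from tab where col1 = 1000)
        /
        (Select count(*) from tab where col1 = 500)) + col2 as new_value
From tab
Where col1 = 500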
This will give the results you are after:
DECLARE @T TABLE (ID INT, Number INT, Value INT)
INSERT @T (ID, Number, Value)
VALUES
(1, 1000, -1),
(2, 500, 2),
(3, 1000, -1),
(4, 500, 3),
(5, 500, 2),
(6, 1000, -1),
(7, 500, 1);
SELECT Number, Value, NewValue = Value + (x.Total / COUNT(*) OVER())
FROM @T T
CROSS JOIN
( SELECT Total = CAST(SUM(Value) AS FLOAT)
FROM @T
WHERE Number = 1000
) x
WHERE T.Number = 500;
Inside the cross join we simply get the sum where the number is 1000; this could just as easily be done as a subselect:
SELECT Number, Value, NewValue = Value + ((SELECT CAST(SUM(Value) AS FLOAT) FROM @T WHERE Number = 1000) / COUNT(*) OVER())
FROM @T T
WHERE T.Number = 500;
Or with a variable:
DECLARE @Total FLOAT = (SELECT SUM(Value) FROM @T WHERE Number = 1000);
SELECT Number, Value, NewValue = Value + (@Total / COUNT(*) OVER())
FROM @T T
WHERE T.Number = 500;
The analytic function COUNT(*) OVER() then counts the total number of rows in the result set, i.e. the number of rows where Number = 500.
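To see what that window count produces on its own, here is a quick check query against the same table variable (my addition, not part of the original answer):
SELECT ID, Number, Value, CountOf500 = COUNT(*) OVER()
FROM @T
WHERE Number = 500;
-- every returned row carries the same count (4), which is the divisor used above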
And here is another solution:
select number1, value1,
value1
+ (select sum(value1) from table1 where number1=1000)/
(select count(*) from table1 where number1=500) calc_value
from table1 where number1=500
http://sqlfiddle.com/#!6/c68a0/1
I hope I got your question right; if so, this is IMHO the easiest version to read.
I'm struggling to retrieve a "weighted probability" from a database table in my SQL statement.
What do I need to do:
I have tabular information of probable financial values like:
Table my_table:
ID  P [%]  Value [$]
1   50     200
2   50     200
3   60     100
I need to calculate the weighted probability of reasonable worst case financial value to occur.
The formula is:
P_weighted = 1 - (1 - P_1 * Value_1 / Max(Value_1..n)) * (1 - P_2 * Value_2 / Max(Value_1..n)) * ...
i.e.
P_weighted = 1 - Product(1 - P_i * Value_i / Max(Value_1..n))
P_weighted = 1 - (1 - 50% * 200 / 200) * (1 - 50% * 200 / 200) * (1 - 60% * 100 / 200) = 82.5%
I know there is no product function in (Oracle) SQL, but it can be emulated with EXP(SUM(LN(x))), ensuring x is always positive.
Hence, if I only had to calculate the combined probability (regardless of the values) I could do something like:
SELECT EXP(SUM(LN(1 - t.P))) FROM my_table t WHERE condition
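As an aside, here is a tiny self-contained illustration of that identity; the three constants are just the (1 - P_i * Value_i / Max) terms from the example above, not anything from my real data:
-- exp(ln(0.5) + ln(0.5) + ln(0.7)) = 0.5 * 0.5 * 0.7 = 0.175
SELECT EXP(SUM(LN(x))) AS product_of_x
FROM (
  SELECT 0.5 AS x FROM dual UNION ALL
  SELECT 0.5 FROM dual UNION ALL
  SELECT 0.7 FROM dual
);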
When I need to include the Max(t.Value) I've got the following problem:
A SELECT list cannot include both a group function, such as AVG, COUNT, MAX, MIN, SUM, STDDEV, or VARIANCE, and an individual column expression, unless the individual column expression is included in a GROUP BY clause.
So I tried the following:
SELECT ROUND(1-EXP(SUM(LN(1 - t.P*t.Value/max(t.Value)))),1) FROM my_table t WHERE condition GROUP BY t.P, t.Value
But this obviously groups the output by probability rather than multiplying the terms together, and it just returns 0.5 (50%) instead of the expected 0.825 (82.5%).
How do I get the weighted probability from my table above using (Oracle) SQL?
Does this do it:
with da as (select .50 as p, 200 as v from dual union all select .50 , 200 from dual union all select .60,100 from dual),
mx as (select max(v) mx from da)
select exp(sum(ln(1-da.p*da.v/mx))) from da, mx;
EXP(SUM(LN(1-DA.P*DA.V/MX)))
----------------------------
.175
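That returns the product term; to report the weighted probability itself, subtract it from 1. A small variation on the query above (my addition):
with da as (select .50 as p, 200 as v from dual union all select .50, 200 from dual union all select .60, 100 from dual),
mx as (select max(v) mx from da)
select round(1 - exp(sum(ln(1 - da.p*da.v/mx))), 3) as p_weighted from da, mx;
-- returns .825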
with
test1 as(
select max(value) v_max from my_table
),
test2 as(
select 1-(my.p/100* value/t1.v_max) rez
from my_table my, test1 t1
)
select to_char(round((1-(EXP (SUM (LN (rez)))))*100,2))||'%' "Weighted probability"
from test2
RESULT:
Weighted probability
--------------------
82,5%
If you want the calculation per-row then you can use an analytic SUM:
SELECT id,
ROUND(1 - EXP(SUM(LN(1 - wp)) OVER (ORDER BY id)), 3) AS cwp
FROM (
SELECT id,
p * value / MAX(value) OVER () AS wp
FROM table_name
)
Which, for the sample data:
CREATE TABLE table_name (ID, P, Value) AS
SELECT 1, .50, 200 FROM DUAL UNION ALL
SELECT 2, .50, 200 FROM DUAL UNION ALL
SELECT 3, .60, 100 FROM DUAL;
Outputs the cumulative weighted probabilities:
ID  CWP
1   .5
2   .75
3   .825
If you just want the total weighted probability then:
SELECT ROUND(1 - EXP(SUM(LN(1 - wp))), 3) AS twp
FROM (
SELECT id,
p * value / MAX(value) OVER () AS wp
FROM table_name
)
Which, for the sample data, outputs:
TWP
.825
db<>fiddle here
I have a requirement to find the current operation of a part. The table I have to get this information from lists operation statuses as complete (1) or incomplete (0). So the table typically looks like:
ID Operation Status
1 100 1
2 200 1
3 250 1
4 300 0
5 350 0
So in this case Operation 300 is the current op which I get using MIN(Operation) WHERE Status = 0.
However, some cases have appeared where some operations are skipped which would look like:
ID Operation Status
1 100 1
2 200 0
3 250 1
4 300 0
5 350 0
So in this case the current operation is still Operation 300 but MIN(Operation) doesn't work. What I need is the first occurrence of the row where Status = 0 that follows the last occurrence of a Status = 1 row. How could I achieve this?
Edit: Also have to consider the case where all operations are Status 0, where the correct result would be the first row (Operation 100)
This will give you the entire row to work with:
DECLARE @MyTable TABLE (
ID INT,
Operation INT,
Status BIT
);
INSERT INTO @MyTable VALUES
(1, 100, 1)
,(2, 200, 0)
,(3, 250, 1)
,(4, 300, 0)
,(5, 350, 0)
;
WITH MaxOperation AS (
SELECT MAX(x.Operation) AS MaxOperation
FROM @MyTable x
WHERE x.Status = 1
)
SELECT TOP 1 t.*
FROM @MyTable t
CROSS APPLY (SELECT MaxOperation FROM MaxOperation) x
WHERE t.Operation > x.MaxOperation
OR x.MaxOperation IS NULL
ORDER BY t.Operation
This will result in:
ID Operation Status
----------- ----------- ------
4 300 0
It will also produce this if all the Status values are 0:
ID Operation Status
----------- ----------- ------
1 100 0
I'm sure there is a clever window-function way to do it, but in vanilla SQL this is the idea:
SELECT MIN(Operation)
FROM SOME_TABLE
WHERE Operation >
( SELECT MAX(Operation)
FROM SOME_TABLE
WHERE status = 1
)
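One caveat for the edge case mentioned in the edit (all rows Status = 0): the subquery then returns NULL and nothing matches. A hedged tweak, assuming Operation values are never negative, is to fall back with COALESCE:
SELECT MIN(Operation)
FROM SOME_TABLE
WHERE Operation >
    ( SELECT COALESCE(MAX(Operation), -1)  -- -1 stands in when there is no Status = 1 row
      FROM SOME_TABLE
      WHERE status = 1
    )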
As indicated by user Error_2646, a good way would be something like
select
min(ID)
from
[YourTable]
where
ID > (select max(ID) from [YourTable] where Status = 1)
I hope this gives you the correct answer. If you can add the expected output as an image, it is easier to identify what you need. Please also add schema and data when asking, so that it is easier for users to test their solutions.
Schema and data I used:
CREATE TABLE Occurances
(
ID INT
,operation INT
,Status INT
)
insert into Occurances values(1,100,1)
insert into Occurances values(2,200,0)
insert into Occurances values(3,250,1)
insert into Occurances values(4,300,0)
insert into Occurances values(5,350,0)
SELECT *
FROM
(
SELECT
Rank() OVER ( ORDER BY operation) AS [rank]
,MIN([operation]) AS [min]
,id
,[status]
FROM Occurances
WHERE [Status]= 0
GROUP BY id
,[status]
,operation
UNION
SELECT
Rank() OVER ( ORDER BY operation DESC) AS [rank]
,MAX([operation]) AS [min]
,id
,[status]
FROM Occurances
WHERE [Status]= 1
GROUP BY id
,[status]
,operation
) AS A
WHERE A.[rank]= 1
This is the answer I am getting:
You can do this very efficiently with a window function:
SELECT TOP (1) *
FROM (
SELECT *, LEAD(Status) OVER (ORDER BY Operation DESC) AS PreviousStatus
FROM myTable
) T
WHERE Status = 0 AND PreviousStatus = 1
ORDER BY Operation DESC
Try this:
DECLARE @Table TABLE
(
ID int
, Operation int
, [Status] bit
)
;
INSERT INTO @Table (ID, Operation, [Status])
VALUES
(1, 100, 1)
, (2, 200, 0)
, (3, 250, 1)
, (4, 300, 1)
, (5, 350, 0)
;
SELECT TOP 1 T.*
FROM @Table T
WHERE T.[Status] = 0
AND T.ID > (
SELECT TOP 1 T.ID
FROM @Table T
WHERE T.[Status] = 1
ORDER BY ID DESC
)
ORDER BY ID
Say I have the following schema:
SENSOR
--------------
ID (numeric)
READ_DATE (date)
VALUE (numeric)
I want to find spikes in data that lasts at least X amount of days. We take 1 reading from the sensor only once per day so ID and READ_DATE are pretty much interchangeable in terms of uniqueness.
For example I have the following records:
1, 2019-01-01, 100
2, 2019-01-02, 1000
3, 2019-01-03, 1500
4, 2019-01-04, 1100
5, 2019-01-05, 500
6, 2019-01-06, 700
7, 2019-01-07, 1500
8, 2019-01-08, 2000
In this example, for X = 2 with VALUE >= 1000, I want to get rows 3, 4 and 8, because the pairs (2, 3), (3, 4) and (7, 8) are consecutive readings >= 1000.
I am not sure how to approach this. I was thinking of using a COUNT window function, but I don't know how to check whether there are X consecutive records >= 1000.
This is about as generic as I think this can get.
First I create some data, using a table variable, but this could be a temporary/physical table:
DECLARE @table TABLE (id INT, [date] DATE, [value] INT);
INSERT INTO @table SELECT 1, '20190101', 100;
INSERT INTO @table SELECT 2, '20190102', 1000;
INSERT INTO @table SELECT 3, '20190103', 1500;
INSERT INTO @table SELECT 4, '20190104', 1100;
INSERT INTO @table SELECT 5, '20190105', 500;
INSERT INTO @table SELECT 6, '20190106', 700;
INSERT INTO @table SELECT 7, '20190107', 1500;
INSERT INTO @table SELECT 8, '20190108', 2000;
Then I use a CTE (which could be swapped out for a less efficient subquery):
WITH x AS (
SELECT
*,
CASE WHEN [value] >= 1000 THEN 1 END AS spike
FROM
@table)
SELECT
x2.id,
x2.[date],
x2.[value]
FROM
x x1
INNER JOIN x x2 ON x2.id = x1.id + 1
WHERE
x1.spike = 1
AND x2.spike = 1;
This assumes your ids are sequential; if they aren't, you would need to join on date instead (a sketch of that follows the results below).
Results:
id date value
3 2019-01-03 1500
4 2019-01-04 1100
8 2019-01-08 2000
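Regarding the note above about non-sequential ids: a minimal sketch of the date-based join, assuming one reading per day as stated in the question (only the final join changes, the x CTE stays the same):
SELECT
    x2.id,
    x2.[date],
    x2.[value]
FROM
    x x1
    INNER JOIN x x2 ON x2.[date] = DATEADD(DAY, 1, x1.[date])  -- join on the next calendar day instead of id + 1
WHERE
    x1.spike = 1
    AND x2.spike = 1;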
Okay, this isn't Postgres, and it isn't very generic (recursive CTE), but it seems to work??
DECLARE @spike_length INT = 3;
WITH x AS (
SELECT
*,
CASE WHEN [value] >= 1000 THEN 1 ELSE 0 END AS spike
FROM
@table),
y AS (
SELECT
x.id,
x.[date],
x.[value],
x.spike AS spike_length
FROM
x
WHERE
id = 1
UNION ALL
SELECT
x.id,
x.[date],
x.[value],
CASE WHEN x.spike = 0 THEN 0 ELSE y.spike_length + 1 END
FROM
y
INNER JOIN x ON x.id = y.id + 1)
SELECT * FROM y WHERE spike_length >= @spike_length;
Results:
id date value spike_length
4 2019-01-04 1100 3
You can approach this as a gaps-and-islands problem -- finding consecutive values above the threshold. The following gets the first date of such sequences:
select min(s.read_date) as read_date
from (select s.*,
             row_number() over (order by read_date) as seqnum
      from sensor s
      where value >= 1000
     ) s
group by (read_date - seqnum * interval '1 day')
having count(*) >= 2;
The observation here is that (read_date - seqnum * interval '1 day') is constant for rows that are adjacent.
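Worked out on the sample data with the 1000 threshold, the qualifying rows and that computed constant look like this (rows 2-4 form one island, rows 7-8 another):
read_date   seqnum   read_date - seqnum days
2019-01-02  1        2019-01-01
2019-01-03  2        2019-01-01
2019-01-04  3        2019-01-01
2019-01-07  4        2019-01-03
2019-01-08  5        2019-01-03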
You can get the original rows with one more layer of subqueries:
select s.*
from (select s.*, count(*) over (partition by (read_date - seqnum * interval '1 day')) as cnt
      from (select s.*,
                   row_number() over (order by read_date) as seqnum
            from sensor s
            where value >= 1000
           ) s
     ) s
where cnt >= 2;
I ended up with the following:
-- this parts helps filtering values < 1000 later on
with a as (
select *,
case when value >= 1000 then 1 else 0 end as indicator
from sensor),
-- using the indicator, create a window that calculates the length of the spike
b as (
select *,
sum(indicator) over (order by id asc rows between 2 preceding and current row) as spike
from a)
-- now filter out all spikes < 3
-- (because the window has a size of 3, it can never be larger than 3, so = 3 is okay)
select id, value from b where spike = 3;
This expands on @Gordon Linoff's answer, which I found too complicated.
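For a different X, the window frame and the filter change together. For example, a sketch of the same approach for X = 2 (table and column names as in the question), which returns rows 3, 4 and 8:
-- X = 2: window of the current row plus 1 preceding row, so a full spike sums to 2
with a as (
    select *,
           case when value >= 1000 then 1 else 0 end as indicator
    from sensor),
b as (
    select *,
           sum(indicator) over (order by id asc rows between 1 preceding and current row) as spike
    from a)
select id, value from b where spike = 2;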
If you are able to use analytic functions, then you should be able to do something like this to get what you need (I originally altered your 1000 limit to 1500, otherwise it would have brought back all rows that are consecutively at or above 1000).
CREATE TABLE test1 (
id number,
value number
);
insert all
into test1 (id, value) values (1, 100)
into test1 (id, value) values (2, 1000)
into test1 (id, value) values (3, 1500)
into test1 (id, value) values (4, 1100)
into test1 (id, value) values (5, 500)
into test1 (id, value) values (6, 700)
into test1 (id, value) values (7, 1500)
into test1 (id, value) values (8, 2000)
select * from dual;
EDIT: after re-reading the question (and the comment), I have re-done this to answer the actual question. It uses two lags: one to make sure the previous day was 1000 or greater, and another to count how many times that has happened, for filtering on X.
SELECT * FROM
(
SELECT id,
value,
spike,
CASE WHEN spike = 0 THEN 0 ELSE (spike + LAG(spike, 1, 0) OVER (ORDER BY id) + 1) END as SPIKE_LENGTH
FROM (
select id,
value,
CASE WHEN LAG(value, 1, 0) OVER (ORDER BY id) >= 1000 AND value >= 1000 THEN 1 ELSE 0 END AS SPIKE
from test1
)
)
WHERE spike_length >= 2;
Which returns
ID Value spike spike_length
3 1500 1 2
4 1100 1 3
8 2000 1 2
If you increase the spike length filter to >= 3, you only get ID 4, which is the only ID with 3 values over 1000 in a row.
I need to write a stored procedure or table function to return a new data table as a new data source.
I wish to loop through the original table in chunks of 5 rows, based on the invoice ID column (which may not start from 1): the first 5 rows go to the left side of the new table, the second 5 rows to the right side, the third 5 rows to the left again, and so on.
For example, Here is the original table:
Here is the expected table:
Thanks in advance!
declare @rowCount int = 5;
with cte as (
select *,( (IN_InvoiceID-1) / @rowCount ) % 2 group1
,( (IN_InvoiceID-1) / @rowCount ) group2
,IN_InvoiceID % @rowCount group3
from T
)
--select * from cte  -- (uncomment this instead of the query below to inspect the helper columns)
select T1.INID,T1.IN_InvoiceID,T1.IN_InvoiceAmount,T2.INID,T2.IN_InvoiceID,T2.IN_InvoiceAmount
from CTE T1
left join CTE T2 on T2.group1 = 1 and T1.group2 = T2.group2-1 and T1.group3 = T2.group3
where T1.group1 = 0
Test DDL
CREATE TABLE T
([INID] varchar(38), [IN_InvoiceID] int, [IN_InvoiceAmount] int)
;
INSERT INTO T
([INID], [IN_InvoiceID], [IN_InvoiceAmount])
VALUES
('DB3E17E6-35C5-41:121-93B1-F809BF6B2972', 1, 2999),
('3212F048-8213-4FCC-AB64-121485B77D4E43', 2, 3737),
('E3526373-A204-40F5-801C-7F8302A4E5E2', 3, 3175),
('76CC9C19-BF79-4E8A-8034-A33805AD3390', 4, 391),
('EC7A2FBC-B62D-4865-88DE-A8097975F125', 5, 1206),
('52AD3046-21331-4F0A-BD1D-67F232C54244', 6, 402),
('CA48F132-A9F5-4516-9E58-CDEE6644AAD1', 7, 1996),
('02E10C31-CAB2-4220-B66A-CEE5E67A9378', 8, 3906),
('98F1EEFF-B07A-4B65-87F4-E165264284DD', 9, 2575),
('91EBDD8B-B73C-470C-8900-DD66078483DB', 10, 2965),
('6E2490E5-C4DE-4833-877F-1590F7BDC1B8', 11, 1603),
('00985921-AC3C-4E3E-BAE1-7F58302F831A', 12, 1302)
;
Result:
Could you please check the article Display Data in Multiple Columns using SQL, which shows with an example case how a database developer can display a list of data rows in columnar mode using the Row_Number() function and modulo arithmetic.
You will need to add the additional columns from the same row; that is the part that differs from the article's sample.
Seems as if you want to split the table into 2 tables with alternating blocks of 5 rows. An easy way to do this would be:
Take the data into a temp table with an extra column (let's say grouping_id).
Update the grouping_id so that each block of 5 rows shares the same id. You can use (IN_InvoiceID - 1) / 5 (integer division); after this step the first 5 rows will have grouping_id 0, the next 5 will have 1, the next 2, and so on (assuming your invoice id is incremented by 1 for all rows).
Then just do a normal select with a where clause for odd and even grouping_id. A sketch of this outline follows below.
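A minimal sketch of that outline, assuming the T table and IN_InvoiceID column used in the other answers and ids incrementing by 1 (the temp table name is mine):
-- bucket every 5 consecutive invoices into one grouping_id (integer division)
SELECT *, (IN_InvoiceID - 1) / 5 AS grouping_id
INTO #grouped
FROM T;

SELECT * FROM #grouped WHERE grouping_id % 2 = 0;  -- "left" blocks: invoices 1-5, 11-15, ...
SELECT * FROM #grouped WHERE grouping_id % 2 = 1;  -- "right" blocks: invoices 6-10, 16-20, ...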
Ideally, you could manage this with two tables, a master and a detail table. But out of curiosity I tried to solve it as asked, and came up with the following:
Declare @table table(id int identity, invoice_id int)
; WITH Numbers AS
(
SELECT n = 1
UNION ALL
SELECT n + 1
FROM Numbers
WHERE n+1 <= 50
)
insert into @table SELECT n
FROM Numbers
Select (a.id )%5 ,* from @table a join @table b on a.id+5 = b.id and a.id != b.id
;WITH Numbers AS
(
SELECT n = 1, o = 5
UNION ALL
SELECT n + 10, o = o+10
FROM Numbers
WHERE n+1 <= 50
)
select a.id ParentId,a.invoice_id ParentInvoiceId, --b.n, b.o,
c.invoice_id childInvoiceID from @table a
join Numbers b on a.id between b.n and b.o
left join @table c on a.id + 5 = c.id
Here is my solution.
First I create groups by integer-dividing (in_invoiceid - 1) by 5 (ignoring the remainders).
After that I create a category to distinguish alternating groups (i.e. by checking whether that group number is even or odd).
Then it's a matter of dense_ranking the records on the basis of the category field, ordered by in_invoiceid.
Lastly, I left join the category = 1 rows with the category = 0 rows that have the same dense_rank.
create table Invoicetable(IN_ID varchar(100), IN_InvoiceID int)
INSERT INTO Invoicetable (IN_ID, IN_InvoiceID)
VALUES
('2345-BCDE-6645-1DDF', 1),
('2345-BCDE-6645-3DDF', 2),
('2345-BCDE-6645-4DDF', 3),
('2345-BCDE-6645-5DDF', 4),
('2345-BCDE-6645-6DDF', 5),
('2345-BCDE-6645-7DDF', 6),
('2345-BCDE-6645-aDDF', 7),
('2345-BCDE-6645-sDDF', 8),
('2345-BCDE-6645-dDDF', 9),
('2345-BCDE-6645-dDDF', 10),
('2345-BCDE-6645-dDDF', 11),
('2345-BCDE-6645-dDDF', 12);
with data
as (
select *
,(in_invoiceid-1)/5 as grp
,case when ((in_invoiceid-1)/5)%2=0 then '1' else '0' end as category
,dense_rank() over(partition by case when ((in_invoiceid-1)/5)%2=0 then '1' else '0' end
order by in_invoiceid) as rnk
from invoicetable a
)
select *
from data a
left join data b
on a.rnk=b.rnk
and b.category=0
where a.category=1
Here is db fiddle link.
https://dbfiddle.uk/?rdbms=sqlserver_2017&fiddle=287f101737c580ca271940764b2536ae
You may try the following approach. Dividing the table is done with (((ROW_NUMBER() OVER (ORDER BY IN_InvoiceID) - 1) / 5) % 2 = 0), which splits the records into left and right groups.
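For example, rows 1-5 give ((1..5) - 1) / 5 = 0, which is even, so they go to 'L'; rows 6-10 give 1, so 'R'; rows 11-12 give 2, so 'L' again, which is why invoices 11 and 12 end up on the left with NULLs on the right in the output below.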
CREATE TABLE #InvoiceTable(
IN_ID varchar(24),
IN_InvoiceID int
)
INSERT INTO #InvoiceTable (IN_ID, IN_InvoiceID)
VALUES
('2345-BCDE-6645-1DDF', 1),
('2345-BCDE-6645-3DDF', 2),
('2345-BCDE-6645-4DDF', 3),
('2345-BCDE-6645-5DDF', 4),
('2345-BCDE-6645-6DDF', 5),
('2345-BCDE-6645-7DDF', 6),
('2345-BCDE-6645-aDDF', 7),
('2345-BCDE-6645-sDDF', 8),
('2345-BCDE-6645-dDDF', 9),
('2345-BCDE-6645-dDDF', 10),
('2345-BCDE-6645-dDDF', 11),
('2345-BCDE-6645-dDDF', 12);
WITH cte AS (
SELECT
IN_ID,
IN_InvoiceID,
CASE
WHEN (((ROW_NUMBER() OVER (ORDER BY IN_InvoiceID) - 1) / 5) % 2 = 0) THEN 'L'
ELSE 'R'
END AS IN_Position
FROM #InvoiceTable
),
cteL AS (
SELECT IN_ID, IN_InvoiceID, ROW_NUMBER() OVER (ORDER BY IN_InvoiceID) AS IN_RowNumber
FROM cte
WHERE IN_Position = 'L'
),
cteR AS (
SELECT IN_ID, IN_InvoiceID, ROW_NUMBER() OVER (ORDER BY IN_InvoiceID) AS IN_RowNumber
FROM cte
WHERE IN_Position = 'R'
)
SELECT cteL.IN_ID, cteL.IN_InvoiceID, cteR.IN_ID, cteR.IN_InvoiceID
FROM cteL
LEFT JOIN cteR ON (cteL.IN_RowNumber = cteR.IN_RowNumber)
Output:
IN_ID IN_InvoiceID IN_ID IN_InvoiceID
2345-BCDE-6645-1DDF 1 2345-BCDE-6645-7DDF 6
2345-BCDE-6645-3DDF 2 2345-BCDE-6645-aDDF 7
2345-BCDE-6645-4DDF 3 2345-BCDE-6645-sDDF 8
2345-BCDE-6645-5DDF 4 2345-BCDE-6645-dDDF 9
2345-BCDE-6645-6DDF 5 2345-BCDE-6645-dDDF 10
2345-BCDE-6645-dDDF 11 NULL NULL
2345-BCDE-6645-dDDF 12 NULL NULL
I am having the following problem.
I would like to select a currency value from a database which will act as a default value on the top result of the query (this part is already done and is not a part of my main problem).
I want to use a query that kind of looks like this:
SELECT valkurs, valkurs 'vk'
FROM xx
WHERE valkod='EUR' AND foretagkod=300
UNION
--(My problem is that I can't figure out what to write here)
My problem is that I would like to append a range of values from 1.0 to 20.0, in incremental steps of 0.1, to the original query mentioned above.
An example output can look like this:
8.88, 8.88
1.0, 1.0
1.1, 1.1
1.2, 1.2
...
20.0, 20.0
Is it possible anyhow?
Due to implementation issues this has to be done in a query...
You can use the system table Master..spt_values to generate a sequential list:
SELECT Number = CAST(1 + (Number / 10.0) AS DECIMAL(4, 1)),
Number2 = CAST(1 + (Number / 10.0) AS DECIMAL(4, 1))
FROM Master..spt_values
WHERE Type = 'P'
AND Number BETWEEN 0 AND 190
So to combine in the correct order with your current query I would use:
SELECT valkurs, VK = valkurs
FROM ( SELECT valkurs, SortOrder = 0
FROM xx
WHERE valkod = 'EUR'
AND foretagkod = 300
UNION ALL
SELECT valkurs = CAST(1 + (Number / 10.0) AS DECIMAL(4, 1)), SortOrder = 1
FROM Master..spt_values
WHERE Type = 'P'
AND Number BETWEEN 0 AND 190
) T
ORDER BY T.SortOrder, t.valkurs;
ADDENDUM
There are some that do not advocate the use of Master..spt_values because it is not documented, so it could be removed from future versions of SQL Server. If this is a major concern, you can use ROW_NUMBER() to generate a sequential list (using any table with enough rows as the source; I have gone for sys.all_objects):
SELECT valkurs, VK = valkurs
FROM ( SELECT valkurs, SortOrder = 0
FROM xx
WHERE valkod = 'EUR'
AND foretagkod = 300
UNION ALL
SELECT TOP 191
valkurs = 1 + ((ROW_NUMBER() OVER(ORDER BY object_id) - 1) / 10.0),
SortOrder = 1
FROM sys.all_objects
) T
ORDER BY T.SortOrder, t.valkurs;
Old question, but I think some people will benefit from my answer, which is a much better implementation than the accepted one:
WITH e1(n) AS
(
SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL
SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL
SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1
), -- 10
e2(n) AS (SELECT 1 FROM e1 CROSS JOIN e1 AS b), -- 10*10
e3(n) AS (SELECT 1 FROM e1 CROSS JOIN e2), -- 10*100
numbers as (SELECT n = ROW_NUMBER() OVER (ORDER BY n)/10.0
FROM e3)
select n, n from numbers
where n between 1 and 20
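To actually meet the original requirement (the EUR default row on top, followed by the range), this numbers CTE can be combined with the first query in the same way the accepted answer does it; a sketch assuming the same xx table and columns:
WITH e1(n) AS
(
    SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL
    SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL
    SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1
), -- 10
e2(n) AS (SELECT 1 FROM e1 CROSS JOIN e1 AS b), -- 10*10
e3(n) AS (SELECT 1 FROM e1 CROSS JOIN e2), -- 10*100
numbers AS (SELECT n = ROW_NUMBER() OVER (ORDER BY n) / 10.0 FROM e3)
SELECT T.valkurs, T.vk
FROM ( SELECT valkurs, vk = valkurs, SortOrder = 0
       FROM xx
       WHERE valkod = 'EUR' AND foretagkod = 300
       UNION ALL
       SELECT n, n, 1
       FROM numbers
       WHERE n BETWEEN 1 AND 20
     ) T
ORDER BY T.SortOrder, T.valkurs;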