I am using the line_index function and would like to search for two values, not only for carrid but also for connid. Is it possible? If so, in what way?
Because right now, this works:
lv_index = line_index( lt[ carrid = 'LH' ] ).
But after adding the code [ connid = '2407' ] like this:
lv_index = line_index( lt[ carrid = 'LH' ][ connid = '2407' ] ).
I get a syntax error:
LT[ ] is not an internal table
Put all fields (conditions) one after the other inside a single pair of brackets:
lv_index = line_index( lt[ carrid = 'LH'
connid = '2407' ] ).
I'd like to comment on the chaining of table expressions.
The answer for the OP's example is that a single table expression (itab[...]) must be used, with as many components as needed, not a chain of table expressions (itab[...][...]) as was done.
lt[ carrid = 'LH' ][ connid = '2407' ] can never be valid: connid = '2407' would imply that each line of LT is itself an internal table, while carrid = 'LH' contradicts that by implying that each line of LT is a structure.
But other chained table expressions can be valid, for example (provided that the internal tables are defined accordingly):
itab[ 1 ][ comp1 = 'A' ]
itab[ comp1 = 'A' ][ 1 ]
itab[ comp1 = 'A' ]-itabx[ compx = 42 ]
Here is an example that you can play with:
TYPES: BEGIN OF ty_structure,
         connid TYPE c LENGTH 4,
       END OF ty_structure,
       ty_table TYPE STANDARD TABLE OF ty_structure WITH EMPTY KEY,
       BEGIN OF ty_structure_2,
         carrid TYPE c LENGTH 2,
         table  TYPE ty_table,
       END OF ty_structure_2,
       ty_table_2 TYPE STANDARD TABLE OF ty_structure_2 WITH EMPTY KEY,
       ty_table_3 TYPE STANDARD TABLE OF ty_table_2 WITH EMPTY KEY.
DATA(lt) = VALUE ty_table_3( ( VALUE #( ( carrid = 'LH' table = VALUE #( ( connid = '2407' ) ) ) ) ) ).
DATA(structure) = lt[ 1 ][ carrid = 'LH' ]-table[ connid = '2407' ].
Given data as such:
Month ValueA
1 T
2 T
3 T
4 F
Is there a way to make a measure that would find, for each month, whether the last three values were True?
So the output would be (F,F,T,F)?
That would probably mean that my actual problem is solvable, which is finding, from:
Month ValueA ValueB ValueC
1 T F T
2 T T T
3 T T T
4 F T F
the count of such columns for each row, so the output would be (0, 0, 2 [A and C], 1 [B]).
EDIT:
Okay, I managed to solve the first part with this:
Previous =
VAR PreviousDate =
MAXX(
FILTER(
ALL( 'Table' ),
EARLIER( 'Table'[Month] ) > 'Table'[Month]
),
'Table'[Month]
)
VAR PreviousDate2 =
MAXX(
FILTER(
ALL( 'Table' ),
EARLIER( 'Table'[Month] ) - 1 > 'Table'[Month]
),
'Table'[Month]
)
RETURN
IF(
CALCULATE(
MAX( 'Table'[Value] ),
FILTER(
'Table',
'Table'[Month] = PreviousDate
)
) = "T"
&& CALCULATE(
MAX( 'Table'[Value] ),
FILTER(
'Table',
'Table'[Month] = PreviousDate2
)
) = "T"
&& 'Table'[Value] = "T",
TRUE,
FALSE
)
But is there a way to use it with an unknown number of columns?
Without hard-coding every column name? Like a loop or something.
I would redo the data table in Power Query (unpivoting the ValueX columns) and change T/F to 1/0. Then have a dim table with a relationship to Month, like this:
Then add a measure like this:
Three Consec T =
var maxMonth = MAX('Data'[Month])
var tempTab =
FILTER(
dimMonth;
'dimMonth'[MonthNumber] <= maxMonth && 'dimMonth'[MonthNumber] > maxMonth -3
)
var sumMonth =
MAXX(
'dimMonth';
CALCULATE(
SUM('Data'[OneOrZero]);
tempTab
)
)
return
IF(
sumMonth >= 3;
"3 months in a row";
"No"
)
Then I can have a visual like this, where the slicer indicates which time window I'm looking at and the table shows whether there have been 3 consecutive Ts or not.
I have a Minizinc program for generating the optimal charge/discharge schedule for a grid-connected battery, given a set of prices by time-interval.
My program works (sort of; subject to some caveats), but my question is about two 'constraint' statements which are really just assignment statements:
constraint forall(t in 2..T)(MW_SETPOINT[t-1] - SALE[t] = MW_SETPOINT[t]);
constraint forall(t in 1..T)(PROFIT[t] = SALE[t] * PRICE[t]);
These just mean that SALE is the delta in MW_SETPOINT from t-1 to t, and PROFIT is SALE * PRICE for each interval. So it seems counterintuitive to me to declare them as 'constraints', but I've been unable to formulate them as assignment statements without getting syntax errors.
Question:
Is there a more idiomatic way to declare such assignment statements for an array that is a function of other params/variables? Or is making assignments for arrays in constraints the recommended/idiomatic way to do it in MiniZinc?
Full program for context:
% PARAMS
int: MW_CAPACITY = 10;
array[int] of float: PRICE;
% DERIVED PARAMS
int: STARTING_MW = MW_CAPACITY div 2; % integer division
int: T = length(PRICE);
% DECISION VARIABLE - MW SETPOINT EACH INTERVAL
array[1..T] of var 0..MW_CAPACITY: MW_SETPOINT;
% DERIVED/INTERMEDIATE VARIABLES
array[1..T] of var -1*MW_CAPACITY..MW_CAPACITY: SALE;
array[1..T] of var float: PROFIT;
var float: NET_PROFIT = sum(PROFIT);
% CONSTRAINTS
%% If start at 5MW, and sell 5 first interval, setpoint for first interval is 0
constraint MW_SETPOINT[1] = STARTING_MW - SALE[1];
%% End where you started; opt schedule from arbitrage means no net MW over time
constraint MW_SETPOINT[T] = STARTING_MW;
%% these are really just assignment statements for SALE & PROFIT
constraint forall(t in 2..T)(MW_SETPOINT[t-1] - SALE[t] = MW_SETPOINT[t]);
constraint forall(t in 1..T)(PROFIT[t] = SALE[t] * PRICE[t]);
% OBJECTIVE: MAXIMIZE REVENUE
solve maximize NET_PROFIT;
output["DAILY_PROFIT: " ++ show(NET_PROFIT) ++
"\nMW SETPOINTS: " ++ show(MW_SETPOINT) ++
"\nMW SALES: " ++ show(SALE) ++
"\n$/MW PRICES: " ++ show(PRICE)++
"\nPROFITS: " ++ show(PROFIT)
];
It can be run with
minizinc opt_sched_hindsight.mzn --solver org.minizinc.mip.coin-bc -D "PRICE = [29.835, 29.310470000000002, 28.575059999999997, 28.02416, 28.800690000000003, 32.41052, 34.38542, 29.512390000000003, 25.66587, 25.0499, 26.555529999999997, 28.149440000000002, 30.216509999999996, 32.32415, 31.406609999999997, 36.77642, 41.94735, 51.235209999999995, 50.68137, 64.54481, 48.235170000000004, 40.27663, 34.93675, 31.10404];"
You can play with Array Comprehensions: (quote from the docs)
Array comprehensions have this syntax:
<array-comp> ::= "[" <expr> "|" <comp-tail> "]"
For example (with the literal equivalents on the right):
[2*i | i in 1..5] % [2, 4, 6, 8, 10]
Array comprehensions have more flexible type and inst requirements than set comprehensions (see Set Comprehensions).
Array comprehensions are allowed over a variable set with finite type,
the result is an array of optional type, with length equal to the
cardinality of the upper bound of the variable set. For example:
var set of 1..5: x;
array[int] of var opt int: y = [ i * i | i in x ];
The length of array will be 5.
Array comprehensions are allowed where the where-expression
is a var bool. Again the resulting array is of optional
type, and of length equal to that given by the generator expressions. For example:
var int: x;
array[int] of var opt int: y = [ i | i in 1..10 where i != x ];
The length of the array will be 10.
The indices of an evaluated simple array comprehension are
implicitly 1..n, where n is the length of the evaluated
comprehension.
Example:
int: MW_CAPACITY = 10;
int: STARTING_MW = MW_CAPACITY div 2;
array [int] of float: PRICE = [1.0, 2.0, 3.0, 4.0];
int: T = length(PRICE);
array [1..T] of var -1*MW_CAPACITY..MW_CAPACITY: SALE;
array [1..T] of var 0..MW_CAPACITY: MW_SETPOINT = let {
int: min_i = min(index_set(PRICE));
} in
[STARTING_MW - sum([SALE[j] | j in min_i..i])
| i in index_set(PRICE)];
array [1..T] of var float: PROFIT =
[SALE[i] * PRICE[i]
| i in index_set(PRICE)];
solve satisfy;
Output:
~$ minizinc test.mzn
SALE = array1d(1..4, [-10, -5, 0, 0]);
----------
Notice that index_set(PRICE) is simply 1..T and that min(index_set(PRICE)) is simply 1, so one could also write the above array comprehensions as
array [1..T] of var 0..MW_CAPACITY: MW_SETPOINT =
[STARTING_MW - sum([SALE[j] | j in 1..i])
| i in 1..T];
array [1..T] of var float: PROFIT =
[SALE[i] * PRICE[i]
| i in 1..T];
I have a function that rounds to the nearest value in SQL as per below. When I pass my value in and run the function manually, it works as expected. However when I use it within a select statement, it removes the decimal places.
E.g. I expect the output to be 9.00 but instead I only see 9.
CREATE FUNCTION [dbo].[fn_PriceLadderCheck]
    (@CheckPrice FLOAT,
     @Jur VARCHAR(10))
RETURNS FLOAT
AS
BEGIN
    DECLARE @ReturnPrice FLOAT

    IF (@Jur = 'SE')
    BEGIN
        SET @ReturnPrice = (SELECT [Swedish Krona ]
                            FROM tbl_priceladder_swedishkrona
                            WHERE [Swedish Krona ] = @CheckPrice +
                                      (SELECT MIN(ABS([Swedish Krona ] - @CheckPrice))
                                       FROM tbl_priceladder_swedishkrona)
                               OR [Swedish Krona ] = @CheckPrice -
                                      (SELECT MIN(ABS([Swedish Krona ] - @CheckPrice))
                                       FROM tbl_priceladder_swedishkrona))
    END

    IF (@Jur = 'DK')
    BEGIN
        SET @ReturnPrice = (SELECT [Danish Krone ]
                            FROM tbl_priceladder_danishkrone
                            WHERE [Danish Krone ] = @CheckPrice +
                                      (SELECT MIN(ABS([Danish Krone ] - @CheckPrice))
                                       FROM tbl_priceladder_danishkrone)
                               OR [Danish Krone ] = @CheckPrice -
                                      (SELECT MIN(ABS([Danish Krone ] - @CheckPrice))
                                       FROM tbl_priceladder_danishkrone))
    END

    RETURN @ReturnPrice
END
Run SQL manually:
declare @checkprice float
set @checkprice = '10.3615384615385'

SELECT [Swedish Krona ]
FROM tbl_priceladder_swedishkrona
WHERE [Swedish Krona ] = @CheckPrice +
      (SELECT MIN(ABS([Swedish Krona ] - @CheckPrice))
       FROM tbl_priceladder_swedishkrona)
   OR [Swedish Krona ] = @CheckPrice -
      (SELECT MIN(ABS([Swedish Krona ] - @CheckPrice))
       FROM tbl_priceladder_swedishkrona)
When I use this function within a SQL select statement, for some reason it removes the 2 decimal places.
SELECT
Article, Colour,
dbo.fn_PriceLadderCheck([New Price], 'se') AS [New Price]
FROM
#temp2 t
[New Price] on its own outputs, for example, 10.3615384615385.
Any ideas?
Cast the result to a DECIMAL and specify the scale (and change the function's RETURNS FLOAT to RETURNS DECIMAL(16,2) as well, otherwise the value is converted straight back to FLOAT on return).
See the example below.
RETURN CAST(@ReturnPrice AS DECIMAL(16,2))
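A minimal standalone demonstration of the display difference (hypothetical snippet; the value 9 simply stands in for whatever the function returns):

DECLARE @Price FLOAT = 9;
-- FLOAT drops the trailing zeros when displayed; DECIMAL(16,2) keeps them
SELECT @Price AS AsFloat,                          -- 9
       CAST(@Price AS DECIMAL(16,2)) AS AsDecimal; -- 9.00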
I have a data set with these columns:
FMID,County,WIC,WICcash
Here is a sample of the data:
1002267,Douglas,Y,N
21005876,Douglas,Y,N
1001666,Douglas,N,Y
I have grouped the data based on County and have filtered the data based on County = 'Douglas'. Here is the output:
(Douglas,{(1002267,Douglas,Y,N),(21005876,Douglas,Y,N),(1001666,Douglas,N,Y)})
Now, if the WIC and WICcash columns have the value Y, I want to take the combined count of Y values from both columns.
Here, combining the WIC and WICcash columns, I have 3 Y values, so my output would be
Douglas 3
How can I achieve this?
Below is the code that I have written so far:
load_data = LOAD 'PigPrograms/Markets/DATA_GOV_US_Farmers_Market_DataSet.csv' USING PigStorage(',') as (FMID:long,County:chararray, WIC:chararray, WICcash:chararray);
group_markets_by_county = GROUP load_data BY County;
filter_county = FILTER group_markets_by_county BY group == 'Douglas';
DUMP filter_county;
To look inside a bag, you can use a nested FOREACH.
A = LOAD 'input3.txt' AS (FMID:long,County:chararray, WIC:chararray, WICcash:chararray);
B = GROUP A by County;
describe B; /* B: {group: chararray,A: {(FMID: long,County: chararray,WIC: chararray,WICcash: chararray)}} */
C = FOREACH B {
FILTER_WIC_Y = FILTER A by WIC == 'Y';
COUNT_WIC_Y = COUNT(FILTER_WIC_Y);
FILTER_WICcash_Y = FILTER A by WICcash == 'Y';
COUNT_WICcash_Y = COUNT(FILTER_WICcash_Y);
GENERATE group, COUNT_WIC_Y + COUNT_WICcash_Y as count;
};
dump C;
Or, you can replace 'Y'/'N' with 1/0 and add them up.
A = LOAD 'input3.txt' AS (FMID:long,County:chararray, WIC:chararray, WICcash:chararray);
B = FOREACH A GENERATE FMID, County, (WIC == 'Y' ? 1 : 0 ) as wic, (WICcash == 'Y' ? 1 : 0 ) as wiccash;
C = GROUP B by County;
D = FOREACH C GENERATE group, SUM(B.wic) + SUM(B.wiccash) as count;
dump D;
I have two queries which connect to different databases:
query = "select name ,ctry from xxxx where xxxx"
cursor.execute(query)
results1 = list(cursor.fetchall())
for row in results1:
query1 = "SELECT sessionname, country FROM xxx where and sessions.sessionname = '"+row[0] +"'"
cur.execute(query1)
results2.append(cur.fetchall())
How do I connect them if they have a common value (sessionname and name), and save the output to a file? The two queries run against different databases (Oracle and PostgreSQL).
My code is here:
try:
query = """select smat.s_name "SQLITE name" ,smed.m_ctry as "Country", smed.m_name "HDD Label" from smart.smed2smat ss, smart.smed smed, smart.smat smat where ss.M2S_SMAT=smat.s_id and ss.m2s_smed=smed.m_id and smed.m_name like '{0}%' order by smat.s_name""" .format(line_name)
cursor.execute(query)
columns = [i[0] for i in cursor.description]
results1 = cursor.fetchall()
for row in results1:
query1 = "SELECT sessions.sessionname, projects.country , projects.projectname FROM momatracks.sessions, momatracks.projects, momatracks.sessionsgeo where sessions.projectid = projects.id and sessionsgeo.sessionname = sessions.sessionname and sessions.sessionname = '"+row[0] +"' order by sessions.sessionname"
cur.execute(query1)
results2 =cur.fetchall()
print "results1 -----> \n:", row
tmp=[]
output_items = []
for tmp in results2:
print "---> \n", tmp
try:
stations_dict = dict([(item[0], item[1:]) for item in tmp])
for item in row:
output_item = list(item) + stations_dict.get(item[0], [])
output_items.append(output_item)
except Exception, f:
print str (f)
cursor.close()
cur.close()
except Exception, g:
print str ( g )
except Exception, e:
print str ( e )
My results from row and tmp are:
row - WE246JP_2015_10_11__14_53_33', 'NLD', '031_025_SQLITE_NLD1510_03INDIA
and
tmp - WE246JP_2015_10_11__14_53_33', 'NLD', 'NLD15_N2C1-4_NL'
How do I properly connect them? I want the output to look like this:
output_items - WE246JP_2015_10_11__14_53_33', 'NLD', '031_025_SQLITE_NLD1510_03INDIA', 'NLD15_N2C1-4_NL'
At the moment I get this error:
can only concatenate list (not "str") to list
Also, the value of stations_dict looks like this (and this is not what I intended to do):
'W': 'E246JP_2015_10_11__15_23_33', 'N': 'LD15_N2C1-4_NL3'
I know there is something wrong with my code, which is similar to a join. Can anyone explain this to me? I used the method below:
http://forums.devshed.com/python-programming-11/join-arrays-based-common-value-sql-left-join-943177.html
If the sessions are exactly the same in both databases then just zip the results:
query = """
select
smat.s_name "SQLITE name",
smed.m_ctry as "Country",
smed.m_name "HDD Label"
from
smart.smed2smat ss
inner join
smart.smat smat on ss.M2S_SMAT = smat.s_id
inner join
smart.smed smed on ss.m2s_smed = smed.m_id
where smed.m_name like '{0}%'
order by smat.s_name
""".format(line_name)
cursor.execute(query)
results1 = cursor.fetchall()
query1 = """
select
sessions.sessionname,
projects.country,
projects.projectname
from
momatracks.sessions
inner join
momatracks.projects on sessions.projectid = projects.id
inner join
momatracks.sessionsgeo on sessionsgeo.sessionname = sessions.sessionname
where sessions.sessionname in {}
order by sessions.sessionname
""".format(tuple([row[0] for row in results1]))
cur.execute(query1)
results2 = cur.fetchall()
zipped = zip(results1, results2)
output_list = [(m[0][0], m[0][1], m[0][2], m[1][2]) for m in zipped]
If the sessions are different, then make each result a dictionary and join on the common key, for example like the sketch below.
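For illustration, a minimal sketch of that dictionary approach (assuming, as in the question, that results1 rows are (name, country, hdd_label) and results2 rows are (sessionname, country, projectname); the output file name is just an example):

# Sketch only: join the two result sets on their common key (name / sessionname).
lookup = dict((r[0], r[1:]) for r in results2)   # sessionname -> (country, projectname)

output_items = []
for row in results1:
    extra = lookup.get(row[0], ())               # empty tuple when there is no match
    # keep the Oracle columns and append only projectname (country is already present)
    output_items.append(tuple(row) + tuple(extra[1:]))

with open('joined_sessions.txt', 'w') as out:    # example output file
    for item in output_items:
        out.write(', '.join(str(col) for col in item) + '\n')

Unlike zip, this does not depend on both result sets having the same order and length, and it tolerates sessions that are missing from one side.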
I think you can use a subquery here. There's no way for me to test it, but I think it should look like this:
SELECT *
FROM (SELECT smat.s_name "SQLITE name" ,
smed.m_ctry as "Country",
smed.m_name "HDD Label"
FROM smart.smed2smat ss,
smart.smed smed,
smart.smat smat
WHERE ss.M2S_SMAT=smat.s_id
AND ss.m2s_smed=smed.m_id
AND smed.m_name like '{0}%'
ORDER BY smat.s_name) t1,
(SELECT sessions.sessionname,
projects.country ,
projects.projectname
FROM momatracks.sessions,
momatracks.projects,
momatracks.sessionsgeo
WHERE sessions.projectid = projects.id
AND sessionsgeo.sessionname = sessions.sessionname
AND sessions.sessionname = '"+row[0] +"'
ORDER BY sessions.sessionname) t2
WHERE t1."SQLITE name" = t2.sessionname ;