How to remove all text before and after in Notepad++ - GPS

I have a GPS .SRT file from a DJI Mavic Pro. I want to keep only the longitude and latitude values and delete all other text.
The GPS string looks like this:
1
00:00:00,000 --> 00:00:00,020
[iso : 450] [shutter : 1/50.0] [fnum : 280] [ev : 0.3] [ct : 5500] [color_md : default] [focal_len : 280] [latitude : 25.144090] [longtitude : 76.072992] [altitude: 285.026001] </font>
2
00:00:00,020 --> 00:00:00,040
[iso : 450] [shutter : 1/50.0] [fnum : 280] [ev : 0.3] [ct : 5500] [color_md : default] [focal_len : 280] [latitude : 25.144090] [longtitude : 76.072992] [altitude: 285.026001] </font>
3
00:00:00,040 --> 00:00:00,059
[iso : 450] [shutter : 1/50.0] [fnum : 280] [ev : 0.3] [ct : 5500] [color_md : default] [focal_len : 280] [latitude : 25.144090] [longtitude : 76.072992] [altitude: 285.026001] </font>
From the string above, I want only:
1
00:00:00,000 --> 00:00:00,020
[longtitude : 76.072992] [altitude: 285.026001]
2
00:00:00,020 --> 00:00:00,040
[longtitude : 76.072992] [altitude: 285.026001]
3
00:00:00,040 --> 00:00:00,059
[longtitude : 76.072992] [altitude: 285.026001]
Please help me out. Thanks in advance

You may try using the following find and replace in regex mode:
Find: ^.*(\[longtitude : \d+(?:\.\d+)?\] \[altitude: \d+(?:\.\d+)?\]) <\/font>$
Replace: $1
Here is a working demo of the replacement logic (the demo needs the multiline flag so ^ and $ match per line). In Notepad++, run the replacement with Search Mode set to Regular expression.

You may also like this shorter alternative:
Find: ^.*?(\[long.*\]).*
Replace: $1
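If you want to sanity-check either pattern outside Notepad++, here is a minimal Python sketch (an assumption of this demo: Python's re syntax is close enough to the Boost engine Notepad++ uses for these particular patterns):

import re

# One data line from the question's SRT sample.
line = ("[iso : 450] [shutter : 1/50.0] [fnum : 280] [ev : 0.3] [ct : 5500] "
        "[color_md : default] [focal_len : 280] [latitude : 25.144090] "
        "[longtitude : 76.072992] [altitude: 285.026001] </font>")

strict = r"^.*(\[longtitude : \d+(?:\.\d+)?\] \[altitude: \d+(?:\.\d+)?\]) </font>$"
loose = r"^.*?(\[long.*\]).*"

print(re.sub(strict, r"\1", line))  # [longtitude : 76.072992] [altitude: 285.026001]
print(re.sub(loose, r"\1", line))   # same result on this input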

Pandas: How to rename columns that don't have names but are indexed as 0, 1, 2, 3, etc.

I don't know how to rename columns that are unnamed.
I have tried both approaches, putting the indices in quotes and not, like this, and neither worked:
train_dataset_with_pred_new_df.rename(columns={
    0: 'timestamp', 1: 'open', 2: 'close', 3: 'high', 4: 'low', 5: 'volume', 6: 'CCI7', 7: 'DI+',
    8: 'DI-', 9: 'ADX', 10: 'MACD Main', 11: 'MACD Signal', 12: 'MACD histogram', 13: 'Fisher Transform',
    14: 'Fisher Trigger'
})
And
train_dataset_with_pred_new_df.rename(columns={
    '0': 'timestamp', '1': 'open', '2': 'close', '3': 'high', '4': 'low', '5': 'volume', '6': 'CCI7', '8': 'DI+',
    '9': 'DI-', '10': 'ADX', '11': 'MACD Main', '12': 'MACD Signal', '13': 'MACD histogram', '15': 'Fisher Transform',
    '16': 'Fisher Trigger'
})
So if neither worked, how do I rename them?
Thank you for your help in advance :)
pandas.DataFrame.rename returns a new DataFrame when the parameter inplace is False (the default).
You need to reassign your dataframe:
train_dataset_with_pred_new_df = train_dataset_with_pred_new_df.rename(columns={
    0: 'timestamp', 1: 'open', 2: 'close', 3: 'high', 4: 'low', 5: 'volume', 6: 'CCI7', 7: 'DI+',
    8: 'DI-', 9: 'ADX', 10: 'MACD Main', 11: 'MACD Signal', 12: 'MACD histogram', 13: 'Fisher Transform',
    14: 'Fisher Trigger'})
Or simply use inplace=True:
train_dataset_with_pred_new_df.rename(columns={
    0: 'timestamp', 1: 'open', 2: 'close', 3: 'high', 4: 'low', 5: 'volume', 6: 'CCI7', 7: 'DI+',
    8: 'DI-', 9: 'ADX', 10: 'MACD Main', 11: 'MACD Signal', 12: 'MACD histogram', 13: 'Fisher Transform',
    14: 'Fisher Trigger'
}, inplace=True)
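For a quick check, here is a minimal, self-contained sketch (toy data and a shortened column set, purely illustrative) showing why the reassignment matters:

import pandas as pd

# A toy frame with integer column labels, like the unnamed columns in the question.
df = pd.DataFrame([[1, 2, 3]], columns=[0, 1, 2])

df.rename(columns={0: 'timestamp', 1: 'open', 2: 'close'})       # result is discarded
print(df.columns.tolist())                                       # [0, 1, 2]

df = df.rename(columns={0: 'timestamp', 1: 'open', 2: 'close'})  # reassigned
print(df.columns.tolist())                                       # ['timestamp', 'open', 'close']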
df.rename(columns={df.columns[1]: "your value"}, inplace=True)
What you are trying to do is rename the index: instead of renaming existing columns, you are renaming the index. So rename the index, not the columns:
train_dataset_with_pred_new_df.rename(
    index={0: 'timestamp', 1: 'open', 2: 'close', 3: 'high', 4: 'low', 5: 'volume', 6: 'CCI7', 7: 'DI+', 8: 'DI-', 9: 'ADX', 10: 'MACD Main', 11: 'MACD Signal', 12: 'MACD histogram', 13: 'Fisher Transform', 14: 'Fisher Trigger'},
    inplace=True)
As it looks like you want to reassign all names, simply do:
df.columns = ['timestamp', 'open', 'close', 'high', 'low', 'volume',
              'CCI7', 'DI+', 'DI-', 'ADX', 'MACD Main', 'MACD Signal',
              'MACD histogram', 'Fisher Transform', 'Fisher Trigger']
Or, in a chain:
df.set_axis(['timestamp', 'open', 'close', 'high', 'low', 'volume',
             'CCI7', 'DI+', 'DI-', 'ADX', 'MACD Main', 'MACD Signal',
             'MACD histogram', 'Fisher Transform', 'Fisher Trigger'],
            axis=1)
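And a short sketch of these wholesale approaches on the same toy frame (note that set_axis also returns a new frame unless you reassign it):

import pandas as pd

df = pd.DataFrame([[1, 2, 3]], columns=[0, 1, 2])

df.columns = ['timestamp', 'open', 'close']  # relabels all columns in place
df = df.set_axis(['ts', 'o', 'c'], axis=1)   # chained variant; returns a new frame
print(df.columns.tolist())                   # ['ts', 'o', 'c']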

Erroneous and inconsistent output from env.render() in OpenAI Gym Taxi-v3 in Google Colab

I am trying to set up the OpenAI Gym environment for the Taxi-v3 application in Google Colab, using the following code:
from IPython.display import clear_output
import gym
env = gym.make("Taxi-v3", render_mode = 'ansi').env
#env = gym.make("Taxi-v3", render_mode = 'ansi')
Then I have a function that shows the taxi position in the Colab cell:
def showStateVec(txR=3, txC=1, pxI=2, des=0):
    env.reset()
    state = env.encode(txR, txC, pxI, des)
    env.s = state
    print("State ", env.s, list(env.decode(env.s)))
    env.s = state
    p = env.render()
    print(p[0])
    for k, v in env.P[state].items():
        print(v)
When I call
# taxi at 3,1, passenger at 2, destination = 0
# note, moving to the WEST is not possible, the position does not change
showStateVec(3,1,2,0)
I get the following output (I have replaced the yellow box with 'x'). Evidently, this is not correct; the taxi is shown somewhere else:
State 328 [3, 1, 2, 0]
+---------+
|R: |x: :G|
| : | : : |
| : : : : |
| | : | : |
|Y| : |B: |
+---------+
[(1.0, 428, -1, False)]
[(1.0, 228, -1, False)]
[(1.0, 348, -1, False)]
[(1.0, 328, -1, False)]
[(1.0, 328, -10, False)]
[(1.0, 328, -10, False)]
However, if I run the command a second time, the yellow box moves elsewhere, even though the rest of the output is identical:
State 328 [3, 1, 2, 0]
+---------+
|R: | : :G|
| : | : : |
| : : : : |
| | : | : |
|Y| :x|B: |
+---------+
[(1.0, 428, -1, False)]
[(1.0, 228, -1, False)]
[(1.0, 348, -1, False)]
[(1.0, 328, -1, False)]
[(1.0, 328, -10, False)]
[(1.0, 328, -10, False)]
Here is the link to the Colab notebook where you can replicate the problem. I have also seen this and other solutions on Stack Overflow, but none seem to work.
What should I do to ensure that the taxi (or the yellow box representing it) is displayed exactly where the state of the taxi says it should be? Please help.

Why is this ANTLR grammar reporting errors?

I have a fairly simple grammar designed to parse URIs. It is compiled with the help of the antlr4-maven-plugin. Compiling produces no warnings or errors. I wrote a simple test.
Uri.g4:
/**
* Uniform Resource Identifier (RFC 3986).
*
* @author Oliver Yasuna
* @see RFC 3986
* @since 1.0.0
*/
grammar Uri;
options {
tokenVocab = Common;
}
@header {
package com.oliveryasuna.http.antlr;
}
// Parser
//--------------------------------------------------
pctEncoded
: '%' HEXDIG HEXDIG
;
reserved
: genDelims | subDelims
;
genDelims
: ':' | '/' | '?' | '#' | '[' | ']' | '@'
;
subDelims
: '!' | '$' | '&' | '\'' | '(' | ')' | '*' | '+' | ',' | ';' | '='
;
unreserved
: ALPHA | DIGIT | '-' | '.' | '_' | '~'
;
uri
: scheme ':' hierPart ('?' query)? ('#' fragment_)?
;
hierPart
: '//' authority pathAbEmpty
| pathAbsolute
| pathRootless
| pathEmpty
;
scheme
: ALPHA (ALPHA | DIGIT | '+' | '-' | '.')*
;
authority
: (userinfo '@')? host (':' port)?
;
userinfo
: (unreserved | pctEncoded | subDelims | ':')*
;
host
: ipLiteral
| ipv4Address
| regName
;
ipLiteral
: '[' (ipv6Address | ipvFuture) ']'
;
ipvFuture
: 'v' HEXDIG+ '.' (unreserved | subDelims | ':')+
;
ipv6Address
: '::' (h16 ':') (h16 ':') (h16 ':') (h16 ':') (h16 ':') (h16 ':') ls32
| '::' (h16 ':') (h16 ':') (h16 ':') (h16 ':') (h16 ':') ls32
| h16? '::' (h16 ':') (h16 ':') (h16 ':') (h16 ':') ls32
| ((h16 ':')? h16)? '::' (h16 ':') (h16 ':') (h16 ':') ls32
| ((h16 ':')? (h16 ':')? h16)? '::' (h16 ':') (h16 ':') ls32
| ((h16 ':')? (h16 ':')? (h16 ':')? h16)? '::' h16 ':' ls32
| ((h16 ':')? (h16 ':')? (h16 ':')? (h16 ':')? h16)? '::' ls32
| ((h16 ':')? (h16 ':')? (h16 ':')? (h16 ':')? (h16 ':')? h16)? '::' h16
| ((h16 ':')? (h16 ':')? (h16 ':')? (h16 ':')? (h16 ':')? (h16 ':')? h16)? '::'
;
ls32
: (h16 ':' h16)
| ipv4Address
;
h16
: HEXDIG HEXDIG? HEXDIG? HEXDIG?
;
ipv4Address
: decOctet '.' decOctet '.' decOctet '.' decOctet
;
decOctet
: DIGIT
| ('1' | '2' | '3' | '4' | '5' | '6' | '7' | '8' | '9') DIGIT
| '1' DIGIT DIGIT
| '2' ('0' | '1' | '2' | '3' | '4') DIGIT
| '2' '5' ('0' | '1' | '2' | '3' | '4' | '5')
;
regName
: (unreserved | pctEncoded | subDelims)*
;
port
: DIGIT*
;
path
: pathAbEmpty
| pathAbsolute
| pathNoScheme
| pathRootless
| pathEmpty
;
pathAbEmpty
: ('/' segment)*
;
pathAbsolute
: '/' (segmentNz ('/' segment)?)?
;
pathNoScheme
: segmentNzNc ('/' segment)?
;
pathRootless
: segmentNz ('/' segment)?
;
pathEmpty
: // TODO: 0<pchar>.
;
segment
: pchar*
;
segmentNz
: pchar+
;
segmentNzNc
: (unreserved | pctEncoded | subDelims | '@')+
;
pchar
: unreserved | pctEncoded | subDelims | ':' | '@'
;
query
: (pchar | '/' | '?')*
;
fragment_
: (pchar | '/' | '?')*
;
uriReference
: uri
| relativeRef
;
relativeRef
: relativePart ('?' query)? ('#' fragment_)?
;
relativePart
: '//' authority pathAbEmpty
| pathAbEmpty
| pathNoScheme
| pathEmpty
;
absoluteUri
: scheme ':' hierPart ('?' query)?
;
Common.g4:
lexer grammar Common;
// ASCII
//--------------------------------------------------
BANG : '!' ;
//DOUBLE_QUOTE : '"' ;
HASH : '#' ;
DOLLAR : '$' ;
PERCENT : '%' ;
AND : '&' ;
SINGLE_QUOTE : '\'' ;
LEFT_PARENTHESES : '(' ;
RIGHT_PARENTHESES : ')' ;
STAR : '*' ;
PLUS : '+' ;
COMMA : ',' ;
MINUS : '-' ;
DOT : '.' ;
SLASH : '/' ;
COLON : ':' ;
SEMICOLON : ';' ;
LEFT_ANGLE_BRACKET : '<' ;
EQUAL : '=' ;
RIGHT_ANGLE_BRACKET : '>' ;
QUESTION : '?' ;
AT : '@' ;
LEFT_SQUARE_BRACKET : '[' ;
BACKSLASH : '\\' ;
RIGHT_SQUARE_BRACKET : ']' ;
CARROT : '^' ;
UNDERSCORE : '_' ;
BACKTICK : '`' ;
LEFT_CURLY_BRACKET : '{' ;
BAR : '|' ;
RIGHT_CURLY_BRACKET : '}' ;
TILDE : '~' ;
// Core
//--------------------------------------------------
// Taken from ABNF.
ALPHA : [a-zA-Z] ;
DIGIT : [0-9] ;
HEXDIG : [0-9a-fA-F] ;
DQUOTE : '"' ;
SP : ' ' ;
HTAB : '\t' ;
WSP : SP | HTAB ;
//LWSP : (WSP | CRLF WSP)* ;
VCHAR : [\u0021-\u007F] ;
CHAR : [\u0001-\u007F] ;
OCTET : [\u0000-\u00FF] ;
CTL : [\u0000-\u001F\u007F] ;
CR : '\r' ;
LF : '\n' ;
CRLF : CR LF ;
BIT : '0' | '1' ;
// Miscellaneous
//--------------------------------------------------
DOUBLE_SLASH : '//' ;
DOUBLE_COLON : '::' ;
LOWER_V : 'v' ;
ZERO : '0' ;
ONE : '1' ;
TWO : '2' ;
THREE : '3' ;
FOUR : '4' ;
FIVE : '5' ;
SIX : '6' ;
SEVEN : '7' ;
EIGHT : '8' ;
NINE : '9' ;
Test method:
@Test
final void google() {
final String uri = "https://www.google.com/";
final UriLexer lexer = new UriLexer(new ANTLRInputStream(uri));
final UriParser parser = new UriParser(new CommonTokenStream(lexer));
parser.addErrorListener(new BaseErrorListener() {
@Override
public void syntaxError(final Recognizer<?, ?> recognizer, final Object offendingSymbol, final int line, final int charPositionInLine, final String msg, final RecognitionException e) {
throw new IllegalStateException("[" + line + ":" + charPositionInLine + "] Symbol [" + offendingSymbol + "] produced error: " + msg + ".", e);
}
});
Assertions.assertDoesNotThrow(parser::uri);
}
I get the following errors when I input https://www.google.com/.
I have absolutely no idea what is causing these parsing errors. Does anyone have an idea?
Output:
line 1:0 token recognition error at: 'h'
line 1:1 token recognition error at: 't'
line 1:2 token recognition error at: 't'
line 1:3 token recognition error at: 'p'
line 1:4 token recognition error at: 's'
line 1:5 missing '6' at ':'
ANTLR has a strict separation between parsing and tokenizing/lexing. The lexer works independently from the parser and creates tokens based on 2 simple rules:
try to consume as many characters as possible for a single lexer rule
when 2 or more lexer rules match the same characters, let the one defined first "win"
If we now look at your rules:
ALPHA : [a-zA-Z] ;
DIGIT : [0-9] ;
HEXDIG : [0-9a-fA-F] ;
it is clear that the lexer rule HEXDIG will never be matched, because either ALPHA or DIGIT will match whatever HEXDIG matches and both are defined before HEXDIG. Switching the order:
HEXDIG : [0-9a-fA-F] ;
ALPHA : [a-zA-Z] ;
DIGIT : [0-9] ;
will not work either: a digit will now never become a DIGIT token, and an F will never become an ALPHA token.
Note that this is just a single example: there are more such cases in your lexer grammar.
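To see the two rules in action, here is a small Python sketch (illustrative only, not ANTLR itself) that tokenizes the way the lexer does, with the three rules in the question's original order:

import re

# Lexer rules in the order the question defines them.
rules = [("ALPHA", r"[a-zA-Z]"), ("DIGIT", r"[0-9]"), ("HEXDIG", r"[0-9a-fA-F]")]

def tokenize(text):
    tokens, pos = [], 0
    while pos < len(text):
        # Longest match wins; on a tie, max() keeps the first rule tried.
        name, lexeme = max(
            ((n, m.group()) for n, p in rules
             for m in [re.match(p, text[pos:])] if m),
            key=lambda t: len(t[1]),
        )
        tokens.append((name, lexeme))
        pos += len(lexeme)
    return tokens

print(tokenize("a1F"))  # HEXDIG never wins: [('ALPHA', 'a'), ('DIGIT', '1'), ('ALPHA', 'F')]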
A solution would be to move some of the responsibility to the parser instead of the lexer:
A : [aA];
B : [bB];
C : [cC];
D : [dD];
E : [eE];
F : [fF];
G : [gG];
H : [hH];
I : [iI];
J : [jJ];
K : [kK];
L : [lL];
M : [mM];
N : [nN];
O : [oO];
P : [pP];
Q : [qQ];
R : [rR];
S : [sS];
T : [tT];
U : [uU];
V : [vV];
W : [wW];
X : [xX];
Y : [yY];
Z : [zZ];
D0 : '0';
D1 : '1';
D2 : '2';
D3 : '3';
D4 : '4';
D5 : '5';
D6 : '6';
D7 : '7';
D8 : '8';
D9 : '9';
and then in the parser you do:
alpha
: A | B | C | D | E | F | G | H | I | J | K | L | M | N | O | P | Q | R | S | T | U | V | W | X | Y | Z
;
digit
: D0 | D1 | D2 | D3 | D4 | D5 | D6 | D7 | D8 | D9
;
hexdig
: A | B | C | D | E | F | digit
;
Also, remove all the literal tokens like '6' from the parser and use the proper lexer rule instead (D6, in this case). Whenever the parser sees such a literal token that is not defined in the lexer, it "magically" creates a new token type for it, resulting in mysterious error/warning messages. Best to remove all (and I mean all!) such literal tokens from the parser.
In addition to the points Bart made about the grammar (all correct): this is not how to write a split grammar!
You must have "parser grammar UriParser;" in UriParser.g4 (rename Uri.g4 to UriParser.g4), and "lexer grammar UriLexer;" in UriLexer.g4 (rename Common.g4 to UriLexer.g4).
If you try to generate the parser for your original "split" grammar, you get three .tokens files generated by the Antlr tool, all different in size and contents. That indicates there is likely no coordination of the token types between the lexer and parser. That doesn't have anything to do with the "token recognition error" because as Bart says, the lexer operates completely independently from the parser. But, it will have an impact when you start testing the grammar productions with other input.
Also, you should never include @header { package ...; } in the grammar. You need to use the -package option instead. Using the @header makes the grammar completely unportable to other targets, and creates a problem if you have multiple grammars in one directory, some with the @header and some without.
If you fix these problems, the code parses your input, with the caveat that your lexer rules are still not correct (see Bart's answer).
It's not clear why you split the grammar to begin with.
UriParser.g4:
/**
* Uniform Resource Identifier (RFC 3986).
*
* @author Oliver Yasuna
* @see RFC 3986
* @since 1.0.0
*/
parser grammar UriParser;
options {
tokenVocab = UriLexer;
}
// Parser
//--------------------------------------------------
pctEncoded
: '%' HEXDIG HEXDIG
;
reserved
: genDelims | subDelims
;
genDelims
: ':' | '/' | '?' | '#' | '[' | ']' | '@'
;
subDelims
: '!' | '$' | '&' | '\'' | '(' | ')' | '*' | '+' | ',' | ';' | '='
;
unreserved
: ALPHA | DIGIT | '-' | '.' | '_' | '~'
;
uri
: scheme ':' hierPart ('?' query)? ('#' fragment_)?
;
hierPart
: '//' authority pathAbEmpty
| pathAbsolute
| pathRootless
| pathEmpty
;
scheme
: ALPHA (ALPHA | DIGIT | '+' | '-' | '.')*
;
authority
: (userinfo '@')? host (':' port)?
;
userinfo
: (unreserved | pctEncoded | subDelims | ':')*
;
host
: ipLiteral
| ipv4Address
| regName
;
ipLiteral
: '[' (ipv6Address | ipvFuture) ']'
;
ipvFuture
: 'v' HEXDIG+ '.' (unreserved | subDelims | ':')+
;
ipv6Address
: '::' (h16 ':') (h16 ':') (h16 ':') (h16 ':') (h16 ':') (h16 ':') ls32
| '::' (h16 ':') (h16 ':') (h16 ':') (h16 ':') (h16 ':') ls32
| h16? '::' (h16 ':') (h16 ':') (h16 ':') (h16 ':') ls32
| ((h16 ':')? h16)? '::' (h16 ':') (h16 ':') (h16 ':') ls32
| ((h16 ':')? (h16 ':')? h16)? '::' (h16 ':') (h16 ':') ls32
| ((h16 ':')? (h16 ':')? (h16 ':')? h16)? '::' h16 ':' ls32
| ((h16 ':')? (h16 ':')? (h16 ':')? (h16 ':')? h16)? '::' ls32
| ((h16 ':')? (h16 ':')? (h16 ':')? (h16 ':')? (h16 ':')? h16)? '::' h16
| ((h16 ':')? (h16 ':')? (h16 ':')? (h16 ':')? (h16 ':')? (h16 ':')? h16)? '::'
;
ls32
: (h16 ':' h16)
| ipv4Address
;
h16
: HEXDIG HEXDIG? HEXDIG? HEXDIG?
;
ipv4Address
: decOctet '.' decOctet '.' decOctet '.' decOctet
;
decOctet
: DIGIT
| ('1' | '2' | '3' | '4' | '5' | '6' | '7' | '8' | '9') DIGIT
| '1' DIGIT DIGIT
| '2' ('0' | '1' | '2' | '3' | '4') DIGIT
| '2' '5' ('0' | '1' | '2' | '3' | '4' | '5')
;
regName
: (unreserved | pctEncoded | subDelims)*
;
port
: DIGIT*
;
path
: pathAbEmpty
| pathAbsolute
| pathNoScheme
| pathRootless
| pathEmpty
;
pathAbEmpty
: ('/' segment)*
;
pathAbsolute
: '/' (segmentNz ('/' segment)?)?
;
pathNoScheme
: segmentNzNc ('/' segment)?
;
pathRootless
: segmentNz ('/' segment)?
;
pathEmpty
: // TODO: 0<pchar>.
;
segment
: pchar*
;
segmentNz
: pchar+
;
segmentNzNc
: (unreserved | pctEncoded | subDelims | '@')+
;
pchar
: unreserved | pctEncoded | subDelims | ':' | '@'
;
query
: (pchar | '/' | '?')*
;
fragment_
: (pchar | '/' | '?')*
;
uriReference
: uri
| relativeRef
;
relativeRef
: relativePart ('?' query)? ('#' fragment_)?
;
relativePart
: '//' authority pathAbEmpty
| pathAbEmpty
| pathNoScheme
| pathEmpty
;
absoluteUri
: scheme ':' hierPart ('?' query)?
;
UriLexer.g4:
lexer grammar UriLexer;
// ASCII
//--------------------------------------------------
BANG : '!' ;
//DOUBLE_QUOTE : '"' ;
HASH : '#' ;
DOLLAR : '$' ;
PERCENT : '%' ;
AND : '&' ;
SINGLE_QUOTE : '\'' ;
LEFT_PARENTHESES : '(' ;
RIGHT_PARENTHESES : ')' ;
STAR : '*' ;
PLUS : '+' ;
COMMA : ',' ;
MINUS : '-' ;
DOT : '.' ;
SLASH : '/' ;
COLON : ':' ;
SEMICOLON : ';' ;
LEFT_ANGLE_BRACKET : '<' ;
EQUAL : '=' ;
RIGHT_ANGLE_BRACKET : '>' ;
QUESTION : '?' ;
AT : '@' ;
LEFT_SQUARE_BRACKET : '[' ;
BACKSLASH : '\\' ;
RIGHT_SQUARE_BRACKET : ']' ;
CARROT : '^' ;
UNDERSCORE : '_' ;
BACKTICK : '`' ;
LEFT_CURLY_BRACKET : '{' ;
BAR : '|' ;
RIGHT_CURLY_BRACKET : '}' ;
TILDE : '~' ;
// Core
//--------------------------------------------------
// Taken from ABNF.
ALPHA : [a-zA-Z] ;
DIGIT : [0-9] ;
HEXDIG : [0-9a-fA-F] ;
DQUOTE : '"' ;
SP : ' ' ;
HTAB : '\t' ;
WSP : SP | HTAB ;
//LWSP : (WSP | CRLF WSP)* ;
VCHAR : [\u0021-\u007F] ;
CHAR : [\u0001-\u007F] ;
OCTET : [\u0000-\u00FF] ;
CTL : [\u0000-\u001F\u007F] ;
CR : '\r' ;
LF : '\n' ;
CRLF : CR LF ;
BIT : '0' | '1' ;
// Miscellaneous
//--------------------------------------------------
DOUBLE_SLASH : '//' ;
DOUBLE_COLON : '::' ;
LOWER_V : 'v' ;
ZERO : '0' ;
ONE : '1' ;
TWO : '2' ;
THREE : '3' ;
FOUR : '4' ;
FIVE : '5' ;
SIX : '6' ;
SEVEN : '7' ;
EIGHT : '8' ;
NINE : '9' ;

Select with subtotals using postgres sql

I have the following query:
select
    json_build_object('id', i.id, 'task_id', i.task_id, 'time_spent', i.summary)
from
    intervals I
where
    extract(month from "created_at") = 10
    and extract(year from "created_at") = 2021
group by
    i.id, i.task_id
order by i.task_id
Which gives the following output:
json_build_object
{"id" : 53, "task_id" : 1, "time_spent" : "3373475"}
{"id" : 40, "task_id" : 1, "time_spent" : "3269108"}
{"id" : 60, "task_id" : 2, "time_spent" : "2904084"}
{"id" : 45, "task_id" : 4, "time_spent" : "1994341"}
{"id" : 38, "task_id" : 5, "time_spent" : "1933766"}
{"id" : 62, "task_id" : 5, "time_spent" : "2395378"}
{"id" : 44, "task_id" : 6, "time_spent" : "3304280"}
{"id" : 58, "task_id" : 6, "time_spent" : "3222501"}
{"id" : 48, "task_id" : 6, "time_spent" : "1990195"}
{"id" : 55, "task_id" : 7, "time_spent" : "1984300"}
How can I add subtotals of time_spent by each task?
I'd like to have an array structure of objects like this:
{
"total": 3968600,
"details:" [
{"id" : 55, "task_id" : 7, "time_spent" : "1984300"},
{"id" : 55, "task_id" : 7, "time_spent" : "1984300"}
]
}
How can I achieve it? Thank you!
You may try the following modification, which groups your data by task_id and uses json_agg and json_build_object to produce your desired schema.
select
    json_build_object(
        'total', SUM(i.summary),
        'details', json_agg(
            json_build_object(
                'id', i.id,
                'task_id', i.task_id,
                'time_spent', i.summary
            )
        )
    ) as result
from
    intervals I
where
    extract(month from "created_at") = 10
    and extract(year from "created_at") = 2021
group by
    i.task_id
order by i.task_id
See the working demo fiddle online here.
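As a cross-check of what the aggregation computes, here is a small Python sketch that reproduces the same per-task grouping over a few sample rows from the question (illustrative only; the SQL above is the actual answer):

import json
from itertools import groupby
from operator import itemgetter

rows = [
    {"id": 53, "task_id": 1, "time_spent": 3373475},
    {"id": 40, "task_id": 1, "time_spent": 3269108},
    {"id": 60, "task_id": 2, "time_spent": 2904084},
]

rows.sort(key=itemgetter("task_id"))  # groupby needs sorted input
for task_id, group in groupby(rows, key=itemgetter("task_id")):
    details = list(group)
    print(json.dumps({"total": sum(d["time_spent"] for d in details),
                      "details": details}))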

Mixed Integer Linear Optimization with Pyomo - Travelling salesman problem

I am trying to solve a travelling salesman problem with the Pyomo framework. However, I am stuck: the solver reports that the model is infeasible.
import numpy as np
import pyomo.environ as pyo
from pyomo.environ import *
from pyomo.opt import SolverFactory
journey_distances = np.array([[0, 28, 34, 45, 36],
                              [28, 0, 45, 52, 64],
                              [34, 45, 0, 11, 34],
                              [45, 52, 11, 0, 34],
                              [36, 64, 34, 34, 0]])
# create variables - integers
num_locations = journey_distances.shape[0]
model = pyo.ConcreteModel()
model.journeys = pyo.Var(range(num_locations), range(num_locations), domain=pyo.Binary, bounds=(0, None))
journeys = model.journeys
# add A to B constraints
model.AtoB = pyo.ConstraintList()
model.BtoA = pyo.ConstraintList()
AtoB = model.AtoB
BtoA = model.BtoA
AtoB_sum = [sum([journeys[i, j] for j in range(num_locations) if i != j]) for i in range(num_locations)]
BtoA_sum = [sum([journeys[i, j] for i in range(num_locations) if j != i]) for j in range(num_locations)]
for journey_sum in range(num_locations):
    AtoB.add(AtoB_sum[journey_sum] == 1)
    if journey_sum < num_locations - 1:
        BtoA.add(BtoA_sum[journey_sum] == 1)
# add auxilliary variables to ensure that each successive journey ends and starts on the same town. E.g. A to B, then B to C.
# u_j - u_i >= -(n+1) + num_locations*journeys_{ij} for i,j = 1...n, i!=j
model.successive_aux = pyo.Var(range(0,num_locations), domain = pyo.Integers, bounds = (0,num_locations-1))
model.successive_constr = pyo.ConstraintList()
successive_aux = model.successive_aux
successive_constr = model.successive_constr
successive_constr.add(successive_aux[0] == 1)
for i in range(num_locations):
    for j in range(num_locations):
        if i != j:
            successive_constr.add(successive_aux[j] - successive_aux[i] >= -(num_locations - 1) + num_locations*journeys[i, j])
obj_sum = sum([ sum([journey_distances [i,j]*journeys[i,j] for j in range(num_locations) if i!=j]) for i in range(num_locations)])
model.obj = pyo.Objective(expr = obj_sum, sense = minimize)
opt = SolverFactory('cplex')
results = opt.solve(model)
journey_res = np.array([model.journeys[journey].value for journey in journeys])
print(journey_res)
# results output is:
print(results)
Problem:
- Lower bound: -inf
Upper bound: inf
Number of objectives: 1
Number of constraints: 31
Number of variables: 26
Number of nonzeros: 98
Sense: unknown
Solver:
- Status: ok
User time: 0.02
Termination condition: infeasible
Termination message: MIP - Integer infeasible.
Error rc: 0
Time: 0.10198116302490234
# model.pprint()
7 Set Declarations
AtoB_index : Size=1, Index=None, Ordered=Insertion
Key : Dimen : Domain : Size : Members
None : 1 : Any : 5 : {1, 2, 3, 4, 5}
BtoA_index : Size=1, Index=None, Ordered=Insertion
Key : Dimen : Domain : Size : Members
None : 1 : Any : 4 : {1, 2, 3, 4}
journeys_index : Size=1, Index=None, Ordered=False
Key : Dimen : Domain : Size : Members
None : 2 : journeys_index_0*journeys_index_1 : 25 : {(0, 0), (0, 1), (0, 2), (0, 3), (0, 4), (1, 0), (1, 1), (1, 2), (1, 3), (1, 4), (2, 0), (2, 1), (2, 2), (2, 3), (2, 4), (3, 0), (3, 1), (3, 2), (3, 3), (3, 4), (4, 0), (4, 1), (4, 2), (4, 3), (4, 4)}
journeys_index_0 : Size=1, Index=None, Ordered=False
Key : Dimen : Domain : Size : Members
None : 1 : Any : 5 : {0, 1, 2, 3, 4}
journeys_index_1 : Size=1, Index=None, Ordered=False
Key : Dimen : Domain : Size : Members
None : 1 : Any : 5 : {0, 1, 2, 3, 4}
successive_aux_index : Size=1, Index=None, Ordered=False
Key : Dimen : Domain : Size : Members
None : 1 : Any : 5 : {0, 1, 2, 3, 4}
successive_constr_index : Size=1, Index=None, Ordered=Insertion
Key : Dimen : Domain : Size : Members
None : 1 : Any : 21 : {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21}
2 Var Declarations
journeys : Size=25, Index=journeys_index
Key : Lower : Value : Upper : Fixed : Stale : Domain
(0, 0) : 0 : None : 1 : False : True : Binary
(0, 1) : 0 : None : 1 : False : True : Binary
(0, 2) : 0 : None : 1 : False : True : Binary
(0, 3) : 0 : None : 1 : False : True : Binary
(0, 4) : 0 : None : 1 : False : True : Binary
(1, 0) : 0 : None : 1 : False : True : Binary
(1, 1) : 0 : None : 1 : False : True : Binary
(1, 2) : 0 : None : 1 : False : True : Binary
(1, 3) : 0 : None : 1 : False : True : Binary
(1, 4) : 0 : None : 1 : False : True : Binary
(2, 0) : 0 : None : 1 : False : True : Binary
(2, 1) : 0 : None : 1 : False : True : Binary
(2, 2) : 0 : None : 1 : False : True : Binary
(2, 3) : 0 : None : 1 : False : True : Binary
(2, 4) : 0 : None : 1 : False : True : Binary
(3, 0) : 0 : None : 1 : False : True : Binary
(3, 1) : 0 : None : 1 : False : True : Binary
(3, 2) : 0 : None : 1 : False : True : Binary
(3, 3) : 0 : None : 1 : False : True : Binary
(3, 4) : 0 : None : 1 : False : True : Binary
(4, 0) : 0 : None : 1 : False : True : Binary
(4, 1) : 0 : None : 1 : False : True : Binary
(4, 2) : 0 : None : 1 : False : True : Binary
(4, 3) : 0 : None : 1 : False : True : Binary
(4, 4) : 0 : None : 1 : False : True : Binary
successive_aux : Size=5, Index=successive_aux_index
Key : Lower : Value : Upper : Fixed : Stale : Domain
0 : 0 : None : 4 : False : True : Integers
1 : 0 : None : 4 : False : True : Integers
2 : 0 : None : 4 : False : True : Integers
3 : 0 : None : 4 : False : True : Integers
4 : 0 : None : 4 : False : True : Integers
1 Objective Declarations
obj : Size=1, Index=None, Active=True
Key : Active : Sense : Expression
None : True : minimize : 28*journeys[0,1] + 34*journeys[0,2] + 45*journeys[0,3] + 36*journeys[0,4] + 28*journeys[1,0] + 45*journeys[1,2] + 52*journeys[1,3] + 64*journeys[1,4] + 34*journeys[2,0] + 45*journeys[2,1] + 11*journeys[2,3] + 34*journeys[2,4] + 45*journeys[3,0] + 52*journeys[3,1] + 11*journeys[3,2] + 34*journeys[3,4] + 36*journeys[4,0] + 64*journeys[4,1] + 34*journeys[4,2] + 34*journeys[4,3]
3 Constraint Declarations
AtoB : Size=5, Index=AtoB_index, Active=True
Key : Lower : Body : Upper : Active
1 : 1.0 : journeys[0,1] + journeys[0,2] + journeys[0,3] + journeys[0,4] : 1.0 : True
2 : 1.0 : journeys[1,0] + journeys[1,2] + journeys[1,3] + journeys[1,4] : 1.0 : True
3 : 1.0 : journeys[2,0] + journeys[2,1] + journeys[2,3] + journeys[2,4] : 1.0 : True
4 : 1.0 : journeys[3,0] + journeys[3,1] + journeys[3,2] + journeys[3,4] : 1.0 : True
5 : 1.0 : journeys[4,0] + journeys[4,1] + journeys[4,2] + journeys[4,3] : 1.0 : True
BtoA : Size=4, Index=BtoA_index, Active=True
Key : Lower : Body : Upper : Active
1 : 1.0 : journeys[1,0] + journeys[2,0] + journeys[3,0] + journeys[4,0] : 1.0 : True
2 : 1.0 : journeys[0,1] + journeys[2,1] + journeys[3,1] + journeys[4,1] : 1.0 : True
3 : 1.0 : journeys[0,2] + journeys[1,2] + journeys[3,2] + journeys[4,2] : 1.0 : True
4 : 1.0 : journeys[0,3] + journeys[1,3] + journeys[2,3] + journeys[4,3] : 1.0 : True
successive_constr : Size=21, Index=successive_constr_index, Active=True
Key : Lower : Body : Upper : Active
1 : 1.0 : successive_aux[0] : 1.0 : True
2 : -Inf : -4 + 5*journeys[0,1] - (successive_aux[1] - successive_aux[0]) : 0.0 : True
3 : -Inf : -4 + 5*journeys[0,2] - (successive_aux[2] - successive_aux[0]) : 0.0 : True
4 : -Inf : -4 + 5*journeys[0,3] - (successive_aux[3] - successive_aux[0]) : 0.0 : True
5 : -Inf : -4 + 5*journeys[0,4] - (successive_aux[4] - successive_aux[0]) : 0.0 : True
6 : -Inf : -4 + 5*journeys[1,0] - (successive_aux[0] - successive_aux[1]) : 0.0 : True
7 : -Inf : -4 + 5*journeys[1,2] - (successive_aux[2] - successive_aux[1]) : 0.0 : True
8 : -Inf : -4 + 5*journeys[1,3] - (successive_aux[3] - successive_aux[1]) : 0.0 : True
9 : -Inf : -4 + 5*journeys[1,4] - (successive_aux[4] - successive_aux[1]) : 0.0 : True
10 : -Inf : -4 + 5*journeys[2,0] - (successive_aux[0] - successive_aux[2]) : 0.0 : True
11 : -Inf : -4 + 5*journeys[2,1] - (successive_aux[1] - successive_aux[2]) : 0.0 : True
12 : -Inf : -4 + 5*journeys[2,3] - (successive_aux[3] - successive_aux[2]) : 0.0 : True
13 : -Inf : -4 + 5*journeys[2,4] - (successive_aux[4] - successive_aux[2]) : 0.0 : True
14 : -Inf : -4 + 5*journeys[3,0] - (successive_aux[0] - successive_aux[3]) : 0.0 : True
15 : -Inf : -4 + 5*journeys[3,1] - (successive_aux[1] - successive_aux[3]) : 0.0 : True
16 : -Inf : -4 + 5*journeys[3,2] - (successive_aux[2] - successive_aux[3]) : 0.0 : True
17 : -Inf : -4 + 5*journeys[3,4] - (successive_aux[4] - successive_aux[3]) : 0.0 : True
18 : -Inf : -4 + 5*journeys[4,0] - (successive_aux[0] - successive_aux[4]) : 0.0 : True
19 : -Inf : -4 + 5*journeys[4,1] - (successive_aux[1] - successive_aux[4]) : 0.0 : True
20 : -Inf : -4 + 5*journeys[4,2] - (successive_aux[2] - successive_aux[4]) : 0.0 : True
21 : -Inf : -4 + 5*journeys[4,3] - (successive_aux[3] - successive_aux[4]) : 0.0 : True
13 Declarations: journeys_index_0 journeys_index_1 journeys_index journeys AtoB_index AtoB BtoA_index BtoA successive_aux_index successive_aux successive_constr_index successive_constr obj
If anyone can see what the problem is and can let me know, that would be a great help.
I'm not overly familiar with coding TSP problems, and I'm not sure of all the details in your code, but this (below) is a problem. It seems you are coding successive_aux (call it sa for short) as a sequencing of integers. In the snippet below (I chopped the problem down to 3 points), consider the legal route 0-1-2-0: the constraints force sa_1 > sa_0 and sa_2 > sa_1, so it is infeasible to also require sa_0 > sa_2. Your bounds on sa appear infeasible as well: in this example sa_0 is fixed to 1, and the upper bound on sa is 2. Those are 2 "infeasibilities" in your formulation.
Key : Lower : Body : Upper : Active
1 : 1.0 : successive_aux[0] : 1.0 : True
2 : -Inf : -2 + 3*journeys[0,1] - (successive_aux[1] - successive_aux[0]) : 0.0 : True
3 : -Inf : -2 + 3*journeys[0,2] - (successive_aux[2] - successive_aux[0]) : 0.0 : True
4 : -Inf : -2 + 3*journeys[1,0] - (successive_aux[0] - successive_aux[1]) : 0.0 : True
5 : -Inf : -2 + 3*journeys[1,2] - (successive_aux[2] - successive_aux[1]) : 0.0 : True
6 : -Inf : -2 + 3*journeys[2,0] - (successive_aux[0] - successive_aux[2]) : 0.0 : True
7 : -Inf : -2 + 3*journeys[2,1] - (successive_aux[1] - successive_aux[2]) : 0.0 : True
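A common repair is the standard Miller-Tucker-Zemlin formulation, which fixes the depot's position and only sequences the remaining nodes, so the tour can legally close back at the depot. Here is a minimal sketch against the question's model (assuming node 0 is the depot and reusing the question's variable names; this replaces the original successive_constr constraints):

import pyomo.environ as pyo

n = num_locations
model.successive_aux[0].fix(0)  # depot comes first in the sequence
model.mtz = pyo.ConstraintList()
for i in range(1, n):
    for j in range(1, n):
        if i != j:
            # u_i - u_j + n*x_ij <= n - 1 forbids subtours among nodes 1..n-1
            model.mtz.add(
                model.successive_aux[i] - model.successive_aux[j]
                + n * model.journeys[i, j] <= n - 1
            )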
I'm not an optimization expert, but it looks like you need to change the distances between the cities: you're basically saying that the distance from city1 to city1 is 0, city2 to city2 is 0, etc. If you change these distances to a very large number (say 1000000), the optimizer will never choose to go from city1 back to city1.
Hope this helps.