I was looking at one of the tutorial exercises (the LOGIK extended exercise), and for some reason one of the macro rules only expands once. I didn't change anything in logik.k other than adding the following lines to the LOGIK module so K would actually run the files:
syntax KResult ::= Val
configuration <T> <k> $PGM:Pgm </k> </T>
Then I ran:
kompile --backend java logik.k -d .
krun tests/list-member-1.logik
And I got (I added some newlines for readability):
<T>
<k>
member ( X , [ X , .Terms | _ ] , .Terms ) .
member ( X , [ _ , .Terms | T ] , .Terms ) :- member ( X , T , .Terms ) , .Predicates .
?- member ( 5 , [ 1 , .Terms | [ 2 , 3 , 4 , 5 , 6 , 5 , .Terms | [ .Terms ] ] ] , .Terms ) , .Predicates .
</k>
</T>
But I would expect the query to be
?- member ( 5 , [ 1 , .Terms | [ 2 , .Terms | [ 3 , .Terms | [ 4 , .Terms | [ 5 , .Terms | [ 6 , .Terms | [ 5 , .Terms | [ .Terms ] ] ] ] ] ] ] ] , .Terms ) .
To be clear, the following rules seem to be the issue: I would expect them to keep applying until they no longer match, and I can't see why they stop after a single expansion.
rule [T1:Term,T2:Term,Ts:Terms|T':Term] => [T1|[T2,Ts|T']] [macro]
rule [T:Term,Ts:Terms] => [T,Ts|[.Terms]] [macro]
We switched the meaning of macro to mean "non-recursive macro". You need to use macro-rec to tell K that this is a macro you want to apply recursively.
This change happened here: https://github.com/kframework/k/pull/592
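Concretely, the fix should just be to re-tag the two rules quoted above with macro-rec (assuming nothing else in the module relies on them being non-recursive):

```k
rule [T1:Term,T2:Term,Ts:Terms|T':Term] => [T1|[T2,Ts|T']] [macro-rec]
rule [T:Term,Ts:Terms] => [T,Ts|[.Terms]] [macro-rec]
```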
I want to read streamed JSON inputs and reduce them to an array containing the leaf values.
Demo: https://jqplay.org/s/cZxLguJFxv
Please consider
filter:
try reduce range(30) as $i ( []; (.+[until(length==2;input)[1]] // error(.)) )
catch empty
input:
[
[
0,
0,
"a"
],
null
]
[
[
0,
0,
"a"
]
]
[
[
0,
1,
"b"
],
null
]
[
[
0,
1,
"b"
]
]
[
[
0,
1
]
]
[
[
1
],
0
]
...
output:
empty
I expect the output: [null, null, 0, ...] but I get empty instead.
I told reduce to iterate 30 times, but there are fewer inputs than that. I'm expecting it to skip the inputs whose length is not 2 and produce an array containing all the leaf values.
I don't know how this will behave when there are no more inputs of length 2 left but reduce still has iterations remaining.
I want to know why my filter returns empty. What am I doing wrong? Thanks!
Your filter returns empty because, once the stream is exhausted, input raises a "No more inputs" error; that error escapes the reduce, and your outer try ... catch empty then discards everything, including the array accumulated so far. These filters should do what you want:
jq -n 'reduce inputs as $in ([]; if $in | has(1) then . + [$in[1]] else . end)'
Demo
jq -n '[inputs | select(has(1))[1]]'
Demo
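To see why the reduce inputs filter above yields [null, null, 0, ...], here is a small Python re-implementation of the same reduction, assuming the six inputs shown in the question are the whole stream:

```python
import json

# The six streamed inputs from the question, hand-copied (jq null -> Python None).
stream = [
    [[0, 0, "a"], None],
    [[0, 0, "a"]],
    [[0, 1, "b"], None],
    [[0, 1, "b"]],
    [[0, 1]],
    [[1], 0],
]

# Equivalent of: reduce inputs as $in ([]; if $in | has(1) then . + [$in[1]] else . end)
acc = []
for item in stream:
    if len(item) > 1:      # jq's has(1): index 1 exists in the array
        acc.append(item[1])

print(json.dumps(acc))     # [null, null, 0]
```

Only the two-element inputs contribute their second element; the rest are passed over unchanged.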
I am trying to print two-column output using awk, with the columns separated by a space. In the example below the first column value is '1' and the second column is '1['. As seen in the output, the two values are merged together and I am not able to print a space between them. The -vOFS flag does not seem to help. I am also printing just the last line of a command's output in this awk statement.
In addition, I would like to get rid of the '[' in the second column ('1['), so only the '1' is left. How exactly do I do that?
awk command:
sudo iblinkinfo | awk -vOFS=' ' 'NR==1; END{print $11 $12}'
awk'd Output I get:
CA: MT25408 ConnectX Mellanox Technologies:
11[
awk'd Output I want:
1 1
Original cmd output (the relevant lines are the last two, beginning with "CA: MT..."). The first column ($1) of the final line is the hex value 0xe41d2d0300e29e01; I would like to print the 11th and 12th columns, which are 1 and 1[ (towards the end).
1 34[ ] ==( Down/ Polling)==> [ ] "" ( )
1 35[ ] ==( Down/ Polling)==> [ ] "" ( )
1 36[ ] ==( Down/ Polling)==> [ ] "" ( )
CA: MT25408 ConnectX Mellanox Technologies:
0xe41d2d0300e29e01 2 1[ ] ==( 4X 10.0 Gbps Active/ LinkUp)==> 1 1[ ] "Infiniscale-IV Mellanox Technologies" ( )
Is this what you're trying to do?
$ cat file
1 34[ ] ==( Down/ Polling)==> [ ] "" ( )
1 35[ ] ==( Down/ Polling)==> [ ] "" ( )
1 36[ ] ==( Down/ Polling)==> [ ] "" ( )
CA: MT25408 ConnectX Mellanox Technologies:
0xe41d2d0300e29e01 2 1[ ] ==( 4X 10.0 Gbps Active/ LinkUp)==> 1 1[ ] "Infiniscale-IV Mellanox Technologies" ( )
$ awk 'END{print $11, $12+0}' file
1 1
The above relies on undefined behavior, since the values of $0, $1, etc. in the END section are undefined by the POSIX standard, but it'll work in gawk, which you appear to be using.
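A variant that avoids that undefined behavior is to save the two fields on every line, so END only prints the saved copies (shown here against the two sample lines instead of the live iblinkinfo output):

```shell
# Save $11/$12 as each line is read; END then prints the values from the
# last line without depending on $0 surviving into the END block.
printf '%s\n' \
  '1 34[ ] ==( Down/ Polling)==> [ ] "" ( )' \
  '0xe41d2d0300e29e01 2 1[ ] ==( 4X 10.0 Gbps Active/ LinkUp)==> 1 1[ ] "Infiniscale-IV Mellanox Technologies" ( )' \
  | awk '{a = $11; b = $12} END{print a, b+0}'
# prints: 1 1
```

The comma in print inserts OFS (a space by default), and b+0 coerces "1[" to the number 1, stripping the bracket.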
I need to extract only the content inside the brackets in a pandas DataFrame. I tried using str.extract() but it's not working. I need help with the extraction.
DATA (this is in a DataFrame; a sample from one row):
By:Chen TX (Chen Tianxu)[ 1 ] ; Tribbitt MA (Tribbitt Mark A.)[ 2 ] ; Yang Y (Yang Yi)[ 3 ] ; Li XM (Li Xiaomei)[ 4 ]
You can use a regular expression:
import pandas as pd
import re
dataset = pd.DataFrame([{'DATA': 'By:Chen TX (Chen Tianxu)[ 1 ] ; Tribbitt MA (Tribbitt Mark A.)[ 2 ] ; Yang Y (Yang Yi)[ 3 ] ; Li XM (Li Xiaomei)[ 4 ]'}])
print(dataset)
Dataframe is:
DATA
0 By:Chen TX (Chen Tianxu)[ 1 ] ; Tribbitt MA (Tribbitt Mark A.)[ 2 ] ; Yang Y (Yang Yi)[ 3 ] ; Li XM (Li Xiaomei)[ 4 ]
Then use a regular expression in a lambda function to extract the names and save them to a new column named names:
# regular expression from: https://stackoverflow.com/a/31343831/5916727
dataset['names'] = dataset['DATA'].apply(lambda x: re.findall(r'\((.*?)\)', x))
print(dataset['names'])
Output of names column would be:
0 [Chen Tianxu, Tribbitt Mark A., Yang Yi, Li Xiaomei]
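Note that the pattern above extracts the parenthesised names. If you literally want the content inside the square brackets (the affiliation numbers), the same findall approach works with a bracket pattern, which you can drop into the .apply call in exactly the same way:

```python
import re

# One sample row, copied from the question.
row = ('By:Chen TX (Chen Tianxu)[ 1 ] ; Tribbitt MA (Tribbitt Mark A.)[ 2 ] ; '
       'Yang Y (Yang Yi)[ 3 ] ; Li XM (Li Xiaomei)[ 4 ]')

# Square-bracket contents, with the surrounding whitespace stripped.
print(re.findall(r'\[\s*(.*?)\s*\]', row))   # ['1', '2', '3', '4']
```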
How can I refer to date as an argument of f within the foreach loop if date is also used as a block-element variable? Am I obliged to rename my date variable?
f: func[data [block!] date [date!]][
foreach [date o h l c v] data [
]
]
Simple: compose is your best friend.
f: func[data [block!] date [date!]][
foreach [date str] data compose [
print (date)
print date
]
]
>> f [2010-09-01 "first of sept" 2010-10-01 "first of october"] now
7-Sep-2010/21:19:05-4:00
1-Sep-2010
7-Sep-2010/21:19:05-4:00
1-Oct-2010
You need to either change the parameter name from date or assign it to a local variable.
You can access the date argument inside the foreach loop by binding the 'date word from the function specification to the data argument:
>> f: func[data [block!] date [date!]][
[ foreach [date o h l c v] data [
[ print last reduce bind find first :f 'date 'data
[ print date
[ ]
[ ]
>> f [1-1-10 1 2 3 4 5 2-1-10 1 2 3 4 5] 8-9-10
8-Sep-2010
1-Jan-2010
8-Sep-2010
2-Jan-2010
It makes the code very difficult to read, though. I think it would be better to assign the date argument to a local variable inside the function, as Graham suggested.
>> f: func [data [block!] date [date!] /local the-date ][
[ the-date: :date
[ foreach [date o h l c v] data [
[ print the-date
[ print date
[ ]
[ ]
>> f [1-1-10 1 2 3 4 5 2-1-10 1 2 3 4 5] 8-9-10
8-Sep-2010
1-Jan-2010
8-Sep-2010
2-Jan-2010
The program below can generate random data according to some specs (the example here is for 2 columns).
It works with a few hundred thousand lines on my PC (it probably depends on RAM). I need to scale to dozens of millions of rows.
How can I optimize the program to write directly to disk? Subsidiarily, how can I "cache" the execution of the parsing rule, as it is always the same pattern repeated 50 million times?
Note: to use the program below, just call generate-blocks and then save-blocks; the output will be db.txt.
Rebol[]
specs: [
[3 digits 4 digits 4 letters]
[2 letters 2 digits]
]
;====================================================================================================================
digits: charset "0123456789"
letters: charset "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
separator: charset ";"
block-letters: [A B C D E F G H I J K L M N O P Q R S T U V W X Y Z]
blocks: copy []
generate-row: func [] [
    foreach spec specs [
        rule: [
            any [
                [
                    set times integer! [
                        [
                            'digits (
                                repeat n times [block: rejoin [block random 9]]
                            )
                            |
                            'letters (
                                repeat n times [block: rejoin [block to-string pick block-letters random 24]]
                            )
                        ]
                        |
                        [
                            'letters (
                                repeat n times [block: rejoin [block to-string pick block-letters random 24]]
                            )
                            |
                            'digits (
                                repeat n times [block: rejoin [block random 9]]
                            )
                        ]
                    ]
                    |
                    {"} any separator {"}
                ]
            ]
            to end
        ]
        block: copy ""
        parse spec rule
        append blocks block
    ]
]
generate-blocks: func[m][
repeat num m [
generate-row
]
]
quote: func[string][
rejoin [{"} string {"}]
]
save-blocks: func[file][
if exists? to-rebol-file file [
answer: ask rejoin ["delete " file "? (Y/N): "]
if (answer = "Y") [
delete %db.txt
]
]
foreach [field1 field2] blocks [
write/lines/append %db.txt rejoin [quote field1 ";" quote field2]
]
]
Use open with the /direct and /lines refinements to write directly to the file without buffering the content:
file: open/direct/lines/write %myfile.txt
loop 1000 [
t: random "abcdefghi"
append file t
]
close file
This will write 1000 random lines without buffering.
You can also prepare a block of lines (let's say 10,000 rows) and then write it to the file in one go; this will be faster than writing line by line.
file: open/direct/lines/write %myfile.txt
loop 100 [
b: copy []
loop 1000 [append b random "abcdef"]
append file b
]
close file
This will be much faster: 100,000 rows in less than a second.
Hope this helps.
Note that you can change the numbers 100 and 1000 according to your needs and your PC's memory, and use b: make block! 1000 instead of b: copy []; it will be faster.
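On the side question about caching: the rule block is currently rebuilt inside generate-row on every call. An untested sketch of hoisting it out, so one block (named row-rule here, a name I've made up) is reused for all rows; it also drops the redundant duplicate alternative from the original rule:

```rebol
; Untested sketch: build the parse rule once, at load time, instead of
; rebuilding it on every generate-row call.
row-rule: [
    any [
        set times integer! [
            'digits  (repeat n times [block: rejoin [block random 9]])
            | 'letters (repeat n times [block: rejoin [block to-string pick block-letters random 24]])
        ]
        | {"} any separator {"}
    ]
    to end
]
generate-row: func [] [
    foreach spec specs [
        block: copy ""
        parse spec row-rule
        append blocks block
    ]
]
```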