Dumb7Fill queen and rook attacks seem to jump over "ours" pieces - chess

I am trying to get into chess programming and am currently working on bitboards. All is fine except that Dumb7Fill (the unrolled loops) generates attacks that let the queen and rook jump over their own pieces. Below are traces of the execution. What am I doing wrong here? The code for the *Attack functions is taken straight from the wiki page, which means the fills are extended into attacks by a final north, south, west or east shift. This is programmed in Java.
long rooks = (One << index);
long empty = ~rep.getOccupancy();
long attacks = southAttack(rooks, empty)
             | northAttack(rooks, empty)
             | eastAttack(rooks, empty)
             | westAttack(rooks, empty);
long actual = (attacks & rep.getCurrentPosition(getColor().inverse()));
ROOK ATTACKS w
rooks w
. . . . . . . .
. . . . . . . .
. . . . . . . .
. . . . . . . .
. . . . . . . .
. . . . . . . .
. . . . . . . .
1 . . . . . . .
n: 1
empty w
. . . . . . . .
1 . . . . . . .
1 1 1 1 1 1 1 1
. 1 1 1 1 1 1 1
. 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1
1 . . . . . . .
. . . . . . . .
n: 1fffefeff0100
theirs w
1 1 1 1 1 1 1 1
. 1 1 1 1 1 1 1
. . . . . . . .
p . . . . . . .
. . . . . . . .
. . . . . . . .
. . . . . . . .
. . . . . . . .
n: fffe000100000000
ours w
. . . . . . . .
. . . . . . . .
. . . . . . . .
. . . . . . . .
P . . . . . . .
. . . . . . . .
. 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1
n: 100feff
attacks from Dumb7Fill w
. . . . . . . .
. . . . . . . .
. . . . . . . .
1 . . . . . . .
1 . . . . . . .
1 . . . . . . .
. . . . . . . .
. . 1 . . . . .
n: 101010004
actual attack w
. . . . . . . .
. . . . . . . .
. . . . . . . .
1 . . . . . . .
. . . . . . . .
. . . . . . . .
. . . . . . . .
. . . . . . . .
n: 100000000

It turns out I misinterpreted the algorithms on the wiki page. The *Attack routines include the final shift by one, so they produce attack sets (which contain the first blocker, friend or foe), not move sets as I had interpreted them. For move generation one needs the occluded fills instead.
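To make that distinction concrete, here is a minimal sketch of the south direction in Python (plain ints stand in for Java longs; the explicit 64-bit mask replaces Java's implicit overflow):

```python
M = 0xFFFFFFFFFFFFFFFF  # keep Python's unbounded ints at 64 bits

def south_occluded(gen, empty):
    # Dumb7Fill occluded fill: every square the slider reaches going
    # south through empty squares (its own square included)
    flood = gen
    for _ in range(7):
        gen = (gen >> 8) & empty
        flood |= gen
    return flood & M

def south_attacks(sliders, empty):
    # attacks = occluded fill shifted one more rank south; this set
    # INCLUDES the first occupied square, whether it is ours or theirs
    return (south_occluded(sliders, empty) >> 8) & M

def south_moves(sliders, empty, theirs):
    # moves = quiet moves onto empty squares plus captures of enemy pieces
    quiet = south_occluded(sliders, empty) & empty
    captures = south_attacks(sliders, empty) & theirs
    return quiet | captures

# rook on a4 (bit 24), own pawn on a2 (bit 8): the attack set still
# contains a2 (the rook defends it), but the move set stops at a3
rook = 1 << 24
own_pawn = 1 << 8
empty = ~(rook | own_pawn) & M
print(hex(south_attacks(rook, empty)))          # 0x10100 -> a3 and a2
print(hex(south_moves(rook, empty, theirs=0)))  # 0x10000 -> a3 only
```

Masking the attack set with "theirs" yields captures, which is why a board full of friendly blockers still appears to be "jumped over" if attacks are mistaken for moves.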

Related

Identifying specific strings and checking subsequent rows for another string

I have the following DataFrame.
df = pd.DataFrame({'1': ['A','.','.','X','.','.'],
                   '2': ['.','.','.','.','A','.'],
                   '3': ['.','.','.','.','.','.'],
                   '4': ['.','.','.','.','.','X']})
I want to identify all instances where 'A' occurs and check to see if 'X' occurs within the next 3 rows.
After doing that I would like to execute a command based on these conditions.
An example of what I am trying to do would be:
for i, idx in df.iterrows():
    if idx == A:
        if X exists within next 3 rows:
            x = idx['1']
            y = idx['2']
Any help would be greatly appreciated.
I am sure the other answer could work if you explained what you really want to do; it would also be more efficient, as iterating over rows is slow.
However, here is a solution based on iterrows:
mask = df.eq('X').any(axis=1)
mask = mask.where(mask).bfill(limit=3).fillna(False)
for idx, row in df.iterrows():
    if 'A' in row.values and mask[idx]:
        x = row['1']
        y = row['2']
        print(f'row {idx} matches: {x=}, {y=}')
example input (slightly different from yours):
1 2 3 4
0 A . . .
1 . . . .
2 . . A .
3 . . . .
4 X A . .
5 . . X .
output:
row 2 matches: x='.', y='.'
row 4 matches: x='X', y='A'
IIUC, you want to identify the cells that contain the value A where, within the next 3 rows, there is also a value X.
I will use a more visual example for clarity (A/X/.):
0 1 2 3 4 5
0 A . . A . A
1 . . X . A .
2 . A . . . A
3 . X . X . X
4 X . . . . .
One can use eq to find the searched values, then where + bfill(limit=3) + fillna to extend the second mask to the preceding lines.
# mask for the A
m1 = df.eq('A')
# mask for the X in the next 3 lines
m2 = df.eq('X')
m2 = m2.where(m2).bfill(limit=3).fillna(False)
# example of how to use the masks: replacing A with O
df[m1&m2] = 'O'
Example output:
0 1 2 3 4 5
0 A . . O . O
1 . . X . A .
2 . O . . . O
3 . X . X . X
4 X . . . . .
checking X for any column
Just change the second mask to:
m2 = df.eq('X').any(axis=1)
m2 = m2.where(m2).bfill(limit=3).fillna(False)
output with this mask:
0 1 2 3 4 5
0 O . . O . O
1 . . X . O .
2 . O . . . O
3 . X . X . X
4 X . . . . .
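Putting the pieces together, here is a self-contained version of the column-wise variant; the frame is rebuilt from the visual example, and the trailing astype(bool) just keeps the mask boolean after fillna:

```python
import pandas as pd

# rebuild the visual example (rows 0-4, columns 0-5)
df = pd.DataFrame([list('A..A.A'),
                   list('..X.A.'),
                   list('.A...A'),
                   list('.X.X.X'),
                   list('X.....')])

m1 = df.eq('A')                      # mask for the A's
m2 = df.eq('X')                      # mask for the X's
# extend each X up to 3 rows back, then make the mask boolean again
m2 = m2.where(m2).bfill(limit=3).fillna(False).astype(bool)

df[m1 & m2] = 'O'                    # replace A with O where an X follows
print(df)
```

Running this reproduces the example output above: the A at row 0, column 0 survives because its X (row 4) is more than 3 rows below.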

Using Imputer with conditions on the cell values of columns

I have a data frame with a certain number of nan values in certain columns (the total number of rows is over 50000). The data looks something like below (values shown for the first 3 columns only):
A    B    C    D  E  F
10   20   5    .  .  .
nan  54   10   .  .  .
23   nan  9    .  .  .
30   32   6    .  .  .
20   22   nan  .  .  .
.    .    .    .  .  .
.    .    .    .  .  .
There is a condition on these columns: A < B and A > C must always hold, for every row.
I wish to use an imputer (preferably KNNImputer) such that these conditions are still satisfied after imputing.
(When applying an imputer in the generic way, it turns out that many cells do not satisfy these conditions.)
How can this be implemented?
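One pragmatic route, sketched below under assumptions (the data is made up to mirror the question, and only A is adjusted): impute as usual with KNNImputer, then project A back into the open interval (C, B) by clipping. This enforces C < A < B on every row as long as B > C after imputation; it is not the only possible post-processing, just a simple one.

```python
import numpy as np
import pandas as pd
from sklearn.impute import KNNImputer

df = pd.DataFrame({
    'A': [10, np.nan, 23, 30, 20],
    'B': [20, 54, np.nan, 32, 22],
    'C': [5, 10, 9, 6, np.nan],
})

# plain KNN imputation first
imputed = pd.DataFrame(KNNImputer(n_neighbors=2).fit_transform(df),
                       columns=df.columns)

# post-process: force C < A < B by clipping A into the interval,
# with a small margin so the inequalities stay strict
eps = 1e-9
imputed['A'] = imputed['A'].clip(lower=imputed['C'] + eps,
                                 upper=imputed['B'] - eps)
print(imputed)
```

If B or C themselves can be imputed in a way that violates B > C, those two columns need an analogous repair step first.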

Search a value in column and give the value from another

I have a table called "Process". It has 22 columns:
Column 1 is "Id", which is serial and the primary key.
Column 2 is "Process Name", character(50).
Column 3 is "Amount 1", character varying.
Column 4 is "Time 1", integer.
The rest of the columns follow the same pattern as 3 & 4, going up in number, i.e. column 5 is "Amount 2", column 6 is "Time 2".
What I need is a query which looks in the Amount columns for 'normal' and then displays the ID and the corresponding Time column. For example:
Process table:
ID | Process Name | Amount 1 | Time 1 | Amount 2 | Time 2
 1 | Pick         | normal   | 20     | normal   | 40
 2 | Pack         | normal   | 40     | 3        | 10
 3 | Pull         | 3        | 20     | 1        | 60
 4 | Play         | normal   | 40     |          |
Result:
ID | Time 1 | Time 2
 1 | 20     | 40
 2 | 40     |
 4 | 40     |
I have tried the following queries:
select public."Process", amount_1 from
(select ID, time_1 FROM public."Process" AS normal_tasks);
select public."Process", amount_1 from
select id, Time_1 from public."Process" where Amount_1 != 'normal';
but I'm getting syntax errors.
Any help will be greatly appreciated.
Many thanks,
Dave
I think you are looking for CASE:
SELECT ID,
       CASE WHEN "Amount 1" = 'normal'
            THEN "Time 1"
       END AS time_1,
       CASE WHEN "Amount 2" = 'normal'
            THEN "Time 2"
       END AS time_2
FROM public."Process"
WHERE 'normal' IN ("Amount 1", "Amount 2")
You have to add one CASE (and one term in the IN list) per Amount/Time pair, ten in all.
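The shape of the query can be checked end to end; here is a sketch against an in-memory SQLite copy of the table (the real database is presumably Postgres, where the same quoted identifiers and CASE syntax apply):

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.executescript("""
CREATE TABLE Process (
    ID INTEGER PRIMARY KEY,
    "Process Name" TEXT,
    "Amount 1" TEXT, "Time 1" INTEGER,
    "Amount 2" TEXT, "Time 2" INTEGER
);
INSERT INTO Process VALUES
    (1, 'Pick', 'normal', 20, 'normal', 40),
    (2, 'Pack', 'normal', 40, '3', 10),
    (3, 'Pull', '3', 20, '1', 60),
    (4, 'Play', 'normal', 40, NULL, NULL);
""")

rows = con.execute("""
SELECT ID,
       CASE WHEN "Amount 1" = 'normal' THEN "Time 1" END AS time_1,
       CASE WHEN "Amount 2" = 'normal' THEN "Time 2" END AS time_2
FROM Process
WHERE 'normal' IN ("Amount 1", "Amount 2")
""").fetchall()
print(rows)  # [(1, 20, 40), (2, 40, None), (4, 40, None)]
```

Rows 1, 2 and 4 come back as in the expected result, with NULL standing in for the blank Time cells.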

Search a column with numbers and a "." which should be treated as 0 by arithmetic operators?

I would like to list out all the data files within all the subdirectories of "my_data_path" directory and those files should match with
- column 7: match with "mystring" keyword
- column 20: value is <= 0.01
It seems awk doesn't work properly with the second condition ($20 <= 0.01), because that column has values ranging from 0 to 1 but also includes ".". I think this may cause a problem for awk; "." is supposed to be treated as 0 in the data files. Therefore, how can I dynamically change "." to 0 during awk matching?
Here's my current version:
find my_data_path -type f -name '*out.txt' -exec awk -F "\t" '{ if(($7 == "mystring") && ($20 <= 0.01)) { print } }' {} \;
The sample data is as follow:
chr1 69511 69511 A G exonic OR4F5 . nonsynonymous SNV OR4F5:NM_001005484:exon1:c.A421G:p.T141A Score=0.994828;Name=chr19:60000 . . . . . . rs2691305 1 0.9394
chr1 877831 877831 T C exonic SAMD11 . nonsynonymous SNV SAMD11:NM_152486:exon10:c.T1027C:p.W343R . . . . . . . rs6672356 1 0.9999
chr1 878667 878667 G T exonic SAMD11 . nonsynonymous SNV SAMD11:NM_152486:exon12:c.G1599T:p.E533D . . . . . . . rs201447515 0.003 8.74E-05
chr1 881627 881627 G A exonic NOC2L . synonymous SNV NOC2L:NM_015658:exon16:c.C1843T:p.L615L . . . . . . . rs2272757 0.66 0.5653
chr1 887801 887801 A G exonic NOC2L . synonymous SNV NOC2L:NM_015658:exon10:c.T1182C:p.T394T . . . . . . . rs3828047 0.96 0.9355
chr1 888639 888639 T C exonic NOC2L . synonymous SNV NOC2L:NM_015658:exon9:c.A918G:p.E306E . . . . . . . rs3748596 0.71 0.070
chr1 914333 914333 C G exonic PERM1 . nonsynonymous SNV PERM1:NM_001291366:exon2:c.G2077C:p.E693Q,PERM1:NM_001291367:exon3:c.G1795C:p.E599Q . . . . . . . rs13302979 0.81 0.6617
chr1 914852 914852 G C exonic PERM1 . nonsynonymous SNV PERM1:NM_001291366:exon2:c.C1558G:p.Q520E,PERM1:NM_001291367:exon3:c.C1276G:p.Q426E . . . . . . . rs13303368 0.71 0.595
chr1 914876 914876 T C exonic PERM1 . nonsynonymous SNV PERM1:NM_001291366:exon2:c.A1534G:p.S512G,PERM1:NM_001291367:exon3:c.A1252G:p.S418G . . . . . . . rs13302983 1 0.9664
chr1 914940 914940 T C exonic PERM1 . synonymous SNV PERM1:NM_001291366:exon2:c.A1470G:p.A490A,PERM1:NM_001291367:exon3:c.A1188G:p.A396A . . . . . . . rs13303033 0.71 0.5874
chr1 983473 983473 G T exonic AGRN . nonsynonymous SNV AGRN:NM_198576:exon23:c.G3833T:p.R1278L . . . . . . . rs542631667 0.0004 2.57E-05
chr1 984302 984302 T C exonic AGRN . synonymous SNV AGRN:NM_198576:exon24:c.T4161C:p.T1387T . Benign not_specified RCV000116269.2 MedGen CN169374 . rs9442391 0.84 0.6295
chr1 990280 990280 C T exonic AGRN . synonymous SNV AGRN:NM_198576:exon36:c.C6057T:p.D2019D . Benign not_specified RCV000116281.2 MedGen CN169374 . rs4275402 0.82 0.6376
chr1 1007203 1007203 A G exonic RNF223 . synonymous SNV RNF223:NM_001205252:exon2:c.T744C:p.D248D . . . . . . . rs4633229 0.92 0.8154
chr1 1007432 1007432 G A exonic RNF223 . nonsynonymous SNV RNF223:NM_001205252:exon2:c.C515T:p.A172V . . . . . . . rs4333796 0.8 0.5721
chr1 1147422 1147422 C T exonic TNFRSF4 . synonymous SNV TNFRSF4:NM_003327:exon5:c.G534A:p.E178E . . . . . . . rs17568 0.78 0.3751
chr1 1158631 1158631 A G exonic SDF4 . synonymous SNV SDF4:NM_016176:exon4:c.T570C:p.D190D,SDF4:NM_016547:exon4:c.T570C:p.D190D . . . . . . . rs6603781 1 0.9166
chr1 1220954 1220954 G A exonic SCNN1D . synonymous SNV SCNN1D:NM_001130413:exon6:c.G468A:p.S156S . . . . . . . rs12751100 . .
chr1 1222257 1222257 A C exonic SCNN1D . nonsynonymous SNV SCNN1D:NM_001130413:exon8:c.A1021C:p.T341P . . . . . . . . . .
Therefore, I would expect the filename of a data file to be found if I search like this:
SAMD11 < 0.01 (column 20 has a value < 0.01)
SCNN1D < 0.01 (column 20 is ".", which should count as 0)
and the filename not to be found if I search like this:
NOC2L < 0.01 (column 20 is > 0.01)
Please advise. Thanks!
Like this?:
First something to test with:
$ mkdir -p test/dir1 test/dir2
$ cat > test/dir1/good # pasted your sample file here
$ echo foo > test/dir2/bad # this won't match
then a solution:
$ awk '$7~/SCNN1D/ && $20<=0.01{print FILENAME;nextfile}' test/*/* 2>/dev/null
test/dir1/good
GNU awk is required, due to nextfile. Explained:
$ awk '
$7~/SCNN1D/ && $20<=0.01 {   # "mystring" is SCNN1D here
    print FILENAME           # on a match, output the filename
    nextfile                 # and skip straight to the next file
}' test/*/* 2>/dev/null      # errors from non-files in the glob go to stderr
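A note on why $20<=0.01 matches "." at all: a field that does not look like a number is compared as a string in awk, and "." happens to sort before "0.01", so it works, but somewhat by accident (an explicit ($20 == "." ? 0 : $20+0) would make the intent clear). If you would rather make the "." → 0 rule fully explicit, here is a sketch of the same filter in Python; the file pattern and column numbers follow the question:

```python
from pathlib import Path

def matching_files(root, keyword, cutoff=0.01):
    # Yield each file under root where some tab-separated line has
    # column 7 equal to keyword and column 20 (with "." read as 0)
    # less than or equal to cutoff
    for path in Path(root).rglob('*out.txt'):
        for line in path.read_text().splitlines():
            fields = line.split('\t')
            if len(fields) >= 20 and fields[6] == keyword:
                value = 0.0 if fields[19] == '.' else float(fields[19])
                if value <= cutoff:
                    yield path
                    break  # one hit per file is enough
```

This mirrors the find/awk pipeline (recurse, filter, print the filename once) while making the "." handling a plain conditional instead of a comparison-rule side effect.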

Access SQL aggregate function query

I would like to make an SQL query to extract aggregate statistics from the following table:
Company | ProdX1 | ProdX2 | ... | ProdX10 | ProdY1 | ProdY2 | ... | ProdY10
ABC     | 5      | 3      | ... | 6       | 5      | 8      | ... | 12
EDF     | 2      | NULL   | ... | 5       | NULL   | 1      | ... | 6
...
XYZ     | NULL   | 3      | ... | 14      | 7      | 2      | ... | 8
The result of the query should look something like this (other design suggestions appreciated):
Product | Average      | Min          | Covariance with corresponding X or Y product
ProdX1  | Avg(ProdX1)  | Min(ProdX1)  | Covar(ProdX1, ProdY1)
ProdX2  | Avg(ProdX2)  | Min(ProdX2)  | Covar(ProdX2, ProdY2)
...
ProdY10 | Avg(ProdY10) | Min(ProdY10) | Covar(ProdY10, ProdX10)
I am OK with the different aggregate functions; of course, Covar(X1, Y1) = Covar(Y1, X1).
However, I am not sure how to create a query that returns the desired result.
Any suggestions are much appreciated.
Thank you very much.
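Access SQL has Avg and Min but, as far as I know, no covariance aggregate, and unpivoting 20 columns by hand gets verbose; one workable route is to pull the table into pandas and build the summary there. A sketch with made-up numbers (column names follow the question; Covar here is the sample covariance over rows where both paired products are present):

```python
import pandas as pd

# stand-in for the Access table; None plays the role of NULL,
# and only two of the ten X/Y pairs are shown
df = pd.DataFrame({
    'Company': ['ABC', 'EDF', 'XYZ'],
    'ProdX1': [5.0, 2.0, None],
    'ProdY1': [5.0, 1.0, 7.0],
    'ProdX2': [3.0, None, 3.0],
    'ProdY2': [8.0, 1.0, 2.0],
})

rows = []
for i in (1, 2):                   # range(1, 11) with the full table
    x, y = f'ProdX{i}', f'ProdY{i}'
    pair = df[[x, y]].dropna()     # covariance needs both values present
    cov = pair[x].cov(pair[y])     # symmetric, so computed once per pair
    rows.append({'Product': x, 'Average': df[x].mean(),
                 'Min': df[x].min(), 'Covariance': cov})
    rows.append({'Product': y, 'Average': df[y].mean(),
                 'Min': df[y].min(), 'Covariance': cov})

result = pd.DataFrame(rows)
print(result)
```

Averages and minima ignore NULLs column by column (matching Avg/Min in SQL), while the covariance is restricted to rows where both members of the pair are non-NULL, which is the usual convention.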