Correspondence between iCE40 I/O blocks and package pins - yosys

Is the correspondence between the I/O blocks of an iCE40 FPGA and the pins of the package they drive documented somewhere?
The I/O tile documentation of Project IceStorm gives a list of I/O blocks and, for each block, where its IE and REN bits are located in the bitstream. A few blocks are missing from this list:
0 16 *
0 15 *
0 7 *
0 1 *
13 3 0
13 5 *
13 10 *
13 16 *
6 17 0
Does that mean that these blocks don't exist?
In the absence of documentation, I was able to deduce the following correspondence for the iCE40-HX1K-TQ144, for the pins used on the iCEstick evaluation board, by examining the bitstreams generated by the Yosys/arachne-pnr/IceStorm flow:
0 8 1 -> 21 (PIO3_00 / clock)
4 0 0 -> 44 (PIO2_10 / J3 10)
4 0 1 -> 45 (PIO2_11 / J3 9)
5 0 0 -> 47 (PIO2_12 / J3 8)
5 0 1 -> 48 (PIO2_13 / J3 7)
7 0 1 -> 56 (PIO2_14 / J3 6)
8 0 1 -> 60 (PIO2_15 / J3 5)
9 0 0 -> 61 (PIO2_16 / J3 4)
9 0 1 -> 62 (PIO2_17 / J3 3)
13 3 1 -> 78 (PIO1_02 / Pmod 1)
13 4 0 -> 79 (PIO1_03 / Pmod 2)
13 4 1 -> 80 (PIO1_04 / Pmod 3)
13 6 0 -> 81 (PIO1_05 / Pmod 4)
13 6 1 -> 87 (PIO1_06 / Pmod 7)
13 7 0 -> 88 (PIO1_07 / Pmod 8)
13 7 1 -> 90 (PIO1_08 / Pmod 9)
13 8 0 -> 91 (PIO1_09 / Pmod 10)
13 9 1 -> 95 (PIO1_10 / D5)
13 11 0 -> 96 (PIO1_11 / D4)
13 11 1 -> 97 (PIO1_12 / D3)
13 12 0 -> 98 (PIO1_13 / D2)
13 12 1 -> 99 (PIO1_14 / D1)
13 14 1 -> 105 (PIO1_18 / TXD)
13 15 0 -> 106 (PIO1_19 / RXD)
13 15 1 -> 107 (PIO1_20 / SD)
12 17 0 -> 113 (PIO0_03 / J1 4)
12 17 1 -> 112 (PIO0_02 / J1 3)
11 17 0 -> 115 (PIO0_05 / J1 6)
11 17 1 -> 114 (PIO0_04 / J1 5)
10 17 0 -> 117 (PIO0_07 / J1 8)
10 17 1 -> 116 (PIO0_06 / J1 7)
9 17 0 -> 119 (PIO0_09 / J1 10)
9 17 1 -> 118 (PIO0_08 / J1 9)
However, I'd like to cross-check this information if possible.

Does that mean that these blocks don't exist?
It means those blocks are not connected to actual I/O pins. I would assume they still exist on the silicon, but since I've never looked at the actual die, I have no way of knowing.
However, I'd like to cross-check this information if possible.
See the .pins tq144 section of chipdb-1k.txt. For example:
Your list: 0 8 1 -> 21
My list: 21 0 8 1
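For a mechanical cross-check, here is a minimal sketch (not part of the original answer) that inverts the chipdb table into the block -> pin direction used in the question's list. It assumes chipdb-1k.txt is available locally and that each entry line in the .pins tq144 section starts with the pin number followed by tile x, tile y and block index, as in the "21 0 8 1" example above; any extra columns are ignored.
# Parse the ".pins tq144" section of chipdb-1k.txt and print "x y block -> pin".
block_to_pin = {}
in_section = False
with open("chipdb-1k.txt") as f:
    for line in f:
        line = line.strip()
        if line.startswith(".pins tq144"):
            in_section = True
            continue
        if in_section:
            if not line:
                continue          # skip blank lines inside the section
            if line.startswith("."):
                break             # next section reached
            pin, x, y, block = line.split()[:4]
            block_to_pin[(int(x), int(y), int(block))] = pin

for (x, y, block), pin in sorted(block_to_pin.items()):
    print(x, y, block, "->", pin)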

Related

iteration calculation based on another dataframe

How do I do an iterative calculation to produce df2 as the desired output?
Any reference links for this? Many thanks for helping.
df1
a b c
0 1 0 5
1 9 9 2
2 2 2 8
3 6 3 0
4 6 1 7
df2:
a b c
0 1 0 5 >> values from df1.iloc[0]
1 19 18 9 >> values from (df1.iloc[1] * 2) + (df2.iloc[0] * 1)
2 23 22 25 >> values from (df1.iloc[2] * 2) + (df2.iloc[1] * 1)
3 35 28 25 >> values from (df1.iloc[3] * 2) + (df2.iloc[2] * 1)
4 47 30 39 >> values from (df1.iloc[4] * 2) + (df2.iloc[3] * 1)
IIUC, you can try:
df2 = df1.mul(2).cumsum().sub(df1.iloc[0])
Output:
a b c
0 1 0 5
1 19 18 9
2 23 22 25
3 35 28 25
4 47 30 39
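This works because the recurrence is df2[0] = df1[0] and df2[n] = df1[n]*2 + df2[n-1], which telescopes to df2[n] = 2*cumsum(df1)[n] - df1[0]. A minimal sketch (not from the original answer) checking the one-liner against an explicit loop, using the values from the question:
import pandas as pd

df1 = pd.DataFrame({'a': [1, 9, 2, 6, 6],
                    'b': [0, 9, 2, 3, 1],
                    'c': [5, 2, 8, 0, 7]})

# One-liner from the answer: 2*cumsum(df1) - df1.iloc[0]
closed = df1.mul(2).cumsum().sub(df1.iloc[0])

# Explicit recurrence for comparison: x[0] = df1[0], x[n] = df1[n]*2 + x[n-1]
rows = [df1.iloc[0]]
for i in range(1, len(df1)):
    rows.append(df1.iloc[i] * 2 + rows[-1])
looped = pd.DataFrame(rows).reset_index(drop=True)

print((closed == looped).all().all())  # True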
More complex operation
If you want a recurrence such as x[n] = df1[n]*2 + x[n-1]*3 (using the previous output value), you need to iterate:
def process(s):
    out = [s[0]]
    for x in s[1:]:
        out.append(x * 2 + out[-1] * 3)
    return out

df1.apply(process)
Output:
a b c
0 1 0 5
1 21 18 19
2 67 58 73
3 213 180 219
4 651 542 671
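The same recurrence can also be written with itertools.accumulate instead of a helper function; a sketch of an equivalent formulation (not from the original answer), using the question's data:
from itertools import accumulate

import pandas as pd

df1 = pd.DataFrame({'a': [1, 9, 2, 6, 6],
                    'b': [0, 9, 2, 3, 1],
                    'c': [5, 2, 8, 0, 7]})

# out[0] = s[0], out[n] = s[n]*2 + out[n-1]*3, accumulated column by column
df2 = df1.apply(lambda s: pd.Series(list(accumulate(s, lambda acc, x: x * 2 + acc * 3)),
                                    index=s.index))
print(df2)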

ValueError: Data must be 1-dimensional......verify_integrity

Hello,
I don't understand why this issue occurs.
print("p.shape= ", p.shape)
print("dfmj_dates['deces'].shape = ",dfmj_dates['deces'].shape)
cross_dfmj = pd.crosstab(p, dfmj_dates['deces'])
That produces:
p.shape= (683, 1)
dfmj_dates['deces'].shape = (683,)
----> 3 cross_dfmj = pd.crosstab(p, dfmj_dates['deces'])
--> 654 df = DataFrame(data, index=common_idx)
--> 614 mgr = dict_to_mgr(data, index, columns, dtype=dtype, copy=copy, typ=manager)
--> 589 val = sanitize_array(
--> 576 subarr = _sanitize_ndim(subarr, data, dtype, index, allow_2d=allow_2d)
--> 627 raise ValueError("Data must be 1-dimensional")
ValueError: Data must be 1-dimensional
For my part, I suspect the issue comes from the difference between (683, 1)
and (683,). I tried something like p.flatten(order='C') to get (683,),
and also pd.DataFrame(dfmj_dates['deces']), but that failed.
Do you have any idea? Regards, Atapalou
print(p.head(30))
print(df.head(30))
That produces:
week
0 8
1 8
2 8
3 9
4 9
5 9
6 9
7 9
8 9
9 9
10 10
11 10
12 10
13 10
14 10
15 10
16 10
17 11
18 11
19 11
20 11
21 11
22 11
23 11
24 12
25 12
26 12
27 12
28 12
29 12
deces
0 0
1 1
2 0
3 0
4 0
5 1
6 0
7 0
8 0
9 0
10 1
11 1
12 0
13 3
14 4
15 5
16 3
17 11
18 3
19 15
20 13
21 18
22 12
23 36
24 21
25 27
26 69
27 128
28 78
29 112
Try to squeeze p:
cross_dfmj = pd.crosstab(p.squeeze(), dfmj_dates['deces'])
Example:
import numpy as np

p = np.random.random((5, 1))
p.shape
# (5, 1)
p.squeeze().shape
# (5,)
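A self-contained sketch of the same fix applied to crosstab, assuming p is a one-column DataFrame (column 'week', as in the printout above) and deces is a Series of the same length; the values here are illustrative only:
import pandas as pd

p = pd.DataFrame({'week': [8, 8, 9, 9, 10, 10]})
deces = pd.Series([0, 1, 0, 3, 4, 4], name='deces')

# pd.crosstab(p, deces) would fail with "Data must be 1-dimensional"
# because p is 2-D; squeeze() turns the one-column frame into a 1-D Series.
cross = pd.crosstab(p.squeeze(), deces)
print(cross)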

R : which.max + condition does not return the expected value

I made a reproducible example of a data frame with 2 patient IDs (ID 1 and ID 2) and the value of a measurement (m_value) taken on different days (m_day).
df <- data.frame(ID = c(1, 1, 1, 1, 2, 2, 2),
                 m_value = c(10, 15, 12, 13, 18, 16, 19),
                 m_day = c(14, 143, 190, 402, 16, 55, 75))
ID m_value m_day
1 1 10 14
2 1 15 143
3 1 12 190
4 1 13 402
5 2 18 16
6 2 16 55
7 2 19 75
Now I want to obtain, for each patient, the best value of m before day 100 (period 1) and from day 100 onwards (period 2), together with the dates of these best values, so that I obtain the following table:
ID m_value m_day best_m_period1 best_m_period2 date_best_m_period1 date_best_m_period2
1 1 10 14 10 15 14 143
2 1 15 143 10 15 14 143
3 1 12 190 10 15 14 143
4 1 13 402 10 15 14 143
5 2 18 16 19 NA 75 NA
6 2 16 55 19 NA 75 NA
7 2 19 75 19 NA 75 NA
I tried the following code:
df2 <- df %>%
  group_by(ID) %>%
  mutate(best_m_period1 = max(m_value[m_day < 100])) %>%
  mutate(best_m_period2 = max(m_value[m_day >= 100])) %>%
  mutate(date_best_m_period1 =
           ifelse(is.null(which.max(m_value[m_day < 100])), NA,
                  m_day[which.max(m_value[m_day < 100])])) %>%
  mutate(date_best_m_period2 =
           ifelse(is.null(which.max(m_value[m_day >= 100])), NA,
                  m_day[which.max(m_value[m_day >= 100])]))
But I obtain the following table:
ID m_value m_day best_m_period1 best_m_period2 date_best_m_period1 date_best_m_period2
<dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 1 10 14 10 15 14 14
2 1 15 143 10 15 14 14
3 1 12 190 10 15 14 14
4 1 13 402 10 15 14 14
5 2 18 16 19 -Inf 75 NA
6 2 16 55 19 -Inf 75 NA
7 2 19 75 19 -Inf 75 NA
The date_best_m_period2 for ID 1 is not 143 as expected (corresponding to the max value of 15 for ID 1 in period 2, i.e. m_day >= 100), but 14, the date of the best value in period 1.
How can I resolve this problem? Thank you very much for your help.

Required data frame after explode or other option to fill a running difference between two columns of a pandas dataframe

The input data frame is given below:
import pandas as pd

data = {
    'labels': ["A","B","A","B","A","B","M","B","M","B","M"],
    'start': [0,9,13,23,47,77,81,92,100,104,118],
    'stop': [9,13,23,47,77,81,92,100,104,118,145],
}
df = pd.DataFrame.from_dict(data)
labels start stop
0 A 0 9
1 B 9 13
2 A 13 23
3 B 23 47
4 A 47 77
5 B 77 81
6 M 81 92
7 B 92 100
8 M 100 104
9 B 104 118
10 M 118 145
The required output data frame is as below.
Try this:
df['start'] = df.apply(lambda x: range(x['start'] + 1, x['stop'] + 1), axis=1)
df = df.explode('start')
Output:
>>> df
labels start stop
0 A 1 9
0 A 2 9
0 A 3 9
0 A 4 9
0 A 5 9
0 A 6 9
0 A 7 9
0 A 8 9
0 A 9 9
1 B 10 13
1 B 11 13
1 B 12 13
1 B 13 13
2 A 14 23
2 A 15 23
2 A 16 23
2 A 17 23
2 A 18 23
2 A 19 23
2 A 20 23
2 A 21 23
2 A 22 23
2 A 23 23
...
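As a small variation (not part of the original answer): if the repeated original index labels in the exploded result are unwanted, DataFrame.explode accepts ignore_index=True (pandas >= 1.1) to renumber the rows; a sketch on a shortened copy of the data:
import pandas as pd

data = {
    'labels': ["A", "B", "A"],
    'start': [0, 9, 13],
    'stop': [9, 13, 23],
}
df = pd.DataFrame.from_dict(data)

df['start'] = df.apply(lambda x: range(x['start'] + 1, x['stop'] + 1), axis=1)
df = df.explode('start', ignore_index=True)  # fresh 0..N-1 index instead of 0,0,...,1,1,...
print(df)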

Removing rows and keeping consecutive rows pandas

I would like to omit the first row and keep x consecutive rows.
In the example below I would like to keep 7. How do I achieve this?
import pandas as pd

df = pd.Series(range(1, 101)).to_frame()
df.columns = ['numbers']
df['numbers'][1::7]
1 2
8 9
15 16
22 23
29 30
36 37
43 44
50 51
57 58
64 65
71 72
78 79
85 86
92 93
99 100
I would like to keep the values below but continue to the next row sequence:
remove 1 and keep 2 to 7, then remove 8 and keep 9 to 14, and so on.
df = pd.Series(range(1,101)).to_frame()
df.columns = ['numbers']
df['numbers'][1:7]
1 2
2 3
3 4
4 5
5 6
6 7
You can use loc:
df.loc[df.index % 7 != 0]
giving
numbers
1 2
2 3
3 4
4 5
5 6
6 7
8 9
9 10
10 11
11 12
12 13
13 14
15 16
16 17
... ...
Or drop:
df.drop(df.index[::7])
numbers
1 2
2 3
3 4
4 5
5 6
6 7
8 9
9 10
10 11
11 12
12 13
13 14
15 16
16 17
17 18
18 19
.. ...
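To generalize the modulo trick to the "keep x consecutive rows" wording of the question, the 7 can be parameterized; a sketch (not from the original answer), assuming x counts the rows kept between consecutive dropped rows (6 in the worked example: rows 2 to 7):
import pandas as pd

df = pd.Series(range(1, 101)).to_frame()
df.columns = ['numbers']

x = 6  # keep 6 rows after each dropped row: 2..7, 9..14, ...
kept = df.loc[df.index % (x + 1) != 0]
print(kept.head(12))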