Add rows as columns in pandas

I'm trying to change my dataset by making all the rows into columns in pandas.
5 6 7
8 9 10
needs to be changed to
5 6 7 8 9 10
with different headers, of course. Any suggestions?

Use pd.DataFrame([df.values.flatten()]) as follows:
In [18]: df
Out[18]:
0 1 2
0 5 6 7
1 8 9 10
In [19]: pd.DataFrame([df.values.flatten()])
Out[19]:
0 1 2 3 4 5
0 5 6 7 8 9 10
Explanation:
df.values returns numpy.ndarray:
In [18]: df.values
Out[18]:
array([[ 5, 6, 7],
[ 8, 9, 10]], dtype=int64)
In [19]: type(df.values)
Out[19]: numpy.ndarray
and numpy arrays have .flatten() method:
In [20]: df.values.flatten?
Docstring:
a.flatten(order='C')
Return a copy of the array collapsed into one dimension.
In [21]: df.values.flatten()
Out[21]: array([ 5, 6, 7, 8, 9, 10], dtype=int64)
The pd.DataFrame constructor expects a list/array of rows:
If we try this:
In [22]: pd.DataFrame([ 5, 6, 7, 8, 9, 10])
Out[22]:
0
0 5
1 6
2 7
3 8
4 9
5 10
Pandas thinks that it's a list of rows, where each row has one element.
So I've enclosed that array in square brackets:
In [23]: pd.DataFrame([[ 5, 6, 7, 8, 9, 10]])
Out[23]:
0 1 2 3 4 5
0 5 6 7 8 9 10
which will be understood as one row with 6 columns.
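Since the question asks for different headers, you can name the columns when building the one-row frame. A minimal sketch, where the col_0 ... col_5 names are placeholders to substitute with your own:
import pandas as pd

df = pd.DataFrame([[5, 6, 7], [8, 9, 10]])
flat = pd.DataFrame([df.values.flatten()],
                    columns=[f'col_{i}' for i in range(df.size)])  # df.size == 6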

Or just in one line:
df = pd.DataFrame([[1,2,3],[4,5,6]])
df.values.flatten()
#out: array([1, 2, 3, 4, 5, 6])

You can also use reduce():
import pandas as pd
from functools import reduce
df = pd.DataFrame([[5, 6, 7], [8, 9, 10]])
# Accumulate each row's values into one flat list; the [] initializer
# makes this work for any number of rows, not just two
df = pd.DataFrame([reduce(lambda acc, row: acc + list(row[1]), df.iterrows(), [])])
df
0 1 2 3 4 5
0 5 6 7 8 9 10
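For comparison, itertools.chain.from_iterable achieves the same flattening without the accumulator bookkeeping; this alternative is a sketch of mine, not part of the original answer:
import pandas as pd
from itertools import chain

df = pd.DataFrame([[5, 6, 7], [8, 9, 10]])
# chain.from_iterable iterates the rows and yields their elements in order
flat = pd.DataFrame([list(chain.from_iterable(df.to_numpy()))])
#    0  1  2  3  4   5
# 0  5  6  7  8  9  10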

Use the reshape function from numpy:
import pandas as pd
import numpy as np
df = pd.DataFrame([[5, 6, 7],[8, 9, 10]])
nparray = df.to_numpy()      # equivalent to np.array(df.iloc[:, :])
x = np.reshape(nparray, -1)  # collapse to one dimension
df = pd.DataFrame([x])       # wrap in a list to get back one row, not one column
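Since np.reshape accepts a target shape, you can also reshape to a single row directly and skip the list-wrapping; a small sketch reusing nparray from above:
df_flat = pd.DataFrame(nparray.reshape(1, -1))  # shape (1, 6): one row, six columns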


Retrieving values from different columns in Pandas based on a column condition [duplicate]

The operation pandas.DataFrame.lookup is "Deprecated since version 1.2.0", and has since invalidated a lot of previous answers.
This post attempts to function as a canonical resource for looking up corresponding row col pairs in pandas versions 1.2.0 and newer.
Standard LookUp Values With Default Range Index
Given the following DataFrame:
df = pd.DataFrame({'Col': ['B', 'A', 'A', 'B'],
'A': [1, 2, 3, 4],
'B': [5, 6, 7, 8]})
Col A B
0 B 1 5
1 A 2 6
2 A 3 7
3 B 4 8
I would like to be able to lookup the corresponding value in the column specified in Col:
I would like my result to look like:
Col A B Val
0 B 1 5 5
1 A 2 6 2
2 A 3 7 3
3 B 4 8 8
Standard LookUp Values With a Non-Default Index
Non-Contiguous Range Index
Given the following DataFrame:
df = pd.DataFrame({'Col': ['B', 'A', 'A', 'B'],
'A': [1, 2, 3, 4],
'B': [5, 6, 7, 8]},
index=[0, 2, 8, 9])
Col A B
0 B 1 5
2 A 2 6
8 A 3 7
9 B 4 8
I would like to preserve the index but still find the correct corresponding Value:
Col A B Val
0 B 1 5 5
2 A 2 6 2
8 A 3 7 3
9 B 4 8 8
MultiIndex
df = pd.DataFrame({'Col': ['B', 'A', 'A', 'B'],
'A': [1, 2, 3, 4],
'B': [5, 6, 7, 8]},
index=pd.MultiIndex.from_product([['C', 'D'], ['E', 'F']]))
Col A B
C E B 1 5
F A 2 6
D E A 3 7
F B 4 8
I would like to preserve the index but still find the correct corresponding Value:
Col A B Val
C E B 1 5 5
F A 2 6 2
D E A 3 7 3
F B 4 8 8
LookUp with Default For Unmatched/Not-Found Values
Given the following DataFrame
df = pd.DataFrame({'Col': ['B', 'A', 'A', 'C'],
'A': [1, 2, 3, 4],
'B': [5, 6, 7, 8]})
Col A B
0 B 1 5
1 A 2 6
2 A 3 7
3 C 4 8 # Column C does not correspond with any column
I would like to look up the corresponding values if one exists otherwise I'd like to have it default to 0
Col A B Val
0 B 1 5 5
1 A 2 6 2
2 A 3 7 3
3 C 4 8 0 # Default value 0 since C does not correspond
LookUp with Missing Values in the lookup Col
Given the following DataFrame:
Col A B
0 B 1 5
1 A 2 6
2 A 3 7
3 NaN 4 8 # <- Missing Lookup Key
I would like any NaN values in Col to result in a NaN value in Val
Col A B Val
0 B 1 5 5.0
1 A 2 6 2.0
2 A 3 7 3.0
3 NaN 4 8 NaN # NaN to indicate missing
Standard LookUp Values With Any Index
The documentation on Looking up values by index/column labels recommends using NumPy indexing via factorize and reindex as the replacement for the deprecated DataFrame.lookup.
import numpy as np
import pandas as pd
df = pd.DataFrame({'Col': ['B', 'A', 'A', 'B'],
'A': [1, 2, 3, 4],
'B': [5, 6, 7, 8]},
index=[0, 2, 8, 9])
idx, col = pd.factorize(df['Col'])
df['Val'] = df.reindex(columns=col).to_numpy()[np.arange(len(df)), idx]
df
Col A B Val
0 B 1 5 5
2 A 2 6 2
8 A 3 7 3
9 B 4 8 8
factorize is used to encode the values in the column as an "enumerated type".
idx, col = pd.factorize(df['Col'])
# idx = array([0, 1, 1, 0], dtype=int64)
# col = Index(['B', 'A'], dtype='object')
Notice that B corresponds to 0 and A corresponds to 1. reindex is used to ensure that columns appear in the same order as the enumeration:
df.reindex(columns=col)
B A # B appears first (location 0), A appears second (location 1)
0 5 1
1 6 2
2 7 3
3 8 4
We need to create an appropriate range indexer compatible with NumPy indexing.
The standard approach is to use np.arange based on the length of the DataFrame:
np.arange(len(df))
[0 1 2 3]
Now NumPy indexing will work to select values from the DataFrame:
df['Val'] = df.reindex(columns=col).to_numpy()[np.arange(len(df)), idx]
[5 2 3 8]
*Note: this approach will always work regardless of the type of index.
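If you perform this lookup in several places, the idiom packs naturally into a small helper. A sketch (the lookup name and signature are mine, not from the pandas docs):
import numpy as np
import pandas as pd

def lookup(df, key_col='Col'):
    # Encode the lookup keys, align columns to that encoding,
    # then pick one value per row with NumPy indexing
    idx, cols = pd.factorize(df[key_col])
    return df.reindex(columns=cols).to_numpy()[np.arange(len(df)), idx]

# usage: df['Val'] = lookup(df)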
MultiIndex
import numpy as np
import pandas as pd
df = pd.DataFrame({'Col': ['B', 'A', 'A', 'B'],
'A': [1, 2, 3, 4],
'B': [5, 6, 7, 8]},
index=pd.MultiIndex.from_product([['C', 'D'], ['E', 'F']]))
idx, col = pd.factorize(df['Col'])
df['Val'] = df.reindex(columns=col).to_numpy()[np.arange(len(df)), idx]
Col A B Val
C E B 1 5 5
F A 2 6 2
D E A 3 7 3
F B 4 8 8
Why use np.arange and not df.index directly?
Standard Contiguous Range Index
import pandas as pd
df = pd.DataFrame({'Col': ['B', 'A', 'A', 'B'],
'A': [1, 2, 3, 4],
'B': [5, 6, 7, 8]})
idx, col = pd.factorize(df['Col'])
df['Val'] = df.reindex(columns=col).to_numpy()[df.index, idx]
In this case only there is no error, because the result of np.arange happens to coincide with df.index.
df
Col A B Val
0 B 1 5 5
1 A 2 6 2
2 A 3 7 3
3 B 4 8 8
Non-Contiguous Range Index Error
df = pd.DataFrame({'Col': ['B', 'A', 'A', 'B'],
'A': [1, 2, 3, 4],
'B': [5, 6, 7, 8]},
index=[0, 2, 8, 9])
idx, col = pd.factorize(df['Col'])
df['Val'] = df.reindex(columns=col).to_numpy()[df.index, idx]
Raises IndexError:
IndexError: index 8 is out of bounds for axis 0 with size 4
MultiIndex Error
df = pd.DataFrame({'Col': ['B', 'A', 'A', 'B'],
'A': [1, 2, 3, 4],
'B': [5, 6, 7, 8]},
index=pd.MultiIndex.from_product([['C', 'D'], ['E', 'F']]))
idx, col = pd.factorize(df['Col'])
df['Val'] = df.reindex(columns=col).to_numpy()[df.index, idx]
Raises IndexError:
IndexError: only integers, slices (`:`), ellipsis (`...`), numpy.newaxis (`None`) and integer or boolean arrays are valid indices
LookUp with Default For Unmatched/Not-Found Values
There are a few approaches.
First let's look at what happens by default if there is a non-corresponding value:
import numpy as np
import pandas as pd
df = pd.DataFrame({'Col': ['B', 'A', 'A', 'C'],
'A': [1, 2, 3, 4],
'B': [5, 6, 7, 8]})
# Col A B
# 0 B 1 5
# 1 A 2 6
# 2 A 3 7
# 3 C 4 8
idx, col = pd.factorize(df['Col'])
df['Val'] = df.reindex(columns=col).to_numpy()[np.arange(len(df)), idx]
Col A B Val
0 B 1 5 5.0
1 A 2 6 2.0
2 A 3 7 3.0
3 C 4 8 NaN # NaN Represents the Missing Value in C
If we look at why the NaN values are introduced, we will find that when factorize goes through the column it will enumerate all groups present regardless of whether they correspond to a column or not.
For this reason, when we reindex the DataFrame we will end up with the following result:
idx, col = pd.factorize(df['Col'])
# idx = array([0, 1, 1, 2], dtype=int64)
# col = Index(['B', 'A', 'C'], dtype='object')
df.reindex(columns=col)
B A C
0 5 1 NaN
1 6 2 NaN
2 7 3 NaN
3 8 4 NaN # reindex adds the missing column, filled with the default NaN
If we want to specify a default value, we can specify the fill_value argument of reindex which allows us to modify the behaviour as it relates to missing column values:
idx, col = pd.factorize(df['Col'])
# idx = array([0, 1, 1, 2], dtype=int64)
# col = Index(['B', 'A', 'C'], dtype='object')
df.reindex(columns=col, fill_value=0)
B A C
0 5 1 0
1 6 2 0
2 7 3 0
3 8 4 0 # Notice reindex adds the missing column with the specified value `0`
This means that we can do:
idx, col = pd.factorize(df['Col'])
df['Val'] = df.reindex(
columns=col,
fill_value=0 # Default value for Missing column values
).to_numpy()[np.arange(len(df)), idx]
df:
Col A B Val
0 B 1 5 5
1 A 2 6 2
2 A 3 7 3
3 C 4 8 0
*Notice the dtype of the column is int: since NaN was never introduced, the column type was not changed.
LookUp with Missing Values in the lookup Col
factorize has a default na_sentinel=-1, meaning that when NaN values appear in the column being factorized, the resulting idx value is -1:
import numpy as np
import pandas as pd
df = pd.DataFrame({'Col': ['B', 'A', 'A', np.nan],
'A': [1, 2, 3, 4],
'B': [5, 6, 7, 8]})
# Col A B
# 0 B 1 5
# 1 A 2 6
# 2 A 3 7
# 3 NaN 4 8 # <- Missing Lookup Key
idx, col = pd.factorize(df['Col'])
# idx = array([ 0, 1, 1, -1], dtype=int64)
# col = Index(['B', 'A'], dtype='object')
df['Val'] = df.reindex(columns=col).to_numpy()[np.arange(len(df)), idx]
# Col A B Val
# 0 B 1 5 5
# 1 A 2 6 2
# 2 A 3 7 3
# 3 NaN 4 8 4 <- Value From A
This -1 means that, by default, we'll be pulling from the last column when we reindex. Notice that col still only contains the values B and A, meaning that we will end up with the value from A in Val for the last row.
The easiest way to handle this is to fillna Col with some value that cannot be found in the column headers.
Here I use the empty string '':
idx, col = pd.factorize(df['Col'].fillna(''))
# idx = array([0, 1, 1, 2], dtype=int64)
# col = Index(['B', 'A', ''], dtype='object')
Now when I reindex, the '' column will contain NaN values meaning that the lookup produces the desired result:
import numpy as np
import pandas as pd
df = pd.DataFrame({'Col': ['B', 'A', 'A', np.nan],
'A': [1, 2, 3, 4],
'B': [5, 6, 7, 8]})
idx, col = pd.factorize(df['Col'].fillna(''))
df['Val'] = df.reindex(columns=col).to_numpy()[np.arange(len(df)), idx]
df:
Col A B Val
0 B 1 5 5.0
1 A 2 6 2.0
2 A 3 7 3.0
3 NaN 4 8 NaN # Missing as expected
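The two fixes combine naturally if you are happy to map both non-corresponding and NaN keys to the same default. A sketch (the helper name and default parameter are my own additions):
import numpy as np
import pandas as pd

def lookup_or_default(df, key_col='Col', default=np.nan):
    # '' is a sentinel key that cannot clash with real column headers;
    # fill_value populates every added column (including '') with the default
    idx, cols = pd.factorize(df[key_col].fillna(''))
    return df.reindex(columns=cols, fill_value=default).to_numpy()[np.arange(len(df)), idx]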
Other Approaches to LookUp
There are 2 other approaches to performing this operation:
apply (Intuitive, but quite slow)
apply can be used on axis=1 in order to use the Column values as the key:
import pandas as pd
df = pd.DataFrame({'Col': ['B', 'A', 'A', 'B'],
'A': [1, 2, 3, 4],
'B': [5, 6, 7, 8]})
df['Val'] = df.apply(lambda row: row[row['Col']], axis=1)
df
Col A B Val
0 B 1 5 5
1 A 2 6 2
2 A 3 7 3
3 B 4 8 8
This operation will work regardless of index type:
import pandas as pd
df = pd.DataFrame({'Col': ['B', 'A', 'A', 'B'],
'A': [1, 2, 3, 4],
'B': [5, 6, 7, 8]},
index=[0, 2, 8, 9])
# Col A B
# 0 B 1 5
# 2 A 2 6
# 8 A 3 7
# 9 B 4 8
df['Val'] = df.apply(lambda row: row[row['Col']], axis=1)
df:
Col A B Val
0 B 1 5 5
2 A 2 6 2
8 A 3 7 3
9 B 4 8 8
When dealing with missing/non-corresponding values, Series.get can be used to remedy this issue:
import numpy as np
import pandas as pd
df = pd.DataFrame({'Col': ['B', 'A', 'C', np.nan],
'A': [1, 2, 3, 4],
'B': [5, 6, 7, 8]})
# Col A B
# 0 B 1 5
# 1 A 2 6
# 2 C 3 7 <- Non Corresponding
# 3 NaN 4 8 <- Missing
df['Val'] = df.apply(lambda row: row.get(row['Col']), axis=1)
Col A B Val
0 B 1 5 5.0
1 A 2 6 2.0
2 C 3 7 NaN # Missing value
3 NaN 4 8 NaN # Missing value
With Default Value
df['Val'] = df.apply(lambda row: row.get(row['Col'], default=-1), axis=1)
Col A B Val
0 B 1 5 5
1 A 2 6 2
2 C 3 7 -1 # Default -1
3 NaN 4 8 -1 # Default -1
apply is extremely flexible and modifications are straightforward; however, the iterative approach, together with all the individual Series lookups, can become extremely costly on large DataFrames.
get_indexer (limited)
Index.get_indexer can be used to convert the column values into an indexer for the DataFrame. This means there is no need to reindex the DataFrame, as the indexer corresponds to the DataFrame as a whole.
import pandas as pd
df = pd.DataFrame({'Col': ['B', 'A', 'A', 'B'],
'A': [1, 2, 3, 4],
'B': [5, 6, 7, 8]})
df['Val'] = df.to_numpy()[df.index, df.columns.get_indexer(df['Col'])]
df
Col A B Val
0 B 1 5 5
1 A 2 6 2
2 A 3 7 3
3 B 4 8 8
This approach is reasonably fast; however, missing values are represented by -1, meaning that if a value is missing it will grab the value from the -1 column (the last column in the DataFrame).
import pandas as pd
df = pd.DataFrame({'A': [1, 2, 3, 4],
'B': [5, 6, 7, 8],
'Col': ['B', 'A', 'A', 'C']})
# A B Col <- Col is now the Last Col
# 0 1 5 B
# 1 2 6 A
# 2 3 7 A
# 3 4 8 C <- Notice Col `C` does not correspond to a Valid Column Header
df['Val'] = df.to_numpy()[df.index, df.columns.get_indexer(df['Col'])]
df:
A B Col Val
0 1 5 B 5
1 2 6 A 2
2 3 7 A 3
3 4 8 C C # <- Value from the last column in the DataFrame (index -1)
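A hedged workaround is to mask the -1 positions after the lookup so that missing keys become NaN instead of silently pulling from the last column; this sketch is mine, not part of the original approach:
import numpy as np
import pandas as pd

df = pd.DataFrame({'A': [1, 2, 3, 4],
                   'B': [5, 6, 7, 8],
                   'Col': ['B', 'A', 'A', 'C']})
indexer = df.columns.get_indexer(df['Col'])        # -1 marks missing keys
vals = df.to_numpy()[np.arange(len(df)), indexer]
df['Val'] = np.where(indexer == -1, np.nan, vals)  # undo the wrap-around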
It is also notable that not reindexing the DataFrame means converting the entire DataFrame to numpy. This can be very costly if there are many unrelated columns that all need to be converted:
import numpy as np
import pandas as pd
df = pd.DataFrame({1: 10,
2: 20,
3: 't',
4: 40,
5: np.nan,
'Col': ['B', 'A', 'A', 'B'],
'A': [1, 2, 3, 4],
'B': [5, 6, 7, 8]})
df['Val'] = df.to_numpy()[df.index, df.columns.get_indexer(df['Col'])]
df.to_numpy()
[[10 20 't' 40 nan 'B' 1 5 5]
[10 20 't' 40 nan 'A' 2 6 2]
[10 20 't' 40 nan 'A' 3 7 3]
[10 20 't' 40 nan 'B' 4 8 8]]
Compared to the reindexing approach which only contains columns relevant to the column values:
df.reindex(columns=['B', 'A']).to_numpy()
[[5 1]
[6 2]
[7 3]
[8 4]]
Another option is to build a tuple of the lookup columns, pivot the dataframe, and select the relevant columns with the tuples:
cols = [(ent, ent) for ent in df.Col.unique()]
df.assign(Val=df.pivot(index=None, columns='Col')
                .reindex(columns=cols)
                .ffill(axis=1)
                .iloc[:, -1])
Col A B Val
0 B 1 5 5.0
2 A 2 6 2.0
8 A 3 7 3.0
9 B 4 8 8.0
Another possible method is to use melt:
df['value'] = (df.melt('Col', ignore_index=False)
.loc[lambda x: x['Col'] == x['variable'], 'value'])
print(df)
# Output:
Col A B value
0 B 1 5 5
1 A 2 6 2
2 A 3 7 3
3 B 4 8 8
This method also works with Missing/Non-Corresponding Values:
df['value'] = (df.melt('Col', ignore_index=False)
.loc[lambda x: x['Col'] == x['variable'], 'value'])
print(df)
# Output
Col A B value
0 B 1 5 5.0
1 A 2 6 2.0
2 C 3 7 NaN
3 NaN 4 8 NaN
You can replace .loc[...] with query(...); it's a little slower, although more expressive:
df['value'] = df.melt('Col', ignore_index=False).query('Col == variable')['value']

Concatenate tensors of different rank into a single tensor

I'm looking to feed an autoencoder my features, as both training and target data. The majority of the features have rank 1; they are a single column of values like [1,2,3,4]. Some have been put through one-hot encoding, so those tensors are of rank 2 and have X columns, with X being the number of categorical values in the one-hot encoder, so something like:
['a', 'a', 'b', 'c'] -> [[1,0,0], [1,0,0], [0,1,0], [0,0,1]]
For some reason Keras's Model.fit doesn't accept y values if the training data are generators or datasets. So I have to provide my training data as a tuple of (features, targets), and in this case targets=features, but at the same time features is a dictionary of tensors, so I must concatenate all of the tensors in features into a single tensor.
I can do tf.concat across all of my feature columns except the one-hot encoded columns, which have rank 2 (instead of 1). How can I somehow turn the one-hot encoded features into X individual tensors and then concat them together?
Same issue here where the OP solved his issue using tf.concat, but I can't do that here.
I was trying to comment on your question to ask for a couple of details, but I can't do that because I don't have enough reputation (I am new to Stack Overflow and this is my first post ever). So I will try to explain based on what I understood of your problem - hope it helps.
You can use pandas to one-hot encode your categorical features.
Example Dataset
import pandas as pd
import random
import numpy as np
dataset = {'feature_A': np.random.randint(10, size=10),
'feature_B': np.random.randint(10, size=10),
'feature_C': np.random.randint(10, size=10),
'categorical_feature': np.array([chr(97 + random.randint(0, 2)) for i in range(10)])}
print(dataset)
{'feature_A': array([6, 8, 2, 0, 4, 8, 6, 4, 3, 8]),
'feature_B': array([0, 6, 8, 6, 7, 3, 4, 6, 1, 6]),
'feature_C': array([0, 7, 7, 3, 7, 4, 0, 3, 2, 7]),
'categorical_feature': array(['a', 'a', 'c', 'b', 'c', 'c', 'a', 'a', 'c', 'c'], dtype='<U1')}
Pandas DataFrame
Transform the dataset into a Pandas DataFrame
df = pd.DataFrame(dataset)
print(df)
feature_A feature_B feature_C categorical_feature
0 6 0 0 a
1 8 6 7 a
2 2 8 7 c
3 0 6 3 b
4 4 7 7 c
5 8 3 4 c
6 6 4 0 a
7 4 6 3 a
8 3 1 2 c
9 8 6 7 c
One-hot & concatenate
One-hot encode the categorical features and concatenate them to the main DataFrame (and drop the original categorical feature column):
df = pd.concat((df, pd.get_dummies(df['categorical_feature'])), axis=1).drop('categorical_feature', axis=1)
print(df)
feature_A feature_B feature_C a b c
0 6 0 0 1 0 0
1 8 6 7 1 0 0
2 2 8 7 0 0 1
3 0 6 3 0 1 0
4 4 7 7 0 0 1
5 8 3 4 0 0 1
6 6 4 0 1 0 0
7 4 6 3 1 0 0
8 3 1 2 0 0 1
9 8 6 7 0 0 1
NumPy array
Then you can simply get the values of the DataFrame as a numpy array by using the attribute .values. Each row is now one training example that comprises all the numeric features plus the categorical features as a one-hot encoded vector.
You can use the numpy array directly in your model or, if you wish, you can also transform it into a TensorFlow dataset by using tf.data.Dataset.from_tensor_slices().
dataset = df.values
print(dataset)
array([[6, 0, 0, 1, 0, 0],
[8, 6, 7, 1, 0, 0],
[2, 8, 7, 0, 0, 1],
[0, 6, 3, 0, 1, 0],
[4, 7, 7, 0, 0, 1],
[8, 3, 4, 0, 0, 1],
[6, 4, 0, 1, 0, 0],
[4, 6, 3, 1, 0, 0],
[3, 1, 2, 0, 0, 1],
[8, 6, 7, 0, 0, 1]], dtype=int32)
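As a usage sketch tying this back to the autoencoder setup (the variable names here are assumptions), the same array can then serve as both features and targets:
import tensorflow as tf

data = df.values.astype('float32')
# for an autoencoder, targets == features
train_ds = tf.data.Dataset.from_tensor_slices((data, data)).batch(2)
# model.fit(train_ds, ...)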

Pandas append row without specifying columns

I wanted to add or append a row (in the form of a list) to a dataframe. All the methods require that I turn the list into another dataframe first, e.g.
df = df.append(another_dataframe)
df = df.merge(another_dataframe)
df = pd.concat([df, another_dataframe])
I've found a trick if the index is in running number at https://www.statology.org/pandas-add-row-to-dataframe/
import pandas as pd
#create DataFrame
df = pd.DataFrame({'points': [10, 12, 12, 14, 13, 18],
'rebounds': [7, 7, 8, 13, 7, 4],
'assists': [11, 8, 10, 6, 6, 5]})
#view DataFrame
df
points rebounds assists
0 10 7 11
1 12 7 8
2 12 8 10
3 14 13 6
4 13 7 6
5 18 4 5
#add new row to end of DataFrame
df.loc[len(df.index)] = [20, 7, 5]
#view updated DataFrame
df
points rebounds assists
0 10 7 11
1 12 7 8
2 12 8 10
3 14 13 6
4 13 7 6
5 18 4 5
6 20 7 5
However, the dataframe must have a running-number index, or else the add/append will overwrite existing data.
So my question is: is there a simple, foolproof way to just append/add a list to a dataframe?
Thanks very much!
>>> df
points rebounds assists
3 10 7 11
1 12 7 8
2 12 8 10
If the indexes are "numbers" - you could add 1 to the max index.
>>> df.loc[max(df.index) + 1] = 'my', 'new', 'row'
>>> df
points rebounds assists
3 10 7 11
1 12 7 8
2 12 8 10
4 my new row
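One caveat: max(df.index) raises a ValueError on an empty DataFrame, so a slightly more defensive sketch of the same idea is:
new_row = ['my', 'new', 'row']
next_idx = max(df.index) + 1 if len(df.index) else 0  # fall back to 0 when empty
df.loc[next_idx] = new_row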

How to assign a dataframe mean to specific rows of a dataframe?

I have a data frame like this
df_a = pd.DataFrame({'a': [2, 4, 5, 6, 12],
'b': [3, 5, 7, 9, 15]})
Out[112]:
a b
0 2 3
1 4 5
2 5 7
3 6 9
4 12 15
and mean out
df_a.mean()
Out[118]:
a 5.800
b 7.800
dtype: float64
I want this;
df_a[df_a.index.isin([3, 4])] = df.mean()
But I'm getting an error. How do I achieve this?
I gave an example here. In the data I am working with, there are many observations that I need to change, and I keep their index values in a list.
If you want to overwrite the values of rows in a list, you can do it with iloc
df_a = pd.DataFrame({'a': [2, 4, 5, 6, 12], 'b': [3, 5, 7, 9, 15]})
idx_list = [3, 4]
df_a.iloc[idx_list,:] = df_a.mean()
Output
a b
0 2.0 3.0
1 4.0 5.0
2 5.0 7.0
3 5.8 7.8
4 5.8 7.8
edit
If you're using an older version of pandas and see NaNs instead of the wanted values, you can use a for loop:
df_a_mean = df_a.mean()
for i in idx_list:
df_a.iloc[i,:] = df_a_mean
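Note that iloc is positional; if the list you keep holds index labels rather than positions (for example with a non-default index), the loc-based equivalent would be a sketch like:
df_a.loc[idx_list, :] = df_a.mean()  # idx_list interpreted as index labels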

Multiplying Dataframe by Column Value

I'm currently trying to take a dataframe of local currency values and convert it to the corresponding Canadian values by multiplying by the relevant FX rate.
However, I keep getting this error:
ValueError: operands could not be broadcast together with shapes (12252,) (1021,)
This is the code I'm working with right now. It works when I have a handful of rows of data, but I keep getting the ValueError once I use it on the full file (1022 rows of data incl. headers).
import pandas as pd
Local_File = ('RawData.xlsx')
df = pd.read_excel(Local_File, sheet_name = 'Local')
df2 = df.iloc[:,[2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13]].multiply(df['FX Spot Rate'],axis='index')
print (df2)
My dataframe looks something like this with 1022 rows of data (incl. header)
Appreciate any help! Thank you!
df = pd.DataFrame({'A': [1, 2, 3, 3, 1],
'B': [1, 2, 3, 3, 1],
'C': [9, 7, 4, 3, 9]})
A B C
0 1 1 9
1 2 2 7
2 3 3 4
3 3 3 3
4 1 1 9
df.iloc[:, 1:] = df.iloc[:, 1:].multiply(df['A'], axis='index')
df
A B C
0 1 1 9
1 2 4 14
2 3 9 12
3 3 9 9
4 1 1 9
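Applied back to the original file (the column positions and the 'FX Spot Rate' name are taken from the question), the same pattern would look like:
df.iloc[:, 2:14] = df.iloc[:, 2:14].multiply(df['FX Spot Rate'], axis='index')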