Python 3.x: In a for loop, how do I access the element at the index before the current element's index? - indexing

a = [1, 2, 3, 4, 5]
for i in a:
    list1.append(i)
    list1.append(i - 2)  # why is `i - 2` not doing what I expect?
For example, say I am now at the index of element 4.

i is not an index; it is an element of the list itself. When you say you are "at the index of element 4", what you actually have is the element 4, not its index, so you cannot treat it as an index.
A Python for loop is an iterator-based loop: it steps through the items of lists, strings, and other iterables.
The code:
a = [1,2,3,4,5]
list1 = []
for i in a:
    print(i)
    list1.append(i)
    list1.append(i - 2)
print(list1)
will produce the following output:
1
2
3
4
5
[1, -1, 2, 0, 3, 1, 4, 2, 5, 3]
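If what you actually want is the element two positions before the current one, iterate over indexes with enumerate and guard against going below index 0. A minimal sketch:

```python
a = [1, 2, 3, 4, 5]
list1 = []
for idx, value in enumerate(a):
    list1.append(value)
    if idx >= 2:
        # a[idx - 2] is the element two positions before the current one
        list1.append(a[idx - 2])
print(list1)  # [1, 2, 3, 1, 4, 2, 5, 3]
```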

Related

Initialize a list in cells in specific indexes (the indexes are in a list)

I have a list of indexes; at each of them, I need to initialize an empty list in a specific column. I tried this:
index = [0, 1, 2, 3, 4]
dataframe.at[index, 'column_x'] = [] * len(index)
which resulted in the error message:
pandas.errors.InvalidIndexError: Int64Index([0, 1, 2, 3, 4], dtype='int64')
I tried using loc and iloc instead of at, which also resulted in errors. I couldn't find relevant solutions.
Any suggestions will be welcomed.
Thanks!
You can create a series of empty lists, then use combine_first to fill the rows at the right indexes:
sr = pd.Series([[]] * len(df))
df['column_x'] = df['column_x'].mask(df.index.isin(index)).combine_first(sr)
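A runnable sketch of that approach on a small hypothetical frame (the column name and data here are made up for illustration):

```python
import pandas as pd

df = pd.DataFrame({'column_x': [10, 20, 30, 40, 50, 60]})
index = [0, 1, 2]  # rows that should receive an empty list

# one independent empty list per row; note [[]] * n would share a single list object
sr = pd.Series([[] for _ in range(len(df))])

# blank out the target rows, then fill the resulting NaNs from sr
df['column_x'] = df['column_x'].mask(df.index.isin(index)).combine_first(sr)
```

Rows 0-2 end up holding independent empty lists, while the remaining rows keep their original values.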

Does Pandas have a resample method without dependency on a datetime index?

I have a series that I want to apply an external function to in subsets/chunks of three. Although the actual external function is more complex, for the sake of an example, let's just assume my external function takes an ndarray of integers and returns the sum of all values. So for example:
series = pd.Series([1,1,1,1,1,1,1,1,1])
# Some pandas magic similar to:
result = series.resample(3).apply(myFunction)
# where 3 just represents every 3 values and
# result == pd.Series([3,3,3])
I looked at combining Series.resample and Series.apply, as hinted at by the pseudocode above, but it appears resample depends on a datetime index. Any ideas on how I can effectively downsample by applying an external function like this without a datetime index? Or do you just recommend creating a temporary datetime index, doing this, and then reverting to the original index?
pandas groupby would do the trick here. What you need is a repeated index that specifies the subsets/chunks.
Create chunks:
import numpy as np

n = 3
repeat_idx = np.repeat(np.arange(0, len(series), n), n)[:len(series)]
print(repeat_idx)
[0 0 0 3 3 3 6 6 6]
Groupby
import pandas as pd

def myFunction(l):
    output = 0
    for item in l:
        output += item
    return output

series = pd.Series([1, 1, 1, 1, 1, 1, 1, 1, 1])
result = series.groupby(repeat_idx).apply(myFunction)
print(result)
0 3
3 3
6 3
The solution also works for chunk sizes that do not evenly divide the length of the series:
n = 4
repeat_idx = np.repeat(np.arange(0,len(series), n), n)[:len(series)]
print(repeat_idx)
[0 0 0 0 4 4 4 4 8]
result = series.groupby(repeat_idx).apply(myFunction)
print(result)
0 4
4 4
8 1
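An equivalent, shorter way to build the chunk labels is integer division of the positional index by n; this sketch uses the same groupby trick with the built-in sum instead of a custom function:

```python
import numpy as np
import pandas as pd

series = pd.Series([1, 1, 1, 1, 1, 1, 1, 1, 1])
n = 3
# np.arange(len(series)) // n labels positions as 0,0,0,1,1,1,2,2,2
result = series.groupby(np.arange(len(series)) // n).sum()
print(result.tolist())  # [3, 3, 3]
```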

How to compute how many elements in three arrays in Python are equal to some value at the same position across the arrays?

I have three numpy arrays
a = [0, 1, 2, 3, 4]
b = [5, 1, 7, 3, 9]
c = [10, 1, 3, 3, 1]
and I want to compute how many elements of a, b, and c are equal to 3 at the same position; for this example, the answer would be 3.
An elegant solution is to use NumPy functions:
np.count_nonzero(np.vstack([a, b, c])==3, axis=0).max()
Details:
np.vstack([a, b, c]) - generates an array with 3 rows, composed of your 3 source arrays.
np.count_nonzero(... == 3, axis=0) - counts how many values equal to 3 occur in each column. For your data the result is array([0, 0, 1, 3, 0], dtype=int64).
max() - takes the greatest value, in your case 3.
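Put together as a runnable check with the arrays from the question:

```python
import numpy as np

a = np.array([0, 1, 2, 3, 4])
b = np.array([5, 1, 7, 3, 9])
c = np.array([10, 1, 3, 3, 1])

stacked = np.vstack([a, b, c])                       # shape (3, 5)
per_column = np.count_nonzero(stacked == 3, axis=0)  # [0 0 1 3 0]
result = per_column.max()
print(result)  # 3
```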

How to remove duplicate items from a list (Raku)

FAQ: In Raku, how to remove duplicates from a list to only get unique values?
my @arr = 1, 2, 3, 2, 3, 1, 1, 0;
# desired output: (1 2 3 0)
Use the built-in unique:
@arr.unique # (1 2 3 0)
Use a Hash (a.k.a. map, dictionary):
my %unique = map {$_ => 1}, @arr;
%unique.keys; # (0 1 2 3) do not rely on order
Use a Set: same idea as before, but in one line and optimized by the dev team:
set(@arr).keys
Links:
Answer on Rosetta Code
Hash solution in Think Perl6
Same question for Perl and Python -> always the same methods: a Hash or a Set
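For comparison, the same two approaches in Python (dict keys preserve insertion order since Python 3.7; a set does not):

```python
arr = [1, 2, 3, 2, 3, 1, 1, 0]

# dict keys act like an ordered hash: keeps first-seen order
unique_ordered = list(dict.fromkeys(arr))
print(unique_ordered)  # [1, 2, 3, 0]

# a set also deduplicates, but iteration order is not guaranteed
unique_unordered = set(arr)
print(sorted(unique_unordered))  # [0, 1, 2, 3]
```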

Get indices for values of one array in another array

I have two 1D-arrays containing the same set of values, but in a different (random) order. I want to find the list of indices, which reorders one array according to the other one. For example, my 2 arrays are:
ref = numpy.array([5,3,1,2,3,4])
new = numpy.array([3,2,4,5,3,1])
and I want the list order for which new[order] == ref.
My current idea is:
def find(val):
    return numpy.argmin(numpy.absolute(ref - val))

order = sorted(range(new.size), key=lambda x: find(new[x]))
However, this only works as long as no values are repeated. In my example 3 appears twice, and I get new[order] = [5 3 3 1 2 4]. The second 3 is placed directly after the first one, because my function find() does not track which 3 I am currently looking for.
So I could add something to deal with this, but I have a feeling there might be a better solution out there. Maybe in some library (NumPy or SciPy)?
Edit about the duplicate: This linked solution assumes that the arrays are ordered, or for the "unordered" solution, returns duplicate indices. I need each index to appear only once in order. Which one comes first however, is not important (neither possible based on the data provided).
What I get with sort_idx = A.argsort(); order = sort_idx[np.searchsorted(A,B,sorter = sort_idx)] is: [3, 0, 5, 1, 0, 2]. But what I am looking for is [3, 0, 5, 1, 4, 2].
Given ref, new which are shuffled versions of each other, we can get the unique indices that map ref to new using the sorted version of both arrays and the invertibility of np.argsort.
Start with:
i = np.argsort(ref)
j = np.argsort(new)
Now ref[i] and new[j] both give the sorted version of the arrays, which is the same for both. You can invert the first sort by doing:
k = np.argsort(i)
Now ref is just new[j][k], or new[j[k]]. Since all the operations are shuffles using unique indices, the final index j[k] is unique as well. j[k] can be computed in one step with
order = np.argsort(new)[np.argsort(np.argsort(ref))]
From your original example:
>>> ref = np.array([5, 3, 1, 2, 3, 4])
>>> new = np.array([3, 2, 4, 5, 3, 1])
>>> order = np.argsort(new)[np.argsort(np.argsort(ref))]
>>> order
array([3, 0, 5, 1, 4, 2])
>>> new[order] # Should give ref
array([5, 3, 1, 2, 3, 4])
This is probably not any faster than the more general solutions to the similar question on SO, but it does guarantee unique indices, as you requested. A further optimization would be to replace np.argsort(i) with something like the argsort_unique function in this answer. I would go one step further and just compute the inverse of the sort:
def inverse_argsort(a):
    fwd = np.argsort(a)
    inv = np.empty_like(fwd)
    inv[fwd] = np.arange(fwd.size)
    return inv
order = np.argsort(new)[inverse_argsort(ref)]
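An end-to-end check with the arrays from the question, verifying both that new[order] reproduces ref and that every index appears exactly once:

```python
import numpy as np

def inverse_argsort(a):
    # invert the permutation np.argsort(a) without a second argsort
    fwd = np.argsort(a)
    inv = np.empty_like(fwd)
    inv[fwd] = np.arange(fwd.size)
    return inv

ref = np.array([5, 3, 1, 2, 3, 4])
new = np.array([3, 2, 4, 5, 3, 1])

order = np.argsort(new)[inverse_argsort(ref)]
print(order)       # [3 0 5 1 4 2]
print(new[order])  # [5 3 1 2 3 4] == ref
```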