How to create a non-alphabetically ordered Categorical column in a Polars DataFrame?

In Pandas, you can create an "ordered" Categorical column from an existing string column as follows:
column_values_with_custom_order = ["B", "A", "C"]
df["Column"] = pd.Categorical(df.Column, categories=column_values_with_custom_order, ordered=True)
In the Polars documentation, I couldn't find a way to create ordered categorical columns. However, I could reproduce one by using pl.from_pandas(df), so I suspect this is possible with Polars as well.
What would be the recommended way to do this?
I tried to create a new column with polars_df.with_columns(col("Column").cast(pl.Categorical)), but I don't know how to include the custom ordering in this.
I also checked In polars, can I create a categorical type with levels myself?, but I would prefer not to add another column to my DataFrame just for ordering.

Say you have
df = pl.DataFrame(
{"cats": ["z", "z", "k", "a", "b"], "vals": [3, 1, 2, 2, 3]}
)
and you want to make cats a categorical but you want the categorical ordered as
myorder=["k", "z", "b", "a"]
There are two ways to do this. One way is with pl.StringCache(), as in the question you reference; the other is messier. The former doesn't require you to add any columns to your df, and it's actually very succinct.
with pl.StringCache():
    pl.Series(myorder).cast(pl.Categorical)
    df = df.with_columns(pl.col('cats').cast(pl.Categorical))
What happens is that everything cast under the same StringCache shares the same key values, so casting the myorder list first fixes which key is allocated to each string. When your df is cast under the same cache, its strings get those same keys, which are in the order you wanted.
The other way to do this is as follows:
You have to sort your df by the desired ordering, then you can do set_ordering('physical'). If you want to maintain the original row order, just use with_row_count at the beginning so you can restore it afterwards.
Putting it all together, it looks like this:
df = (
    df.with_row_count('i')
    .join(pl.from_dicts([{'order': x, 'cats': y} for x, y in enumerate(myorder)]), on='cats')
    .sort('order').drop('order')
    .with_columns(pl.col('cats').cast(pl.Categorical).cat.set_ordering('physical'))
    .sort('i').drop('i')
)
You can verify by doing:
df.select(['cats', pl.col('cats').to_physical().alias('phys')])
shape: (5, 2)
┌──────┬──────┐
│ cats ┆ phys │
│ --- ┆ --- │
│ cat ┆ u32 │
╞══════╪══════╡
│ z ┆ 1 │
│ z ┆ 1 │
│ k ┆ 0 │
│ a ┆ 3 │
│ b ┆ 2 │
└──────┴──────┘

From the docs, use:
polars_df.with_columns(pl.col("Column").cast(pl.Categorical).cat.set_ordering("lexical"))
See the docs for set_ordering. For example:
df = pl.DataFrame(
{"cats": ["z", "z", "k", "a", "b"], "vals": [3, 1, 2, 2, 3]}
).with_columns(
[
pl.col("cats").cast(pl.Categorical).cat.set_ordering("lexical"),
]
)
df.sort(["cats", "vals"])

Related

ArgumentError: columns argument must be a vector of AbstractVector objects

I want to make a DataFrame in Julia with one column, but I get an error:
julia> using DataFrames
julia> r = rand(3);
julia> DataFrame(r, ["col1"])
ERROR: ArgumentError: columns argument must be a vector of AbstractVector objects
Why?
Update:
I figured out that I could say the following:
julia> DataFrame(reshape(r, :, 1), ["col1"])
3×1 DataFrame
Row │ col1
│ Float64
─────┼──────────
1 │ 0.800824
2 │ 0.989024
3 │ 0.722418
But it's not straightforward. Is there any better way? Why can't I easily create a DataFrame object from a Vector?
Why can't I easily create a DataFrame object from a Vector?
Because it would be ambiguous with the syntax where you pass positional arguments the way you tried: many popular table types are themselves vectors.
However, what you can write is just:
julia> r = rand(3);
julia> DataFrame(col1=r)
3×1 DataFrame
Row │ col1
│ Float64
─────┼────────────
1 │ 0.00676619
2 │ 0.554207
3 │ 0.394077
to get what you want.
An alternative more similar to your code would be:
julia> DataFrame([r], ["col1"])
3×1 DataFrame
Row │ col1
│ Float64
─────┼────────────
1 │ 0.00676619
2 │ 0.554207
3 │ 0.394077

python-polars casting string to numeric

When applying pandas.to_numeric, the return dtype is float64 or int64 depending on the data supplied: https://pandas.pydata.org/docs/reference/api/pandas.to_numeric.html
Is there an equivalent way to do this in Polars?
I have seen How to cast a column with data type List[null] to List[i64] in polars, however I don't want to cast each column individually. I have a couple of string columns I want to turn numeric; these could hold int or float values.
#code to show casting in pandas.to_numeric
import pandas as pd
df = pd.DataFrame({"col1":["1","2"], "col2":["3.5", "4.6"]})
print("DataFrame:")
print(df)
df[["col1","col2"]]=df[["col1","col2"]].apply(pd.to_numeric)
print(df.dtypes)
Unlike Pandas, Polars is quite picky about datatypes and tends to be rather unaccommodating when it comes to automatic casting. (Among the reasons is performance.)
You can create a feature request for a to_numeric method (but I'm not sure how enthusiastic the response will be.)
That said, here are some easy ways to accomplish this.
Create a method
Perhaps the simplest way is to write a method that attempts the cast to integer and then catches the exception. For convenience, you can even attach this method to the Series class itself.
def to_numeric(s: pl.Series) -> pl.Series:
    try:
        result = s.cast(pl.Int64)
    except pl.exceptions.ComputeError:
        result = s.cast(pl.Float64)
    return result

pl.Series.to_numeric = to_numeric
Then to use it:
(
    pl.select(
        s.to_numeric()
        for s in df
    )
)
shape: (2, 2)
┌──────┬──────┐
│ col1 ┆ col2 │
│ --- ┆ --- │
│ i64 ┆ f64 │
╞══════╪══════╡
│ 1 ┆ 3.5 │
├╌╌╌╌╌╌┼╌╌╌╌╌╌┤
│ 2 ┆ 4.6 │
└──────┴──────┘
Use the automatic casting of csv parsing
Another method is to write your columns to a CSV file (in a string buffer), then have read_csv infer the types automatically. You may have to tweak the infer_schema_length parameter in some situations.
from io import StringIO
pl.read_csv(StringIO(df.write_csv()))
shape: (2, 2)
┌──────┬──────┐
│ col1 ┆ col2 │
│ --- ┆ --- │
│ i64 ┆ f64 │
╞══════╪══════╡
│ 1 ┆ 3.5 │
├╌╌╌╌╌╌┼╌╌╌╌╌╌┤
│ 2 ┆ 4.6 │
└──────┴──────┘

Issue with Left Outer Join in Julia DataFrame

This one has me stumped.
I'm trying to join two DataFrames in Julia, but I get this weird 'nothing' error. This works on a different machine, so I'm thinking it could be a package problem. I Pkg.rm()'d everything and re-installed, but no go.
Julia v1.2
using PyCall;
using DataFrames;
using CSV;
using Statistics;
using StatsBase;
using Random;
using Plots;
using Dates;
using Missings;
using RollingFunctions;
# using Indicators;
using Pandas;
using GLM;
using Impute;
a = DataFrames.DataFrame(x = [1, 2, 3], y = ["a", "b", "c"])
b = DataFrames.DataFrame(x = [1, 2, 3, 4], z = ["d", "e", "f", "g"])
join(a, b, on=:x, kind =:left)
yields
ArgumentError: `nothing` should not be printed; use `show`, `repr`, or custom output instead.
Stacktrace:
[1] print(::Base.GenericIOBuffer{Array{UInt8,1}}, ::Nothing) at ./show.jl:587
[2] print_to_string(::String, ::Vararg{Any,N} where N) at ./strings/io.jl:129
[3] string at ./strings/io.jl:168 [inlined]
[4] #join#543(::Symbol, ::Symbol, ::Bool, ::Nothing, ::Tuple{Bool,Bool}, ::typeof(join), ::DataFrames.DataFrame, ::DataFrames.DataFrame) at /Users/username/.julia/packages/DataFrames/3ZmR2/src/deprecated.jl:298
[5] (::getfield(Base, Symbol("#kw##join")))(::NamedTuple{(:on, :kind),Tuple{Symbol,Symbol}}, ::typeof(join), ::DataFrames.DataFrame, ::DataFrames.DataFrame) at ./none:0
[6] top-level scope at In[15]:4
kind=:inner works fine but :left, :right, and :outer don't.
This is a problem caused by the way Julia 1.2 prints nothing (i.e. it errors when trying to print it). If you switch to Julia 1.4.1 the problem will disappear.
However, I can see you are on DataFrames.jl 0.21. In this version the join function is deprecated; you should use the innerjoin, leftjoin, rightjoin, outerjoin, etc. functions instead. Then everything will also work on Julia 1.2, e.g.:
julia> leftjoin(a, b, on=:x)
3×3 DataFrame
│ Row │ x │ y │ z │
│ │ Int64 │ String │ String? │
├─────┼───────┼────────┼─────────┤
│ 1 │ 1 │ a │ d │
│ 2 │ 2 │ b │ e │
│ 3 │ 3 │ c │ f │

Select numerical columns of Julia DataFrame with missing values

I want to select all columns of a DataFrame in which the datatype is a subtype of Number. However, since there are columns with missing values, the numerical column datatypes can be something like Union{Missing, Int64}.
So far, I came up with:
using DataFrames
df = DataFrame([["a", "b"], [1, missing], [2, 5]])
df_numerical = df[typeintersect.(colwise(eltype, df), Number) .!= Union{}]
This yields the expected result.
Question
Is there a simpler, idiomatic way of doing this? Possibly similar to:
df.select_dtypes(include=[np.number])
in pandas as taken from an answer to this question?
julia> df[(<:).(eltypes(df),Union{Number,Missing})]
2×2 DataFrame
│ Row │ x2 │ x3 │
├─────┼─────────┼────┤
│ 1 │ 1 │ 2 │
│ 2 │ missing │ 5 │
Please note that the . is the broadcasting operator, and hence I had to use the <: operator in functional form.
Another way to do it could be:
df_numerical = df[[i for i in names(df) if Base.nonmissingtype(eltype(df[i])) <: Number]]
To retrieve all the columns that are a subtype of Number, regardless of whether they contain missing data or not.

Julia dataframe where a column is an array of arrays?

I'm trying to create a table where each row has time-series data associated with a particular test-case.
julia> df = DataFrame(var1 = Int64[], var2 = Int64[], ts = Array{Array{Int64, 1}, 1})
0x3 DataFrames.DataFrame
I'm able to create the data frame. Each var1, var2 pair is intended to have an associated time series.
I want to generate data in a loop and want to append to this dataframe using push!
I've tried
julia> push!(df, [1, 2, [3,4,5]])
ERROR: ArgumentError: Length of iterable does not match DataFrame column count.
in push! at /Users/stro/.julia/v0.4/DataFrames/src/dataframe/dataframe.jl:871
and
julia> push!(df, (1, 2, [3,4,5]))
ERROR: ArgumentError: Error adding [3,4,5] to column :ts. Possible type mis-match.
in push! at /Users/stro/.julia/v0.4/DataFrames/src/dataframe/dataframe.jl:883
What's the best way to go about this? Is my intended approach even the right path?
You've accidentally put the type of a vector in instead of an actual vector. This declaration will work:
df = DataFrame(var1 = Int64[], var2 = Int64[], ts = Array{Int64, 1}[])
Note the change from Array{Array{Int64, 1}, 1}, which is a type, to Array{Int64, 1}[], which is an actual vector with that type.
Then things work:
julia> push!(df, (1, 2, [3,4,5]))
julia> df
1x3 DataFrames.DataFrame
│ Row │ var1 │ var2 │ ts │
┝━━━━━┿━━━━━━┿━━━━━━┿━━━━━━━━━┥
│ 1 │ 1 │ 2 │ [3,4,5] │
Note that your other example, using [1, 2, [3,4,5]] still does not work. This is because a quirk in Julia's array syntax means that the comma , operator does concatenation, so in fact [1, 2, [3,4,5]] means [1, 2, 3, 4, 5]. This behaviour is weird and will be fixed in Julia 0.5, but is preserved in 0.4 for backwards compatibility.