python-polars casting string to numeric

When applying pandas.to_numeric, the returned dtype is float64 or int64, depending on the data supplied (https://pandas.pydata.org/docs/reference/api/pandas.to_numeric.html).
Is there an equivalent way to do this in Polars?
I have seen How to cast a column with data type List[null] to List[i64] in polars, but I don't want to cast each column individually. I have a couple of string columns that I want to turn numeric, and the values could be ints or floats.
# Code to show casting with pandas.to_numeric
import pandas as pd
df = pd.DataFrame({"col1":["1","2"], "col2":["3.5", "4.6"]})
print("DataFrame:")
print(df)
df[["col1","col2"]]=df[["col1","col2"]].apply(pd.to_numeric)
print(df.dtypes)

Unlike Pandas, Polars is quite picky about datatypes and tends to be rather unaccommodating when it comes to automatic casting. (Among the reasons is performance.)
You can create a feature request for a to_numeric method (but I'm not sure how enthusiastic the response will be).
That said, here are some easy ways to accomplish this.
Create a method
Perhaps the simplest way is to write a method that attempts the cast to integer and then catches the exception. For convenience, you can even attach this method to the Series class itself.
def to_numeric(s: pl.Series) -> pl.Series:
    try:
        result = s.cast(pl.Int64)
    except pl.exceptions.ComputeError:
        result = s.cast(pl.Float64)
    return result
pl.Series.to_numeric = to_numeric
Then to use it:
(
    pl.select(
        s.to_numeric()
        for s in df
    )
)
shape: (2, 2)
┌──────┬──────┐
│ col1 ┆ col2 │
│ --- ┆ --- │
│ i64 ┆ f64 │
╞══════╪══════╡
│ 1 ┆ 3.5 │
├╌╌╌╌╌╌┼╌╌╌╌╌╌┤
│ 2 ┆ 4.6 │
└──────┴──────┘
Use the automatic type inference of CSV parsing
Another method is to write your columns to a CSV file (in a string buffer) and then have read_csv infer the types automatically. You may have to tweak the infer_schema_length parameter in some situations.
from io import StringIO
pl.read_csv(StringIO(df.write_csv()))
shape: (2, 2)
┌──────┬──────┐
│ col1 ┆ col2 │
│ --- ┆ --- │
│ i64 ┆ f64 │
╞══════╪══════╡
│ 1 ┆ 3.5 │
├╌╌╌╌╌╌┼╌╌╌╌╌╌┤
│ 2 ┆ 4.6 │
└──────┴──────┘

Related

ArgumentError: columns argument must be a vector of AbstractVector objects

I want to make a DataFrame in Julia with one column, but I get an error:
julia> using DataFrames
julia> r = rand(3);
julia> DataFrame(r, ["col1"])
ERROR: ArgumentError: columns argument must be a vector of AbstractVector objects
Why?
Update:
I figured out that I could say the following:
julia> DataFrame(reshape(r, :, 1), ["col1"])
3×1 DataFrame
Row │ col1
│ Float64
─────┼──────────
1 │ 0.800824
2 │ 0.989024
3 │ 0.722418
But it's not straightforward. Is there any better way? Why can't I easily create a DataFrame object from a Vector?
Why can't I easily create a DataFrame object from a Vector?
Because it would be ambiguous with the positional-argument syntax you tried: many popular table types are themselves vectors (a vector of NamedTuples, for instance), so a plain Vector passed positionally cannot be unambiguously treated as a single column.
However, what you can write is just:
julia> r = rand(3);
julia> DataFrame(col1=r)
3×1 DataFrame
Row │ col1
│ Float64
─────┼────────────
1 │ 0.00676619
2 │ 0.554207
3 │ 0.394077
to get what you want.
An alternative more similar to your code would be:
julia> DataFrame([r], ["col1"])
3×1 DataFrame
Row │ col1
│ Float64
─────┼────────────
1 │ 0.00676619
2 │ 0.554207
3 │ 0.394077
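If I remember correctly, more recent DataFrames.jl releases also accept name => values pairs as positional arguments, which sidesteps the ambiguity entirely; treat this as a sketch rather than a version-specific guarantee:
using DataFrames

r = rand(3)

# each `name => values` pair becomes one column, so a plain Vector is unambiguous here
df1 = DataFrame(:col1 => r)
df2 = DataFrame(:col1 => r, :col2 => r .^ 2)   # several columns at once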

Issue with Left Outer Join in Julia DataFrame

This one has me stumped.
I'm trying to join two dataframes in Julia, but I get this weird 'nothing' error. This works on a different machine, so I'm thinking it could be a package problem. I Pkg.rm()'d everything and re-installed, but no luck.
Julia v1.2
using PyCall;
using DataFrames;
using CSV;
using Statistics;
using StatsBase;
using Random;
using Plots;
using Dates;
using Missings;
using RollingFunctions;
# using Indicators;
using Pandas;
using GLM;
using Impute;
a = DataFrames.DataFrame(x = [1, 2, 3], y = ["a", "b", "c"])
b = DataFrames.DataFrame(x = [1, 2, 3, 4], z = ["d", "e", "f", "g"])
join(a, b, on=:x, kind =:left)
yields
ArgumentError: `nothing` should not be printed; use `show`, `repr`, or custom output instead.
Stacktrace:
[1] print(::Base.GenericIOBuffer{Array{UInt8,1}}, ::Nothing) at ./show.jl:587
[2] print_to_string(::String, ::Vararg{Any,N} where N) at ./strings/io.jl:129
[3] string at ./strings/io.jl:168 [inlined]
[4] #join#543(::Symbol, ::Symbol, ::Bool, ::Nothing, ::Tuple{Bool,Bool}, ::typeof(join), ::DataFrames.DataFrame, ::DataFrames.DataFrame) at /Users/username/.julia/packages/DataFrames/3ZmR2/src/deprecated.jl:298
[5] (::getfield(Base, Symbol("#kw##join")))(::NamedTuple{(:on, :kind),Tuple{Symbol,Symbol}}, ::typeof(join), ::DataFrames.DataFrame, ::DataFrames.DataFrame) at ./none:0
[6] top-level scope at In[15]:4
kind=:inner works fine but :left, :right, and :outer don't.
This is a problem caused by the way Julia 1.2 prints nothing (i.e. it errors when trying to print it). If you switch to Julia 1.4.1, the problem will disappear.
However, I can see you are on DataFrames.jl 0.21. In this version the join function is deprecated. You should use the innerjoin, leftjoin, rightjoin, outerjoin, etc. functions instead. Then everything will also work on Julia 1.2, e.g.:
julia> leftjoin(a, b, on=:x)
3×3 DataFrame
│ Row │ x │ y │ z │
│ │ Int64 │ String │ String? │
├─────┼───────┼────────┼─────────┤
│ 1 │ 1 │ a │ d │
│ 2 │ 2 │ b │ e │
│ 3 │ 3 │ c │ f │

Convert data type string to float in DataFrame

I have data in string format; when I build a DataFrame from it, the columns end up as SubString, but I want them as floats. What should I do?
x = defect_positions[1:3]
>>>SubString{String}["4.71801", "17.2815", "0.187765"]
>>>SubString{String}["17.3681", "17.1425", "6.13644"]
>>>SubString{String}["0.439987", "0.00231646", "0.404172"]
DataFrame(permutedims(reduce(hcat, x)))
 Row │ x1         x2          x3
     │ SubStrin…  SubStrin…   SubStrin…
─────┼───────────────────────────────────
   1 │ 4.71801    17.2815     0.187765
   2 │ 17.3681    17.1425     6.13644
   3 │ 0.439987   0.00231646  0.404172
How can I convert my DataFrame to float?
DataFrame uses the element types of the input collections. You should convert your strings to a floating-point type before creating the DataFrame. You can parse a string as the floating-point type of your choice with parse.
# map over each `SubString` array in x
# and parse each entry as `Float64` by broadcasting `parse`
parsed_x = map(i -> parse.(Float64, i), x)
DataFrame(permutedims(reduce(hcat, parsed_x)))
You may also choose to do the conversion after creating the DataFrame with strings.
df = DataFrame(permutedims(reduce(hcat, x)))
for i in 1:size(df, 2)
    df[i] = parse.(Float64, df[i])
end
df
Both methods give
│ Row │ x1 │ x2 │ x3 │
│ │ Float64 │ Float64 │ Float64 │
├─────┼─────────┼─────────┼──────────┤
│ 1 │ 4.71801 │ 17.2815 │ 0.187765 │
...
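As a side note, if you are on a newer DataFrames.jl, I believe mapcols gives a more compact way to do the after-the-fact conversion; this is just a sketch and assumes every column holds numeric strings:
# apply the parsing column by column; returns a new DataFrame with Float64 columns
df_float = mapcols(col -> parse.(Float64, col), df)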

Apparent issues with DataFrame string values

I am not sure if this is an actual problem or if I am just not doing something the correct way, but at the moment it appears a little bizarre to me.
When using DataFrames I came across an issue: if you copy a DataFrame to another variable, any changes made through either variable affect both. This goes for the individual columns too. For example:
julia> x = DataFrame(A = ["pink", "blue", "green"], B = ["yellow", "red", "purple"]);
julia> y = x;
julia> x[x.A .== "blue", :A] = "red";
julia> x
3×2 DataFrame
│ Row │ A │ B │
├─────┼───────┼────────┤
│ 1 │ pink │ yellow │
│ 2 │ red │ red │
│ 3 │ green │ purple │
julia> y
3×2 DataFrame
│ Row │ A │ B │
├─────┼───────┼────────┤
│ 1 │ pink │ yellow │
│ 2 │ red │ red │
│ 3 │ green │ purple │
A similar thing happens with columns too: if I were to, say, set up a DataFrame like the one above but use B = A before incorporating both into the data frame, then changing the values in one column automatically changes the other as well.
This seems odd to me, and maybe it is a feature of other programming languages, but I have done the same thing in R many times when making a backup of a data table or swapping data between columns, and have never seen this behaviour. So the question is: is this working as designed, and what is the correct way to copy values between data frames?
I am using Julia version 0.7.0, since I originally installed 1.0.0 through the Manjaro repository and had issues with Is_windows() when trying to build Tk.
The command y = x does not create a new object; it just creates a new reference (or name) for the same DataFrame.
You can create a copy by calling y = copy(x). In your case this still doesn't work as you might expect, because it only copies the DataFrame itself, not the column vectors inside it.
If you want a completely independent new object, you can use y = deepcopy(x). In this case, y will have no references to x.
See this thread for a more detailed discussion:
https://discourse.julialang.org/t/what-is-the-difference-between-copy-and-deepcopy/3918/2
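A minimal sketch of the difference (column values chosen just for illustration; the exact behaviour of copy depends on your DataFrames.jl version):
using DataFrames

x = DataFrame(A = ["pink", "blue", "green"])

y = x            # same object: x and y are two names for one DataFrame
z = deepcopy(x)  # fully independent copy, including the column vectors

x.A[2] = "red"   # mutate the underlying column vector

y.A[2]           # "red"  -- y sees the change, because y is x
z.A[2]           # "blue" -- z is unaffected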

Select numerical columns of Julia DataFrame with missing values

I want to select all columns of a DataFrame in which the datatype is a subtype of Number. However, since there are columns with missing values, the numerical column datatypes can be something like Union{Missing, Int64}.
So far, I came up with:
using DataFrames
df = DataFrame([["a", "b"], [1, missing], [2, 5]])
df_numerical = df[typeintersect.(colwise(eltype, df), Number) .!= Union{}]
This yields the expected result.
Question
Is there a more simple, idiomatic way of doing this? Possibly simliar to:
df.select_dtypes(include=[np.number])
in pandas as taken from an answer to this question?
julia> df[(<:).(eltypes(df),Union{Number,Missing})]
2×2 DataFrame
│ Row │ x2 │ x3 │
├─────┼─────────┼────┤
│ 1 │ 1 │ 2 │
│ 2 │ missing │ 5 │
Please note that . is the broadcasting operator, hence I had to use the <: operator in functional form.
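For example, broadcasting <: over a small vector of element types (illustrative types only) yields the Bool mask used for the column selection:
mask = (<:).([Int64, String, Union{Missing, Int64}], Union{Number, Missing})
# mask == [true, false, true]  -- only the String element type is filtered out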
Another way to do it could be:
df_numerical = df[[i for i in names(df) if Base.nonmissingtype(eltype(df[i])) <: Number]]
This retrieves all the columns whose element type is a subtype of Number, regardless of whether they contain missing data or not.
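For what it's worth, newer DataFrames.jl releases (an assumption about your version) also let you select by element type directly via names(df, T):
# names(df, T) returns the names of columns whose element type is a subtype of T
df_numerical = df[:, names(df, Union{Number, Missing})]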