R's table function in Julia (for DataFrames)

Is there something like R's table function in Julia? I've read about xtab, but do not know how to use it.
Suppose we have an R data.frame rdata whose col6 column is of the Factor type.
R sample code:
rdata <- read.csv("mycsv.csv") #1
table(rdata$col6) #2
In order to read data and make factors in Julia I do it like this:
using DataFrames
jldata = readtable("mycsv.csv", makefactors=true) #1 :col6 will now be pooled.
..., but how do I build something like R's table in Julia (how do I achieve #2)?

You can use the countmap function from StatsBase.jl to count the entries of a single variable. General cross tabulation and statistical tests for contingency tables are lacking at this point. As Ismael points out, this has been discussed in the issue tracker for StatsBase.jl.
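For a single column, a minimal sketch of the countmap approach (assuming the :col6 column from the question) looks like this:
using StatsBase
# counts of each level of :col6, analogous to R's table(rdata$col6)
countmap(jldata[:col6])    # jldata.col6 on current DataFrames versions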

I came to the conclusion that a similar effect can be achieved using by:
Suppose jldata has a :gender column.
julia> by(jldata, :gender, nrow)
3x2 DataFrames.DataFrame
| Row | gender | x1 |
|-----|----------|-------|
| 1 | NA | 175 |
| 2 | "female" | 40254 |
| 3 | "male" | 58574 |
Of course it's not a table, but at least I get the same data type as the data source. Surprisingly, by seems to be faster than countmap.

I believe, "by" is depreciated in Julia as of 1.5.3 (It says: ERROR: ArgumentError: by function was removed from DataFrames.jl).
So here are some alternatives, we can use split apply combine to do a cross tabs as well or use FreqTables.
Using Split Combine:
Example 1 - SingleColumn:
using RDatasets
using DataFrames
mtcars = dataset("datasets", "mtcars")
## To do a table on cyl column
gdf = groupby(mtcars, :Cyl)
combine(gdf, nrow)
Output:
# 3×2 DataFrame
# Row │ Cyl nrow
# │ Int64 Int64
# ─────┼──────────────
# 1 │ 6 7
# 2 │ 4 11
# 3 │ 8 14
Example 2 - Cross tabs between two columns:
## we just have to change the groupby call a little; the rest is the same
gdf = groupby(mtcars, [:Cyl, :AM])
combine(gdf, nrow)
Output:
#6×3 DataFrame
# Row │ Cyl AM nrow
# │ Int64 Int64 Int64
#─────┼─────────────────────
# 1 │ 6 1 3
# 2 │ 4 1 8
# 3 │ 6 0 4
# 4 │ 8 0 12
# 5 │ 4 0 3
# 6 │ 8 1 2
On a side note, if you don't like nrow as the column name on top, you can use:
combine(gdf, nrow => :Count)
to change the name to Count.
Alternate way: Using FreqTables
You can use the FreqTables package as below to compute counts and proportions very easily; to add it, use Pkg.add("FreqTables"):
using FreqTables
## Cross tab between Cyl and AM
freqtable(mtcars.Cyl, mtcars.AM)
## Proportion between cyl and am
prop(freqtable(mtcars.Cyl, mtcars.AM))
## like R's margin argument, column-wise proportions use margins=2
prop(freqtable(mtcars.Cyl, mtcars.AM), margins=2)
## row-wise proportions use margins=1
prop(freqtable(mtcars.Cyl, mtcars.AM), margins=1)
Outputs:
## count cross tabs
#3×2 Named Array{Int64,2}
#Dim1 ╲ Dim2 │ 0 1
#────────────┼───────
#4 │ 3 8
#6 │ 4 3
#8 │ 12 2
## proportion wise (overall)
#3×2 Named Array{Float64,2}
#Dim1 ╲ Dim2 │ 0 1
#────────────┼─────────────────
#4 │ 0.09375 0.25
#6 │ 0.125 0.09375
#8 │ 0.375 0.0625
## Column wise proportion
#3×2 Named Array{Float64,2}
#Dim1 ╲ Dim2 │ 0 1
#────────────┼───────────────────
#4 │ 0.157895 0.615385
#6 │ 0.210526 0.230769
#8 │ 0.631579 0.153846
## Row wise proportion
#3×2 Named Array{Float64,2}
#Dim1 ╲ Dim2 │ 0 1
#────────────┼───────────────────
#4 │ 0.272727 0.727273
#6 │ 0.571429 0.428571
#8 │ 0.857143 0.142857
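For a single column, which is closest to R's table(rdata$col6), freqtable also accepts one vector (a small sketch using the same mtcars data):
## single-column counts, analogous to R's table()
freqtable(mtcars.Cyl)
## and the corresponding proportions
prop(freqtable(mtcars.Cyl))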

Related

Extracting Data from .csv File in Julia

I'm quite new to Julia and I have a .csv file, stored inside a gzip archive, from which I want to extract some information for educational purposes and to get to know the language better.
In Python there are many helpful functions in pandas for this, but I can't seem to get the problem straight...
This is my code (I know, very weak!):
import Pkg
#Pkg.add("CSV")
#Pkg.add("DataFrames")
#Pkg.add("CSVFiles")
#Pkg.add("CodecZlib")
#Pkg.add("GZip")
using CSVFiles
using Pkg
using CSV
using DataFrames
using CodecZlib
using GZip
df = CSV.read("Path//to//file//file.csv.gzip", DataFrame)
print(df)
I added a screenshot to show what the columns inside the .csv file look like.
I would like to extract the dates and make some sort of top 10 of the users with the most comments, top 10 days with the most threads, etc.
I would like to point out that this is not an exercise given to me, but training I would like to do for myself.
I know the pandas version of this looks like this:
df['threadcreateddate'] = pd.to_datetime(df['thread_created_utc']).dt.date
or
df['commentcreateddate'] = pd.to_datetime(df['comment_created_utc']).dt.date
And to sort it:
df_number_of_threads = df.groupby('threadcreateddate')['thread_id'].nunique()
If i were to plot it:
df_number_of_threads.plot(kind='line')
plt.show()
To print:
head = df.head()
print(df_number_of_threads.sort_values(ascending=False).head(10))
Can someone help? The df.select() function didn't work for me.
1. Packages
We obviously need DataFrames.jl. And since we're dealing with dates in the data, and doing a plot later, we'll include Dates and Plots as well.
As this example in CSV.jl's documentation shows, no additional packages are needed for gzipped data. CSV.jl can decompress automatically. So, you can remove the other using statements from your list.
julia> using CSV, DataFrames, Dates, Plots
2. Preparing the Data Frame
You can use CSV.read to load the data into the Data Frame, as in the question. Here, I'll use some sample (simplified) data for illustration, with just 4 columns:
julia> df
6×4 DataFrame
Row │ thread_id thread_created_utc comment_id comment_created_utc
│ Int64 String Int64 String
─────┼─────────────────────────────────────────────────────────────────
1 │ 1 2022-08-13T12:00:00 1 2022-08-13T12:00:00
2 │ 1 2022-08-13T12:00:00 2 2022-08-14T12:00:00
3 │ 1 2022-08-13T12:00:00 3 2022-08-15T12:00:00
4 │ 2 2022-08-16T12:00:00 4 2022-08-16T12:00:00
5 │ 2 2022-08-16T12:00:00 5 2022-08-17T12:00:00
6 │ 2 2022-08-16T12:00:00 6 2022-08-18T12:00:00
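For following along, that sample data frame could be constructed like this (my own construction for reproducibility; it is not shown in the original answer):
julia> df = DataFrame(
           thread_id = [1, 1, 1, 2, 2, 2],
           thread_created_utc = repeat(["2022-08-13T12:00:00", "2022-08-16T12:00:00"], inner = 3),
           comment_id = 1:6,
           comment_created_utc = ["2022-08-13T12:00:00", "2022-08-14T12:00:00", "2022-08-15T12:00:00",
                                  "2022-08-16T12:00:00", "2022-08-17T12:00:00", "2022-08-18T12:00:00"]);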
3. Converting from String to DateTime
To extract the thread dates from the string columns we have, we'll use the Dates standard library.
Depending on the exact format your dates are in, you might have to pass a date format argument for the conversion to Dates types (see the Constructors section of Dates in the Julia manual). Here in the sample data, the dates are in ISO standard format, so we don't need to specify the date format explicitly.
In Julia, we can get the date directly without intermediate conversion to a date-time type, but since it's a good idea to have the columns be in the proper type anyway, we'll first convert the existing columns from strings to DateTime:
julia> transform!(df, [:thread_created_utc, :comment_created_utc] .=> ByRow(DateTime), renamecols = false)
6×4 DataFrame
Row │ thread_id thread_created_utc comment_id comment_created_utc
│ Int64 DateTime Int64 DateTime
─────┼─────────────────────────────────────────────────────────────────
1 │ 1 2022-08-13T12:00:00 1 2022-08-13T12:00:00
2 │ 1 2022-08-13T12:00:00 2 2022-08-14T12:00:00
3 │ 1 2022-08-13T12:00:00 3 2022-08-15T12:00:00
4 │ 2 2022-08-16T12:00:00 4 2022-08-16T12:00:00
5 │ 2 2022-08-16T12:00:00 5 2022-08-17T12:00:00
6 │ 2 2022-08-16T12:00:00 6 2022-08-18T12:00:00
Though it looks similar, this data frame doesn't use Strings for the date-time columns; instead, it has proper DateTime values.
(For an explanation of how this transform! works, see the DataFrames manual: Selecting and transforming columns.)
Edit: Based on the screenshot added to the question now, in your case you'd use transform!(df, [:thread_created_utc, :comment_created_utc] .=> ByRow(s -> DateTime(s, dateformat"yyyy-mm-dd HH:MM:SS.s")), renamecols = false).
4. Creating Date columns
Now, creating the date columns is as easy as:
julia> df.threadcreateddate = Date.(df.thread_created_utc);
julia> df.commentcreateddate = Date.(df.comment_created_utc);
julia> df
6×6 DataFrame
Row │ thread_id thread_created_utc comment_id comment_created_utc commentcreateddate threadcreateddate
│ Int64 DateTime Int64 DateTime Date Date
─────┼───────────────────────────────────────────────────────────────────────────────────────────────────────
1 │ 1 2022-08-13T12:00:00 1 2022-08-13T12:00:00 2022-08-13 2022-08-13
2 │ 1 2022-08-13T12:00:00 2 2022-08-14T12:00:00 2022-08-14 2022-08-13
3 │ 1 2022-08-13T12:00:00 3 2022-08-15T12:00:00 2022-08-15 2022-08-13
4 │ 2 2022-08-16T12:00:00 4 2022-08-16T12:00:00 2022-08-16 2022-08-16
5 │ 2 2022-08-16T12:00:00 5 2022-08-17T12:00:00 2022-08-17 2022-08-16
6 │ 2 2022-08-16T12:00:00 6 2022-08-18T12:00:00 2022-08-18 2022-08-16
These could also be written as a transform! call, and in fact the transform! call in the previous code segment could have instead been replaced with df.thread_created_utc = DateTime.(df.thread_created_utc) and df.comment_created_utc = DateTime.(df.comment_created_utc). However, transform offers a very powerful and flexible syntax that can do a lot more, so it's useful to familiarize yourself with it if you're going to work on DataFrames.
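For reference, here is what the same date-column creation looks like as a single transform! call (a sketch equivalent to the two broadcast assignments above):
julia> transform!(df,
           :thread_created_utc  => ByRow(Date) => :threadcreateddate,
           :comment_created_utc => ByRow(Date) => :commentcreateddate);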
5. Getting the number of threads per day
julia> gdf = combine(groupby(df, :threadcreateddate), :thread_id => length ∘ unique => :number_of_threads)
2×2 DataFrame
Row │ threadcreateddate number_of_threads
│ Date Int64
─────┼──────────────────────────────────────
1 │ 2022-08-13 1
2 │ 2022-08-16 1
Note that df.groupby('threadcreateddate') becomes groupby(df, :threadcreateddate), which is a common pattern in Python-to-Julia conversions. Julia doesn't use the . based object-oriented syntax, and instead the data frame is one of the arguments to the function.
length ∘ unique uses the function composition operator ∘, and the result is a function that applies unique and then length. Here we take the unique values of the thread_id column in each group, apply length to them (so, the equivalent of nunique), and store the result in the number_of_threads column of a new data frame called gdf (combine returns an ordinary DataFrame).
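A quick illustration of the composed function on its own:
julia> (length ∘ unique)([1, 1, 2, 2, 2])
2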
6. Plotting
julia> plot(gdf.threadcreateddate, gdf.number_of_threads)
Since our grouped data frame conveniently contains both the date and the number of threads, we can plot the number_of_threads against the dates, making for a nice and informative visualization.
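Axis labels are optional; if you want them, they can be passed as keyword arguments in the same call (just a cosmetic variation):
julia> plot(gdf.threadcreateddate, gdf.number_of_threads,
            xlabel = "date", ylabel = "number of threads", legend = false)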
As Sundar R commented, it is hard to give a precise answer for your data, as there might be some relevant details missing. But here is a general pattern you can follow:
julia> using DataFrames
julia> df = DataFrame(id = [1, 1, 2, 2, 2, 3])
6×1 DataFrame
Row │ id
│ Int64
─────┼───────
1 │ 1
2 │ 1
3 │ 2
4 │ 2
5 │ 2
6 │ 3
julia> first(sort(combine(groupby(df, :id), nrow), :nrow, rev=true), 10)
3×2 DataFrame
Row │ id nrow
│ Int64 Int64
─────┼──────────────
1 │ 2 3
2 │ 1 2
3 │ 3 1
What this code does:
groupby groups data by the column you want to aggregate
combine with the nrow argument counts the number of rows in each group and stores the count in the :nrow column (this is the default; you could choose another column name)
sort sorts the data frame by :nrow, and rev=true makes the order descending
first picks the first 10 rows from this data frame
If you want something more similar to dplyr in R, with piping, you can use @chain, which is exported by DataFramesMeta.jl:
julia> using DataFramesMeta
julia> @chain df begin
groupby(:id)
combine(nrow)
sort(:nrow, rev=true)
first(10)
end
3×2 DataFrame
Row │ id nrow
│ Int64 Int64
─────┼──────────────
1 │ 2 3
2 │ 1 2
3 │ 3 1

Add thousands separator to column in dataframe in Julia

I have a dataframe with two columns a and b, and at the moment both look like column a, but I want to add separators so that column b looks like below. I have tried using the package Format.jl, but I haven't gotten the result I'm after. It may be worth mentioning that both columns are Int64 and the column names a and b are of type Symbol.
a | b
150000 | 1500,00
27 | 27,00
16614 | 166,14
Is there some other way to solve this than using Format.jl? Or is Format.jl the way to go?
Assuming you want the commas in their typical positions rather than how you wrote them, this is one way:
julia> using DataFrames, Format
julia> f(x) = format(x, commas=true)
f (generic function with 1 method)
julia> df = DataFrame(a = [1000000, 200000, 30000])
3×1 DataFrame
Row │ a
│ Int64
─────┼─────────
1 │ 1000000
2 │ 200000
3 │ 30000
julia> transform(df, :a => ByRow(f) => :a_string)
3×2 DataFrame
Row │ a a_string
│ Int64 String
─────┼────────────────────
1 │ 1000000 1,000,000
2 │ 200000 200,000
3 │ 30000 30,000
If you instead want the row replaced, use transform(df, :a => ByRow(f), renamecols=false).
If you just want the output vector rather than changing the DataFrame, you can use format.(df.a, commas=true).
You could write your own function f to achieve the same behavior, but you might as well use the one someone already wrote inside the Format.jl package.
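If you would rather avoid the dependency, a minimal hand-rolled version could look like this (a sketch, only intended for non-negative integers):
# insert a comma before every group of three digits, counted from the right
add_commas(x::Integer) = replace(string(x), r"(?<=\d)(?=(\d{3})+$)" => ",")

add_commas(1000000)   # "1,000,000"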
However, once you transform your data to Strings as above, you won't be able to filter/sort/analyze the numerical data in the DataFrame. I would suggest that you apply the formatting in the printing step (rather than modifying the DataFrame itself to contain strings) by using the PrettyTables package. This can format the entire DataFrame at once.
julia> using DataFrames, PrettyTables
julia> df = DataFrame(a = [1000000, 200000, 30000], b = [500, 6000, 70000])
3×2 DataFrame
Row │ a b
│ Int64 Int64
─────┼────────────────
1 │ 1000000 500
2 │ 200000 6000
3 │ 30000 70000
julia> pretty_table(df, formatters = ft_printf("%'d"))
┌───────────┬────────┐
│ a │ b │
│ Int64 │ Int64 │
├───────────┼────────┤
│ 1,000,000 │ 500 │
│ 200,000 │ 6,000 │
│ 30,000 │ 70,000 │
└───────────┴────────┘
(Edited to reflect the updated specs in the question)
julia> df = DataFrame(a = [150000, 27, 16614]);
julia> function insertdecimalcomma(n)
if n < 100
return string(n) * ",00"
else
return replace(string(n), r"(..)$" => s",\1")
end
end
insertdecimalcomma (generic function with 1 method)
julia> df.b = insertdecimalcomma.(df.a)
julia> df
3×2 DataFrame
Row │ a b
│ Int64 String
─────┼─────────────────
1 │ 150000 1500,00
2 │ 27 27,00
3 │ 16614 166,14
Note that the b column will necessarily be a String after this change, as integer types cannot store formatting information in them.
If you have a lot of data and find that you need better performance, you may also want to use the InlineStrings package:
julia> # same as before, up to the function definition
julia> using InlineStrings
julia> df.b = inlinestrings(insertdecimalcomma.(df.a))
3-element Vector{String7}:
"1500,00"
"27,00"
"166,14"
This stores the b column's data as fixed-size strings (String7 type here), which are generally treated like normal Strings, but can be significantly better for performance.

How to extract column_name String and data Vector from a one-column DataFrame in Julia?

I was able to extract the column of a DataFrame that I want using a regular expression, but now I want to extract from that DataFrame column a String with the column name and a Vector with the data. How can I construct f and g below? Alternate approaches also welcome.
julia> df = DataFrame("x (in)" => 1:3, "y (°C)" => 4:6)
3×2 DataFrame
Row │ x (in) y (°C)
│ Int64 Int64
─────┼────────────────
1 │ 1 4
2 │ 2 5
3 │ 3 6
julia> y = df[:, r"y "]
3×1 DataFrame
Row │ y (°C)
│ Int64
─────┼────────
1 │ 4
2 │ 5
3 │ 6
julia> y_units = f(y)
"°C"
julia> y_data = g(y)
3-element Vector{Int64}:
4
5
6
f(df) = only(names(df))
g(df) = only(eachcol(df)) # or df[!, 1] if you do not need to check that this is the only column
(only is used to check that the data frame actually has only one column)
An alternate approach to get the column name without creating an intermediate data frame is just writing:
julia> names(df, r"y ")
1-element Vector{String}:
"y (°C)"
to extract the column name (you need to get the first element of this vector).
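If what you actually need is just the unit string ("°C") rather than the whole column name, one option (my own addition, assuming the unit is always enclosed in parentheses) is:
julia> unit_of(df) = match(r"\((.*)\)", only(names(df))).captures[1]
unit_of (generic function with 1 method)

julia> unit_of(y)
"°C"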

Julia DataFrame rollup by groups (aka subtotals)

What is a concise way to express rollup aggregations in DataFrames.jl?
Example dataset:
+---+----------+-----+---------+------+
| id| date_col|group| item|amount|
+---+----------+-----+---------+------+
| 1|2020-03-11| A|BOO00OXXX| 1.0|
| 2|2020-03-11| A|BOO00OXXY| 2.0|
| 3|2020-03-11| B|BOO00OXXZ| 17.0|
| 4|2020-03-12| B|BOO00OXXA| 9.0|
| 5|2020-03-12| B|BOO00OXXB| 1.0|
| 6|2020-03-12| B|BOO00OXXY| 5.0|
| 7|2020-03-13| C|BOO00OXXY| 2.0|
| 8|2020-03-13| C|BOO00OXXX| 1.0|
| 9|2020-03-13| C|BOO00OXXY| 2.0|
+---+----------+-----+---------+------+
# desired output
+------+---------+
|group |total_amt|
+------+---------+
|ROLLUP| 40.0|
| A | 3.0|
| B | 32.0|
| C | 5.0|
+------+---------+
I commonly need to summarize a dataset, sometimes for sharing reports, which aggregates values over certain columns with subtotals and grand totals. These are called 'rollups' or 'subtotals'/'grand totals' in Excel.
In Spark these are conveniently generated with rollup or cube aggregations. The above result is generated with the following Spark API call.
How can I produce a similar table with Julia DataFrames.jl?
// scala spark
df.rollup("group")
.agg(sum("amount").as("total_amt"))
.orderBy("group")
.show()
+-----+---------+
|group|total_amt|
+-----+---------+
| null| 40.0|
| A| 3.0|
| B| 32.0|
| C| 5.0|
+-----+---------+
// note the aggregated column label is null for the subtotal (aka rollup)
NOTE: I am able to produce the result with multiple julia groupby() and combine() operations, and then union or vcat the result into a single dataframe. I need and want a concise and readable idiom.
EDIT: adding a specific julia implementation to show why I want something more concise.
using DataFrames, Dates
df = DataFrame(id = [1,2,3,4,5,6,7,8,9]
, date_col = Date.(["2020-03-11","2020-03-11","2020-03-11","2020-03-12","2020-03-12","2020-03-12","2020-03-13","2020-03-13","2020-03-13"])
, group = ["A","A","B","B","B","B","C","C","C"]
, amount = [1.0,2.0,17.0,9.0,1.0,5.0,2.0,1.0,2.0]
)
# replicate the spark.rollup example
df1 = combine(groupby(df, :group), :amount => sum => :total_amt);
df2 = combine(df, :amount => sum => :total_amt);
df2[:, :group] = [missing];
df_result = sort(vcat(df1, df2, cols = :setequal), rev = true)
4×2 DataFrame
Row │ group total_amt
│ String? Float64
─────┼────────────────────
1 │ missing 40.0
2 │ C 5.0
3 │ B 32.0
4 │ A 3.0
Adding a version of @bkamins' answer, sticking with combine()
I think I prefer this answer so far, as it maintains a bit of symmetry and, if made into a function, it is easier to see where the arguments would go.
using Chain
@chain df begin
groupby(:group)
combine(:amount => sum => :total_amt)
append!(insertcols!(combine(df, :amount => sum => :total_amt), :group => "ROLLUP"))
sort(:total_amt, rev = true)
end
This is how I would do it:
julia> using DataFrames, Chain
julia> df = DataFrame(group=["A", "A", "B", "B", "C", "C"], amount=1:6)
6×2 DataFrame
Row │ group amount
│ String Int64
─────┼────────────────
1 │ A 1
2 │ A 2
3 │ B 3
4 │ B 4
5 │ C 5
6 │ C 6
julia> @chain df begin
groupby(:group)
combine(:amount => sum => :total_amount)
push!(_, (missing, sum(_.total_amount)), promote=true)
sort(:total_amount, rev=true)
end
4×2 DataFrame
Row │ group total_amount
│ String? Int64
─────┼───────────────────────
1 │ missing 21
2 │ C 11
3 │ B 7
4 │ A 3
This will be efficient and hopefully you find it readable.
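If you need this pattern repeatedly, it could be wrapped in a small helper function (my own sketch, not an official DataFrames API):
julia> function rollup_sum(df, group::Symbol, value::Symbol; label = missing)
           out = combine(groupby(df, group), value => sum => :total_amount)
           push!(out, (label, sum(df[!, value])), promote = true)
           sort(out, :total_amount, rev = true)
       end;

julia> rollup_sum(df, :group, :amount)   # same result as the @chain above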
As @jling commented, there is no built-in rollup.
Here is an answer with DataFramesMeta.jl
julia> using DataFramesMeta;
julia> @chain df begin
groupby(:group)
@combine :total_amount = sum(:amount)
@aside df2 = @combine df :total_amount = sum(:amount)
vcat(df2; cols = :union)
end
4×2 DataFrame
Row │ group total_amount
│ String? Int64
─────┼───────────────────────
1 │ A 3
2 │ B 7
3 │ C 11
4 │ missing 21
julia> df
5×2 DataFrame
Row │ g amt
│ Int64 Int64
─────┼──────────────
1 │ 0 2
2 │ 1 1
3 │ 1 1
4 │ 0 1
5 │ 1 1
julia> combine(groupby(df, :g), :amt => sum => :total_amt)
2×2 DataFrame
Row │ g total_amt
│ Int64 Int64
─────┼─────────────────────
1 │ 0 3
2 │ 1 3
# alternative do-block syntax:
julia> combine(groupby(df, :g)) do sub_df
(total_amt = sum(sub_df.amt),)
end
2×2 DataFrame
Row │ g total_amt
│ Int64 Int64
─────┼──────────────────
1 │ 0 3
2 │ 1 3
Does this more or less do what you want? By the way, the relevant docs: https://dataframes.juliadata.org/stable/man/split_apply_combine/
I feel like we would need a few iterations to cover all the things you might want to do in Spark, and SO is not well suited to that kind of back and forth.

Julia: create an empty DataFrame and append rows to it

I am trying out the Julia DataFrames module. I am interested in it so I can use it to plot simple simulations in Gadfly. I want to be able to iteratively add rows to the dataframe and I want to initialize it as empty.
The tutorials/documentation on how to do this is sparse (most documentation describes how to analyse imported data).
To append to a nonempty dataframe is straightforward:
df = DataFrame(A = [1, 2], B = [4, 5])
push!(df, [3 6])
This returns:
3x2 DataFrame
| Row | A | B |
|-----|---|---|
| 1 | 1 | 4 |
| 2 | 2 | 5 |
| 3 | 3 | 6 |
But for an empty init I get errors.
df = DataFrame(A = [], B = [])
push!(df, [3, 6])
Error message:
ArgumentError("Error adding 3 to column :A. Possible type mis-match.")
while loading In[220], in expression starting on line 2
What is the best way to initialize an empty Julia DataFrame such that you can iteratively add items to it later in a for loop?
A zero length array defined using only [] will lack sufficient type information.
julia> typeof([])
Array{None,1}
So the way to avoid that problem is to simply indicate the type.
julia> typeof(Int64[])
Array{Int64,1}
And you can apply that to your DataFrame problem
julia> df = DataFrame(A = Int64[], B = Int64[])
0x2 DataFrame
julia> push!(df, [3 6])
julia> df
1x2 DataFrame
| Row | A | B |
|-----|---|---|
| 1 | 3 | 6 |
using Pkg, CSV, DataFrames
iris = CSV.read(joinpath(Pkg.dir("DataFrames"), "test/data/iris.csv"))
new_iris = similar(iris, nrow(iris))
head(new_iris, 2)
# 2×5 DataFrame
# │ Row │ SepalLength │ SepalWidth │ PetalLength │ PetalWidth │ Species │
# ├─────┼─────────────┼────────────┼─────────────┼────────────┼─────────┤
# │ 1 │ missing │ missing │ missing │ missing │ missing │
# │ 2 │ missing │ missing │ missing │ missing │ missing │
for (i, row) in enumerate(eachrow(iris))
new_iris[i, :] = row[:]
end
head(new_iris, 2)
# 2×5 DataFrame
# │ Row │ SepalLength │ SepalWidth │ PetalLength │ PetalWidth │ Species │
# ├─────┼─────────────┼────────────┼─────────────┼────────────┼─────────┤
# │ 1 │ 5.1 │ 3.5 │ 1.4 │ 0.2 │ setosa │
# │ 2 │ 4.9 │ 3.0 │ 1.4 │ 0.2 │ setosa │
The answer from @waTeim already answers the initial question. But what if I want to dynamically create an empty DataFrame and append rows to it? E.g. what if I don't want hard-coded column names?
In this case, df = DataFrame(A = Int64[], B = Int64[]) is not sufficient.
The NamedTuple A = Int64[], B = Int64[] needs to be created dynamically.
Let's assume we have a vector of column names col_names and a vector of column types col_types from which to create an empty DataFrame.
col_names = [:A, :B] # needs to be a vector of Symbols
col_types = [Int64, Float64]
# Create a NamedTuple (A=Int64[], ....) by doing
named_tuple = (; zip(col_names, type[] for type in col_types )...)
df = DataFrame(named_tuple) # 0×2 DataFrame
Alternatively, the NamedTuple could be created with
# or by doing
named_tuple = NamedTuple{Tuple(col_names)}(type[] for type in col_types )
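Either way, rows can then be appended to the resulting data frame as usual, for example:
julia> push!(df, (1, 2.5));   # appends a row with A = 1, B = 2.5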
I think, at least in recent versions of Julia, you can achieve this by creating Pair objects without specifying the type:
df = DataFrame("A" => [], "B" => [])
push!(df, [5,'f'])
1×2 DataFrame
Row │ A B
│ Any Any
─────┼──────────
1 │ 5 f
As seen in this post by @Bogumił Kamiński, where multiple columns are needed, something like this can be done:
entries = ["A", "B", "C", "D"]
df = DataFrame([ name =>[] for name in entries])
julia> push!(df,[4,5,'r','p'])
1×4 DataFrame
Row │ A B C D
│ Any Any Any Any
─────┼────────────────────
1 │ 4 5 r p
Or, as pointed out by @Antonello below, if you know the type you can do:
df = DataFrame([name => Int[] for name in entries])
which is also in @Bogumił Kamiński's original post.