I'm using a PySpark DataFrame.
I have a df with one column and 9 rows, for example:
temp = spark.read.option("sep","\n").csv("temp.txt")
temp:
sam
11
newyork
john
13
boston
eric
22
texas
Without using the Pandas library, how can I transform this into a 3x3 DataFrame with columns name, age, city?
Like this:
name,age,city
sam,11,newyork
john,13,boston
eric,22,texas
I would read the file as an RDD so you can take advantage of zipWithIndex to add an index to your data.
rdd = sc.textFile("temp.txt")
We can now use integer (floor) division to create an index with which to group records together, and use this new index as the key for the RDD. The corresponding value will be a tuple of the header, computed using the modulus, and the actual value. (Note that the index returned by zipWithIndex comes second in each record, which is why we use row[1] for the division/modulus.)
Next use reduceByKey to concatenate the value tuples together. For each key this gives a flat tuple of alternating headers and values. Use map to turn that into a Row (to keep the column headers, etc.).
Finally use toDF() to convert to a DataFrame. You can use select(header) to get the columns in the desired order.
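To make the intermediate shapes concrete (an illustration based on the sample data above, not actual output): after the zipWithIndex/map step the records look like
(0, ('name', 'sam')), (0, ('age', '11')), (0, ('city', 'newyork')), (1, ('name', 'john')), ...
and after reduceByKey each key holds one flat tuple, e.g.
(0, ('name', 'sam', 'age', '11', 'city', 'newyork'))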
from operator import add
from pyspark.sql import Row
header = ["name", "age", "city"]
df = rdd.zipWithIndex()\
    .map(lambda row: (row[1]//3, (header[row[1]%3], row[0])))\
    .reduceByKey(add)\
    .map(lambda row: Row(**dict(zip(row[1][::2], row[1][1::2]))))\
    .toDF()\
    .select(header)
df.show()
#+----+---+-------+
#|name|age| city|
#+----+---+-------+
#| sam| 11|newyork|
#|eric| 22| texas|
#|john| 13| boston|
#+----+---+-------+
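Note that, since the data was read as plain text, all three columns come back as strings, and reduceByKey does not preserve the original row order. If that matters, a follow-up cast and sort (a sketch, not part of the original answer) could look like:
from pyspark.sql.functions import col
df = df.withColumn("age", col("age").cast("int")).orderBy("name")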
Related
I have a Pandas dataframe with several columns, where the entries of each column are a combination of numbers, upper- and lower-case letters, and some special characters, i.e. "=A-Za-z0-9_|". Each entry of the column is of the form:
'x=ABCDefgh_5|123|'
I want to retain only the digits 0-9 appearing between | | and strip out all other characters. Here is my code for one column of the dataframe:
list(map(lambda x: x.lstrip(r'\[=A-Za-z_|,]+'), df[1]))
However, the code returns the full entry 'x=ABCDefgh_5|123|' without stripping out anything. Is there an error in my code?
Yes, there is: str.lstrip does not take a regular expression; it treats its argument as a plain set of characters to remove from the left, so your pattern strips nothing here. Instead of working with these unreadable regex expressions, you might want to consider a simple split. For example:
import pandas as pd
d = {'col': ["x=ABCDefgh_5|123|", "x=ABCDefgh_5|123|"]}
df = pd.DataFrame(data=d)
output = df["col"].str.split("|").str[1]
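If you do prefer a regex, str.extract with a capture group is another option (a sketch based on the sample data above, not part of the original answer):
output = df["col"].str.extract(r"\|(\d+)\|", expand=False)  # '123' for each row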
I have a groupby that I want to have as a pyspark dataframe, as I need to join the resulting data with another dataset that I have.
So basically, I just want this table to be a dataframe that I can perform dataframe operations on.
DATE        COUNT
01/12/2019  583
02/14/2020  421
crash_orig.groupBy('Date').count().sort(desc('count')).show()
show() only displays the result and returns None, so just use the assignment operator to save the DataFrame in a variable before calling it:
from pyspark.sql.functions import desc

df = crash_orig.groupBy('Date').count().sort(desc('count'))
df.show()
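You can then join it with your other dataset like any other DataFrame (a sketch; other_df and the 'Date' join key are assumptions based on the question):
result = df.join(other_df, on='Date', how='left')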
I need to extract dataframes from JSON data stored in every row of the initial dataframe and concat them all together. Currently I do this by iterating, and it takes ages.
The input data is a dataframe containing JSON dictionaries:
print(json_table)
       json_responce     timestamp            request
27487  {'explore_tabs..  2019-07-02 02:05:25  Lisboa, Portugal
27488  {'explore_tabs..  2019-07-02 02:05:27  Ribeira, Portugal
The json_responce field is unwrapped to a dataframe:
from pandas.io.json import json_normalize
from ast import literal_eval
json = literal_eval(json_table.loc[0,'json_responce'])
df_normalized = json_normalize(json['explore_tabs'][0]['sections'][0]['listings'])
which gives a nice unwrapped dataframe for each row of the initial df
Having 27000 rows of JSON in the df, I iterate over the initial df, which creates a new df at every step and concats it to final_df, in order to pull all the data together:
def unwrap_json_and_concat(json_table):
    final_df = pd.DataFrame()
    for i in json_table.index:
        row = literal_eval(json_table.loc[i,'json_responce'])
        df = json_normalize(row['explore_tabs'][0]['sections'][0]['listings'])
        final_df = pd.concat([final_df,df])
    return final_df
As expected, that takes ages to iterate over, with significant slowing towards the end of the calculation due to the increasing size of final_df.
I know how to write functions for apply, but I believe it won't give much of a performance gain either, since a new dataframe is still being created for every row.
How to vectorize this calculation?
Thank you!
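One way to speed this up (a sketch, not an answer from the original thread) is to build the per-row frames in a list and concatenate a single time at the end, which avoids re-copying final_df on every iteration:
import pandas as pd
from ast import literal_eval
from pandas.io.json import json_normalize

def unwrap_json_and_concat(json_table):
    # parse and normalize every row first, then concatenate once
    frames = [
        json_normalize(literal_eval(raw)['explore_tabs'][0]['sections'][0]['listings'])
        for raw in json_table['json_responce']
    ]
    return pd.concat(frames, ignore_index=True)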
I need to add an index column to a dataframe with three very simple constraints:
start from 0
be sequential
be deterministic
I'm sure I'm missing something obvious, because the examples I'm finding look very convoluted for such a simple task, or use non-sequential, non-deterministic, monotonically increasing IDs. I don't want to zip with index and then have to separate the previously separated columns that are now in a single column, because my dataframes are in the terabytes and it just seems unnecessary. I don't need to partition by anything or order by anything, and the examples I'm finding do this (using window functions and row_number). All I need is a simple sequence of integers from 0 to df.count. What am I missing here?
What I mean is: how can I add a column with an ordered, monotonically increasing by 1 sequence 0, 1, 2, ..., df.count - 1? (from comments)
You can use row_number() here, but for that you'd need to specify an orderBy(). Since you don't have an ordering column, just order by monotonically_increasing_id().
from pyspark.sql.functions import row_number, monotonically_increasing_id
from pyspark.sql import Window
df = df.withColumn(
    "index",
    row_number().over(Window.orderBy(monotonically_increasing_id())) - 1
)
Also, row_number() starts at 1, which is why 1 is subtracted above so the index starts from 0. The last value will be df.count() - 1.
I don't want to zip with index and then have to separate the previously separated columns that are now in a single column
You can use zipWithIndex if you follow it with a call to map, to avoid having all of the separated columns turn into a single column:
cols = df.columns
df = df.rdd.zipWithIndex().map(lambda row: (row[1],) + tuple(row[0])).toDF(["index"] + cols)
Not sure about the performance but here is a trick.
Note: toPandas will collect all the data to the driver.
from pyspark.sql import SparkSession

# speed up toPandas using Arrow
spark = SparkSession.builder.appName('seq-no') \
    .config("spark.sql.execution.arrow.pyspark.enabled", "true") \
    .config("spark.sql.execution.arrow.enabled", "true") \
    .getOrCreate()

df = spark.createDataFrame([
    ('id1', "a"),
    ('id2', "b"),
    ('id2', "c"),
], ["ID", "Text"])

df1 = spark.createDataFrame(df.toPandas().reset_index()).withColumnRenamed("index", "seq_no")
df1.show()
+------+---+----+
|seq_no| ID|Text|
+------+---+----+
| 0|id1| a|
| 1|id2| b|
| 2|id2| c|
+------+---+----+
I am slicing a DataFrame from a large DataFrame, and the daughter df has only one row. Does a daughter df with a single row have the same attributes as the parent df?
import numpy as np
import pandas as pd
dates = pd.date_range('20130101',periods=6)
df = pd.DataFrame(np.random.randn(6,2),index=dates,columns=['col1','col2'])
df1=df.iloc[1]
type(df1)
>> pandas.core.series.Series
df1.columns
>>'Series' object has no attribute 'columns'
Is there a way I can use all the attributes of a pd.DataFrame on a pd.Series?
Possibly what you are looking for is a dataframe with one row:
>>> pd.DataFrame(df1).T # T -> transpose
col1 col2
2013-01-02 -0.428913 1.265936
What happens when you do df.iloc[1] is that pandas converts the row to a Series, which is one-dimensional, and the columns become the index. You can still do df1['col1'], but you can't do df1.columns, because a Series is basically a single column, and the old columns are now the new index.
As a result, you can retrieve the former columns like this:
>>> df1.index.tolist()
['col1', 'col2']
This used to confuse me quite a bit. I also expected df.iloc[1] to be a dataframe with one row, but it has always been the default behavior of pandas to automatically convert any one-dimensional dataframe slice (whether row or column) to a Series. It's pretty natural for a column, but less so for a row (since the columns become the index); still, it really is not a problem once you understand what is happening.
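If you want the slice to stay a one-row DataFrame in the first place, another option (a standard pandas idiom, not mentioned above) is to index with a list instead of a scalar:
>>> df.iloc[[1]]  # list indexer keeps the result two-dimensional
                col1      col2
2013-01-02 -0.428913  1.265936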