How to add delimiters to a csv file - pandas

I have a csv file with no delimiters. Is it possible to add delimiters at certain positions in PySpark? My file looks like:
USDINRFUTCUR23Feb201700000000FF00000000000001990067895000000000NNN*12
USDINRFUTCUR24Feb201700000000FF00000000000001990067895000000000NNN*12
USDINRFUTCUR25Feb201700000000FF00000000000001990067895000000000NNN*12
and I want delimiters at the 3rd, 6th and 12th positions.

For fixed-width files there is pandas.read_fwf(). Since the splits fall after the 3rd, 6th and 12th characters, the field widths are 3, 3, 6 and then the remainder of the line:
import pandas as pd

# field widths: 3, 3, 6, then an oversized final width to capture the rest of the line
widths = [3, 3, 6, 100]
df = pd.read_fwf("fixed_width.txt", widths=widths, header=None,
                 names=["Currency1", "Currency2", "Type", "Time"])
df
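If the goal is simply to produce a delimited file, the parsed frame can then be written back out with the separator of your choice (the output filename here is just an example):
# write the now-delimited data back to disk as a regular comma-separated file
df.to_csv("delimited.csv", sep=",", index=False)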

With a distributed PySpark solution there is no equivalent way to add delimiters as you read (as there is with pandas). A scalable way to solve this is to read the data as-is into a single column, then use the code below (built on PySpark functions) to create your columns.
Creating a sample dataframe:
from pyspark.sql import functions as F

data = [['USDINRFUTCUR23Feb201700000000FF00000000000001990067895000000000NNN*12'],
        ['USDINRFUTCUR24Feb201700000000FF00000000000001990067895000000000NNN*12'],
        ['USDINRFUTCUR25Feb201700000000FF00000000000001990067895000000000NNN*12']]
df = spark.createDataFrame(data, ['col1'])
df.show(truncate=False)
+---------------------------------------------------------------------+
|col1 |
+---------------------------------------------------------------------+
|USDINRFUTCUR23Feb201700000000FF00000000000001990067895000000000NNN*12|
|USDINRFUTCUR24Feb201700000000FF00000000000001990067895000000000NNN*12|
|USDINRFUTCUR25Feb201700000000FF00000000000001990067895000000000NNN*12|
+---------------------------------------------------------------------+
Use substr with withColumn to create the new columns, then drop the original one. You could also wrap this logic in a function (as sketched after the output below) so that you can reuse it and simplify your pipeline.
df.withColumn("Currency1", F.col("col1").substr(0,3))\
.withColumn("Currency2", F.col("col1").substr(4,3))\
.withColumn("Type", F.col("col1").substr(7,6))\
.withColumn("Time", F.expr("""substr(col1,13,length(col1))"""))\
.drop("col1").show(truncate=False)
#output
+---------+---------+------+---------------------------------------------------------+
|Currency1|Currency2|Type |Time |
+---------+---------+------+---------------------------------------------------------+
|USD |INR |FUTCUR|23Feb201700000000FF00000000000001990067895000000000NNN*12|
|USD |INR |FUTCUR|24Feb201700000000FF00000000000001990067895000000000NNN*12|
|USD |INR |FUTCUR|25Feb201700000000FF00000000000001990067895000000000NNN*12|
+---------+---------+------+---------------------------------------------------------+
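For example, a minimal sketch of such a reusable helper (the function name and the col_name argument are just illustrative):
from pyspark.sql import functions as F

def split_fixed_width(df, col_name="col1"):
    # split the single fixed-width column into named fields and drop the original column
    return (df
            .withColumn("Currency1", F.col(col_name).substr(1, 3))
            .withColumn("Currency2", F.col(col_name).substr(4, 3))
            .withColumn("Type", F.col(col_name).substr(7, 6))
            .withColumn("Time", F.expr(f"substr({col_name}, 13, length({col_name}))"))
            .drop(col_name))

split_fixed_width(df).show(truncate=False)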

Related

PySpark: Transform values of given column in the DataFrame

I am new to PySpark and Spark in general.
I would like to apply transformation on a given column in the DataFrame, essentially call a function for each value on that specific column.
I have my DataFrame df that looks like this:
df.show()
+-------+--------------------+
|version|                body|
+-------+--------------------+
|      1|9gIAAAASAQAEAAAAA...|
|      2|2gIAAAASAQAEAAAAA...|
|      3|3gIAAAASAQAEAAAAA...|
|      1|7gIAKAASAQAEAAAAA...|
+-------+--------------------+
I need to read the value of the body column for each row where the version is 1 and then decrypt it (I have my own logic/function which takes a string and returns a decrypted string). Finally, I need to write the decrypted values in CSV format to an S3 bucket.
def decrypt(encrypted_string: str):
    # code that returns the decrypted string
    ...
So, when I do the following, I get the corresponding filtered values to which I need to apply my decrypt function.
df.where(col('version') =='1')\
.select(col('body')).show()
+--------------------+
|                body|
+--------------------+
|9gIAAAASAQAEAAAAA...|
|7gIAKAASAQAEAAAAA...|
+--------------------+
However, I am not clear how to do that. I tried to use collect() but then it defeats the purpose of using Spark.
I also tried using .rdd.map as follows but that did not work.
df.where(col('version') =='1')\
.select(col('body'))\
.rdd.map(lambda x: decrypt).toDF().show()
OR
.rdd.map(decrypt).toDF().show()
Could someone please help with this?
Please try:
from pyspark.sql.functions import col, udf
from pyspark.sql.types import StringType

decrypt_udf = udf(decrypt, StringType())
df.where(col('version') == '1').withColumn('body', decrypt_udf('body'))
Got some clues from this post: Pyspark DataFrame UDF on Text Column.
Looks like I can simply get it with the following. I was doing it without a udf earlier, which is why it wasn't working.
dummy_function_udf = udf(decrypt, StringType())

df.where(col('version') == '1') \
  .select(col('body')) \
  .withColumn('decryptedBody', dummy_function_udf('body')) \
  .show()
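The question also asks about writing the decrypted values out as CSV to S3. A minimal sketch, assuming the cluster already has S3 credentials configured (the bucket path is just a placeholder):
df.where(col('version') == '1') \
  .withColumn('decryptedBody', dummy_function_udf('body')) \
  .select('decryptedBody') \
  .write.mode('overwrite').csv('s3a://my-bucket/decrypted/', header=True)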

How to add Extra column with current date in Spark dataframe

I am trying to add one column to my existing PySpark DataFrame using the withColumn method. I want to insert the current date in this column. My source does not have any date column, so I am adding this current-date column to my dataframe and saving the dataframe to my table, so that I can later use this column for tracking purposes.
I am using the code below:
df2 = df.withColumn("Curr_date", datetime.now().strftime('%Y-%m-%d'))
Here df is my existing DataFrame, and I want to save df2 as a table with the Curr_date column.
But withColumn expects an existing column or a lit() wrapper rather than the plain string returned by datetime.now().strftime('%Y-%m-%d').
Could someone please guide me on how to add this date column to my dataframe?
Use either lit or current_date:
from datetime import datetime
from pyspark.sql import functions as F

df2 = df.withColumn("Curr_date", F.lit(datetime.now().strftime("%Y-%m-%d")))
# OR
df2 = df.withColumn("Curr_date", F.current_date())
current_timestamp() is good, but it is evaluated once at the start of query evaluation, so every row in the same query gets the same value. If you prefer the timestamp at which each row is actually processed, you may use the method below:
from pyspark.sql.functions import expr
df.withColumn('current', expr("reflect('java.time.LocalDateTime', 'now')"))
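A small sketch contrasting the two (assumes an existing dataframe df; the column names are illustrative):
from pyspark.sql import functions as F

# current_timestamp(): one value for the whole query, identical on every row
# reflect(...): invoked as each row is processed, so values can differ per row
df.withColumn("query_time", F.current_timestamp()) \
  .withColumn("row_time", F.expr("reflect('java.time.LocalDateTime', 'now')")) \
  .show(truncate=False)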
There is a spark function current_timestamp().
from pyspark.sql.functions import *
df.withColumn('current', date_format(current_timestamp(), 'yyyy-MM-dd')).show()
+----+----------+
|test|   current|
+----+----------+
|test|2020-09-09|
+----+----------+

Performing different computations conditioned on a column value in a spark dataframe

I have a pyspark dataframe with 2 columns, A and B. I need rows of B to be processed differently, based on values of the A column. In plain pandas I might do this:
import pandas as pd
funcDict = {}
funcDict['f1'] = (lambda x:x+1000)
funcDict['f2'] = (lambda x:x*x)
df = pd.DataFrame([['a',1],['b',2],['b',3],['a',4]], columns=['A','B'])
df['newCol'] = df.apply(
    lambda x: funcDict['f1'](x['B']) if x['A'] == 'a' else funcDict['f2'](x['B']),
    axis=1)
The easy ways I can think of to do this in (py)spark are:
Use files
read the data into a dataframe
partition by column A and write to separate files (write.partitionBy)
read in each file and then process them separately
or else
use expr
read the data into a dataframe
write an unwieldy expr (from a readability/maintenance perspective) to conditionally do something different based on the value of the column (a sketch of this option follows this list)
this will not look anywhere near as "clean" as the pandas code above
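For reference, a minimal sketch of the expr option described above, reusing the column names from the pandas example (A, B, newCol):
from pyspark.sql import functions as F

# CASE WHEN expression mirroring the pandas apply() above:
# add 1000 to B when A == 'a', otherwise square B
df = df.withColumn(
    "newCol",
    F.expr("CASE WHEN A = 'a' THEN B + 1000 ELSE B * B END"))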
Is there another, more appropriate way to handle this requirement? From an efficiency perspective, I expect the first approach to be cleaner but to have a longer run time due to the partition-write-read cycle; the second approach is not as good from a code perspective, and it is harder to extend and maintain.
More fundamentally, would you choose to use something completely different (e.g. message queues) instead (relative latency difference notwithstanding)?
EDIT 1
Based on my limited knowledge of pyspark, the solution proposed by user pissall (https://stackoverflow.com/users/8805315/pissall) works as long as the processing isn't very complex. When it is more complex, I don't know how to do it without resorting to UDFs, which come with their own disadvantages. Consider the simple example below.
# create a 2-column data frame
# where I wish to extract the city
# in column B differently based on
# the type given in column A
# This requires taking a different
# substring (prefix or suffix) from column B
df = sparkSession.createDataFrame([
    (1, "NewYork_NY"),
    (2, "FL_Miami"),
    (1, "LA_CA"),
    (1, "Chicago_IL"),
    (2, "PA_Kutztown")
], ["A", "B"])
# create UDFs to get left and right substrings
# I do not know how to avoid creating UDFs
# for this type of processing
from pyspark.sql import functions as F
from pyspark.sql.functions import udf
from pyspark.sql.types import StringType

getCityLeft = udf(lambda x: x[0:-3], StringType())
getCityRight = udf(lambda x: x[3:], StringType())

# apply UDFs
df = df.withColumn("city", F.when(F.col("A") == 1, getCityLeft(F.col("B"))) \
                            .otherwise(getCityRight(F.col("B"))))
Is there a way to do this in a simpler manner without resorting to UDFs? If I use expr, I can do this, but as I mentioned earlier, it doesn't seem elegant.
What about using when?
import pyspark.sql.functions as F
df = df.withColumn("transformed_B", F.when(F.col("A") == "a", F.col("B") + 1000).otherwise(F.col("B") * F.col("B")))
EDIT after more clarity on the question:
You can use split on _ and take the first or the second part of it based on your condition.
Is this the expected output?
df.withColumn("city", F.when(F.col("A") == 1, F.split("B", "_")[0]).otherwise(F.split("B", "_")[1])).show()
+---+-----------+--------+
|  A|          B|    city|
+---+-----------+--------+
|  1| NewYork_NY| NewYork|
|  2|   FL_Miami|   Miami|
|  1|      LA_CA|      LA|
|  1| Chicago_IL| Chicago|
|  2|PA_Kutztown|Kutztown|
+---+-----------+--------+
UDF approach:
def sub_string(ref_col, city_col):
    # ref_col is the reference column (A) and city_col is the string we want to sub (B)
    if ref_col == 1:
        return city_col[0:-3]
    return city_col[3:]

sub_str_udf = F.udf(sub_string, StringType())

df = df.withColumn("city", sub_str_udf(F.col("A"), F.col("B")))
Also, please look into: remove last few characters in PySpark dataframe column

How to transform pyspark dataframe 1x9 to 3x3

I'm using a PySpark dataframe.
I have a df which is 1x9.
Example:
temp = spark.read.option("sep","\n").csv("temp.txt")
temp :
sam
11
newyork
john
13
boston
eric
22
texas
Without using the pandas library, how can I transform this into a 3x3 dataframe with columns name, age, city?
Like this:
name,age,city
sam,11,newyork
john,13,boston
I would read the file as an rdd to take advantage of zipWithIndex to add an index to your data.
rdd = sc.textFile("temp.txt")
We can now use truncating division to create an index with which to group records together. Use this new index as the key for the rdd. The corresponding values will be a tuple of the header, which can be computed using the modulus, and the actual value. (Note the index returned by zipWithIndex will be at the end of the record, which is why we use row[1] for the division/mod.)
Next use reduceByKey to add the value tuples together. This will give you a tuple of keys and values (in sequence). Use map to turn that into a Row (to keep column headers, etc).
Finally use toDF() to convert to a DataFrame. You can use select(header) to get the columns in the desired order.
from operator import add
from pyspark.sql import Row

header = ["name", "age", "city"]

df = rdd.zipWithIndex()\
    .map(lambda row: (row[1]//3, (header[row[1]%3], row[0])))\
    .reduceByKey(add)\
    .map(lambda row: Row(**dict(zip(row[1][::2], row[1][1::2]))))\
    .toDF()\
    .select(header)
df.show()
#+----+---+-------+
#|name|age|   city|
#+----+---+-------+
#| sam| 11|newyork|
#|eric| 22|  texas|
#|john| 13| boston|
#+----+---+-------+

Pyspark add sequential and deterministic index to dataframe

I need to add an index column to a dataframe with three very simple constraints:
start from 0
be sequential
be deterministic
I'm sure I'm missing something obvious, because the examples I'm finding look very convoluted for such a simple task, or use non-sequential, non-deterministic, monotonically increasing IDs. I don't want to zip with index and then have to separate the previously separate columns that are now in a single column, because my dataframes are in the terabytes and it just seems unnecessary. I don't need to partition by anything, nor order by anything, and the examples I'm finding do this (using window functions and row_number). All I need is a simple sequence of integers from 0 to df.count. What am I missing here?
1, 2, 3, 4, 5
What I mean is: how can I add a column with an ordered sequence that increases monotonically by 1, from 0 to df.count? (from comments)
You can use row_number() here, but for that you'd need to specify an orderBy(). Since you don't have an ordering column, just use monotonically_increasing_id().
from pyspark.sql.functions import row_number, monotonically_increasing_id
from pyspark.sql import Window

df = df.withColumn(
    "index",
    row_number().over(Window.orderBy(monotonically_increasing_id())) - 1
)
Also, row_number() starts at 1, so you'd have to subtract 1 to have it start from 0. The last value will be df.count - 1.
I don't want to zip with index and then have to separate the previously separated columns that are now in a single column
You can use zipWithIndex if you follow it with a call to map, to avoid having all of the separated columns turn into a single column:
cols = df.columns
df = df.rdd.zipWithIndex().map(lambda row: (row[1],) + tuple(row[0])).toDF(["index"] + cols)
Not sure about the performance, but here is a trick.
Note: toPandas will collect all the data to the driver.
from pyspark.sql import SparkSession

# speed up toPandas using arrow
spark = SparkSession.builder.appName('seq-no') \
    .config("spark.sql.execution.arrow.pyspark.enabled", "true") \
    .config("spark.sql.execution.arrow.enabled", "true") \
    .getOrCreate()

df = spark.createDataFrame([
    ('id1', "a"),
    ('id2', "b"),
    ('id2', "c"),
], ["ID", "Text"])

df1 = spark.createDataFrame(df.toPandas().reset_index()).withColumnRenamed("index", "seq_no")
df1.show()
+------+---+----+
|seq_no| ID|Text|
+------+---+----+
|     0|id1|   a|
|     1|id2|   b|
|     2|id2|   c|
+------+---+----+