I have a CSV file which I load into IBM Data Science Experience as a "SparkSession DataFrame". All of the content in the CSV file (other than the headers) is integers.
The dataframe works as expected through certain machine learning models, but when I try to create a Linear Regression classification I get this error:
TypeError: Cannot cast array data from dtype('float64') to dtype('U32') according to the rule 'safe'
I believe this means that the data is no longer an integer and is being treated as a float.
How can I resolve this? Is there anything that can be done when importing the file to make sure it stays an integer? See the example below, where I tried to add a second option for the format.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
df = spark.read\
    .format('org.apache.spark.sql.execution.datasources.csv.CSVFileFormat')\
    .option('header', 'true')\
    .option('format', 'int32')\
    .load(bmos.url('name', 'name.csv'))
df.take(5)
@charles-gomes is correct. Here's a complete example where my file tinyinttest.csv is in an object storage container called TestingSandbox.
Contents of tinyinttest.csv is:
name,val
a,1
b,2
code:
from pyspark.sql import SparkSession
import ibmos2spark

credentials = {
    'auth_url': 'https://identity.open.softlayer.com',
    'project_id': 'xxx',
    'region': 'xxx',
    'user_id': 'xxx',
    'username': 'xxx',
    'password': 'xxx'
}

configuration_name = 'xxx'
bmos = ibmos2spark.bluemix(sc, credentials, configuration_name)

spark = SparkSession.builder.getOrCreate()
df = spark.read\
    .format('org.apache.spark.sql.execution.datasources.csv.CSVFileFormat')\
    .option('header', 'true')\
    .option('inferSchema', 'true')\
    .load(bmos.url('TestingSandbox', 'tinyinttest.csv'))
df.schema
output:
StructType(List(StructField(name,StringType,true),StructField(val,IntegerType,true)))
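If you would rather not rely on inferSchema, a minimal sketch of supplying an explicit schema on read (assuming the same name/val columns as in tinyinttest.csv) could look like this:
from pyspark.sql.types import StructType, StructField, StringType, IntegerType

# Explicit schema matching tinyinttest.csv: name as string, val as integer
schema = StructType([
    StructField('name', StringType(), True),
    StructField('val', IntegerType(), True)
])

df = spark.read\
    .format('org.apache.spark.sql.execution.datasources.csv.CSVFileFormat')\
    .option('header', 'true')\
    .schema(schema)\
    .load(bmos.url('TestingSandbox', 'tinyinttest.csv'))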
Where am I falling short?
jsonStrings= '{"Zipcode":704,"ZipCodeType":"STANDARD","City":"PARC PARQUE","State":"PR"}'
jsonRDD = spark.sparkContext.parallelize([jsonStrings])
df = spark.read.option('multiline', "true").json(jsonRDD)
print(df.show())
Error:
raise converted from None pyspark.sql.utils.IllegalArgumentException: java.net.URISyntaxException: Relative path in absolute URI: {"Zipcode":704,%22ZipCodeType%22:%22STANDARD%22,%22City%22:%22PARC%20PARQUE%22,%22State%22:%22PR%22%7D
The json() call here, spark.read.option('multiline', "true").json(jsonRDD), expects a path to the data, not an RDD. The documentation is here.
If you want to create a df from a JSON string, you can do it like this:
from pyspark.sql import Row

df = spark.createDataFrame(
    spark.sparkContext.parallelize([Row(json=jsonStrings)])
)
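If what you actually want is the JSON fields as individual columns rather than a single json string column, a minimal sketch (assuming the Zipcode/ZipCodeType/City/State fields shown above) is to parse that column with from_json and a schema:
from pyspark.sql import Row
from pyspark.sql.functions import from_json, col
from pyspark.sql.types import StructType, StructField, StringType, IntegerType

# Schema for the fields present in jsonStrings
schema = StructType([
    StructField('Zipcode', IntegerType(), True),
    StructField('ZipCodeType', StringType(), True),
    StructField('City', StringType(), True),
    StructField('State', StringType(), True)
])

raw = spark.createDataFrame(
    spark.sparkContext.parallelize([Row(json=jsonStrings)])
)
# Parse the json column into a struct, then flatten it into top-level columns
parsed = raw.select(from_json(col('json'), schema).alias('data')).select('data.*')
parsed.show()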
I am reading from a BigQuery table to generate a payload to upload to the FB Conversions API.
cols=["payload","client_user_agent","event_source_url"]
I am copying the column values directly from the BigQuery table because I am unable to print the full output of the dataframe in the notebook.
payload="{"pageDetail":{"pageName":"Confirmation","pageContentType":"cart","pageSiteSection":"cart","breadcrumbs":[{"title":"Home","url":"/en/home.html"},{"title":"Cart","url":"/cart"},{"title":"Confirmation","url":"/order-confirmation="}],"pageCategory":"Home","pageCategory1":"Cart","pageCategory2":"Confirmation","proBtbGlobalHeader":false},"orderDetails":{"hceid":"3b94a","orderConfirmed":true,"orderDate":"2021-01-15","orderId":"0123","unique":2,"pricingSummary":{"total":54.01},"items":[{"productId":"0456","quantity":1,"shippingAddress":{"postalCode":"V4N 3X3"},"promotion":{"voucherCode":null},"clickToInstall":{"eligible":false}},{"productId":"0789","quantity":1,"fulfillment":{"fulfillmentCost":""},"shippingAddress":{"postalCode":"A4N 3Y3"},"promotion":{"voucherCode":null},"clickToInstall":{"eligible":false}}],"billingAddress":{"postalCode":"M$X1A7"}},"event":{"type":"Load","page":"Confirmation","timestamp":1610706772998,"language":"English","url":"https://www"}}"
client_user_agent="Mozilla/5.0"
event_source_url= "https://www.def.com="
I need the values for email = ["orderDetails"]["hceid"] and value = ["orderDetails"]["pricingSummary"]["total"].
Initially, all the payload I wanted was in a single column, and I was able to achieve the uploads with the following code:
import time
from facebook_business.adobjects.serverside.event import Event
from facebook_business.adobjects.serverside.event_request import EventRequest
from facebook_business.adobjects.serverside.user_data import UserData
from facebook_business.adobjects.serverside.custom_data import CustomData
from facebook_business.api import FacebookAdsApi
import pandas as pd
import json
FacebookAdsApi.init(access_token=access_token)
query='''SELECT JSON_EXTRACT(payload, '$') AS payload FROM `project.dataset.events` WHERE eventType = 'Page Load' AND pagename = "Confirmation" limit 1'''
df = pd.read_gbq(query, project_id= project, dialect='standard')
payload = df.to_dict(orient="records")
for i in payload:
    #print(type(i["payload"]))
    k = json.loads(i["payload"])
    email = k["orderDetails"]["hcemuid"]
    user_data = UserData(email)
    value = k["orderDetails"]["pricingSummary"]["total"]
    order_id = k["orderDetails"]["orderId"]
    custom_data = CustomData(
        currency='CAD',
        value=value)
    event = Event(
        event_name='Purchase',
        event_time=int(time.time()),
        user_data=user_data,
        custom_data=custom_data,
        event_id=order_id,
        data_processing_options=[])
    events = [event]
    #print(events)
    event_request = EventRequest(
        events=events,
        test_event_code='TEST8609',
        pixel_id=pixel_id)
    #print(event_request)
    a = event_request.execute()
    print(a)
Now there are additional values: client_user_agent, which needs to be part of the user data, and event_source_url, which needs to be part of the event in the code above. They are present as two different columns in the BigQuery table.
I have tried code similar to the above for multiple columns, but I am receiving a
TypeError: Object of type Series is not JSON serializable
So I tried concatenating the columns and then creating a JSON-serializable object, but I am not able to do the upload.
Below is where I am stuck; I am not sure how to proceed further. Any inputs are appreciated.
import time
from facebook_business.adobjects.serverside.event import Event
from facebook_business.adobjects.serverside.event_request import EventRequest
from facebook_business.adobjects.serverside.user_data import UserData
from facebook_business.adobjects.serverside.custom_data import CustomData
from facebook_business.api import FacebookAdsApi
import pandas as pd
import json
FacebookAdsApi.init(access_token=access_token)
query='''SELECT payload AS payload,location.userAgent as client_user_agent,location.referrer as event_source_url FROM `project.Dataset.events` WHERE eventType = 'Page Load' AND pagename = "Confirmation" limit 1'''
df = pd.read_gbq(query, project_id= project, dialect='standard')
df.reset_index(drop=True, inplace=True)
payload = df.to_dict(orient="records")
print(payload)
## cols = ['payload', 'client_user_agent', 'event_source_url']
## df['combined'] = df[cols].apply(lambda row: ','.join(row.values.astype(str)), axis=1)
## del df["payload"]
## del df["client"]
## del df["source"]
## payload = df.to_dict(orient="records")
# tried concatenating all columns in the dataframe but was not able to create a valid JSON object for upload
columns = ['payload', 'client_user_agent', 'event_source_url']
df['payload'] = df['payload'].str.replace(r'}"$', '')
payload = df[columns].to_dict(orient='records')
print(payload)
## df = df.drop(columns=columns)
## pd.options.display.max_rows = 4000
# #print(payload)
# for i in payload:
# print(i["payload"])
# k = json.loads(i["payload"])
# email = k["orderDetails"]["hcemuid"]
# print(email)
I am following the instructions from this page: https://developers.facebook.com/docs/marketing-api/conversions-api
I used the BigQuery JSON_EXTRACT_SCALAR function to extract the data from the nested column instead of doing it in pandas, which is a better solution for my scenario.
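For reference, a minimal sketch of that approach; the JSON paths ($.orderDetails.hcemuid, $.orderDetails.pricingSummary.total, $.orderDetails.orderId) and the location.userAgent / location.referrer columns are assumptions taken from the payload and query posted above:
import time
import pandas as pd
from facebook_business.adobjects.serverside.event import Event
from facebook_business.adobjects.serverside.event_request import EventRequest
from facebook_business.adobjects.serverside.user_data import UserData
from facebook_business.adobjects.serverside.custom_data import CustomData
from facebook_business.api import FacebookAdsApi

FacebookAdsApi.init(access_token=access_token)

# Let BigQuery pull the scalar values out of the nested payload so that
# every column in the dataframe is already a plain string.
query = '''
SELECT
  JSON_EXTRACT_SCALAR(payload, '$.orderDetails.hcemuid') AS email,
  JSON_EXTRACT_SCALAR(payload, '$.orderDetails.pricingSummary.total') AS value,
  JSON_EXTRACT_SCALAR(payload, '$.orderDetails.orderId') AS order_id,
  location.userAgent AS client_user_agent,
  location.referrer AS event_source_url
FROM `project.Dataset.events`
WHERE eventType = 'Page Load' AND pagename = "Confirmation"
LIMIT 1
'''
df = pd.read_gbq(query, project_id=project, dialect='standard')

events = []
for row in df.to_dict(orient='records'):
    user_data = UserData(
        email=row['email'],
        client_user_agent=row['client_user_agent'])
    custom_data = CustomData(
        currency='CAD',
        value=float(row['value']))  # JSON_EXTRACT_SCALAR returns a string
    events.append(Event(
        event_name='Purchase',
        event_time=int(time.time()),
        user_data=user_data,
        custom_data=custom_data,
        event_id=row['order_id'],
        event_source_url=row['event_source_url'],
        data_processing_options=[]))

event_request = EventRequest(
    events=events,
    test_event_code='TEST8609',
    pixel_id=pixel_id)
print(event_request.execute())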
I am trying to save a dataframe which was first imported into pandas from PostgreSQL as dfraw, then manipulated to create another dataframe df, and finally saved back into the same PostgreSQL database using SQLAlchemy. But when I try to save it back, it gives the error 'DataFrame' objects are mutable, thus they cannot be hashed.
Please find the code below:
import psycopg2
import pandas as pd
import numpy as np
import sqlalchemy
from sqlalchemy import create_engine
# connect the database to python
# Update connection string information
host = "something.something.azure.com"
dbname = "abcd"
user = "abcd"
password = "abcd"
sslmode = "require"
schema = 'xyz'
# Construct connection string
conn_string = "host={0} user={1} dbname={2} password={3} sslmode={4}".format(host, user, dbname, password, sslmode)
conn = psycopg2.connect(conn_string)
print("Connection established")
cursor = conn.cursor()
# Fetch all rows from table
cursor.execute("SELECT * FROM xyz.abc;")
rows = cursor.fetchall()
# Convert the tuples in dataframes
dfraw = pd.DataFrame(rows, columns =["ID","Timestamp","K","S","H 18","H 19","H 20","H 21","H 22","H 23","H 24","H 2zzz","H zzz4","H zzzzzz","H zzz6","H zzz7","H zzz8","H zzz9","H 60","H zzz0","H zzz2"])
dfraw[["S","H 18","H 19","H 20","H 21","H 22","H 23","H 24","H 2zzz","H zzz4","H zzzzzz","H zzz6","H zzz7","H zzz8","H zzz9","H 60","H zzz0","H zzz2"]] = dfraw[["S","H 18","H 19","H 20","H 21","H 22","H 23","H 24","H 2zzz","H zzz4","H zzzzzz","H zzz6","H zzz7","H zzz8","H zzz9","H 60","H zzz0","H zzz2"]].apply(pd.to_numeric)
dfraw[["Timestamp","K"]]=dfraw[["Timestamp","K"]].apply(pd.to_datetime)
# Creating temp files
temp1 = dfraw
dfraw = temp1
# creating some functions for data manipulation and imputations
def remZero(df, dropCol):
    for k in df.drop(dropCol, axis=1):
        if all(df[k] == 0):
            continue
        if any(df[k] == 0):
            print(k)
            df[k] = df[k].replace(to_replace=0, method='ffill')
    return df
# Drop Columns function
dropCol = ['Timestamp','K','ID','H','C','S']
dropCol2 = ['Timestamp','K','ID','Shift']
df = remZero(dfraw,dropCol)
from sqlalchemy import create_engine
engine = create_engine('postgresql://abcd:abcd@something.something.azure.com:5432/abcd')
df.to_sql(name = df,
          con=engine,
          index = False,
          if_exists= 'replace'
          )
Error message:
TypeError: 'DataFrame' objects are mutable, thus they cannot be hashed
I found a basic error in the code: I just missed putting quotation marks around the dataframe name to be published. Basic hygiene was missed.
df.to_sql(name = "df",
con=engine,
index = False,
if_exists= 'replace'
)
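As a side note, the schema = 'xyz' variable defined earlier is never used; if the table should land in that schema rather than the default one, to_sql accepts a schema argument. A minimal sketch:
# Write df into the xyz schema instead of the default (public) schema
df.to_sql(name="df",
          con=engine,
          schema=schema,   # schema = 'xyz' defined above
          index=False,
          if_exists='replace')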
What should the dictionary form_data be?
Desired output from the Python code data = parse.urlencode(form_data).encode():
"entry.330812148_sentinel=&entry.330812148=Test1&entry.330812148=Test2&entry.330812148=Test3&entry.330812148=Test4"
I tried various dictionary structures, including ones with None, [], and a dictionary within a dictionary, but I am unable to get this output.
form_data = {'entry.330812148_sentinel':None,
'entry.330812148':'Test1',
'entry.330812148':'Test2',
'entry.330812148':'Test3',
'entry.330812148':'Test4'}
from urllib import request, parse
data = parse.urlencode(form_data).encode()
print("Printing Parsed Form Data........")
"entry.330812148_sentinel=&entry.330812148=Test1&entry.330812148=Test2&entry.330812148=Test3&entry.330812148=Test4"
You can use parse_qs from urllib.parse to get the Python data structure back:
>>> import urllib.parse
>>> s = 'entry.330812148_sentinel=&entry.330812148=Test1&entry.330812148=Test2&entry.330812148=Test3&entry.330812148=Test4'
>>> d1 = urllib.parse.parse_qs(s)
>>> d1
{'entry.330812148': ['Test1', 'Test2', 'Test3', 'Test4']}
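To go the other direction and reproduce the desired query string with urlencode, one option (a sketch, assuming the sentinel key should stay in with an empty value) is a dict of lists together with doseq=True, which repeats each key once per list element:
from urllib.parse import urlencode

form_data = {'entry.330812148_sentinel': [''],
             'entry.330812148': ['Test1', 'Test2', 'Test3', 'Test4']}

# doseq=True expands each list into repeated key=value pairs
data = urlencode(form_data, doseq=True).encode()
print(data.decode())
# entry.330812148_sentinel=&entry.330812148=Test1&entry.330812148=Test2&entry.330812148=Test3&entry.330812148=Test4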
I have code that converts PySpark streaming data to a dataframe. I need to store this dataframe in HBase. Please help me write the additional code.
import sys

from pyspark import SparkContext
from pyspark.streaming import StreamingContext
from pyspark.sql import Row, SparkSession


def getSparkSessionInstance(sparkConf):
    if ('sparkSessionSingletonInstance' not in globals()):
        globals()['sparkSessionSingletonInstance'] = SparkSession\
            .builder\
            .config(conf=sparkConf)\
            .getOrCreate()
    return globals()['sparkSessionSingletonInstance']


if __name__ == "__main__":
    if len(sys.argv) != 3:
        print("Usage: sql_network_wordcount.py <hostname> <port> ",
              file=sys.stderr)
        exit(-1)
    host, port = sys.argv[1:]

    sc = SparkContext(appName="PythonSqlNetworkWordCount")
    ssc = StreamingContext(sc, 1)

    lines = ssc.socketTextStream(host, int(port))

    def process(time, rdd):
        print("========= %s =========" % str(time))
        try:
            words = rdd.map(lambda line: line.split(" ")).collect()
            spark = getSparkSessionInstance(rdd.context.getConf())
            linesDataFrame = spark.createDataFrame(words, schema=["lat", "lon"])
            linesDataFrame.show()
        except:
            pass

    lines.foreachRDD(process)

    ssc.start()
    ssc.awaitTermination()
You can use the Spark-HBase connector to access HBase from Spark. It provides an API for both the low-level RDD and DataFrames.
The connector requires you to define a schema for the HBase table. Below is an example of a schema defined for an HBase table named table1, with row key key and a number of columns (col1-col8). Note that the rowkey also has to be defined in detail as a column (col0), which has a specific column family, cf (rowkey).
catalog = """{
    "table":{"namespace":"default", "name":"table1"},
    "rowkey":"key",
    "columns":{
        "col0":{"cf":"rowkey", "col":"key", "type":"string"},
        "col1":{"cf":"cf1", "col":"col1", "type":"boolean"},
        "col2":{"cf":"cf1", "col":"col2", "type":"double"},
        "col3":{"cf":"cf1", "col":"col3", "type":"float"},
        "col4":{"cf":"cf1", "col":"col4", "type":"int"},
        "col5":{"cf":"cf2", "col":"col5", "type":"bigint"},
        "col6":{"cf":"cf2", "col":"col6", "type":"smallint"},
        "col7":{"cf":"cf2", "col":"col7", "type":"string"},
        "col8":{"cf":"cf2", "col":"col8", "type":"tinyint"}
    }
}"""
Once the catalog is defined according to the schema of your dataframe, you can write the dataframe to HBase using:
df.write\
    .options(catalog=catalog)\
    .format("org.apache.spark.sql.execution.datasources.hbase")\
    .save()
To read the data from HBase:
df = spark.read\
    .format("org.apache.spark.sql.execution.datasources.hbase")\
    .options(catalog=catalog)\
    .load()
You need to include the Spark-HBase connector package as shown below when submitting the Spark application.
pyspark --packages com.hortonworks:shc-core:1.1.1-2.1-s_2.11 --repositories http://repo.hortonworks.com/content/groups/public/
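To tie this back to the streaming code above, a minimal sketch would be to perform the write inside process() right after the dataframe is created; this assumes a catalog whose columns match the lat/lon dataframe and uses the option names from the SHC examples:
# Inside process(), after linesDataFrame is created (sketch; assumes a
# catalog with "lat"/"lon" columns defined like the example above)
linesDataFrame.write\
    .options(catalog=catalog, newtable="5")\
    .format("org.apache.spark.sql.execution.datasources.hbase")\
    .save()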