I read a JSON file and registered a temporary table with the below schema (inferred from the JSON file by Spark's native schema inference).
babynames = spark.read.json('/path/to/json', multiLine=True)
babynames.registerTempTable("babynames")
Now I would like to select the columns
"sid", "id", "position", "created_at", "created_meta", "updated_at", "updated_meta", "meta", "year", "first_name", "county", "sex", "count"
using a Spark SQL SELECT statement.
Here is the data source: https://data.cityofnewyork.us/api/views/25th-nujf/rows.json?accessType=DOWNLOAD
Once you have the JSON file at a known location you can read the columns as shown below, but you need a good understanding of how the JSON elements are nested.
Using Spark SQL:
val df = spark.read.option("multiline",true).json("/path/to/json")
df.createOrReplaceTempView("TestTable")
val selectedColumnsDf = spark.sql("""SELECT meta.view.columns.id, meta.view.columns.position, meta.view.createdAt FROM TestTable""")
Using the DataFrame API it can be done as below:
val df = spark.read.option("multiline",true).json("/path/to/json")
val selectedColumnsDf = df.select("meta.view.columns.id","meta.view.columns.position","meta.view.createdAt")
I am selecting just three columns to give you an idea; you can add the remaining columns as per your requirement. Note that meta.view.columns is an array of structs, so a path like meta.view.columns.id resolves to an array of all the id values.
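Since the question uses PySpark, here is the same selection sketched in Python (my addition; same /path/to/json placeholder as above):
df = spark.read.json("/path/to/json", multiLine=True)
df.createOrReplaceTempView("TestTable")
# SQL route
selected_columns_df = spark.sql("""SELECT meta.view.columns.id, meta.view.columns.position, meta.view.createdAt FROM TestTable""")
# or the DataFrame API route
selected_columns_df = df.select("meta.view.columns.id", "meta.view.columns.position", "meta.view.createdAt")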
I am doing a basic transformation on my PySpark dataframe, but here I am using multiple .withColumn statements.
from pyspark.sql import functions as F

def trim_and_lower_col(col_name):
    return F.when(F.trim(col_name) == "", F.lit("unspecified")).otherwise(F.lower(F.trim(col_name)))

df = (
    source_df.withColumn("browser", trim_and_lower_col("browser"))
    .withColumn("browser_type", trim_and_lower_col("browser_type"))
    .withColumn("domains", trim_and_lower_col("domains"))
)
I read that chaining multiple withColumn calls isn't very efficient and that I should use df.select() instead.
I tried this:
cols_to_transform = [
    "browser",
    "browser_type",
    "domains"
]
df = source_df.select(
    [trim_and_lower_col(col).alias(col) for col in cols_to_transform] + source_df.columns
)
but it gives me a duplicate column error
What else can I try?
The duplicate column error occurs because you pass each transformed column twice in that list: once as your newly transformed column (through .alias) and once as the original column (by name, via source_df.columns). The following keeps a single select statement, preserves the column order, and avoids the duplication issue:
df = source_df.select(
    [trim_and_lower_col(col).alias(col) if col in cols_to_transform else col for col in source_df.columns]
)
Chaining many .withColumn calls does pose a problem: the unresolved query plan can get quite large and cause a StackOverflowError on the Spark driver during query-plan optimisation. One good explanation of this problem is shared here: https://medium.com/@manuzhang/the-hidden-cost-of-spark-withcolumn-8ffea517c015
You are naming your new columns with .alias(col).
That means they have the same names as the columns you used to create them.
During creation (using .withColumn) this does not pose a problem. But as soon as you try to select, Spark does not know which column to pick.
You could fix it, for example, by giving the new columns a suffix:
cols_to_transform = [
    "browser",
    "browser_type",
    "domains"
]
df = source_df.select(
    [trim_and_lower_col(col).alias(f"{col}_new") for col in cols_to_transform] + source_df.columns
)
Another solution, which does pollute the DAG though, would be:
cols_to_transform = [
    "browser",
    "browser_type",
    "domains"
]
for col in cols_to_transform:
    source_df = source_df.withColumn(col, trim_and_lower_col(col))
If you only have these few withColumn calls, keep using them. That version is still more readable, and thus more maintainable and self-explanatory. The Spark documentation does warn about withColumn: it introduces a projection internally, so calling it many times (think hundreds, for instance in a loop) can generate large plans and even a StackOverflowException. A big select can also make your code more error prone, since it is more complex to read.
Now, if you have many columns, I would define
the list of columns to transform,
the list of columns to keep,
and then do the select:
cols_to_transform = [
    "browser",
    "browser_type",
    "domains"
]
cols_to_keep = [c for c in df.columns if c not in cols_to_transform]
cols_transformed = [trim_and_lower_col(c).alias(c) for c in cols_to_transform]
source_df.select(*cols_to_keep, *cols_transformed)
This gives you the same set of columns as the withColumn version in a single select, though the transformed columns end up last rather than keeping their original positions; if you need the exact original order, build the select list from source_df.columns as in the earlier answer.
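As an aside (my addition, not from the original answers): if you are on Spark 3.3 or later, DataFrame.withColumns accepts a dict of column names to expressions and applies them all in a single projection, which avoids the plan growth of repeated withColumn calls. A minimal sketch, reusing trim_and_lower_col and cols_to_transform from above:
df = source_df.withColumns({c: trim_and_lower_col(c) for c in cols_to_transform})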
I have a DataFrame and I want to dynamically pass column names through widgets in a select statement in my Databricks notebook. How can I do it?
I am using the below code
df1 = spark.sql("select * from tableraw")
where df1 has columns "tablename" and "layer"
df = df1.select("tablename", "layer")
Now, our requirement is to use the values of the widgets to select those columns, something like:
df = df1.select(dbutils.widgets.get("tablename"), dbutils.widgets.get("datalayer"))
Python / Scala
Create Widgets
%python
dbutils.widgets.text(name = "pythonTextWidget", defaultValue = "columnName")
dbutils.widgets.dropdown(name = "pythonDropdownWidget", defaultValue = "col1", choices = ["col1", "col2", "col3"])
%scala
dbutils.widgets.text("scalaTextWidget", "columnName")
dbutils.widgets.dropdown("scalaDropdownWidget", "col1", Seq("col1", "col2", "col3"))
Extract Value from Widgets
%python
textColumn = dbutils.widgets.get("pythonTextWidget")
dropdownColumn = dbutils.widgets.get("pythonDropdownWidget")
%scala
val textColumn = dbutils.widgets.get("scalaTextWidget")
val dropdownColumn = dbutils.widgets.get("scalaDropdownWidget")
Use Value to select Column
%python
from pyspark.sql.functions import col
df.select(col(textColumn), col(dropdownColumn))
%scala
import org.apache.spark.sql.functions.col
df.select(col(textColumn), col(dropdownColumn))
SQL
Widgets in SQL work slightly differently compared to Python/Scala in the sense that you cannot use them to select a column. However, widgets can be used to dynamically adjust filters.
Create Widget
%sql CREATE WIDGET text sqlTextWidget DEFAULT "ACTIVE"
%sql CREATE WIDGET DROPDOWN sqlDropdownWidget DEFAULT "ACTIVE" CHOICES SELECT DISTINCT Status FROM <databaseName>.<tableName> WHERE Status IS NOT NULL
Apply Widget Value to Filter Statement
%sql SELECT * FROM <databaseName>.<tableName> WHERE Status = getArgument("sqlTextWidget")
More background can be found in the Databricks documentation on Widgets.
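If you really do need widget values inside a dynamically built SELECT, as the question asks, a minimal Python sketch (my addition; the widget and table names are the question's own) is to read the widget value and splice it into the query string:
column_name = dbutils.widgets.get("tablename")
df = spark.sql(f"SELECT {column_name} FROM tableraw")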
I'm using Databricks notebooks to read Avro files stored in an Azure Data Lake Gen2. The Avro files are created by an Event Hub Capture and follow a specific schema. From these files I have to extract only the Body field, where the data I'm interested in is actually stored.
I already implemented this in Python and it works as expected:
path = 'abfss://file_system@storage_account.dfs.core.windows.net/root/YYYY/MM/DD/HH/mm/file.avro'
df0 = spark.read.format('avro').load(path) # 1
df1 = df0.select(df0.Body.cast('string')) # 2
rdd1 = df1.rdd.map(lambda x: x[0]) # 3
data = spark.read.json(rdd1) # 4
Now I need to translate this to raw SQL in order to filter the data directly in the SQL query. Considering the 4 steps above, steps 1 and 2 with SQL are as follows:
CREATE TEMPORARY VIEW file_avro
USING avro
OPTIONS (path "abfss://file_system@storage_account.dfs.core.windows.net/root/YYYY/MM/DD/HH/mm/file.avro")
WITH body_array AS (SELECT cast(Body AS STRING) FROM file_avro)
SELECT * FROM body_array
With this partial query I get the same as df1 above (step 2 with Python):
Body
[{"id":"a123","group":"0","value":1.0,"timestamp":"2020-01-01T00:00:00.0000000"},
{"id":"a123","group":"0","value":1.5,"timestamp":"2020-01-01T00:01:00.0000000"},
{"id":"a123","group":"0","value":2.3,"timestamp":"2020-01-01T00:02:00.0000000"},
{"id":"a123","group":"0","value":1.8,"timestamp":"2020-01-01T00:03:00.0000000"}]
[{"id":"b123","group":"0","value":2.0,"timestamp":"2020-01-01T00:00:01.0000000"},
{"id":"b123","group":"0","value":1.2,"timestamp":"2020-01-01T00:01:01.0000000"},
{"id":"b123","group":"0","value":2.1,"timestamp":"2020-01-01T00:02:01.0000000"},
{"id":"b123","group":"0","value":1.7,"timestamp":"2020-01-01T00:03:01.0000000"}]
...
I need to know how to introduce steps 3 and 4 into the SQL query, to parse the strings into JSON objects and finally get the desired dataframe with columns id, group, value and timestamp. Thanks.
One way I found to do this with raw SQL is as follows, using the from_json Spark SQL built-in function and the schema of the Body field:
CREATE TEMPORARY VIEW file_avro
USING avro
OPTIONS (path "abfss://file_system@storage_account.dfs.core.windows.net/root/YYYY/MM/DD/HH/mm/file.avro")
WITH body_array AS (SELECT cast(Body AS STRING) FROM file_avro),
data1 AS (SELECT from_json(Body, 'array<struct<id:string,group:string,value:double,timestamp:timestamp>>') FROM body_array),
data2 AS (SELECT explode(*) FROM data1),
data3 AS (SELECT col.* FROM data2)
SELECT * FROM data3 WHERE id = "a123" --FILTERING BY CHANNEL ID
It performs faster than the Python code I posted in the question, presumably because from_json applies the schema of Body directly while extracting the data. My version of this approach in PySpark looks as follows:
path = 'abfss://file_system@storage_account.dfs.core.windows.net/root/YYYY/MM/DD/HH/mm/file.avro'
df0 = spark.read.format('avro').load(path)
df1 = df0.selectExpr("cast(Body as string) as json_data")
df2 = df1.selectExpr("from_json(json_data, 'array<struct<id:string,group:string,value:double,timestamp:timestamp>>') as parsed_json")
data = df2.selectExpr("explode(parsed_json) as json").select("json.*")
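To mirror the WHERE clause of the SQL version (my addition, not part of the original answer), the parsed dataframe can be filtered directly on the extracted columns:
data.where("id = 'a123'").show()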
I am trying to create a PySpark dataframe manually using the below nested schema -
from pyspark.sql.types import StructType, StructField, ArrayType, StringType, IntegerType

schema = StructType([
    StructField('fields', ArrayType(StructType([
        StructField('source', StringType()),
        StructField('sourceids', ArrayType(IntegerType()))]))),
    StructField('first_name', StringType()),
    StructField('last_name', StringType()),
    StructField('kare_id', StringType()),
    StructField('match_key', ArrayType(StringType()))
])
I am using the below code to create a dataframe using this schema -
from pyspark.sql import Row

row = [Row(fields=[Row(source='BCONNECTED', sourceids=[10, 202, 30]),
                   Row(source='KP', sourceids=[20, 30, 40])],
           first_name='Christopher', last_name='Nolan', kare_id='kare1', match_key=['abc', 'abcd']),
       Row(fields=[Row(source='BCONNECTED', sourceids=[20, 304, 5, 6]),
                   Row(source='KP', sourceids=[40, 50, 60])],
           first_name='Michael', last_name='Caine', kare_id='kare2', match_key=['ncnc', 'cncnc'])]
content = spark.createDataFrame(sc.parallelize(row), schema=schema)
content.printSchema()
The schema prints correctly, but when I do content.show() I can see that the values of the kare_id and last_name columns have swapped.
+--------------------+-----------+---------+-------+-------------+
| fields| first_name|last_name|kare_id| match_key|
+--------------------+-----------+---------+-------+-------------+
|[[BCONNECTED, [10...|Christopher| kare1| Nolan| [abc, abcd]|
|[[BCONNECTED, [20...| Michael| kare2| Caine|[ncnc, cncnc]|
+--------------------+-----------+---------+-------+-------------+
PySpark (prior to Spark 3.0) sorts the fields of a Row created with keyword arguments lexicographically by name. Thus, the ordering of the columns in your data becomes fields, first_name, kare_id, last_name, match_key.
Spark then associates each column name positionally with the data, resulting in the mismatch. (Spark 3.0 removed this sorting, so there your original schema would work as written.) The fix is to swap the schema entries for last_name and kare_id as shown below:
schema = StructType([
    StructField('fields', ArrayType(StructType([
        StructField('source', StringType()),
        StructField('sourceids', ArrayType(IntegerType()))]))),
    StructField('first_name', StringType()),
    StructField('kare_id', StringType()),
    StructField('last_name', StringType()),
    StructField('match_key', ArrayType(StringType()))
])
From PySpark Docs on Row: "Row can be used to create a row object by using named arguments, the fields will be sorted by names."
https://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.Row
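A quick way to see this sorting in action (my sketch; applies to Spark versions before 3.0):
from pyspark.sql import Row

r = Row(fields=[], first_name='Christopher', last_name='Nolan', kare_id='kare1', match_key=[])
print(r.__fields__)  # ['fields', 'first_name', 'kare_id', 'last_name', 'match_key'] -- sorted by name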
First, you are effectively defining the schema twice: when you create the data you are already using Row objects in the RDD, so you do not need the createDataFrame function at all; instead you can do the following:
sc.parallelize(row).toDF().show()
But if you still want to specify the schema explicitly, you need to keep the schema and the data in the same order, and the schema you posted is incorrect for the data you are passing. The correct schema would be:
schema = StructType([
    StructField('fields', ArrayType(StructType([
        StructField('source', StringType()),
        StructField('sourceids', ArrayType(IntegerType()))]))),
    StructField('first_name', StringType()),
    StructField('kare_id', StringType()),
    StructField('last_name', StringType()),
    StructField('match_key', ArrayType(StringType()))
])
kare_id should come before last_name because Row sorts its keyword arguments by name, so that is the order in which the data is actually passed to Spark.
I would like to find tables with a specific column in a database on Databricks using PySpark SQL.
I tried the approach from the following post, but it does not work:
https://medium.com/@rajnishkumargarg/find-all-the-tables-by-column-name-in-hive-51caebb94832
On SQL Server my code is:
SELECT Table_Name, Column_Name
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_CATALOG = 'YOUR_DATABASE'
AND COLUMN_NAME LIKE '%YOUR_COLUMN%'
but I cannot figure out how to do the same thing with PySpark SQL.
Thanks!
The SparkSession has a property catalog. This catalog's method listTables returns a list of all tables known to the SparkSession. With this list you can query each table's columns with listColumns:
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("test").getOrCreate()
spark.sql("CREATE TABLE tab1 (name STRING, age INT) USING parquet")
spark.sql("CREATE TABLE tab2 (name STRING, age INT) USING parquet")
spark.sql("CREATE TABLE tab3 (street STRING, age INT) USING parquet")

for table in spark.catalog.listTables():
    for column in spark.catalog.listColumns(table.name):
        if column.name == 'name':
            print('Found column {} in table {}'.format(column.name, table.name))
prints
Found column name in table tab1
Found column name in table tab2
Both methods, listTables and listColumns accept a database name as an optional argument if you want to restrict your search to a single database.
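For example, a minimal sketch restricting the search to one database ("my_db" is a placeholder name):
for table in spark.catalog.listTables("my_db"):
    for column in spark.catalog.listColumns(table.name, dbName="my_db"):
        if column.name == 'name':
            print('Found column {} in table {}'.format(column.name, table.name))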
I had a similar problem to OP: I needed to find all columns, including nested columns, that match a LIKE clause.
I wrote a post about it here https://medium.com/helmes-people/how-to-view-all-databases-tables-and-columns-in-databricks-9683b12fee10
But you can find the full code below.
The benefit of this solution, in comparison with the previous answers, is that it works when you need to search for columns with LIKE '%%', as the OP asked. Also, it allows you to search for names in nested fields. Finally, it creates a SQL-like view, similar to the INFORMATION_SCHEMA views.
from pyspark.sql.types import StructType

# get field name from schema (recursive for getting nested values)
def get_schema_field_name(field, parent=None):
    if type(field.dataType) == StructType:
        if parent == None:
            prt = field.name
        else:
            prt = parent + "." + field.name  # using dot notation
        res = []
        for i in field.dataType.fields:
            res.append(get_schema_field_name(i, prt))
        return res
    else:
        if parent == None:
            res = field.name
        else:
            res = parent + "." + field.name
        return res

# flatten list, from https://stackoverflow.com/a/12472564/4920394
def flatten(S):
    if S == []:
        return S
    if isinstance(S[0], list):
        return flatten(S[0]) + flatten(S[1:])
    return S[:1] + flatten(S[1:])

# list of databases
db_list = [x[0] for x in spark.sql("SHOW DATABASES").rdd.collect()]
for i in db_list:
    spark.sql("SHOW TABLES IN {}".format(i)).createOrReplaceTempView(str(i) + "TablesList")

# create a query for fetching all tables from all databases
union_string = "SELECT database, tableName FROM "
for idx, item in enumerate(db_list):
    if idx == 0:
        union_string += str(item) + "TablesList WHERE isTemporary = 'false'"
    else:
        union_string += " UNION ALL SELECT database, tableName FROM {}".format(str(item) + "TablesList WHERE isTemporary = 'false'")
spark.sql(union_string).createOrReplaceTempView("allTables")

# full list = schema, table, column
full_list = []
for i in spark.sql("SELECT * FROM allTables").collect():
    table_name = i[0] + "." + i[1]
    table_schema = spark.sql("SELECT * FROM {}".format(table_name))
    column_list = []
    for j in table_schema.schema:
        column_list.append(get_schema_field_name(j))
    column_list = flatten(column_list)
    for k in column_list:
        full_list.append([i[0], i[1], k])
spark.createDataFrame(full_list, schema=['database', 'tableName', 'columnName']).createOrReplaceTempView("allColumns")
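With the allColumns view in place, the original search becomes a plain SQL query (my usage sketch, reusing the LIKE pattern from the question):
spark.sql("SELECT * FROM allColumns WHERE columnName LIKE '%YOUR_COLUMN%'").show()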
# The following code will create a TempView containing all the tables,
# and all their columns along with their type, for a specified database
cls = []
spark.sql("Drop view if exists allTables")
spark.sql("Drop view if exists allColumns")
for table in spark.catalog.listTables("TYPE_IN_YOUR_DB_NAME_HERE"):
    for column in spark.catalog.listColumns(table.name, table.database):
        cls.append([table.database, table.name, column.name, column.dataType])
spark.createDataFrame(cls, schema=['databaseName', 'tableName', 'columnName',
                                   'columnDataType']).createOrReplaceTempView("allColumns")
SparkSession really does have a catalog property, as werner mentioned.
If I understand you correctly, you want to get the tables that have a specific column.
You can try this code (sorry for the Scala code instead of Python):
val databases = spark.catalog.listDatabases().select($"name".as("db_name")).as("databases")
val tables = spark.catalog.listTables().select($"name".as("table_name"), $"database").as("tables")
val tablesWithDatabase = databases.join(tables, $"databases.db_name" === $"tables.database", "inner").collect()

tablesWithDatabase.foreach(row => {
  val dbName = row.get(0).asInstanceOf[String]
  val tableName = row.get(1).asInstanceOf[String]
  val columns = spark.catalog.listColumns(dbName, tableName)
  columns.foreach(column => {
    if (column.name == "Your column") {
      // Do your logic here
    }
  })
})
Notice that I am doing a collect, so if you have a lot of tables/databases it can cause an OOM error on the driver. The reason I collect is that, in contrast to the listTables and listDatabases methods (which can be called without any arguments), listColumns needs a dbName and tableName, and there is no unique column id that maps a column to its table.
So the search for the column is done locally on the driver.
Hope that helps.