Pass list of dates to SQL WHERE statement in PySpark - sql

I'm in the process of converting some SAS code to PySpark; we previously used a macro variable for the WHERE clause in this code. In adapting it to PySpark, I'm trying to pass a list of dates to the WHERE clause, but I keep getting errors. I want the SQL query to pull all data from those three months. Any pointers?
month_list = ['202107', '202108', '202109']
sql_query = """ (SELECT *
FROM Table_Blah
WHERE (to_char(DateVariable,'yyyymm') IN '{}')
) as table1""".format(month_list)

Pass the list as a tuple to get the right SQL syntax:
month_list = ['202107', '202108', '202109']
sql_query = """ (SELECT *
FROM Table_Blah
WHERE (to_char(DateVariable,'yyyymm') IN {})
) as table1""".format(tuple(month_list))
And you don't need the quotes around the placeholder in the IN clause.
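One thing to watch with the tuple approach: a single-element list becomes ('202107',) with a trailing comma, which is not valid SQL. Below is a minimal sketch, reusing the month_list and query from the question, that builds the quoted IN list explicitly and works for a list of any length:
month_list = ['202107', '202108', '202109']
# quote each month and join them, so the IN clause is valid for any list length
in_clause = ", ".join("'{}'".format(m) for m in month_list)
sql_query = """(SELECT *
FROM Table_Blah
WHERE to_char(DateVariable, 'yyyymm') IN ({})
) as table1""".format(in_clause)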

Related

ProgrammingError: ('Expected 0 parameters, supplied 391', 'HY000') with 391 columns using dynamic approach

I have a dataframe that contains 391 columns and a number of rows. I am trying to push this to a database via pyodbc and using the following command:
cursor = conn.cursor()
cursor.fast_executemany = True
cursor.executemany(
    f"INSERT INTO db.tble({', '.join(df.columns.tolist())}) VALUES ({('?,' * len(df.columns))[:-1]})",
    list(df.itertuples(index=False, name=None))
)
cursor.commit()
I would have thought this method would be dynamic for a dataframe of any size, yet I get the following error:
ProgrammingError: ('Expected 0 parameters, supplied 391', 'HY000')
I am struggling to understand this, as the syntax looks correct and ? has been used instead of %s as in other answers. Can someone please help?
Thanks
I once wrote a piece of code where I wanted to create the INSERT statement dynamically based on the number of columns in the data frame.
Here is how the insert query would be passed to the database:
INSERT INTO dbo.Table (column1,column2,column3) VALUES (?,?,?)
Again, the number of columns and '?' placeholders has to be created dynamically at runtime, based on the number of columns the data frame has.
I wrote the piece below to build a string of ?,?,? and concatenate it with the insert query.
Here
df is the dataframe,
symbol_counter holds the number of columns in the dataframe,
sym_string is the final string, i.e. (?,?,?,?...n), based on the number of columns.
symbol = ['?']
sym_string = ''
symbol_counter = int(df.shape[1]) - 1
# add one more "?" for each remaining column
for word in range(symbol_counter):
    symbol.insert(word, "?")
# join the placeholders into a single comma-separated string
sym_string = ','.join(symbol)
# and then use this variable and concatenate it with the rest of the query as shown below
query = Variable_holding_first_partofthequery + " VALUES (" + sym_string + ")"
I know it's the long way round, but that's how I got it to work. Good luck!
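For what it's worth, the same placeholder string can be built with a single join. This is only a sketch reusing the names from the question (conn, df and db.tble), not a tested fix for the error above:
placeholders = ", ".join("?" * len(df.columns))   # one "?" per dataframe column
columns = ", ".join(df.columns.tolist())
insert_sql = f"INSERT INTO db.tble ({columns}) VALUES ({placeholders})"
cursor = conn.cursor()
cursor.fast_executemany = True
cursor.executemany(insert_sql, list(df.itertuples(index=False, name=None)))
cursor.commit()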

Remove parentheses from the output of the select command in the Postgres database

I want to use the select command in the Postgres database.
I use the following command to select,
but there are parentheses in the output.
What command should I use so that there are no parentheses in the output?
cursor = connection.cursor()
p = " select price from mobile "
cursor.execute(p)
result = cursor.fetchall()
print(result)
[(1300.0,), (1100.0,), (1200.0,), (1100.0,), (1200.0,), (1500.0,)]
Take the first item from each row, and store those items in a new list. (A list of values, rather than a list of rows.)
result_1d = [row[0] for row in result]
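For the rows shown above, that gives a flat list of prices:
result_1d = [row[0] for row in result]
print(result_1d)
# [1300.0, 1100.0, 1200.0, 1100.0, 1200.0, 1500.0]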

How can I count all NULL values, without column names, using SQL?

I'm reading and executing SQL queries from a file, and I need to inspect the result sets to count all the null values across all columns. Because the SQL is read from a file, I don't know the column names and thus can't refer to the columns by name when trying to find the null values.
I think using a CTE is the best way to do this, but how can I refer to the columns when I don't know what the column names are?
WITH query_results AS
(
<sql_read_from_file_here>
)
select count_if(<column_name> is not null) FROM query_results
If you are using Python to read the file of SQL statements, you can do something like this which uses pglast to parse the SQL query to get the columns for you:
import pglast
sql_read_from_file_here = "SELECT 1 foo, 1 bar"
ast = pglast.parse_sql(sql_read_from_file_here)
cols = ast[0]['RawStmt']['stmt']['SelectStmt']['targetList']
sum_stmt = "sum(iff({col} is null,1,0))"
sums = [sum_stmt.format(col = col['ResTarget']['name']) for col in cols]
print(f"select {' + '.join(sums)} total_null_count from query_results")
# outputs: select sum(iff(foo is null,1,0)) + sum(iff(bar is null,1,0)) total_null_count from query_results
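If you are executing the statements from the file with a DB-API cursor anyway, another option (just a sketch, assuming a psycopg2-style cursor and an open connection conn) is to take the column names from cursor.description after running the query and count the NULLs in Python:
cursor = conn.cursor()
cursor.execute(sql_read_from_file_here)
rows = cursor.fetchall()
# cursor.description has one entry per result column, name first, even when the names are unknown in advance
columns = [desc[0] for desc in cursor.description]
null_count = sum(1 for row in rows for value in row if value is None)
print(columns, null_count)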

Enter Unspecified Number of Variables into Postgres Psycopg2 SQL query

I'm trying to retrieve some data from a PostgreSQL database using psycopg2, and either exclude a variable number of rows or exclude none.
The code I have so far is:
def db_query(variables):
    cursor.execute('SELECT * '
                   'FROM database.table '
                   'WHERE id NOT IN (%s)', (variables,))
This does partially work. E.g. If I call:
db_query('593')
It works, and the same goes for any other single value. However, I cannot seem to get it to work when I enter more than one value, e.g.:
db_query('593, 595')
I get the error:
psycopg2.DataError: invalid input syntax for integer: "593, 595"
I'm not sure how to enter the query correctly or amend the SQL query. Any help appreciated.
Thanks
Pass a tuple as it is adapted to a record:
query = """
select *
from database.table
where id not in %s
"""
var1 = 593
argument = (var1,)
print(cursor.mogrify(query, (argument,)).decode('utf8'))
#cursor.execute(query, (argument,))
Output:
select *
from database.table
where id not in (593)
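The same pattern works for more than one id; just put all of them in the tuple (593 and 595 are the example values from the question):
def db_query(ids):
    # ids is a tuple such as (593,) or (593, 595); psycopg2 adapts it to a parenthesised list
    cursor.execute('SELECT * FROM database.table WHERE id NOT IN %s', (ids,))
db_query((593, 595))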

How to convert Sql inner query to Linq for sum of column

How do I convert this SQL query to LINQ?
select sum(OutstandingAmt) from IvfReceiptDetails where IvfReceiptId IN (select IvfReceiptId from IvfReceipts where PatientId = 'SI-49650')
I think it is easier to translate SQL using query comprehension syntax instead of lambda syntax.
General rules:
Translate inner queries into separate query variables
Translate SQL phrases in LINQ phrase order
Use table aliases as range variables, or if none, create range variables from table names
Translate IN to Contains
Translate SQL functions such as DISTINCT or SUM into function calls on the entire query.
Here is the code:
var IvfReceiptIds = from IvfReceipt in IvfReceipts
                    where IvfReceipt.PatientId == "SI-49650"
                    select IvfReceipt.IvfReceiptId;
var OutstandingAmtSum = (from IvfReceiptDetail in IvfReceiptDetails
                         where IvfReceiptIds.Contains(IvfReceiptDetail.IvfReceiptId)
                         select IvfReceiptDetail.OutstandingAmt).Sum();
Try this. First get all IvfReceiptId values into an array, based on the inner query used in your WHERE condition, then check Contains. Change the name of _context if yours is different.
var arrIvfReceiptId = _context.IvfReceipts.Where(p => p.PatientId == "SI-49650").Select(p => p.IvfReceiptId).ToArray();
var sum = (from ird in _context.IvfReceiptDetails.Where(p => arrIvfReceiptId.Contains(p.IvfReceiptId))
           select ird.OutstandingAmt).Sum();