I have a YAML file that contains the data that needs to go into a table, and I need to convert each YAML object to SQL. Can anyone tell me what I can use to convert it to an SQL statement?
For example:
- model: ss
  pk: 2
  fields: {created_by: xxx, created_date: !!timestamp '2018-09-13 17:50:30.821769+00:00',
    modified_by: null, modified_date: null, record_version: 0, team_name: privat,
    team_type: abc}
In Python:
import yaml

with open("data.yaml", "r") as stream:
    data_loaded = yaml.safe_load(stream)  # the file is a list of objects

for entry in data_loaded or []:
    table = entry["model"]
    cols, vals = zip(*entry["fields"].items())
    sql = "INSERT INTO %s %s VALUES %s;" % (table, tuple(cols), tuple(vals))
    print(sql)
Depending on the SQL dialect, you might need to take additional care of literals and types (e.g. replacing double quotes with single quotes, the timestamp format, etc.).
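A safer route, if you are executing the statements from Python anyway, is to let the database driver do the quoting of values. A minimal sketch, assuming an sqlite3 database and trusted table/column names (only the values are parameterized):

import sqlite3
import yaml

conn = sqlite3.connect("example.db")  # hypothetical target database

with open("data.yaml", "r") as stream:
    data_loaded = yaml.safe_load(stream)

for entry in data_loaded or []:
    fields = entry["fields"]
    cols = ", ".join(fields)                       # column names cannot be parameterized
    placeholders = ", ".join("?" for _ in fields)  # values are bound by the driver
    sql = "INSERT INTO %s (%s) VALUES (%s);" % (entry["model"], cols, placeholders)
    conn.execute(sql, tuple(fields.values()))

conn.commit()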
I have a JSON API payload containing a tablename and a columnlist; how do I build a SELECT query from it using pypika?
So far I have been able to use a plain string columnlist, but I am not able to do advanced querying using functions, analytics, etc.
from pypika import Table, Query, functions as fn

def generate_sql(tablename, collist):
    table = Table(tablename)
    columns = [str(table) + '.' + each for each in collist]
    q = Query.from_(table).select(*columns)
    return q.get_sql(quote_char=None)

tablename = 'customers'
collist = ['id', 'fname', 'fn.Sum(revenue)']
print(generate_sql(tablename, collist))  # 1

table = Table(tablename)
q = Query.from_(table).select(table.id, table.fname, fn.Sum(table.revenue))
print(q.get_sql(quote_char=None))  # 2
#1 outputs
SELECT "customers".id,"customers".fname,"customers".fn.Sum(revenue) FROM customers
#2 outputs correctly
SELECT id,fname,SUM(revenue) FROM customers
You should not be trying to assemble the query in a string yourself; that defeats the whole purpose of pypika.
In your case, where the name of the table and the columns arrive as text in a JSON object, you can use * to unpack those values from collist and the obj[key] syntax to look up a table attribute by name from a string.
q = Query.from_(table).select(*(table[col] for col in collist))
# SELECT id,fname,fn.Sum(revenue) FROM customers
Hmm... that doesn't quite work for the fn.Sum(revenue). The goal is to get SUM(revenue).
This can get much more complicated from this point on. If you are only sending column names that you know belong to that table, the above solution is enough.
But if you have complex SQL expressions that reference SQL functions or even other tables, I suggest you rethink the decision of sending them as raw strings in JSON; you might end up building something as complex as pypika itself, like a custom parser. Your better option here would be to change the format of your JSON response object.
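For example, a more structured payload (this shape is just an illustration, not something from the original question) can describe each column either as a plain field or as a function call, which maps onto pypika without any string parsing:

from pypika import Table, Query, functions as fn

# hypothetical structured payload instead of raw SQL snippets
payload = {
    "table": "customers",
    "columns": [
        {"name": "id"},
        {"name": "fname"},
        {"func": "Sum", "name": "revenue"},
    ],
}

table = Table(payload["table"])
selected = [
    getattr(fn, col["func"])(table[col["name"]]) if "func" in col else table[col["name"]]
    for col in payload["columns"]
]
q = Query.from_(table).select(*selected)
print(q.get_sql(quote_char=None))  # SELECT id,fname,SUM(revenue) FROM customers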
If you know you only need to support a very limited set of capabilities, parsing the expressions directly can still be feasible. For example, you can assume the following constraints:
all column names refer to only one table, with no joins or aliases
all functions will be prefixed by fn.
no fancy stuff like window functions, distinct, count(*)...
Then you can do something like:
from pypika import Table, Query, functions as fn
import re

tablename = 'customers'
collist = ['id', 'fname', 'fn.Sum(revenue / 2)', 'revenue % fn.Count(id)']

def parsed(cols):
    pattern = r'(?:\bfn\.[a-zA-Z]\w*)|([a-zA-Z]\w*)'
    subst = lambda m: f"{'' if m.group().startswith('fn.') else 'table.'}{m.group()}"
    yield from (re.sub(pattern, subst, col) for col in cols)

table = Table(tablename)
env = dict(table=table, fn=fn)
q = Query.from_(table).select(*(eval(col, env) for col in parsed(collist)))
print(q.get_sql(quote_char=None))
Output:
SELECT id,fname,SUM(revenue/2),MOD(revenue,COUNT(id)) FROM customers
I am trying to write a pandas DataFrame to a Postgres database.
Code is as below:
dbConnection = psycopg2.connect(user = "user1", password = "user1", host = "localhost", port = "5432", database = "postgres")
dbConnection.set_isolation_level(0)
dbCursor = dbConnection.cursor()
dbCursor.execute("DROP DATABASE IF EXISTS FiguresUSA")
dbCursor.execute("CREATE DATABASE FiguresUSA")
dbCursor.execute("DROP TABLE IF EXISTS FiguresUSAByState")
dbCursor.execute("CREATE TABLE FiguresUSAByState(Index integer PRIMARY KEY, Province_State VARCHAR(50), NumberByState integer)");
for i in data_pandas.index:
    query = """
    INSERT into FiguresUSAByState(column1, column2, column3) values('%s',%s,%i);
    """ % (data_pandas['Index'], data_pandas['Province_State'], data_pandas['NumberByState'])
    dbCursor.execute(query)
When I run this, I get an error which just says: "Index". I know the problem is somewhere in my for loop; is that % notation correct? I am new to Postgres and don't see how that could be correct syntax. I know I can use to_sql, but I am trying out different techniques.
A printout of data_pandas is as below:
One slight possible anomaly is that there is an "index" column in the IDE version. Could this be the problem?
If you use pd.DataFrame.to_sql, you can supply the index_label parameter to use that as a column.
data_pandas.to_sql('FiguresUSAByState', con=dbConnection, index_label='Index')
If you would prefer to stick with the custom SQL and for loop you have, you will need to reset_index first.
for row in data_pandas.reset_index().to_dict('records'):
    query = """
    INSERT into FiguresUSAByState(index, Province_State, NumberByState) values(%i, '%s', %i);
    """ % (row['index'], row['Province_State'], row['NumberByState'])
    dbCursor.execute(query)
Note that the default name for the new column is index, uncapitalized, rather than Index.
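If you would rather keep the capitalized Index name from the original DataFrame, one option (just a sketch) is to rename the column after resetting:

data_pandas = data_pandas.reset_index().rename(columns={'index': 'Index'})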
In the insert statement:
query = """
INSERT into FiguresUSAByState (column1, column2, column3) values ('%s',%s,%i);
"""% (data_pandas ['Index'], data_pandas ['Province_State'], data_pandas ['NumberByState'])
You have a '%s' with quotes around it; I think that is the problem, so remove the quotes.
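More generally, letting psycopg2 bind the values avoids both the quoting and the format-specifier issues (a sketch, reusing the cursor and table from the question; the int() casts are there because psycopg2 does not adapt numpy integer types by default):

insert_sql = "INSERT INTO FiguresUSAByState (Index, Province_State, NumberByState) VALUES (%s, %s, %s);"
for row in data_pandas.reset_index().to_dict('records'):
    # psycopg2 substitutes and escapes the values itself
    dbCursor.execute(insert_sql, (int(row['index']), row['Province_State'], int(row['NumberByState'])))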
I'd like to parse a dataframe into two pre-defined columns in an SQL table. The schema in SQL is:
abc(varchar(255))
def(varchar(255))
With a dataframe like so:
df = pd.DataFrame(
    [
        [False, False],
        [True, True],
    ],
    columns=["ABC", "DEF"],
)
And the sql query is like so:
with conn.cursor() as cursor:
    string = "INSERT INTO {0}.{1}(abc, def) VALUES (?,?)".format(db, table)
    cursor.execute(string, (df["ABC"]), (df["DEF"]))
    cursor.commit()
So that the query (string) looks like so:
'INSERT INTO my_table(abc, def) VALUES (?,?)'
This creates the following error message:
pyodbc.Error: ('HY004', '[HY004] [Cloudera][ODBC] (11320) SQL type not supported. (11320) (SQLBindParameter)')
So I try using a direct query (not via Python) in the Impala editor, on the following:
'INSERT INTO my_table(abc, def) VALUES ('Hey','Hi');'
And produces this error message:
AnalysisException: Possible loss of precision for target table 'my_table'. Expression ''hey'' (type: STRING) would need to be cast to VARCHAR(255) for column 'abc'
How come I cannot even insert simple strings like "Hi" into my table? Is my schema not set up correctly, or is it perhaps something else?
STRING type in Impala has a size limit of 2GB.
VARCHAR's length is whatever you define it to be, but not more than 64KB.
Thus there is a potential for data loss if you implicitly convert one into the other.
By default, literals are treated as type STRING. So, in order to insert a literal into VARCHAR field you need to CAST it appropriately.
INSERT INTO my_table(abc, def) VALUES (CAST('Hey' AS VARCHAR(255)),CAST('Hi' AS VARCHAR(255)));
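Back on the Python side, the pyodbc error is plausibly caused by passing whole pandas Series as parameters instead of scalar values. A sketch that binds one row at a time (the CAST mirrors the answer above; converting the booleans with str() is only an assumption about how you want them stored):

insert_sql = ("INSERT INTO {0}.{1} (abc, def) "
              "VALUES (CAST(? AS VARCHAR(255)), CAST(? AS VARCHAR(255)))").format(db, table)
with conn.cursor() as cursor:
    for _, row in df.iterrows():
        # bind plain Python strings, not pandas Series
        cursor.execute(insert_sql, str(row["ABC"]), str(row["DEF"]))
    cursor.commit()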
I am attempting to export data and write it to a formatted file in Groovy 2.1.6. The query returns a null value for an entire column included in the query.
null, 0000001,1434368,ACTIVE
null, 0000002,1354447,ACTIVE
null, 0000004,1358538,ACTIVE
Here is the code that I am using in Groovy to query and write the data to a file.
private void profilerSql() {
    def today = new Date()
    def formattedDate = today.format('yyyyMMdd')

    String reportSql
    reportSql = """
        SELECT
            col_1,
            col_2,
            col_3,
            col_4
        from my_table
    """

    sql.execute(reportSql)

    def filename = "My_Table_export_" + formattedDate + ".csv"
    // Create the file Object
    File outputFile = new File(filename);
    // Write a blank line to it to create a new "empty" file
    outputFile.write("");

    // Iterate through the SQL recordset. Output settings are defined within the function.
    sql.eachRow(reportSql) {
        // Create each line, joining the columns with a comma.
        def reportLine = [it.col_1, it.col_2, it.col_3, it.col_4].join(',')
        // Write the line to the file. End with a new line char.
        outputFile.append(reportLine + System.getProperty("line.separator"))
    }
}
Perhaps relevant: the column that returns null values was created as a sequence in Oracle 11g. If anyone can provide some insight, even into how Groovy interacts with different data types in Oracle databases, I would be grateful.
I see a couple of questionable things about the code, but none of them are about getting a sequence column out of Oracle, which I wouldn't really expect to be much of a problem since JDBC has been around for years and years.
I don't think you need the initial call to sql.execute(reportSql); execute returns a boolean rather than a result set.
Shouldn't the first parameter to outputFile.append be reportLine and not lineFormat?
Hope this helps!
I'm trying to insert data into a pre-existing PostgreSQL table using RPostgreSQL and I can't figure out the syntax for SQL parameters (prepared statements).
E.g. suppose I want to do the following
insert into mytable (a,b,c) values ($1,$2,$3)
How do I specify the parameters? dbSendQuery doesn't seem to understand it if you just put the parameters in the ....
I've found that dbWriteTable can be used to dump an entire table, but it won't let you specify the columns (so it's no good for defaults etc.). And anyway, I'll need to know this for other queries once I get the data in there (so I suppose this isn't really insert-specific)!
I'm sure I'm just missing something obvious...
I was looking for the same thing, for the same reason, which is security.
Apparently the dplyr package has the capability you are interested in. It's barely documented, but it's there. Scroll down to "Postgresql" in this vignette: http://cran.r-project.org/web/packages/dplyr/vignettes/databases.html
To summarize, dplyr offers the functions sql() and escape(), which can be combined to produce a parametrized query. The SQL() function from the DBI package seems to work in exactly the same way.
> sql(paste0('SELECT * FROM blaah WHERE id = ', escape('random "\'stuff')))
<SQL> SELECT * FROM blaah WHERE id = 'random "''stuff'
It returns an object of classes "sql" and "character", so you can either pass it on to tbl() or possibly dbSendQuery() as well.
The escape() function correctly handles vectors as well, which I find most useful:
> sql(paste0('SELECT * FROM blaah WHERE id in ', escape(1:5)))
<SQL> SELECT * FROM blaah WHERE id in (1, 2, 3, 4, 5)
Same naturally works with variables as well:
> tmp <- c("asd", 2, date())
> sql(paste0('SELECT * FROM blaah WHERE id in ', escape(tmp)))
<SQL> SELECT * FROM blaah WHERE id in ('asd', '2', 'Tue Nov 18 15:19:08 2014')
I feel much safer now putting together queries.
As of the latest RPostgreSQL it should work:
db_connection <- dbConnect(dbDriver("PostgreSQL"), dbname = database_name,
                           host = "localhost", port = database_port,
                           password = database_user_password, user = database_user)

qry = "insert into mytable (a,b,c) values ($1,$2,$3)"
dbSendQuery(db_connection, qry, c(1, "some string", "some string with | ' "))
Here's a version using the DBI and RPostgres packages, and inserting multiple rows at once, since all these years later it's still very difficult to figure out from the documentation.
x <- data.frame(
  a = c(1:10),
  b = letters[1:10],
  c = letters[11:20]
)

# insert your own connection info
con <- DBI::dbConnect(
  RPostgres::Postgres(),
  dbname = '',
  host = '',
  port = 5432,
  user = '',
  password = ''
)

RPostgres::dbSendQuery(
  con,
  "INSERT INTO mytable (a,b,c) VALUES ($1,$2,$3);",
  list(
    x$a,
    x$b,
    x$c
  )
)
The help for dbBind() in the DBI package is the only place that explains how to format parameters:
The placeholder format is currently not specified by DBI; in the
future, a uniform placeholder syntax may be supported. Consult the
backend documentation for the supported formats.... Known examples are:
? (positional matching in order of appearance) in RMySQL and RSQLite
$1 (positional matching by index) in RPostgres and RSQLite
:name and $name (named matching) in RSQLite
? is also the placeholder for R package RJDBC.