We're using BigQuery with their new dialect of "standard" SQL.
The new SQL supports inline functions written in SQL instead of JS, so we created a function to handle date conversion.
CREATE TEMPORARY FUNCTION
STR_TO_TIMESTAMP(str STRING)
RETURNS TIMESTAMP AS (PARSE_TIMESTAMP('%Y-%m-%dT%H:%M:%E*SZ', str));
It must be a temporary function, as Google returns the error "Error: Only temporary functions are currently supported; use CREATE TEMPORARY FUNCTION"
if you try to create a permanent one.
If you try to save a view with a query that uses the function inline, you get the following error: Failed to save view. No support for CREATE TEMPORARY FUNCTION statements inside views.
If you try to outsmart it and remove the function (hoping to add it at query time), you'll receive this error: Failed to save view. Function not found: STR_TO_TIMESTAMP at [4:7].
Any suggestions on how to address this? We have more complex functions than the example shown.
This issue has since been resolved: BigQuery now supports permanent (persistent) registration of UDFs.
In order to use your UDF in a view, you'll need to first create it.
CREATE OR REPLACE FUNCTION `ACCOUNT-NAME11111.test.STR_TO_TIMESTAMP`
(str STRING)
RETURNS TIMESTAMP AS (PARSE_TIMESTAMP('%Y-%m-%dT%H:%M:%E*SZ', str));
Note that you must wrap the function's fully-qualified name in backticks.
There's no TEMPORARY in the statement, as the function will be globally registered and persisted.
Due to the way BigQuery handles namespaces, you must include both the project name and the dataset name (test) in the function's name.
Once it's created and working successfully, you can use it in a view.
create view test.test_view as
select `ACCOUNT-NAME11111.test.STR_TO_TIMESTAMP`('2015-02-10T13:00:00Z') as ts
You can then query your view directly without explicitly specifying the UDF anywhere.
select * from test.test_view
As per the documentation (https://cloud.google.com/bigquery/docs/reference/standard-sql/data-definition-language#create_function_statement), the functionality is still in beta, but it works: the function is visible in the dataset it was created in, and the view can be created on top of it.
Please share if that worked fine for you or if you have any findings which would be helpful for others.
Saving a view created with a temp function is still not supported, but what you can do is schedule the SQL query (scheduling has already rolled out for the latest UI) and then save the result as a table. This worked for me, but I guess it depends on the query parameters you want.
#standardSQL
# JS in SQL to extract multiple hit-level custom dimensions at the same time.
CREATE TEMPORARY FUNCTION getCustomDimension(cd ARRAY<STRUCT<index INT64, value STRING>>, index INT64)
RETURNS STRING
LANGUAGE js AS """
  for(var i = 0; i < cd.length; i++) {
    var item = cd[i];
    if(item.index == index) {
      return item.value
    }
  }
  return '';
""";
SELECT DISTINCT
  h.page.pagePath,
  getCustomDimension(h.customDimensions, 20),
  fullVisitorId,
  h.page.pagePathLevel1,
  h.page.pagePathLevel2,
  h.page.pagePathLevel3,
  getCustomDimension(h.customDimensions, 3)
FROM
`XXX.ga_sessions_*`,
UNNEST(hits) AS h
WHERE
### rolling timeframe
_TABLE_SUFFIX = FORMAT_DATE('%Y%m%d',DATE_SUB(CURRENT_DATE(),INTERVAL YY DAY))
AND h.type='PAGE'
Credit for the solution goes to https://medium.com/@JustinCarmony/strategies-for-easier-google-analytics-bigquery-analysis-custom-dimensions-cad8afe7a153
I'm trying to clean up some shared functionality across queries and would like to have a number of filter functions as stored Log Analytics functions.
The below works fine if the function is defined in the same place as the query, but when I split the function out into a stored LA function, I can't figure out how to get the invoke operator to work.
//function to filter
let remove_robotstxt=( T:(requestUri_s:string) ) {
T
| where parse_url( requestUri_s).Path != "/robots.txt"
};
//
//
AzureDiagnostics
| where Category == "FrontdoorAccessLog"
| invoke remove_robotstxt()
Passing params such as strings to functions works just fine, but how about tabular functions? What am I missing?
I have tried a union to the function and a number of other things, but my query doesn't seem to see the function as being available.
I ended up just saving the pre-filtered query as a function, so it can be used like a table. So in your case, define:
AzureDiagnostics
| where parse_url( requestUri_s).Path != "/robots.txt"
and save that query as a function named AzureDiagnosticsRemovedRobots.
You can then call the function directly, as if it were a table:
AzureDiagnosticsRemovedRobots
| where Category == "FrontdoorAccessLog"
This might not be exactly what you're looking for but it kind of works for me.
I have a Spark Structured Streaming job that needs to call rdd.foreach inside the foreachBatch function, as per the code below:
val tableName = "ddb_table"
df
  .writeStream
  .foreachBatch { (batchDF: DataFrame, _: Long) =>
    batchDF
      .rdd
      .foreach(
        r => updateDDB(r, tableName, "key")
      )
    curDate = LocalDate.now().toString.replaceAll("-", "/")
    prevDate = LocalDate.now().minusDays(1).toString.replaceAll("-", "/")
  }
  .outputMode(OutputMode.Append)
  .option("checkpointLocation", "checkPointDir")
  .start()
  .awaitTermination()
What happens is that the tableName variable is not recognized inside the rdd.foreach function: the call to the DynamoDB API inside updateDDB raises an exception stating that the tableName cannot be null.
The issue is clearly in rdd.foreach and the way it handles variables from the enclosing scope. I read some things about broadcast variables, but I don't have enough experience working with RDDs and Spark at that lower level to be sure what the right approach is.
Some notes:
1) I need this to be inside the foreachBatch function because I need to update other variables apart from this write to DDB (in this case the curDate and prevDate variables).
2) The code runs successfully when I pass the tableName parameter directly in the function call.
3) I have one class that extends ForeachWriter that works OK when using foreach instead of foreachBatch, but as stated in point 1) I need the latter because I need to update several things at each streaming batch.
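For reference, a minimal sketch of the two workarounds usually suggested for this kind of closure problem, assuming the null comes from the rdd.foreach closure capturing the enclosing class rather than the String itself (spark, df and updateDDB are the names from the question; localTableName and tableNameBc are illustrative):
// Option 1: copy the field into a local val on the driver, so the executor
// closure captures only the String, not the enclosing class instance.
df.writeStream
  .foreachBatch { (batchDF: DataFrame, _: Long) =>
    val localTableName = tableName
    batchDF.rdd.foreach { r =>
      updateDDB(r, localTableName, "key")
    }
  }

// Option 2: broadcast the value once and read it on the executors.
val tableNameBc = spark.sparkContext.broadcast(tableName)
df.writeStream
  .foreachBatch { (batchDF: DataFrame, _: Long) =>
    batchDF.rdd.foreach { r =>
      updateDDB(r, tableNameBc.value, "key")
    }
  }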
I have something like this, using Akka, Alpakka + Slick
Slick
  .source(
    sql"""select #${onlyTheseColumns.mkString(",")} from #${dbSource.table}"""
      .as[Map[String, String]]
      .withStatementParameters(rsType = ResultSetType.ForwardOnly, rsConcurrency = ResultSetConcurrency.ReadOnly, fetchSize = batchSize)
      .transactionally
  ).map( doSomething )...
I want to update this plain SQL query so that it skips the first N elements.
But that is very DB specific.
Is it possible to get the pagination bit generated by Slick? (Like for type-safe queries, where you just do a drop, filter, take?)
ps: I don't have the schema, so I cannot go the type-safe way; I just want all tables as Maps, and to filter, drop, etc. on them.
ps2: at the Akka level, flow.drop works, but it's not optimal/slow, because it still consumes the rows.
Cheers
Since you are using plain SQL, you have to provide workable SQL in the code snippet yourself. Plain SQL is not type-safe, but it is flexible.
BTW, the most efficient way is to skip the first N elements in the database itself, for example with LIMIT/OFFSET in MySQL.
Depending on your database engine, you could use something like:
val page = 1
val pageSize = 10
val query = sql"""
select #${onlyTheseColumns.mkString(",")}
from #${dbSource.table}
limit #${pageSize + 1}
offset #${pageSize * (page - 1)}
"""
The pageSize + 1 part tells you whether a next page exists.
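A small sketch of how that check can look in code, assuming a db handle and the same implicit GetResult[Map[String, String]] the question already relies on:
import scala.concurrent.Await
import scala.concurrent.duration._

val rows: Seq[Map[String, String]] =
  Await.result(db.run(query.as[Map[String, String]]), 30.seconds)
val hasNextPage = rows.size > pageSize   // the extra row signals that another page exists
val pageRows = rows.take(pageSize)       // drop the sentinel row before handing the page out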
I want to update this plain SQL query so that it skips the first N elements. But that is very DB specific.
As you're concerned about changing the SQL for different databases, I suggest you abstract away that part of the SQL and decide what to do based on the Slick profile being used.
If you are working with multiple database products, you've probably already abstracted away from any specific profile, perhaps using JdbcProfile. In that case, you could place your "skip N elements" helper in a class and use the active slickProfile to decide on the SQL to use. (As an alternative, you could of course check via some other means, such as an environment value you set.)
In practice that could be something like this:
case class Paginate(profile: slick.jdbc.JdbcProfile) {
  // Return the correct LIMIT/OFFSET SQL for the current Slick profile
  def page(size: Int, firstRow: Int): String =
    if (profile.isInstanceOf[slick.jdbc.H2Profile]) {
      s"LIMIT $size OFFSET $firstRow"
    } else if (profile.isInstanceOf[slick.jdbc.MySQLProfile]) {
      s"LIMIT $firstRow, $size"
    } else {
      // And so on... or a default
      // Danger: I've no idea if the above SQL is correct - it's just placeholder
      ???
    }
}
Which you could use as:
// Import your profile
import slick.jdbc.H2Profile.api._
val paginate = Paginate(slickProfile)
val action: DBIO[Seq[Int]] =
sql""" SELECT cols FROM table #${paginate.page(100, 10)}""".as[Int]
In this way, you get to isolate (and control) RDBMS-specific SQL in one place.
To make the helper more usable, and as slickProfile is implicit, you could instead write:
def page(size: Int, firstRow: Int)(implicit profile: slick.jdbc.JdbcProfile) =
// Logic for deciding on SQL goes here
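A hypothetical call site for that variant, assuming the same imports as above and that page has been brought into scope from wherever you define it:
implicit val profile: slick.jdbc.JdbcProfile = slick.jdbc.H2Profile

val action: DBIO[Seq[Int]] =
  sql"""SELECT cols FROM table #${page(100, 10)}""".as[Int]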
I feel obliged to comment that using a splice (#$) in plain SQL opens you to SQL injection attacks if any of the values are provided by a user.
We have a weekly backup process which exports our production Google Appengine Datastore onto Google Cloud Storage, and then into Google BigQuery. Each week, we create a new dataset named like YYYY_MM_DD that contains a copy of the production tables on that day. Over time, we have collected many datasets, like 2014_05_10, 2014_05_17, etc. I want to create a data set Latest_Production_Data that contains a view for each of the tables in the most recent YYYY_MM_DD dataset. This will make it easier for downstream reports to write their query once and always retrieve the most recent data.
To do this, I have code that gets the most recent dataset and the names of all the tables that dataset contains from the BigQuery API. Then, for each of these tables, I fire a tables.insert call to create a view that is a SELECT * from the table I am looking to create a reference to.
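A rough sketch of that loop, for context (table_ids_in_latest_dataset and latest_dataset are placeholders; table_service is the google-api-python-client tables() resource also used in the workaround code further down):
for table_id in table_ids_in_latest_dataset:
    body = {
        'tableReference': {
            'projectId': BQ_PROJECT_ID,
            'datasetId': 'Latest_Production_Data',
            'tableId': table_id,
        },
        'view': {
            # Legacy SQL reference to the table in the most recent dated dataset
            'query': 'SELECT * FROM [%s.%s]' % (latest_dataset, table_id),
        },
    }
    table_service.insert(
        projectId=BQ_PROJECT_ID,
        datasetId='Latest_Production_Data',
        body=body
    ).execute()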
This fails for tables that contain a RECORD field, from what looks to be a pretty benign column-naming rule.
For example, I have a table named AccountDeletionRequest, exported from the Datastore (so it contains a __key__ RECORD field), for which I issue this API call:
{
  'tableReference': {
    'projectId': 'redacted',
    'tableId': u'AccountDeletionRequest',
    'datasetId': 'Latest_Production_Data'
  },
  'view': {
    'query': u'SELECT * FROM [2014_05_17.AccountDeletionRequest]'
  },
}
This results in the following error:
HttpError: https://www.googleapis.com/bigquery/v2/projects//datasets/Latest_Production_Data/tables?alt=json returned "Invalid field name "__key__.namespace". Fields must contain only letters, numbers, and underscores, start with a letter or underscore, and be at most 128 characters long.">
When I execute this query in the BigQuery web console, the columns are renamed to translate the . to an _. I kind of expected the same thing to happen when I issued the create view API call.
Is there an easy way I can programmatically create a view for each of the tables in my dataset, regardless of their underlying schema? The problem I'm encountering now is for record columns, but another problem I anticipate is for tables that have repeated fields. Is there some magic alternative to SELECT * that will take care of all these intricacies for me?
Another idea I had was doing a table copy, but I would prefer not to duplicate the data if I can at all avoid it.
Here is the workaround code I wrote to dynamically generate a SELECT statement for each of the tables:
def get_leaf_column_selectors(dataset, table):
    schema = table_service.get(
        projectId=BQ_PROJECT_ID,
        datasetId=dataset,
        tableId=table
    ).execute()['schema']

    return ",\n".join([
        _get_leaf_selectors("", top_field)
        for top_field in schema["fields"]
    ])

def _get_leaf_selectors(prefix, field):
    if prefix:
        format = prefix + ".%s"
    else:
        format = "%s"

    if 'fields' not in field:
        # Base case
        actual_name = format % field["name"]
        safe_name = actual_name.replace(".", "_")
        return "%s as %s" % (actual_name, safe_name)
    else:
        # Recursive case
        return ",\n".join([
            _get_leaf_selectors(format % field["name"], sub_field)
            for sub_field in field["fields"]
        ])
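For example (hypothetical usage), the generated selectors are then spliced into the view query in place of SELECT *:
selectors = get_leaf_column_selectors('2014_05_17', 'AccountDeletionRequest')
view_query = 'SELECT\n%s\nFROM [2014_05_17.AccountDeletionRequest]' % selectors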
We had a bug where you needed to select out the individual fields in the view and use an 'as' to rename the fields to something legal (i.e. names without '.' in them).
The bug is now fixed, so you shouldn't see this issue anymore. Please ping this thread or start a new question if you see it again.
If I were retrieving the data I wanted from a plain sql query, the following would suffice:
select * from stvterm where stvterm_code > TT_STUDENT.STU_GENERAL.F_Get_Current_term()
I have a grails domain set up correctly for this table, and I can run the following code successfully:
def a = SaturnStvterm.findAll("from SaturnStvterm as s where id > 201797") as JSON
a.render(response)
return false
In other words, I can hardcode the result of the Oracle function and have the HQL run correctly, but it chokes on every way I can figure out to call the function itself. I have read through some of the documentation on Hibernate about using procs and functions, but I'm having trouble making much sense of it. Can anyone give me a hint as to the proper way to handle this?
Also, since I think it is probably relevant, there aren't any synonyms in place that would allow the function to be called without qualifying it as schema.package.function(). I'm sure that'll make things more difficult. This is all for Grails 1.3.7, though I could use a later version if needed.
To call a function in HQL, the SQL dialect must be aware of it. You can add your function at runtime in BootStrap.groovy like this:
import org.hibernate.dialect.function.SQLFunctionTemplate
import org.hibernate.Hibernate
def dialect = applicationContext.sessionFactory.dialect
def getCurrentTerm = new SQLFunctionTemplate(Hibernate.INTEGER, "TT_STUDENT.STU_GENERAL.F_Get_Current_term()")
dialect.registerFunction('F_Get_Current_term', getCurrentTerm)
Once registered, you should be able to call the function in your queries:
def a = SaturnStvterm.findAll("from SaturnStvterm as s where id > F_Get_Current_term()")