BigQuery UDF supported outside the BigQuery API - google-bigquery

I'm using a third-party tool to query our data stored in BigQuery. The third-party tool uses a BigQuery JDBC driver. I would like to take advantage of UDFs, but I do not see any documentation or support for UDFs in the JDBC driver. Is it supported? If not, is there an ETA?

We're currently only supporting UDFs through the API, but we do have work planned to support declarative definition (and persistence) of your functions. This won't ship before 2016.
Just out of curiosity, what tool are you using? Tableau?

There is an inline way to define a UDF, but it is "alpha", unsupported, and undocumented. Use at your own risk.
https://stackoverflow.com/a/36208489/2259571
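For reference, the inline form shown in that answer is a `js(...)` table function that takes an input query, the input column names, a JSON output schema, and the JavaScript function body as a string. The sketch below is reconstructed from memory of the linked answer, so treat every detail (argument order, schema format, table name) as unverified:

```sql
-- Legacy SQL: inline JavaScript UDF via the undocumented js(...) form.
-- Alpha/unsupported; the exact shape may differ from this sketch.
SELECT title_length
FROM js(
  (SELECT title FROM [publicdata:samples.wikipedia] LIMIT 10),   -- input query
  title,                                                         -- input column(s)
  "[{name: 'title_length', type: 'integer'}]",                   -- output schema
  "function(row, emit) { emit({title_length: row.title.length}); }")
```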

Related

SQL builder for CockroachDB

I'm trying to find an SQL builder library that supports CockroachDB and works well with spring-boot-webflux and spring-data-r2dbc, or at least a library that will write dynamic SQL statements as strings that I can pass to the database client executor.
The native spring-data-r2dbc DatabaseClient API doesn't support join statements (https://docs.spring.io/spring-data/r2dbc/docs/1.0.x/reference/html/#r2dbc.datbaseclient.fluent-api.select), which is why I need another way to write dynamic SQL beyond plain strings.
jOOQ doesn't support CockroachDB yet (https://github.com/jOOQ/jOOQ/issues/8545).
Is there any library that fits my expectations?
PM at Cockroach here. We are working with the team at jOOQ to provide CockroachDB support in the next few months. In the meantime, you can use pgjdbc with something like JDBI to do query binding. We are also working on a CockroachDB dialect for Hibernate, but you can use the current Postgres dialect as well.

How to use UDFs in the new BigQuery UI (BETA)? I didn't find the option anywhere in the UI

How to use UDFs in the new BigQuery UI (BETA)? I didn't find the option anywhere in the UI.
I'm trying to use my UDFs in the new web UI, but I'm not able to find the option.
Please let me know whether the new UI will support UDFs or not.
Mikhail's comment is accurate. We currently have no plans to support legacy SQL UDFs in the new BQ UI.
Instead we recommend migrating to standard SQL and using standard SQL UDFs, which appear inline in the query text.
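As an illustration, a standard SQL UDF is defined as a temporary function directly in the query text that uses it (the function name and values here are made up for the example):

```sql
-- Standard SQL: a temporary UDF defined inline with the query that calls it
CREATE TEMP FUNCTION MultiplyBy(x FLOAT64, factor FLOAT64)
RETURNS FLOAT64
AS (x * factor);

SELECT MultiplyBy(3.0, 4.0) AS product;  -- 12.0
```

Because the definition travels with the query, no separate UI support is needed.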

Standard or Legacy SQL for Google Analytics Data in BigQuery?

We are just starting to use Google Analytics data in BigQuery; previously we used only MS SQL Server in our work environment. We would like to move some of the analysis to GCP and BigQuery, but we cannot decide which is the better option to use - standard or legacy SQL?
In both cases we would have to adjust to a new language version, but the real question is: what is the best choice when it comes to Google Analytics data analysis? Is there something that, from a technical point of view, should make us choose legacy over standard, or the other way around?
It is very confusing for us that there are two versions, because legacy seems to be more developed now, but perhaps standard will be the main SQL version in BigQuery in the future?
BigQuery Standard SQL is the way to go. It has many more features than Legacy SQL.
Note: it is not a binary choice. You can always use Legacy SQL if there is something you find easier to express with it. From my experience it is mostly the opposite, with very few exceptions. The most prominent exception (for me, at least) is table decorators - support for table decorators in standard SQL is planned but not yet implemented.
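For context, legacy SQL table decorators let you query a snapshot of a table at a point in its recent history; a sketch (the table name is illustrative, and the offset is a negative number of milliseconds relative to now):

```sql
-- Legacy SQL snapshot decorator: query the table as it was one hour ago
-- (@-3600000 = 3,600,000 ms before now)
SELECT COUNT(*) FROM [mydataset.mytable@-3600000]
```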
I would recommend looking into Migrating from legacy SQL - not from a migration point of view, since you are new to BigQuery, but because it is a good place to see and compare the features of both dialects in one place.
I also recommend checking the BigQuery Issue Tracker to get some extra insight.
Standard SQL is the preferred SQL dialect for use in BigQuery, as stated in the migration guide. While legacy SQL has been around for quite some time--and is still the default at the time of this writing--there is no active development work on it. If you are evaluating which to use, you should pick standard SQL, since in addition to being more similar to T-SQL (SQL Server's dialect) it is more expressive, has fewer surprising edge cases, and generally has more features.
Go with Standard SQL, as that's on the long-term roadmap.
In my experience some queries run faster under Legacy SQL, but this is changing, as Standard SQL is the dialect under active development.
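As a quick illustration of the dialect differences discussed above, here is the same query in both dialects against a real public sample table; note the different table-reference syntax:

```sql
-- Legacy SQL: square-bracket table references; a comma between tables means UNION ALL
SELECT word, word_count
FROM [bigquery-public-data:samples.shakespeare]
LIMIT 5;

-- Standard SQL: backtick table references; a comma between tables means CROSS JOIN
SELECT word, word_count
FROM `bigquery-public-data.samples.shakespeare`
LIMIT 5;
```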

Can I use MADlib with Amazon Redshift?

The MADlib website suggests it is compatible with PostgreSQL. Amazon Redshift is based on PostgreSQL. Can I install MADlib on Redshift?
The MADlib website does say it is compatible with Postgres, but you get the full advantage of MADlib when you use it with an MPP (massively parallel processing) database. MADlib also relies on some internal Python libraries that are similar in Postgres and Greenplum but may not be available in Amazon Redshift. It would be better to use it with Greenplum, which is now open source and entirely based on Postgres; otherwise you will not get the most out of it.

Can you build and run models and statistical functions in Google's BigQuery?

How can you develop or use statistical functions in Google's BigQuery? Can you run Java, PHP, R, etc. on the Google platform, or does it mainly support only SQL-type features?
Google BigQuery supports a fairly limited set of built-in statistical functions (AVG, VARIANCE, QUANTILES, and the like). If you want to use R, Java, etc., you need to extract the data first. There is an open source JDBC driver that you can use from Java, or you can use the BigQuery client libraries. As for R, there have been a couple of examples of people writing an R connector to run BigQuery queries and manipulate the results as an R data frame, but I don't know the details.
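The built-in aggregates mentioned above can be used directly in a query; for example, in legacy SQL against a public sample table (column choice is illustrative):

```sql
-- Legacy SQL: built-in statistical aggregates
SELECT
  AVG(word_count) AS avg_count,
  STDDEV(word_count) AS stddev_count
FROM [publicdata:samples.shakespeare];

-- QUANTILES(expr, n) returns n approximate quantile boundaries
SELECT QUANTILES(word_count, 5) AS quintile_bounds
FROM [publicdata:samples.shakespeare];
```

Anything beyond aggregates like these (model fitting, hypothesis tests, etc.) requires pulling the data out into R, Python, Java, and so on.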
If Python is a choice, the pandas library has a very simple connector for reading from and writing to BigQuery. Look for pandas.read_gbq.
I just started using R and was also interested in connecting to BigQuery from there. Unfortunately, the BigQuery client for R referenced here has been taken down.
I did happen to find a link to the connector in the CRAN archive, though I haven't started testing it yet.