Running multiple queries in BigQuery web UI - google-bigquery

I am using the BigQuery web UI for running my queries. I want to delete some specific rows from all tables in a dataset, and I want to do it by running all the delete queries in one go, like below:
DELETE FROM `dataset_name.tabl_name_1` WHERE REGEXP_CONTAINS(user_dim.user_id, r'g_1478_h_1.') = TRUE;
DELETE FROM `dataset_name.tabl_name_2` WHERE REGEXP_CONTAINS(user_dim.user_id, r'g_1478_h_1.') = TRUE;
DELETE FROM `dataset_name.tabl_name_3` WHERE REGEXP_CONTAINS(user_dim.user_id, r'g_1478_h_1.') = TRUE
There are almost 500 tables, so there will be about 500 queries to run in one go. I have unchecked the 'Use Legacy SQL' option.
But running the above queries (almost 500) returns this error:
Syntax error: Unexpected keyword DELETE at [2:1]
Is there any solution to my problem?

You cannot do this in the BigQuery web UI!
Your best option here is to use the BigQuery client of your preference and script those repetitive statements.
Keep in mind the quotas/limitations for DML.
Edit (October 2019):
Support for scripting and stored procedures is now in beta.
You can submit multiple statements separated by semicolons, and BigQuery is now able to run them - see the sketch below.
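For example, a minimal sketch assuming the dataset, table, and column names from the question (adjust those and the regex to your own schema) - a script can loop over INFORMATION_SCHEMA.TABLES and issue one DELETE per table instead of pasting 500 statements by hand:
DECLARE tables ARRAY<STRING>;
DECLARE i INT64 DEFAULT 0;

-- Collect every table name in the dataset (assumes all of them should be touched).
SET tables = (
  SELECT ARRAY_AGG(table_name)
  FROM `dataset_name.INFORMATION_SCHEMA.TABLES`
);

-- Run one DELETE per table; each statement still counts against the DML quotas.
WHILE i < ARRAY_LENGTH(tables) DO
  EXECUTE IMMEDIATE FORMAT("""
    DELETE FROM `dataset_name.%s`
    WHERE REGEXP_CONTAINS(user_dim.user_id, r'g_1478_h_1.')
  """, tables[OFFSET(i)]);
  SET i = i + 1;
END WHILE;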

I get around this by putting the queries in a Cloud Function (using Python) and scheduling it with the new Cloud Scheduler. It works fine, but it would be easier in BQ itself.

Related

Running SQL via SQLWorkbench versus via Tableau Prep

I have developed some SQL that reads from a Redshift table, does some manipulation (especially LISTAGG on some fields), and then writes to another Redshift table.
When I run the SQL using SQLWorkbench it executes successfully. When I embed it in a Tableau Prep flow (as "Complex SQL") I get several of these errors: "System error: AqlProcessor evaluation failed: [Amazon][Support] (40550) Invalid character value for cast specification." Presumably these relate to my treatment of data types. What I don't understand is what is so different about the two environments that it would cause different results like this. Is it because SQLWorkbench and Tableau Prep use different SQL interpreters? Or is my question too broad to even speculate without going through the actual code?
Best guess is that Tableau, which has knowledge of the DDL, is adding some CAST() operations to the SQL. SQLWorkbench is simpler and is pushing the SQL to Redshift as written. This is based on there being no explicit CASTs in your SQL but an error message that identifies a CAST().
Look at stl_querytext for these two queries and see if they are being given to Redshift differently by the two benches (a sketch of that check is below). I suspect this will give you some clues to go on.
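Something along these lines - the query IDs are placeholders, so look them up in stl_query first; Redshift splits long statements across rows, which is why they are stitched back together with LISTAGG:
-- Reassemble the SQL text Redshift actually received from each client.
SELECT query,
       LISTAGG(text, '') WITHIN GROUP (ORDER BY sequence) AS full_sql
FROM stl_querytext
WHERE query IN (123456, 123457)  -- placeholder IDs: one run from SQLWorkbench, one from Tableau Prep
GROUP BY query;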
If there are no differences in the SQL then the issue may be with user / connection differences and more info will likely be needed about the issue.

IBM SPSS How to import a Custom SQL Database Query

I am looking to see if the capability is there to have a custom SQL Server (SSMS) query imported into SPSS (Statistical Package for the Social Sciences). I would want to build syntax that generates this query as my new dataset, which I can then use to continue my scripted analysis. I see the basic capability of querying one table from a SQL Server, but I would like to create a query that joins many tables. I anticipate the query will be a bit complex, with many joins and perhaps data transformations.
Has anybody had experience or a solution to this situation?
I know I could take the query and make a table of it that SPSS can then connect to, but my data changes daily, and I would need a job in another application to refresh this table before my SPSS syntax could pull it. I would like to eliminate that first step by just having the query that grabs the data at the beginning of my syntax.
Ultimately I am looking to build out my SPSS syntax and schedule it in the Production Facility to run daily.

Executing T-SQL query from hard drive

I have T-SQL queries stored on a hard drive: I:\queries\query1.sql and I:\queries\query2.sql.
I usually work in a way that I execute a query from a drive, and then I copy results into Excel, and then I work on it.
My problem here is that query1.sql is already long, and now I would like to extend it by getting a result of query2.sql, and join it with a result of query1.sql.
What I could do is appending a code from query2.sql to query1.sql. But then the query is getting really long and hard to maintain.
I would like to do something like this:
SELECT * FROM ("Result of I:\queries\query1.sql") q1
LEFT JOIN ("Result of I:\queries\query2.sql") q2 ON q1.ID=q2.ID
Is there any way to write a query or stored procedure, which will be again stored on a drive to do this?
Basically, you need to ask your DBA for a database where you are able to store things. This can be on the same system where the data is stored, or it could be on a linked system. Gosh, you could even run SQL Server locally and store the information and data there.
Then, the queries that you are storing in files should be views in the database. You can then run the queries and store and combine the results locally.
You are essentially recreating database functionality using text files and data files -- going through a lot of effort when SQL Server already supports this functionality.
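For instance, a minimal sketch - all object names here are made up, and the view bodies would be the contents of query1.sql and query2.sql:
-- Each saved .sql file becomes a view in the database.
CREATE VIEW dbo.vQuery1 AS
SELECT ID, SomeColumn
FROM dbo.SourceTable1;
GO

CREATE VIEW dbo.vQuery2 AS
SELECT ID, OtherColumn
FROM dbo.SourceTable2;
GO

-- The join from the question then stays short and maintainable.
SELECT q1.*, q2.OtherColumn
FROM dbo.vQuery1 AS q1
LEFT JOIN dbo.vQuery2 AS q2 ON q1.ID = q2.ID;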
To expand on Gordon's comment (+1), why are you running scripts off of a drive? Most DBAs I've known would threaten bodily harm over this, as executing code that they can't control / troubleshoot / see in source control brings a whole host of security and supportability issues.
Far better to store this code in a stored procedure, which will have a saved query execution plan, can be tracked using various DMVs, and can have permissions assigned to it. Your outside Excel doc can then just set up a connection and execute the SP (a sketch follows below).
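For example - the procedure name is illustrative and assumes the hypothetical views sketched above:
-- Wrap the combined query in a stored procedure that Excel can execute directly.
CREATE PROCEDURE dbo.usp_CombinedReport
AS
BEGIN
    SET NOCOUNT ON;

    SELECT q1.*, q2.OtherColumn
    FROM dbo.vQuery1 AS q1
    LEFT JOIN dbo.vQuery2 AS q2 ON q1.ID = q2.ID;
END;
GO

-- Excel (or any client) then just runs:
EXEC dbo.usp_CombinedReport;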

Teradata SQL & Cognos. How can I customize Cognos to accept customized & more efficiently performing SQL?

Right at the outset I'd like to say that I am NOT a Cognos guy, so I have totally disconnected myself from developing Cognos cubes / reports, whatever you want to call them.
There are auto-generated Cognos queries - very badly written - that cause the Teradata (DBS 15.1.x) system to hog spool and CPU. I can tune them beautifully after I pull them out of DBQL. I want to know HOW I can implement custom queries that can be run periodically as batch reports, instead of Cognos auto-generating these queries.
E.g. you create a cube - it writes code behind it - and then you can open that code and write custom code that is equivalent to the original but performs a lot better. Then when you open the cube again, it remembers there is custom SQL and runs that instead of its own auto-generated SQL. This is just how I imagine one way it could work, but again - I am not a Cognos resource, so please don't flag me down for lack of knowledge. That is exactly what I am trying to get an idea about.
Thanks for bearing with me
In Framework Manager you can create one Query Subject with your complex query inside. Do not import tables etc. - just create the QS and put your query inside.
You need to use a stored procedure to return your expected data and add it to the model.
Then, instead of using a couple of tables in Cognos Report Studio (and joins), add one query and point it at your stored procedure. This way your Cognos report will execute the procedure instead of generating the query itself (which may not be efficient in many cases). A sketch of such a procedure is below.
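Roughly along these lines - a hedged sketch where the database, table, and column names are placeholders for your own hand-tuned SQL pulled from DBQL:
-- A Teradata stored procedure that returns the tuned result set to Cognos.
CREATE PROCEDURE rpt_db.usp_sales_summary()
DYNAMIC RESULT SETS 1
BEGIN
  DECLARE result_cur CURSOR WITH RETURN ONLY FOR
    SELECT s.store_id,
           SUM(s.sales_amt) AS total_sales
    FROM rpt_db.daily_sales s
    GROUP BY s.store_id;
  OPEN result_cur;
END;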

Dealing with enormous datasets with Impala

I have a general question about Impala vs some traditional SQL database systems. I've heard that Impala can take certain SQL statements quite literally and spit out tables with billions of rows (such as what might happen with a join that produces duplicate rows). As a narrower example, suppose I run something like "SELECT * FROM database". As far as immediate console output is concerned, I understand that most traditional SQL databases will stop running when a limit of, say, 1000 entries is reached. Is the same true of Impala? In other words, if I run "SELECT * FROM database" in Impala, is it in theory doing more work, even though it will ultimately spit out a limited number of rows?
I think it depends what you use to run the query. If you just run it from the command line in Bash or the Impala shell, it will fetch all the results; however, if you use Hue, it will page through the results like you describe. Actually the same is true for any database: if you're using a GUI to access it, you can run something like an export-to-CSV command to get the full result set, or if you're accessing it programmatically, you would use fetchall().
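One thing the answer doesn't mention, but worth noting: if you only want a preview, an explicit LIMIT keeps Impala from shipping the whole table back regardless of which client you use (the table name here is illustrative):
-- Only return a preview instead of the full result set.
SELECT * FROM my_table LIMIT 1000;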