Every time I attempt to save a view that contains "create or replace table" I get the error: "Only SELECT statements are allowed in view queries."
I am using standard SQL in Google Cloud BigQuery. I do not want the view to be a scheduled query, and the view is too large to run as-is; it errors out asking me to create a table that allows large results. I want the query to update its associated table every time it is run. I came across the WRITE_TRUNCATE syntax, but I do not know how to use it.
CREATE OR REPLACE TABLE table.test1 AS (
  -- all my other queries
)
I just expect BigQuery to allow me to run the statement and recreate the table.
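For reference, a minimal sketch of how WRITE_TRUNCATE is typically supplied: the SELECT is run as a query job with a destination table that gets overwritten on every run, here with the BigQuery Python client. The project, dataset, and table names are placeholders, and this is only an assumption about the intended setup:

from google.cloud import bigquery

client = bigquery.Client()

# Overwrite the destination table with the query results on every run.
# "my-project.my_dataset.test1" is a placeholder, not a real table.
job_config = bigquery.QueryJobConfig(
    destination="my-project.my_dataset.test1",
    write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
)
client.query("SELECT ...", job_config=job_config).result()  # replace SELECT ... with the real query

Along the same lines, the CREATE OR REPLACE TABLE statement above can be run as an ordinary query (or a saved query) rather than being saved as a view, since views only accept SELECT.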
I have a scheduled query that gets data from some tables and aggregates them into a single table, which I use as a source in a Data Studio dashboard. One of these tables (table1) is updated daily, sometimes more than once. I wanted to know if there is a way to automatically run the schedule every time this table (table1) is updated.
Unfortunately, BigQuery scheduled queries only take the run_time and run_date parameters, so if you want to do this with BigQuery alone, your only option is to schedule multiple runs at different times.
Additionally, triggers are not supported in BigQuery.
I recommend using a Cloud Run service that receives an event after every insert into table1 and re-runs the aggregation; an event trigger of that kind could do what you are looking for without any scheduled queries.
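As a rough sketch of that idea, and only as an assumption about how it could be wired up (for example, an event trigger on insert activity pointed at a Cloud Run service), the service itself could simply re-run the aggregation. All project, dataset, and table names below are placeholders:

from flask import Flask
from google.cloud import bigquery

app = Flask(__name__)
client = bigquery.Client()

# Placeholder SQL: replace with the aggregation the scheduled query runs today.
AGGREGATION_SQL = """
CREATE OR REPLACE TABLE `my_project.my_dataset.aggregated_table` AS
SELECT *
FROM `my_project.my_dataset.table1`
"""

@app.route("/", methods=["POST"])
def handle_event():
    # Invoked by the trigger after an insert into table1;
    # we simply re-run the aggregation and return an empty response.
    client.query(AGGREGATION_SQL).result()
    return ("", 204)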
I inherited an Access-based dashboard which uses a series of SQL queries and Make Table actions to write the results of those queries into tables located in two other Access files.
There is a macro that runs all of the make table commands, which in turn run the SQL queries. All of them work when I run this on my machine, but when I run it from our VM, which handles scheduled refreshes, it creates a duplicate table for one of the queries.
If I delete that table from the Tables1 database, the Queries database will run successfully and create both the correct table, and the duplicate in Tables1. However, each subsequent time, the macro will fail with an error saying that the duplicate table already exists.
This is the make table SQL:
SELECT [SQLQueryName].*
INTO [TableName]
IN 'filepath\Tables1.accdb'
FROM SQLQueryName;
This is the same structure that all the other make table queries use, and they are not having this issue. I'm not sure if this matters or will tell someone something, but the duplicate table has the same name with a 1 added to the end. We also recently had to get a new VM set up, and there have been a lot of weird issues with things not working as they did on the previous VM, where this had been running without issue for quite a long time.
So far I've tried compacting both the Query and Table1 Database files. I've tried deleting the make table query and making a new version. I've tried deleting the table and duplicate from Tables1. I've tried rolling back the database files to versions several weeks old. We also made sure that the version of Access is the same as on my PC.
I am currently attempting to change it to a delete row/append row method rather than a make table, but even if that works I would still love to know why this is happening.
Edit:
Actual code from the make table query that is failing (filepath removed):
SELECT [012_HHtoERorOBS].*
INTO [HHtoER-Obs]
IN '\\filepath\MiscTablesB.accdb'
FROM 012_HHtoERorOBS;
Here is code from two other make table queries that are working. Each make table query follows an identical format: select * from the SQL query to get the data into the table in the destination .accdb files.
SELECT [010_ScheduledVisitsQuery].*
INTO ScheduledVisits
IN '\\filepath\MiscTablesB.accdb'
FROM 010_ScheduledVisitsQuery;
SELECT [020_HH_HO].*
INTO HH_Referred_to_HO
IN '\\filepath\MiscTablesB.accdb'
FROM 020_HH_HO;
All of these tables exist in the destination .accdb files when the make table queries are run. The macro does not include any commands to delete tables. Here is a screenshot of the top of the macro; it repeats all the make table queries and then ends with a command to quit Access.
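For what it's worth, a minimal sketch of the delete row/append row alternative mentioned above, written against the failing table using Access SQL's IN clause. The names are taken from the examples in the question, and the exact behavior should be verified in Access:

DELETE * FROM [HHtoER-Obs] IN '\\filepath\MiscTablesB.accdb';

INSERT INTO [HHtoER-Obs] IN '\\filepath\MiscTablesB.accdb'
SELECT [012_HHtoERorOBS].*
FROM 012_HHtoERorOBS;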
I'm looking for a way to speed up the following process: I have an SSIS package that loads data from Excel files on a weekly basis to SQL Server. There are 3 fields: Brand, Date, Value.
In the data flow, I check for existing combinations of Brand+Date; new combinations go to the table directly, and the existing ones go to a Recordset destination for updates.
The next step is to update the Value of the existing combinations.
As you can see, there are thousands of records to update, and it takes too long. The number of records tends to grow week by week. Please suggest a faster approach.
The fastest way will be to do this inside a stored procedure using an ELT (Extract, Load, Transform) approach.
Push all the data from Excel as-is into a table (in other words, load it into a staging table). Since you do not seem to be concerned with data validation steps, this table can be a replica of the final destination table's columns.
The next step is to call a stored procedure using an Execute SQL Task. Inside this procedure you can put all your business logic. Since this step uses native data manipulation on SQL Server entities, it is the fastest alternative.
As a last step, delete all entries from the staging table.
You can add indexes on the staging table to make the stored procedure part even faster.
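A minimal sketch of what that stored procedure could look like; the table and column names (dbo.Staging_BrandValues, dbo.BrandValues, Brand, Date, Value) are assumptions based on the fields described in the question:

CREATE PROCEDURE dbo.usp_LoadBrandValues
AS
BEGIN
    SET NOCOUNT ON;

    -- Update the Value of Brand+Date combinations that already exist.
    UPDATE d
    SET d.Value = s.Value
    FROM dbo.BrandValues AS d
    INNER JOIN dbo.Staging_BrandValues AS s
        ON s.Brand = d.Brand AND s.[Date] = d.[Date];

    -- Insert the Brand+Date combinations that are new.
    INSERT INTO dbo.BrandValues (Brand, [Date], Value)
    SELECT s.Brand, s.[Date], s.Value
    FROM dbo.Staging_BrandValues AS s
    WHERE NOT EXISTS (
        SELECT 1
        FROM dbo.BrandValues AS d
        WHERE d.Brand = s.Brand AND d.[Date] = s.[Date]
    );

    -- Clear the staging table for the next weekly load.
    TRUNCATE TABLE dbo.Staging_BrandValues;
END;

With this in place, the package's data flow only needs to land the Excel rows in the staging table before an Execute SQL Task calls the procedure.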
If every query creates a temporary table, is it possible to find out the name of that temporary table in a subsequent SQL statement?
Per https://cloud.google.com/bigquery/querying-data :
"All query results are saved to a table, which can be either persistent or temporary: A temporary table is a randomly named table saved in a special dataset; the table has a lifetime of approximately 24 hours. Temporary tables are not available for sharing, and are not visible using any of the standard list or other table manipulation methods."
For example:
/* query 1 */
select * from whatever;
/* query 2 */
select * from [temporary_table_that_just_got_created];
You cannot determine that name with BigQuery SQL from within the BigQuery Web UI.
But yes, you can find it by looking in the Query History:
Find your query in the history list and expand it by clicking on it.
The name you are looking for is under "Destination Table".
It is also possible to do this with the BigQuery API, which is very useful in client applications.
Use the Jobs: get API and look at configuration.query.destinationTable (https://cloud.google.com/bigquery/docs/reference/v2/jobs#resource)
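For example, a minimal sketch with the Python client library, which surfaces the job's configuration.query.destinationTable as job.destination (the query here is just a stand-in):

from google.cloud import bigquery

client = bigquery.Client()

# Run a plain SELECT; BigQuery saves the results in a temporary table.
job = client.query("SELECT 17 AS x")
job.result()  # wait for the query to finish

# The destination is the randomly named temporary table
# (configuration.query.destinationTable in the REST resource).
dest = job.destination
print(dest.project, dest.dataset_id, dest.table_id)

# A subsequent query can then reference that table directly.
rows = client.query(
    "SELECT * FROM `{}.{}.{}`".format(dest.project, dest.dataset_id, dest.table_id)
).result()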
I have a view that is returning four columns of data to be pushed to an external program. When I simply query the view ("Select * from schema.view_name") I get 10353 rows. When I run the actual SQL that created the view (I literally copied and pasted what Oracle had stored, minus the "Create or Replace" statement), I get 238745 rows.
Any ideas why this might occur?
Best guess: when you run the query standalone, you're not running it in the same schema the view was created in (I'm inferring this from the fact that you included the schema name in your example SELECT). The schema where you're running the query either has its own table with the same name as one of the view's base tables, or one of the names is a synonym pointing to yet another view that contains only a subset of the rows in the underlying table.
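One way to check this, as a sketch: for each base table named in the view's SQL, see what the name resolves to from each schema using the data dictionary (BASE_TABLE_NAME is a placeholder):

SELECT owner, object_name, object_type
FROM   all_objects
WHERE  object_name = 'BASE_TABLE_NAME';

SELECT owner, synonym_name, table_owner, table_name
FROM   all_synonyms
WHERE  synonym_name = 'BASE_TABLE_NAME';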