How to change a column mode in BigQuery

I have an INTEGER type column in a BigQuery table whose mode is set to NULLABLE. Is there any way to change the field mode from NULLABLE to REQUIRED, defaulting the existing NULL values to 0, using the API, the bq command-line tool, or the UI?
I have read the docs and they clearly show that the mode can be changed from REQUIRED to NULLABLE, but there is no hint on going the other way or on supplying a default of 0. The table already contains data which I don't want to lose.
Please help if there is any way I can do that.

Setting a default value is not supported; however, this can easily be achieved in a query using IFNULL.
For example:
SELECT IFNULL(a, 0) AS field
FROM (
  SELECT 2 AS a
  UNION ALL
  SELECT NULL AS a
)
If you are loading data from an external source, you could create a staging table and then run a query that uses IFNULL to generate the data for your main table. I would need more details to give a more specific answer.
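A minimal sketch of that staging approach, assuming a staging table dataset.staging and a main table dataset.main that both have the nullable INTEGER column a (all of these names are made up for illustration):
-- Rewrite NULLs as 0 while moving rows from the staging table into the main table.
INSERT INTO dataset.main (a)
SELECT IFNULL(a, 0) AS a
FROM dataset.staging;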

Related

What is the equivalent of Select into in Google bigquery

I am trying to write a SQL command in BigQuery that inserts data from one table into a new table without any insert statement, but I cannot find a way to do it (something similar to SELECT INTO).
Here is the table:
create table database1.table1
(
pdesc string,
num int64
);
And here is the insert statement. I also tried SELECT INTO, but it is not supported in BigQuery.
insert into database1.table1
select column1, count(column2) as num
from database1.table2
group by column1;
The above is a possible way to insert, but I am looking for a way where I do not need to use any select statement; I am looking for something similar to a SELECT INTO statement.
I am thinking of declaring variables and then somehow feeding the data into the tables, but I do not know how.
I am not a Google employee; however, I understand the reasoning for not supporting creating a copy of a table (or query result) from the console.
The challenge is that each table that is created must have a number of features defined, such as the associated project and expiry time.
Looking through the documentation (briefly), it is worth exploring the bq utility, specifically the cp command.
Explore the following operations:
cache the query results to a temporary table
get the name of said temporary table
pass to a copy table command perhaps?
Other methods are described in the Google Cloud documentation: https://cloud.google.com/bigquery/docs/managing-tables#copy-table
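A rough sketch of that bq route, reusing the table names from the question; the backup table name is an assumption, not something the question specifies:
# Run the query and write the result straight into the destination table
# (effectively SELECT INTO; --append_table appends, --replace would overwrite):
bq query --use_legacy_sql=false --append_table \
    --destination_table=database1.table1 \
    'SELECT column1 AS pdesc, COUNT(column2) AS num
     FROM database1.table2
     GROUP BY column1'

# Copy an existing (or cached temporary) table with cp:
bq cp database1.table1 database1.table1_backup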

Update after a Copy Data Activity in Azure Data Factory

I've got a doubt in Azure Data Factory. My pipeline has a Copy Data activity, and after loading the information into the table I need to update a field in that destination based on a parameter. It is a simple update, but given that we do not have a SQL task (as in SSIS) I do not know what to use. Creating a stored procedure for this does not seem to be the most appropriate solution, and besides, modifying the database is complicated. I thought the "Use Query" option in the Lookup activity could be a solution, but it does not let me create a SQL query with a parameter, just like in a Source.
What could be a possible workaround?
You are on the right track with the Lookup. That is definitely the way to go. The query field there will allow you to create dynamic SQL just like you did within the copy activity. You just need to reference the variable/parameter properly.
Also, the Lookup will always expect something to be returned. You don't have to do anything with that returned value; just ignore it, but the Lookup will not work without returning something. So, that query field would contain something like:
UPDATE dbo.MyTable SET IsComplete = 1 WHERE RunId = @{pipeline().parameters.runId};
SELECT 0 AS DummyValue; -- Necessary for Lookup to work

Using SSMA to convert from Access to SQL, scripting the fixes

I am using SSMA to convert from an Access db to a SQL 2019 DB.
There are some things I need to fix in the Access DB, so I am trying to figure out whether these things can be done via a query in Access or whether you have to use the goofy UI and do everything manually.
So I had a couple of questions about queries in Microsoft Access:
Can you modify the 'required' attribute on a column within a table by using a query?
Can you configure Index (dupes) on a column by using a query?
Can you change validation rules using a query?
Can you create/delete relationships using a query?
Can you change the field length of a column by using a query?
Any examples of any of these would be helpful; when I google for MS Access related things, all of the content is either about Access 2007/2010 or it is very UI heavy rather than query heavy.
I am trying to script this because I may have to do this migration several times.
Update: I was able to get most of what I needed figured out.
ALTER TABLE Users ALTER COLUMN Type CHECK(In ("I","U","") Or Is Null);
Still haven't found a way to change the 'ValidationRule'; I am trying to change it to
In ("I","U","") Or Is Null
Look into the Data Definition Language section of the MS Access SQL Reference, specifically the ALTER TABLE statement, which will cover the majority of your questions.
For example, in response to:
Can you change the field length of a column by using a query?
ALTER TABLE Table1 ALTER COLUMN Field1 TEXT(100)
The above will change the data type of the field Field1 within table Table1 to a text field accommodating 100 characters.
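For the other items on the list, some sketches in the same style; the table, column, index, and constraint names below are made up for illustration, and as far as I know the Access ValidationRule property itself cannot be set through this DDL (CHECK constraints only run when the statement is executed through ADO/ANSI-92 mode):
In response to "Can you modify the 'required' attribute on a column within a table by using a query?":
ALTER TABLE Table1 ALTER COLUMN Field1 TEXT(100) NOT NULL
In response to "Can you configure Index (dupes) on a column by using a query?":
CREATE INDEX idxField1 ON Table1 (Field1)
(add UNIQUE after CREATE to disallow duplicates; DROP INDEX idxField1 ON Table1 removes it)
In response to "Can you create/delete relationships using a query?":
ALTER TABLE ChildTable ADD CONSTRAINT fkParent FOREIGN KEY (ParentID) REFERENCES ParentTable (ID)
ALTER TABLE ChildTable DROP CONSTRAINT fkParent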

BigQuery CREATE TABLE differences between standard and legacy SQL

I have a few questions about the CREATE TABLE syntax in standard and legacy SQL:
The new BigQuery UI doesn't show standard SQL types and shows only legacy types. I understand they map one to one to the legacy types, but the examples for creating partitioned tables show options which are not available in the UI.
If I create a table using a JSON field schema, can I still use standard SQL?
The BigQuery UI only offers partitioning the table by ingestion time, but I want to partition by a date column and I don't see an option for it. If I have to write the DDL manually, I did not see examples of how to use a JSON field schema to construct a CREATE TABLE statement.
The new BigQuery UI doesn't show standard SQL types
BigQuery standard SQL and legacy SQL are two options for writing SQL syntax (see this link for more detail) and have nothing to do with the column types in BigQuery. Details on table types can be found in this link; I also find this link helpful.
If I create a table using a JSON field schema, can I still use standard SQL?
To create a table using a JSON schema you need to run the bq command-line tool. If you need help writing this syntax, let us know.
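A minimal sketch of that bq invocation; the dataset, table, and schema file names are assumptions:
# schema.json holds the field definitions, e.g.
# [{"name": "myDate", "type": "DATE"}, {"name": "cluster_col", "type": "STRING"}]
bq mk --table mydataset.mytable ./schema.json
The SQL dialect is chosen per query rather than per table, so a table created this way can still be queried with standard SQL.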
but I want to create a table with a date column and I don't see an option for it
You can use this standard SQL syntax to do this:
#standardSQL
CREATE OR REPLACE TABLE `project.dataset.tableId`
PARTITION BY myDate
CLUSTER BY cluster_col AS
SELECT * from sourceTable
Note: myDate is a DATE column in the source table

Check whether field exists in SQLite without fetching them all

I am writing a database abstraction layer that also abstracts some of the different query types. One of them is called "field_exists" - its purpose should be pretty self-explanatory.
And I want to implement that for SQLite.
The problem I am having is that I need to use one query that either returns a row confirming that the field exists or none if it doesn't. Thus, I cannot use the PRAGMA approach.
So, what query can I use to check whether a field exists in SQLite, that fulfills the above criteria?
EDIT: I should add that the query needs to be able to run in PHP code (using PDO).
Also, the query should look something like this (which only works with MySQL):
SHOW COLUMNS FROM table LIKE 'field'
Trying to select a field that doesn't exist will raise an exception; you can then catch it and return nothing.
Use the .schema TABLENAME command. It will tell you the command that was issued to create the table. For more info, check out the SQLite command shell documentation.
If you don't have access to the sqlite command line, you can always query the sqlite_master table. Let's say you want to know the command used to create the table MyTable. You'd issue this:
select sql from sqlite_master where name='MyTable';
This then gives you the sql command that was used to create the table. Then just grep through that output and see if the column you're looking for is in the command used to create the table.
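That grep step can also be folded into the query itself; here is a rough sketch, where MyTable and field are placeholders and the LIKE match is only a heuristic (it could also match part of another column's name):
-- Returns a row only when 'field' appears in MyTable's CREATE statement.
SELECT 1
FROM sqlite_master
WHERE type = 'table'
  AND name = 'MyTable'
  AND sql LIKE '%field%';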
UPDATE 2:
Actually, better than the SQL I posted above, you can use this:
PRAGMA table_info(table_name)
This will show you all the columns in a given table along with their types and other info.
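If the SQLite build is recent enough (3.16.0 or later, where PRAGMA functions are available), the same information can be read with an ordinary query, which also satisfies the PDO single-query requirement; MyTable and field are placeholders:
-- Returns one row if MyTable has a column named 'field', and none otherwise.
SELECT name
FROM pragma_table_info('MyTable')
WHERE name = 'field';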