I have a table with close to 40 columns (including 3 structs) which contains more than 1 TB of data. Now I need to add a new column to that table and refresh the complete table data so the new column is populated.
Could you please help me with the best/most optimized way to do this.
Thanks in Advance.
You can add new fields by editing the table schema.
After that, you can update the table with the new data.
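A minimal Standard SQL sketch of that idea, assuming a STRING column named new_col and placeholder dataset/table/column names (none of these names come from the question):

-- Add the column to the schema (metadata-only, no data is scanned).
ALTER TABLE mydataset.mytable ADD COLUMN new_col STRING;

-- Backfill it; on a ~1 TB table this UPDATE scans and rewrites the whole table,
-- so expect it to be billed accordingly.
UPDATE mydataset.mytable
SET new_col = CAST(existing_col AS STRING)  -- whatever expression derives the new value
WHERE TRUE;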
I'd like to append 2 datasets into one (both datasets have the same columns). To do this, I created a new dataset and set the destination table to the existing table I want to append to. However, when I do this, the destination table only contains data from the new table.
How can I make sure the new data is appended to the existing table?
Thanks
Let's say you have a table dataset1.tableA and a table in a different dataset, dataset2.tableB, and both tables have the same schema. You want to append dataset2.tableB to dataset1.tableA. Here is how you do it in Standard SQL via the BQ UI:
Set Destination Table: Dataset dataset1 & table ID tableA
Choose Write Preference: Append to table
Run query: SELECT * FROM dataset2.tableB
Now in your table dataset1.tableA you should have data from dataset2.tableB appended.
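The same append can also be done purely in SQL with a DML statement (a sketch, assuming the schemas match column-for-column):

INSERT INTO dataset1.tableA
SELECT * FROM dataset2.tableB;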
I have a couple of columns in my table and one of them is a CLOB with a JSON object.
I am working on a data extraction mechanism from the table and I was wondering if it is possible to create a new view with a new column containing a certain value from that JSON (for example, one column has rows with data like ...,"request":{"status":"open",.....} and I want a new column STATUS).
Do you have any ideas how I could achieve this?
You can use JSON_VALUE.
SELECT
    JSON_VALUE(jsonInfo, '$.request.status') AS status
FROM
    (VALUES ('{"request":{"status":"open"}}')) J(jsonInfo)
Result:
status
------------
open
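Applied to your case, a rough sketch of such a view, assuming the same JSON_VALUE function shown above is available; the table and column names here (MyTable, JsonData) are placeholders, not from your post:

CREATE VIEW dbo.MyTableWithStatus AS
SELECT t.*,
       JSON_VALUE(t.JsonData, '$.request.status') AS STATUS
FROM dbo.MyTable AS t;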
I have a dataset (txt file) in which there are 10 columns; the last column has string data separated by tabs, for example -> abcdef lkjhj pqrst...wxyz
I created a new table defining col 10 as STRING, but after loading the data into this table and verifying it, only abcdef is populated in the last column and the rest is ignored.
Please can someone help with how I can load the entire string into the Hive table. Do I need to write a UDF?
Thanks in advance
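One hedged sketch (not from this thread): if the table's field delimiter is the tab character, Hive splits the 10th field on every tab and silently drops the extra tokens. A RegexSerDe that captures the first nine tab-separated fields and then the rest of the line into column 10 avoids that, without a UDF. The column names below are assumptions:

CREATE TABLE my_table (
  col1 STRING, col2 STRING, col3 STRING, col4 STRING, col5 STRING,
  col6 STRING, col7 STRING, col8 STRING, col9 STRING, col10 STRING
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.RegexSerDe'
WITH SERDEPROPERTIES (
  -- nine tab-delimited fields, then everything else goes into col10
  "input.regex" = "([^\\t]*)\\t([^\\t]*)\\t([^\\t]*)\\t([^\\t]*)\\t([^\\t]*)\\t([^\\t]*)\\t([^\\t]*)\\t([^\\t]*)\\t([^\\t]*)\\t(.*)"
)
STORED AS TEXTFILE;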
I have 12 columns with +/- 2000 rows in an SQLite DB.
Now I want to add a 13th column with the same number of rows.
If I import the text from a CSV file, it gets added after the existing rows (so I end up with a 4000-row table).
How can I avoid adding it underneath these rows?
Do I need to create a script that runs through each row of the table and adds the text from the CSV file for each row?
If you have the code that imported the original data, and if the data has not changed in the meantime, you could just drop the table and reimport it.
Otherwise, you indeed have to create a script that looks up the corresponding record in the table and updates it.
You could also import the new data into a temporary table, and then copy the values over with a command like this:
UPDATE MyTable
SET NewColumn = (SELECT NewColumn
                 FROM TempTable
                 WHERE ID = MyTable.ID);
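For completeness, a sketch of how TempTable might be loaded in the sqlite3 shell before running that UPDATE (the file name, column names, and the no-header CSV layout are assumptions):

-- assumed CSV layout: ID,NewColumn with no header row
CREATE TABLE TempTable (ID INTEGER, NewColumn TEXT);
.mode csv
.import new_column.csv TempTable
-- run the UPDATE above, then optionally: DROP TABLE TempTable;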
I ended up using RazorSQL, a great program.
http://www.razorsql.com/
I have a table with a column called student id, and in that column I want to insert values like SN + userid. I know how to create a new column and do this, but I want to add this to an existing column whenever a user is inserted.
ALTER TABLE [dbo].[Profile_Master]
ADD [new] AS ('SN' + CONVERT([varchar](10), [UserId], (0)))
In SQL Server you can use triggers to do this:
http://msdn.microsoft.com/en-us/library/aa258254%28v=sql.80%29.aspx
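A minimal trigger sketch along those lines, assuming the existing column to fill is called [StudentId] and that [UserId] identifies the inserted rows (both names are assumptions):

CREATE TRIGGER trg_ProfileMaster_SetStudentId
ON [dbo].[Profile_Master]
AFTER INSERT
AS
BEGIN
    -- fill the existing column for just the newly inserted rows
    UPDATE p
    SET p.[StudentId] = 'SN' + CONVERT(varchar(10), p.[UserId])
    FROM [dbo].[Profile_Master] AS p
    JOIN inserted AS i ON i.[UserId] = p.[UserId];
END;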
Your solution already meets your requirements: you are creating a computed column.
Thus, it will work for existing records and for new ones created in the future.