Can't drop a column because another column was recently deleted - google-bigquery

I deleted ColumnA from the table and now I want to drop ColumnB:
ALTER TABLE Project.Schema.Table DROP COLUMN ColumnB;
But I got an error:
Column `ColumnA` was recently deleted in the table `Table`. Deleted column name is reserved for up to the time travel duration, use a different column name instead.
Why can't I drop ColumnB? How can I do this? Is recreating the table the only option?

This error occurs due to BigQuery's time travel feature, which enables recovery of data that was changed or deleted within the last 7 days. This feature maintains 7 days of history, including metadata history. There are two possible workarounds for this issue:
Wait for 7 days and then retry dropping the other column.
Drop the table and recreate it using a temporary table, as sketched below.
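A minimal BigQuery sketch of the second workaround, using the names from the question (the staging name Table_tmp is an assumption). Recreating the table clears the deleted-column reservation because the new table carries no metadata history:

CREATE TABLE Project.Schema.Table_tmp AS
SELECT * EXCEPT (ColumnB)    -- ColumnA is already gone; leave out ColumnB too
FROM Project.Schema.Table;

DROP TABLE Project.Schema.Table;

ALTER TABLE Project.Schema.Table_tmp RENAME TO Table;

Note that dropping the table also discards its time travel history, so keep a backup if you might still need it.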
This issue has been raised in the Issue Tracker. You can “STAR” the issue there to receive automatic updates and give it traction.

Related

PostgreSQL question for changing data without affecting the table

[Image: the tables I have created in PostgreSQL]
Say I create an account called 'entertainment' and add 40 entries to it over a month, before realizing that I spelled it 'entertainmunt'. So now I'd want to rename that account, right? With this table structure, how should I achieve that?
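Since the schema image is unavailable, here is a hypothetical sketch: assuming the account name lives in a name column of an accounts table (both identifiers are assumptions), a single UPDATE corrects the spelling:

UPDATE accounts
SET name = 'entertainment'    -- corrected spelling
WHERE name = 'entertainmunt';

If the 40 entries reference the account by an account ID, nothing else needs to change; if they store the name itself, the same UPDATE must be repeated on the entries table.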

Delete from audit table at runtime

We are using an Oracle 12.1 database.
We want to create a table which will hold runtime audit data.
Only the last week of data is relevant (older records become irrelevant), and we'll delete older records using a job.
The table holds 3 columns: ID, DATE (primary key), and DAY_COUNT.
We want to reset specific records, which can be achieved by updating DAY_COUNT to 0.
But we want to keep the table small, and the old data is irrelevant to us, so we are considering using DELETE instead of UPDATE.
Is it safe to reset records at runtime using DELETE?
There seems to be an undocumented convention to avoid DELETE, but is it relevant in this case?
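A hypothetical Oracle sketch of the two operations being weighed (the identifiers audit_runtime, audit_date, and day_count are assumptions based on the description):

-- reset a specific record in place
UPDATE audit_runtime SET day_count = 0 WHERE id = :id AND audit_date = :dt;

-- or remove it, with a scheduled job trimming anything older than a week
DELETE FROM audit_runtime WHERE audit_date < SYSDATE - 7;
COMMIT;

Both are ordinary DML; the practical concern at runtime is keeping the transactions short so the cleanup job and the application don't block each other.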

Eliminate duplicates automatically from a table

The table gets new data every day from a source system, and I want duplicates to be deleted automatically as soon as new data is loaded into the table.
Is this possible in BigQuery?
I tried to create a view named sites_view in BigQuery with the query below:
SELECT DISTINCT * FROM prd.sites
but the duplicates are not being deleted automatically.
Below is for BigQuery:
Duplicates will not be deleted automatically - there is no such functionality in BigQuery.
You should have some process to make this happen as frequently as you need, or use views.
BigQuery is based on an append-only design, so it accepts all the data.
This is one of the reasons there are no primary/unique key constraints, so you can't prevent duplicates from entering the table.
So, you have to have a process like the one sketched below:
1.) Create a new table without duplicates from your original table
(you can use DISTINCT/ROW_NUMBER() for this).
2.) Drop the original table.
3.) Rename the new table to the original table name.
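A minimal sketch of those three steps in BigQuery SQL, assuming duplicate rows are identical across all columns (the staging name prd.sites_dedup is an assumption):

-- 1.) materialize a de-duplicated copy
CREATE TABLE prd.sites_dedup AS
SELECT DISTINCT * FROM prd.sites;

-- 2.) drop the original
DROP TABLE prd.sites;

-- 3.) rename the copy to the original name
ALTER TABLE prd.sites_dedup RENAME TO sites;

If losing the table between steps 2 and 3 is a concern, CREATE OR REPLACE TABLE prd.sites AS SELECT DISTINCT * FROM prd.sites collapses the whole process into one statement.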
Let me know if this information helps.

How to delete a duplicate record without using a primary key

I went for an interview today where they gave me a technical test on SQL. One of the questions was how to delete duplicate records without a primary key.
For one, I can't imagine a table without a primary key. Yes, I have read the existing threads on this. But say this happened and needed to be fixed now: couldn't I just add an auto-incrementing ID column to the end of the table and then use that to delete the duplicate records?
Can anyone think of a reason why that won't work? I tried it on a simple database I created and I can't see any problems.
You've got a couple of options here.
If they don't mind you dropping the table, you could SELECT DISTINCT * from the table in question and then INSERT this into a new table, DROPping the old table as you go. This obviously won't be usable in a production database, but can be useful where someone has mucked up a routine that's populating a data warehouse, for example.
Alternatively, you could effectively create a temporary index by using the row number, as per this answer. That answer shows how to use the built-in row_number() function in SQL Server, but it can be replicated in other RDBMSs (not sure which, but MySQL certainly) by declaring a user variable called @row_num or equivalent and then using it in your SELECT statement as:
SET @row_num = 0;
SELECT @row_num := @row_num + 1 AS row_num, [REMAINING COLUMNS GO HERE]
FROM [YOUR TABLE GOES HERE];
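For the SQL Server case the answer refers to, a minimal sketch of deleting duplicates through row_number() (the table dupes and its columns col_a, col_b are hypothetical; partition by every column that defines a duplicate):

WITH numbered AS (
    SELECT *,
           ROW_NUMBER() OVER (PARTITION BY col_a, col_b ORDER BY (SELECT NULL)) AS rn
    FROM dupes
)
DELETE FROM numbered    -- deleting through the CTE removes the rows from dupes
WHERE rn > 1;           -- keeps exactly one row from each duplicate group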
One possible option for how to do this:
1.) Select the distinct rows from your table (you can achieve this by grouping by all columns).
2.) Insert the result into a new table.
3.) Drop the first table.
4.) Rename the second table to the name of the first one.
But this is not always possible in production.

Change an auto-incrementing field back to a previous value

One day, WordPress suddenly jumped from post ID 9110 to post ID 890000000.
Days later, I'd like new posts to continue from ID 9111.
I'm sure the ID will never reach 890000000, so no problem there, but the ID is an auto-increment field and ALTER TABLE wp8_posts AUTO_INCREMENT = 9111 is not working.
Can I force the ID to continue from 9111?
You probably need to renumber the already existing post so that it has an ID of 9111, and then issue your ALTER TABLE command. You'll also have to change the ID in all the other tables that point to this ID. If you then issue your ALTER TABLE command, it should work. If this still doesn't work, you could rename the table to something like wp8_posts_backup with:
RENAME TABLE wp8_posts TO wp8_posts_backup
Then, create another table with the same schema using:
CREATE TABLE wp8_posts LIKE wp8_posts_backup;
and then copy the data from the backup to the new one. All of this still requires renumbering the old post, or deleting it and then recreating it, because the database knows that there is a record with an ID of 890000000 and will always try to go above that when generating the next ID. I believe it uses the index on the column to find the highest number and calculate the next ID, rather than storing the value somewhere else, which is why a unique index is necessary on any auto-incrementing field.
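A minimal MySQL sketch of the renumbering approach described above, using the table from the question (the column name ID follows the standard WordPress schema; wp8_postmeta is just one example of a referencing table):

-- move the runaway post back into sequence
UPDATE wp8_posts SET ID = 9111 WHERE ID = 890000000;
-- repeat for any table that references the post, e.g.:
UPDATE wp8_postmeta SET post_id = 9111 WHERE post_id = 890000000;
-- the counter can now be lowered; InnoDB will hand out max(ID) + 1 = 9112 next
ALTER TABLE wp8_posts AUTO_INCREMENT = 9112;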