Is there a way to identify who dropped a table? - google-bigquery

I ran into a situation where a table suddenly disappeared.
I checked the project's query history, but there is no DROP TABLE query.
I want to find out who (or which service account) dropped the table.
Is there a way to identify who dropped the table, other than the project query history?
Added:
I have already checked the table expiration and partition expiration settings.

Yes, there are other options as well.
You can check the Activity logs.
In more detail, you can check the logs in Cloud Logging.
Use this filter in the log query:
resource.type="bigquery_resource"
protoPayload.methodName = "tableservice.delete"
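If you prefer the command line, the same filter works with gcloud (a sketch; the project id and time window are placeholders to adjust):
gcloud logging read 'resource.type="bigquery_resource" AND protoPayload.methodName="tableservice.delete"' --project=my-project --freshness=30d --format=json
In the returned entries, protoPayload.authenticationInfo.principalEmail shows the user or service account that issued the delete.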

Related

Allow only 3 rows to be added to a table for a specific value

I have a question in hand where I need to restrict the number of projects assigned to a manager to at most 3. The tables are:
Manager:
Manager_employee_id(PK)
Manager_Bonus
Project:
project_number(PK)
Project_cost
Project_manager_employee_id(FK)
Can anyone suggest what approach to take to implement this?
"How do I implement the restrict to 0,3?"
This requires an assertion, which is defined in the SQL standard but not implemented in Oracle. (Although there are moves to have them introduced).
What you can do is use a materialized view to enforce it transparently.
create materialized view project_manager
refresh on commit
as
select Project_manager_employee_id
, count(*) as no_of_projects
from project
group by Project_manager_employee_id
/
The magic is:
alter table project_manager
add constraint project_manager_limit_ck check
( no_of_projects <= 3 )
/
This check constraint will prevent the materialized view from being refreshed if the count of projects for a manager exceeds three, and that failure will in turn cause the triggering insert or update to fail. Admittedly it's not elegant.
Because the mview is refreshed on commit (i.e. transactionally) you will need to build a materialized view log on the project table, with the attributes a fast-refreshable aggregate mview requires:
create materialized view log on project with rowid (Project_manager_employee_id) including new values
/
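To see it in action: assuming the column order from the question, giving the same manager a fourth project makes the commit itself fail (made-up data):
insert into project values (101, 50000, 1);
insert into project values (102, 60000, 1);
insert into project values (103, 70000, 1);
insert into project values (104, 80000, 1);
commit;
-- ORA-12008: error in materialized view refresh path
-- ORA-02290: check constraint (PROJECT_MANAGER_LIMIT_CK) violated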
I would do the following:
Create a new column projects_taken tinyint(1), with a default value of 0, holding values 0 to 3.
When a manager takes a project, increment the field by 1.
Do simple checks (through the UI) to see whether the field projects_taken is still smaller than 3 before assigning another project.
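A minimal sketch of that column and the guarded increment (tinyint(1) suggests MySQL syntax; the manager id value is illustrative):
alter table Manager add column projects_taken tinyint(1) not null default 0;

update Manager
set projects_taken = projects_taken + 1
where Manager_employee_id = 42
  and projects_taken < 3;
-- zero rows updated means the manager is already at the limit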
I would do the following:
Create one SP for both the insert and the update operation.
In it, check whether the count of projects for the given Project_manager_employee_id (FK) is below 3,
and only then proceed with the insert/update; otherwise throw an error.
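Assuming Oracle, as in the first answer, a rough PL/SQL sketch of that procedure could look like this (procedure and parameter names are made up, and the check is not concurrency-safe without extra locking):
create or replace procedure assign_project (
    p_project_number in project.project_number%type,
    p_manager_id     in project.Project_manager_employee_id%type
) as
    l_count pls_integer;
begin
    -- count the projects the manager already has
    select count(*) into l_count
    from project
    where Project_manager_employee_id = p_manager_id;

    if l_count >= 3 then
        raise_application_error(-20001, 'Manager already has 3 projects');
    end if;

    update project
    set Project_manager_employee_id = p_manager_id
    where project_number = p_project_number;
end;
/
Two sessions running the check at the same time could still both pass it, which is exactly the race the materialized-view approach above avoids.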

How to track changes for certain database tables?

I have a program that takes a user and updates information about him/her in five tables. The process is fairly sophisticated, as it takes many steps (pages) to complete. I have logs, sysout and syserr statements that help me find SQL queries in the IDE console, but they don't capture all of them. I've already spent many days trying to catch the missing queries by debugging, but no luck so far. The reason I am doing this is that I want to automate user information updates, so I don't have to go through every page entering user details manually.
I wonder if there is some technique that will show me the database table changes, as I already know the table names; by changes I mean whether it was an update or insert statement and what exactly changed (column name and value inserted/updated). Any advice is greatly appreciated. I have IBM RAD and a DB2 database. Thanks.
In DB2 you can track basic auditing information.
DB2 can track what data was modified, who modified the data, and the SQL operation that modified the data.
To track when data was modified, define your table as a system-period temporal table. The row-begin and row-end columns in the associated history table contain information about when data modifications occurred.
To track who and what SQL modified the data, you can use non-deterministic generated expression columns. These columns can contain values that are helpful for auditing purposes, such as the value of the CURRENT SQLID special register at the time that the data was modified. Possible values for non-deterministic generated expression columns are defined in the syntax for the CREATE TABLE and ALTER TABLE statements.
For example:
CREATE TABLE TempTable (balance INT,
    userId VARCHAR(128) GENERATED ALWAYS AS ( SESSION_USER ),
    opCode CHAR(1) GENERATED ALWAYS AS ( DATA CHANGE OPERATION ),
    SYS_START TIMESTAMP(12) NOT NULL GENERATED ALWAYS AS ROW BEGIN,
    SYS_END TIMESTAMP(12) NOT NULL GENERATED ALWAYS AS ROW END,
    TRANS_ID TIMESTAMP(12) GENERATED ALWAYS AS TRANSACTION START ID,
    PERIOD SYSTEM_TIME (SYS_START, SYS_END));
The userId column stores who modified the data. This column is defined as a non-deterministic generated expression column that contains the value of SESSION_USER special register.
The opCode column stores the SQL operation that modified the data. This column is defined as a non-deterministic generated expression column and stores a value that indicates the type of SQL operation.
Suppose that you then use the following statements to create a history table for TempTable and to associate that history table with TempTable:
CREATE TABLE TempTable_HISTORY (balance INT, user_id VARCHAR(128), op_code CHAR(1),
    SYS_START TIMESTAMP(12) NOT NULL, SYS_END TIMESTAMP(12) NOT NULL, TRANS_ID TIMESTAMP(12));
ALTER TABLE TempTable ADD VERSIONING
    USE HISTORY TABLE TempTable_HISTORY ON DELETE ADD EXTRA ROW;
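Once versioning is enabled, the audit trail can be read straight from the history table; a sketch, using the column names above:
SELECT balance, user_id, op_code, SYS_START, SYS_END
FROM TempTable_HISTORY
ORDER BY SYS_START;
Each row shows a prior version of the data together with who changed it (user_id) and how (op_code: I, U or D).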
Capturing SQL statements for a limited number of tables and a limited time - as far as I understand your problem - could be solved with the DB2 Audit facility.
create audit policy tabsql categories execute status both error type normal
audit table <tabname> using policy tabsql
You have to have SECADM rights in the database, and the second command will start auditing the table. You can stop it with
audit table <tabname> remove policy
Check out the db2audit command to configure paths and to extract the data from the audit file into a delimited file, which can then be loaded back into the database.
The necessary tables can be created with the provided sqllib/misc/db2audit.ddl script. You will need to query the EXECUTE table for your SQL details.
Please note that audit can capture huge amounts of data, so make sure to switch it off again after you have captured the necessary information.
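If you go that route, a typical end-to-end sequence might look like this (the database name and file names are illustrative; check the db2audit documentation for your version):
db2audit flush
db2audit archive database mydb
db2audit extract delasc category execute from files db2audit.db.mydb.log.0.*
The extract step writes delimited files such as execute.del, which can then be loaded into the EXECUTE table created by the db2audit.ddl script.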

Method in SQL Server for making a copy of a table and refreshing it?

I'm trying to figure out if there's a method for copying the contents of a table in a main schema into a table in another schema, and then somehow updating or "refreshing" that copy as the main table gets updated.
For example:
schema "BBLEARN", has table users
SELECT * INTO SIS_temp_data.dbo.bb_users FROM BBLEARN.dbo.users
This selects and inserts 23k rows into the table bb_users in my placeholder schema SIS_temp_data.
Thing is, the users table in the BBLEARN schema gets updated on a constant basis: new users get added, accounts get updated, enabled or disabled, etc. The main reason for copying the table into a temp table is for data integration purposes and is unrelated to the question at hand.
So, is there a method in SQL Server that will allow me to "update" my new table in the spare schema whenever the data in the main schema gets updated? Or do I just need to run a scheduled task that does a SELECT * INTO every few hours?
Thank you.
You could create a trigger which updates the spare table whenever an update or insert is performed on the main schema.
see http://msdn.microsoft.com/en-us/library/ms190227.aspx
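For example, a sketch of such a trigger, created inside the BBLEARN database (the user_id key column is an assumption; a trigger must live in the same database as its table, but it can write into another one):
CREATE TRIGGER dbo.trg_users_sync
ON dbo.users
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- drop the stale copies of any rows that were touched...
    DELETE t
    FROM SIS_temp_data.dbo.bb_users AS t
    INNER JOIN inserted AS i ON i.user_id = t.user_id;
    -- ...and re-insert their current versions
    INSERT INTO SIS_temp_data.dbo.bb_users
    SELECT * FROM inserted;
END;
A matching AFTER DELETE trigger would be needed as well if removed users should disappear from the copy.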

Auditing logged in user with delete trigger

We have an audit option in our application, where we audit the deleted records from a table using an AFTER DELETE trigger.
Problem description :
The problem we face here is that we need to log the person who deleted the record. We cannot get the id of that person anywhere from the database, as it is not present there; it comes from the web application. My question is: is there any way to get, on the database side, the name or id of the person who is logged into the web application?
We are using Oracle 11g.
You should be able to do this using the dbms_session package. Using the package you can set and get values. Hence, during login to your application you can set the value, and then, while the delete trigger executes, read it back and insert it into the audit table.
This might come handy - http://www.dba-oracle.com/t_dbms_session.htm
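A minimal sketch of that flow, with made-up table and column names: the application tags the session right after login, and the trigger reads the tag back with SYS_CONTEXT:
-- at application login, on the connection serving this user:
begin
    dbms_session.set_identifier('APP_USER_42');  -- illustrative application user id
end;
/
-- in the audit trigger:
create or replace trigger trg_orders_audit
after delete on orders
for each row
begin
    insert into orders_audit (order_id, deleted_by, deleted_at)
    values (:old.order_id,
            sys_context('userenv', 'client_identifier'),
            systimestamp);
end;
/
With connection pooling, remember to set (and clear) the identifier every time a pooled connection is handed to a different user.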
Hope that helps!

How to delete a large record from SQL Server?

In a database for a forum I mistakenly set the body to nvarchar(MAX). Well, someone posted the Encyclopaedia Britannica, of course. So now there is a forum topic that won't load because of this one post. I have identified the post and run a delete query on it, but for some reason the query just sits and spins. I have let it go for a couple of hours and it just sits there. Eventually it will time out.
I have tried editing the body of the post as well, but that also sits and hangs. While I let my query run, the entire database hangs, so I shut down the site in the meantime to prevent further requests while it does its thinking. If I cancel my query, the site resumes as normal, and all queries for records that don't involve the one in question work fantastically.
Has anyone else had this issue? Is there an easy way to smash this evil record to bits?
Update: Sorry, the version of SQL Server is 2008.
Here is the query I am running to delete the record:
DELETE FROM [u413].[replies] WHERE replyID=13461
I have also tried deleting the topic itself which has a relationship to replies and deletes on topics cascade to the related replies. This hangs as well.
Option 1. Depends on how big the table itself is and how big the rows are.
Copy data to a new table:
SELECT *
INTO tempTable
FROM replies WITH (NOLOCK)
WHERE replyID != 13461
Although it will take time, the table should not be locked during the copy process.
Drop old table
DROP TABLE replies
Before you drop:
- script current indexes and triggers so you are able to recreate them later
- script and drop all the foreign keys to the table
Rename the new table
sp_rename 'tempTable', 'replies'
Recreate all the foreign keys, indexes and triggers.
Option 2. Partitioning.
Add a new bit column, called let's say 'Partition', set to 0 for all rows except the bad one, and to 1 for the bad one.
Create a partition function so there would be two partitions, 0 and 1.
Create a temp table with the same structure as the original table.
Switch partition 1 from the original table to the new temp table.
Drop the temp table.
Remove partitioning from the source table and remove the new column.
The partitioning topic is not simple. There are some examples on the internet, e.g. Partition switching in SQL Server 2005.
Start by checking if your transaction is being blocked by another process. To do this, you can run this command:
SELECT * FROM sys.dm_os_waiting_tasks WHERE session_id = {spid}
Replace {spid} with the correct spid number of the connection running your DELETE command. To get that value, run SELECT @@SPID before the DELETE command.
If the column sys.dm_os_waiting_tasks.blocking_session_id has a value, you can use activity monitor to see what that process is doing.
To open Activity Monitor, right-click on the server name in SSMS's Object Explorer and choose Activity Monitor. The Processes and Resource Waits sections are the ones you want.
Since you're having issues deleting the record and recreating the table, have you tried updating the record?
Something like (changing "body" field name to whatever it is in the table):
update [u413].[replies] set body='' WHERE replyID=13461
Once you clear out the text from that single reply record, you should be able to alter the data type of the column to set an upper bound. Something like:
alter table [u413].[replies] alter column body nvarchar(100)