Unable to find an audit log entry for removing a BigQuery table permission. I can only find the BigQuery SetIamPolicy log in Cloud Logging, but there is no "remove IAM policy" entry in the audit log. Any help on auditing the removal of a user's permission on BigQuery tables?
setIamPolicy is the method used for all IAM policy changes, including those that remove user bindings.
If User A removes a binding for User B, they are simply posting a new policy document that excludes User B; the logs will only show the new policy and won't have information about the diff.
Old Policy
bindings:
- members:
  - user:user-a@acme.com
  - user:user-b@acme.com
  role: roles/bigquery.dataOwner
New Policy
bindings:
- members:
  - user:user-a@acme.com
  role: roles/bigquery.dataOwner
To see IAM removal events, you'll have to query on setIamPolicy and evaluate all policy changes. If you'd like to track events where a specific user principal was removed from a policy altogether, you can use an AND NOT query to identify policy changes that exclude that user:
protoPayload.methodName="google.iam.v1.IAMPolicy.SetIamPolicy"
protoPayload.serviceName="bigquery.googleapis.com"
AND NOT protoPayload.serviceData.setIamPolicyRequest.policy.bindings.members="user:<USER TO AUDIT POLICY REMOVALS>"
One caveat to this approach: this filter will not pick up cases where a user is partially removed from a policy (e.g. only a single role was removed).
CREATE POLICY table_policy
ON organisation
TO role_A
USING (id IN (SELECT organisation_id
FROM org_user_map_table
WHERE user_id = current_setting('app.current_user_id')::uuid));
The current RLS policy restricts users so that they can only view data related to their own organisation.
I am now facing an issue when INSERTing a new org into the database: the insert throws an error because, per the policy above, no matching organisation_id exists for the user in org_user_map_table, so the operation is not permitted. I therefore cannot INSERT the new organisation, let alone access it at a later point.
While adding new roles and policies to perform INSERTs could be suggested, I have to work with the existing system, and I am not sure how to split the policy differently for SELECTs and INSERTs.
Since you didn't explicitly specify it, you created the policy FOR ALL statements. If you need different conditions for, say, INSERT and SELECT, create two policies instead:
CREATE POLICY table_policy_sel ON organisation
FOR SELECT
TO role_A
USING (...);
CREATE POLICY table_policy_ins ON organisation
FOR INSERT
TO role_A
WITH CHECK (...);
The condition for the FOR INSERT policy can be different from the one FOR SELECT.
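Applied to the policy from the question, a minimal sketch might look like this (the WITH CHECK condition is an assumption; replace it with whatever rule should actually govern who may create organisations):

```sql
-- SELECT: unchanged from the original policy.
CREATE POLICY table_policy_sel ON organisation
    FOR SELECT
    TO role_A
    USING (id IN (SELECT organisation_id
                  FROM org_user_map_table
                  WHERE user_id = current_setting('app.current_user_id')::uuid));

-- INSERT: a permissive check so new organisations can be created.
-- WITH CHECK (true) is only an illustration, not a recommendation.
CREATE POLICY table_policy_ins ON organisation
    FOR INSERT
    TO role_A
    WITH CHECK (true);
```

Remember to drop the original FOR ALL policy first (DROP POLICY table_policy ON organisation;), since the two new policies replace it.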
Thanks for taking the time to try to answer/understand this question.
I am using AWS Aurora Postgres (Engine version: 13.4) database.
I referred to this document for creating readwrite and readonly roles for 2 new RDS IAM users -> "dev_ro" and "dev_rw". I have granted the readwrite role to "dev_rw" and readonly to "dev_ro". The additional changes are:
myschema is "public" - which is my default schema
I added the same permissions as for "myschema" to another schema called "graphile_worker" (from graphile/worker - which is a job queue).
With this in mind, here is what I have done:
I run my application, which adds some repeating jobs (jobs reschedule themselves), implying that the jobs table can never be empty
Connect to RDS using the IAM user (doesn't matter dev_ro or dev_rw)
I run SELECT * FROM graphile_worker.jobs in my IDE (dbeaver - shouldn't matter, I think)
The table shows up empty
Disconnect and Re-connect to RDS using superuser credentials (which are created when server is created)
Run same query as above
See data in the table
I don't know why this is happening.
I double-checked, both "dev_ro/w" (through the roles) and superuser, have:
CONNECT to database (without doubt)
SELECT on all tables of graphile_worker schema
USAGE on the graphile_worker schema
Moreover, I can query graphile_worker.migrations and the migration records show up as expected (for both dev_ro/w and superuser)!
Please let me know if there is any more information that I can provide to help debug this issue.
Removing Row-Level Security (RLS) solved this issue.
Thanks @Hambone for asking the right question.
RLS is bypassed for a role (rather than removed) by executing:
ALTER ROLE <username> WITH BYPASSRLS
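Applied to the two IAM users from the question, a sketch might look like this (run as a role with sufficient privileges, e.g. the Aurora master user):

```sql
-- Let both IAM users bypass row-level security on the tables they query.
ALTER ROLE dev_ro WITH BYPASSRLS;
ALTER ROLE dev_rw WITH BYPASSRLS;

-- Verify which roles now bypass RLS:
SELECT rolname, rolbypassrls
FROM pg_roles
WHERE rolname IN ('dev_ro', 'dev_rw');
```

Note that this sidesteps every RLS policy for those roles; if the policies were intentional, granting them visibility through the policies themselves would be the safer fix.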
create table if not exists [dataset].[table] (
  id int64,
  name string,
  created_at timestamp
)
partition by date(created_at)
cluster by id;
Error running query
Access Denied: Dataset [project]:[dataset]: Permission bigquery.tables.create denied on dataset [project]:[dataset] (or it may not exist).
From the error it can be seen that the bigquery.tables.create permission is being denied when you try to create a table in BigQuery.
bigquery.tables.create is the permission required to create new tables. You are getting this error because the IAM role granted to you in the project does not include this permission.
The roles/bigquery.admin and roles/bigquery.dataEditor roles both contain the bigquery.tables.create permission, so either should be sufficient.
Make sure that the user who owns the job has these permissions on the project in which the job is being run.
You can check this as follows:
go to the Google Cloud console
navigate to IAM & Admin
check whether the user's role includes the bigquery.tables.create permission; if not, grant a role that does.
Check this public documentation for Access controls in BigQuery.
Access Controls
Aurora Postgres 11.8
Is there any way possible that a non-superadmin user can run pg_stat_statements_reset()?
Details:
I have to schedule pg_stat_statements_reset() on an hourly basis. Since there is no internal scheduler available in Aurora Postgres 11.8, I want to use a lambda/cronjob, but as only the superadmin can run it, exposing the superadmin password in a lambda/cronjob is a security risk in my environment. So is there any way out in my case? Could there be a stored procedure that starts execution as a non-superuser and then switches user internally, etc.?
Thanks
The documentation for pg_stat_statements_reset says:
pg_stat_statements_reset discards statistics gathered so far by pg_stat_statements corresponding to the specified userid, dbid and queryid. If any of the parameters are not specified, the default value 0(invalid) is used for each of them and the statistics that match with other parameters will be reset. If no parameter is specified or all the specified parameters are 0(invalid), it will discard all statistics. By default, this function can only be executed by superusers. Access may be granted to others using GRANT.
Let me repeat that: Access may be granted to others using GRANT.
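A minimal sketch of that GRANT, assuming a dedicated role named stats_maintenance (the role name is an assumption) that the lambda/cronjob connects as:

```sql
-- Run as a superuser (or the Aurora master user).
-- stats_maintenance is a hypothetical login role for the scheduled job.
GRANT EXECUTE ON FUNCTION pg_stat_statements_reset() TO stats_maintenance;
```

The scheduled job can then connect as stats_maintenance and simply run SELECT pg_stat_statements_reset(); without any superuser credentials.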
I have created a new user in a MaxDB database. I assigned a role that has access to all the tables in ROLEPRIVILEGES, but the user cannot see these tables.
The user can access the tables if I assign permissions directly on the tables in TABLEPRIVILEGES.
The role has access, and other users with this role assigned can see all the tables.
What could be failing?
Today I've heard of MaxDB for the first time (what an ignoramus, eh?). I'm not sure why you tagged your question with the "Oracle" tag; Google says that MaxDB <> Oracle.
Anyway: it sounds like common problems in Oracle's PL/SQL, where privileges - acquired via roles - won't work, but have to be granted directly to the user.
Saying that "other users have this role assigned and they see all the tables", are you sure that they don't have direct privileges granted as well?
Assuming this deals indeed with MaxDB, and not with Oracle:
In contrast to privileges, roles need to be activated for a user session; assigning them is not enough. This is done with the command SET ROLE <role>.
A role may also be activated as default for every new session, with command:
ALTER USER <user> DEFAULT ROLE <role>.
You can also activate all roles assigned to the user, like this:
ALTER USER <user> DEFAULT ROLE ALL.
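Put together, a sketch for the new user from the question (role and user names are placeholders, since the question doesn't name them):

```sql
-- In the user's own session: activate the assigned role for this session only.
SET ROLE all_tables_role;

-- Or, as an administrator: activate every assigned role automatically
-- at the start of each new session for that user.
ALTER USER new_user DEFAULT ROLE ALL;
```

After either step, the tables granted via the role should become visible without direct table privileges.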