gcloud CLI Policy Analyzer query filter string - API

I am trying to get the GCP service account key last authentication time using the GCP Policy Analyzer API. I would like to filter the serviceAccountKeyLastAuthentication activity by key ID. According to the documentation this should be possible, but I do not understand how to phrase the --query-filter argument correctly.
gcloud policy-intelligence query-activity --activity-type=serviceAccountKeyLastAuthentication --project=PROJECT_ID --query-filter= "activities.fullResourceName"="//iam.googleapis.com/projects/PROJECT_ID/serviceAccounts/SERVICE_ACCOUNT_EMAIL/keys/KEY_ID"
ERROR: (gcloud.policy-intelligence.query-activity) unrecognized arguments: activities.fullResourceName=//iam.googleapis.com/projects/PROJECT_ID/serviceAccounts/SERVICE_ACCOUNT_EMAIL/keys/KEY_ID
or this command:
gcloud policy-intelligence query-activity --activity-type=serviceAccountKeyLastAuthentication --project=PROJECT_ID --query-filter="activities.fullResourceName='//iam.googleapis.com/projects/PROJECT_ID/serviceAccounts/SERVICE_ACCOUNT_EMAIL/keys/KEY_ID'"
ERROR: (gcloud.policy-intelligence.query-activity) INVALID_ARGUMENT: Invalid filter string.
or this command, along with other variations using different quotation mark patterns, just trying to figure out how the --query-filter flag works:
gcloud policy-intelligence query-activity --activity-type=serviceAccountKeyLastAuthentication --project=PROJECT_ID --query-filter="activities.full_resource_name='//iam.googleapis.com/projects/PROJECT_ID/serviceAccounts/SERVICE_ACCOUNT_EMAIL/keys/KEY_ID'"
ERROR: (gcloud.policy-intelligence.query-activity) INVALID_ARGUMENT: Invalid filter string with unexpected filter field. Expected: activities.fullResourceName.
According to this documentation:
https://cloud.google.com/policy-intelligence/docs/activity-analyzer-service-account-authentication
https://googleapis.github.io/google-api-python-client/docs/dyn/policyanalyzer_v1.projects.locations.activityTypes.activities.html
I am quite unsure why this is not filtering, or what the appropriate way to use the --query-filter flag is.
Note, when I run:
gcloud policy-intelligence query-activity --activity-type=serviceAccountKeyLastAuthentication --project=PROJECT_ID
It returns all the values for all the service accounts. But in my case, a project could have hundreds of service accounts, each with up to 10 keys, so I would like the returned data to be more granular.
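For comparison, here is my understanding of the same filtered query issued through the policyanalyzer_v1 discovery client from the second link above. This is only a sketch: the parent format and the activities.fullResourceName field are taken from that reference, and PROJECT_ID / SERVICE_ACCOUNT_EMAIL / KEY_ID are placeholders.
from googleapiclient import discovery

policyanalyzer = discovery.build("policyanalyzer", "v1")

# Activity Analyzer parents use the "global" location per the linked API reference.
parent = (
    "projects/PROJECT_ID/locations/global/"
    "activityTypes/serviceAccountKeyLastAuthentication"
)
# Filter on the key's full resource name, the same value used in the gcloud attempts above.
key_filter = (
    'activities.fullResourceName='
    '"//iam.googleapis.com/projects/PROJECT_ID/'
    'serviceAccounts/SERVICE_ACCOUNT_EMAIL/keys/KEY_ID"'
)

response = (
    policyanalyzer.projects()
    .locations()
    .activityTypes()
    .activities()
    .query(parent=parent, filter=key_filter, pageSize=10)
    .execute()
)
print(response.get("activities", []))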
Thanks.
Note: there is a related question, GCP SDK API for Policy Analyzer for Python, also asked by me, but its scope was Python rather than the gcloud CLI.

Related

Using a service account and JSON key which is sent to you to upload data into Google Cloud Storage

I wrote a Python script that uploads files from a local folder into Google Cloud Storage.
I also created a service account with sufficient permissions and tested the script on my computer using that service account's JSON key, and it worked.
Now I have sent the code and the JSON key to someone else to run, but the authentication fails on her side.
Are we missing any authentication step through the GCP UI?
import shutil
import subprocess

from google.cloud import storage

# CREDENTIALS_LOCATION is defined elsewhere in the script as the path to the JSON key.

def config_gcloud():
    subprocess.run(
        [
            shutil.which("gcloud"),
            "auth",
            "activate-service-account",
            "--key-file",
            CREDENTIALS_LOCATION,
        ]
    )
    storage_client = storage.Client.from_service_account_json(CREDENTIALS_LOCATION)
    return storage_client

def file_upload(bucket, source, destination):
    storage_client = config_gcloud()
    ...
The error happens in config_gcloud, and it says it is expecting str, path, ... but gets NoneType.
As I said, the code works on my computer. How can another person use it with the JSON key I sent her? She stored the JSON locally, and the path to the JSON is in the code.
CREDENTIALS_LOCATION is None instead of the correct path, hence the complaint about it being NoneType instead of str | Path.
Also, you don't need that gcloud call; it only matters for gcloud/gsutil commands, not for the Python client library.
And please post the actual stack trace of the error next time, not just a paraphrased interpretation of it.
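To make the first point concrete, a minimal sketch of making the key location explicit on each machine, assuming the path is supplied through an environment variable (GCS_KEY_PATH is a made-up name) rather than hard-coded:
import os
from pathlib import Path

from google.cloud import storage

# Hypothetical: each person points GCS_KEY_PATH at their own copy of the JSON key.
CREDENTIALS_LOCATION = os.environ.get("GCS_KEY_PATH")

def config_gcloud():
    # Fail loudly with a useful message instead of a NoneType error deeper down.
    if CREDENTIALS_LOCATION is None or not Path(CREDENTIALS_LOCATION).is_file():
        raise FileNotFoundError(
            "Set GCS_KEY_PATH to the path of the service account JSON key file"
        )
    return storage.Client.from_service_account_json(CREDENTIALS_LOCATION)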

putBucketPolicy Invalid principal in policy determine one or more

[nodejs sdk, s3.putBucketPolicy, error handling]
Is there a way to determine which (one or more) ARNs (invalid account numbers) are invalid from the error object returned by the S3 putBucketPolicy call? The error statusCode is 400; however, I am trying to figure out which of the principals are invalid.
To clarify further, I am not looking to validate role or root ARN patterns. Rather, I want to find the one or more account numbers that are not correct. Can we extract that from the error object or elsewhere?
There are a couple of ways to validate an ARN:
Using the Organizations service
Using the ARN as a principal and applying a bucket policy to a dummy bucket; let the AWS SDK validate it for you, and clean up afterwards.
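For what it's worth, a rough sketch of the second approach, written with Python/boto3 rather than the Node SDK; the dummy bucket name is a placeholder, and MalformedPolicy is the error code S3 typically returns for a bad principal:
import json

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
DUMMY_BUCKET = "my-policy-validation-bucket"  # placeholder, pre-created bucket

def find_invalid_principals(principal_arns):
    # Apply a one-principal policy per ARN and collect the ones S3 rejects.
    invalid = []
    for arn in principal_arns:
        policy = {
            "Version": "2012-10-17",
            "Statement": [{
                "Effect": "Allow",
                "Principal": {"AWS": arn},
                "Action": "s3:GetObject",
                "Resource": f"arn:aws:s3:::{DUMMY_BUCKET}/*",
            }],
        }
        try:
            s3.put_bucket_policy(Bucket=DUMMY_BUCKET, Policy=json.dumps(policy))
        except ClientError as err:
            if err.response["Error"]["Code"] == "MalformedPolicy":
                invalid.append(arn)
            else:
                raise
    s3.delete_bucket_policy(Bucket=DUMMY_BUCKET)  # clean up afterwards
    return invalid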

Applying filters on Google Cloud API - Instance list

I was trying to filter GCP instances based on an IP range or subnet.
API : https://cloud.google.com/compute/docs/reference/rest/v1/instances/list
I am able to use the CLI commands below and get the list of desired instances:
gcloud compute instances list --filter="networkInterfaces.networkIP>172.23.0.0 AND networkInterfaces.networkIP<172.23.0.170"
gcloud compute instances list --filter="networkInterfaces.subnetwork:default"
But I am not able to use these filters in the API Explorer provided by GCP.
When I use networkInterfaces.networkIP = "some IP" as the filter, I get the error below:
"Invalid value for field 'filter': 'networkInterfaces.networkIP = "172.23.0.10"'.
Is there any way we can filter the instances based on IPs?
I am aware that we can filter once we get the response, but I am looking to apply the filter at the request level itself.
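For reference, a minimal sketch of how the same filter string would be passed at the request level with the Python discovery client (PROJECT and ZONE are placeholders); whether the instances.list backend accepts this expression is exactly what I am asking about:
from googleapiclient import discovery

compute = discovery.build("compute", "v1")

# The filter is sent as part of the list request itself, not applied to the response.
request = compute.instances().list(
    project="PROJECT",
    zone="ZONE",
    filter='networkInterfaces.networkIP = "172.23.0.10"',
)
while request is not None:
    response = request.execute()
    for instance in response.get("items", []):
        print(instance["name"])
    request = compute.instances().list_next(
        previous_request=request, previous_response=response
    )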
Thanks,
Rmp

Query data from Google Sheets-based table in BigQuery via API using service account

I can fetch data from native BigQuery tables using a service account.
However, I encounter an error when attempting to select from a Google Sheets-based table in BigQuery using the same service account.
from google.cloud import bigquery
client = bigquery.Client.from_service_account_json(
    json_credentials_path='creds.json',
    project='xxx',
)
# this works fine
print('test basic query: select 1')
job = client.run_sync_query('select 1')
job.run()
print('results:', list(job.fetch_data()))
print('-'*50)
# this breaks
print('attempting to fetch from sheets-based BQ table')
job2 = client.run_sync_query('select * from testing.asdf')
job2.run()
The output:
⚡ ~/Desktop ⚡ python3 bq_test.py
test basic query: select 1
results: [(1,)]
--------------------------------------------------
attempting to fetch from sheets-based BQ table
Traceback (most recent call last):
File "bq_test.py", line 16, in <module>
job2.run()
File "/usr/local/lib/python3.6/site-packages/google/cloud/bigquery/query.py", line 381, in run
method='POST', path=path, data=self._build_resource())
File "/usr/local/lib/python3.6/site-packages/google/cloud/_http.py", line 293, in api_request
raise exceptions.from_http_response(response)
google.cloud.exceptions.Forbidden: 403 POST https://www.googleapis.com/bigquery/v2/projects/warby-parker-1348/queries: Access Denied: BigQuery BigQuery: No OAuth token with Google Drive scope was found.
I've attempted to use oauth2client.service_account.ServiceAccountCredentials for explicitly defining scopes, including a scope for drive, but I get the following error when attempting to do so:
ValueError: This library only supports credentials from google-auth-library-python. See https://google-cloud-python.readthedocs.io/en/latest/core/auth.html for help on authentication with this library.
My understanding is that auth is handled via IAM now, but I don't see any roles to apply to this service account that have anything to do with drive.
How can I select from a sheets-backed table using the BigQuery python client?
I've run into the same issue and figured out how to solve it.
When exploring the google.cloud.bigquery.Client class, there is a class-level tuple SCOPE that is not updated by any argument or by any Credentials object, so its default value persists into the classes that follow its use.
To solve this, you can simply add a new scope URL to the google.cloud.bigquery.Client.SCOPE tuple.
In the following code I add the Google Drive scope to it:
from google.cloud import bigquery

# Add any scopes needed onto this scopes tuple (note the trailing comma,
# which makes it a tuple rather than a plain string).
scopes = (
    'https://www.googleapis.com/auth/drive',
)
bigquery.Client.SCOPE += scopes

client = bigquery.Client.from_service_account_json(
    json_credentials_path='/path/to/your/credentials.json',
    project='your_project_name',
)
With the code above you'll be able to query data from Sheets-based tables in BigQuery.
Hope it helps!
I think you're right that you need to pass the scope for Google Drive when authenticating. The scopes are passed here: https://github.com/GoogleCloudPlatform/google-cloud-python/blob/master/core/google/cloud/client.py#L126, and it seems the BigQuery client lacks the Drive scope: https://github.com/GoogleCloudPlatform/google-cloud-python/blob/master/bigquery/google/cloud/bigquery/client.py#L117. I suggest asking on GitHub. As a workaround, you can try to override the client credentials to include the Drive scope, but you'll need to use google.auth credentials from GoogleCloudPlatform/google-auth-library-python instead of oauth2client, as the error message suggests.
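A minimal sketch of that workaround, assuming a reasonably recent google-cloud-bigquery where the client accepts a credentials argument; the key path and project are placeholders, and the scope URLs are the standard BigQuery and Drive ones:
from google.cloud import bigquery
from google.oauth2 import service_account

# Build scoped credentials with google-auth (not oauth2client) and hand them to the client.
credentials = service_account.Credentials.from_service_account_file(
    'creds.json',
    scopes=[
        'https://www.googleapis.com/auth/bigquery',
        'https://www.googleapis.com/auth/drive',
    ],
)
client = bigquery.Client(credentials=credentials, project='xxx')

# With the Drive scope attached, querying the Sheets-backed table should no longer be forbidden.
query_job = client.query('select * from testing.asdf')
print(list(query_job.result()))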

WSO2 API Manager always expects a query parameter when both query and path parameters are used?

Does anyone know how to use WSO2 API Manager to specify all query parameters as optional through the URL pattern specification in the WSO2 API Manager UI (path params are also present in the same URI)? For example, I have an API that will be registered in WSO2 API Manager, and its URI is 'search//?type="xx"&status="yy"'; both of these query parameters (type and status) are optional, and the URI also carries a path param.
I specified the URL pattern "search/{stationcode}*". When I call it with the path param only, it gives the error "No matching resource found in the API for the given request".
If I call "search/TAMK" it does not work, but if I use "search/TAMK?" or "search/TAMK*" it works just fine.
I tried using "search/{stationcode}/*", but that did not solve the issue either; it always expects at least one character for the query param. Can anyone please help me solve this? It should work without the query parameter, right?
I would suggest you use the new API Manager (1.9) and try the following.
Create an API with the backend URL of
http://...../search
When you define the URL patterns, you can define the following pattern:
/{stationcode}*
You can then add 'type' and 'status' as optional parameters in the design view of the API creation page; choose the parameter type as 'query' and Required as 'False'.
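For completeness, a quick sketch (Python requests, purely to illustrate the two call shapes; the gateway host and API context are placeholders) of the requests that should both match /{stationcode}* once type and status are declared as optional query parameters:
import requests

BASE = "https://gateway.example.com/search"  # placeholder gateway URL and API context

# path param only
r1 = requests.get(f"{BASE}/TAMK")

# path param plus the optional query parameters
r2 = requests.get(f"{BASE}/TAMK", params={"type": "xx", "status": "yy"})

print(r1.status_code, r2.status_code)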