Applying filters on Google Cloud API - Instance list

I was trying to filter GCP instances based on an IP range or subnet.
API : https://cloud.google.com/compute/docs/reference/rest/v1/instances/list
I am able to use the CLI commands below and get the list of desired instances:
gcloud compute instances list --filter="networkInterfaces.networkIP>172.23.0.0 AND networkInterfaces.networkIP<172.23.0.170"
gcloud compute instances list --filter="networkInterfaces.subnetwork:default"
But I am not able to use these filters in the API Explorer provided by GCP.
When I use networkInterfaces.networkIP = "some IP" as the filter, I get the error below:
"Invalid value for field 'filter': 'networkInterfaces.networkIP = "172.23.0.10"'.
Is there any way we can filter the instances based on IPs?
I am aware that we can filter once we get the response, but I am looking to apply the filter at the request level itself.
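For completeness, the response-level filtering mentioned above can be done in a few lines. A minimal sketch, assuming the items are shaped like the instances.list response (the sample instances are made up for illustration):

```python
import ipaddress

def filter_by_ip_range(instances, start_ip, end_ip):
    """Keep instances with at least one NIC IP strictly between start_ip and end_ip."""
    lo = ipaddress.ip_address(start_ip)
    hi = ipaddress.ip_address(end_ip)
    matched = []
    for inst in instances:
        for nic in inst.get("networkInterfaces", []):
            ip = ipaddress.ip_address(nic["networkIP"])
            if lo < ip < hi:
                matched.append(inst)
                break
    return matched

# Hypothetical items mimicking an instances.list response
instances = [
    {"name": "vm-a", "networkInterfaces": [{"networkIP": "172.23.0.10"}]},
    {"name": "vm-b", "networkInterfaces": [{"networkIP": "172.23.1.5"}]},
]
print([i["name"] for i in filter_by_ip_range(instances, "172.23.0.0", "172.23.0.170")])
# → ['vm-a']
```

Using ipaddress for the comparison avoids the pitfalls of comparing dotted-quad strings lexicographically.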
Thanks,
Rmp

Related

Gcloud CLI Policy Analyzer Query Filter String

I am trying to get the GCP service account key last-authentication time using the GCP Policy Analyzer API. I would like to filter the service account key last-authentication activity by the key ID. According to the documentation, this should be possible, but I do not understand how to phrase the --query-filter argument correctly.
gcloud policy-intelligence query-activity --activity-type=serviceAccountKeyLastAuthentication --project=PROJECT_ID --query-filter= "activities.fullResourceName"="//iam.googleapis.com/projects/PROJECT_ID/serviceAccounts/SERVICE_ACCOUNT_EMAIL/keys/KEY_ID"
ERROR: (gcloud.policy-intelligence.query-activity) unrecognized arguments: activities.fullResourceName=//iam.googleapis.com/projects/PROJECT_ID/serviceAccounts/SERVICE_ACCOUNT_EMAIL/keys/KEY_ID
or this command:
gcloud policy-intelligence query-activity --activity-type=serviceAccountKeyLastAuthentication --project=PROJECT_ID --query-filter="activities.fullResourceName='//iam.googleapis.com/projects/PROJECT_ID/serviceAccounts/SERVICE_ACCOUNT_EMAIL/keys/KEY_ID'"
ERROR: (gcloud.policy-intelligence.query-activity) INVALID_ARGUMENT: Invalid filter string.
or this command (I tried several different quotation-mark patterns, just trying to figure out how the --query-filter flag works):
gcloud policy-intelligence query-activity --activity-type=serviceAccountKeyLastAuthentication --project=PROJECT_ID --query-filter="activities.full_resource_name='//iam.googleapis.com/projects/PROJECT_ID/serviceAccounts/SERVICE_ACCOUNT_EMAIL/keys/KEY_ID'"
ERROR: (gcloud.policy-intelligence.query-activity) INVALID_ARGUMENT: Invalid filter string with unexpected filter field. Expected: activities.fullResourceName.
According to this documentation:
https://cloud.google.com/policy-intelligence/docs/activity-analyzer-service-account-authentication
https://googleapis.github.io/google-api-python-client/docs/dyn/policyanalyzer_v1.projects.locations.activityTypes.activities.html
I am quite unsure why this is not filtering, or what the appropriate way to call the "query-filter" method is.
Note, when I run:
gcloud policy-intelligence query-activity --activity-type=serviceAccountKeyLastAuthentication --project=PROJECT_ID
It returns the values for all of the service accounts. But in my case, a project could have hundreds of service accounts, each with up to 10 keys. I would like my returned data to be more granular.
Thanks.
Note, this is a related question: GCP SDK API for Policy Analyzer for Python, also asked by me, but its scope was Python, not the gcloud CLI.
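While the --query-filter quoting remains unclear, one workaround is to take the unfiltered response and narrow it client-side. A sketch, assuming the activity records carry a fullResourceName field as described in the Policy Analyzer docs linked above (the sample records are made up):

```python
def activities_for_key(activities, key_id):
    """Keep only activity records whose resource name ends with the given key ID."""
    suffix = "/keys/" + key_id
    return [a for a in activities
            if a.get("fullResourceName", "").endswith(suffix)]

# Hypothetical activity records mimicking the API response
activities = [
    {"fullResourceName": "//iam.googleapis.com/projects/p/serviceAccounts/sa@p.iam.gserviceaccount.com/keys/abc123"},
    {"fullResourceName": "//iam.googleapis.com/projects/p/serviceAccounts/sa@p.iam.gserviceaccount.com/keys/def456"},
]
print(len(activities_for_key(activities, "abc123")))
# → 1
```

This trades extra response volume for not having to fight the server-side filter grammar, which may matter for projects with hundreds of service accounts.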

AWS API Gateway: Use dynamic part of resource in integration uri

I want to re-use a dynamic resource value for the integration uri in an AWS CDK API Gateway definition.
Let's say I have two services:
service-football
service-tennis
both have one endpoint "players".
Now I want one single API Gateway definition for both the football and the tennis players. I try to define this dynamically as follows:
endpt = rest_api.root.add_resource(path_part='endpoint')
sport_endpt = endpt.add_resource("{sport}")
players_endpt = sport_endpt.add_resource("players")
players_endpt.add_method(
    http_method='GET',
    ...
    integration=apigw.Integration(
        type=apigw.IntegrationType.HTTP,
        integration_http_method='GET',
        uri=uri + '/service-*HERE_THE_SPORT_PARAM*/players',
    )
)
In the integration part I want to build the URI dynamically from the resource value {sport}, so that I get something like /service-football/players or /service-tennis/players, which I can extend easily by just creating service endpoints with the same conventions.
If you want a single API endpoint definition, you need to have the sport included as a path parameter, so it would be something like BASE_URI/service/{sport}/players.
The corresponding CDK would be:
endpt = rest_api.root.add_resource(path_part='endpoint')
service_endpt = endpt.add_resource(path_part='service')
sport_endpt = service_endpt.add_resource("{sport}")
players_endpt = sport_endpt.add_resource("players")
players_endpt.add_method(
    http_method='GET',
    ...
    integration=apigw.Integration(
        type=apigw.IntegrationType.HTTP,
        integration_http_method='GET',
        uri=uri + '/service/{sport}/players',
    )
)
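What API Gateway does with that {sport} placeholder is substitute the incoming path parameter into the integration URI at request time (note that with a raw apigw.Integration you may also need to map the parameter through request_parameters, e.g. 'integration.request.path.sport': 'method.request.path.sport' - treat that as an assumption to verify for your setup). The substitution itself behaves like this plain-Python sketch (the backend host is hypothetical):

```python
def resolve_integration_uri(template, path_params):
    """Fill {param} placeholders in an integration URI template."""
    uri = template
    for name, value in path_params.items():
        uri = uri.replace("{" + name + "}", value)
    return uri

# Hypothetical backend host, matching the /service/{sport}/players pattern
template = "https://backend.example.com/service/{sport}/players"
print(resolve_integration_uri(template, {"sport": "football"}))
# → https://backend.example.com/service/football/players
print(resolve_integration_uri(template, {"sport": "tennis"}))
# → https://backend.example.com/service/tennis/players
```

This is why the services must follow the same path convention: one template serves every sport.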

How do we filter entities whose names do not start with "msdn" using the MS Dynamics Web API

I want to get all entities whose names do not start with the prefix 'msdn' from MS Dynamics.
I tried the API call below and got an error.
GET /api/data/v9.1/EntityDefinitions?$select=LogicalName&$filter=not startswith(LogicalName,%27msdn%27)
Response:
{
  "error": {
    "code": "0x0",
    "message": "The \"startswith\" function isn't supported for Metadata Entities."
  }
}
I referred https://learn.microsoft.com/en-us/powerapps/developer/common-data-service/webapi/query-data-web-api#standard-query-functions
I have checked this in one of my environments as well. What you require is not possible.
You will have to do it in two steps:
Retrieve all entities, then filter them in your local program, whether that is JavaScript/C#, JSON filtering, Power Automate, or something else.
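The two-step approach can be sketched in a few lines of Python: query EntityDefinitions without the unsupported startswith filter, then drop the prefixed names locally. The sample records below stand in for the response's value array:

```python
def exclude_prefix(entities, prefix="msdn"):
    """Drop metadata entities whose LogicalName starts with the given prefix."""
    return [e for e in entities if not e["LogicalName"].startswith(prefix)]

# Hypothetical records mimicking EntityDefinitions?$select=LogicalName
entities = [
    {"LogicalName": "account"},
    {"LogicalName": "msdn_sample"},
    {"LogicalName": "contact"},
]
print([e["LogicalName"] for e in exclude_prefix(entities)])
# → ['account', 'contact']
```

The same filter translates directly to C# LINQ or a JavaScript Array.filter if that is where your code lives.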

Credentials Error when integrating Google Drive with BigQuery

I am using Google BigQuery, and I want to integrate it with Google Drive. In BigQuery I give the Google spreadsheet URL to upload my data, and it updates well, but when I write the query in the Google add-on (OWOX BI BigQuery Reports):
Select * from [datasetName.TableName]
I am getting an error:
Query failed: tableUnavailable: No suitable credentials found to access Google Drive. Contact the table owner for assistance.
I just faced the same issue in some code I was writing - it might not directly help you here, since it looks like you are not responsible for the code, but it might help someone else, or you can ask the person who does write the code you're using to read this :-)
So I had to do a couple of things:
Enable the Drive API for my Google Cloud Platform project in addition to BigQuery.
Make sure that your BigQuery client is created with both the BigQuery scope AND the Drive scope.
Make sure that the Google Sheets you want BigQuery to access are shared with the "...#appspot.gserviceaccount.com" account that your Google Cloud Platform project identifies itself as.
After that I was able to successfully query the Google Sheets backed tables from BigQuery in my own project.
What was previously said is right:
Make sure that your dataset in BigQuery is also shared with the Service Account you will use to authenticate.
Make sure your Federated Google Sheet is also shared with the service account.
The Drive API should also be active.
When using the OAuth client you need to inject both scopes, for Drive and for BigQuery.
If you are writing Python:
credentials = GoogleCredentials.get_application_default() does not let you inject scopes (at least, I didn't find a way), so:
Build your request from scratch:
from httplib2 import Http
from oauth2client.service_account import ServiceAccountCredentials
from googleapiclient.discovery import build

scopes = (
    'https://www.googleapis.com/auth/drive.readonly',
    'https://www.googleapis.com/auth/cloud-platform',
)
credentials = ServiceAccountCredentials.from_json_keyfile_name(
    '/client_secret.json', scopes)
http = credentials.authorize(Http())

bigquery_service = build('bigquery', 'v2', http=http)
query_request = bigquery_service.jobs()
query_data = {
    'query': 'SELECT * FROM [test.federated_sheet]'
}
query_response = query_request.query(
    projectId='hello_world_project',
    body=query_data).execute()

print('Query Results:')
for row in query_response['rows']:
    print('\t'.join(field['v'] for field in row['f']))
This likely has the same root cause as:
BigQuery Credential Problems when Accessing Google Sheets Federated Table
Accessing federated tables in Drive requires additional OAuth scopes and your tool may only be requesting the bigquery scope. Try contacting your vendor to update their application?
If you're using pd.read_gbq() as I was, then this would be the best place to get your answer: https://github.com/pydata/pandas-gbq/issues/161#issuecomment-433993166
import pandas_gbq
import pydata_google_auth
import pydata_google_auth.cache

# Instead of get_user_credentials(), you could do default(), but that may not
# be able to get the right scopes if running on GCE or using credentials from
# the gcloud command-line tool.
credentials = pydata_google_auth.get_user_credentials(
    scopes=[
        'https://www.googleapis.com/auth/drive',
        'https://www.googleapis.com/auth/cloud-platform',
    ],
    # Use reauth to get new credentials if you haven't used the drive scope
    # before. You only have to do this once.
    credentials_cache=pydata_google_auth.cache.REAUTH,
    # Set auth_local_webserver to True to have a slightly more convenient
    # authorization flow. Note, this doesn't work if you're running from a
    # notebook on a remote server, such as with Google Colab.
    auth_local_webserver=True,
)
sql = """SELECT state_name
FROM `my_dataset.us_states_from_google_sheets`
WHERE post_abbr LIKE 'W%'
"""
df = pandas_gbq.read_gbq(
    sql,
    project_id='YOUR-PROJECT-ID',
    credentials=credentials,
    dialect='standard',
)
print(df)

WSO2 API Manager always expects a query parameter when both query and path parameters are present?

Does anyone know how to specify all query parameters as optional through the URL pattern specification in the WSO2 API Manager UI (path params are also present in the same URI)? For example, I have an API which will be registered in WSO2 API Manager, and its URI is 'search//?type="xx"&status="yy"'; currently both of these query parameters (type and status) are optional, and the URI also contains a path param.
I specified the URL pattern "search/{stationcode}*". Now, when I call with the path param only, it gives the error "No matching resource found in the API for the given request".
When I call "search/TAMK", it is not working. But if I use "search/TAMK?" or "search/TAMK*", it works just fine.
I tried to use "search/{stationcode}/*", but it still did not solve the issue. It always expects at least one character for the query param. Can anyone please help me solve this? Without a query parameter it should work, right?
I would suggest you use the new API Manager (1.9) and try the following.
Create an API with the backend URL of
http://...../search
When you define the URL patterns, you can define the following pattern:
/{stationcode}*
Then add 'type' and 'status' as optional parameters in the design view of the API creation page. Choose the parameter type as 'query' and Required as 'False'.