How to query Google Cloud logs by scope/storage via API or Java libraries

I have defined log sinks to various storage buckets.
In the GCP Logs Explorer (https://console.cloud.google.com/logs/query) I can set the query scope to the project or to specified log storage buckets.
How can I achieve the same feature (scoping by specified storages) using the GCP Logging API and/or the Google Java libraries?
Any links to docs?

You can achieve the same feature using the GCP Logging API, by using the resourceNames[] query parameter. Note that BUCKET_ID here refers to a log bucket ID, not a storage bucket ID. Refer to this documentation for more information.
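For example, a minimal REST sketch of such a scoped query (the project, location, bucket, and view IDs below are placeholders) would POST to the entries:list method:
POST https://logging.googleapis.com/v2/entries:list
{
  "resourceNames": [
    "projects/PROJECT_ID/locations/LOCATION/buckets/BUCKET_ID/views/VIEW_ID"
  ],
  "filter": "severity>=ERROR",
  "pageSize": 50
}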
In resource-oriented APIs, resources are named entities, and resource names are their identifiers. Each resource must have its own unique resource name. The resource name is made up of the ID of the resource itself, the IDs of any parent resources, and its API service name.
gRPC APIs should use scheme-less URIs for resource names. They generally follow the REST URL conventions and behave much like network file paths. They can be easily mapped to REST URLs: see the Standard Methods section for details.
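If you want the same scoping from Java, here is a minimal sketch using the generated v2 client from the google-cloud-logging library (the project, location, and bucket IDs are placeholders, and _AllLogs is assumed to be the bucket's default view):
import com.google.cloud.logging.v2.LoggingClient;
import com.google.logging.v2.ListLogEntriesRequest;
import com.google.logging.v2.LogEntry;

public class ListBucketEntries {
  public static void main(String[] args) throws Exception {
    // The client picks up Application Default Credentials.
    try (LoggingClient client = LoggingClient.create()) {
      ListLogEntriesRequest request = ListLogEntriesRequest.newBuilder()
          // Scope the query to a log bucket view instead of the whole project.
          .addResourceNames("projects/PROJECT_ID/locations/LOCATION/buckets/BUCKET_ID/views/_AllLogs")
          .setFilter("severity>=ERROR")
          .setPageSize(50)
          .build();
      for (LogEntry entry : client.listLogEntries(request).iterateAll()) {
        System.out.println(entry.getLogName() + ": " + entry.getTextPayload());
      }
    }
  }
}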

Related

What is the S3 hostedZoneId input used for?

Digging through our Pulumi code, I see that it sets the hostedZoneId on the S3 bucket it creates and I don't understand what it's used for.
The bucket holds internal content (Pulumi state files) and is not set as a static website.
The Pulumi docs in AWS S3 Bucket hostedZoneId only state:
The Route 53 Hosted Zone ID for this bucket's region.
with a link to what appears to be an irrelevant page (looks like a copy-paste error since that link is mentioned earlier on the page).
S3 API docs don't mention the field either.
Terraform's S3 bucket docs, which Pulumi often relies on and which are also a good reference for the S3 API in general, expose this as an output, but not as an input attribute.
Does anyone know what this attribute is used for?

How to access BigQuery using a key

I was given a key, which happens to be a .json file, to access BigQuery data, but I have no idea where to put it or how to use it. I tried the BigQuery console but couldn't find where to supply the key to view the data. I have no experience with BigQuery, and I searched for tutorials to no avail.
I assume that you have created a service account key with assigned roles (e.g. roles/bigquery.admin) and downloaded the JSON file that contains it.
You will only need it when you access the BigQuery API through client libraries, such as Python or Java. As you can see in the documentation, you need to set the environment variable GOOGLE_APPLICATION_CREDENTIALS to the path of the JSON file that contains your service account key to be able to access BigQuery resources.
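Alternatively, instead of the environment variable, you can point the client at the key file explicitly. A minimal Java sketch (the key path is a placeholder):
import com.google.auth.oauth2.ServiceAccountCredentials;
import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.Dataset;
import java.io.FileInputStream;

public class BigQueryWithKey {
  public static void main(String[] args) throws Exception {
    // Build a client from the downloaded service account key file.
    BigQuery bigquery = BigQueryOptions.newBuilder()
        .setCredentials(ServiceAccountCredentials.fromStream(
            new FileInputStream("/path/to/service-account-key.json")))
        .build()
        .getService();
    // List the datasets the service account is allowed to see.
    for (Dataset dataset : bigquery.listDatasets().iterateAll()) {
      System.out.println(dataset.getDatasetId().getDataset());
    }
  }
}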
When using the web UI in the Google Cloud Console, you don't need the JSON key file. You only need to take care of assigning appropriate roles to the service account you have created. Please take a look at the following documentation.
Additionally, I would like to share with you the introduction to authentication, which is really important.
I hope you find the above pieces of information useful.

Attach Existing Cloud Resource (ex: S3 Bucket) to a Pulumi Project

Firstly, I love Pulumi.
We're trying to propose Pulumi as a solution for a distributed architecture and it is going swimmingly. The uncertainty I have right now is whether it's possible to attach an existing cloud resource to the Pulumi configuration.
There is already an existing S3 bucket with media. What I'm wondering is whether it is possible to define that S3 bucket in our Pulumi config, or does Pulumi have to be the creator of a cloud resource before it can manage it?
This is possible with the get function of a resource. For an S3 bucket named "tpsReports-4f64efc" and a Lambda function named "zipTpsReports-19d51dc", it would look like this:
const tpsReports = aws.s3.Bucket.get("tpsReports", "tpsReports-4f64efc");
const zipFunc = aws.lambda.Function.get("zipTpsReports", "zipTpsReports-19d51dc");
When you run your Pulumi program, the status of these resources will say read instead of create or update.
If you want to go one step further and adopt an existing resource to be fully managed by Pulumi, this blog post documents the entire process.

Cloud Files API with Indexable Metadata

I would like to use a cloud file system that supports adding searchable metadata. I want to use this metadata to store keys from my application to associate with the document.
E.g.
File:
/XYZ/image.png
Metadata:
Person-Id:12345
Group-Id:23456
Other-Id:3456
I would then like to use an API to search (very fast) for documents by Person-Id or Group-Id. I understand that I could create this mapping table myself (in MySQL within the app), but is there a cloud files solution (Google Drive, Rackspace, Amazon) that supports this use case already?
Thanks
You can make use of Google's Directory API to insert this kind of group data, which is hosted on Google's cloud.
Send a POST request like this:
POST https://www.googleapis.com/admin/directory/v1/groups/groupKey/members
The required parameters include the groupKey, which identifies the group in the API request. The value can be the group's email address, a group alias, or the unique group ID.
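For example, a minimal request body for adding a single member (the email address is a placeholder; the role can be MEMBER, MANAGER, or OWNER):
POST https://www.googleapis.com/admin/directory/v1/groups/groupKey/members
{
  "email": "user@example.com",
  "role": "MEMBER"
}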
You can try this directly with this official documentation.
For inserting a single member, visit this official documentation.

How can I get a file's properties for a file in OneDrive?

I am using the REST API for OneDrive. I have the name of a file in the user's storage and I want to obtain the properties for this file. According to the documentation, a file's properties can be retrieved
if you have the file ID (http://msdn.microsoft.com/en-us/library/dn659731.aspx). So I need the file ID, and the only way I see to obtain it is to search the whole storage, which is really unnecessary.
Is there a way to find the properties of a file (with a known name) with a single request to the service?
Ideally the API would support access by path which would do what you require (assuming you have the full path and not just the name). Unfortunately, to my knowledge that isn't supported.
There is a heavy-handed approach that may work for you though - you can use the search capabilities of the API to find files with the name you specify:
GET /[userid]/skydrive/search?q=MyVideo.mp4
The documentation is available at the link below:
http://msdn.microsoft.com/en-us/library/dn631847.aspx
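To complete the round trip: the search response contains a data array of file objects, each with an id field. Once you have that ID, a single GET (sketched against the Live Connect API of the time; the ID and token are placeholders) returns the file's properties:
GET https://apis.live.net/v5.0/FILE_ID?access_token=ACCESS_TOKEN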