Google Play bucket not shown in Cloud Storage - google-bigquery

I am trying to load some Google Play reports into my BigQuery project, but I am having issues finding the bucket in Cloud Storage.
I have copied the Cloud Storage URL from the Google Play console (gs://pubsite_prod_rev_... format).
When I open my Cloud Storage, this bucket is not in the list of available buckets.
But if I enter this URL in a Data Transfer from bucket to dataset, it works (although not all reports are loaded to my dataset :( ).
If I enter this URL in a Data Transfer from bucket to bucket, it won't work, because the transfer lacks some permissions on the source bucket. But I cannot change the permissions on this Google Play bucket because I can't see it in my bucket list.
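For reference, this is roughly how I check whether my credentials can read the bucket at all, since it never shows up in my bucket list (a minimal sketch with the google-cloud-storage Python client; the bucket name below is a placeholder for my real pubsite_prod_rev_... URL):

```python
from google.cloud import storage

# Placeholder for the real pubsite_prod_rev_... bucket name from the Play console.
BUCKET_NAME = "pubsite_prod_rev_0000000000000000"

client = storage.Client()

# The Play reporting bucket is not owned by my project, so it does not appear
# in client.list_buckets(); it can only be addressed directly by name.
for blob in client.list_blobs(BUCKET_NAME, max_results=5):
    print(blob.name, blob.size)
```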
So my question is: what could be the reason this bucket is not displayed in my storage, and how can I get access to it?
Thanks!

Related

Importing data from S3 to BigQuery via BigQuery Omni (location restrictions)

I am trying to import some data from an S3 bucket into BigQuery, and I ended up seeing the BigQuery Omni option.
However, when I try to connect to the S3 bucket, I see that I am given a set of regions to choose from; in my case, aws-us-east-1 and aws-ap-northeast-2, as in the attached screenshot.
My data in the S3 bucket is in the region eu-west-2.
I am wondering why BQ allows us to pick only specific regions for S3.
What should I be doing so that I can query data from an S3 bucket in the region where the data is uploaded to?
The S3 service is unusual in that bucket names are globally unique and the bucket's ARN contains no region information. Buckets do live in a specific region, but they can be accessed from any region.
My best guess here is that the connection location is the S3 API endpoint that BigQuery will connect to when it attempts to get the data. If you don't see eu-west-2 as an option, try using us-east-1. From that endpoint it is always possible to find out the bucket's location and then build the appropriate S3 client.
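A rough sketch of that last step with boto3 (the bucket name is a placeholder; get_bucket_location can be called against any region's endpoint):

```python
import boto3

# Probe from any region's endpoint to discover where the bucket lives.
probe = boto3.client("s3", region_name="us-east-1")
resp = probe.get_bucket_location(Bucket="my-example-bucket")  # placeholder name

# LocationConstraint is None for buckets created in us-east-1.
bucket_region = resp.get("LocationConstraint") or "us-east-1"

# Build a client pinned to the bucket's actual region for the data itself.
s3 = boto3.client("s3", region_name=bucket_region)
print("bucket lives in:", bucket_region)
```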

Copy Files from S3 SignedURL to GCS Signed URL

I am developing a service in which two different cloud storage providers are involved. I am trying to copy data from an S3 bucket to GCS.
To access the data I have been offered signed URLs, and to upload the data to GCS I also have signed URLs available which allow me to write content into a specified storage path.
Is there a possibility to move this data "in cloud"? Downloading from S3 and re-uploading the content to GCS would create bandwidth problems.
I must also mention that this is an on-demand job and it only moves a small number of files. I cannot do a full bucket transfer.
Kind regards
You can use Skyplane to move data across cloud object stores. To move a single file from S3 to Google Cloud Storage, you can use the command:
skyplane cp s3://<BUCKET>/<FILE> gcs://<BUCKET>/<FILE>
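If the signed URLs are the only access you have (Skyplane generally expects cloud credentials rather than signed URLs, so treat that as an assumption to verify), a minimal sketch of relaying a small file between the two signed URLs with Python's requests library could look like the following. Both URL variables are placeholders, and the bytes still pass through whatever machine runs this, which may be acceptable for a small number of files:

```python
import requests

# Placeholders: the pre-signed GET URL for the S3 object and the
# pre-signed PUT URL for the target GCS path.
s3_signed_get_url = "https://example-bucket.s3.amazonaws.com/data.csv?X-Amz-Signature=..."
gcs_signed_put_url = "https://storage.googleapis.com/example-bucket/data.csv?X-Goog-Signature=..."

# Download the object; it is held in memory, so this suits small files only.
resp = requests.get(s3_signed_get_url, timeout=60)
resp.raise_for_status()

# Upload the same bytes to the GCS signed URL with a plain PUT.
put = requests.put(gcs_signed_put_url, data=resp.content, timeout=60)
put.raise_for_status()
```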

AWS S3 Folder wise metrics

We are using Grafana's CloudWatch data source for AWS metrics. We would like to differentiate folders in an S3 bucket with respect to their sizes and show them as graphs. We know that CloudWatch doesn't give object-level metrics, only bucket-level ones. Please let us know if there is any possible solution out there to monitor the size of the folders in the bucket.
Any suggestion on the same is appreciated.
Thanks in advance.
Amazon CloudWatch provides daily storage metrics for Amazon S3 buckets but, as you mention, these metrics are for the whole bucket, rather than folder-level.
Amazon S3 Inventory can provide a daily CSV file listing all objects. You could load this information into a database or use Amazon Athena to query the contents.
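As a rough illustration of the Athena route (the table and bucket names below are hypothetical, and it assumes the inventory is configured to include the optional size field), the per-top-level-folder totals could be queried with something like:

```python
import boto3

# Hypothetical Athena table built over the S3 Inventory files.
QUERY = """
SELECT regexp_extract(key, '^([^/]+)/', 1) AS folder,
       sum(size) AS total_bytes
FROM s3_inventory
GROUP BY 1
ORDER BY total_bytes DESC
"""

athena = boto3.client("athena")
athena.start_query_execution(
    QueryString=QUERY,
    QueryExecutionContext={"Database": "default"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},  # placeholder
)
# Poll get_query_execution / get_query_results, or read the CSV written to the
# output location, to feed the numbers into Grafana.
```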
If you require storage metrics at a higher resolution than daily, then you would need to track this information yourself. This could be done with:
An Amazon S3 Event that triggers an AWS Lambda function whenever an object is created or deleted
An AWS Lambda function that receives this information and updates a database (a minimal sketch of such a function follows this list)
Your application could then retrieve the storage metrics from the database
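A minimal sketch of such a Lambda function, assuming a hypothetical DynamoDB table named folder-sizes keyed by folder and handling only object-created notifications (S3 delete events do not include the object size, so deletes would need an extra lookup):

```python
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("folder-sizes")  # hypothetical table with partition key "folder"

def handler(event, context):
    # Invoked by S3 "ObjectCreated:*" notifications.
    for record in event.get("Records", []):
        key = record["s3"]["object"]["key"]
        size = record["s3"]["object"].get("size", 0)
        # Treat the first path segment as the "folder".
        folder = key.split("/", 1)[0] if "/" in key else "(root)"
        # ADD creates the counter on first use and increments it afterwards.
        table.update_item(
            Key={"folder": folder},
            UpdateExpression="ADD total_bytes :delta",
            ExpressionAttributeValues={":delta": size},
        )
```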
Thanks for the reply John.
However, I found a solution for it using s3_exporter. It gives metrics according to the size of the folders and sub-folders inside an S3 bucket.

What is the meaning of image upload, and how can we use it?

I know this might be a very very basic and easy question to ask, but somehow I could not understand the difference between the two.
I googled a lot and read many things but could not find an answer to distinguish the two.
I was reading the FAQs of Cloudinary, which state that:
Cloudinary covers from image upload, storage
So my question is: what is image upload vs image storage? Secondly, why do we upload the images?
As a normal user, I understand that upload is transferring files to different systems, but what is its use in Cloudinary then?
Your assumption is correct: upload is transferring files from one system (local drive, a remote URL, other storage in the cloud such as S3) to a different system, for example Cloudinary storage.
Image storage is the place where the image lives and the amount of storage it takes.
So, for example, if I have a 100KB image A.jpg on my computer's local drive, I can upload it to my Cloudinary account (storage in the cloud), and after the upload I can check my storage and see that I have 100KB in my storage.
Hope that helps :)
Cloudinary is a cloud-based service, and as such, in order to use their image management solutions such as image manipulation, optimization and delivery, you have to upload your images to the cloud. Images uploaded to Cloudinary are stored in the cloud utilizing Amazon's S3 service.
Cloudinary provides a free tier where you can store up to 75,000 images with a managed storage of 2GB, 7,500 monthly transformations, and 5GB of monthly net viewing bandwidth.
As you said, uploading is transferring a file to a different system. With Cloudinary you can upload a local file, upload a file from a remote HTTP(S) URL, or upload a file from an S3 bucket.
In conclusion, Cloudinary isn't just a cloud storage service, we upload images to Cloudinary so that we can perform all kinds of image manipulations to images.
For more details you can read this documentation:
http://cloudinary.com/documentation/php_image_upload
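For illustration, the same upload options with Cloudinary's Python SDK would look roughly like this (the credentials are placeholders, and uploading straight from S3 normally also requires allowing the bucket in your Cloudinary settings):

```python
import cloudinary
import cloudinary.uploader

# Placeholder credentials from the Cloudinary dashboard.
cloudinary.config(cloud_name="demo", api_key="API_KEY", api_secret="API_SECRET")

# Upload a local file.
cloudinary.uploader.upload("A.jpg")

# Upload from a remote HTTP(S) URL.
cloudinary.uploader.upload("https://example.com/images/photo.jpg")

# Upload from an S3 bucket (the bucket typically has to be allowed
# for your Cloudinary account first).
cloudinary.uploader.upload("s3://my-bucket/photo.jpg")
```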

Can I load every item in a Google Cloud Storage bucket into a BigQuery Table without listing every filename?

I write new log files to a Google Cloud Storage bucket every 2-3 minutes with data from my webserver (pipe-separated-values). I have thousands of ~1MB files in a single Google Cloud Storage bucket, and want to load all the files into a BigQuery table.
The "bq load" command seems to require individual files, and can't take an entire bucket, or bucket with prefix.
What's the best way to load thousands of files in a gs bucket? Do I really have to get the URI of every single file, as opposed to just specifying the bucket name or bucket and prefix to BigQuery?
You can use glob-style wildcards. E.g. gs://bucket/prefix*.txt.
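As a rough sketch with the Python BigQuery client (dataset, table, and bucket names are placeholders), a single wildcard URI loads every matching pipe-delimited file in one job; the bq load command accepts the same kind of wildcard URI:

```python
from google.cloud import bigquery

client = bigquery.Client()

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    field_delimiter="|",  # pipe-separated values
    autodetect=True,      # or supply an explicit schema
)

# One wildcard URI covers every object matching the prefix.
load_job = client.load_table_from_uri(
    "gs://my-log-bucket/logs/access_*",   # placeholder bucket/prefix
    "my_dataset.webserver_logs",          # placeholder destination table
    job_config=job_config,
)
load_job.result()  # wait for the load job to finish
```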