We used the Google Cloud Function provided by Cloudflare to import data from Google Cloud Storage into Google BigQuery (refer to: https://developers.cloudflare.com/logs/analytics-integrations/google-cloud/). The cloud function was running into an error:
"Quota exceeded: Your table exceeded quota for imports or query appends per table"
I queried the INFORMATION_SCHEMA.JOBS_BY_PROJECT view and found that error_result.location is 'load_job_per_table.long'. The job ID is '26bb1792-1ca4-42c6-b61f-54abca74a2ee'.
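For reference, the lookup was roughly along these lines (a sketch; the region qualifier is an assumption and may differ in your project):

from google.cloud import bigquery

client = bigquery.Client()

# Look up the failed load job in the INFORMATION_SCHEMA jobs view.
sql = """
SELECT job_id, error_result
FROM `region-us`.INFORMATION_SCHEMA.JOBS_BY_PROJECT
WHERE job_id = '26bb1792-1ca4-42c6-b61f-54abca74a2ee'
"""

for row in client.query(sql).result():
    print(row.job_id, row.error_result)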
I looked at the Quotas page for the BigQuery API service, but none of the quota statuses showed as exceeded (some are blank, though).
Could anyone help me work out which Google Cloud quota or limit was exceeded, and how to increase it? The Cloudflare function is used by another Google account and works there without any error.
Thanks,
Jinglei
Try looking for the specific quota error in Cloud Logging, where the corresponding log entries are recorded.
I had a similar issue with the BigQuery Data Transfer quota being reached. Here is an example Cloud Logging filter:
resource.type="cloud_function"
severity=ERROR
timestamp>="2021-01-14T00:00:00-08:00"
textPayload:"429 Quota"
Change the timestamp accordingly, and maybe remove the textPayload filter.
You can also just click through the interface to filter for severe errors and search in the UI.
Here is another example:
severity=ERROR
timestamp>="2021-01-16T00:00:00-08:00"
NOT protoPayload.status.message:"Already Exists: Dataset ga360-bigquery-azuredatalake"
NOT protoPayload.status.message:"Syntax error"
NOT protoPayload.status.message:"Not found"
NOT protoPayload.status.message:"Table name"
NOT textPayload:"project_transfer_config_path"
NOT protoPayload.methodName : "InsertDataset"
textPayload:"429 Quota"
Identifying the quota comes down to looking at which operations your function performs; it could be any of them: inserts, updates, data volume, external IP usage, and so on. Then analyse the frequency or metric values of those operations and compare them against the Google-defined quotas; that will give you an indicator of which quota is being exceeded.
You can refer to the following video on the same topic.
CloudFlare says they have a fix coming for the quota issue: https://github.com/cloudflare/cloudflare-gcp/issues/72
We're running a nodeJS script to identify erroneous data values (negative numbers) and we're unable to determine which user captured the values without logging in and inspecting the data entry form.
There doesn't seem to be any documentation for including user identification data in the analytics API endpoint.
Has anyone figured this out yet?
Many APIs are described in the DHIS2 documentation, including these:
/api/33/dataValueSets
/api/33/dataValues
But a more suitable API for this case would be the AUDIT API:
/api/33/audits/dataValue
Detailed documentation is available at this link:
https://docs.dhis2.org/en/develop/using-the-api/dhis-core-version-235/web-api.html#webapi_auditing_aggregate_audits
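For example, a quick sketch with Python's requests library (the instance URL, credentials, UIDs, and response field names below are placeholders/assumptions; check the documentation above for the exact parameters):

import requests

base_url = "https://play.dhis2.org/2.35"   # placeholder DHIS2 instance
params = {
    "de": "DATA_ELEMENT_UID",              # placeholder data element UID
    "pe": "2021-01",                       # placeholder period
    "ou": "ORG_UNIT_UID",                  # placeholder organisation unit UID
    "pageSize": 50,
}

resp = requests.get(
    f"{base_url}/api/33/audits/dataValue",
    params=params,
    auth=("admin", "district"),            # placeholder credentials
)
resp.raise_for_status()

# Each audit record should include who modified the value and what it was.
for audit in resp.json().get("dataValueAudits", []):
    print(audit.get("modifiedBy"), audit.get("auditType"), audit.get("value"))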
I've run into a few different issues with the PagerDuty integration in Splunk Cloud.
The documentation on PagerDuty's site is either outdated or not applicable to Splunk Cloud, or else there's something wrong with the way my Splunk Cloud account is configured (it could be a permissions issue): https://www.pagerduty.com/docs/guides/splunk-integration-guide/. I don't see an Alert Actions page in Splunk Cloud, though I do have a Searches, Reports and Alerts page.
I've configured PD alerts in Splunk using the alert_logevent app, but it's not clear whether I should instead be using some other app. These alerts do fire when there are search hits, but I'm seeing another issue (below). The alert_webhook app type seems like it might be appropriate, but I was unable to get it to work correctly. I cannot create an alert type using the pagerduty_incident app, although I can set it as a Trigger Action (I guess this is how it's supposed to work; I don't find the UI too intuitive here).
When my alerts fire and create incidents in PagerDuty, I do not see a way to set the PagerDuty incident severity.
Also, the PD incidents include a link back to Splunk, which I believe should open the query with the search hits that generated the alert. However, the link brings me to a page with a Page Not Found! error. It contains a link to "more information about my request", which brings up a Splunk query with no hits. This query looks like "index=_internal, host=SOME_HOST_ON_SPLUNK_CLOUD, source=*web_service.log, log_level=ERROR, requestid=A_REQUEST_ID". It is not clear to me if this is a config issue, a bug in Splunk Cloud, or possibly even a permissions issue for my account.
Any help is appreciated.
I'm also a Splunk Cloud + PagerDuty customer and ran into the same issue. The PagerDuty App for Splunk seems to create all incidents as Critical but you can set different severities with event rules.
One way to do this dynamically is to rename your Splunk alerts with the desired severity level and then create a PagerDuty event rule for each level that looks for the keyword in the Summary. For example...
If the following condition is met:
Summary contains "TEST"
Then perform the following actions:
Set severity = Info
[Screenshot: the example in the event rule edit screen]
It's a bit of a pain to rename your existing alerts in Splunk, but it works.
If your severity levels in Splunk are programmatically set, like in Enterprise Security, then another method would be to modify the PagerDuty App for Splunk to send the $alert.severity$ custom alert action token as a custom detail in the webhook payload and use that as the event rule condition instead of the Summary... but that seems harder.
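For reference, here is a rough illustration of that idea (not the Splunk app's actual code): an Events API v2 payload that carries the Splunk severity in custom_details, which an event rule could then match on. The routing key and field names are placeholders.

import requests

payload = {
    "routing_key": "YOUR_INTEGRATION_ROUTING_KEY",   # placeholder
    "event_action": "trigger",
    "payload": {
        "summary": "TEST: example Splunk alert",
        "source": "splunk-cloud",
        "severity": "warning",
        "custom_details": {
            # Value that $alert.severity$ would supply (placeholder).
            "splunk_alert_severity": "3",
        },
    },
}

resp = requests.post("https://events.pagerduty.com/v2/enqueue", json=payload)
resp.raise_for_status()
print(resp.json())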
I just started to use Amazon S3 storage for storing images uploaded from my app. I am able to access it via a URL:
https://s3.us-east-2.amazonaws.com/BUCKETNAME/.../image.png
Does this count as a GET request? How am I charged for referencing an image like this?
I am able to access it via a URL. Does this count as a GET request?
If you are pasting this URL in to your browser and pressing go, your browser will make a GET request for this resource, yes.
How am I charged for referencing an image like this?
AWS charges for storage, requests, and data transfer. Storage is priced per GB per month, requests are charged per 1,000 requests, and data transferred out is charged per GB. Their pricing charts can be found in their documentation:
https://aws.amazon.com/s3/pricing/
You are right, it's a GET request.
You pay for every 10k GET requests, for storage size, and of course for outbound traffic.
Take a look here:
https://blog.cloudability.com/aws-s3-understanding-cloud-storage-costs-to-save/
For future reference, if you want to access a file in Amazon S3 the URL needs to be something like:
bucketname.s3.region.amazonaws.com/foldername/image.png
Example: my-awesome-bucket.s3.eu-central-1.amazonaws.com/media/img/dog.png
Don't forget to set the object to public.
Inside S3, if you click on the object you will see a field called Object URL. That's the object's web address.
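If you'd rather not make the object public, a presigned URL gives temporary access to the same object. A minimal boto3 sketch, reusing the example bucket and key from above:

import boto3

s3 = boto3.client("s3", region_name="eu-central-1")

# Generate a temporary link to the object (valid for one hour).
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-awesome-bucket", "Key": "media/img/dog.png"},
    ExpiresIn=3600,
)
print(url)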
I have software that extracts intraday data from Google Finance. However, the API was updated by Google yesterday, so the software is giving this error:
Conversion from string HTML HEAD meta http-equiv="con" to type 'Double' is not valid.
I have one ionic.zip.dll file from that software. Can somebody help me update it, or advise how to resolve the above error?
I believe I have found the solution to the problem of Google Finance not downloading intraday prices: the domain name (the part at the beginning of the URL) has changed.
It seems Google is now serving data from finance.google.com and not www.google.com. If you use the www domain, you are redirected to finance.google.com, BUT in the process they somehow drop the &i query string parameter that determines the time interval. This defaults to 86400, which gets you daily data only.
So to get 2 days of 1-minute data for Apple, instead of
https://www.google.com/finance/getprices?p=2d&i=60&f=d,o,h,l,c,v&q=AAPL
do this instead:
https://finance.google.com/finance/getprices?p=2d&i=60&f=d,o,h,l,c,v&q=AAPL
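For example, with Python's requests library (a minimal sketch; it just prints the start of the response):

import requests

# Same query string as above, on the new finance.google.com host.
url = ("https://finance.google.com/finance/getprices"
       "?p=2d&i=60&f=d,o,h,l,c,v&q=AAPL")

resp = requests.get(url)
resp.raise_for_status()
print(resp.text[:500])  # first few lines of the intraday data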
Hope this helps :-)
Google is no longer serving the Converter API on the main domain either. We've updated the URLs as below:
"https://www.google.com/finance/converter?a=$amount&from=$from_Currency&to=$to_Currency"
to
"https://finance.google.com/finance/converter?a=$amount&from=$from_Currency&to=$to_Currency"
I am getting an error notifying me that I have exceeded the API quota. However, I have a quota of 150,000 requests and have only used up 12,000 of them. What is causing this error?
Example code:

from googleplaces import GooglePlaces

api_key = ''
google_places = GooglePlaces(api_key)
query_result = google_places.nearby_search(location="Vancouver, Canada", keyword="Subway")
for place in query_result.places:
    print(place.name)
Error Message:
googleplaces.GooglePlacesError: Request to URL https://maps.googleapis.com/maps/api/geocode/json?sensor=false&address=Vancouver%2C+Canada failed with response code: OVER_QUERY_LIMIT
The request from the error message is not a Places API request; it is a Geocoding API request.
https://maps.googleapis.com/maps/api/geocode/json?sensor=false&address=Vancouver%2C+Canada
It doesn't include any API key, which means you are limited to 2,500 geocoding requests per day without a key. Geocoding requests also have a queries-per-second (QPS) limit of 50 queries per second, so you might be exceeding the QPS limit as well.
https://developers.google.com/maps/documentation/geocoding/usage-limits
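For comparison, the same geocoding request with a key attached looks like this (YOUR_API_KEY is a placeholder):

import requests

resp = requests.get(
    "https://maps.googleapis.com/maps/api/geocode/json",
    params={"address": "Vancouver, Canada", "key": "YOUR_API_KEY"},
)
data = resp.json()
print(data["status"])  # e.g. OK, OVER_QUERY_LIMIT, REQUEST_DENIED
if data["status"] == "OK":
    print(data["results"][0]["geometry"]["location"])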
I am not sure why a library that is supposed to call the Places API web service actually calls the Geocoding API web service. Maybe it is some kind of fallback in case the Places API doesn't return any results.
For anyone looking at this post more recently: Google now requires an API key to use their Maps APIs, so please ensure you have an API key for your program. Also keep in mind the different throttle limits.
See below for more info:
https://developers.google.com/maps/documentation/geocoding/usage-and-billing#:~:text=While%20there%20is%20no%20maximum,side%20and%20server%2Dside%20queries.