GAM commands to get active devices in the last 7 days - google-chrome-os

I am trying to do some automated reports on Chrome OS devices we have.
I would like to get the number of devices that were used in the last 7 days, and the same for the last month.
Google Admin Reports can give me a CSV file with how many devices were used in the last seven days, but not automatically, and I can't change the 7-day window to a month.
I think it is possible to do this using GAM (Google Apps Manager), but I can't manage to get the right results.
I tried gam print cros query "sync:yyyy-mm-dd..yyyy-mm-dd", but it doesn't give me the same result as Google Admin Reports.
Does anyone have a clue how to do this? Or even how to automate it?

For those who want to do the same and export it to a .csv file:
For Chromebook activity over 7 days:
gam report usage customer parameters cros:num_7day_active_devices > C:\GAM\Activity_7days.csv
For Chromebook activity over 30 days:
gam report usage customer parameters cros:num_30day_active_devices > C:\GAM\Activity_30days.csv
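To automate it, here is a minimal sketch in Python of one way to schedule the export, assuming gam is installed and on the PATH; the C:\GAM output folder comes from the commands above, and the dated file names are just an illustration. You can run the script from Windows Task Scheduler (or cron) on whatever schedule you need.

import subprocess
from datetime import date

# Report parameters taken from the gam commands above.
REPORTS = {
    "7days": "cros:num_7day_active_devices",
    "30days": "cros:num_30day_active_devices",
}

for label, parameter in REPORTS.items():
    # Equivalent to: gam report usage customer parameters <parameter> > C:\GAM\Activity_<label>_<date>.csv
    out_file = r"C:\GAM\Activity_{}_{}.csv".format(label, date.today().isoformat())
    with open(out_file, "w") as f:
        subprocess.run(
            ["gam", "report", "usage", "customer", "parameters", parameter],
            stdout=f, check=True,
        )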

Related

BigQuery: Will switching from the BQ sandbox to BQ paid change the 60-day data limit set up in the sandbox?

Will switching from the BQ sandbox to BQ paid change the 60-day data limit set up in the sandbox? Also, will I be able to export all GA4 data (at least the last year) after switching to BQ paid?
Currently we only have 60 days of data in the BQ sandbox and want to know if moving to the paid service will remove this limitation.
I'm not sure if the change from 60 days is automatic; you may have to change it manually.
Unfortunately, you can't export old data from GA4. Once you are out of the sandbox and have changed the data limit, you will start to get more days stored.

Splunk Failed Login Report

I am relatively new to Splunk and I am trying to create a report that will display a hostname and the number of times that host failed to log in within the past five minutes, when it failed 3 or more times. The only way I was able to get the initial search results I want is to look only within the past 5 minutes, as you can see in my query:
index="wineventlog" EventCode=4625 earliest=-5min | stats count by host,_time | stats count by host | search count > 2
This returns the host and the count. The issue is that if I use this query in my report, it can run every five minutes, but the hosts that were listed previously get removed because they are no longer included in the search results.
I found ways to generate logs that I can then search for separately (http://docs.splunk.com/Documentation/Splunk/6.6.2/Alert/LogEvents) but it didn't work the way I expected.
I am looking for an answer to any of these questions that can help me get the intended results:
Can my original search be improved to still only get results where the failed logins were within 5 minutes but be able to search over any time period?
Is there a way to send the results from the query I already have to a report, where the results will not be cleared out when the search is run again?
Is there any other option I haven't considered to achieve the desired result?
If you only care about the last 5 minutes then search only the last 5 minutes. Searching more is just wasting resources.
Consider writing your results to a summary index (using collect) with a scheduled search and have your report/dashboard display values from the summary index.
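A minimal sketch of that approach, assuming a summary index named failed_logins_summary has already been created (the index name is just an example). Schedule this search to run every 5 minutes:

index="wineventlog" EventCode=4625 earliest=-5m@m latest=@m
| stats count by host
| where count > 2
| collect index=failed_logins_summary

Your report or dashboard can then search the summary index over any time range without losing earlier results, for example:

index=failed_logins_summary
| stats sum(count) as failures by host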

Coinbase API v2 Getting Historic Price for Multiple Days

I'm having some trouble with a Coinbase.com API call for historical data.
Previously, I was getting a variable number of days that would match the amount of space available on a terminal screen, with a request URL that looked like this:
https://api.coinbase.com/v2/prices/historic?currency=USD&days=76
This would pull the previous 76 days of price history. An example of the old output is here:
https://gist.github.com/KenDB3/f071a06ab3ef1a899d3cd8df8b40a049#file-coinbase-historic-days-example-2017-12-23-json
This stopped working a few days ago. The closest I can get to this is with this request URL (though I don't get the data I want):
https://api.coinbase.com/v2/prices/BTC-USD/historic?days=76
The output from this can be seen here:
https://gist.github.com/KenDB3/f071a06ab3ef1a899d3cd8df8b40a049#file-coinbase-historic-days-example-2018-07-19-json
In the second example, it is just displaying prices from the day of the query at different times of that day. What I really want is the first example output where it gives a single price per day going back as many days as the request is for.
The project this is connected to is here:
https://github.com/KenDB3/SyncBTC
Links that do not work:
https://api.coinbase.com/v2/prices/historic?currency=BTC-USD&days=76
(No Results)
https://api.coinbase.com/v2/prices/BTC-USD/historic?2018-07-15T00:00:00-04:00
(Does not pull data from 7/15/2018)
Any reason you aren't using Coinbase Pro?
The new API is very easy to use. Simply add the GET endpoint you want, followed by the parameters after a question mark. Here is the new historic rates API documentation:
https://docs.cloud.coinbase.com/exchange/reference/exchangerestapi_getproductcandles
The GET endpoint in the new API most similar to prices is "candles". It requires three parameters: the start and end times in ISO format, and the granularity in seconds. Here is an example:
https://api.pro.coinbase.com/products/BTC-USD/candles?start=2018-07-10T12:00:00&end=2018-07-15T12:00:00&granularity=900
EDIT: also, note that the timestamps are not in your local time zone; I believe they are GMT.
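For reference, a minimal sketch of the same candles call in Python using the requests library; the granularity of 86400 seconds (one candle per day) is chosen here to reproduce the one-price-per-day output of the old endpoint:

import requests

resp = requests.get(
    "https://api.pro.coinbase.com/products/BTC-USD/candles",
    params={
        "start": "2018-07-10T12:00:00",  # ISO 8601, GMT
        "end": "2018-07-15T12:00:00",
        "granularity": 86400,            # in seconds; 86400 = one candle per day
    },
)
resp.raise_for_status()

# Each candle is [time, low, high, open, close, volume].
for bucket_time, low, high, open_, close, volume in resp.json():
    print(bucket_time, close)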
Here is a wrapper for the CoinBase API for the export of Historical Data: https://pypi.org/project/Historic-Crypto/
It should provide the required outcome by invoking:
pip install Historic-Crypto
from Historic_Crypto import HistoricalData
new = HistoricalData('ETH-USD', 300, '2020-06-01-00-00').retrieve_data()  # pair, granularity in seconds, start date (yyyy-mm-dd-hh-mm)
For a full list of available cryptocurrencies:
pip install Historic-Crypto
from Historic_Crypto import Cryptocurrencies
data = Cryptocurrencies(extended_output=False).find_crypto_pairs()

Normal for BigQuery data to be higher than Firebase?

I'm running the following query to select the active users for a time frame on my project.
SELECT DISTINCT
active_users,
unix
FROM [mobileapp_logs].[dbo].[active_users]
WHERE (rtrim(app_id) + ':' + app_os) = 'tbl'
AND [aggregation] = '30-day-active'
AND [unix] BETWEEN 1491696000 AND 1494288000
AND active_users >= 100
The query seems to be working, but every row returned for a given day gives me about 10-30 more than what's in Firebase. Is this normal for BigQuery -> Firebase?
I'm not familiar with the table you are querying; according to the documentation, Firebase imports data into app_events_intraday_YYYYMMDD. Could you provide more information about [mobileapp_logs].[dbo].[active_users]?
According to different SO questions, it seems there may be a delay of a few days while offline devices upload their data. Also, Firebase updates data in BigQuery daily. Since you are querying up until today, you may be seeing data that has already been updated in Firebase but not in BigQuery. I would recommend changing your query to a range ending 3 days before today.

Where do you get Google BigQuery usage info (mainly for processed data)

I know that BigQuery offers the first "1 TB of data processed" per month for free, but I can't figure out where to look on my dashboard to see my monthly usage. I used to be able to "revert" to the old dashboard, which had the info, but for the past couple of weeks the "old dashboard" hasn't been accessible.
From the Google Cloud Console overview page for your project, click on the "details" section on the top-right, next to the charge estimate.
You'll get an estimate of the charges for the current month for each service and item in the service, including BigQuery analysis.
If you want to track this usage, you can also export the data to CSV every day by going into the Billing settings and enabling the usage export feature. Do not worry about the fact that it only mentions Compute Engine; it actually works for other services as well.
You can also access the billing history directly by clicking on the billing account link.
You will get a detailed bill with the usage info.
Post GCP Console Redesign Answer
The GCP console was redesigned and now the other answer here no longer applies, but it is still possible to view your usage by going to IAM & Admin -> Quotas.
What you're looking for is "BigQuery API: Query usage per day". It doesn't seem possible to view your usage over 30 days, unfortunately, but you can see your current usage (per day) and your peak usage over the past 7 days. You can also set a daily quota. If you're just working infrequently or doing a lot in one day, you can set a quota of 1 TiB and prevent yourself from blowing your whole allocation in one day.
You can try sending feedback about these limitations, like I did, by clicking the question mark at the top right and then "Send feedback".
Theo is correct that there is no way to view the number of bytes processed or billed since the start of the month (inside of the free tier) in the GCP Billing Console. However, you can extract the bytes processed and bytes billed data from logs in Cloud Logging and calculate the total bytes processed/billed since the start of the month inside of BigQuery.
Here are the steps to count total bytes billed in a month:
Under Cloud Logging, go to Logs Explorer (NOT the Legacy Logs Explorer) and run the following query in the query builder frame:
resource.type="bigquery_project" AND
protoPayload.metadata.jobChange.job.jobStats.queryStats.totalBilledBytes>1 AND
timestamp>="2021-04-01T00:00:00Z"
The timestamp clause is not actually necessary, but it will speed up the query. You can set timestamp >= <value> to any valid timestamp you want as long as it returns at least one result.
In the Query Results frame, click the "Action" button, and select "Create Sink".
In the window that opens, give your sink a name, click "Next", and in the "Select sink service" dropdown menu select "BigQuery dataset".
In the "Select BigQuery dataset" dropdown menu, either select an existing dataset where you would like to create your sink (which is a table containing logs) or if you prefer, choose "Create new BigQuery dataset.
Finally, you will likely want to check the box for Partition Table, since this will help you control costs whenever you query this sink. As of the time of this answer, however, Google limits partition tables to 4000 partitions, so you may find it is necessary to clear out old logs eventually.
Click "Create Sink" (there is no need for any inclusion or exclusion filters).
Run a query in BigQuery that produces bytes billed (i.e. a query that does not return a previously cached result). This is necessary to instantiate the sink. Moments after your query runs, you should see a table called <your_bigquery_dataset>.cloudaudit_googleapis_com_data_access
Enter the following Standard SQL query in the BigQuery query editor:
WITH bytes_table AS (
  SELECT
    JSON_VALUE(protopayload_auditlog.metadataJson,
      '$.jobChange.job.jobStats.createTime') AS date_time,
    JSON_VALUE(protopayload_auditlog.metadataJson,
      '$.jobChange.job.jobStats.queryStats.totalBilledBytes') AS billedbytes
  FROM
    `<your_project>.<your_bigquery_dataset>.cloudaudit_googleapis_com_data_access`
  WHERE
    EXTRACT(MONTH FROM timestamp) = 4
    AND EXTRACT(YEAR FROM timestamp) = 2021)
SELECT
  (SUM(CAST(billedbytes AS INT64)) / 1073741824) AS total_GB
FROM
  bytes_table;
You will want to change the month from 4 to whatever month you intend to query, and 2021 to whatever year you intend to query. Also, you may find it helpful to save this query as a view if you intend to rerun it periodically.
Be advised that your sink does not contain your past BigQuery logs, only BigQuery logs produced after you created the sink. Therefore, in the first month, the number of GB returned by this query will not be an accurate count of your bytes billed for the month unless you happened to have created the sink before running any queries in BigQuery during the current month.
Might be related to How can I monitor incurred BigQuery billings costs (jobs completed) by table/dataset in real-time?
If you are fine with using BigQuery itself to get that information (instead of using a UI), you can use something like this:
DECLARE gb_divisor INT64 DEFAULT 1024*1024*1024;
DECLARE tb_divisor INT64 DEFAULT gb_divisor*1024;
DECLARE cost_per_tb_in_dollar INT64 DEFAULT 5;
DECLARE cost_factor FLOAT64 DEFAULT cost_per_tb_in_dollar / tb_divisor;
SELECT
ROUND(SUM(total_bytes_processed) / gb_divisor,2) as bytes_processed_in_gb,
ROUND(SUM(IF(cache_hit != true, total_bytes_processed, 0)) * cost_factor,4) as cost_in_dollar,
user_email,
FROM (
(SELECT * FROM `region-us`.INFORMATION_SCHEMA.JOBS_BY_USER)
UNION ALL
(SELECT * FROM `other-project.region-us`.INFORMATION_SCHEMA.JOBS_BY_USER)
)
WHERE
DATE(creation_time) BETWEEN DATE_SUB(CURRENT_DATE(), INTERVAL 30 DAY) and CURRENT_DATE()
GROUP BY
user_email
Explanation
Please consider the caveats I mentioned in my answer here