I recently signed up for Google BigQuery out of curiosity and saw that it lets you play with sample data sets without enabling billing. I followed the setup steps, first creating a project named "Test Cloud Project" and then enabling BigQuery in the Services tab of the Google APIs Console.
I have tried running the following:
SELECT repository.url FROM [publicdata:samples.github_nested] LIMIT 1000
and receive the error Error: Not Found: Project p-testcloud-bren
Did I miss a setup step somewhere or do you have to enable billing to actually query the sample datasets?
You don't need billing enabled to run a query on a publicdata:samples table (the first 100GB of data processed per month is free of charge).
If you are making your own API calls, double-check that you have the "project id" correct. You should be able to use either the project number (a unique integer value) or the project id (an alphanumeric value you can choose) for your requests to the BigQuery API.
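For example, with the Python client library you can pass the project id explicitly when constructing the client. This is only a minimal sketch: it assumes the google-cloud-bigquery package and application-default credentials (which postdate the original question), and reuses the project id and query from the question above.

# Minimal sketch: run the sample query with an explicit project id.
# Assumes `pip install google-cloud-bigquery` and application-default credentials;
# "p-testcloud-bren" is the project id from the question, not the display name.
from google.cloud import bigquery

client = bigquery.Client(project="p-testcloud-bren")

# The sample query uses legacy SQL table syntax, so enable legacy SQL explicitly.
job_config = bigquery.QueryJobConfig(use_legacy_sql=True)
query = "SELECT repository.url FROM [publicdata:samples.github_nested] LIMIT 1000"

for row in client.query(query, job_config=job_config).result():
    print(row[0])

If the project id is wrong or the credentials belong to a different project, this is exactly where you would see a "Not Found: Project ..." error.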
Good day everyone,
using Google's Apigee Integration service, we are trying to retrieve all the rows in a BigQuery table that have a certain value in a field.
The operation itself is quite easy, but problems arise when the result has more than 200 rows.
The problem is that the integration's connection to BigQuery does not return any listEntitiesPageToken value, nor any listEntitiesNextPageToken value,
so I can't figure out how to navigate the result pages.
Has anyone had the same problem? What do you suggest?
The tutorial at https://cloud.google.com/apigee/docs/api-platform/integration/connectors-task#configure-the-connectors-task says: "For example, if you are expecting 1000 records in your result set, you can set the listEntitiesPageSize to 100. So when the Connectors task runs for the first time, it returns the first 100 records, the next 100 records in the second run and so on."
And there is a tip: "Use the listEntitiesPageSize parameter in conjunction with the listEntitiesPageToken parameter to navigate through the pages."
I used the tutorial to understand how to use the For Each loop task, and I understood that I should create a "sub-integration" that is called by a "main integration" for each element present in a list/array.
But what can I do, since these tokens are empty?
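For reference, the token-driven paging the tutorial describes generally follows this pattern. This is only a generic Python sketch; fetch_page and its return values are illustrative placeholders, not the Apigee connector API.

# Generic page-token pagination sketch (illustrative only, not the Apigee connector API):
# each call returns up to page_size rows plus a token identifying the next page;
# an empty or missing token means the last page has been reached.

def fetch_page(page_size, page_token=None):
    """Placeholder for one connector/API call; returns (rows, next_page_token)."""
    raise NotImplementedError  # replace with the real call

def fetch_all(page_size=100):
    rows, token = [], None
    while True:
        page, token = fetch_page(page_size, token)
        rows.extend(page)
        if not token:  # no next-page token -> stop
            break
    return rows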
I am running a Google Cloud Platform project that utilizes BigQuery in Sandbox mode (no billing enabled). In this project, I query solely public datasets.
The quota page (under IAM & admin) shows 0 MiB, although I have already queried a few hundred GB.
This raises the question of whether or not querying public BigQuery datasets counts towards project quota.
The first 1TB of data you query will be free. After that you will be billed at $5 per TB.
You can monitor your usage in the logs, but I find the billing report easier for this: it gives you an exact usage figure, with Product 'BigQuery' and SKU 'Analysis'. If the data you were querying was not a public dataset, you would also be charged for 'Active Storage'.
Relevant quote 1:
You pay only for the queries that you perform on the data (the first 1 TB per month is free, subject to query pricing details).
And 2:
To get started using a BigQuery public dataset, you must create or select a project. The first terabyte of data processed per month is free, so you can start querying public datasets without enabling billing. If you intend to go beyond the free tier, you must also enable billing.
Source: https://cloud.google.com/bigquery/public-data/
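If you want to check how much of the free tier a query would consume before running it, a dry run reports the bytes that would be processed without actually executing (or billing) anything. A minimal sketch with the google-cloud-bigquery Python client; the public table below is just an example:

# Estimate how much a query would count against the free 1 TB/month without running it.
# Assumes `pip install google-cloud-bigquery` and application-default credentials.
from google.cloud import bigquery

client = bigquery.Client()

job_config = bigquery.QueryJobConfig(dry_run=True, use_query_cache=False)
query = """
    SELECT name, SUM(number) AS total
    FROM `bigquery-public-data.usa_names.usa_1910_2013`
    GROUP BY name
"""

job = client.query(query, job_config=job_config)  # dry run: nothing is billed
print(f"This query would process {job.total_bytes_processed / 1e9:.2f} GB")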
For the Active Collab team watching this tag:
I am working on a project that uses the new Active Collab 5 API, and I am having performance issues trying to run reports.
For example, I try to build reports for a date range, and currently to achieve that I first need to run a call to get all projects.
Followed by a loop with this call:
API::get('/projects/'.$id.'/time-records/filtered-by-date?' . http_build_query(['from' => $from, 'to' => $to]))
However, we have a large number of projects; in addition to the high number of active projects, we also need to include archived projects to get correct billing reports.
Now I work with around 1500 projects in AC.
So I need to make around 1500 API calls, which is a huge performance hit. Is there a way you could build something that works along these lines:
API::get('/timerecords/filter-by-date');
with an optional parameter for project state (all, active, completed)?
Please let me know what you can do or if I have missed something in your documentation that already does this.
Thanks
What you need here is not a request that goes through all projects one by one, but a request that is tailored for cross-project reporting. Active Collab 5 has just the right API endpoint for that: /reports/run.
As an example, you can use this command to query time records and expenses from all active projects that were tracked today:
curl -H "X-Angie-AuthApiToken: YOUR-API-TOKEN" "http://your.activecollab.com/api/v1/reports/run?type=TrackingFilter&project_filter=active&tracked_on_filter=today"
Notice the route (/reports/run) and query arguments:
type - specifies the type of report; in this case, the time and expense tracking report,
project_filter - specifies the project filter. Apart from active, other useful values are completed (for completed projects), selected_1,2,3,4 (selected projects, with a list of project IDs), client_1,2,3,4 (projects for clients with the given IDs), and category_1,2,3,4 (projects in categories with the given IDs),
tracked_on_filter - filters by the date when records were tracked. To target a particular date use selected_date_YYYY-MM-DD, and to target a date range use selected_range_YYYY-MM-DD:YYYY-MM-DD,
tracked_by_filter - filters by who tracked the time. It can have various values, like anybody, logged_user, or selected_1,2,3.
To list only time records, set type_filter to time (or to expenses if you want only expenses to be listed).
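For completeness, the same report call from a script might look roughly like this. This is a sketch using Python's requests library; the account URL, token, and date range are placeholders, and the query parameters are the ones described above.

# Sketch: cross-project time report via /reports/run (URL, token, and dates are placeholders).
import requests

BASE_URL = "https://your.activecollab.com/api/v1"
HEADERS = {"X-Angie-AuthApiToken": "YOUR-API-TOKEN"}

params = {
    "type": "TrackingFilter",            # time & expense tracking report
    "project_filter": "active",          # or completed, selected_1,2,3 ...
    "tracked_on_filter": "selected_range_2015-01-01:2015-01-31",  # date range
    "type_filter": "time",               # only time records; omit to get expenses too
}

response = requests.get(f"{BASE_URL}/reports/run", headers=HEADERS, params=params)
response.raise_for_status()
print(response.json())

With this, one request replaces the ~1500 per-project calls from the question.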
Hello guys, I wanted to ask a few things because I want to upgrade my login security. First of all, this is how my login security looks at the moment:
An SQL query compares the user input (password / name / ID) to my database, and if it is correct the user gets two values and is redirected to the main restricted page. One of the values is a random value from a function that stores a limited number of such values (each time it picks a random one and returns it to the user upon successful login), and the other value is one of the input fields (like company ID, for example).
Both of those are stored in sessions (hopefully it is not easy for a hacker to get at the data stored in those?), and on each of the restricted pages I check two conditions in the page load event:
Session ( "the ID" ) <> ""
isLegit ( session ( "the random code" ) <> "false"
I am still learning about security, and I guess my current method is bad?
And that's where my second question comes into play. I have been reading about Microsoft's Membership and wanted to use some of the features included, but even after reading about how it works I find myself failing to implement it on my site.
I have a pretty long registration form and the site is designed a certain way, and if I try putting in the login controls from Visual Studio I can't get them to look like part of the page.
I read that there is a way to keep my site as it is and to use
FormsAuthentication.RedirectFromLoginPage("test", false);
to force Membership, or something of that sort?
Is such a thing possible without having to use the login controls and storing additional login data (besides what I have in my SQL database)?
P.S. I am using ASP.NET with VB in Visual Studio, with MS SQL Server.
Take a look at these articles:
Introduction to Membership http://msdn.microsoft.com/en-us/library/yh26yfzy(v=vs.100).aspx
Sample Membership Provider Implementation http://msdn.microsoft.com/en-us/library/44w5aswa(v=vs.100).aspx
Introduction to Membership http://msdn.microsoft.com/en-us/library/tw292whz(v=vs.100).aspx
If you are willing to create another table to hold the user information, it should be really straightforward; it works out of the box.
You would need to take out the checks you have in the code, though.
We're using the REST API in the sandbox environment. I've uploaded my investment accounts into the first test username that was provided using FastLink.
When I run 'getItemSummariesWithoutItemData' in TestDrive I get all the investment accounts as expected. When I repeat the same query using Python I only get back 2 items: Dagbank and DagCreditcard.
Similarly, when I retrieve data using 'getItemSummaryForItem1' and a non-DAG itemId in TestDrive, I get data as expected. When I use the same query from Python, I get key={}.
Any idea why this is happening?
This should not happen unless you are using two different user logins. Please cross-check your code.
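In other words, the Python call has to use the exact same test user session that you used in TestDrive and FastLink. A rough sketch of what that check might look like follows; the base URL, endpoint path, and field names are assumptions based on the legacy Yodlee sandbox REST API and may differ in your environment.

# Rough sketch only: base URL, endpoint path, and parameter names are assumptions.
# The key point is that userSessionToken must belong to the SAME test user that
# was linked via FastLink; a different user only sees the default DAG test accounts.
import requests

BASE = "https://rest.developer.yodlee.com/services/srest/restserver/v1.0"  # assumed sandbox base URL

def get_item_summaries(cob_session_token, user_session_token):
    payload = {
        "cobSessionToken": cob_session_token,    # developer (cobrand) session
        "userSessionToken": user_session_token,  # session for the same test user as in TestDrive
    }
    resp = requests.post(f"{BASE}/jsonsdk/DataService/getItemSummariesWithoutItemData", data=payload)
    resp.raise_for_status()
    return resp.json()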