As I understand it, BigQuery's caching mechanism is on a per-user basis. But we'd like to be able to share the cache on something like a project/dataset/table level.
For example, John and Mary both work on the same Google project XYZ. They love using BigQuery, and both query the table Bar in dataset Foo (i.e. XYZ:Foo.Bar) to get beautiful insights from their data.
John logs in and writes a query against XYZ:Foo.Bar which takes 10 seconds to execute. A few minutes later Mary logs in and composes the exact same query on XYZ:Foo.Bar. It also takes 10 seconds, but she does not get a cache hit.
Is there anything that can be done to share the query cache across users i.e. on a project/dataset/table level? Or have I missed something obvious?
BigQuery doesn't share its cache across users for privacy reasons - but it could be an interesting feature request to propose: https://code.google.com/p/google-bigquery/.
An alternative you could implement today is a proxy that connects to BigQuery on behalf of all your users with a single service account: since every query then runs as the same identity, all users share the same cache. For example, with http://demo.redash.io you get both BigQuery's native cache and an application-level cache. The same goes for Datalab - as it uses a service account by default, results are cached for all users in the same project.
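To illustrate, a minimal sketch of such a proxy (using Flask and the google-cloud-bigquery client; the endpoint, key file, and payload shape are assumptions, not a reference implementation):

```python
# Minimal proxy sketch: every user's query runs under one service account,
# so all callers share the same BigQuery result cache.
# Assumes: pip install flask google-cloud-bigquery; "key.json" is a placeholder.
from flask import Flask, request, jsonify
from google.cloud import bigquery

app = Flask(__name__)
# One shared identity for all users -> one shared query cache.
client = bigquery.Client.from_service_account_json("key.json")

@app.route("/query", methods=["POST"])
def run_query():
    sql = request.get_json()["sql"]
    # use_query_cache is True by default; set explicitly for clarity.
    job_config = bigquery.QueryJobConfig(use_query_cache=True)
    job = client.query(sql, job_config=job_config)
    rows = [dict(row) for row in job.result()]
    # job.cache_hit reports whether the shared cache answered the query.
    return jsonify({"cache_hit": job.cache_hit, "rows": rows})
```

If John's query populates the cache, Mary's identical query through the proxy comes back with cache_hit set to true (results stay cached for about 24 hours, as long as the table hasn't changed).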
I would like to know how to address this scenario in Azure Log Analytics: I need to collect kube-audit logs from different clusters every week and retain these logs for approximately 400 days. Storing them in Log Analytics will cost me more, and it's not an optimized architecture, because I will not need them that often. So I would like to know from the experts what the best way is to design an architecture where the kube-audit logs are retained for 400 days and available for querying when required, without incurring too much cost.
PS: I have also heard within my team that querying 400 days of logs in KQL always times out.
Log Analytics offerings:
Log Analytics now provides the capability to manage several service tiers at table scope. You can set your data to the Archive tier, which has no direct query capability but comes at a much lower cost; the offering spans up to 7 years.
When needed, you can elevate a subset of your archived data back into the Analytics tier, giving you the capability to query it. This elevation operation is called a "Search job".
Another option is to elevate an entire period of time back into the Analytics tier; this is called "Restore logs".
Table service tiers -
https://learn.microsoft.com/en-us/azure/azure-monitor/logs/data-retention-archive?tabs=api-1%2Capi-2
Search job offering -
https://learn.microsoft.com/en-us/azure/azure-monitor/logs/search-jobs?tabs=api-1%2Capi-2%2Capi-3
Restore logs -
https://learn.microsoft.com/en-us/azure/azure-monitor/logs/restore?tabs=api-1%2Capi-2
All of these are in public preview.
Both offerings - Search jobs and Restore logs - give you the capability to engage your data on demand; I can't comment on the actual cost.
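As a rough sketch of what kicking off a search job looks like (the subscription, workspace, table name, KQL, and api-version below are placeholder assumptions - see the search-jobs link above for the authoritative API):

```python
# Hedged sketch: create a Log Analytics search job over archived kube-audit
# logs via the ARM REST API. All resource names and the api-version are
# placeholders; search-job destination tables must end in _SRCH.
import requests

token = "<AAD bearer token for https://management.azure.com>"  # e.g. via azure-identity
url = (
    "https://management.azure.com/subscriptions/<sub-id>"
    "/resourceGroups/<rg>/providers/Microsoft.OperationalInsights"
    "/workspaces/<workspace>/tables/KubeAudit_SRCH"
    "?api-version=2021-12-01-preview"  # assumed preview api-version
)
body = {
    "properties": {
        "searchResults": {
            "query": "AKSAuditAdmin | where TimeGenerated > ago(400d)",  # illustrative KQL
            "startSearchTime": "2022-01-01T00:00:00Z",
            "endSearchTime": "2023-02-05T00:00:00Z",
        }
    }
}
resp = requests.put(url, json=body, headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()
# Results land in the KubeAudit_SRCH table, queryable from the Analytics tier.
```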
Azure Data Explorer solution:
Another option is to keep your data in Azure Storage (as an example) and use Azure Data Explorer's external tables. An external table is a logical view on top of your data; the data itself is kept outside the ADX cluster. You can then query the data through ADX, though you should expect some degradation in query performance.
ADX external table offering -
https://learn.microsoft.com/en-us/azure/data-explorer/kusto/query/schema-entities/externaltables
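For the ADX route, a hedged sketch of defining and querying such an external table over JSON blobs (the cluster URL, database, storage URI, and schema are all assumptions; see the external-table link above for the exact syntax):

```python
# Hedged sketch: define an ADX external table over kube-audit JSON blobs kept
# in Azure Storage, then query it. All names, URIs, and the schema are placeholders.
from azure.kusto.data import KustoClient, KustoConnectionStringBuilder

kcsb = KustoConnectionStringBuilder.with_aad_device_authentication(
    "https://mycluster.westeurope.kusto.windows.net"  # placeholder cluster
)
client = KustoClient(kcsb)

create_cmd = """
.create external table KubeAuditExternal (Timestamp: datetime, Log: dynamic)
kind=storage
dataformat=json
(
   h@'https://mystorage.blob.core.windows.net/kube-audit;<storage-account-key>'
)
"""
client.execute_mgmt("MyDatabase", create_cmd)  # management commands use execute_mgmt

# Queries go through the external_table() function - expect reduced performance:
results = client.execute(
    "MyDatabase", "external_table('KubeAuditExternal') | take 10"
)
```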
I'm looking for a cloud service that can run advanced statistical calculations on a large number of votes submitted by users, in "real time".
In our app, users can submit different kinds of votes, like picking a favorite, rating 1-5, saying yes/no, etc., on various topics.
We also want to show "live" statistics to the user, showing the popularity of a person etc. These will be generated by a rather complex SQL query in which we calculate the average number of times a person was picked as favorite, divided by the total number of votes and the number of games the person has participated in, etc. The scores for the latest X games should also count more than the overall score for all games. This is just one example; there are several other SQL queries of similar complexity.
All our presentable data (including calculated statistics) is served from Firestore documents, and the votes will be saved as Firestore documents.
Ideally, the Firebase-backend (functions, firestore etc) should not need to know about the query logic.
What I wish for is a pay as you go cloud service that does the following:
I define some schemas and set up the queries we need for the statistics we have (15-20 different SQL queries), like setting up views in MySQL.
On every vote, we push the vote data to this service, which will store it in a row.
The service should then, based on its knowledge of the defined queries and the content of the pushed vote data, determine which statistics are affected by the newly added row, and recalculate them. A specific vote type can affect one or more statistics.
Every time a statistic is recalculated, the result should be automatically pushed back to our Firebase backend (for instance by calling an HTTPS endpoint that hits a cloud function) - so we can update the relevant Firestore documents.
The service should be able to throttle the calculations, like only regenerating new statistics every 1 minute despite having several votes per second on the same topic.
Is there any product like this in the market? Or can it be built by combining available cloud services? And what is the official term for such a product, if I should search for it myself?
I know that I can probably build a solution like this myself, and run it on a cloud hosted database server, which can scale as our need grows - but I believe that I'm not the first developer with a need of this, so I hope that someone has solved it before me :)
You can leverage existing cloud services available on Google Cloud Platform:
Google BigQuery, Google Cloud Firestore, Google App Engine (CRON Jobs), Google Cloud Tasks
The services can be used to solve the problems mentioned above:
1) Google BigQuery: here you can define the schema for the data on which you're going to run the SQL queries. BigQuery supports both standard and legacy SQL.
2) Every vote can be pushed to the defined BigQuery tables using its streaming-insert service.
3) Every vote pushed can trigger the recalculation service, which computes the statistics by executing the defined SQL queries; the query results can then be stored as documents in collections in Google Cloud Firestore.
4) Google Cloud Firestore: here you can store the live statistics for the user. It's a real-time database, so you can configure listeners on the statistics documents and surface changes as soon as the statistics are recalculated.
5) In the same service that inserts each vote, create a record with a "syncId" in another table. The idea is to group the votes cast in a particular interval under their corresponding syncId; the syncId can be suffixed with a timestamp. You can pick a time interval that fits your requirements and use the CRON jobs service to invoke the recalculation service once per interval, which gives you the throttling you asked for. Once the recalculation for a particular syncId is complete, mark that syncId's record as completed. A rough sketch of steps 2-4 follows the list.
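Here is that sketch of steps 2-4 (the table id, collection name, and the SQL are illustrative assumptions, not a prescribed schema):

```python
# Hedged sketch of steps 2-4: stream votes into BigQuery, periodically rerun
# a predefined SQL query, and push the result to Firestore.
# Table/collection names and the query itself are placeholders.
from google.cloud import bigquery, firestore

bq = bigquery.Client()
fs = firestore.Client()

VOTES_TABLE = "myproject.votes_dataset.votes"  # placeholder table id

def record_vote(vote: dict) -> None:
    """Step 2: push a single vote row via the streaming-insert API."""
    errors = bq.insert_rows_json(VOTES_TABLE, [vote])
    if errors:
        raise RuntimeError(f"streaming insert failed: {errors}")

def recalculate_favorite_stats(person_id: str) -> None:
    """Steps 3-4: rerun one of the predefined queries and write the result
    to Firestore, where configured listeners pick it up in real time."""
    query = """
        SELECT COUNTIF(vote_type = 'favorite') / COUNT(*) AS favorite_ratio
        FROM `myproject.votes_dataset.votes`
        WHERE person_id = @person_id
    """
    job_config = bigquery.QueryJobConfig(
        query_parameters=[
            bigquery.ScalarQueryParameter("person_id", "STRING", person_id)
        ]
    )
    row = next(iter(bq.query(query, job_config=job_config).result()))
    fs.collection("statistics").document(person_id).set(
        {"favorite_ratio": row.favorite_ratio}, merge=True
    )
```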
We are leveraging the above technologies to build a web application on Google Cloud Platform. The inputs are recorded in Google Cloud Firestore and then stream-inserted into Google BigQuery. The data in BigQuery is queried 30 seconds after each update using SQL queries, and the query results are stored back in Google Cloud Firestore to serve dashboards, which update automatically via listeners configured on the collection that holds the dashboard information.
I am running some ETL on my Azure SQL DW at DW500, so I have 20 concurrency slots available.
Some of my queries would require resource class xlargerc, some largerc, etc., so the expected load can vary from query to query.
Is there any option to control the assigned resource class directly in the query, e.g. using OPTION or any other hints?
The only workaround I could find so far is to create separate users with different resource classes assigned, which is not really feasible.
thanks in advance,
-gerhard
There is currently no option to control this at query level. You have to be logged in as the appropriate user with the appropriate resource class (smallrc, mediumrc, largerc, and xlargerc) assigned to them.
DWU500 is pretty low, with a maximum of 20 concurrent queries and only 20 concurrency slots. Remember that an xlargerc user would take 16 of those slots, as per here, so you could only have 1 additional mediumrc user or 4 smallrc users running at the same time; i.e. you could not have one largerc and one xlargerc user running at the same time. Those queries would queue.
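For completeness, the per-user workaround is wired up roughly like this (user names, connection details, and the SQL statements are placeholders); each resource class is simply a database role you assign with sp_addrolemember, and you connect as the user whose class matches the query's expected load:

```python
# Hedged sketch of the per-user workaround: one user per resource class,
# and each ETL statement runs under the user that fits its load.
# Server, database, user names, and the SQL statements are placeholders.
import pyodbc

# One-time setup (run as an admin):
#   CREATE USER etl_xlarge FOR LOGIN etl_xlarge;
#   EXEC sp_addrolemember 'xlargerc', 'etl_xlarge';  -- 16 slots at DW500
#   CREATE USER etl_small FOR LOGIN etl_small;
#   EXEC sp_addrolemember 'smallrc', 'etl_small';    -- 1 slot

def run_as(user: str, password: str, sql: str) -> None:
    """Run one statement under the user whose resource class fits it."""
    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=myserver.database.windows.net;DATABASE=mydw;"
        f"UID={user};PWD={password}",
        autocommit=True,
    )
    conn.execute(sql)
    conn.close()

# Heavy CTAS under xlargerc, light cleanup under smallrc:
run_as("etl_xlarge", "***",
       "CREATE TABLE dbo.big_t WITH (DISTRIBUTION = ROUND_ROBIN) "
       "AS SELECT * FROM stg.t")
run_as("etl_small", "***", "DELETE FROM stg.t WHERE processed = 1")
```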
Can you tell us a bit more about your scenario? For example, why switch users during ETL? What ETL tool are you using, eg SSIS, Azure Data Factory etc
If you think this is a worthwhile option, consider making a feedback request.
I'm developing an app that tracks a mobile device instantly (live)... I need some advice. The application must send the location to a web service, which in turn records the received data in a database.
What would be, in your opinion, the best way to store the location values?
I'm new to big data, and I'm afraid simple SQL requests won't be able to do the job properly... I imagine that if there are lots of users and each user sends a request every second, I'll have issues with the database...
Any advice? Thank you very much.
I think you could have a look at the geospatial queries in Mongo, if you choose to go ahead with MongoDB.
Refer here
And here
The design of the database would depend on the nature of the queries (essentially the reads and writes).
Worth having a look into.
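For a quick feel of what that looks like (collection and field names are made up): you store each ping as a GeoJSON point, index it with 2dsphere, and query with $near:

```python
# Hedged sketch: store location pings as GeoJSON points in MongoDB and run a
# geospatial query. Collection and field names are illustrative.
from pymongo import MongoClient, GEOSPHERE

client = MongoClient("mongodb://localhost:27017")
locations = client.tracker.locations

# A 2dsphere index enables $near / $geoWithin queries on GeoJSON fields.
locations.create_index([("loc", GEOSPHERE)])

# One ping per second per device, stored as a GeoJSON Point (lon, lat order).
locations.insert_one({
    "device_id": "device-42",
    "ts": 1650000000,
    "loc": {"type": "Point", "coordinates": [2.3522, 48.8566]},
})

# All pings within 500 m of a given point:
nearby = locations.find({
    "loc": {
        "$near": {
            "$geometry": {"type": "Point", "coordinates": [2.3522, 48.8566]},
            "$maxDistance": 500,  # metres
        }
    }
})
```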
Working at Cintric, we landed on using Elasticsearch. We process billions of location points in real time and provide advanced analytics to our users.
We started with MongoDB and ran into a lot of trouble, eventually leading to a painful migration.
Our current stack has mobile devices dump location updates into AWS Kinesis, which are then processed by AWS Lambda handlers and indexed into Elasticsearch. We're able to serve, process and store 300 million requests/month for only a few hundred dollars/month. Analytics for our dashboard add additional cost, but for your needs I would highly recommend checking out your options on AWS.
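To make the pipeline concrete, the Kinesis-to-Elasticsearch leg looks roughly like the sketch below (the endpoint, index name, and payload fields are placeholders, not our production code):

```python
# Hedged sketch of the Kinesis -> Lambda -> Elasticsearch leg of the pipeline.
# Endpoint, index name, and payload fields are placeholders; Kinesis delivers
# record payloads base64-encoded.
import base64
import json
from elasticsearch import Elasticsearch

es = Elasticsearch("https://my-es-endpoint:9200")  # placeholder endpoint

def handler(event, context):
    """AWS Lambda entry point for a Kinesis event source."""
    for record in event["Records"]:
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        # Store lat/lon in a geo_point-compatible shape for later geo queries.
        doc = {
            "device_id": payload["device_id"],
            "ts": payload["ts"],
            "location": {"lat": payload["lat"], "lon": payload["lon"]},
        }
        es.index(index="locations", document=doc)
```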
I have a big table containing domain data in Google BigQuery, and I would like to create a web app similar to http://whois.domaintools.com/browse/a/:
a page with a list of sorted results I can dig into.
Is this possible without making a query every time the page is opened or refreshed, which is the most obvious way?
Thanks in advance!
Querying BigQuery directly introduces lag, which will affect frontend performance; for some users and queries it will be several seconds. It's therefore not recommended for a live website - the most suitable way is to run the queries asynchronously in the background.
You need to build your website so that it reads the data from a cache or a local database.
You then need a background process (message queue or cron) that periodically runs the BigQuery job, processes the results, and writes them to your local database. You can choose to run it only every hour or so.
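A minimal version of that background refresh, assuming a hypothetical domains table in BigQuery and SQLite as the local cache, might look like this:

```python
# Hedged sketch: a cron-style job that runs the BigQuery query once and
# caches the results locally; the website reads only from the local database.
# Dataset/table names and the SQL are placeholder assumptions.
import sqlite3
from google.cloud import bigquery

def refresh_cache() -> None:
    bq = bigquery.Client()
    rows = bq.query(
        "SELECT domain, registrar FROM `myproject.whois.domains` "
        "ORDER BY domain LIMIT 10000"
    ).result()

    db = sqlite3.connect("cache.db")
    db.execute("CREATE TABLE IF NOT EXISTS domains (domain TEXT, registrar TEXT)")
    db.execute("DELETE FROM domains")  # full refresh on each run
    db.executemany(
        "INSERT INTO domains VALUES (?, ?)",
        ((r.domain, r.registrar) for r in rows),
    )
    db.commit()
    db.close()

if __name__ == "__main__":
    refresh_cache()  # schedule via cron, e.g. hourly
```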
See what you can do with BigQuery
https://github.com/everythingme/redash frontend available at http://demo.redash.io/
http://bigqueri.es/