Problems with BigQuery and Cloud SQL in same project

So, we have this one project which uses Cloud Storage and BigQuery as services. All has been well.
Then I wanted to add Cloud SQL to this project to try it out. It asked for a unique Project ID, so I gave it one. (The Project ID is different from the Project Number.)
Ever since then, I've been having a difficult time accessing my BigQuery tables. When I go to the BigQuery web interface, the URL contains the Project ID instead of the original Project Number. It shows the list of datasets, but now shows the Project Number before each dataset name, and the datasets are greyed out and inaccessible. If I manually change the URL to contain the Project Number instead of the Project ID, it appears to work, although it shows the list of datasets in the left nav twice: one set greyed out and inaccessible, the other seemingly accessible.
At the same time, some Apps Script code that accesses BigQuery, which had been working reliably, is now regularly failing with a generic "We're sorry, a server error occurred. Please wait a bit and try again." I'm not sure if this is related to the Project ID/Project Number confusion, or if it's just a red herring.
Since we actively use the Cloud Storage service of this project, I am trying to be cautious about further experimentation. I'm not sure if I should delete the Cloud SQL service to get the project back to the way it was, or if this is a known issue with a back-end fix. Please advise.

After you set the project ID, there can be a delay before BigQuery picks up the change. It should happen within 15 minutes or so, but sometimes it takes longer.
If you send the project ID I can make sure it has been updated.

Related

Error when creating scheduled query on Bigquery "Error creating scheduled query: er"

I just started a new project on Google Cloud and set up some BigQuery datasets and tables. I now want to set up some scheduled queries. I have already enabled the BigQuery Data Transfer API. My query is valid (it's just SELECT * FROM table). I can't find anything about this error online.
UPDATE: I've experimented a bit and it seems to be an organization-wide issue. All projects, new and old, within my organization get this same error when trying to schedule a query. I tried a project in a different organization and did not have the issue. What could be causing this error for ALL projects in an organization?
UPDATE 2:
When querying a table that is not empty, the error changes to "Error creating scheduled query: Yn" instead of "Error creating scheduled query: er" (which appears when the scheduled query would have queried an empty table).
I faced the same issue as you, and basically I just needed to run the query first before creating the scheduled query... and that did the trick.
From the BQ FAQs:
"Scheduled queries use features of BigQuery Data Transfer Service. Verify that you have completed all actions required in Enabling BigQuery Data Transfer Service."
Basically, what this means is that you need to enable the Data Transfer API in your project AND give the user who creates the scheduled query a BQ admin role, so that they have the right permissions to access that transfer service.
If done right, you should get a popup when creating the scheduled query, confirming that the Data Transfer Service has access to your user account (if you block popups, you might not see this message and get stuck).
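If you prefer to set the schedule up programmatically once the API is enabled and the role is granted, here is a minimal sketch using the .NET Data Transfer client (Google.Cloud.BigQuery.DataTransfer.V1); the project, dataset, and table names are placeholders:

using Google.Cloud.BigQuery.DataTransfer.V1;
using Google.Protobuf.WellKnownTypes;

class ScheduleQuery
{
    static void Main()
    {
        var client = DataTransferServiceClient.Create();
        var request = new CreateTransferConfigRequest
        {
            // Placeholder project ID -- use your own.
            Parent = "projects/my-project-id",
            TransferConfig = new TransferConfig
            {
                DisplayName = "daily-copy",
                DataSourceId = "scheduled_query",   // marks this config as a scheduled query
                DestinationDatasetId = "my_dataset",
                Schedule = "every 24 hours",
                Params = new Struct
                {
                    Fields =
                    {
                        ["query"] = Value.ForString(
                            "SELECT * FROM `my-project-id.my_dataset.my_table`"),
                        ["destination_table_name_template"] = Value.ForString("my_table_copy"),
                        ["write_disposition"] = Value.ForString("WRITE_TRUNCATE"),
                    }
                }
            }
        };
        var created = client.CreateTransferConfig(request);
        System.Console.WriteLine("Created: " + created.Name);
    }
}

If the role is missing, the API call should return an explicit permission error rather than the web UI's generic one, which can make this a useful way to debug the "er" message.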
If this error only occurs in your organisation, I believe it might be caused by an organisation policy on Google Cloud. I would encourage you to double-check whether there is any org policy causing this error. If that's not the case, open a support ticket with GCP.
What worked for me was signing in through Incognito Mode with just my account and attempting to save the scheduled query. I have multiple Google Accounts signed in at one time, and for whatever reason, BigQuery throws this generic error after authorization is successful and BigQuery is granted the access it requested.
You need to make sure that you are creating the query under the targeted project, not under any other project, because otherwise it won't appear.
You also need to enable the API, as one of the answers above explains.
This eventually worked for me when I ran it in an incognito window.

BigQuery connecting from GSheet without enabling API every time

I have some scripts running from GSheet getting data from BigQuery. However, in order to make the files run, I need to manually enable the API every time for a given sheet.
So the question is: How to enable API within the code, so that if I share the GSheet or make a copy I don't have to go to the script editor and enable the API from there?
Thanks
I am a huge fan of this particular use of the Google ecosystem, so I'm happy to help get others up and running using GSheets with BigQuery! Hopefully it is working well for you!
When sharing the sheet with others, there is no need to alter anything in the script editor at all. The scripts should run and query BigQuery without issue; this has been my experience at least. The obvious caveat to this is that the users you share it with must have access to the Google Developer Project that the BigQuery instance is associated with.
However, when copying the sheet, I do not believe it is possible to have it replicate the connection. This is because when the file is copied, it becomes associated with a new Google Developer Project. Thus, you have to go into the script editor, then go to Resources > Developers Console Project and change the project listed to the one in which you have BigQuery enabled.
Hopefully this helps! Sorry I don't have better news for you!

Many users using one program (.exe) that includes datasets

I created a time recording program in VB.NET with SQL Server as the backend. Users can send their time entries into the database (I used the typed datasets functionality) and run different queries to get overviews of their working time.
My plan was to put that exe in a folder on our network and let the users make a link on their desktops. Every user writes into the same table but can only see his own entries, so there is no possibility that two users manipulate the same records.
During my research I found a warning that "write contentions between the different users" can occur. Is that so in my case?
Does anyone have experience with many users running the same exe that uses datasets, and could you advise me whether this works or what I should do instead?
SQL Server will handle all of your multi-user DB access concerns.
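To illustrate, here is the kind of parameterized insert each running copy of the exe would issue, shown in C# for brevity (the table and column names are made up for the sketch); whether you write it by hand or let a typed dataset generate it, SQL Server coordinates the concurrent writes:

using System;
using System.Data.SqlClient;

class TimeEntrySaver
{
    // Each running copy of the .exe inserts its own rows; SQL Server
    // serializes concurrent writes, so no client-side locking is needed.
    public static void SaveEntry(string connectionString, DateTime start, DateTime end)
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(
            "INSERT INTO TimeEntries (UserName, StartTime, EndTime) " +
            "VALUES (@user, @start, @end)", conn))
        {
            cmd.Parameters.AddWithValue("@user", Environment.UserName);
            cmd.Parameters.AddWithValue("@start", start);
            cmd.Parameters.AddWithValue("@end", end);
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}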
Multiple users accessing the same exe from a network location can work, but it's kind of a hack. Let's say you wanted to update that exe with a few bug fixes: you would have to ensure that all users close the application before you could release the update. To answer your question, though, the application will be isolated to each user running it. You won't have any contention issues when it comes to CRUD operations on the database due to the network deployment.
You might consider something other than a copy/paste style publishing of your application. Visual Studio has a few simple tools you can use to publish your application to a central location using ClickOnce deployment.
http://msdn.microsoft.com/en-us/library/31kztyey(v=vs.110).aspx
My solution was to add a simple shutdown timer in the form, which alerts users to save their data before the program closes at 4 AM.
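A minimal sketch of that idea, in C# for brevity (the original program is VB.NET; the message text and timing are illustrative):

using System;
using System.Windows.Forms;

public class MainForm : Form
{
    // Check the clock once a minute and close the app at 4 AM so the
    // shared .exe on the network can be replaced.
    private readonly Timer shutdownTimer = new Timer { Interval = 60000 };

    public MainForm()
    {
        shutdownTimer.Tick += (s, e) =>
        {
            if (DateTime.Now.Hour == 4)
            {
                shutdownTimer.Stop();
                MessageBox.Show("Please save your entries; the program will now close for maintenance.");
                Application.Exit();
            }
        };
        shutdownTimer.Start();
    }
}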
If I need to upgrade, I just replace the .exe on the network.
Ugly and dirty, yes... but it has worked like a charm for the past 2 years.
Good luck!

Migrations don't run on hosting

I'm using MigratorDotNet to manage Rails-style migrations for my web app. I have a workflow where, if I delete all the tables in the database, I can access an installation view that will run MigratorDotNet and create all the necessary tables.
This works locally. For some reason, when I upload my code to my Arvixe hosting, the migrations just never run. I get this odd error:
There is already an object named 'SchemaInfo' in the database.
This is odd because, prior to running migrations, I manually deleted all the tables in the database (to make sure nothing was left over from a previous install).
My code essentially boils down to:
new Migrator.Migrator("SqlServer", connectionString.ToString(), migrationsAssembly).MigrateToLastVersion();
I've already verified by logging that the connection string is correct (production/hosting settings), and the assembly is correctly loaded (name and version).
Works locally, but not on Arvixe. How do I troubleshoot this?
This is a dark day.
It turns out (oddly) that the root cause was that my hosting company used a schema other than dbo for my database. Because of this, the error message I saw (SchemaInfo already exists) was referring to their table.
My solution, unfortunately, was to rip out MigratorDotNet and go with FluentMigrator instead. Not only did this solve the problem, but it also gave me a more intelligible error message (one referring to the schema names).
While it doesn't seem possible to auto-set the schema, and while I need to switch the schema between my dev and production machines, it's still a solvable problem (and a better API, IMO). I googled, but did not find any way to change the default schema in MigratorDotNet.
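For reference, a FluentMigrator migration with an explicit schema looks roughly like this (the schema and table names are illustrative):

using FluentMigrator;

[Migration(1)]
public class CreateUsersTable : Migration
{
    public override void Up()
    {
        // Naming the schema explicitly is what surfaces the non-dbo
        // schema that the shared host assigned.
        Create.Table("Users").InSchema("hosting_schema")
            .WithColumn("Id").AsInt32().PrimaryKey().Identity()
            .WithColumn("Name").AsString(100).NotNullable();
    }

    public override void Down()
    {
        Delete.Table("Users").InSchema("hosting_schema");
    }
}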
I'm sorry for the issues that you were having. On shared hosting, unfortunately the only way that we may be able to change the schema is manually. If you are still looking for a solution that requires our assistance, please forward your ticket ID to qa .at. arvixe.com as well as arvand .at. arvixe.com and we can look into the best way to resolve this.

Query RavenDB without using the studio interface

I am trying to view my sagas in the RavenDB management studio, and even when loading the initial page, all I see is a "Querying documents..." box with a continuously moving progress bar. I cannot seem to get past it; going from page to page, it does not go away. Is there a way to pull all of the saga data into a list so I can look at it? It appears the issue is that saga documents are continuously being added.
I've looked into the HTTP API and the LINQ adapters, but I guess I am looking for something that already exists that can easily peer into the server, much like the Silverlight studio, except not such a pain. I more or less just want to pull a snapshot of all the documents into some kind of readable list.
I find LINQPad 4 convenient; the RavenDB driver for LINQPad can be found here:
https://github.com/ronnieoverby/RavenDB-Linqpad-Driver
For the command line, use cURL with dynamic indexes as explained here:
http://ravendb.net/docs/http-api/indexes/dynamic-indexes
In the browser, go to http://localhost:8080/docs
You might need to install JsonView, but that should give you what you want.
If anyone wants to know how to browse the data through a REST call, the URL pattern is:
localhost:8080/databases/{database-name}/docs/{dataset-name}/id
For example, localhost:8080/databases/testDB/docs/Sites/1 will give the JSON data for the "Sites" document, and localhost:8080/databases/testDB/docs/ will give the JSON data for all the documents in testDB.
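If you'd rather pull that snapshot from code than from the browser, here is a minimal sketch using the .NET client of that era (the database name and page size are assumptions):

using System;
using Raven.Client.Document;

class DumpDocuments
{
    static void Main()
    {
        var store = new DocumentStore
        {
            Url = "http://localhost:8080",
            DefaultDatabase = "testDB"   // assumed database name
        };
        store.Initialize();

        // Page through the raw documents, 128 at a time, and print them.
        var start = 0;
        while (true)
        {
            var docs = store.DatabaseCommands.GetDocuments(start, 128);
            if (docs.Length == 0) break;
            foreach (var doc in docs)
                Console.WriteLine(doc.ToJson());
            start += docs.Length;
        }
    }
}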