I have some scripts running from a Google Sheet that get data from BigQuery. However, to make them run, I have to manually enable the API every time for a given sheet.
So the question is: how do I enable the API within the code, so that if I share the GSheet or make a copy, I don't have to go to the script editor and enable the API from there?
Thanks
I am a huge fan of this particular use of the Google ecosystem, so I'm happy to help get others up and running using GSheets with BigQuery! Hopefully it is working well for you!
When sharing the sheet with others, there is no need to alter anything in the script editor at all. The scripts should run and query BigQuery without issue; this has been my experience at least. The obvious caveat to this is that the users you share it with must have access to the Google Developer Project that the BigQuery instance is associated with.
However, when copying the sheet, I do not believe it is possible to have it replicate the connection. This is because when the file is copied, it becomes associated with a new Google Developer Project. Thus, you have to go into the script editor, then go to Resources > Developers Console Project and change the project listed to the one in which you have BigQuery enabled.
Hopefully this helps! Sorry I don't have better news for you!
Assume I have a thesis or similar and want to give the audience the possibility of downloading the code and testing it.
Is there a platform for uploading it professionally that will also keep it there permanently (of course, it should not be deleted within a couple of months)?
Thanks for your help...
You can use tools such as GitHub or Bitbucket. These allow you to upload code and even give you version control. Users can download your code directly and use it if they need to.
After using YouTrack for quite a while, my organization is considering a move to JIRA (for many reasons). However, JIRA doesn't seem to include a YouTrack importer/migration tool out of the box (though there seem to be plenty of importers/migrations the other way around).
Has anyone migrated from YouTrack to JIRA who can share their experience?
Edit:
To anyone who might have this problem later, my final solution ended up something like this:
transfer all "basic" data by hand (user accounts, basic project setup etc)
write a small C# program using the atlassian sdk and the youtrack sdk that transfers from one to the other (creating empty placeholder issues if issues was missing due to someone deleting them in youtrack in order to keep numbering).
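For illustration, here is a minimal sketch of that gap-filling loop, in Python rather than C#, with the actual SDK calls abstracted behind callables you would supply yourself (none of the helper names below are real APIs):

```python
def migrate_project(fetch_issue, create_issue, create_placeholder, last_number):
    """Replay YouTrack issues 1..last_number into JIRA in order.

    fetch_issue(n)         -> issue data, or None if n was deleted in YouTrack
    create_issue(issue)    -> creates the real issue in JIRA
    create_placeholder(n)  -> creates an empty dummy issue so JIRA's
                              sequential key numbering stays aligned
    """
    for n in range(1, last_number + 1):
        issue = fetch_issue(n)
        if issue is None:
            # Fill the gap left by a deleted issue, otherwise every
            # subsequent JIRA key would be shifted down by one.
            create_placeholder(n)
        else:
            create_issue(issue)
```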
This approach worked well enough, and I managed to transfer pretty much all data without losing anything very important (though, of course, all timestamps are messed up now, but we saw that as an acceptable loss).
It's important to know that YouTrack handles issues moved from one project to another a bit counter-intuitively (they still show up in their first project even after they're moved away from there, but they have an issue id from their new project - a slight wtf when I ran into that the first time).
Also, while the Atlassian SDK did allow me to "spoof" the creator of an issue (that is, being logged in as user A and creating an issue while telling the system that it's actually user B who is creating it), it does not allow you to do this with comments. So in order to transfer those properly, I had to actually loop through the comments and log in with the corresponding new user to post each comment.
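In case it helps, a rough sketch of that per-user comment loop, shown with the Python `jira` package instead of the C# SDK (the credentials mapping is hypothetical plumbing you would supply yourself):

```python
# The importer can't spoof a comment's author, so post each comment through
# the REST API while logged in as the mapped user.
from jira import JIRA

def migrate_comments(server_url, issue_key, comments, credentials):
    """comments: list of (youtrack_login, comment_text) pairs.
    credentials: {youtrack_login: (jira_username, jira_password)}."""
    for author, text in comments:
        username, password = credentials[author]
        # Open a session as the comment's real author...
        session = JIRA(server=server_url, basic_auth=(username, password))
        # ...so the comment is attributed to them, not to a migration account.
        session.add_comment(issue_key, text)
```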
Also, attachments were a bit annoying to download from YouTrack, so I ended up having to download those "by hand". :/
But all in all, it was relatively pain-free. Some assembly required, some final touch-ups required, but it was all done within a couple of days.
I had the same problem. After a discussion with a JIM (JIRA Importers plugin) developer, I used the YouTrack REST API and a Python script to produce JSON files, then used JIM's JSON import.
With this solution you can import almost all fields from YouTrack - the standard ones, files with descriptions, links between issues and projects, and so on.
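For anyone who wants to try the same route, here is a very rough Python sketch of the idea. The host, endpoint, and response shape below are from memory of the old YouTrack REST API and are only illustrative; the output follows the general structure JIM's JSON import expects (a `projects` list containing `issues`):

```python
# Pull issues from YouTrack's (old) REST API and write a JSON file shaped
# for JIRA's JSON importer (JIM). Adjust paths/fields for your version.
import json
import requests

YOUTRACK = "https://youtrack.example.com"  # hypothetical host
PROJECT = "DEMO"

resp = requests.get(
    f"{YOUTRACK}/rest/issue/byproject/{PROJECT}",
    headers={"Accept": "application/json"},
    params={"max": 1000},
)
resp.raise_for_status()
# Older YouTrack versions wrap the list in an "issue" key; adjust as needed.
issues = resp.json().get("issue", [])

jim = {
    "projects": [{
        "name": "Demo project",
        "key": PROJECT,
        "issues": [{
            "externalId": issue.get("id", ""),
            "summary": issue.get("summary", "(no summary)"),
            "description": issue.get("description", ""),
        } for issue in issues],
    }]
}

with open("jim-import.json", "w", encoding="utf-8") as f:
    json.dump(jim, f, indent=2)
```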
I don't know if I can push it to GitHub; I'd have to ask my boss, since I wrote it during work hours. But of course you can ask me if you want.
The easiest approach is probably to export the data from YouTrack to CSV and use the JIRA CSV importer. You may have to modify some of the data to fit the format the CSV importer expects.
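If you go that route, the modifications are usually just header renames and date reformatting. A small made-up example in Python (the column names and date formats are invented; match them to your actual export and to the format you tell the importer to expect):

```python
# Massage a YouTrack CSV export into something JIRA's CSV importer accepts:
# rename headers and normalize the date format.
import csv
from datetime import datetime

HEADER_MAP = {"Summary": "Summary", "Created": "Date Created", "State": "Status"}

with open("youtrack-export.csv", newline="", encoding="utf-8") as src, \
     open("jira-import.csv", "w", newline="", encoding="utf-8") as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=list(HEADER_MAP.values()))
    writer.writeheader()
    for row in reader:
        out = {HEADER_MAP[k]: row[k] for k in HEADER_MAP}
        # The importer wants one explicit date format (e.g. dd/MMM/yy h:mm a);
        # both format strings here are assumptions about your data.
        created = datetime.strptime(out["Date Created"], "%Y-%m-%d %H:%M:%S")
        out["Date Created"] = created.strftime("%d/%b/%y %I:%M %p")
        writer.writerow(out)
```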
I created a time recording program in VB.NET with SQL Server as the backend. Users can submit their time entries to the database (I used the typed dataset functionality) and run different queries to get overviews of their working time.
My plan was to put the exe in a folder on our network and let the users create shortcuts to it on their desktops. Every user writes into the same table but can only see his own entries, so there is no possibility of two users manipulating the same rows.
During my research I found a warning that "write contentions between the different users" can occur. Is that a risk in my case?
Does anyone have experience with many users running the same exe, where that exe uses datasets, and can advise me on whether this works or what I should do instead?
SQL Server will handle all of your multi-user DB access concerns.
Multiple users accessing the same exe from a network location can work, but it's kind of a hack. Let's say you wanted to update that exe with a few bug fixes: you would have to ensure that all users close the application before you could release the update. To answer your question, though, the application will be isolated to each user running it. You won't have any contention issues when it comes to CRUD operations on the database due to the network deployment.
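To make the isolation point concrete, here is a sketch of the per-user filtering the question describes, written in Python with pyodbc for brevity (the VB.NET equivalent is a parameterized SqlCommand); the connection string and the table and column names are made up:

```python
# Every statement filters on the current user's id, so concurrent users never
# read or update each other's rows and there is no write contention between them.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver;DATABASE=TimeRecording;Trusted_Connection=yes;"
)  # illustrative connection string
cur = conn.cursor()

def add_entry(user_id, started, ended, note):
    cur.execute(
        "INSERT INTO TimeEntries (UserId, StartedAt, EndedAt, Note) "
        "VALUES (?, ?, ?, ?)",
        user_id, started, ended, note,
    )
    conn.commit()

def entries_for(user_id):
    cur.execute(
        "SELECT StartedAt, EndedAt, Note FROM TimeEntries WHERE UserId = ?",
        user_id,
    )
    return cur.fetchall()
```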
You might consider something other than a copy/paste style publishing of your application. Visual Studio has a few simple tools you can use to publish your application to a central location using ClickOnce deployment.
http://msdn.microsoft.com/en-us/library/31kztyey(v=vs.110).aspx
My solution was to add a simple shutdown timer to the form, which alerts users to save their data before the program closes at 4 AM.
If I need to upgrade, I just replace the .exe on the network.
Ugly and dirty, yes... but it has worked like a charm for the past 2 years.
Good luck!
So, we have this one project which uses Cloud Storage and BigQuery as services. All has been well.
Then, I wanted to add Cloud SQL to this project to try it out. It asked for a unique Project ID so I gave it one. (The Project ID is different than the Project Number.)
Ever since then, I've been having a difficult time accessing my BigQuery tables. When I go to the BigQuery web interface, the URL contains the Project ID instead of the original Project Number. It shows the list of datasets, but now shows the Project Number before each dataset name and the datasets are greyed out and inaccessible. If I manually change the URL to contain the Project Number instead of the Project ID, it appears to work although it shows the list of datasets in the left nav twice, one set greyed out and inaccessible and the other set seemingly accessible.
At the same time, some code that I've been successfully using in Apps Script to access BigQuery is now regularly failing with a generic "We're sorry, a server error occurred. Please wait a bit and try again." I'm not sure if this is related to the Project ID/Project Number confusion, or if it's just a red herring.
Since we actively use the Cloud Storage service of this project, I am trying to be cautious with further experimentation with this project. I'm not sure if I should delete the Cloud SQL service in this project to get it back to the way it was, or if this is a known issue with some back-end solution. Please advise.
After setting the project ID, there can be a delay before BigQuery picks up the change. It should happen within 15 minutes or so, but sometimes it takes longer.
If you send the project ID I can make sure it has been updated.
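If you would rather check from code whether the new project ID has propagated, a polling loop like the following can work (a sketch using the google-cloud-bigquery Python client; the project ID is a placeholder and the client authenticates via your default credentials):

```python
# Poll dataset listing under the new project id until it stops failing.
import time
from google.cloud import bigquery
from google.api_core.exceptions import GoogleAPIError

client = bigquery.Client(project="my-new-project-id")  # your new Project ID

for attempt in range(30):  # roughly 15 minutes at 30 s per attempt
    try:
        datasets = list(client.list_datasets())
        print("Visible datasets:", [d.dataset_id for d in datasets])
        break
    except GoogleAPIError as exc:
        print(f"Not propagated yet ({exc}); retrying...")
        time.sleep(30)
```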
From time to time I develop Yahoo open tables to access different resources on the web. Currently I am using a JavaScript editor, and when I want to test whether my open table works, I upload the XML table description to a server and test it with a YQL client application. However, this approach is quite slow, and sometimes I get blocked by Yahoo because of a mistake in my open table description. Therefore I would like to learn about best practices for developing and testing Yahoo open tables locally. What does your setup for open table development look like?
To clarify my question: I am looking for any convenient way (best practice) to develop and test YQL tables, e.g., running part of the JavaScript inside Rhino.
First of all: I agree; I don't see a really convenient way to test YQL data table definitions locally either. Nevertheless, here is how I approach this issue.
Hosting on GitHub
YQL data table definitions are often used in very open scenarios, e.g. when there is an existing API that you want to wrap via YQL. Therefore I normally work on a fork of the YQL community tables and just add my own definitions there. In this case, the .xml files are hosted on GitHub: https://github.com/yql/yql-tables
Another advantage of this approach is that it is easy for me to share my data tables with the community if I feel they might be valuable for others.
Hosting privately
A free GitHub account only comes with public repositories, though, so everybody would be able to see and use your data tables. If that is not acceptable for you, you could either buy a paid GitHub account to get private repositories, or host the data table definitions yourself.
To do that, you can upload them to your own server - as you are already doing - or you should also be able to set up a web server like Apache locally on your machine and then get a dynamic hostname from dyndns.com or similar, so that you can point to these definitions from YQL. I have not tried this because GitHub was working sufficiently well for me, but I am sure it is possible.
Why don't you just put the file you are editing in a public Dropbox folder? This is what I do, and it works pretty well.