Google BigQuery with Google Drive source error - google-bigquery

I have a problem creating a BigQuery table from a CSV file stored on Google Drive. The old tables are still working fine. If I use the same file but switch to the Upload option (instead of Google Drive), it works without a problem.
This problem started 5-7 days ago.

This issue has been raised in the public issue tracker. We cannot provide an ETA at the moment, but you can follow the progress in the issue tracker, and you can ‘STAR’ the issue to receive automatic updates and give it traction by referring to this link.
As a workaround, you can disable the Editor tabs feature and then try creating the table. This works fine.
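Until the fix lands, another way around the UI is to create the Drive-backed table through the client library, which may sidestep the problem since only the console flow appears affected. Below is a minimal sketch using the google-cloud-bigquery Python library; the Drive file ID, dataset, and table names are placeholders, and your credentials must carry the Drive scope:

    # pip install google-cloud-bigquery
    import google.auth
    from google.cloud import bigquery

    # Drive-backed tables need credentials that include the Drive scope.
    creds, project = google.auth.default(scopes=[
        "https://www.googleapis.com/auth/bigquery",
        "https://www.googleapis.com/auth/drive",
    ])
    client = bigquery.Client(credentials=creds, project=project)

    external_config = bigquery.ExternalConfig("CSV")
    external_config.source_uris = [
        "https://drive.google.com/open?id=FILE_ID"  # placeholder file ID
    ]
    external_config.autodetect = True  # infer the schema from the CSV

    table = bigquery.Table(f"{project}.my_dataset.my_table")  # placeholders
    table.external_data_configuration = external_config
    client.create_table(table)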

Related

Problem when trying to read Excel after implementing the Office 365 "Confidentiality Label"

I have an ETL routine in Pentaho and I'm migrating it to Apache Hop.
But I've come across a situation where the Hop step/plugin "Microsoft Excel Input" cannot read the data until I open the Excel file and confirm the "Add Confidentiality Label" prompt.
In Pentaho PDI this problem does not occur. Does anyone have any tips?
After adding a confidentiality label such as "public", then saving and closing the file, the process works perfectly.
Note: This only happens with some files.
This sounds like a problem that won't have a clear and direct answer and will require some changes in the code.
The code for Apache Hop is managed on GitHub.
You can create an issue there and one of the developers will help you get this sorted out. When creating a ticket, please be as specific as you can and attach a sample file; that will improve the chances of getting a fix on short notice.
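One hedged guess about the "only some files" note: sensitivity labels that enforce encryption wrap the workbook in an encrypted OLE container, which a plain Excel reader cannot open, while labels without encryption leave the file readable. If you want to check which input files are affected before running the pipeline, a small Python sketch using the msoffcrypto-tool library (an assumption on my part; Hop itself won't use this) could look like:

    # pip install msoffcrypto-tool
    import msoffcrypto

    def is_encrypted(path):
        """True if the Office file is wrapped in an encrypted container,
        e.g. by a sensitivity label that enforces encryption."""
        with open(path, "rb") as f:
            try:
                return msoffcrypto.OfficeFile(f).is_encrypted()
            except Exception:
                # Plain .xlsx files are ZIP archives, not OLE containers,
                # so msoffcrypto rejects them: treat that as "not encrypted".
                return False

    for name in ["clean.xlsx", "labelled.xlsx"]:  # placeholder file names
        print(name, is_encrypted(name))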

Google spreadsheet will not print or download

I have tried Google help and support and they could not solve the issue I am having.
One particular spreadsheet at https://docs.google.com/spreadsheets/d/11JxbhDbTiXP106rx7p9uB0qHImZG5TeIH0JReGUeWRo/edit#gid=40 will not print.
They suggested posting here.
I don't have the problem with other spreadsheets - only this one.
The spreadsheet will not download to my desktop either, and I cannot download a PDF file from it - yet I can from my other spreadsheets and Google Docs.
Since other spreadsheets print fine, I assume the issue lies with Google Sheets rather than my setup. I have tried different browsers and different machines - a Chromebook and my phone - and the problem persists...
I have also tried duplicating the spreadsheet, but the copy shows the same issues: no printing. Trying to print just displays 'Sent to printer' and never completes, and I have to use the Windows Task Manager to close the window.
I do have some scripts in the document for sending PDF files to an email address, but these fail with the error 'Exception: Service Spreadsheets failed while accessing document with ID #'. Even so, the sheet should still be able to print. I haven't edited the sheets for maybe two years or more, except to remove some unused rows today after reading reports of large sheets displaying this error when running a script...
I should add that when I contacted Google Help, we went through all of the usual suspects - clearing the cache, trying incognito mode, etc. - all to no avail.
Can anyone advise, please?
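Not an answer to the root cause, but as a way to rescue a PDF from the stuck file, exporting through the Drive API takes a different path than the Sheets print/download UI and may still work. A minimal sketch with the google-api-python-client library, assuming you already have OAuth credentials with the Drive scope:

    # pip install google-api-python-client google-auth
    from googleapiclient.discovery import build

    FILE_ID = "11JxbhDbTiXP106rx7p9uB0qHImZG5TeIH0JReGUeWRo"  # from the URL above

    def export_pdf(creds, out_path="sheet.pdf"):
        """Export the spreadsheet as a PDF via the Drive API
        (the export endpoint is limited to files up to 10 MB)."""
        drive = build("drive", "v3", credentials=creds)
        data = drive.files().export(
            fileId=FILE_ID, mimeType="application/pdf"
        ).execute()
        with open(out_path, "wb") as f:
            f.write(data)

    # export_pdf(creds)  # creds: any google.auth credentials object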

BigQuery connecting from GSheet without enabling API every time

I have some scripts running from a GSheet that get data from BigQuery. However, in order to make them run, I need to manually enable the API for each sheet.
So the question is: how can I enable the API within the code, so that if I share the GSheet or make a copy, I don't have to go into the script editor and enable the API from there?
Thanks
I am a huge fan of this particular use of the Google ecosystem, so I'm happy to help get others up and running using GSheets with BigQuery! Hopefully it is working well for you!
When sharing the sheet with others, there is no need to alter anything in the script editor at all. The scripts should run and query BigQuery without issue; this has been my experience at least. The obvious caveat to this is that the users you share it with must have access to the Google Developer Project that the BigQuery instance is associated with.
However, when copying the sheet, I do not believe it is possible to have it replicate the connection. This is because when the file is copied, it becomes associated with a new Google Developer Project. Thus, you have to go into the script editor, then go to Resources > Developers Console Project and change the project listed to the one in which you have BigQuery enabled.
Hopefully this helps! Sorry I don't have better news for you!
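To round that out: the bound script itself cannot flip the API on at run time, but if you control the destination projects, the enablement step can at least be scripted from outside through the Service Usage API rather than clicked through the console. A hedged Python sketch (the project ID is a placeholder, and the caller needs permission to enable services on that project):

    # pip install google-api-python-client google-auth
    from googleapiclient.discovery import build

    def enable_bigquery(project_id, creds=None):
        """Enable the BigQuery API on a project via the Service Usage API."""
        serviceusage = build("serviceusage", "v1", credentials=creds)
        op = serviceusage.services().enable(
            name=f"projects/{project_id}/services/bigquery.googleapis.com",
            body={},
        ).execute()
        return op  # a long-running operation resource

    # enable_bigquery("my-copied-sheet-project")  # placeholder project ID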

Unable to copy tables cross project in BigQuery web UI

First time I've seen this error. BigQuery won't let me copy a table cross project via the web UI, but using the console works just fine.
Is this a bug in the web UI? It used to work.
Console works fine:
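For reference, the equivalent cross-project copy through the Python client library (project, dataset, and table names below are placeholders) looks roughly like this:

    # pip install google-cloud-bigquery
    from google.cloud import bigquery

    client = bigquery.Client()

    # Source and destination live in different projects (placeholder IDs).
    src = "source-project.analytics.events"
    dst = "dest-project.analytics.events"

    job = client.copy_table(src, dst)  # starts an async copy job
    job.result()                       # block until it finishes
    print(f"Copied {src} -> {dst}")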
Looks like we introduced a UI bug in our most recent release that makes cross-project copy jobs fail. I'm working on a fix. Thanks for the bug report!

Migrate from YouTrack to JIRA

After using YouTrack for quite a while, my organization is considering a move to JIRA (for many reasons). However, JIRA doesn't seem to include a YouTrack importer/migration out of the box (though there seem to be plenty of importers/migrations in the other direction).
Has anyone migrated from YouTrack to JIRA who can share their experience?
Edit:
To anyone who might have this problem later, my final solution ended up something like this:
transfer all "basic" data by hand (user accounts, basic project setup, etc.)
write a small C# program using the Atlassian SDK and the YouTrack SDK that transfers issues from one system to the other (creating empty placeholder issues where issues were missing because someone had deleted them in YouTrack, in order to keep the numbering).
This approach worked well enough, and I managed to transfer pretty much all the data without losing anything very important (though of course all the timestamps are messed up now, but we saw that as an acceptable loss).
It's important to know that YouTrack handles issues moved from one project to another a bit counter-intuitively (they still show up in their original project even after they have been moved away, but they carry an issue ID from their new project - a slight WTF when I ran into it the first time).
Also, while the Atlassian SDK did allow me to "spoof" the creator of an issue (that is, being logged in as user A and creating an issue while telling the system that it was actually user B who created it), it does not allow you to do this with comments. So in order to transfer those properly, I had to loop through the comments, log in as each corresponding new user, and post the comments.
Also, attachments from YouTrack were a bit annoying to download, so I ended up grabbing those "by hand". :/
But all in all, it was relatively pain-free. Some assembly required, some final touch-ups required, but it was all done within a couple of days.
I had the same problem. After a discussion with the JIM (JIRA Importer) developer, I used the YouTrack REST API and a Python script to produce JSON files, and then used JIM's JSON import.
With this solution you can import almost all fields from YT - the standard ones and files, along with descriptions, links between issues and projects, and so on...
I don't know if I can push it to GitHub; I have to ask my boss, since I did it during work hours... But of course you can ask me if you want.
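The script itself isn't posted, but the general shape of that approach is straightforward to sketch. The endpoint, field list, and JIM JSON layout below are assumptions that depend on your YouTrack version and the importer's expected schema, so treat this as a starting point rather than a finished migrator:

    # pip install requests
    import json
    import requests

    YT_BASE = "https://youtrack.example.com"  # placeholder YouTrack URL
    YT_TOKEN = "perm:..."                     # placeholder permanent token

    def fetch_issues(project):
        """Pull issues for one project from the YouTrack REST API."""
        resp = requests.get(
            f"{YT_BASE}/api/issues",
            headers={"Authorization": f"Bearer {YT_TOKEN}"},
            params={
                "query": f"project: {project}",
                "fields": "idReadable,summary,description,created",
            },
        )
        resp.raise_for_status()
        return resp.json()

    def to_jim_json(project_key, issues):
        """Reshape the issues roughly into JIM's JSON import format."""
        return {"projects": [{
            "key": project_key,
            "name": project_key,
            "issues": [{
                "externalId": issue["idReadable"],
                "summary": issue.get("summary") or "",
                "description": issue.get("description") or "",
                "issueType": "Task",  # assumption: map real YT types here
            } for issue in issues],
        }]}

    if __name__ == "__main__":
        issues = fetch_issues("DEMO")  # placeholder project key
        with open("jim-import.json", "w") as f:
            json.dump(to_jim_json("DEMO", issues), f, indent=2)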
The easiest approach is probably to export the data from YouTrack to CSV and use the JIRA CSV importer. You may have to modify some of the data to fit the format the CSV importer expects.
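If you go the CSV route, the reshaping is usually a header remap plus some value clean-up. A minimal sketch; the column names on both sides are assumptions, so check them against your actual YouTrack export and the field mapping your JIRA CSV importer expects:

    import csv

    # Assumed mapping from YouTrack export headers to JIRA importer fields.
    HEADER_MAP = {
        "Issue Id": "Issue Key",
        "Summary": "Summary",
        "Description": "Description",
        "Assignee": "Assignee",
        "Created": "Date Created",
    }

    with open("youtrack.csv", newline="") as src, \
         open("jira.csv", "w", newline="") as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(dst, fieldnames=list(HEADER_MAP.values()))
        writer.writeheader()
        for row in reader:
            writer.writerow({new: row.get(old, "")
                             for old, new in HEADER_MAP.items()})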