Bug in google cloud bigquery? - google-bigquery

I observed something very strange today when trying to stream records into a BigQuery table. Sometimes after a successful stream the table shows all of the records that were streamed in; other times it only shows part of them. What I did was delete the table and recreate it. Has anyone encountered a scenario like this? I am seriously concerned.
Many thanks.

I've experienced a similar issue after deleting and recreating the table in a short time span, which is part of our e2e testing plan. As long as you do not delete/recreate your table, the streaming API works great. In our case the workaround was to use a custom streaming-table suffix for e2e runs only.
I am not sure if this has been addressed or not, but I would expect constant improvement.
I've also created a test project reproducing the issue and shared it with BigQuery team.
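For what it's worth, the suffix workaround can be sketched in a few lines of Python. This is a hypothetical helper (the `make_e2e_table_id` name and the `events` base table are my own inventions, not part of any BigQuery client library): each e2e run streams into a freshly named table and drops it afterwards, so the table the streaming API targets is never deleted and recreated.

```python
import uuid

def make_e2e_table_id(base_table: str) -> str:
    """Return a unique table name for this e2e run, so tests never
    delete and recreate the table the streaming API writes to."""
    # uuid4 hex stays within BigQuery's allowed name characters
    # (letters, digits, underscores).
    return f"{base_table}_e2e_{uuid.uuid4().hex[:8]}"

# Each run gets its own table; production tables are untouched.
table_id = make_e2e_table_id("events")
```

The actual streaming call (e.g. `insert_rows_json` in the Python client) would then be pointed at `table_id` instead of the fixed production table name.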

Related

Unable to Save Results of Query to a New Table in BigQuery

I am currently not able to save query results to a new table in BigQuery using the BigQuery console. I was able to do this two weeks ago, but now it doesn't work for me or anyone in my organization. I don't get an error; the 'loading' wheel simply keeps spinning, and it eventually times out.
Has anyone experienced this? I thought it was a general BigQuery issue, but there's no evidence of others complaining, or a general bug.
[Image of what occurs when I try to export results to a BigQuery table]
Me too. I have been unable to export query results to a BQ table since yesterday. I thought it was just me; now I know everyone is affected. It's a bug that needs to be fixed, quickly!
LATEST: Go to Query Settings and change the processing location to somewhere else, and you should be able to save the query results. I think the default is the US; I changed it to Singapore. Try a few locations and see which one works for you.
I have the same problem. I'm the GCP project owner and my BigQuery quota usage is almost at 0%. I've had this problem since the day before yesterday and it's very annoying. Currently I have to export results into a file (CSV or JSON) and then import them again from that file. I hope Google fixes this bug soon.
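If you need the processing-location workaround outside the console, the same setting can be supplied programmatically. Below is a minimal sketch of a `jobs.insert` request body per the BigQuery REST API v2; the project, dataset, and table names are placeholders, and you should pick a location that matches where your data actually lives.

```python
def query_job_body(sql: str, project: str, dataset: str, table: str,
                   location: str = "asia-southeast1") -> dict:
    """Build a BigQuery jobs.insert request body that saves query
    results to a destination table in an explicit processing location."""
    return {
        # The processing location lives on the job reference,
        # not inside the query configuration.
        "jobReference": {"projectId": project, "location": location},
        "configuration": {
            "query": {
                "query": sql,
                "useLegacySql": False,
                "destinationTable": {
                    "projectId": project,
                    "datasetId": dataset,
                    "tableId": table,
                },
                "writeDisposition": "WRITE_TRUNCATE",
            }
        },
    }

body = query_job_body("SELECT 1 AS x", "my-project", "my_dataset", "results")
```

The client libraries expose the same knob (for example, a `location` argument on the query call in the Python client), so the console is not the only way to control it.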

Google Cloud BigQuery Scheduled Queries weird error relating to JURISDICTION

All my datasets, tables, and ALL items inside BQ are in the EU. When I try to set up a 15-minute scheduled query from a view to a table, I get an error regarding my location, which is incorrect, because the source and destination are both in the EU...
Anyone knows why?
There is a known transient issue matching your situation; the GCP support team needs more time for troubleshooting. There may be a potential issue in the UI. I would ask you to try the following steps:
Firstly, try to perform the same operation in Chrome's incognito mode.
Another possible workaround is to follow the official guide using a different approach than the UI (the CLI, for instance).
I hope it helps.
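Going the API route instead of the UI, a scheduled query is a transfer config with the `scheduled_query` data source in the BigQuery Data Transfer Service. Here is a hedged sketch of such a config body; the field names follow my reading of the transfer-config resource, and the dataset, table, and SQL are placeholders. The jurisdiction is pinned by the location segment of the parent path (e.g. `projects/my-project/locations/eu`) under which the config is created, which sidesteps the UI's location guess entirely.

```python
def scheduled_query_config(name: str, dataset: str, table: str,
                           sql: str) -> dict:
    """Build a BigQuery Data Transfer Service config describing a
    scheduled query, for creation via the API/CLI instead of the UI."""
    return {
        "display_name": name,
        "data_source_id": "scheduled_query",
        "destination_dataset_id": dataset,
        "schedule": "every 15 minutes",
        "params": {
            "query": sql,
            # Template for the table the results land in.
            "destination_table_name_template": table,
            "write_disposition": "WRITE_APPEND",
        },
    }

cfg = scheduled_query_config("view_to_table", "my_dataset", "my_table",
                             "SELECT * FROM my_dataset.my_view")
```

The equivalent `bq mk --transfer_config` invocation takes the same pieces as flags, so either path avoids the web UI.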

Migrate from youtrack to jira

After using YouTrack for quite a while, my organization is considering a move to JIRA (for many reasons). However, JIRA doesn't seem to include a YouTrack importer/migration out of the box (though there seem to be plenty of importers/migrations the other way around).
Has anyone migrated from youtrack to JIRA and have any experience in this?
Edit:
To anyone who might have this problem later, my final solution ended up something like this:
transfer all "basic" data by hand (user accounts, basic project setup etc)
write a small C# program using the Atlassian SDK and the YouTrack SDK that transfers from one to the other (creating empty placeholder issues where issues were missing because someone had deleted them in YouTrack, in order to keep the numbering).
This approach worked well enough, and I managed to transfer pretty much all the data without losing anything very important (though of course all timestamps are messed up now, but we saw that as an acceptable loss).
Important to know is that YouTrack handles issues moved from one project to another a bit counter-intuitively (they still show up in their first project even after they have been moved away, but they carry an issue ID from their new project; a slight wtf when I ran into that the first time).
Also, while the Atlassian SDK did allow me to "spoof" the creator of an issue (that is, being logged in as user A and creating an issue while telling the system that it's actually user B who is creating it), it does not allow you to do this with comments. So in order to transfer those properly I had to loop through the comments and log in with the corresponding new user to post each one.
Also, attachments were a bit annoying to download from YouTrack, so I ended up grabbing those "by hand". :/
But all in all, it was relatively pain-free. Some assembly required, some final touch-ups required, but it was all done within a couple of days.
I had the same problem. After a discussion with the JIM (JIRA Importer) developer, I used the YouTrack REST API and a Python script to produce JSON files, then used JIM's JSON import.
With this solution you can import almost all fields from YT: the standard ones, plus files with descriptions, links between issues and projects, and so on...
I don't know if I can push it to GitHub; I have to ask my boss, since I did it during work hours... But of course you can ask me if you want.
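As a rough illustration of the REST-API-plus-Python approach, here is a hypothetical converter from simplified YouTrack issue dicts into the nested shape the JIRA JSON importer consumes (`projects` containing `issues` containing `comments`). The field selection here is my own simplification; check it against the importer's documentation before relying on it.

```python
def to_jira_import_json(project_key, project_name, yt_issues):
    """Convert simplified YouTrack issue dicts into the nested
    structure consumed by the JIRA JSON importer (JIM)."""
    return {
        "projects": [{
            "key": project_key,
            "name": project_name,
            "issues": [
                {
                    "externalId": issue["id"],  # preserves YouTrack numbering
                    "summary": issue["summary"],
                    "description": issue.get("description", ""),
                    "reporter": issue.get("reporter", ""),
                    "comments": [
                        {"author": c["author"], "body": c["text"]}
                        for c in issue.get("comments", [])
                    ],
                }
                for issue in yt_issues
            ],
        }]
    }

doc = to_jira_import_json("PRJ", "My Project", [
    {"id": "PRJ-1", "summary": "First issue",
     "comments": [{"author": "alice", "text": "Looks done."}]},
])
```

Carrying the YouTrack ID in `externalId` is what lets placeholder issues keep the original numbering, as described above.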
The easiest approach is probably to export the data from YouTrack into CSV and use the JIRA CSV importer. You may have to modify some of the data to fit the format the CSV importer expects.

Big Query table too fragmented - unable to rectify

I have a Google BigQuery table that is so fragmented it has become unusable. Apparently there is supposed to be a job running that fixes this, but it doesn't seem to have resolved the issue for me.
I have attempted to fix this myself, with no success.
Steps tried:
Copying the table and deleting the original: this does not work, as the table is too fragmented to copy.
Exporting the file and re-importing. I managed to export to Google Cloud Storage as JSON, though I couldn't download the file; that part was fine. The problem was the re-import. I was trying to use the web interface and it asked for a schema. I only have the file to work with, so I tried to use the schema as identified by BigQuery, but couldn't get it accepted; I think the problem was that the nested tree/leaf format wasn't translating properly.
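On the schema side of the second step above: `bq show --schema --format=prettyjson dataset.table` dumps the existing table's schema in exactly the JSON-file form that `bq load` accepts, nested RECORD fields included, which may be easier than fighting the web UI's schema box. A sketch of that file format (the column names here are made up for illustration):

```python
import json

def field(name: str, ftype: str, mode: str = "NULLABLE", fields=None) -> dict:
    """One column in BigQuery's JSON schema-file format; RECORD
    columns carry their leaf columns in a nested 'fields' list."""
    f = {"name": name, "type": ftype, "mode": mode}
    if fields:
        f["fields"] = fields  # only valid for RECORD (the 'tree' part)
    return f

# A nested (tree/leaf) schema: one repeated RECORD with two leaves.
schema = [
    field("user_id", "STRING", "REQUIRED"),
    field("events", "RECORD", "REPEATED", fields=[
        field("ts", "TIMESTAMP"),
        field("action", "STRING"),
    ]),
]

schema_json = json.dumps(schema, indent=2)
```

Writing `schema_json` to a file and passing it to `bq load --source_format=NEWLINE_DELIMITED_JSON` sidesteps the console's schema entry entirely.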
To fix this, I know I either need to get the coalesce process to work (out of my hands - anyone from Google able to help? My project ID is 189325614134), or to get help to format the import schema correctly.
This is currently causing a project to grind to a halt, as we can't query the data, so any help that can be given is greatly appreciated.
Andrew
I've run a manual coalesce on your table. It should be marginally better, but there seems to be a problem where we're not coalescing as thoroughly as we should. We're still investigating; we have an open bug for the issue.
Can you confirm this is the SocialAccounts table? You should not be seeing the fragmentation limit on this table when you try to copy it. Can you give the exact error you are seeing?

Using SQL for cleaning up JIRA database

Has anyone had luck removing a large number of issues from a JIRA database directly, instead of using the frontend? Deleting 60,000 issues with the bulk tools is not really feasible.
Last time I tried it, JIRA went nuts because of its own way of handling indexes.
How about doing a backup to XML, editing the XML, and re-importing?
We got gutsy, did a truncate on the jiraissues table, and then used the rebuild-index feature in the frontend. It looks like it's working!
This is old, but I see that this question was just edited recently, so to chime in:
Writing directly to the JIRA database is problematic. The reindex feature suggested in the Oct 14 08 answer just rebuilds the Lucene index, so it is unlikely to clean up everything that needs to be cleaned up from the database on a modern JIRA instance. Off the top of my head, this will probably leave data lying around in the following tables, among others:
custom field data (customfieldvalue table)
issue links (issuelink table)
versions and components (nodeassociation table, which contains other stuff too, so be careful!)
remote issue links or wiki mentions (remotelink table)
If one has already done such a manual delete on production, it's always a good idea to run the database integrity checker (YOURJIRAURL/secure/admin/IntegrityChecker!default.jspa) to make sure that nothing got seriously broken.
Fast-forwarding to 2014, the best solution is to write a quick script that uses the REST API to delete all of the required issues. (The JIRA CLI plugin is usually a good option for automating certain types of task too, but as far as I can tell it does not currently support deleting issues, so the REST API is your best bet.)
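A sketch of such a script in Python using only the standard library (the base URL, issue key, and credentials are placeholders; the endpoint used is JIRA's `DELETE /rest/api/2/issue/{key}`):

```python
import urllib.request

def delete_issue_request(base_url: str, issue_key: str,
                         auth_header: str) -> urllib.request.Request:
    """Build (but do not send) a JIRA REST API request that deletes
    one issue; send it with urllib.request.urlopen()."""
    return urllib.request.Request(
        f"{base_url}/rest/api/2/issue/{issue_key}",
        method="DELETE",
        headers={"Authorization": auth_header},
    )

# Deleting a batch is then just a loop over issue keys:
# for key in keys_to_delete:
#     urllib.request.urlopen(delete_issue_request(base, key, auth))
req = delete_issue_request("https://jira.example.com", "PROJ-123",
                           "Basic dXNlcjpwYXNz")
```

Because this goes through the API rather than the database, JIRA keeps its indexes and related tables (custom field values, links, and so on) consistent, which is exactly what the direct-SQL approach breaks.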