I am seeing duplicate issues created for a single defect within a minute of each other. Why is this happening, and how can I get rid of the duplicates? Any help will be appreciated.
There is no way to delete/remove an issue from AccuWork. You will need to close the duplicate issues that have been created and just keep the original open.
As for why this is happening, you could look at the history of these issues and see who is creating them. I am going to guess you have an automated process or an AccuBridge solution in place which creates the issues.
Check for more than one Perl process on the machine running AccuBridge; that would be an indicator that you have multiple instances of the bridge running.
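If you want a quick way to check, something like this sketch works on a Linux host with pgrep available (purely illustrative; adapt it to however your bridge host is set up):

    import subprocess

    # Count running perl processes on the AccuBridge host (assumes Linux + pgrep).
    # More than one usually means multiple bridge instances are running.
    result = subprocess.run(["pgrep", "-c", "perl"], capture_output=True, text=True)
    count = int(result.stdout.strip() or 0)
    print(f"perl processes found: {count}")
    if count > 1:
        print("Multiple perl processes - check for duplicate AccuBridge instances.")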
Has anyone seen this behavior? For example, here is my code in an activity:
#{concat(
    substring(activity('GetMaxDate').output.firstRow.MAX_DATE,0,4)
This IS saved, multiple times. But when I run it in debug, this is what actually runs:
#{concat(
    substring(activity('GetMaxDate').output.firstRow.MAX_DATE,1,4)
    ,'
It's running the prior version (0,4) instead of the new version (1,4). I first noticed this because I changed the name of the activity and debug still ran the old name. This seems like a new problem I've not had before. If I publish and run it from a trigger, it picks up the change; it's just debug that isn't picking it up. This seems like an inexcusable bug. This is 101 functionality, folks.
Any suggestions? Should this be logged with Microsoft as a bug?
An additional option to Gary's comment:
C) Rename your pipeline, save, run debug. Rename back after.
This worked for me.
I've seen this cache behavior in the past: the preview query shows cached data from the source table even though the source table data has completely changed.
Deleting the pipeline and dataset and creating a new pipeline solved the issue for me.
It seems this happens when debug is used too many times. I recommend logging this behavior as a bug.
All my datasets, tables, and ALL items inside BQ are in the EU. When I try to set up a View->to->Table scheduled query running every 15 minutes, I get an error regarding my location, which is incorrect, because both the source and the destination are in the EU...
Does anyone know why?
There is a known transient issue matching your situation; the GCP support team needs more time for troubleshooting. There may be an issue in the UI. I would ask you to try the following steps:
First, try performing the same operation in Chrome's incognito mode.
Another possible workaround is to follow this official guide using a different approach than the UI (the CLI or a client library, for instance).
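As a rough sketch (assuming the google-cloud-bigquery Python client; the project, dataset, and table names below are placeholders), running the view-to-table query with the job location pinned to EU explicitly, outside the UI:

    from google.cloud import bigquery

    # Placeholder project/dataset/table names - adjust to your own.
    client = bigquery.Client(project="my-project", location="EU")

    destination = bigquery.TableReference.from_string("my-project.my_dataset.my_table")
    job_config = bigquery.QueryJobConfig(
        destination=destination,
        write_disposition="WRITE_TRUNCATE",  # overwrite the table on each run
    )

    # The view-to-table query itself, with the location set explicitly to EU.
    job = client.query(
        "SELECT * FROM `my-project.my_dataset.my_view`",
        job_config=job_config,
    )
    job.result()
    print(f"Wrote results to {destination}")

The 15-minute schedule itself would then have to be handled outside the UI as well, for example with a cron job or the BigQuery Data Transfer Service.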
I hope it helps.
I observed something very strange today when trying to stream records into a BigQuery table: sometimes after a successful stream it shows all the records that were streamed in, and sometimes it only shows part of them. What I did was delete the table and recreate it. Has anyone encountered a scenario like this? I am seriously concerned.
Many thanks.
Regards,
I've experienced a similar issue after deleting and recreating the table in a short time span, which is part of our e2e testing plan. As long as you do not delete/recreate your table, the streaming API works great. In our case the workaround was to customize the streaming table suffix for e2e execution only.
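For illustration only, this is roughly what that suffix workaround can look like with the Python client (dataset, table, and field names here are made up): the e2e run streams into its own per-run table and leaves the real table alone.

    import uuid
    from google.cloud import bigquery

    client = bigquery.Client()

    # e2e runs only: stream into a per-run suffixed table instead of
    # deleting and recreating the real one (all names are placeholders).
    suffix = uuid.uuid4().hex[:8]
    table_id = f"my-project.my_dataset.events_e2e_{suffix}"

    schema = [
        bigquery.SchemaField("id", "STRING"),
        bigquery.SchemaField("payload", "STRING"),
    ]
    client.create_table(bigquery.Table(table_id, schema=schema))

    rows = [{"id": "1", "payload": "hello"}, {"id": "2", "payload": "world"}]
    errors = client.insert_rows_json(table_id, rows)
    print(errors or "streamed OK")

The suffixed tables can then be dropped (or given an expiration) once the test run is finished.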
I am not sure if this was addressed or not, but I would expect constant improvement.
I've also created a test project reproducing the issue and shared it with BigQuery team.
I am having a very hard time making RavenFS behave properly and was hoping that I could get some help.
I'm running into two separate issues: one where uploading files to RavenFS while using an embedded DB inside a service causes RavenDB to fall over, and another where synchronizing two instances set up in the same way makes the destination server fall over.
I have tried to do my best in documenting this... Code and steps to reproduce these issues are located here (https://github.com/punkcoder/RavenFSFileUploadAndSyncIssue), and a video is located here (https://youtu.be/fZEvJo_UVpc). I looked for these issues in the issue tracker and didn't find anything that looked directly related, but I may have missed something.
The solution to this problem was to remove Raven from the project and replace it with MongoDB. Binary storage in Mongo can be done on the record without issue.
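For what it's worth, here is a minimal sketch of what storing the binary on the record looks like with pymongo (the connection string, database, and field names are just placeholders; anything over MongoDB's 16 MB document limit would need GridFS instead):

    from pymongo import MongoClient
    from bson.binary import Binary

    client = MongoClient("mongodb://localhost:27017")
    files = client["mydb"]["files"]

    # Store the file's bytes directly on the document (fine under the 16 MB limit).
    with open("report.pdf", "rb") as f:
        files.insert_one({"filename": "report.pdf", "data": Binary(f.read())})

    # Read it back.
    doc = files.find_one({"filename": "report.pdf"})
    payload = bytes(doc["data"])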
I have a Google BigQuery table that is too fragmented, to the point that it is unusable. Apparently there is supposed to be a job that fixes this, but it doesn't seem to have resolved the issue for me.
I have attempted to fix this myself, with no success.
Steps tried:
Copying the table and deleting the original - this does not work, as the table is too fragmented to copy.
Exporting the table and re-importing it. I managed to export to Google Cloud Storage as JSON (I couldn't download the file directly), and that part was fine. The problem was on re-import: I was using the web interface and it asked for a schema. I only have the exported file to work with, so I tried to use the schema as identified by BigQuery, but I couldn't get it accepted - I think the problem was the tree/leaf format not translating properly.
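In case it is useful, the re-import can also be done outside the web UI; here is a rough sketch with the Python client (the bucket, project, and table names are placeholders), either letting BigQuery auto-detect the schema or passing one explicitly:

    from google.cloud import bigquery

    client = bigquery.Client()

    job_config = bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.NEWLINE_DELIMITED_JSON,
        autodetect=True,  # or supply schema=[bigquery.SchemaField(...), ...] explicitly
    )

    # Placeholder GCS path and destination table.
    load_job = client.load_table_from_uri(
        "gs://my-bucket/export-*.json",
        "my-project.my_dataset.restored_table",
        job_config=job_config,
    )
    load_job.result()
    print(f"Loaded {load_job.output_rows} rows")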
To fix this, I know I either need to get the coalesce process to work (out of my hands - is anyone from Google able to help? My project ID is 189325614134), or to get help formatting the import schema correctly.
This is currently causing a project to grind to a halt, as we can't query the data, so any help that can be given is greatly appreciated.
Andrew
I've run a manual coalesce on your table. It should be marginally better, but there seems to be a problem where we're not coalescing as thoroughly as we should. We're still investigating; we have an open bug on the issue.
Can you confirm this is the SocialAccounts table? You should not be seeing the fragmentation limit on this table when you try to copy it. Can you give the exact error you are seeing?