WorkItemMigration: No images in discussion comments

I have done a WorkItemMigration from one Azure DevOps instance to another. In my test environment everything worked fine, but in the live system all of the images in the discussion comments are missing. Is there any setting that could cause this?

You need to set a PersonalAccessToken for the source and the target, and then set "FixHtmlAttachmentLinks" to true.
This will use the REST API to download each embedded image and add it as an additional attachment on the work item.
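For example, here is a minimal sketch of the relevant parts of the tool's configuration JSON. The connection fields and the exact option names are assumptions based on common versions of azure-devops-migration-tools, so check the reference for your version:

{
  "Source": {
    "$type": "TfsTeamProjectConfig",
    "Collection": "https://dev.azure.com/source-org/",
    "Project": "SourceProject",
    "PersonalAccessToken": "<source PAT>"
  },
  "Target": {
    "$type": "TfsTeamProjectConfig",
    "Collection": "https://dev.azure.com/target-org/",
    "Project": "TargetProject",
    "PersonalAccessToken": "<target PAT>"
  },
  "Processors": [
    {
      "$type": "WorkItemMigrationConfig",
      "Enabled": true,
      "FixHtmlAttachmentLinks": true
    }
  ]
}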

Related

Potential bug in GCP regarding public access settings for a file

I was conversing with someone from GCS support, and they suggested that there may be a bug and that I post what's happening to the support group.
Situation
I'm trying to adapt this Tensorflow demo ...
https://www.tensorflow.org/hub/tutorials/tf2_arbitrary_image_stylization
... to something I can use with images stored in my GCP account, substituting one of my images to run through the process.
I have the bucket set for allUsers to have public access, with a role of Storage Object Viewer.
However, the demo still isn't accepting my files stored in GCS.
For example, this file is being rejected:
https://storage.googleapis.com/01_bucket-02/Green_Sea_Turtle_grazing_seagrass.jpeg
That file was downloaded from the examples in the demo, then uploaded to my GCS bucket, and its link used in the demo. But it's not being accepted. I'm using the URL from the Copy URL link.
Re: publicly accessible data
I've been following the instructions on making data publicly accessible.
https://cloud.google.com/storage/docs/access-control/making-data-public#code-samples_1
I've performed all of the above operations from the console, but the console still doesn't indicate public access for the bucket in question, so I'm not sure what's going on there.
Please see the attached screenshot of my bucket permissions settings.
I'm hoping you can clarify whether those settings look right for making those files publicly accessible.
Re: Accessing the data from the demo
I'm also following this related article on 'Accessing public data'
https://cloud.google.com/storage/docs/access-public-data#storage-download-public-object-python
There are 2 things I'm not clear on:
If I've set public access the way I have, do I still need code as in the example on the 'Access public data' article just above?
If I do need to add this to the code from the demo, can you tell me how I can find these 2 parts of the code:
a. source_blob_name = "storage-object-name"
b. destination_file_name = "local/path/to/file"
I know the path of the file above (01_bucket-02/Green_Sea_Turtle_grazing_seagrass.jpeg), but don't understand whether that's the storage-object-name or the local/path/to/file.
And if it's either one of those, then how do I find the other value?
And furthermore, if the goal is to make a whole bucket public, why would I need to specify an individual file? That makes me think the code isn't necessary.
Thank you for clarifying any issues or helping to resolve my confusion.
Doug
If I've set public access the way I have, do I still need code as in the example on the 'Access public data' article just above?
No, you don't need to. I did some testing and was able to pull images from GCS whether or not they were set to public.
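(For reference, if you did want to use the client library from the 'Access public data' article, the two values you asked about map as shown below. This is a minimal sketch assuming the object is public, using the bucket and object name from your post and a placeholder local path:)

# Anonymous client: no credentials are needed for a public object.
from google.cloud import storage

client = storage.Client.create_anonymous_client()
bucket = client.bucket("01_bucket-02")  # bucket name only, no path

# source_blob_name is the object's name inside the bucket.
blob = bucket.blob("Green_Sea_Turtle_grazing_seagrass.jpeg")

# destination_file_name is wherever you want the local copy saved.
blob.download_to_filename("/tmp/Green_Sea_Turtle_grazing_seagrass.jpeg")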
As discussed in this thread, what's happening in your project is that the image you are trying to pull from GCS has a .jpeg extension but is not actually a .jpeg; the actual image is a .jpg, which keeps TensorFlow from loading it properly.
See this test, following the demo you mentioned and using the image from your bucket. Note that I used .jpg as the image's extension.
content_urls = dict(
    test_public='https://storage.cloud.google.com/01_bucket-02/Green_Sea_Turtle_grazing_seagrass.jpg'
)
Also tested another image from your bucket and it was successfully loaded in TensorFlow.
Most likely the problem is that your turtle image's name ends in .jpeg while your libraries are looking for .jpg.
The errors you're seeing would be much more helpful for figuring out the problem.
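If it still fails, a quick check like this sketch will surface the actual decode error instead of a silent failure (the cached filename here is arbitrary, and the URL is the .jpg one from the test above):

import tensorflow as tf

# Download the public object and let TensorFlow decode it; any
# format or extension mismatch shows up here as an explicit error.
url = "https://storage.googleapis.com/01_bucket-02/Green_Sea_Turtle_grazing_seagrass.jpg"
path = tf.keras.utils.get_file("turtle_check.jpg", url)
img = tf.io.decode_image(tf.io.read_file(path), channels=3)
print(img.shape, img.dtype)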

MediaWiki: Getting "readapidenied" error instead of login token

This is quite a puzzling problem. I have multiple MediaWiki installations; in this specific case, version 1.34.
I can log in to all of these MediaWikis. Everything works fine.
I can also access all of these MediaWikis via the API, EXCEPT ONE. The strange thing is that all of them are configured almost identically. I even copied the configuration from a wiki where everything was working to the failing one.
To be more precise: if I send ...
/wikiA/api.php?action=query&meta=tokens&format=json&type=login
... I get a very reasonable answer, e.g.:
{"batchcomplete":"","query":{"tokens":{"logintoken":"37ec2e690eeb48a10ac66b2ccbca2b576000f9f4+\\"}}}
If I send ...
/wikiB/api.php?action=query&meta=tokens&format=json&type=login
... I get the following answer, e.g.:
{"error":{"code":"readapidenied","info":"You need read permission to use this module.","*":"See http://xxx.xxx.xxx.xxx/wikiB/api.php for API usage. Subscribe to the mediawiki-api-announce mailing list at <https://lists.wikimedia.org/mailman/listinfo/mediawiki-api-announce> for notice of API deprecations and breaking changes."}}
This can be reproduced using any web browser.
Q: What could be the reason that on this wikiB I can't even access the normal login module? It can't be the configuration: it's almost completely identical. It can't be the source code: I ran a diff on the PHP files and found no significant differences. What could be wrong here? It seems it must be something with the database, but how do I approach this? Does anyone have an idea? I would appreciate it very much if you could help!
I analyzed the database: no difference. I did more research using Google and found a bug report.
It's a bug in MediaWiki. They provided an official software release with THAT kind of bug.
It turns out there is a 1.34.0 version and a 1.34.1 version. My WikiA has 1.34.1, while WikiB had 1.34.0. After copying the single file includes/api/ApiQuery.php from WikiA to WikiB, everything worked fine.
https://gerrit.wikimedia.org/r/c/mediawiki/core/+/580097/

Create TFS Source Branch using Visual Studio Online / TFS 2015 Api

Does anyone know how to create a branch using the VSO API? The documentation for Branches doesn't include a "create".
I have been experimenting with doing it via the ChangeSet Api without much success.
This is TFVC, not Git.
Just as you see on the "Branches" page, there isn't any way to create a branch with the REST API. For the most part, you can only read/get information with the Version Control API for now.
I would recommend using the Client Object Model Reference if you want to manage version control programmatically. To create a branch, use the CreateBranch() method on the Microsoft.TeamFoundation.VersionControl.Client.VersionControlServer class.
The REST API apparently does allow one to create branches.
The confusion is that people think that this would be a PUT operation on the Branches endpoint of some kind.
It is not.
In the REST API, a branch is just one more kind of change that is checked in as part of a changeset.
It took me a long time to discover this myself; I was using the old SOAP API in the belief, apparently shared by everyone else from what I can find in Q&As on the WWW, that this wasn't part of the REST API.
Of course, using the SOAP API prohibits using .NET 5, because the assemblies only come for .NET Framework.
An abandonware API on an abandonware runtime is not a satisfactory way to talk to source control. ☺
The terrible Azure DevOps documentation gives no clue as to this, except for 1 obscure not-even-a-complete-sentence hidden in a minor class: "List of merge sources in case of rename or branch creation."
The only other clue is what appears in the JSON, from a get changeset changes, that describes the changeset of an already-made branch.
The (also abandoned) Azure DevOps sample code does not contain examples for even deleting an item, let alone branching.
Changesets are checked in via the changeset creation endpoint.
The individual change is a TfvcChange in the changeset's list of changes where:
the version control type (which is a set of flags) contains the branch flag; and
the merge source for the change specifies the source item and the range of changeset numbers.
Branching an entire tree appears to be a matter of branching the directory and all of the files and directories in the directory.
In C♯ or PowerShell, this is a TfvcChange with a VersionControlChangeType of Branch, in a TfvcChangeset passed to TfvcHttpClientBase.CreateChangesetAsync().
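Putting that together, the payload POSTed to the changeset-creation endpoint looks roughly like the following. This is a sketch inferred from the description above, not taken from any documentation; the paths, version numbers, and api-version are placeholders, and the field names follow the TfvcChange and TfvcMergeSource classes mentioned earlier:

POST https://dev.azure.com/{organization}/{project}/_apis/tfvc/changesets?api-version=6.0

{
  "comment": "Branch Main to Release-1.0",
  "changes": [
    {
      "changeType": "branch",
      "item": {
        "path": "$/MyProject/Release-1.0"
      },
      "mergeSources": [
        {
          "serverItem": "$/MyProject/Main",
          "versionFrom": 1,
          "versionTo": 12345
        }
      ]
    }
  ]
}

To branch a whole tree, repeat a change like this for the directory and for each file and subdirectory under it, as described above.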

Opera Next extension autoupdate via update_url

I have a problem with my company's internal extension. They don't want to publish it, as it gathers data on an external server, so I need to host it myself... but I would like not to lose the ability to autoupdate.
As far as I've read, I need to use update_url in the manifest, but nothing more is said in the Opera documentation...
"update_url": "http://path/to/updateInfo.xml", - as it is said in documentation page
OK... and what should I put in that XML? Will it autoupdate or just notify users about new updates? Where do I put the rest of the updated files?
I tried to contact Opera itself about this question, but they don't give any contact information except something like "if you have a problem, ask on Stack Overflow"... so here I am.
If that does not work, I was thinking about a really BAD method: using unsafe-eval and keeping the newest version in local storage... but I would rather avoid that.
In general, the behavior is the same as for Chrome. You can base your setup on this document: https://developer.chrome.com/extensions/autoupdate
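For reference, the Chrome-style update manifest that update_url points at looks like this; the appid, codebase URL, and version are placeholders for your extension's ID, the package you host, and its version:

<?xml version='1.0' encoding='UTF-8'?>
<gupdate xmlns='http://www.google.com/update2/response' protocol='2.0'>
  <app appid='aaaabbbbccccddddeeeeffffgggghhhh'>
    <updatecheck codebase='https://example.com/extensions/myext.crx' version='2.0' />
  </app>
</gupdate>

The browser fetches this XML periodically; when the listed version is higher than the installed one, it downloads the package from codebase and updates the extension automatically, so you host the updated package (and this XML) yourself.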

How do I use multiple environments with copycopter?

(Using Rails 3.2.1) When using the copycopter-server app and editing the copy, it suggests that you can save your edit as a draft, which is only pushed to the development/staging sites, or click publish, which saves it to all environments, including production.
My question is: how do I set up these environments with Copycopter? I've looked all over the place. The RailsCast mentions that you can use the feature, but never explains what you need to do in order to set it up.
Does anyone have any experience with this?
I found in a (Google) cached version of help.copycopter.com that you need to specify the environment in the copycopter.rb config file.
Change copycopter.rb to this:
CopycopterClient.configure do |config|
  config.api_key = 'your_api_key_here'
  config.host = 'host.name'
  config.environment_name = Rails.env
end
Now, when you save something as a draft, it will be pushed to your development/staging servers automatically. When you publish something, it will be pushed to all servers (including production). I'm not sure why they didn't add this to the original documentation; it took a long time to find.