Where does 'Microsoft Accessibility Insights' extension (web) store data? - accessibility-insights

Wondering if the community can help me understand: is (scanned) data only stored in the browser during a 'fast pass' scan, or does it go to secure hosting elsewhere?

In general, Accessibility Insights for Web only stores complete scan data locally in the browser and does not upload results to any sort of external host.
If you opted in to usage telemetry the first time you ran the extension, it will upload some anonymous usage data. This includes information about how many violations of each type were found, but it does not include any information which would identify the user running the scan or the site the scan was run on - in particular, it does not include the URL or title of the site being scanned, and it does not include the "Path" or "Snippet" fields listed in Fast Pass's Automated Checks results. You can disable telemetry at any time from the extension's Settings menu.
Though scan data is not stored in a hosted service, Accessibility Insights for Web does have a few features that allow a user to explicitly choose to export certain scan data to a specifically configured external host. You can find more information about that in this StackOverflow answer, which is related but not quite a duplicate of your question.

Desktop Bridge UWP - Submission

On the submission properties tab on my UWP application, I cannot select "No" for the "Does this product access, collect, or transmit personal information (data that could be used to identify a person)?" question.
It is saying "Based on the capabilities your submission declares, a privacy policy URL is required."
I have reviewed my capabilities (which is empty), and I am not using any personal information on my application.
If the internetClient capability is enabled, you will be required to provide a privacy policy, since your app could theoretically send personal data over the internet.
You can easily generate a privacy policy with a tool like this.
Although @Martin's response is technically correct, it's incomplete.
Because you are submitting a Desktop Bridge app (with the runFullTrust capability), your app has access to essentially everything the user has access to, so internetClient is redundant (unless you also have UWP components like a background task). According to the Store product page, your app has access to "all system resources", so in your privacy policy you might want to mention more than just network usage - for example, that you don't collect or use any personal data, won't read files, and don't access the microphone, camera, or location. I don't believe that's required, but it might make customers feel better.
(Note that the Store text might change to something more descriptive in the future).

Application Insights strategies for web api serving multiple clients

We have a back end API, running ASP.Net Core, with two front ends: A SPA web site (Vuejs) and a progressive web page (for mobile users). The front ends are basically only client code and all services are on different domains. We don't use cookies as authentication uses bearer tokens.
We've been playing with Application Insights for monitoring, but as the documentation is not very descriptive for our situation, I would like to get some more input on the best strategy and possibilities for:
Tracking users and metrics without cookies, end to end: from e.g. a button click in the application, to the server call and the Entity Framework/SQL query (I see that this is currently not supported: How to enable dependency tracking with Application Insights in an Asp.Net Core project), to processing the data and presenting the result on the client.
Separating calls from mobile and standard web in an easy manner in Application Insights queries. Any way to show this in the standard charts that show up initially would be beneficial.
Making sure that our strategy will also fit situations where other external clients access the API; we should be able to identify these easily and see how much load they are creating for the system.
Doing all of the above with the least amount of code.
This might be worthy of several independent questions if you want specifics on any of them. (And generally your last bullet is always implied, isn't it? :))
What have you tried so far? Most of the "best way for you" kinds of things are going to be opinions, though.
For general answers:
re: tracking users...
If you're already doing user info/auth for other purposes, you'd just set the various context.user.* fields on the incoming request's telemetry context with the info you have. All other telemetry that occurs using that same telemetry context would then inherit whatever user info you already have.
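As a minimal sketch with the Application Insights JavaScript SDK (assuming @microsoft/applicationinsights-web; the key and user id are placeholders), you could tag the context once your bearer-token auth has identified the user:

```typescript
import { ApplicationInsights } from "@microsoft/applicationinsights-web";

const appInsights = new ApplicationInsights({
  config: { instrumentationKey: "<your-instrumentation-key>" }, // placeholder
});
appInsights.loadAppInsights();

// After bearer-token auth succeeds, tag all subsequent telemetry from this
// context with the validated user id; storeInCookie=false keeps it cookie-free.
appInsights.setAuthenticatedUserContext("user-12345", undefined, false);
```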
re: separating calls from mobile and standard...
If you're already serving these as different services/domains, and you are using the same instrumentation key for both, then the domain/host info of page views or requests is already there; you can filter/group on it in the portal or make custom queries in the analytics portal to analyze it that way. If you know which site it is regardless of the host, you could instead add that as a custom property on the telemetry context, which also avoids dealing with host info.
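For example (continuing with the JS SDK from the sketch above; the clientType property name and its value are made up for illustration), a telemetry initializer in each front end can stamp every outgoing item:

```typescript
// In the mobile PWA's bootstrap code; the SPA would set "web" instead.
appInsights.addTelemetryInitializer((item) => {
  item.data = item.data || {};
  item.data.clientType = "mobile"; // appears as a custom property on every item
});
```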
re: external callers via an API
Similarly, if you're already exposing an API and using auth, you should (ideally) already know who the inbound callers are, and you can set that info in custom properties as well.
In general, custom properties (string:string key/value pairs) and custom metrics (string:double key/value pairs) are your friends. You can set them on contexts so all the events generated in that context inherit the same properties, or you can explicitly set them on individual TrackEvent calls (or any of the other Track* calls) to send specific properties/metrics with any single event.
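For instance (JS SDK again; the event name and keys are illustrative), a single trackEvent call can carry both kinds of pairs:

```typescript
appInsights.trackEvent({
  name: "ReportGenerated",                        // illustrative event name
  properties: { clientType: "mobile" },           // string:string custom properties
  measurements: { rowCount: 1200, renderMs: 87 }, // string:number custom metrics
});
```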
You can also use telemetry initializers to augment or filter any telemetry that's being generated automatically (like requests or dependencies on the server side, or page views and Ajax dependencies on the client side).
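A telemetry initializer that returns false drops the item entirely; a sketch (the health-check filter is just an example):

```typescript
// Discard synthetic health-check traffic so it doesn't skew the charts.
appInsights.addTelemetryInitializer((item) => {
  const uri: string = (item.baseData && item.baseData.uri) || "";
  if (uri.indexOf("/health") !== -1) {
    return false; // returning false filters the item out
  }
});
```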

How to build a Google gadget with persistent storage

I'm trying to make a Google gadget that stores some data (say, statistics of users' actions) in a persistent way (i.e. the statistics accumulate over time and across multiple users). I also want this data to be hosted for free by Google, possibly together with the gadget itself.
Any ideas on how to do that?
I know the Google gadgets API has tools for working with remote data, but then the question is where to host it. Google Wave seemed to be an option, but it is no longer supported.
You should get a server and host it there.
You then have the best control over the code, the performance, and the data itself.
There are several hosting providers out there who provide hosting for a reasonable price.
To name some: Hostgator.com (US), Hetzner.de (DE), http://swedendedicated.com (SE; never used it, just a quick search on the internet).
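If you go the own-server route, the gadget itself can talk to that server with the legacy gadgets.io.makeRequest call; a minimal sketch (http://example.com/stats is a hypothetical endpoint you would host yourself):

```typescript
// The gadgets.* API is provided by the gadget container at runtime.
declare const gadgets: any;

function recordAction(action: string): void {
  const params: any = {};
  params[gadgets.io.RequestParameters.METHOD] = gadgets.io.MethodType.POST;
  params[gadgets.io.RequestParameters.POST_DATA] =
    gadgets.io.encodeValues({ action: action });

  // Hypothetical endpoint on your own server that accumulates the statistics.
  gadgets.io.makeRequest("http://example.com/stats", (response: any) => {
    if (response.errors && response.errors.length) {
      // handle the failure, e.g. retry or log
    }
  }, params);
}
```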

How does the Dropbox Datastore API differ from Parse?

How does the Dropbox Datastore API differ from similar offerings like Parse? One difference that I see is that my users pay for server storage instead of me. Are there other differences?
Disclaimer: I'm a Dropbox engineer who worked on the Datastore API, and know about the Parse API only indirectly. Weigh my opinion appropriately. Major differences I know of (pro and con):
Dropbox Datastores are free to the developer, and free to the user for the first 5 MB per app (after which their Dropbox quota applies). Parse charges developers based on how many API requests they're making.
Parse has minimal offline support, while Dropbox has full offline operation. With Dropbox, if the developer modifies data while offline, those modifications will be reflected in subsequent queries (with Parse, those changes are not reflected). Dropbox provides on-device query logic (unlike Parse), so apps can continue to generate the views they need even when there's no Internet available. In addition, Parse does not provide conflict resolution or offline querying. (See the sketch after this list.)
Parse provides the ability to share data between users, and global data for all users of the app. Dropbox Datastores only support per-user data (for each app) for now (sharing is on the roadmap).
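To make the offline-first model concrete, here is a minimal sketch against the dropbox.js Datastore API, reconstructed from its documentation (the table and field names are made up):

```typescript
// dropbox.js is loaded from Dropbox's CDN; typings are approximated here.
declare const Dropbox: any;

const client = new Dropbox.Client({ key: "YOUR_APP_KEY" }); // placeholder app key
client.authenticate();

client.getDatastoreManager().openDefaultDatastore((error: any, datastore: any) => {
  if (error) { return; }

  // Inserts and queries run against the local copy and sync in the background,
  // which is what enables full offline operation.
  const table = datastore.getTable("stats"); // made-up table name
  table.insert({ action: "click", count: 1 });
  const clicks = table.query({ action: "click" });
  console.log(clicks.length, "click records");
});
```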
I would also add that:
Parse is a full-featured backend-as-a-service. You can find a pretty complete list of the other players in this field: http://en.wikipedia.org/wiki/Backend_as_a_service. They provide features like:
Data service
User registration/auth
Push notification
Social
The Dropbox Datastore API is more focused on data services (and you arguably get the user part for free too). It also works fully offline.
The Parse framework can store data that can be read by any user in the application.
The Dropbox Datastore stores data for each user, and you can't access data from other users. That's the main difference.
It's easy to get lost in this since you have to read between the lines. My take is that with the Datastore you are working with objects stored offline locally as JSON. I'm hoping they will soon release a Xamarin Android component - they released an iOS component last month. Since Xamarin targets Android, iOS, and Windows Phone, who knows why they made a dedicated iOS DLL for Xamarin, but I digress. With Parse, it appears to me their intent is the always-connected device. Sure, you can save queries locally, and you can save ("save eventually") locally, where Parse will push to the server when it is connected. But saving "eventually" and saving queries for offline work is a different design than just saving and letting Parse do it all in the background for you - which it does not, unless I have missed something that would make this attractive to me. I cannot see Parse being usable for devices that you know will only sometimes be connected, without a lot of code to make this happen and sync.

SkyDrive sync REST API

I have read the docs for the SkyDrive REST APIs but didn't find any API with which I can sync with SkyDrive without recursively polling the folders to check for updates.
Is there any API to get only the updates for a user's drive?
A commonplace reality of epistemology is that...
It is typically much easier to prove that something exists than to prove that it does not exist
Nevertheless, I can say with a high level of confidence that the official REST API for SkyDrive doesn't include a way of getting a list of updated documents for synchronization purposes.
Furthermore, I didn't see any evidence of an unsupported/unofficial API that would serve this purpose, and from observing the way the Windows client for SkyDrive interacts with the server (within the limits of fair-use reverse engineering), it appears that synchronization is done by reviewing the directory tree rather than getting a differential list.
I believe the closest you can get is: Get a list of the user's most recently used documents
To get a list of SkyDrive documents that the user has most recently used, use the wl.skydrive scope to make a GET request to /USER_ID/skydrive/recent_docs, where USER_ID is either me or the user ID of the consenting user. Here's an example.
GET http://apis.live.net/v5.0/me/skydrive/recent_docs?access_token=ACCESS_TOKEN
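For illustration, a sketch of that request in TypeScript (how you obtain ACCESS_TOKEN is up to you, and note the Live Connect v5.0 endpoint shown above has long since been retired):

```typescript
// Sketch only: the Live Connect v5.0 API is legacy and no longer available.
async function getRecentDocs(accessToken: string): Promise<unknown> {
  const url = "https://apis.live.net/v5.0/me/skydrive/recent_docs" +
    `?access_token=${encodeURIComponent(accessToken)}`;
  const response = await fetch(url);
  if (!response.ok) {
    throw new Error(`Request failed: ${response.status}`);
  }
  return response.json(); // JSON description of the most recently used documents
}
```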