Can anyone explain the purpose of the ApplicationContentUriRules section in the Windows 8 manifest file? What should happen when I add a new URI to that section? Will it be executed as if it were in the local context? The docs say it's used to allow external content to be loaded in an iframe, but that's completely wrong. Can someone please provide a definitive answer on the purpose of this section?
It's used to allow specific pages in the web context (which are generally pages external to your app) to access certain native features - like geolocation and the clipboard - that they would normally be unable to access. It essentially provides a whitelist.
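For reference, here's a minimal sketch of what that section looks like in the package manifest. The URL is a placeholder, and note that in the Windows 8.1 schema a rule can additionally carry a WindowsRuntimeAccess attribute:

    <!-- Placeholder URL; a page matched by an include rule still runs in the
         web context but gains the whitelisted access described above. -->
    <ApplicationContentUriRules>
      <Rule Type="include" Match="https://maps.example.com/view.html" />
    </ApplicationContentUriRules>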
Is there any authorization mechanism in Docusaurus? In my case, not everyone should have access to the documentation.
I skimmed through the official documentation and couldn't find the answer to my question.
As far as I know, it is not currently possible. For more details, take a look at this GitHub issue.
To quote the part that is relevant to you:
"Docusaurus builds a static site. IE the content is exactly the same for all users.
If you want different content displayed for different user roles, you have to:
make that content dynamic for each user: ie render it only on the client side by swizzling React components and fetching your dynamic data yourself, like in any React app)
build one different static site per user role, each containing a different set of visible pages. You can use serverless edge functions (Cloudflare Workers for example) to serve one static deployment or another based on the role of the user and pages they can access.
Remember all these are concerns that are outside the scope of Docusaurus. In the end, we just build a static React site."
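For the second option quoted above, here's a minimal sketch of what such an edge function could look like as a Cloudflare Worker. The cookie name, roles, and origin hostnames are all hypothetical, and in practice the role should come from a signed session rather than a bare cookie:

    // Hypothetical setup: two static deployments of the same Docusaurus site,
    // one per role, fronted by a single Worker that picks the origin.
    export default {
      async fetch(request) {
        const cookie = request.headers.get("Cookie") || "";
        const match = /(?:^|;\s*)role=(\w+)/.exec(cookie);
        const role = match ? match[1] : "public";
        const origin = role === "staff"
          ? "https://docs-internal.example.com"  // full docs build
          : "https://docs-public.example.com";   // restricted build
        const url = new URL(request.url);
        return fetch(new Request(origin + url.pathname + url.search, request));
      },
    };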
I'm looking to build an app using property data. Nestoria has a free API with rules of use, and Zoopla has an API you register for. OnTheMarket and Rightmove have the same terms of use, word for word (bizarre for competitors?). Rightmove advertises an API for upload but not download - I can't find anything for OnTheMarket.
I've discovered that Rightmove does have an API, although the postcode search is obfuscated by their own outcode mappings...
https://api.rightmove.co.uk/api/sale/find?index=0&sortType=1&numberOfPropertiesRequested=2&locationIdentifier=OUTCODE%5E1&apiApplication=IPAD
I'm wary of using an API that's not promoted. The alternative is scraping, which is technically harder and legally questionable, although from what I've read the data is in the public domain and so free to use.
I've contacted Rightmove but got no response.
Is anyone using the Rightmove API, and have they had it authorised by Rightmove? It seems most strange that it's open and available but barely mentioned when searching for it.
Can anyone clarify what rules/laws/ethics are in place for scraping data?
Don't query their hidden API. But you can run a web crawler on the RightMove.co.uk website, and that is perfectly legal, as outlined in section 3.3 of their Terms of Service:
You must not use or attempt to use any automated program unless the automated program identifies itself uniquely in the User Agent field and is fully compliant with the Robots Exclusion Protocol
A web crawler like Apache Nutch follows the Robots Exclusion Protocol. From their robots.txt file I found that they have elaborate nested sitemap.xml files, which suggests they tolerate organized, polite crawling of their website. I wanted their data myself, so I am starting to crawl them with my own resources - let me know if you need access to this data.
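If you write your own crawler instead of using Nutch, the two ToS conditions above are mechanical: identify yourself uniquely in the User-Agent field and respect robots.txt. A naive sketch of that check (Node 18+, placeholder User-Agent string; a real crawler should use a complete Robots Exclusion Protocol parser):

    const USER_AGENT = "my-research-crawler/1.0 (+https://example.com/contact)";

    async function allowedByRobots(url) {
      const { origin, pathname } = new URL(url);
      const res = await fetch(origin + "/robots.txt", {
        headers: { "User-Agent": USER_AGENT },
      });
      if (!res.ok) return true; // no robots.txt: no stated restrictions
      const text = await res.text();
      // Naive parse: only honours rules in the "User-agent: *" group.
      let appliesToUs = false;
      const disallowed = [];
      for (const raw of text.split("\n")) {
        const line = raw.split("#")[0].trim();
        const colon = line.indexOf(":");
        if (colon < 0) continue;
        const field = line.slice(0, colon).trim().toLowerCase();
        const value = line.slice(colon + 1).trim();
        if (field === "user-agent") appliesToUs = value === "*";
        else if (appliesToUs && field === "disallow" && value) disallowed.push(value);
      }
      return !disallowed.some((prefix) => pathname.startsWith(prefix));
    }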
You are not allowed to scrape their data. Here is what their terms & conditions say about it:
"You must not use or attempt to use any automated program (including, without limitation, any spider or other web crawler) to access our system or this Site. You must not use any scraping technology on the Site. Any such use or attempted use of an automated program shall be a misuse of our system and this Site. Obtaining access to any part of our system or this Site by means of any such automated programs is strictly unauthorised."
I'm new to Bigcommerce but experienced with web app development. I need to customize a Bigcommerce store with custom logic that runs server-side and affects the output in the UI by deciding which page to serve. For example, I want to have different versions of a product page for different locations. For SEO purposes, however, I want each version to have a static URL. I need to implement server-side logic to do something like detect the user's location based on IP and then determine which of the product version pages to serve. I realize I can do this with JavaScript, but I don't want to, as I don't think that would work as well for SEO.
I have looked briefly over their API and templating but am not seeing a real way that this is possible. Can anyone guide me in the right direction, or is Bigcommerce too simplified to allow this sort of customization?
You do not have server-side access on Bigcommerce. The only way to do this would be client-side with JavaScript.
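Since nothing else is available, here is a hedged client-side sketch of the location-based redirect. The geolocation endpoint and the per-region page URLs are hypothetical, and the asker's SEO caveat stands, since crawlers may not execute the redirect:

    (function () {
      var regionPages = {
        US: "/widget-us/", // hypothetical per-region product pages
        GB: "/widget-uk/"
      };
      // Hypothetical IP-geolocation endpoint returning e.g. {"country": "GB"}
      fetch("/api/hypothetical-geoip")
        .then(function (res) { return res.json(); })
        .then(function (geo) {
          var target = regionPages[geo.country];
          if (target && window.location.pathname.indexOf(target) !== 0) {
            window.location.replace(target);
          }
        });
    })();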
Is it possible to get a handle on a file that is opened by an external app via my application?
Using cloud-storage apps as an example: I would like to track changes to a file opened via the storage-provider app, so that the modified file can be uploaded again afterwards.
There are two possible answers here, depending on what kind of app you're implementing.
For general tracking purposes, you can try using the ContentsChanged event of the StorageFolderQueryResult/StorageFileQueryResult classes within Windows.Storage.Search. That is, you create a file or folder query for what you want to watch and then register an event handler. Generally speaking, this works well for content on the local file system; it's not guaranteed if you're trying to run a query on files/folders whose backing store is elsewhere.
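A hedged JS sketch of that first approach, watching the Documents library (it assumes the documentsLibrary capability is declared in the manifest; names follow the WinRT JavaScript projection):

    var search = Windows.Storage.Search;
    var options = new search.QueryOptions(search.CommonFileQuery.defaultQuery, ["*"]);
    var query = Windows.Storage.KnownFolders.documentsLibrary
        .createFileQueryWithOptions(options);
    query.addEventListener("contentschanged", function () {
        // Something in the query's scope changed; re-enumerate to find out what.
    });
    // The event only starts firing once an initial enumeration establishes the query.
    query.getFilesAsync().done(function (files) { /* initial snapshot */ });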
The subject is too detailed to describe here, but you can refer to "File and Folder Queries" in Chapter 11 of my free ebook Programming Windows Store Apps with HTML, CSS, and JavaScript, Second Edition, page 607. Even though I focus on JS as a language, the discussions of WinRT APIs like this are useful when working in any language...plus the ebook is free, so there's nothing to lose.
The other mechanism would be useful if you're implementing an app that provides the interface to a cloud storage backend, like the OneDrive app that's part of Windows. In this case you'd want to use the CachedFileUpdater contract. See Appendix D, page 1288, of my aforementioned book.
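As a rough illustration of the provider side (a sketch only; the full contract also involves handling cachedFileUpdater activation), you mark the files you hand out so that Windows calls your app back around reads and writes:

    // "file" is a StorageFile your app is handing out, e.g. through the
    // file picker contract; the content id is a hypothetical identifier.
    function markForUpdates(file) {
        var provider = Windows.Storage.Provider;
        provider.CachedFileUpdater.setUpdateInformation(
            file,
            "my-content-id",
            provider.ReadActivationMode.beforeAccess, // call us before reads
            provider.WriteActivationMode.afterWrite,  // call us after writes
            provider.CachedFileOptions.requireUpdateOnAccess);
    }
    // Windows then activates the app with ActivationKind.cachedFileUpdater,
    // where the fileUpdateRequested event is handled to sync the file.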
I am in the process of converting a web application that has been in use for some time into a Sitefinity 4 managed site. There is plenty of documentation on how to use the software to CREATE a new site, but I've found precious little describing how to migrate from non-CMS to Sitefinity.
So - specifically, I would like some guidance on the process of converting from non-managed to managed. I've been searching Google, the Sitefinity forums, etc., but finding nothing except how to migrate from one version of Sitefinity to another - not what I'm trying to do.
Any leads for web sites to visit or documentation pages to read would be very helpful.
You'll have to bite the bullet and invest more resources at the beginning of your project, without releasing anything for some period of time. You can't drive a car without its fundamental components; the same principle applies here.
Whatever your requirements are, you will either have to hack around the CMS and then fix the hacks later, or do it properly from the very start.
1. Look at your existing site and break it down into smaller chunks.
2. Consult Sitefinity documentation/partners/freelancers on how your existing content can be migrated onto the Sitefinity platform.
3. Break the migration into tasks and start implementing.
This is a very rough guideline, but so are your requirements.
To summarise: there is no quick way. You'll have to do it properly from the start, or invest more resources later on fixing bugs and hacks.
"If we have Sitefinity at the root of the application, we cannot, according to Sitefinity, have any pages not managed by SF."
That's not entirely correct. Sitefinity allows you to add "external pages", meaning that you can create nodes in your sitemap which link to external pages. Thus, your navigation in Sitefinity would show the complete website page structure, while some of the pages in that structure would actually link to external ones.
It would be a quick and easy task to create your page structure programmatically.
Actually, this is quite simple to achieve. Sitefinity is a completely dynamic CMS (meaning there are no real files). The implementation uses a RoutingEngine and a VirtualPathProvider to achieve this. What this means in practice is that you have two solid, standard extension points for splitting the site into a "Sitefinity-managed part" and a "custom-managed part".
So, a very simple way to do this is to register a route (more info here: ASP.NET routing) before the SitefinityPageRoute, as SitefinityPageRoute will throw a 404 if it cannot find a page.
So, let's say you register a route "~/mystuff" before SitefinityPageRoute: all requests that start with "~/mystuff" will first go to your RouteHandler, where you can decide to handle them (write to the HTTP response) or do nothing and let them fall through to the Sitefinity routes.
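A minimal ASP.NET sketch of that idea; the handler names are hypothetical, and exactly where you register the route (Global.asax or a Sitefinity bootstrap event) varies by version:

    using System.Web;
    using System.Web.Routing;

    public class MyStuffRouteHandler : IRouteHandler
    {
        public IHttpHandler GetHttpHandler(RequestContext requestContext)
        {
            return new MyStuffHttpHandler();
        }
    }

    public class MyStuffHttpHandler : IHttpHandler
    {
        public bool IsReusable { get { return false; } }

        public void ProcessRequest(HttpContext context)
        {
            // The "custom managed part": write straight to the HTTP response.
            context.Response.Write("Served outside Sitefinity.");
        }
    }

    // In Application_Start, ahead of the Sitefinity routes:
    // RouteTable.Routes.Add(new Route("mystuff/{*rest}", new MyStuffRouteHandler()));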
Another way, of course, is to implement a custom VirtualPathProvider; however, this may be overkill if you simply want some pages to be handled differently.
All this being said, it's obvious that pages not handled by Sitefinity will not be handled by Sitefinity :) (so, no page editor, no workflows, no translations, no widgets, no templates, no themes).