Opera extensions (widgets): dynamic config file

I have an Opera 11 extension, which has a background process and an injected script. These communicate very frequently with a remote server (not the webpage the user's viewing), using the background script's cross-site XMLHttpRequest capabilities.
I would like the URL of the server to be a preference, so that it can be modified by the user without editing the package. The config.xml file would be a good place for this, since it accepts <preference name="serverUri" value="..." />. However, I would also like the extension to be able to update itself directly from the server (not through Opera's site), which can be achieved using <update-description href="http://myserver.com/client/update" />.
So what I would like to do is make the href attribute of the update-description element depend on the value of the serverUri preference. I would imagine some syntax like this:
<update-description href="{$serverUri}" />
But I could not find any references to this kind of functionality. Is there some way to solve this?

It's not possible to use variables in the config.xml file the way you've written it, and I don't think there are plans to add this.
I'm sure you know that preferences can be set not just with the preference element in config.xml but also using widget.setPreferenceForKey(value, key), but I don't think that solves your problem in this case.
The only workaround I can think of is to keep all your logic in an external script on your server, and have your extension's local files (background script or injected script) contain just a very simple couple of lines that reference that external script. Something like:
var script = document.createElement('script');
script.src = 'http://www.example.com/script.js';
document.body.appendChild(script);
You could then make the script's URL editable by the user and store it in widget.preferences.
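For example (a minimal sketch; the preference key "scriptUri" and the default URL are placeholders, and widget.preferences is used here as the Storage-like object it is in Opera 11 extensions):
// Read the user-editable script URL from preferences, falling back to a default.
var scriptUri = widget.preferences.getItem('scriptUri') || 'http://www.example.com/script.js';
var script = document.createElement('script');
script.src = scriptUri;
document.body.appendChild(script);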
EDIT by hallvors: This solution has serious drawbacks, see my comment below.

As far as I know this is not currently possible. It seems like a bit of an unusual use case, which could potentially be risky to implement, so it would be interesting to hear more about why you want to do this.


Override forcedownload behavior in Sitecore

We had a problem with some of our IE clients failing to download a PDF, even after clicking on the link. We found the answer here resolved our problems: set forcedownload=true for PDF mime types in web.config.
However, that created another problem: we are now unable to render a PDF in the browser when we want to. We used to do this with an iframe, but now the PDF just downloads and does not render in the browser.
I learned that the forcedownload=true setting is actually a default in a subsequent version of Sitecore (v7.2). So, I'm hesitant to revert that.
So, how do I render a PDF in a browser in this situation?
You can leave forceDownload=false on the PDF mime type and instead set the following setting to false:
<setting name="Media.EnableRangeRetrievalRequest" value="false"/>
I faced the same dilemma a few months back with the same initial fix. I found out the actual cause last week and wrote a blog post about it. (In fact, I wrote the answer you linked to; I've updated it with the same information now for future visitors.)
The issue is basically a combination of the Adobe Reader plugin for IE9, chunked transfer encoding and streaming the file directly from the database. I found that if you close your browser and try again, or force refresh with Ctrl+F5, it worked fine. Once Sitecore had cached the file to disk it would continue to work for everyone.
The above setting disables chunked transfer encoding, instead sending the file down to the browser as a single piece. This setting was introduced in Sitecore 6.5+
This is one of the flaws in the MediaRequestHandler, and in my opinion the forceDownload option is pretty useless the way it is designed by default. (Why would you ever want to configure this option per media extension only?)
You basically have to turn the forcedownload option off again and replace the MediaRequestHandler with your own. I usually end up writing my own anyway because of other issues with the default handler, such as dealing properly with CDNs.
In the ProcessRequest pipeline, you can determine if the item should be “downloaded” or not by setting the Content-Disposition header. You basically need to get rid of the default handling of forceDownload and set your headers based on your own logic.
Personally I prefer to set a query string parameter, such as ?dl=1, and base the Content-Disposition header on that. You could also extend the media item template to carry a default behaviour per item or subtree (leveraging Sitecore inheritance and standard values), and you could thereby also define (override) a specific filename per item for the attachment part of the Content-Disposition header.
When rendering the link, you can use the properties collection (write a suitable extension method or similar) so that your code clearly marks the link as a download link while still using the built-in field render methods. That way you avoid the risk of messing up the Page Editor.
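To sketch the idea (class and member names are from memory of the Sitecore 6.5/7.x API, so verify them against your version; the ?dl=1 convention is just the example above):
using System.Web;
using Sitecore.Resources.Media;

public class CustomMediaRequestHandler : MediaRequestHandler
{
    // Replace the sitecore_media.ashx handler registration in web.config so it
    // points at this class instead of the default MediaRequestHandler.
    protected override bool DoProcessRequest(HttpContext context, MediaRequest request, Media media)
    {
        // Decide attachment vs. inline from the query string instead of forceDownload.
        if (context.Request.QueryString["dl"] == "1")
        {
            // The filename could equally come from a field on the media item.
            var fileName = media.MediaData.MediaItem.Name + "." + media.MediaData.Extension;
            context.Response.AddHeader("Content-Disposition",
                "attachment; filename=\"" + fileName + "\"");
        }

        return base.DoProcessRequest(context, request, media);
    }
}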
/ Mikael
You have to disable range retrieval requests in web.config by setting the following to false:
<setting name="Media.EnableRangeRetrievalRequest" value="false" />
The MediaRequestHandler enables Sitecore to serve PDF content partially, in ranges, using the HTTP 206 status code. You can also override MediaRequestHandler and write your own custom implementation to handle media requests.

Support for multiple environments in your windows store app

I have been working on a Windows Store app where I have to support multiple configuration parameters for my app. One of the parameters is the URL the app is talking to.
For example development environment, test, acceptance and finally production.
One of the things I'm currently thinking about is the most efficient way of supporting all these environments with the least effort. Because there isn't a config file we can simply edit to update these parameters, I came up with some ideas, and I'm curious about other options I might have missed.
Here are the things I came up with:
1
Adding multiple build configurations to the app and then using them in code to pick the correct parameter, like this:
private string webserviceUrl =
#if DEV
    "devUrl";
#elif TEST
    "testUrl";
#else
    "prodUrl"; // acceptance/production
#endif
2
With the approach in number 1 there are a few more options available, like including a config XML file based on the configuration, or fetching configuration settings from a web service the first time the app runs.
3
Using a branch/merge strategy and updating the config files in each branch. The advantage is that the code is clean and only contains the settings it needs for the build it's created for, and the package can be built by the build server. The disadvantage is that you need to branch/merge a lot.
The last option feels like the cleanest solution. Am I missing any options, or do you have experience with any of these methods? What would you prefer?
I think the assumption is that apps in the store will always point to production.
But, in saying that, I'm facing the same issue as we're side loading the application onto devices that we control, and not using the Windows Store at all.
To answer your question, I prefer option 1.
Option 2 and the xml/json config file seems like the best option though.
The webservice option probably won't work. What webservice URL do you use? And how will it work if you want some instances pointing to different environments, since they will all be fetching the config from the same URL?
Another option you might want to consider would be options in the settings charm menu. For example, use radio buttons for the environments, and allow the user to configure which environment they want to target.
The issue would be locking it down in production for end users so that it isn't modifiable any more. Perhaps once "PROD" radio is selected, all the radio buttons are then hidden.
If you're deploying the application through side loading, then these settings could probably be configured during the install process.
I'd be interested to hear other opinions as well. This is also an old question, so I'd like to know what solution you decided on implementing.
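To illustrate option 2, a per-build config file read at startup could look roughly like this (just a sketch: the file names, the "webserviceUrl" key and the DEV symbol are assumptions, not from the question):
using System;
using System.Threading.Tasks;
using Windows.Data.Json;
using Windows.Storage;

public static class AppConfig
{
    public static async Task<string> GetWebserviceUrlAsync()
    {
#if DEV
        var fileName = "config.dev.json";   // packaged with the DEV build
#else
        var fileName = "config.prod.json";  // packaged with the Release build
#endif
        // Files included in the project as Content end up in the package install folder.
        var file = await StorageFile.GetFileFromApplicationUriAsync(new Uri("ms-appx:///" + fileName));
        var text = await FileIO.ReadTextAsync(file);
        return JsonObject.Parse(text).GetNamedString("webserviceUrl");
    }
}
Each build configuration would then simply include its own config file in the package.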

dojo load js script and then execute it

I am trying to load a template with xhr and then append it to the page in some div.
The problem is that the page loads the script but doesn't execute it.
The only solution I have found is to add a marker to the page (say "Splitter"): before the splitter I put the JS code, and after it the HTML code. When I get the template by Ajax, I split it on the marker. Here is an example.
The data I request by Ajax is:
//js code:
work_types = <?php echo $work_types; ?>; //json data
<!-- Splitter -->
html code:
<div id="work_types_container"></div>
So the callback returns 'data', which I simply split and execute like this:
data = data.split("<!-- Splitter -->");
dojo.query("#some_div").append(data[1]); //html part
eval(data[0]); //js part
Although this works for me, it doesn't seem very professional!
Is there another way in Dojo to make this work?
If you're using Dojo, it might be worth looking at the dojox/layout/ContentPane module (reference guide). It's quite similar to the dijit/layout/ContentPane variant, but with one special extension: it allows executing the JavaScript in the retrieved content (using eval()).
So if you don't want to do all that work by yourself, you could do something like:
<div data-dojo-type="dojox/layout/ContentPane" data-dojo-props="href: myXhrUrl, executeScripts: true"></div>
If you're concerned about it being a DojoX module (DojoX will disappear in Dojo 2.0), the module is labeled as maintained, so it has a higher chance of being integrated in dijit in later versions.
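The programmatic equivalent of the markup above would be something like this (a sketch assuming an AMD-style Dojo 1.7+ setup, a variable myXhrUrl holding the template URL and a <div id="target"> in the page):
require(["dojox/layout/ContentPane", "dojo/domReady!"], function (ContentPane) {
    var pane = new ContentPane({
        href: myXhrUrl,        // fetched with XHR, like your manual approach
        executeScripts: true   // also run <script> blocks found in the response
    }, "target");
    pane.startup();
});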
As an answer to your eval() safety question (in the comments): it's allowed, of course, otherwise there wouldn't be a function called eval(). But it is indeed less secure, because the client effectively trusts the server and executes everything the server sends it.
Normally there are no problems, unless the server sends malicious content (due to an issue on your server or a man-in-the-middle attack), which would then be executed, causing an XSS vulnerability.
In an ideal world the server only sends data and the client interprets this data and renders it itself. In that design the client only trusts data from the server, so no malicious logic can be executed (and there is no XSS vulnerability).
It's unlikely to happen, and the ideal-world solution is not even possible in many cases, since the initial page request (loading your web page) is in fact a similar scenario in which the client executes whatever the server sends.
Web application security is not about being 100% safe (that's impossible); it's about leaving as few open doors as possible for attackers. It's up to you to decide what you consider safe and to verify whether the "ideal world" solution is possible in this specific scenario (it might not be, or it might take too much time compared to the other solution).

Stopping invalid file type or file name submissions in coldfusion

So, I'm having this lovely issue where people like to submit invalid file types or funky named files... (like.. hey_i_like_"quotes".docx) Sometimes they will even try to upload a .html link...
How should I check for something like this? It seems to create an error every time someone submits a poorly named item.
Should I create a cfscript that checks it before submission? Or is there an easier way?
If it were before submission it would be JavaScript, not cfscript. JavaScript can always be bypassed, so I'd say you're better off doing it server-side with ColdFusion. Personally I'd just wrap the whole thing in a try/catch (you should do that as a matter of course with all file-upload handling anyway) and throw an error back at them if their filename is no good.
When you say submit, are you using cffile to allow your users to upload a file?
If so, use the "accept" attribute with a try/catch around it. For example:
<cftry>
<cffile action = "upload"
fileField = "FileContents"
destination = "c:\files\upload\"
accept="image/jpg, application/msword"
>
<cfcatch type="Any" >
<p>sorry we could not upload your file!</p>
</cfcatch>
</cftry>
I personally would not use "just" JavaScript as this could be disabled and you are back in the same boat.
Hope this helps.
On the server, as part of validation, use reFindNoCase() along with an appropriate regex to check for a properly formatted file path. You can find lots of example regexes for file paths on the internet, such as this one. Hope that helps.
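For example, something along these lines (just a sketch; the regex, the extension whitelist and the form field name are placeholders to adapt):
<!--- Reject anything that isn't a simple name with a whitelisted extension --->
<cfif NOT reFindNoCase("^[\w\- ]+\.(docx?|xlsx?|pdf|jpe?g|png)$", form.uploadFileName)>
    <cfthrow message="Invalid file name or file type.">
</cfif>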
As @Duncan pointed out, client-side validation would most likely be in JavaScript. Personally, if I had the time/resources, I would do this as a convenience for the end user. If they upload an enormous PDF when a DOCX is required by the system, it would be annoying for them not to receive a message until the upload is complete.
As far as filenames go, it seems to me that the simplest solution (and one I've used in the past) is to assume all filenames are bad and rename them. There are several ways to do this. If you need to preserve the original filename, I would just use urlEncodedFormat() to clean the filename into something that is web-friendly. If you need to preserve all versions, you can append a date/time stamp, so bob.docx becomes bob_201104051129.docx or some such. If you must keep the original filename without any changes, I would recommend setting up a DB table as a pointer system, keeping the original name, timestamp, and other metadata there and referring to the file by renaming it to the ID.
But urlEncodedFormat() is probably enough for what you've outlined.
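A sketch of that rename-on-upload approach (variable names and paths are examples; the result attribute requires a reasonably recent ColdFusion version):
<cffile action="upload"
    fileField="FileContents"
    destination="c:\files\upload\"
    nameconflict="makeunique"
    result="upload">
<!--- Web-friendly name: cleaned original name + timestamp + original extension --->
<cfset newName = urlEncodedFormat(upload.clientFileName) & "_"
    & dateFormat(now(), "yyyymmdd") & timeFormat(now(), "HHmmss")
    & "." & upload.clientFileExt>
<cffile action="rename"
    source="#upload.serverDirectory#\#upload.serverFile#"
    destination="#upload.serverDirectory#\#newName#">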
For user experience it's best to do it client-side, but it's not bad at all to double-check server-side too.
For the client-side part, I recommend using the jQuery Validation plugin; it's easy to use.

Up and download directly - no waiting

I want to program something where you upload a file on one side and the other person can download it the moment I start uploading. I knew of such a service but can't remember the name. If you know the service I'd like to know the name; if it's not around anymore I'd like to program it as an open-source project.
And it is supposed to be a website.
What you're describing sounds a lot like BitTorrent.
You might be able to achieve this by handling the upload in a custom ISAPI filter (if you use IIS). Most CGI implementations won't start running your script until the request has completed, which makes sense, as you won't have been given all the values yet; I suspect ISAPI may fall foul of this as well.
So, your next best bet is to write a custom HTTP server, that can handle the serving of files yet to finish uploading.
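For what it's worth, the core of such a relay can be sketched in a few lines of Node.js (purely illustrative and not from this answer: no error handling, one transfer per id, the downloader must connect first, and the uploader is assumed to stream the raw file body rather than a multipart form):
var http = require('http');

var waitingDownloads = {}; // transfer id -> pending download response

http.createServer(function (req, res) {
    var parts = req.url.split('/'); // "/up/<id>" or "/down/<id>"
    var action = parts[1], id = parts[2];

    if (action === 'down') {
        // Park the download until the matching upload arrives.
        waitingDownloads[id] = res;
    } else if (action === 'up' && waitingDownloads[id]) {
        var out = waitingDownloads[id];
        out.writeHead(200, { 'Content-Type': 'application/octet-stream' });
        req.pipe(out); // stream the upload body straight to the downloader
        req.on('end', function () {
            delete waitingDownloads[id];
            res.end('done');
        });
    } else {
        res.statusCode = 404;
        res.end();
    }
}).listen(8080);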
I found it: pipebytes.com :)