See this question first, and the top answer. That being said, I need to issue an XMLHttpRequest to a remote server from a mobile app. Can someone show an example? I can't figure out where to specify the URL (with parameters) as shown in the answer to the previous question.
XMLHttpRequest takes a URL as a parameter to its open() method. It can be a relative URL, but it can also be a fully qualified URL...except that a fully qualified URL pointing at another domain probably won't do what you want, due to the cross-site protections (the same-origin policy) built into most browsers.
This question has details on solutions to this issue.
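To make the "where do the URL and parameters go" part concrete, here is a minimal sketch; the URL and its query-string parameters are placeholders:

    var xhr = new XMLHttpRequest();
    // The URL, including any query-string parameters, is the second
    // argument to open(); this one is a placeholder.
    xhr.open("GET", "https://api.example.com/search?q=test&page=1", true); // true = asynchronous
    xhr.onreadystatechange = function () {
        if (xhr.readyState === 4 && xhr.status === 200) { // 4 = DONE
            console.log(xhr.responseText); // body of the server's response
        }
    };
    xhr.send();

If that URL points to a different origin than the page itself, the request is subject to the cross-site restrictions mentioned above.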
When I go to https://dojotoolkit.org/, I get "Unable to connect". In some browsers I get "You have reached a domain that is pending ICANN verification".
I've used a number of dojo libraries in my code. Does anyone know what happened to the owner and whether this is likely to be fixed in the near future?
If it isn't fixed, what is my best option for replacing it?
This seems to be a temporary administrative DNS issue, based on their Twitter response:
We apologize for the issues accessing the Dojo 1 web site. We’re working on it as fast as possible. In the mean time, you can add the IP address directly to /etc/hosts. 104.16.205.241
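For example, the corresponding /etc/hosts entry would look like this (a sketch, assuming the IP from the tweet is still current):

    # temporary workaround until DNS is restored
    104.16.205.241    dojotoolkit.org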
There are also some workarounds on the dojo gitter.im channel:
Reference guide content is also at https://github.com/dojo/docs/ And tutorials are at https://github.com/dojo/dojo-website/tree/master/src/documentation/tutorials
Also, as mentioned in this related question, you can use the Archive.org Wayback Machine.
The site now appears to be back up. I was able to access it and get information on features I'm using.
Recently our website went from HTTP to HTTPS. I, and others, are randomly getting a "This site can't provide a secure connection" page; upon refresh, the page loads just fine. Why are we getting this page at random on the initial load?
FYI... we have HTTP-to-HTTPS redirects in place.
Impossible to say without more details, but some things I can suggest are:
You have multiple servers and some are configured correctly and some incorrectly.
You are not including the full certificate chain. Sometimes your browser has the missing intermediate certificate cached and sometimes not, which would explain the intermittent failures (see this answer for more info: https://serverfault.com/questions/826100/ca-certificate-trouble-with-squid-on-centos7/826321#826321). A quick way to check what your server actually sends is shown in the sketch after this list.
A bug in the browser/software. I had this issue in Chrome when using Apache HTTP/2. I never did figure it out, but a Chrome update fixed it.
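As a quick way to see the certificate chain your server actually sends, you can use openssl from the command line; a sketch, with example.com standing in for your own host:

    # Print every certificate presented during the TLS handshake.
    # A complete chain includes the intermediate certificate(s), not just your own.
    openssl s_client -connect example.com:443 -servername example.com -showcerts </dev/null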
Run https://www.ssllabs.com/ssltest/ on your site to confirm there is no problem with your HTTPS setup. If that doesn't turn anything up, or you don't understand the results it gives, then update your question with more details if you want people to help you: which server and browser you are using (and which versions), whether you have any proxy in place between your browser and the site, and ideally the website name.
Also be aware that this is a programming site, and some people don't like these questions here and will suggest other Stack Exchange sites, though honestly I don't know where this question is best placed: Server Fault maybe, but that is for professional sysadmins only; Unix & Linux seems a little generic (I'm not even sure you are using a Linux web server!); Webmasters is more for content and SEO questions; Information Security is more for theoretical SSL/TLS questions...
I am using wget to download URLs in a script that could be run on Linux, OS X, or Windows. My question is whether server behavior can be affected by the user-agent string (the -U option). According to this MS link, a web server can use this information to provide content that is tailored for your specific browser. According to the Apache docs (access control section), you can use these directives to deny access to a particular browser (User-Agent). So I am wondering whether I need to download links with a different user agent for each OS, or whether one download would suffice.
Is this actually done in practice? I tried a bunch of servers but did not really see different behavior across user agents.
There are sites that prevent scraping by returning an error response when they detect you're hitting their servers with an automation tool instead of a browser, and the user-agent string is one of the signals used to detect that difference.
Other than that, not much useful can be said here, as we don't know what sites you want to target, what HTTP server they run, and what code runs on top of that.
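That said, if you want to test a particular server yourself, a simple sketch is to fetch the same URL with different user-agent strings and compare the responses; the URL and user-agent value below are placeholders:

    # Fetch once with wget's default user agent and once pretending to be a
    # desktop browser, then diff the two responses.
    wget -O ua-default.html https://example.com/page
    wget -U "Mozilla/5.0 (Windows NT 10.0; Win64; x64)" -O ua-browser.html https://example.com/page
    diff ua-default.html ua-browser.html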
I have found a strange issue which I do not completely understand. When I run LoadRunner with just a single protocol selected, the browser launches when recording starts but says "page not found" (as if the proxy was not set).
How come? The protocol selection specifies what traffic will be captured, and I assumed LoadRunner simply does not record the protocols that are not specified. So why can the browser not find the page in single-protocol mode when it can in multi-protocol mode?
I've found that the single protocol mode (I assume web here) is somewhat erratic and does not work all the time. The workaround is to use the multiple protocol mode, but select only Web (HTTP/HTML). This works much better.
The actual reasons why this is the case are unknown to me, but at least give it a try!
As for other issues:
Check that your proxy settings are correct when you invoke IE for recording. Your issue sounds a little like a proxy issue, but please post more details if none of the above works.
Over 90% of recording issues can be traced to environment items. Specifically: do you have the right match between your LoadRunner version and your browser's manufacturer and version, are you signed in with the proper credentials, and do you have any conflicting software packages loaded (such as antivirus) that could be impacting the recording mechanism?
Where to start?
Make sure you are signed in with administrative credentials
Disable any antivirus running locally
Validate your browser manufacturer and version with the requirements for your version of LoadRunner
I'm wondering how I can track down the culprit: what is NOT being transmitted over SSL on my website? It's blowing my mind, because I use relative URLs or explicitly specify https:// for all links, images, etc...
Any ideas/tools to find out what the issue is?
Thanks.
If you mean that some resources are transferred over plain HTTP without encryption, you can check for this in Chrome's Developer Tools, in the Resources tab (the Network tab in newer versions); that should tell you which parts come from where. Look for the ones whose address starts with http://.
Alternatively, use Fiddler: by default it won't decrypt HTTPS connections, so you'll see CONNECT requests for HTTPS and GET/POST for HTTP; those GET/POST entries are your culprits.
For those, like myself, who run into this issue, I suggest a few tips while designing your website.
Always use relative paths whenever possible, e.g. "images/someimage.png" instead of a full domain path like http://someDomainName/images/someimage.png. Any one of those absolute http:// references will cause the browser to throw that warning at you, as in the sketch below.
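A minimal illustration, reusing the placeholder names from above:

    <!-- Relative path: fetched over the same scheme as the page,
         so it stays secure on an https page. -->
    <img src="images/someimage.png">

    <!-- Absolute http:// path: triggers the mixed-content warning
         when the page itself is served over https. -->
    <img src="http://someDomainName/images/someimage.png">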
When linking to external content (Google or other ads, JavaScript sources such as jQuery, or any other media), make sure you use an https:// link if one is available. In my case, I had one tiny image for a link to an external site, but they did not offer an https link to the image, so I simply downloaded it and put it in my own images folder. Problem solved.
The Chrome resources list is a very helpful tool; I'm not sure whether Firefox has something similar in its toolbox. Another method, if you have shell/command-line access, is to use grep to search your files for "http:". This will most often show anything that links to non-secure content.
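For example, a sketch of such a search (the web-root path is a placeholder; adjust it for your server):

    # Recursively list every file and line that still references plain http://
    grep -rn "http://" /var/www/mysite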