I can't seem to get Orbited working with my Twisted app. I have a page, served by Twisted (say localhost:8000/page), which includes Orbited.js from the Orbited server (localhost:8001/static/Orbited.js). I then have a TCP chat server example running on port 7777. I try to use Orbited.TCPSocket to connect to the chat server:
conn=new Orbited.TCPSocket();
conn.open("localhost", 7777);
conn.send("test\r\n"); //error: bad readyState
It works fine when Orbited serves the page itself, but not when Twisted serves it from a different port. My orbited.cfg looks like this:
[listen]
http://:8001
[access]
* -> localhost:7777
And before (when it worked) I also had this in it:
[static]
test=index.html
Where index.html was another page grabbing localhost:8001/static/Orbited.js, and was accessed from localhost:8001/test.
How do I need to change my config file to work with requests from my twisted site on another port?
Update
I tried changing Orbited.settings.port to 8001 before trying to open the connection, but I got an error: "unsafe javascript attempt to access frame with url http://localhost:8000/page from frame with url http://localhost:8001/static/xsdrBridge.html#1. Domains, protocols and ports must match."
Hmm, also, I just looked at the Orbited wiki, and apparently setting Orbited.settings.port is exactly what I'm supposed to do. But I'm getting horrible errors.
You can call send() only after the connection is in the opened state.
Put a handler for .onopen() and do a .send() from there.
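A minimal sketch of that ordering, assuming Orbited's standard onopen/onread callbacks and the host and port from the question:

var conn = new Orbited.TCPSocket();
conn.onopen = function() {
    // readyState is now "open", so send() is safe.
    conn.send("test\r\n");
};
conn.onread = function(data) {
    // Data coming back from the chat server lands here.
    console.log(data);
};
conn.open("localhost", 7777);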
I have used Orbited in the past. It works in general, but there are several quirks to get it set up and running smoothly. The project itself seems to be in a state of flux (it seems to be moving to node.js). Both of these points lead me to suggest avoiding Orbited if you can.
Are there alternatives that are cleaner? I would say yes. You can pretty much emulate Orbited with WebSockets on stock Twisted. This will clearly work for newer browsers. What about older ones? Well, there are open-source projects that wrap WebSockets and fall back to Flash as a transport for older browsers. The setup works quite well, and actually feels cleaner than using a solution like Orbited.
If you check out http://github.com/rlotun/txWebSocket you'll find the current state of Twisted's WebSocket implementation, as well as an example of how to fall back to Flash on older browsers. Hopefully this will be useful enough to serve as a drop-in replacement for Orbited.
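On the browser side the switch is small; a sketch of the client code that replaces Orbited.TCPSocket, assuming the server exposes a WebSocket endpoint (the ws://localhost:8000/chat URL is a placeholder):

var conn = new WebSocket("ws://localhost:8000/chat");
conn.onopen = function() {
    // Same rule as with Orbited: only send once the socket is open.
    conn.send("test\r\n");
};
conn.onmessage = function(event) {
    console.log(event.data);
};

And because the page and the WebSocket endpoint can now be served by the same Twisted process on the same port, the cross-port frame error from the question disappears.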
Related
I have been developing a crawling script for a number of news websites and using Scrapy to handle the logic.
When I run my script on an Ubuntu web server (Digital Ocean, if that helps), a lot of the websites that return 200 on my local machine return 417 instead.
I was wondering how I should fix this, if it is a problem at all? I'm actually not quite sure if it is affecting the final output, but it seems like it is.
Some of my own research has turned up:
http://www.checkupdown.com/status/E417.html. I've tried adding an Expect header to my requests, which hasn't worked.
I've heard that it might be a problem with HTTP 1.1 vs 1.0? EDIT: Nope. Scrapy's HTTPDownloaderHandler automatically chooses 1.1 if it is available.
417 Expectation Failed is the error a web server gives you when it cannot meet the expectation your client sent in the Expect request header (most commonly Expect: 100-continue); it is about the Expect header, not about acceptable content types.
This looks like a Scrapy bug or, more likely, a misconfiguration.
It seems your public IP address either was already banned, or was banned while you were scraping, by the web server of the site you want to scrape. In the first situation you can reboot your instance to get a new public IP (at least this works on Amazon). For the second scenario, here are some tips from the official documentation to avoid it:
rotate your user agent from a pool of well-known ones from browsers (google around to get a list of them)
disable cookies (see COOKIES_ENABLED) as some sites may use cookies to spot bot behaviour
use download delays (2 or higher); see the DOWNLOAD_DELAY setting
if possible, use Google cache to fetch pages, instead of hitting the sites directly
use a pool of rotating IPs, for example the free Tor project or paid services like ProxyMesh
use a highly distributed downloader that circumvents bans internally, so you can just focus on parsing clean pages; one example of such a downloader is Crawlera
Additionally, you can reduce the concurrent requests setting in your spider (see CONCURRENT_REQUESTS); that worked for me once.
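All of these are plain Scrapy settings; they can go in your project's settings.py, or be passed per run with -s flags. A sketch (the spider name and the values here are placeholders):

scrapy crawl myspider -s USER_AGENT="Mozilla/5.0 (Windows NT 6.1; rv:40.0) Gecko/20100101 Firefox/40.0" -s COOKIES_ENABLED=False -s DOWNLOAD_DELAY=2 -s CONCURRENT_REQUESTS=8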
I want to capture all the network calls from WebDriver in Java. I am not doing any UI testing, just testing JS execution and the requests and responses of some network calls.
I tried using Browser Mob as is suggested in most forums, but I need it to work across all browsers. It worked flawlessly with Firefox, but I was facing some issues with the others. The Safari driver doesn't even support a Proxy capability.
I don't want to use Fiddler, as it involves some manual steps around invoking it and storing the calls, whereas Browser Mob, being an in-code proxy, can be integrated in a smoother fashion.
I also tried using the RC-like package included in the Selenium standalone server package. But I have some HTTPS calls and some nested iframes in cross domains. I am particularly interested in a cross-domain POST call, and it doesn't work out that well there. Also, people keep saying it's not recommended to use that package.
So I had a solution: use a standalone proxy server running on a machine. Using host entries, we point WebDriver at the proxy instead of the actual server. The proxy records all the incoming calls and routes them to the actual server host. Later, I can make a request to the proxy, which will return all the calls it intercepted. I am not sure whether that's still called a proxy or a router.
I came across TCPmon, but it's no longer supported. Does anyone know of similar tools that run on Unix systems, or any alternative solutions?
We modified the Fiddler rules script to include a new exec action. If you use their native script editor, it also provides auto-complete features, and we were able to find our way around it comfortably. The syntax is similar to that of JavaScript.
The Fiddler package comes with an ExecActions.exe, which can be used to pass console arguments to a running Fiddler instance from the command prompt.
The code we wrote processed all the sessions captured by Fiddler and wrote them to a file in a custom JSON format; we later used GSON to deserialize it.
Please let me know, if you want further details.
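For a rough idea of the shape this takes, here is a sketch of a custom exec action in Fiddler's rules script (FiddlerScript is JScript.NET; the OnExecAction hook exists in the default CustomRules.js, but the "dumpjson" action name, the output path, and the JSON shape here are made up for illustration):

static function OnExecAction(sParams: String[]): Boolean {
    if (sParams.Length > 0 && sParams[0] == "dumpjson") {
        // Collect every session Fiddler has captured so far.
        var oSessions = FiddlerApplication.UI.GetAllSessions();
        var parts = [];
        for (var i = 0; i < oSessions.Length; i++) {
            parts.push('{"url":"' + oSessions[i].fullUrl + '","status":' + oSessions[i].responseCode + '}');
        }
        // Write the custom JSON file that the Java side later reads with GSON.
        System.IO.File.WriteAllText("C:\\temp\\sessions.json", "[" + parts.join(",") + "]");
        return true; // handled, skip Fiddler's built-in actions
    }
    return false;
}

Running ExecActions.exe dumpjson from the command prompt then triggers this in the running Fiddler instance.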
My WebRTC app works fine when I connect two instances of the same browser, but when I try a combination, neither responds to the other's signaling messages. Something probably worth mentioning is that I have not implemented TURN; however, I don't see why that should make a difference, so I'm not going to change that unless I'm fairly certain it will.
I don't have much of a clue where the error lies, so I will just add code on request for the sake of readability.
Make sure you enable DTLS-SRTP (Firefox only supports DTLS-SRTP) by passing the following to the PeerConnection constructor:
{ 'optional': [{'DtlsSrtpKeyAgreement': 'true'}]}
See this page for more details.
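For context, that object goes in as the second argument to the constructor. A sketch, with an example STUN server (use the vendor-prefixed constructor name, or adapter.js, as appropriate for each browser):

var pc = new RTCPeerConnection(
    { 'iceServers': [{ 'url': 'stun:stun.l.google.com:19302' }] },
    { 'optional': [{ 'DtlsSrtpKeyAgreement': 'true' }] }
);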
You have not really described what goes wrong with the signaling: no error messages and so on.
But based on the fact that you only see the fault when using two different web browsers, I would recommend using adapter.js, which has been somewhat promoted by the WebRTC project.
Link to a WebRTC demo that shows interoperability using adapter.js (the page also contains a link to adapter.js): http://www.webrtc.org/demo
Direct link to adapter.js
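To give a sense of what adapter.js does: it papers over the prefixed names (webkitRTCPeerConnection vs mozRTCPeerConnection, webkitGetUserMedia vs mozGetUserMedia) so the same code runs in both browsers. A sketch, assuming adapter.js is included on the page (the STUN server is an example):

// adapter.js exposes unprefixed RTCPeerConnection and getUserMedia shims.
var pc = new RTCPeerConnection({ 'iceServers': [{ 'url': 'stun:stun.l.google.com:19302' }] });
getUserMedia({ audio: true, video: true }, function (stream) {
    pc.addStream(stream); // identical call in Chrome and Firefox
}, function (err) {
    console.log('getUserMedia failed: ' + err);
});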
Try to turn off your firewalls to check if it fixes the problem.
In my case (Windows 7), the default Windows firewall didn't allow UDP under the private inbound connection settings, and the Firefox + Chrome p2p connection just didn't work.
Hope it helps.
I have found a strange issue which I do not completely understand. When I run LoadRunner with just a single protocol, the browser (when recording starts) is launched but says "page not found" (as if the proxy was not set).
How come? The protocols specify what traffic will be captured, but I assumed it just does not record the ones not specified. So why could the browser not find the page in single-protocol mode but could in multiple?
I've found that the single-protocol mode (I assume Web here) is somewhat erratic and does not work all the time. The workaround is to use the multiple-protocol mode, but select only Web (HTTP/HTML). This works much better.
The actual reasons why this is the case are unknown, but at least give it a try!
As for other issues:
Check that your PROXY settings are correct when you invoke IE for recording. Your issue sounds a little like a proxy issue, but please post more details if none of the above works.
Over 90% of recording issues can be traced to environment items. Specifically: do you have the right match-up between the version of LoadRunner and the version/manufacturer of your browser, are you signed in with the proper credentials, and do you have any conflicting software packages loaded, such as antivirus, which could be impacting the recording mechanism?
Where to start?
Make sure you are signed in with administrative credentials
Disable any antivirus running locally
Validate your browser manufacturer and version with the requirements for your version of LoadRunner
I'm not really sure how to word this exactly, so hopefully someone can make sense of it. I've been working on an iPad app that syncs files from a server to your iPad and lets you build presentations with the various files. The corporation I'm working with on this app has a wireless network that requires you to re-authenticate every hour. So every hour, instead of getting the expected JSON API response, any HTTP request pulls down the page needed to re-auth with the wireless network. I was wondering if there is a specific HTTP response code related to getting sent that page, or a "best-practice" way of testing for that page as opposed to JSON.
Granted, I could just test whether the response is HTML, but that doesn't account for other redirect responses that I haven't found yet. I could test whether part of the HTML matches a predetermined portion of the page, but I'm an outside contractor; I can't guarantee they won't change the markup or verbiage of the page after I've made my deliverable.
So does anyone out in the ether know a "best practices" methodology for testing if the app needs to reauth before syncing?
I noticed that on Mac OS X, and maybe even iOS, when you connect to a new Wi-Fi network, it will try to contact www.apple.com. This is done to check whether internet connectivity is available. If it's not available, the Captive Network Assistant will pop up, showing you the authentication page, or sometimes, when I'm in Starbucks, an advertisement.
Following your question, since Apple themselves do it this way, I think you could check the HTTP response code, look for something in the HTML markup (slightly discouraged, though), or try to connect to a known server (Reachability).
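A sketch of that "known endpoint" check, shown in JavaScript for brevity (on iOS the same logic translates to NSURLConnection and NSHTTPURLResponse; the /api/ping URL is a placeholder for any endpoint you control that returns JSON):

var xhr = new XMLHttpRequest();
xhr.open("GET", "https://yourserver.example.com/api/ping", true);
xhr.onload = function () {
    var contentType = xhr.getResponseHeader("Content-Type") || "";
    if (xhr.status === 200 && contentType.indexOf("application/json") === 0) {
        // Normal connectivity: proceed with the sync.
    } else {
        // An HTML body, a redirect, or an unexpected status usually means the
        // captive portal intercepted the request: prompt the user to re-auth.
    }
};
xhr.send();

The advantage over matching markup is that you only depend on your own API's content type, not on anything the corporation might change.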
For a corporation that practices well-documented projects, I am quite sure they won't change things without making sure that your app, once deployed, will continue to work.