Possible Duplicate:
How can I find out whether a server supports the Range header?
I want to build a jPlayer media player, but the documentation says that the server must enable Range requests.
It says this is easy to check: just see whether your server's response includes Accept-Ranges in its headers. But I don't know how to do this 'easy' thing.
I think it is the same question as How can I find out whether a server supports the Range header?, but I need a step-by-step idiot's guide to carrying out the test. I couldn't work it out from that answer. Can anyone help? I guess I need to upload a PHP page to my server with some code on it?
Thank you.
OK, well, apparently this is how you can do it (thanks to Mark Panaghiston at jPlayer for this):
Navigate to the URL of a video (an MP4 in my case) on the server in question, in Chrome or Firefox.
Open the developer tools (in Chrome, the shortcut is Ctrl+Shift+I or F12).
Switch to the Network tab of the developer tools.
Select the video/file in question.
Click the Headers tab for details.
Look for an Accept-Ranges response header.
If Accept-Ranges has a value such as bytes, then range requests are accepted.
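If you'd rather test from a script than from the browser, you can send a small ranged request yourself. A minimal Python sketch, where the URL is a placeholder for a file on your own server:

    import urllib.request

    url = "http://example.com/video.mp4"  # placeholder; use your own file

    # Ask for the first two bytes only; a server that honours Range
    # replies with 206 Partial Content instead of 200.
    req = urllib.request.Request(url, headers={"Range": "bytes=0-1"})
    with urllib.request.urlopen(req) as resp:
        print(resp.status)                        # 206 means ranges work
        print(resp.headers.get("Accept-Ranges"))  # usually "bytes"
        print(resp.headers.get("Content-Range"))  # e.g. "bytes 0-1/1048576"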
Related
Our server has been hacked (code was injected). We need to find the first date the pirate's URL was called from our server. Where can we find that? Is there a way to find it in the logs? We found the pirate URL in a piece of code on our payment page.
Thanks for your precious help!
If you have a Linux server running Apache, usually all HTTP logs are written to /var/log/httpd/.
However, the harder question is how that URL gets called, so that you can find the real point of entry (if there is one).
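If you know the injected URL, you can scan those logs for its first appearance. A rough sketch, assuming access logs under /var/log/httpd/ (the filename pattern and the needle string are placeholders; adjust to your setup):

    import glob

    needle = "pirate-url-fragment"  # placeholder: part of the injected URL

    # Walk every access log, including rotated ones, and print matching
    # lines; the earliest timestamp among them is the first call.
    for path in sorted(glob.glob("/var/log/httpd/access_log*")):
        with open(path, errors="replace") as f:
            for line in f:
                if needle in line:
                    print(path, line.rstrip())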
I'm implementing a Python script mainly based on pyautogui. One of the things the script does is open a Chrome web page. After that, I need to access the DOM of this currently open page.
Since I've not opened the browser with Selenium, I can't use it to analyze the DOM.
However, my question is: is this currently open Chrome page available/saved somewhere on the hard drive, so that I can access it with Selenium? Like an .html file?
I checked many other questions here, and users talk about the Chrome cache, but there are no HTML files there.
I just need to be able to access the currently open page, not all the historical data in the cache.
Opening the web browser directly with Selenium is not an option either, since most of the websites analyzed have CAPTCHAs and Distil bot detection.
Thanks.
If you start the original Chrome with the --remote-debugging-port=PORT_NR argument and visit localhost:PORT_NR from another browser, you will have access to the full content of the browser, including the dev console.
Once you have this, you have multiple ways to go:
You can visit http://localhost:PORT_NR with any other browser (or even with the same browser), and you should have full access to the content of the original Chrome. With Selenium you should have a relatively easy time getting by.
You can also use the DevTools API (the documentation... is... well... there is room for improvement; search for "chrome devtools protocol" to be amazed by the lack of docs). As an example, you can fetch http://localhost:PORT_NR/json to get the available debugging URIs. Grab the relevant websocket endpoint (webSocketDebuggerUrl), open a websocket connection, and issue a command such as {"method": "DOM.getDocument", "id": 12}. You can find the available DOM-related commands here: https://chromedevtools.github.io/devtools-protocol/1-3/DOM
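A minimal sketch of that flow in Python, using the third-party websocket-client package (pip install websocket-client) and assuming Chrome was started with --remote-debugging-port=9222:

    import json
    import urllib.request
    from websocket import create_connection  # pip install websocket-client

    # List the debuggable targets and grab the first real page's socket URL.
    targets = json.load(urllib.request.urlopen("http://localhost:9222/json"))
    page = next(t for t in targets if t["type"] == "page")

    ws = create_connection(page["webSocketDebuggerUrl"])
    ws.send(json.dumps({"method": "DOM.getDocument", "id": 12}))
    print(json.loads(ws.recv()))  # reply carries the root DOM node
    ws.close()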
Since I had to reinvent the wheel, I'll give some extra info that I couldn't find anywhere:
Start the Browser with remote debugging enabled (see previous posts)
Connect to the given port on localhost and use these HTTP GET requests to get very limited control over your browser (a Python sketch follows the list):
https://chromedevtools.github.io/devtools-protocol/#endpoints
Most important:
GET /json/new?{url}
GET /json/activate/{targetId}
GET /json/close/{targetId}
GET /json or /json/list
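For illustration, here is how those endpoints can be driven from Python with nothing but the standard library. The port 9222 is an assumption, and note that newer Chrome builds require PUT rather than GET for /json/new:

    import json
    import urllib.request

    base = "http://localhost:9222"  # assumed --remote-debugging-port value

    # Open a new tab, list all targets, then close the tab again.
    new_tab = json.load(urllib.request.urlopen(base + "/json/new?http://example.com"))
    targets = json.load(urllib.request.urlopen(base + "/json/list"))
    print([t["url"] for t in targets])
    urllib.request.urlopen(base + "/json/close/" + new_tab["id"]).read()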
To gain full control over the browser, you need to use a websocket connection. Each object in GET /json or /json/list has its own ID. Use this ID to interact with the tab. By the way: entries of type "page" are normal tabs; the other entries are extensions and so on. Once you know which tab you want to influence, get its webSocketDebuggerUrl.
Use that URL and connect with something that speaks the websocket protocol.
Once connected, you must craft valid JSON with the following structure:

    {
        "id": 0,
        "method": "Page.navigate",
        "params": {"url": "http://google.com"}
    }
Notes:
id is a simple counter (an int) that grows with each command; it is not the ID of the tab(!)
method is one of the methods described in the docs; params is likewise described there.
The return values are always JSONs.
From now on you can use the official docs:
https://chromedevtools.github.io/devtools-protocol/tot/Page/#method-navigate
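Putting the pieces together in Python (again a sketch with the websocket-client package; TARGET_ID stands for the ID you picked out of /json/list):

    import json
    from websocket import create_connection  # pip install websocket-client

    # webSocketDebuggerUrl copied from GET /json/list for the chosen tab
    ws = create_connection("ws://localhost:9222/devtools/page/TARGET_ID")
    ws.send(json.dumps({
        "id": 1,  # your own counter, not the tab ID
        "method": "Page.navigate",
        "params": {"url": "http://google.com"},
    }))
    print(ws.recv())  # the JSON reply echoes the same "id"
    ws.close()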
Dunno how other people found out about it, but it took me a few hours to get it working. Probably because everyone is just using Python's Selenium to do it.
I am creating a web service that schedules posts to a social network. I need help dealing with file uploads under high traffic.
Process overview:
User uploads files to SomeServer (not mine).
SomeServer then responds with a JSON string.
My web app should store that JSON response.
Option 1: Save, cURL POST, delete tmp
The stupid way I made it work:
User uploads files to MyWebApp;
MyWebApp cURLs the file onward to SomeServer, getting the response.
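Concretely, the forwarding step looks something like this (a sketch using the requests library; the upload URL and field name are made up):

    import requests  # pip install requests

    def forward_upload(tmp_path):
        # Re-POST the temporary file to SomeServer and keep its JSON reply.
        with open(tmp_path, "rb") as f:
            resp = requests.post("https://someserver.example/upload",
                                 files={"file": f})
        return resp.json()  # the JSON string MyWebApp should store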
Option 2: JS magic
The smart way it could be perfect:
User uploads the file directly to SomeServer, from within an iFrame;
MyWebApp gets the response through JavaScript.
But this is(?) impossible due to the 'Same Origin Policy', isn't it?
Option 3: nginx proxying?
The better way for a production server:
User uploads files to MyWebApp;
nginx intercepts the file uploads and sends them directly to SomeServer;
JSON response is also intercepted by nginx and processed by MyWebApp.
Does this make any sense, and what would the nginx config be for, say, a /fileupload location that proxies to SomeServer?
I don't have a server to use to stand in for SomeServer for me to test out my suggestions, but I'll give it a shot anyway. If I'm wrong, then I guess you'll just have to use Flash (sample code from VK).
How about using an iframe to upload the file to SomeServer, receive the JSON response, and then use postMessage to pass the JSON response from the iframe to the main window of your site? As I understand it, that is pretty much the motivation for creating postMessage in the first place.
Overall, I'm thinking of something like this or YUI's io() module but with postMessage added to get around the same origin policy.
Or in VK's case, using their explicit iFrame support. It looks to me like you can add a method to the global VK object and then call that method from the VK origin domain using VK.callMethod(). You can use that workaround to create a function that can read the response from the hidden iFrame.
So you use VK.api('photos.getUploadServer', ...) to get the POST URL.
Then you use JS to insert that URL as the action for your FORM that you use to upload the file. Follow the example under "Uploading Files in an HTML Form" in the io() docs and in the complete function, use postMessage to post the JSON back to your parent window. See example and docs here. (If it doesn't work with io(), you can certainly make it work using the roll-your-own example code if I'm right about VK.callMethod().)
Then in response to the postMessage you can use regular AJAX to upload the JSON response back to your server.
I can see only two major approaches to this problem: server-side proxying and JavaScript/client-side cross-site uploading. Your approaches 1 and 3 are the same thing. It shouldn't really matter whether you POST files by means of cURL or nginx; not performance-wise, anyway. So if you have already implemented approach 1 from your question, I don't see any reason to switch to 3.
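That said, since the question asks for it: a minimal sketch of the nginx side (the upstream host is a placeholder, and the directive values are assumptions to adjust):

    location /fileupload {
        # stream the user's upload straight through to SomeServer
        proxy_pass https://someserver.example;
        client_max_body_size 100m;     # allow large files
        proxy_request_buffering off;   # forward while receiving (nginx >= 1.7.11)
    }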
In regards to javascript and Same Origin Policy, it seems there are many ways to achieve your goal, but in all of these ways, either your scenario must be supported by SomeServer's developers, or you have to have some sort of access to SomeServer. Here's an approximate list of possibilities:
CORS—your domain must be allowed to access SomeServer's domain;
Changing document.domain—this requires that your page and target page are hosted on subdomains of the same domain;
Using a flash uploader (e.g. SWFUpload)—it is still required that your domain is allowed via the cross-domain policy, in case of Flash, via a crossdomain.xml in the root of SomeServer's domain;
xdcomm (e.g. EasyXDM)—requires that you can upload at least an HTML page to the target domain. This page can then be used as a JavaScript proxy for your manipulations with SomeServer's iframe.
The last one could actually be a real possibility for you, since you can upload files to SomeServer. But of course it depends on how it's implemented—for example, if the files are served from another domain, or if there are security measures that won't let you host HTML files, it may not work out.
I am learning about Apache and its various modules, and I am currently confused about mod_expires. What I have read so far is that with this module we can set a far-future expiry header for static files, so that the browser need not request them each time.
I am confused by one thing: if someone changes a CSS/JS or image file in the meantime, how will the browser come to know about it, since we have already told the browser that the file is not going to change for, say, the next year?
Thanks in advance
It may not be possible for all the content your HTTP server provides, but you can simply change the name of a file to push an update to clients: the browser sees a new URL and downloads the new content.
Sometimes, for websites with less traffic, it is far more practical to set the cache lifetime to a much lower value.
An expiration of 365 days should always be used with caution; the fact that you can set an expiration of one year does not mean you always have to. In other words, do not fall prey to premature optimization.
A good example of content worth a one-year expiration is country flags, which are unlikely to change. Also, be aware that with a simple refresh of the page, the browser can discard the local cache and download the content again from the origin.
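For reference, a typical mod_expires setup looks something like this (a sketch; pick lifetimes that match how often your assets actually change):

    <IfModule mod_expires.c>
        ExpiresActive On
        # long-lived static assets; bust the cache by renaming the file
        ExpiresByType image/png "access plus 1 year"
        ExpiresByType text/css "access plus 1 month"
        ExpiresByType application/javascript "access plus 1 month"
    </IfModule>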
A good and easy way of testing all this is to use Firefox with Firebug. With this extension, you can analyze requests and responses.
Here you can find the RFC specifications.
Where can I find the source of the SOAP server in REBOL mentioned here:
http://www.rebolplanet.com/zine/rzine-1-02/#sect6
The link http://www.compkarori.co.nz/reb/discordian.txt doesn't work any more.
It's also now in my GitHub repo:
https://github.com/gchiu/Rebol2/blob/master/Scripts/discordian.r
Whenever you have this sort of question, try pasting the URL into the search box of archive.org (The Internet Archive).
In this case, a copy of the file was snapshotted in 2004:
http://web.archive.org/web/20040205210622/http://www.compkarori.co.nz/reb/discordian.txt
(You might let the operators of the site know the %reb/ directory is missing, since the others in the set are still there.)