Browser Caching Versions in Google Chrome with Tampermonkey - browser-cache

I have a question about caching versions of imported files (via web requests) in Google Chrome:
Let's say I have script.js, whose URL is:
http://www.getscripts.com/script.js (the path after "http://" is arbitrary; Tampermonkey fetches it over plain HTTP)
If I import the script in Tampermonkey using @require, I want to use a version query string in its URL to avoid caching.
Caching versions:
Let's say I first @require version 1 of the script (I created it and added the initial content) by giving @require the URL http://www.getscripts.com/script.js?v=1, i.e. the version is passed as the query string v=1, and assume that this version was not already cached.
I then make some changes to the code of script.js, and the file served at that URL is updated as well (I host it with surge.sh).
Next, I change my @require URL to http://www.getscripts.com/script.js?v=2, so the query string now carries the version v=2.
Then I make some more changes to the code, make sure the URL serves the updated file, and give @require my initial URL with v=1 again: http://www.getscripts.com/script.js?v=1
Question:
The script file that will be returned (via the HTTP request) - will it be version 1 or version 2?
What I'm trying to do is force a download of the new version of my script file after I update its code, since Tampermonkey caches script files and does not re-download them unless something changes in the URL of the @require (which is what triggers the HTTP request).

This was solved by forcing the browser to download a new version of the script: adding (and bumping) a version parameter in the script's URL, as suggested by wOxxOm above.
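For reference, a minimal sketch of what the cache-busting looks like in the userscript header (the script URL is the one from the question; the @version and @match values are made-up placeholders):

    // ==UserScript==
    // @name      My userscript
    // @version   0.2
    // @match     *://example.com/*
    // @require   http://www.getscripts.com/script.js?v=2
    // @grant     none
    // ==/UserScript==

    // Bumping ?v=1 to ?v=2 changes the @require URL, so Tampermonkey treats it as a
    // new resource and downloads it again instead of reusing the cached copy.

Each distinct URL (including its query string) is cached separately, so switching back to ?v=1 would generally serve whatever was cached for that exact URL; bumping the number forward is the reliable way to force a fresh download.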

Related

File upload request failed in Jmeter even after following correct steps

I have followed the steps below to upload a file in JMeter, but it didn't work. It throws "Sorry, an error occurred while trying to execute your request. Please try again". I have attached screenshots for more details.
Enabled Use multipart/form-data
Copied the file to be uploaded into the /bin directory
Tried with Use multipart/form-data both checked and unchecked, but no luck
In my HTTP request I pass action_id=1203 as a query parameter, and in Form Parameters I pass the other parameters such as msgId, fieldId, etc. But as you can see from the screenshot, when I execute the request it puts all of my form parameters into the single "msgId" key, and I don't know why.
These are the headers I pass
My request with query and form parameters
The File Upload tab of the HTTP request
After execution, the request fails with this output; here it puts all form params into the single "msgId" key
F12 network request of the web page's form parameters (checked manually on the web it works fine; the problem is in my JMeter request)
Just record the file upload using JMeter's HTTP(S) Test Script Recorder and it will generate the relevant HTTP Request sampler and HTTP Header Manager configuration, which can later be correlated/parameterized.
The only thing you need to do is copy the file you're uploading into the "bin" folder of your JMeter installation before recording; the file path can be changed to whatever you want afterwards.
Also, according to JMeter Best Practices you should always be using the latest version of JMeter, so consider upgrading to JMeter 5.5 (or whatever the latest stable version on the JMeter Downloads page is) as soon as possible.
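For comparison, and purely as a sketch of what the working browser request (the one visible in the F12 network tab) boils down to, each form parameter and the file should travel as its own multipart part rather than being packed into a single "msgId" value. The endpoint URL and values below are placeholders; the field names come from the question:

    // Rebuild the working browser upload with fetch + FormData (hypothetical endpoint).
    const fileInput = document.querySelector('input[type="file"]');

    const form = new FormData();
    form.append('msgId', '123');                 // each form parameter is a separate part
    form.append('fieldId', '456');
    form.append('file', fileInput.files[0]);     // the file itself is another part

    // action_id stays in the query string, as in the question
    fetch('https://example.com/upload?action_id=1203', {
      method: 'POST',
      body: form   // the browser adds the multipart/form-data boundary automatically
    });

If the recorded JMeter sampler produces the same shape of request (separate parts, not one concatenated string under msgId), the server should accept it.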

Download a file with a parameter in URL

I noticed that downloading a file from an Apache server with:
example.com/myfile.zip?parameter=2365
gives exactly the same result / same downloaded file / same downloaded filename on client browser as downloading:
example.com/myfile.zip
Is it a supported and documented feature of Apache?
I'm happy with it, and it will be useful for tracking purposes (I can ship download links with a parameter like ?source=email and then see the parameter in the Apache logs), but I want to be sure it will work in all browsers (Chrome, Firefox, IE, Safari, etc.).
Note: are we sure that in most browsers the downloaded file will be named myfile.zip and not myfile.zip<somechar>parameter=2365? I tried wget example.com/myfile.zip?parameter=2365 and unfortunately the filename on disk is myfile.zip?parameter=2365, so it doesn't work well with wget.
It's a query string; if nothing is configured to analyse the parameters, the server will just return the file.
As you've noticed, the Apache logs will include this information if they are configured to. It's perfectly safe to use that syntax; it is defined as part of the URI specification.
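As a small illustration of why the query string changes neither the file Apache serves nor the name browsers suggest (assuming no Content-Disposition header is set), the path and the query are separate components of the URL:

    // Run in a browser console or Node: the query string is not part of the path.
    const url = new URL('https://example.com/myfile.zip?parameter=2365');
    console.log(url.pathname);  // "/myfile.zip"      -> what Apache maps to a file and
                                //                       what browsers base the filename on
    console.log(url.search);    // "?parameter=2365"  -> only shows up in the access logs

    // wget builds the local filename from everything after the last "/", query string
    // included, so pin it explicitly:
    //   wget -O myfile.zip "https://example.com/myfile.zip?parameter=2365"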

Issues with intern-runner and proxyUrl that contains subfolders

I need to set up Intern to test Ajax calls against a different server. I set everything up roughly following the official wiki at this address:
https://github.com/theintern/intern/wiki/Using-Intern-to-unit-test-Ajax-calls
My config file has proxyUrl set to http://localhost:8080/sub,
and http://localhost:8080/sub is set up as a reverse proxy to intern-runner at http://localhost:9000.
When I run ./node_modules/.bin/intern-runner -config=tests/config from the tests root folder, the browser opens up and is able to request several files, until it tries to request the config file. That's when it receives a 404, because it requests the wrong address - http://localhost:8080/tests/config.js - without the sub folder.
I'm wondering if I'm missing something inside the config file, or if intern is not able to use proxies with subfolders. I tried to set the baseUrl parameter, but it had no effect.
Any ideas?
Update:
It seems that sometimes intern-runner uses the path provided in the config parameter, and sometimes it uses the one in the proxyUrl parameter inside the config file. As a workaround, I placed the config file and the tests in two folders (actually I made a symbolic link), the first at tests/ and the second at sub/tests/, and ran it using ./node_modules/.bin/intern-runner -config=sub/tests/config.
It works, but it's kind of stupid and I really wish there were a better way to do it.
This is indeed a limitation/bug of intern. It assumes that the proxy sits at the root of the absolute domain name, i.e. that it has a pathname of /.
An issue has been created on intern's github repository here and the corresponding pull request that fixes the problem is here. Hopefully this gets merged into the upcoming 2.1 release of intern.
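For context, a sketch of the Intern 2.x config described in the question (the suite name is a placeholder; proxyPort is where intern-runner listens and proxyUrl is the address the browser reaches through the reverse proxy):

    // tests/config.js (sketch of the setup from the question)
    define({
      // intern-runner's own HTTP server
      proxyPort: 9000,

      // what the browser should use, i.e. the reverse-proxied address with the subfolder
      proxyUrl: 'http://localhost:8080/sub/',

      // placeholder suite
      suites: ['tests/unit/all']
    });

With the fix referenced above merged, Intern should honour the /sub path in proxyUrl instead of assuming the proxy sits at /; until then, the symlink workaround (sub/tests/config) described in the update is the practical option.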

Server side: detecting if a user is downloading (save as...) or visualizing a file in the browser

I'm writing an Apache2 module.
By default, when viewed in a web browser, the module would only print the first lines of a large file and convert them to HTML.
If the user chooses 'Save as...', the whole raw file would be downloaded.
Is it possible to detect this choice on the server side? (For example, is there a specific HTTP header set?)
Note: I would like to avoid any parameter in the GET URL (e.g. "http://example.org/file?mode=raw").
Pierre
I added my own answer to close the question: as #alexeyten said, there is no difference. I ended up with JavaScript code that alters the index.html file generated by Apache.
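A rough sketch of that kind of client-side workaround, assuming the raw file lives at a placeholder URL and the page has a <pre id="preview"> element plus a plain link for the full download (since the server cannot tell the two cases apart, the page decides what to show):

    // Fetch only the first chunk of the file (Apache honours Range requests) and
    // show it as a preview; the full raw file stays one click away via a normal link.
    const FILE_URL = '/data/large-file.txt';   // placeholder path

    fetch(FILE_URL, { headers: { Range: 'bytes=0-4095' } })
      .then(response => response.text())
      .then(text => {
        const firstLines = text.split('\n').slice(0, 20).join('\n');
        document.querySelector('#preview').textContent = firstLines;
      });

    // In the HTML:
    //   <pre id="preview"></pre>
    //   <a href="/data/large-file.txt" download>Download the raw file</a>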

Selenium IDE Base URL and Open commands

What is the use of the Base URL in Selenium IDE? Even when I enter a wrong URL there, or leave it blank, and run the script, it just runs fine.
I have http://test.info:50/test as the base URL, and in the open command I use the /test part of the URL, so the URL to be opened should be http://test.info:50/test/test (which is not the actual URL), yet Selenium keeps running the script against the base URL above and shows no error.
So my question here is: what is the use of the Base URL when it can be left blank or empty? And what is the use of the open command when I have put the full URL in the Base URL field?
Hope the question is clear. Please help.
The base URL should be the index of your site. NOT the directory under test.
For example,
BaseURL: http://google.com/
Open: /search
This will open http://google.com/search as the starting URL. From there, you continue testing.
In your case, specify
BaseURL: http://test.info:50/
Open: /test
And you'll be golden.
EDIT:
and selenium keeps running the script on the Base URL above and shows no error.
Selenium IDE will show no errors because it doesn't care where your test is run; it's not limited to that (nor should it be). It will only report an error when something is wrong in your script, which has nothing to do with opening URLs. You could open something like /somebullcrapdirectory and it would still be fine; it would only fail when performing any subsequent actions, since /somebullcrapdirectory would be an invalid directory.
I hope this helps, Abhi.
When the user leaves the base URL blank, the following messages appear:
[warn] Base URL is not set. Updating base URL from current window.
[error] baseUrl wasn't absolute:
We can simply use the open command and leave the target section blank.
This will use the base URL and open it.
The URL specified in the Base URL input field of the Selenium IDE and the URL specified in the target parameter of the open command are not simply concatenated. Opening /test/ will just be seen as an absolute path on your domain.
You can, however, specify a target of ./test/, which, as I experienced, requests http://test.info:50/test/test/. I find this very useful, as in some environments my web application resides at the root and in other environments under a base path like /myapp/.
With respect to a blank Base URL, I assume this worked because the browser page in which the test was run already showed a page from the correct domain. If you had executed the open command on a page from any other site, the open command would have requested /test/ from that site, and from then on the base URL would have pointed to that site.
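A quick way to see how these cases combine under standard relative URL resolution (Selenium IDE versions may differ in details, but this matches the behaviour described above; note the trailing slash on the base URL for the relative case):

    // Run in a browser console or Node.
    console.log(new URL('/search', 'http://google.com/').href);
    // "http://google.com/search"        (absolute path: replaces everything after the host)

    console.log(new URL('/test/', 'http://test.info:50/test/').href);
    // "http://test.info:50/test/"       (still an absolute path: the base path is ignored)

    console.log(new URL('./test/', 'http://test.info:50/test/').href);
    // "http://test.info:50/test/test/"  (relative target: resolved against the base path)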