I've tried both
wget --user=myuser --password=mypassword myfile
and
wget --ftp-user=myuser --ftp-password=mypassword myfile
but I keep getting the error
HTTP request sent, awaiting response... 401 Authorization Required
Authorization failed.
I know the file is there, and I know the username/password are correct - I can ftp in with no problem. Any thoughts on what's going on here? How do I even tell if wget is paying attention to the username/password that I'm giving it? (The error is the same if I simply don't provide that info.)
try wget --http-user=username --http-password=password http://....
Are you using an "ftp://" URL? From the error message it appears that you're making a request for an "http://" URL.
One more comment:
Setting --user and --password sets the user/pw for both ftp and http requests, so that's more general.
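For example, a single invocation like this should then cover the FTP case (the host and path here are just placeholders):
wget --user=myuser --password=mypassword ftp://example.com/path/myfile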
In my case, nothing worked except using --ask-password.
I was using an https URL.
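For example, something along these lines prompts for the password interactively instead of taking it on the command line (the URL is a placeholder):
wget --user=myuser --ask-password https://example.com/myfile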
It might be useful to add that if you need to prefix a domain name, the backslash must be escaped, i.e. "\" is preceded by another "\", e.g. "domain\\username". The same presumably applies if the password contains any characters that need escaping (I haven't tested that).
wget --http-user=domain\\\username --http-password=password http://...
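Alternatively, single quotes stop the shell from interpreting the backslash, so a sketch like this should work too (untested, names are placeholders):
wget --http-user='domain\username' --http-password='password' http://...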
I try to fetch some data from a Microsoft Dynamics Nav WebService.
This service uses NTLM authentication.
If I open the webservice url in a browser and use the given credentials everything works fine.
While setting up the environment for the web service client, I used the command line to check that everything was working, and at a specific point I was unable to authenticate.
This is the command I am using:
curl --ntlm -u "DOMAIN\USERNAME" -k -v "http://hostname:port/instance/Odata/Company('CompanyName')/Customer"
The command will prompt for the password.
I paste in the password and everything works fine.
But when I use this command, with the password already included, it stops working and the authentication fails:
curl --ntlm -u "DOMAIN\USERNAME:PASSWORD" -k -v "http://hostname:port/instance/Odata/Company('CompanyName')/Customer"
The password contains some special characters, so I tried percent-encoding it, which had no effect at all.
It is very difficult to research this kind of issue. Searching for curl + ntlm authentication issues provides a lot of results, but nothing is related to this specific kind of issue.
Has anyone had experience with this kind of issue?
I had a problem with authentication because of cookies. I solved it by storing the cookies in a text file and using exactly that file for all subsequent requests. For example, after the login request I saved the cookies like this:
curl -X POST -u username:password https://mysite/login -c cookies.txt
And with the next request I used that file like this:
curl -X POST -u username:password https://mysite/link -b cookies.txt
This solution worked for me. I don't know if your problem is similar, but it may be worth a try.
I was struggling with a similar issue for a long time and finally found this curl bug report: #1253 NTLM authentication fails when password contains special characters (British pound symbol £).
NTLM authentication in cURL supports only ASCII characters in passwords! This is still the case in version 7.50.1 on Ubuntu; I tested this on many different distributions and it is always the same. This bug will also break curl_init() in PHP (tested on PHP 7). The only way around it is to avoid non-ASCII characters in NTLM authentication passwords.
If you are using Python, then you are lucky. Apparently the HttpNtlmAuth package uses its own NTLM implementation rather than cURL's, and it works with non-ASCII characters.
Try with the ntlm flag.
Something like this:
curl -v --proxy-ntlm --proxy-user 'username:password' -x yourproxy.com:8080 someURL
From curl --help:
-x, --proxy [PROTOCOL://]HOST[:PORT] Use proxy on given port
--proxy-anyauth Pick "any" proxy authentication method (H)
--proxy-basic Use Basic authentication on the proxy (H)
--proxy-digest Use Digest authentication on the proxy (H)
--proxy-negotiate Use Negotiate authentication on the proxy (H)
--proxy-ntlm Use NTLM authentication on the proxy (H)
I'm having trouble using wget for my Debian 7.0 VPS server hosted by OVH.
I'm trying to download a ZIP file from MediaFire, and when I connected via SSH I typed,
wget http://download1472.mediafire.com/5ndlsskkyfmg/dgx7zbbdbxawbwd/Vhalar-GGJ16.zip
Then, this is my output,
--2016-03-07 20:17:52-- http://download1472.mediafire.com/5ndlsskkyfmg/dgx7zbbdbxawbwd/Vhalar-GGJ16.zip
Resolving download1472.mediafire.com (download1472.mediafire.com)... 205.196.123.160
Connecting to download1472.mediafire.com (download1472.mediafire.com)|205.196.123.160|:80... connected.
HTTP request sent, awaiting response... 302 Found
Location: http://www.mediafire.com/?dgx7zbbdbxawbwd [following]
--2016-03-07 20:17:52-- http://www.mediafire.com/?dgx7zbbdbxawbwd
Resolving www.mediafire.com (www.mediafire.com)... 205.196.120.6, 205.196.120.8
Connecting to www.mediafire.com (www.mediafire.com)|205.196.120.6|:80... connected.
HTTP request sent, awaiting response... 301
Location: /download/dgx7zbbdbxawbwd/Vhalar-GGJ16.zip [following]
--2016-03-07 20:17:52-- http://www.mediafire.com/download/dgx7zbbdbxawbwd/Vhalar-GGJ16.zip
Connecting to www.mediafire.com (www.mediafire.com)|205.196.120.6|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]
Saving to: `Vhalar-GGJ16.zip'
[ <=> ] 94,265 440K/s in 0.2s
2016-03-07 20:17:52 (440 KB/s) - `Vhalar-GGJ16.zip' saved [94265]
The download took less than 1 second, and it's a 280MB zip file. Also, it seems to say "440 KB/s", and that math just doesn't add up.
I'm confused as to why I can't download this zip file to my server via SSH, instead of downloading it to my computer, then re-uploading it to the server.
Does anyone see a flaw I'm making in my command?
What you're actually downloading with wget there is just the HTML page that the zip file sits on. You can see this if you redo the command to output to an HTML file, like so:
wget http://download1472.mediafire.com/5ndlsskkyfmg/dgx7zbbdbxawbwd/Vhalar-GGJ16.html
and open it in the web browser of your choice, you'll get the fancy html page of that link with the mediafire download button on it.
This is entirely because mediafire wants you to verify that you're human with a captcha before you can download it. Try doing the captcha and then issuing the command:
wget http://download1472.mediafire.com/gxnd316uacsg/dgx7zbbdbxawbwd/Vhalar-GGJ16.zip
It will work.
If you have not completed the captcha on whatever computer you're trying to download it from, you need to. Once you finish it and click "Authorize Download" you'll have free rein to wget the file from the server.
If all else fails, download it originally on your computer and use the scp command to transfer it over.
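For example, something along these lines (user, host and destination path are placeholders):
scp Vhalar-GGJ16.zip user@your-vps:/path/to/destination/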
Look at the contents of the 94 kB file that you downloaded in something like vi. Odds are it's not a zip file but an HTML file, telling you what went wrong and what you need to do to download the file.
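For instance, either of these will show what the file really is (filename taken from the output above):
file Vhalar-GGJ16.zip
head -n 5 Vhalar-GGJ16.zip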
A browser would have known this (the mime type would tell it that it is being served HTML, and it would display it to you rather than download it).
It is likely that this is a measure by Mediafire to prevent automated downloads of their files. It's possible that spoofing the user-agent header might help, but unlikely.
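If you do want to test that, wget lets you override the header; for example (the user-agent string here is just an illustrative value):
wget --user-agent="Mozilla/5.0 (X11; Linux x86_64)" http://download1472.mediafire.com/5ndlsskkyfmg/dgx7zbbdbxawbwd/Vhalar-GGJ16.zip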
Just one tip for anyone using wget to download a file where the URL has a query string at the end, after the file name (e.g. ?This_is_a_query_string_sample_&_123545_&), i.e. the URL is of the form:
http://download1472.mediafire.com/5ndlsskkyfmg/dgx7zbbdbxawbwd/Vhalar-GGJ16.html?This_is_a_query_string_sample_&_123545_&
In this case, always put double quotes around the URL when using wget (since & has a special meaning in shell environments):
wget "http://download1472.mediafire.com/5ndlsskkyfmg/dgx7zbbdbxawbwd/Vhalar-GGJ16.html?This_is_a_query_string_sample_&_123545_&"
You can also download the zip file under a new name with the wget option -O.
wget -O new_name_for_the_file.zip <url-address.zip>
This is what I tried:
curl http://git.ep.petrobras.com.br/api/v3/session --data-urlencode 'login=myUser&password=myPass'
Answer:
{"message":"401 Unauthorized"}
The problem is the --data-urlencode cURL option. Since it's an HTTP POST you don't need to URL-encode the data; the option is actually encoding the & into %26 and causing your issue. Instead use the --data option.
curl http://git.ep.petrobras.com.br/api/v3/session --data 'login=myUser&password=myPass'
Also, be careful sending credentials over plain HTTP. It could be easily sniffed.
This is how:
$ curl http://git.ep.petrobras.com.br/api/v3/session/ --data-urlencode 'login=myUser' --data-urlencode 'password=myPass'
The solution pointed out by Steven doesn't work if your username or password contains characters that have to be URL-encoded. The name=content format URL-encodes only the content part (the name part is expected to be URL-encoded already, but login and password are fine as they are).
To actually retrieve the private_token you can pipe the output of curl into jq like this:
$ curl [as above] | jq --raw-output .private_token
x_the_private_token_value_x
This way you can easily use it in a shell script.
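For example, a script might capture it in a variable (a sketch, assuming the same endpoint and credentials as above):
TOKEN=$(curl --silent http://git.ep.petrobras.com.br/api/v3/session --data-urlencode 'login=myUser' --data-urlencode 'password=myPass' | jq --raw-output .private_token)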
Also, as Steven pointed out already, please use https instead so that your password is not transmitted in clear text across the wire.
Note: this workflow no longer works as of GitLab 8.6.0 as the default password has been removed.
Changelog: https://gitlab.com/gitlab-org/gitlab-ce/blob/master/CHANGELOG#L205
I only just noticed this and raised the issue. Leaving this note here to hopefully save someone else some time. Hopefully, this is a decision that will be reviewed and reverted.
Discussion/issue: https://gitlab.com/gitlab-org/gitlab-ce/issues/1980
I am trying to back up one of my sites that is password protected using wget. I can't seem to format the command correctly because I keep getting 401 errors:
wget http://dev.example.com/"Login?mode=login
> &user-username=TYPEUSERNAMEHERE&user-password=TYPEPASSWORDHERE"
Can anyone tell me what I am doing wrong here? What is the correct way to download an entire directory that is password protected using wget? Thanks!
Wait I got it:
wget -r --http-user=USERNAME --http-password='PASSWORD' http://dev.example.com/
If the password protection is by .htaccess you have to form your URL like
http://USERNAME:PASSWORD@domain
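For example, something like this (credentials and host are placeholders, untested):
wget -r "http://USERNAME:PASSWORD@dev.example.com/"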
I know this is simple... I am just missing something... I give up!!
#!/bin/sh
export http_proxy='http://unblocksitesnow.info'
rm -f index.html*
strace -Ff -o /tmp/mm.log -s 200 wget 'http://slashdot.org'
I have used different proxy servers, to no avail; I just get some default page.
In /etc/wgetrc I have use_proxy = on.
Actually, I am trying to use this setting (http_proxy) with Python's urllib2. It accesses some default page as well.
strace shows a DNS lookup of the proxy server, and then this request:
GET http://slashdot.org/ HTTP/1.0\r\nUser-Agent: Wget/1.11.4\r\nAccept: */*\r\nHost: slashdot.org\r\n\r\n
Any pointers??
For some apps, HTTP_PROXY is case sensitive. It's best to set it in upper case.
# export HTTP_PROXY=http://server/
or
# export HTTP_PROXY=http://server:8888/
The problem was that I was using web proxy sites. These sites expect you to send a GET request to the proxy site itself (with the target site as a parameter in the URL, or whatever site-specific mechanism they implement).
What I needed instead were real proxy servers, like the ones listed at http://www.proxy4free.com/page1.html - you connect to their respective ports and send the GET request for the original target site.
Often you need a port with the proxy-server, for example:
export http_proxy=http://unblocksitesnow.info:30000
Also, the single quotes are not needed.
On Debian/Ubuntu if you need apt-get via the proxy you will also need to update
/etc/apt/apt.conf
If the file doesn't exist, create it and run apt-get update to confirm.
You will also need to export http_proxy="<ADD>:<PORT>"
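For what it's worth, the proxy entry in /etc/apt/apt.conf usually looks something like this (host and port are placeholders):
Acquire::http::Proxy "http://proxyserver:8888/";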