I am trying to install RVM on Mac OS X 10.5. When I do, I get the following errors.
mitch:~ mitch$ bash < <( curl http://rvm.beginrescueend.com/releases/rvm-install-head )
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 185 100 185 0 0 347 0 --:--:-- --:--:-- --:--:-- 0
bash: line 1: html: No such file or directory
bash: line 2: syntax error near unexpected token `<'
bash: line 2: `<head><title>301 Moved Permanently</title></head>
I also tried to install using this:
bash < <(curl -s https://rvm.beginrescueend.com/install/rvm)
Which does not produce any errors but also does not install or download anything.
Any ideas on how I can get RVM to install?
Thank you in advance.
Try this; it works for me:
bash < <( curl https://rvm.io/releases/rvm-install-head )
or use the -L option to tell curl to follow the 301 redirect:
bash < <( curl -L http://rvm.io/releases/rvm-install-head )
I used this command for the RVM installation and it works fine:
bash -s stable < <(curl -s https://raw.github.com/wayneeseguin/rvm/master/binscripts/rvm-installer)
It looks like you have the same problem I had, and it was actually to do with curl.
You need to enable SSL support in curl. I found the solution, using MacPorts, in this post:
http://naleid.com/blog/2009/03/16/enabling-https-support-in-curl-installed-through-macports-on-osx/
sudo port -f upgrade curl +ssl
Note the +ssl option which adds this support.
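As a quick sanity check (my addition, not from the original post), you can confirm whether your curl build has SSL support before and after the upgrade:
curl --version | grep -iE 'https|ssl'   # should match if SSL support is compiled in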
Do ls ~/.rvm and see if the directory has been created. If it has, delete it using rm -rf ~/.rvm. That will clean out any partially installed RVMs.
Then do bash < <(curl -s https://rvm.io/install/rvm). It should succeed, and will present an introduction screen when it does.
Follow the directions in the intro text, and append RVM's initialization command to the end of your ~/.bashrc file. Be sure to read the directions about its placement.
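The initialization line is typically something like the following, though you should copy the exact text from the installer's own output:
[[ -s "$HOME/.rvm/scripts/rvm" ]] && source "$HOME/.rvm/scripts/rvm"   # load RVM into the shell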
Type rvm notes and read what it says for the Mac OS prerequisites. You'll need the latest Xcode from Apple, which is free, but it's a big download.
At that point you should be able to use RVM to install some Rubies into its sandbox.
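For example, a typical first session might look like this (the version number is purely illustrative):
rvm install 1.9.3          # download and compile a Ruby
rvm use 1.9.3 --default    # make it the default for new shells
ruby -v                    # confirm which Ruby is now active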
I'm trying to install nvm using curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.1/install.sh | bash on WSL2, but I'm getting different errors. Initially, the curl command would return the following:
> $ curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.1/install.sh | bash
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- 0:00:09 --:--:-- 0curl: (6) Could not resolve host: raw.githubusercontent.com
After running netsh int ip reset in Windows, which was suggested in another question, the same command started timing out instead:
> $ curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.1/install.sh | bash
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- 0:04:59 --:--:-- 0
curl: (28) Connection timed out after 300000 milliseconds
I've also tried manually saving the install.sh to my machine and running it locally (after setting its permissions with chmod +x install.sh), but that returns a similar error:
> $ ./install.sh
=> Downloading nvm from git to '/home/mparra/.nvm'
=> Cloning into '/home/mparra/.nvm'...
fatal: unable to access 'https://github.com/nvm-sh/nvm.git/': Failed to connect to github.com port 443: Connection timed out
Failed to clone nvm repo. Please report this!
I can successfully ping github.com. ping -c 100 github.com returns the following:
--- github.com ping statistics ---
100 packets transmitted, 100 received, 0% packet loss, time 99156ms
rtt min/avg/max/mdev = 15.280/20.739/85.205/9.141 ms
This issue suggests that a Windows update resolved the problem, but that's not an option for me since it's a work machine and I can't update beyond build 18363.2039. I've also checked that my VPN is not enabled, and I set my DNS to 8.8.8.8 and 8.8.4.4, which had no effect.
Please try the following in your WSL:
sudo rm /etc/resolv.conf
sudo bash -c 'echo "nameserver 8.8.8.8" > /etc/resolv.conf'
sudo bash -c 'echo "[network]" > /etc/wsl.conf'
sudo bash -c 'echo "generateResolvConf = false" >> /etc/wsl.conf'
sudo chattr +i /etc/resolv.conf
After doing this, I could install with curl.
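If you want to verify the DNS change took effect before re-running the installer, a quick check (my addition) is:
cat /etc/resolv.conf                    # should now contain nameserver 8.8.8.8
ping -c 1 raw.githubusercontent.com     # should resolve to an address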
I have a feeling you are probably correct about this being the same issue mentioned on Github that was resolved in a Windows update.
If that's truly the case, you are probably going to continue to run into issues even after getting nvm installed. For instance, nvm probably will have trouble downloading Node releases.
The easiest solution that I can propose, if it works for you, is to simply convert to WSL1 instead of WSL2. WSL1 will handle most (but not all) Node use-cases just as well as WSL2. And WSL1 handles networking very differently than WSL2. If the Windows networking stack is working fine for you, then WSL1's should as well.
As noted in that Github issue, this seemed to be a problem that occurred only in Hyper-V instances. WSL2 runs in Hyper-V, but WSL1 does not.
If you go this route, you can either:
create a copy of your existing WSL2 distribution and convert that copy to WSL1. From PowerShell:
wsl --shutdown
wsl -l -v # Confirm <distroname>
wsl --export <distroname> path\to\backup.tar
mkdir .\path\for\new\instance
wsl --import WSL1 .\path\for\new\instance path\to\backup.tar --version 1 # WSL1 can be whatever name you choose
wsl -d WSL1
Note that you'll be root, by default. To change the default user, follow this answer.
Or, just convert the WSL2 instance to WSL1:
wsl --shutdown
wsl -l -v # Confirm <distroname>
wsl --export <distroname> path\to\backup.tar # Just in case
wsl --set-version <distroname> 1
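Either way, you can confirm the conversion afterwards:
wsl -l -v   # the VERSION column for the distro should now read 1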
If WSL1 doesn't work for you (at least in the short term until your company pushes that update), then there may be another option similar to the one mentioned in this comment on that GitHub issue. Let me know if you need to go that route, and I'll see if I can simplify that a bit.
I'm having a difficult time getting PhantomJS installed on my server. I haven't found very good directions anywhere, and the best ones I've found give me errors when I try to complete them. As of now I'm following these steps and getting these errors.
Successfully used putty to login as root and run the following commands
Line 1: yum install fontconfig freetype freetype-devel fontconfig-devel libstdc++
No errors
Line 2: wget https://bitbucket.org/ariya/phantomjs/downloads/phantomjs-1.9.8-linux-x86_64.tar.bz2
No errors
Line 3: mkdir -p /opt/phantomjs
No errors
Line 4: tar -xjvf ~/phantomjs-1.9.8-linux-x86_64.tar.bz2 --strip-components 1 /opt/phantomjs/
Error: opt/phantomjs: Not found in archive
For this error (line 4) I FTPed into my server and didn't see any directory at /opt/phantomjs. I created one but am still getting the same "Not found in archive" error.
After this the only other lines of code, from what I've found, should be:
Line 5: ln -s /opt/phantomjs/bin/phantomjs /usr/bin/phantomjs
Line 6: phantomjs /opt/phantomjs/examples/hello.js
If anyone has any insight I'd greatly appreciate it!
Well, after a lot of trial and error, it seems to be working (so far). The problem was the syntax of line 4. The following solved the issue, and lines 5 and 6 worked fine.
UPDATED LINE 4: tar -xjvf ~/phantomjs-1.9.8-linux-x86_64.tar.bz2 --strip-components=1 -C /opt/phantomjs/
Hopefully this helps someone else having the same issue.
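As a quick check that the binary is unpacked and linked correctly (not part of the original steps), something like this should work:
phantomjs --version                            # should print 1.9.8
phantomjs /opt/phantomjs/examples/hello.js     # should print "Hello, world!"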
Anyone know of a good tutorial on using it for Highcharts in PHP?
I used this command to place the binary in /usr/local/bin:
curl -Ls https://github.com/Medium/phantomjs/releases/download/v1.9.19/phantomjs-1.9.8-linux-x86_64.tar.bz2 | tar jxvf - --strip-components=2 -C /usr/local/bin/ ./phantomjs-1.9.8-linux-x86_64/bin/phantomjs
Using Android and Gradle, how can I save the console output of gradlew tasks to a file? For example, when running 'gradlew connectedCheck -i', how do I save the run times and any failures to a file?
In bash/command line run:
./gradlew connectedCheck -i 2>&1 | tee file.txt
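One caveat with the pipe (my addition): the command's overall exit status becomes tee's, not gradlew's. If a script needs Gradle's exit code, bash's PIPESTATUS array preserves it:
./gradlew connectedCheck -i 2>&1 | tee file.txt
exit ${PIPESTATUS[0]}   # exit with gradlew's status rather than tee's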
In PowerShell on Windows, where tee is typically not available, you can write the output to a file with the normal redirection operator (it looks similar to the Bash version, though note it only writes the file and does not also echo to the console):
./gradlew connectedCheck -i 2>&1 > file.txt
As far as I know this works all the way back to PowerShell 2.0; I only know because we still use it at work on some of our older servers. I can't find docs for anything older than v3.0, for which the documentation is here:
about_Redirection | Microsoft Docs
I've just installed RVM on a new machine and when switching into a directory containing a .rvmrc file (which I've accepted) I'm getting:
ERROR: Neither sha256sum nor shasum found in the PATH
I'm on OS X 10.5.8, so I'm probably missing something somewhere. Any ideas what's going on and how to fix this?
My OpenSSL happened to not have a sha256 enc function for some reason:
$ openssl sha256
openssl:Error: 'sha256' is an invalid command.
After some googling, I found that there is an equivalent called gsha256sum that comes with the Homebrew "coreutils" formula. After installing that (brew install coreutils), I had a gsha256sum binary in /usr/local/bin, so it was just a matter of symlinking it:
$ sudo ln -s /usr/local/bin/gsha256sum /usr/local/bin/sha256sum
That fixed it for me.
ciastek's answer worked for me until I tried to run rvm within a $() in a bash script - rvm couldn't see the sha256sum function. So I created a file called sha256sum with the following contents:
openssl sha256 "$#" | awk '{print $2}'
put it in ~/bin, made it executable, and added that folder to my path (and removed the function from my .bashrc).
(Many thanks to my coworker Rob for helping me find that fix.)
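Spelled out, the steps above look roughly like this (a sketch, assuming ~/bin is not already on your PATH):
mkdir -p ~/bin
cat > ~/bin/sha256sum <<'EOF'
#!/bin/sh
# wrap openssl so it prints just the hash, like sha256sum does
openssl sha256 "$@" | awk '{print $2}'
EOF
chmod +x ~/bin/sha256sum
echo 'export PATH="$HOME/bin:$PATH"' >> ~/.bashrc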
That means you're missing the binary in /usr/bin, or your path is somehow missing /usr/bin. Open a new shell and run echo $PATH | grep '/usr/bin' and see if it's returned. Also, run ls -alh /usr/bin/shasum and make sure the binary is there and executable. There is no sha256sum on OS X, just shasum.
On macOS Sierra, run
$ shasum -a 256 filename
Based on vikas027's comment, just add the following to your ~/.zshrc:
alias sha256sum='shasum -a 256'
In my opinion Leopard just doesn't have /usr/bin/shasum.
Take a look at the shasum manpage; that manpage only exists for Snow Leopard. Other manpages, like the ls manpage (can't link to it, not enough reputation), exist for previous versions of Mac OS X.
Workaround: Use OpenSSL to calculate sha256 checksums.
Leopard's OpenSSL (0.9.7) doesn't handle sha256, so upgrade OpenSSL. I've used MacPorts (can't link to it, not enough reputation). OpenSSL's dependency zlib 1.2.5 required upgrading Xcode to 3.1; the question "Can I get Xcode for Leopard still?" is helpful.
Alias sha256sum to OpenSSL and correct the way it formats its output. I've put this in my .bash_profile:
function sha256sum() { openssl sha256 "$@" | awk '{print $2}'; }
I'm on a relatively fresh install of Lion (OS X 10.7.4). In my /usr/bin/ folder I had these files:
-rw-rw-rw- 35 root wheel 807B /usr/bin/shasum
-rwxr-xr-x 1 root wheel 7.5K /usr/bin/shasum5.10
-rwxr-xr-x 1 root wheel 7.5K /usr/bin/shasum5.12
I had a shasum, it just wasn't marked as executable. A quick sudo chmod a+x /usr/bin/shasum solved the issue for me.
For Mac OS X 10.9.5, with /usr/bin on your profile's PATH:
date +%s | shasum | base64 | head -c 32 ; echo
And if you found yourself here in 2022 wondering what works on the latest Mac (macOS Big Sur), do the following.
brew install coreutils   # note: recent Homebrew refuses to run under sudo
sudo ln -s /usr/bin/shasum<Version_for_your_installation> /usr/local/bin/sha256sum
I want to download a lot of URLs in a script, but I do not want to save the ones that lead to HTTP errors.
As far as I can tell from the man pages, neither curl nor wget provides such functionality.
Does anyone know about another downloader that does?
I think the -f option to curl does what you want:
-f, --fail
(HTTP) Fail silently (no output at all) on server errors. This is mostly done to better
enable scripts etc to better deal with failed attempts. In normal cases when an HTTP
server fails to deliver a document, it returns an HTML document stating so (which often
also describes why and more). This flag will prevent curl from outputting that and
return error 22. [...]
However, if the response was actually a 301 or 302 redirect, that still gets saved, even if its destination would result in an error:
$ curl -fO http://google.com/aoeu
$ cat aoeu
<HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
<TITLE>301 Moved</TITLE></HEAD><BODY>
<H1>301 Moved</H1>
The document has moved
here.
</BODY></HTML>
To follow the redirect to its dead end, also give the -L option:
-L, --location
(HTTP/HTTPS) If the server reports that the requested page has moved to a different
location (indicated with a Location: header and a 3XX response code), this option will
make curl redo the request on the new place. [...]
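Putting the two together, a one-liner that follows redirects and saves nothing on an HTTP error (the URL is a placeholder):
curl -fLO http://example.com/some/file.txt || echo "download failed; nothing saved"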
A one-liner I just set up for this very purpose (it works only with a single file, but might be useful for others):
A=$$; ( wget -q "http://foo.com/pipo.txt" -O $A.d && mv $A.d pipo.txt ) || (rm $A.d; echo "Removing temp file")
This will attempt to download the file from the remote host. If there is an error, the file is not kept. In all other cases, it's kept and renamed.
Ancient thread... landed here looking for a solution... ended up writing some shell code to do it.
if [ "$(curl -s -w "%{http_code}" --compressed -o /tmp/something \
       http://example.com/my/url/)" = "200" ]; then
    echo "yay"; cp /tmp/something /path/to/destination/filename
fi
This will download output to a tmp file, and create/overwrite the output file only if the status was 200. My use case is slightly different: in my case the output takes more than 10 seconds to generate, and I did not want the destination file to remain blank for that duration.
NOTE: I am aware that this is an older question, but I believe I have found a better solution for those using wget than any of the above answers provide.
wget -q $URL 2>/dev/null
This will save the target file to the local directory if and only if the HTTP status code is within the 200 range (OK).
Additionally, if you wanted to do something like print out an error whenever the request was met with an error, you could check the wget exit code for non-zero values like so:
wget -q $URL 2>/dev/null
if [ $? != 0 ]; then
echo "There was an error!"
fi
I hope this is helpful to someone out there facing the same issues I was.
Update:
I just put this into a more script-able form for my own project, and thought I'd share:
function dl {
    pushd . > /dev/null
    cd $(dirname $1)
    wget -q $BASE_URL/$1 2> /dev/null
    if [ $? != 0 ]; then
        echo ">> ERROR could not download file \"$1\"" 1>&2
        exit 1
    fi
    popd > /dev/null
}
I have a workaround to propose: it does download the file, but it also removes it if its size is 0 (which is what happens if a 404 occurs).
wget -O <filename> <url/to/file>
if [[ $(du <filename> | cut -f 1) == 0 ]]; then
    rm <filename>;
fi;
It works for zsh but you can adapt it for other shells.
But it only saves the file in the first place if you provide the -O option.
As an alternative, you can download to a temporary file and rotate it into place:
wget http://example.net/myfile.json -O myfile.json.tmp -t 3 -q && mv myfile.json.tmp myfile.json
The previous command always downloads to the file "myfile.json.tmp"; only when the wget exit status equals 0 is the file rotated into place as "myfile.json". This prevents the final file from being overwritten when a network failure occurs.
The advantage of this method is that in case that something is wrong you can inspect the temporal file and see what error message is returned.
The "-t" parameter attempt to download the file several times in case of error.
The "-q" is the quiet mode and it's important to use with cron because cron will report any output of wget.
The "-O" is the output file path and name.
Remember that for Cron schedules it's very important to provide always the full path for all the files and in this case for the "wget" program it self as well.
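For example, a crontab entry following those rules might look like this (the schedule and paths are purely illustrative):
*/15 * * * * /usr/bin/wget http://example.net/myfile.json -O /var/data/myfile.json.tmp -t 3 -q && /bin/mv /var/data/myfile.json.tmp /var/data/myfile.json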
You can download the file without saving it by using the "-O -" option, as in:
wget -O - http://jagor.srce.hr/
You can get more information at http://www.gnu.org/software/wget/manual/wget.html#Advanced-Usage