HTTP error when installing pytorch - amazon-s3

I use Ubuntu 14.04, without a GPU.
When I install PyTorch in Anaconda (I type the command exactly as pytorch.org says), things always go wrong, like this:
------------------------------------------------------------
Fetching package metadata ...........
Solving package specifications: .
Package plan for installation in environment /home/zhanglu/anaconda3:
The following NEW packages will be INSTALLED:
pytorch: 0.1.9-py36_2 soumith
torchvision: 0.1.7-py36_1 soumith
The following packages will be UPDATED:
conda: 4.3.8-py36_0 --> 4.3.13-py36_0
Proceed ([y]/n)? y
pytorch-0.1.9- 100% |################################| Time: 0:02:49 1.44 MB/s
pytorch-0.1.9- 100% |################################| Time: 0:05:52 692.97 kB/s
pytorch-0.1.9- 100% |################################| Time: 0:01:15 3.23 MB/s
CondaError: CondaHTTPError: HTTP None None for url <None>
Elapsed: None
An HTTP error occurred when trying to retrieve this URL.
ConnectionError(ReadTimeoutError("HTTPSConnectionPool(host='binstar-cio-packages-prod.s3.amazonaws.com', port=443): Read timed out.",),)
CondaError: CondaHTTPError: HTTP None None for url <None>
Elapsed: None
An HTTP error occurred when trying to retrieve this URL.
ConnectionError(ReadTimeoutError("HTTPSConnectionPool(host='binstar-cio-packages-prod.s3.amazonaws.com', port=443): Read timed out.",),)
CondaError: CondaHTTPError: HTTP None None for url <None>
Elapsed: None
An HTTP error occurred when trying to retrieve this URL.
ConnectionError(ReadTimeoutError("HTTPSConnectionPool(host='binstar-cio-packages-prod.s3.amazonaws.com', port=443): Read timed out.",),)
----------------------------------------------------------------
I've tried many times, but the error occurs at the same step every time.
I'm looking forward to your answers; thank you very much for your help!

Please update to the latest version of Navigator.
Open a terminal (on Linux or Mac) or the Anaconda Command Prompt (on Windows)
and type
$ conda update anaconda-navigator
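Since the failure above is a repeated ReadTimeoutError from Anaconda's S3 backend, conda's default network timeout may simply be too short for the large pytorch package on a slow link. A sketch of raising the timeouts and retry count before reinstalling (the option names are conda's documented network settings; the values are illustrative):

```shell
# Give conda more time and more attempts per download:
conda config --set remote_read_timeout_secs 120
conda config --set remote_connect_timeout_secs 30
conda config --set remote_max_retries 5

# Then retry the install from the same channel:
conda install -c soumith pytorch torchvision
```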

Related

Hyperledger Fabric error: "TLS: bad certificate server" when installing chaincode

I'm just starting to learn HLF, and I got an error while following the tutorial from the docs: link
I downloaded fabric-samples using this command (replaced bit.ly link with the destination):
curl -sSL https://raw.githubusercontent.com/hyperledger/fabric/master/scripts/bootstrap.sh | bash -s -- 2.2.2 1.4.9
I run logspout in one terminal and try to execute peer lifecycle chaincode install basic.tar.gz in another one, and this is the result I get:
Error: failed to retrieve endorser client for install: endorser client
failed to connect to localhost:7051: failed to create new connection:
context deadline exceeded
Log presented by Logspout:
peer0.org1.example.com|2022-03-15 13:03:24.452 UTC [core.comm]
ServerHandshake -> ERRO 04a Server TLS handshake failed in 2.650245ms
with error remote error: tls: bad certificate server=PeerServer
remoteaddress=172.22.0.1:61126
I set the envs in terminal as instructed in the docs, and I checked that CORE_PEER_TLS_ROOTCERT_FILE variable points to an existing file. The content of the file is the same as on the container.
What I tried to do:
download fabric-samples again and redo the whole setup, copy-pasting the commands directly from the docs
Do you have any suggestions where I can look for an issue?
I resolved the problem: I was using peer version 2.2.1 from previous experiments, and it probably collided with FABRIC_CFG_PATH.
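For anyone else hitting this: the root cause here is a stale peer binary shadowing the freshly downloaded one. A quick diagnostic sketch; the relative paths are assumptions based on running from fabric-samples/test-network, as in the tutorial:

```shell
# Which peer binary is actually first on PATH, and what version is it?
# A leftover 2.2.1 binary earlier on PATH will silently win.
which peer
peer version

# Make the freshly downloaded binaries and their matching config win:
export PATH="${PWD}/../bin:${PATH}"
export FABRIC_CFG_PATH="${PWD}/../config"
peer version   # should now match the downloaded 2.2.2
```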

cos-extensions install gpu failed to download driver signature on GCP Compute Engine VM

I am working with GPU supported VMs on GCP Compute Engine.
As the OS I use Container-Optimized OS (COS 89-16108.403.47 LTS), which supports simple GPU driver installation by running 'cos-extensions install gpu' via SSH (see the Google doc).
This had worked perfectly until, a couple of days ago, I started getting an error saying that the download of a driver signature fails (see the full error message below), and I haven't been able to get it to work since.
Can someone either confirm that I am experiencing a bug here or help me fix this problem?
Many thanks in advance!
~ $ cos-extensions install gpu
Unable to find image 'gcr.io/cos-cloud/cos-gpu-installer:v2.0.3' locally
v2.0.3: Pulling from cos-cloud/cos-gpu-installer
419e7ae5bb1e: Pull complete
6f6ec2441524: Pull complete
11d24f918ba9: Pull complete
Digest: sha256:1cf2701dc2c3944a93fd06cb6c9eedfabf323425483ba3af294510621bb37d0e
Status: Downloaded newer image for gcr.io/cos-cloud/cos-gpu-installer:v2.0.3
I0618 06:33:49.227680 1502 main.go:21] Checking if this is the only cos_gpu_installer that is running.
I0618 06:33:49.258483 1502 install.go:74] Running on COS build id 16108.403.47
I0618 06:33:49.258505 1502 installer.go:187] Getting the default GPU driver version
I0618 06:33:49.285265 1502 utils.go:72] Downloading gpu_default_version from https://storage.googleapis.com/cos-
tools/16108.403.47/gpu_default_version
I0618 06:33:49.353149 1502 utils.go:120] Successfully downloaded gpu_default_version from https://storage.google
apis.com/cos-tools/16108.403.47/gpu_default_version
I0618 06:33:49.353381 1502 install.go:85] Installing GPU driver version 450.119.04
I0618 06:33:49.353461 1502 cache.go:69] error: failed to read file /root/var/lib/nvidia/.cache: open /root/var/l
ib/nvidia/.cache: no such file or directory
I0618 06:33:49.353482 1502 install.go:120] Did not find cached version, installing the drivers...
I0618 06:33:49.353491 1502 installer.go:82] Configuring driver installation directories
I0618 06:33:49.421021 1502 installer.go:196] Updating container's ld cache
I0618 06:33:49.526673 1502 signature.go:30] Downloading driver signature for version 450.119.04
I0618 06:33:49.526712 1502 utils.go:72] Downloading 450.119.04.signature.tar.gz from https://storage.googleapis.
com/cos-tools/16108.403.47/extensions/gpu/450.119.04.signature.tar.gz
E0618 06:33:49.657028 1502 artifacts.go:106] Failed to download extensions/gpu/450.119.04.signature.tar.gz from
public GCS: failed to download 450.119.04.signature.tar.gz, status: 404 Not Found
E0618 06:33:49.657487 1502 install.go:175] failed to download driver signature: failed to download driver signat
ure for version 450.119.04: failed to download extensions/gpu/450.119.04.signature.tar.gz
This seems to be a known issue; you can find it reported here, and a similar thread with workarounds here.
It looks like there is a delay between the release of a new COS version and the release of the updated drivers.
However, I ran cos-extensions list just now, and it seems there are drivers available:
$ cos-extensions list
Available extensions for COS version 89-16108.403.47:
[gpu]
450.119.04 [default]
450.80.02
And signatures as well:
$ wget https://storage.googleapis.com/cos-tools/16108.403.47/extensions/gpu/450.119.04.signature.tar.gz
--2021-06-21 12:49:58-- https://storage.googleapis.com/cos-tools/16108.403.47/extensions/gpu/450.119.04.signature.tar.gz
Resolving storage.googleapis.com... 173.194.198.128, 64.233.191.128, 173.194.74.128, ...
Connecting to storage.googleapis.com|173.194.198.128|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 4588 (4.5K) [application/octet-stream]
Saving to: '450.119.04.signature.tar.gz'
450.119.04.signature.tar.gz 100%[=============================================>] 4.48K --.-KB/s in 0s
2021-06-21 12:49:58 (62.0 MB/s) - '450.119.04.signature.tar.gz' saved [4588/4588]
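Until the default version's artifacts catch up, a possible workaround is to pin the installer to the older driver shown in the list above. The flag syntax below follows Google's cos-gpu-installer usage, but treat it as a sketch, not a confirmed fix:

```shell
# Install the non-default driver version whose signature is published:
sudo cos-extensions install gpu -- -version=450.80.02
```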

What gives Hyperledger Fabric an SSL legitimacy error when I use -k/--insecure?

I'm trying to install the binaries for Hyperledger fabric but I run into an error.
curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.
I do know this means I need to put -k or --insecure in the curl statement. However, I am doing so and it doesn't work.
curl --insecure -sSL https://raw.githubusercontent.com/hyperledger/fabric/master/scripts/bootstrap.sh | bash -s 2.3.0 1.4.9
When I run this, the first half is fine: the hyperledger/fabric-samples repo clone succeeds.
Here is the entire output:
Clone hyperledger/fabric-samples repo
===> Cloning hyperledger/fabric-samples repo
Cloning into 'fabric-samples'...
remote: Enumerating objects: 20, done.
remote: Counting objects: 100% (20/20), done.
remote: Compressing objects: 100% (18/18), done.
remote: Total 7386 (delta 2), reused 8 (delta 0), pack-reused 7366
Receiving objects: 100% (7386/7386), 4.26 MiB | 905.00 KiB/s, done.
Resolving deltas: 100% (3823/3823), done.
===> Checking out v2.3.0 of hyperledger/fabric-samples
Pull Hyperledger Fabric binaries
===> Downloading version 2.3.0 platform specific fabric binaries
===> Downloading: https://github.com/hyperledger/fabric/releases/download/v2.3.0/hyperledger-fabric-windows-amd64-2.3.0.tar.gz
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: https://curl.se/docs/sslcerts.html
curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.
gzip: stdin: unexpected end of file
tar: Child returned status 1
tar: Error is not recoverable: exiting now
==> There was an error downloading the binary file.
------> 2.3.0 platform specific fabric binary is not available to download <----
I am still a student, so it could easily be a silly mistake, but I'm really stuck. Please, can someone help me?
So, the problem was that -k/--insecure was not enough; I had to enable it for all connections with:
echo insecure >> ~/.curlrc
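A caveat for anyone copying this fix: ~/.curlrc applies to every future curl invocation, so this disables certificate checking globally. A less drastic sketch is to repair the local CA store instead, which often resolves "unable to get local issuer certificate" (the package names assume a Debian/Ubuntu system):

```shell
# Refresh the system CA certificates rather than disabling verification:
sudo apt-get install --reinstall ca-certificates
sudo update-ca-certificates
```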

conda install in Julia fails because of SSL on windows 10

conda install xxx fails with the following error:
SSLError(MaxRetryError('HTTPSConnectionPool(host=\'repo.anaconda.com\',
port=443): Max retries exceeded with url:
/pkgs/main/win-64/current_repodata.json (Caused by
SSLError(SSLError("bad handshake: Error([(\'SSL routines\',
\'tls_process_server_certificate\', \'certificate verify
failed\')])")))'))
Maybe there is something very subtle in my environment causing the trouble, but I really have no idea.
It used to work, but after I reinstalled the Windows 10 image the issue began to appear.
It is on Windows 10, and the conda version is 4.7.12. My .condarc is placed in %USERPROFILE% and looks like:
channels:
- defaults
show_channel_urls: true
allow_other_channels: true
proxy_servers:
https: https://x.x.x.x:8080
ssl_verify: false
All the advice available on the Internet for this error says that setting ssl_verify to false should be enough.
Where else should I look?
EDIT:
I found some useful info at:
https://stackoverflow.com/a/56717433/7341479
It solves the issue for conda install run from an Anaconda PowerShell opened as Admin, but the error remains when I run Pkg.build("PyCall") in Julia.
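One more thing worth checking (a guess based on the .condarc above, not a confirmed fix): the https proxy is declared with an https:// scheme. Many corporate proxies on port 8080 speak plain HTTP, and pointing conda at them over TLS can produce exactly this kind of handshake failure. The changed section of .condarc would be:

```yaml
proxy_servers:
  # assumption: the proxy itself is a plain-HTTP endpoint, not a TLS one
  https: http://x.x.x.x:8080
```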

yum update httpd failed on CentOS 7

I am trying to update a CentOS 7.2 box for compliance activities. The update fails with the error "error: unpacking of archive failed on file /usr/sbin/suexec;5a02e28f: cpio: cap_set_file", as shown below:
15-186 ~# yum update httpd
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* base: mirror.scalabledns.com
* epel: mirror.oss.ou.edu
* extras: ftp.usf.edu
* updates: ftp.usf.edu
Resolving Dependencies
--> Running transaction check
---> Package httpd.x86_64 0:2.4.6-40.el7.centos will be updated
---> Package httpd.x86_64 0:2.4.6-67.el7.centos.6 will be an update
--> Finished Dependency Resolution
Dependencies Resolved
==============================================================================================================================================================================================================================================
Package Arch Version Repository Size
==============================================================================================================================================================================================================================================
Updating:
httpd x86_64 2.4.6-67.el7.centos.6 updates 2.7 M
Transaction Summary
==============================================================================================================================================================================================================================================
Upgrade 1 Package
Total download size: 2.7 M
Is this ok [y/d/N]: y
Downloading packages:
Delta RPMs disabled because /usr/bin/applydeltarpm not installed.
httpd-2.4.6-67.el7.centos.6.x86_64.rpm | 2.7 MB 00:00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Updating : httpd-2.4.6-67.el7.centos.6.x86_64 1/2
Error unpacking rpm package httpd-2.4.6-67.el7.centos.6.x86_64
error: unpacking of archive failed on file /usr/sbin/suexec;5a02e28f: cpio: cap_set_file
httpd-2.4.6-40.el7.centos.x86_64 was supposed to be removed but is not!
Verifying : httpd-2.4.6-40.el7.centos.x86_64 1/2
Verifying : httpd-2.4.6-67.el7.centos.6.x86_64 2/2
Failed:
httpd.x86_64 0:2.4.6-40.el7.centos httpd.x86_64 0:2.4.6-67.el7.centos.6
Complete!
It worked on another, similar box. I have checked many blogs and the Red Hat Bugzilla, but found no way out. Does anybody have a clue what's happening here?
It's a moby issue: https://github.com/moby/moby/issues/6980. Try changing the Docker storage driver or the host OS.
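If this box is actually a Docker container (which the linked moby issue is about), the unpack fails because the container's storage driver cannot store the file capabilities that httpd's /usr/sbin/suexec carries. A host-side sketch of switching to overlay2; the driver choice and paths are assumptions about your setup, and switching drivers hides existing images and containers, so back up first:

```shell
# On the Docker HOST, not inside the container:
sudo tee /etc/docker/daemon.json <<'EOF'
{
  "storage-driver": "overlay2"
}
EOF
sudo systemctl restart docker
docker info | grep "Storage Driver"   # verify the new driver is active
```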