Unable to install Python packages with Anaconda on a corporate laptop - SSL

I have Anaconda installed on my corporate laptop. I want to install two Python packages (Plotly & FuzzyWuzzy), but each time I try I get the same error message:
Solving environment: failed
CondaHTTPError: HTTP 000 CONNECTION FAILED for url <https://repo.anaconda.com/pkgs/r/win-64/repodata.json.bz2>
Elapsed: -
An HTTP error occurred when trying to retrieve this URL.
HTTP errors are often intermittent, and a simple retry will get you on your way.
If your current network has https://www.anaconda.com blocked, please file a support request with your network engineering team.
ConnectTimeout(MaxRetryError("HTTPSConnectionPool(host='repo.anaconda.com', port=443): Max retries exceeded with url: /pkgs/r/win-64/repodata.json.bz2 (Caused by ConnectTimeoutError(<urllib3.connection.VerifiedHTTPSConnection object at 0x00000000054D45F8>, 'Connection to repo.anaconda.com timed out. (connect timeout=9.15)'))"))
I have tried the commands:
conda config --set ssl_verify no
or
conda config --set ssl_verify false
but neither of them works for me. Also, because it is my company laptop, I am not an admin, so I cannot change the firewall or connection properties, and I am not able to contact the service desk for help with that.
So I'll be more than happy to hear your solution(s).
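For reference, since the traceback shows a connection timeout rather than a certificate error, conda usually also needs to be pointed at the corporate proxy; this can be done from the per-user .condarc, so no admin rights are required. A minimal sketch, with a placeholder proxy host and port (the real values would come from your IT department):
conda config --set proxy_servers.http http://proxy.mycorp.example:8080
conda config --set proxy_servers.https http://proxy.mycorp.example:8080
conda config --set ssl_verify false
conda config --show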

Related

Hyperledger Fabric 2.5 peer lifecycle chaincode approveformyorg Error: timed out waiting for txid on all peers

I am running Hyperledger Fabric 2.5 and I would like to deploy a three-organization network with multiple peers. When I run commitChaincodeDefinition I get "Error: timed out waiting for txid on all peers".
I already tried to fix it; after doing some research I saw that it is a DNS error that people solved by adding an extra host entry, but I couldn't get it to work.
Any suggestion will be appreciated.
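For reference, the "extra host" workaround usually means mapping the other organizations' peer hostnames to reachable IP addresses on the machine where the peer CLI runs; a rough sketch, with entirely hypothetical hostnames and addresses:
# Append host entries so the CLI can resolve the other orgs' peers
# (hostnames and IPs below are placeholders for illustration only).
echo "10.0.1.11 peer0.org2.example.com" | sudo tee -a /etc/hosts
echo "10.0.1.12 peer0.org3.example.com" | sudo tee -a /etc/hosts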

Issues opening the MLflow UI with pyngrok

I am very new to MLOps and MLflow. I am trying to use MLflow on Google Colab to track models; however, I am not able to open the UI on the local server.
I got a couple of errors:
The connection to http:xxxx was successfully tunneled to your ngrok client, but the client failed to establish a connection to the local address localhost:80.
Make sure that a web service is running on localhost:80 and that it is a valid address.
The error encountered was: dial tcp 127.0.0.1:80: connect: connection refused
After this error, I made some changes to the environment, downloaded ngrok,
and provided the auth token to NGROK_AUTH_TOKEN = "xxxx".
Now I am getting the message below:
The code that I am using is:
!pip install pyngrok --quiet
from pyngrok import ngrok

# Kill any tunnels left over from earlier runs.
ngrok.kill()
NGROK_AUTH_TOKEN = ""
ngrok.set_auth_token(NGROK_AUTH_TOKEN)
# Tunnel to the MLflow UI (note: no stray trailing space in the address).
public_url = ngrok.connect(port="127.0.0.1:5000", proto="http", options={"bind_tls": True})
print("MLflow Tracking UI:", public_url)
Any help is highly appreciated.
TIA...
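For what it's worth, the first ngrok message suggests nothing was listening on the tunnelled port: the MLflow UI has to be started first (it defaults to port 5000) and the tunnel has to point at that same port. A minimal sketch of the server side, assuming the default port (in Colab this would typically be launched in the background from a cell before opening the tunnel):
# Start the MLflow tracking UI so the tunnel has something to reach.
mlflow ui --host 127.0.0.1 --port 5000 &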

MWAA: install python requirements behind proxy

We've launched a private MWAA environment. We are able to access the UI, but we're having some trouble installing our Python requirements.
MWAA picks up the requirements file from S3, but runs into a timeout when trying to install the Python packages.
This is expected, because we're behind a proxy, so my question is: how do we tell MWAA to use our proxy while installing our Python dependencies?
This is what our CloudWatch logstream (requirements_install_ip*) tells us:
WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None))
after connection broken by 'ConnectTimeoutError(<pip._vendor.urllib3.connection.HTTPSConnection
object at 0x7fda26b394d0>, 'Connection to pypi.org timed out. (connect timeout=15)')'
We have contacted AWS support about this, and apparently there is currently no option to pass a proxy variable, so we filed a feature request.
Even though I'm not sure whether this will be implemented at all, anybody interested may subscribe to the MWAA document history feed.
You can set this in your pip.ini:
[global]
index = https://eg.nexus.repo.url
index-url = https://eg.nexus.repo.url
To find where your pip.ini is located, you can run:
pip config -v list
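On MWAA itself you typically cannot edit pip.ini on the workers, but pip also honours the index as a global option placed at the top of the requirements file that MWAA picks up from S3; a minimal sketch of requirements.txt, reusing the placeholder mirror URL above (it must be reachable from the environment's VPC):
--index-url https://eg.nexus.repo.url
# ...your pinned packages follow as usual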

Unable to instantiate the chaincode in a multicloud setup

I am trying to achieve a multicloud architecture. My network has 2 peers, 1 orderer, and a web client, all running in Azure. I am trying to add a peer from Google Cloud Platform to the Azure channel. For this, I created the crypto-config for the 3rd peer from the Azure web client; in the crypto-config, the peers in Azure keep their own certificates, while the 3rd peer gets the newly created certificates. Now I can install, instantiate, invoke, and query chaincode on peers 1 and 2, and I can install the chaincode on the 3rd peer, but I am unable to instantiate it there.
Getting the following error: Error: could not assemble transaction, err proposal response was not successful, error code 500, msg error starting container: error starting container: Post http://unix.sock/containers/create?name=dev-(CORE_PEER_ID)-documentCC-1: dial unix /var/run/docker.sock: connect: permission denied
Can anyone guide me on this?
Note: All the peers, the orderer, and the web client are running in different VMs.
@soundarya It doesn't matter in how many places your solution is deployed.
The problem is that you are running Docker with sudo; add your user to the docker group instead, so the peer can talk to the Docker daemon socket.
The guide below will walk you through it:
https://www.digitalocean.com/community/questions/how-to-fix-docker-got-permission-denied-while-trying-to-connect-to-the-docker-daemon-socket
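Concretely, the fix described there is to add your user to the docker group so access to /var/run/docker.sock no longer needs sudo; a short sketch (run on the VM that hosts the failing peer, then log out and back in):
sudo groupadd docker          # only needed if the group does not already exist
sudo usermod -aG docker $USER
newgrp docker                 # or re-login so the new group membership takes effect
docker ps                     # should now work without sudo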
To learn more about docker.sock, you can refer to my answer to another question, "Can anyone explain docker.sock?".

tensorflow serving: failed to connect to 'ipv4:127.0.0.1:9000'

I have installed and configured tensorflow serving on an "AWS t2.large Ubuntu 14.04" server.
When I attempt to test the server with the mnist_client utility by executing the command, bazel-bin/tensorflow_serving/example/mnist_client --num_tests=1000 --server=localhost:9000, I receive the following error message:
E0605 05:03:54.617558520 1232 tcp_client_posix.c:191] failed to connect to 'ipv4:127.0.0.1:9000': timeout occurred
Any idea how to fix this?
I haven't heard of anything like this, but I did note that (at least with other test clients) requests would time out when the server was not up or ready yet. So my guess is that the server is not up yet, or is perhaps in some other bad state.
I ran into the same problem before. The root cause was that mnist_client was run on my local machine instead of on the server, because the command connects to localhost: bazel-bin/tensorflow_serving/example/mnist_client --num_tests=1000 --server=localhost:9000
It succeeds when I run the mnist_client utility on the server.
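In other words, if the client has to run on your local machine instead, point it at the server's address rather than localhost (and make sure port 9000 is open in the instance's security group); a sketch with a placeholder hostname:
bazel-bin/tensorflow_serving/example/mnist_client --num_tests=1000 --server=ec2-xx-xx-xx-xx.compute-1.amazonaws.com:9000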