Botocore - how to handle failover condition - amazon-s3

I have more than four servers, each running an s3-auth server.
I am able to authenticate users against the first server.
If the first server is turned off, how can I detect through botocore that the s3 service is no longer running there, so that requests automatically fall back to the next server?
When I turn off the first server and send a request to list users, no response ever comes back from botocore. It retries the operation five times and then does nothing.
Botocore Version: 1.3.30
Boto3 version: 1.2.2
Please help with this.
See the botocore logs below:
DEBUG:botocore.endpoint:Response received to retry, sleeping for 7.72060814652 seconds
DEBUG:botocore.hooks:Event request-created.iam.ListUsers: calling handler <bound method RequestSigner.handler of <botocore.signers.RequestSigner object at 0x7f238444ccd0>>
DEBUG:botocore.auth:Calculating signature using v4 auth.
DEBUG:botocore.auth:CanonicalRequest:
POST
/
host:iam.test.com:8085
user-agent:Boto3/1.2.2 Python/2.7.5 Linux/3.10.0-229.11.1.el7.x86_64 Botocore/1.3.30
x-amz-date:20170322T131218Z
host;user-agent;x-amz-date
b6359072c78d70ebee1e81adcbab4f01bf2c23245fa365ef83fe8f1f955085e2
DEBUG:botocore.auth:StringToSign:
AWS4-HMAC-SHA256
20170322T131218Z
20170322/us-east-1/iam/aws4_request
e74bb593aaf7d92c8dfb517a4daedfe353ec4f9806a7b1c50bca7d7ed2e9e45e
DEBUG:botocore.auth:Signature:
adc96eb87a11f5b69214163bfafab7a82574113be0bcb0cddb43dfa70cbbc789
DEBUG:botocore.endpoint:Sending http request: <PreparedRequest [POST]>
INFO:botocore.vendored.requests.packages.urllib3.connectionpool:Starting new HTTP connection (5): iam.test.com
DEBUG:botocore.endpoint:ConnectionError received when sending HTTP request.
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/botocore/endpoint.py", line 174, in _get_response
proxies=self.proxies, timeout=self.timeout)
File "/usr/lib/python2.7/site-packages/botocore/vendored/requests/sessions.py", line 573, in send
r = adapter.send(request, **kwargs)
File "/usr/lib/python2.7/site-packages/botocore/vendored/requests/adapters.py", line 415, in send
raise ConnectionError(err, request=request)
ConnectionError: ('Connection aborted.', error(111, 'Connection refused'))
DEBUG:botocore.hooks:Event needs-retry.iam.ListUsers: calling handler <botocore.retryhandler.RetryHandler object at 0x7f23843a2790>
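Botocore itself does not fail over between custom endpoints, so one option is to catch the connection failure in application code and retry against the next server. Below is a minimal sketch, assuming a reasonably recent botocore where a refused connection surfaces as botocore.exceptions.EndpointConnectionError (with the 1.3.x version in the logs it may surface as the vendored requests ConnectionError instead); the endpoint URLs are placeholders:

import boto3
from botocore.exceptions import EndpointConnectionError

# Placeholder endpoints; list every s3-auth/IAM server that can serve requests.
ENDPOINTS = [
    'http://iam1.test.com:8085',
    'http://iam2.test.com:8085',
]

def list_users_with_failover():
    last_error = None
    for endpoint in ENDPOINTS:
        client = boto3.client('iam', endpoint_url=endpoint)
        try:
            # Succeeds on the first server that is up and reachable.
            return client.list_users()
        except EndpointConnectionError as exc:
            # Connection refused or timed out: remember the error and try the next server.
            last_error = exc
    raise last_error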

Related

Azure cli behind corporate proxy not working (SSL: WRONG_VERSION_NUMBER) [duplicate]

Running Python 3.9.1 on Arch Linux with OpenSSL version 1.1.1i and pyOpenSSL version 1.1.1i, I get the following error when trying to use an HTTPS proxy with the requests module:
Traceback (most recent call last):
File "/usr/lib/python3.9/site-packages/urllib3/connectionpool.py", line 696, in urlopen
self._prepare_proxy(conn)
File "/usr/lib/python3.9/site-packages/urllib3/connectionpool.py", line 964, in _prepare_proxy
conn.connect()
File "/usr/lib/python3.9/site-packages/urllib3/connection.py", line 359, in connect
conn = self._connect_tls_proxy(hostname, conn)
File "/usr/lib/python3.9/site-packages/urllib3/connection.py", line 496, in _connect_tls_proxy
return ssl_wrap_socket(
File "/usr/lib/python3.9/site-packages/urllib3/util/ssl_.py", line 424, in ssl_wrap_socket
ssl_sock = _ssl_wrap_socket_impl(sock, context, tls_in_tls)
File "/usr/lib/python3.9/site-packages/urllib3/util/ssl_.py", line 466, in _ssl_wrap_socket_impl
return ssl_context.wrap_socket(sock)
File "/usr/lib/python3.9/ssl.py", line 500, in wrap_socket
return self.sslsocket_class._create(
File "/usr/lib/python3.9/ssl.py", line 1040, in _create
self.do_handshake()
File "/usr/lib/python3.9/ssl.py", line 1309, in do_handshake
self._sslobj.do_handshake()
ssl.SSLError: [SSL: WRONG_VERSION_NUMBER] wrong version number (_ssl.c:1123)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3.9/site-packages/requests/adapters.py", line 439, in send
resp = conn.urlopen(
File "/usr/lib/python3.9/site-packages/urllib3/connectionpool.py", line 755, in urlopen
retries = retries.increment(
File "/usr/lib/python3.9/site-packages/urllib3/util/retry.py", line 573, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='google.com', port=443): Max retries exceeded with url: / (Caused by SSLError(SSLError(1, '[SSL: WRONG_VERSION_NUMBER] wrong version number (_ssl.c:1123)')))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python3.9/site-packages/requests/api.py", line 76, in get
return request('get', url, params=params, **kwargs)
File "/usr/lib/python3.9/site-packages/requests/api.py", line 61, in request
return session.request(method=method, url=url, **kwargs)
File "/usr/lib/python3.9/site-packages/requests/sessions.py", line 542, in request
resp = self.send(prep, **send_kwargs)
File "/usr/lib/python3.9/site-packages/requests/sessions.py", line 655, in send
r = adapter.send(request, **kwargs)
File "/usr/lib/python3.9/site-packages/requests/adapters.py", line 514, in send
raise SSLError(e, request=request)
requests.exceptions.SSLError: HTTPSConnectionPool(host='google.com', port=443): Max retries exceeded with url: / (Caused by SSLError(SSLError(1, '[SSL: WRONG_VERSION_NUMBER] wrong version number (_ssl.c:1123)')))
The code I am running is:
import requests

proxy = {
    'https': 'https://proxyip:proxyport'
}

requests.get("https://google.com", proxies=proxy)
No matter which HTTPS proxy I try, I get the same error. I have also reinstalled both OpenSSL and Python, with no change. Any suggestions?
... line 496, in _connect_tls_proxy
Your code is trying to use the (new) support for accessing the proxy itself over HTTPS. This happens because you've explicitly given the proxy URL as https://... and not http://...:
'https' : 'https://proxyip:proxyport'
^^^^^^
It is very likely that the proxy itself does not support TLS connections. Commonly, HTTP proxies accept plain HTTP connections only. They can still proxy HTTPS traffic this way, since the client simply issues a CONNECT request to the proxy to create a tunnel and then uses end-to-end TLS between client and server.
Accessing the proxy over HTTPS adds an additional layer of TLS between client and proxy, which most proxies do not support. Therefore, you likely need a plain HTTP proxy URL instead:
'https' : 'http://proxyip:proxyport'
^^^^^^
Note that in older versions of the requests library both http:// and https:// proxy URLs worked. Those versions had no support for HTTPS to the proxy and simply used plain HTTP even if https:// was specified.
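For completeness, a minimal sketch of the corrected call, keeping the placeholder proxy address from the question:

import requests

# Plain-HTTP URL for the proxy itself; HTTPS requests are still tunnelled
# end-to-end through a CONNECT request, so the traffic stays encrypted.
proxies = {
    'http': 'http://proxyip:proxyport',
    'https': 'http://proxyip:proxyport',
}

response = requests.get("https://google.com", proxies=proxies)
print(response.status_code)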
Adding login.microsoftonline.com;management.azure.com as exceptions will also work.
In my case it was fixed by this command:
python3 -m pip install urllib3==1.22

Getting (insecure_transport) OAuth 2 MUST utilize https with a cert managed by Heroku. I have a subdomain attached, pointed to the server

I'm trying to use the Google Sheets API service, which requires an HTTPS connection. I'm getting the following error:
Exception Type: InsecureTransportError at my_site/google/success/
Exception Value: (insecure_transport) OAuth 2 MUST utilize https.
I am using Heroku, and in my settings it says ACM Status: ok. I verified that I'm using HTTPS by running curl -vI https://my_site/google/success
which returned:
SSL certificate verify ok
From my perspective it seems that I am using HTTPS, but I am still getting this error. What could I be doing wrong? Surely I have something misconfigured. Is there anything else I need to provide for troubleshooting? Here is the full traceback:
Traceback (most recent call last):
File "/app/.heroku/python/lib/python3.8/site-packages/django/core/handlers/exception.py", line 47, in inner
response = get_response(request)
File "/app/.heroku/python/lib/python3.8/site-packages/django/core/handlers/base.py", line 181, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/app/google_api/views.py", line 56, in authorize_success
flow.fetch_token(authorization_response=code)
File "/app/.heroku/python/lib/python3.8/site-packages/google_auth_oauthlib/flow.py", line 286, in fetch_token
return self.oauth2session.fetch_token(self.client_config["token_uri"], **kwargs)
File "/app/.heroku/python/lib/python3.8/site-packages/requests_oauthlib/oauth2_session.py", line 239, in fetch_token
self._client.parse_request_uri_response(
File "/app/.heroku/python/lib/python3.8/site-packages/oauthlib/oauth2/rfc6749/clients/web_application.py", line 203, in parse_request_uri_response
response = parse_authorization_code_response(uri, state=state)
File "/app/.heroku/python/lib/python3.8/site-packages/oauthlib/oauth2/rfc6749/parameters.py", line 256, in parse_authorization_code_response
raise InsecureTransportError()
One suggested workaround is to set the OAUTHLIB_INSECURE_TRANSPORT environment variable, which disables oauthlib's HTTPS requirement:
import os

# Must be set before the token exchange; tells oauthlib to accept a non-HTTPS authorization response.
os.environ['OAUTHLIB_INSECURE_TRANSPORT'] = '1'

SFTPOperator not able to authenticate with a host that requires both password and public key authentication

Airflow version: 2.0.0
When I use the sftp command to connect to the host manually from any Airflow worker, everything works fine. Here is the error log from when I try to use the operator, which under the hood uses the paramiko library to transfer files:
{ssh.py:202} WARNING - No Host Key Verification. This wont protect against Man-In-The-Middle attacks
{transport.py:1819} INFO - Connected (version 2.0, client 1.91)
{transport.py:1819} INFO - Auth banner: b'MOMENTUM SYSTEMS - SSH Server\nAuthentication Methods Supported:\nPUBLICKEY, PASSWORD'
{transport.py:1819} INFO - Authentication continues...
{transport.py:1819} INFO - Disconnect (code 2): unexpected service request
{taskinstance.py:1396} ERROR - Authentication failed.
Traceback (most recent call last):
File "/home/centos/.local/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 1086, in _run_raw_task
self._prepare_and_execute_task_with_callbacks(context, task)
File "/home/centos/.local/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 1260, in _prepare_and_execute_task_with_callbacks
result = self._execute_task(context, task_copy)
File "/home/centos/.local/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 1300, in _execute_task
result = task_copy.execute(context=context)
File "/home/centos/airflow-dags/utils/operators/s3_to_sftp.py", line 76, in execute
sftp_client = ssh_hook.get_conn().open_sftp()
File "/home/centos/.local/lib/python3.7/site-packages/airflow/providers/ssh/hooks/ssh.py", line 225, in get_conn
client.connect(**connect_kwargs)
File "/home/centos/.local/lib/python3.7/site-packages/paramiko/client.py", line 446, in connect
passphrase,
File "/home/centos/.local/lib/python3.7/site-packages/paramiko/client.py", line 764, in _auth
raise saved_exception
File "/home/centos/.local/lib/python3.7/site-packages/paramiko/client.py", line 751, in _auth
self._transport.auth_password(username, password)
File "/home/centos/.local/lib/python3.7/site-packages/paramiko/transport.py", line 1509, in auth_password
return self.auth_handler.wait_for_response(my_event)
File "/home/centos/.local/lib/python3.7/site-packages/paramiko/auth_handler.py", line 236, in wait_for_response
raise e
paramiko.ssh_exception.AuthenticationException: Authentication failed.
The Airflow connection that I use has the password and no additional options in extra.
The answer provided to the linked question worked for my use case:
Multi-factor authentication (password and key) with Paramiko
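For reference, a rough sketch of that approach using paramiko's Transport API directly (host, credentials and key path are placeholders, not code from the linked answer):

import paramiko

# Open the transport and negotiate the SSH session.
transport = paramiko.Transport(('sftp.example.com', 22))
transport.start_client()

# First factor: offer the public key. On partial success paramiko returns
# the list of auth methods the server still requires (e.g. ['password']).
key = paramiko.RSAKey.from_private_key_file('/path/to/id_rsa')
remaining = transport.auth_publickey('username', key)

# Second factor: supply the password if the server asks to continue.
if remaining:
    transport.auth_password('username', 'secret')

# With authentication complete, open SFTP over the same transport.
sftp = paramiko.SFTPClient.from_transport(transport)
print(sftp.listdir('.'))
sftp.close()
transport.close()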

Failing to create a connection with the Nessus server

I am trying to connect to the Nessus server with the command below in Python, but it fails with an error message. Can you tell me what the cause could be? I have checked my network connection and it is fine.
requests.post('https://164.99.175.30:8834' + '/session', data={'username': 'admin', 'password': 'micro#123'}, verify=False)
Error message:
Traceback (most recent call last):
File "nessus.py", line 425, in <module>
login()
File "nessus.py", line 111, in login
res = requests.post(url + '/session',data={'username':username,'password':password},verify=verify)
File "/usr/lib/python2.7/site-packages/requests/api.py", line 119, in post
return request('post', url, data=data, json=json, **kwargs)
File "/usr/lib/python2.7/site-packages/requests/api.py", line 61, in request
return session.request(method=method, url=url, **kwargs)
File "/usr/lib/python2.7/site-packages/requests/sessions.py", line 530, in request
resp = self.send(prep, **send_kwargs)
File "/usr/lib/python2.7/site-packages/requests/sessions.py", line 643, in send
r = adapter.send(request, **kwargs)
File "/usr/lib/python2.7/site-packages/requests/adapters.py", line 516, in send
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPSConnectionPool(host='localhost', port=8834): Max retries exceeded with url: /session (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f46f2d6d410>: Failed to establish a new connection: [Errno 111] Connection refused',))
The Nessus API is deprecated as of version 7.x; this is the best source I could find.
EDIT: I have found a better source directly from Tenable.
What has been removed from Nessus 7:
There is a restriction in scan API capabilities. The ability to manage scans via the API and CLI has been removed in v7. All Nessus Pro scanning operations must be done through the user interface.
So currently the capabilities of the Nessus API are as follows:
The ability to run scans or reports and to create new objects has been removed.
The read features remain: pulling scan data (e.g. GET /scans/{scan_id}) works again, which helps with some integration processes.
https://community.tenable.com/s/article/The-differences-between-Nessus-6-and-Nessus-7
This applies only to Nessus Pro versions.
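For what it's worth, a rough sketch of the read-only flow that still works, assuming the standard Nessus token scheme and reusing the question's placeholder host and credentials:

import requests

base = 'https://164.99.175.30:8834'

# POST /session returns an API token for subsequent requests.
login = requests.post(base + '/session',
                      data={'username': 'admin', 'password': 'micro#123'},
                      verify=False)
token = login.json()['token']

# Read-only endpoints such as GET /scans still work on Nessus 7.
scans = requests.get(base + '/scans',
                     headers={'X-Cookie': 'token=' + token},
                     verify=False)
print(scans.json())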

Proxy Error when Requests connecting from HTTPS

I have been testing a site with social media login validation. It's quite primitive, aside from the fact that I built in the ability to connect through a proxy. This works fine, including the proxy, when I run it from my box or from the server the site is on. However, once I enabled HTTPS/SSL on my server and used the proxy, it stopped working with this error:
File "myprogram.py", line 274, in login
r = self.s.get(self.url)
File "/usr/lib/python2.6/site-packages/requests/sessions.py", line 476, in get
return self.request('GET', url, **kwargs)
File "/usr/lib/python2.6/site-packages/requests/sessions.py", line 464, in request
resp = self.send(prep, **send_kwargs)
File "/usr/lib/python2.6/site-packages/requests/sessions.py", line 576, in send
r = adapter.send(request, **kwargs)
File "/usr/lib/python2.6/site-packages/requests/adapters.py", line 424, in send
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPSConnectionPool(host='socialmediawebsite', port=443): Max retries exceeded with url: / (Caused by ProxyError('Cannot connect to proxy.', error(111, 'Connection refused')))
I've read a solid amount and have looked through the code for requests and urllib3, but it seems more and more like I have a fundamental misunderstanding. I've tried HTTPAuth, setting verify to False, and setting environment variables. I have also tried without the proxy, and it works, which has added to my confusion. It seems to me like there might be another piece of technology required to send this https request through the proxy. Or is it just that I have to open up more ports somewhere or something equally simple?
There is one question similar to mine here: Python Requests doesnt work for https proxy
But the accepted answer is factually inaccurate and didn't work for me.
What my code looks like:
proxies = {
    'http': 'http://user:pass@10.10.1.10:3128',
    'https': 'http://user:pass@10.10.1.10:3128',
}
# Attach the proxies to the existing requests session, then make the request.
self.s.proxies.update(proxies)
self.s.get(self.url)