urllib3.exceptions.ProxySchemeUnknown: Not supported proxy scheme None - python-3.8

Recently my application started getting an error related to proxies:
> in __init__
>     raise ProxySchemeUnknown(proxy.scheme)
> urllib3.exceptions.ProxySchemeUnknown: Not supported proxy scheme None
I did not make any changes to the code or perform any updates to Python 3.8, which is what I'm using.
Here is the function I'm using to fetch proxies from an API that pulls them from the DB:
def get_proxy(self):
    try:
        req = self.session.post(url=self.script_function_url, headers=self.script_function_header, json={"action": "proxy"}, verify=False, timeout=20).json()
        self.proxy = {"https": req['ipAddress'] + ":" + req['port']}
    except Exception as e:
        print(f'Proxy error: {e}')
        exit()
Any help would be greatly appreciated; I am completely new to Python.

I don't know which exact line in your code is causing the error, or whether you are behind a proxy yourself, but I do know that you need to specify a scheme to make API calls through a proxy.
So on Windows you would do:
set http_proxy=http://xxx.xxx.xxx.xxx:xxxx
set https_proxy=http://xxx.xxx.xxx.xxx:xxxx
The key point here is to add the http:// in front.
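The same applies to the proxies dict built in your get_proxy function: urllib3 raises ProxySchemeUnknown(None) precisely because the proxy URL it is handed has no scheme. A minimal sketch of the fix, assuming your API keeps returning ipAddress and port fields:
def get_proxy(self):
    try:
        req = self.session.post(url=self.script_function_url,
                                headers=self.script_function_header,
                                json={"action": "proxy"},
                                verify=False, timeout=20).json()
        # Prepend an explicit scheme; without it urllib3 cannot parse the
        # proxy URL and raises ProxySchemeUnknown(None).
        self.proxy = {"https": "http://" + req['ipAddress'] + ":" + req['port']}
    except Exception as e:
        print(f'Proxy error: {e}')
        exit()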

Intermittent authentication error when posting to a pubsub topic

We have a data pipeline built in Google Cloud Dataflow that consumes messages from a pubsub topic and streams them into BigQuery. To test that it works we have some tests that run in a CI pipeline; these tests post messages onto the pubsub topic and verify that the messages are written to BigQuery successfully.
This is the code that posts to the pubsub topic:
import json
import time

from google.cloud import pubsub_v1

def post_messages(project_id, topic_id, rows):
    futures = dict()
    publisher = pubsub_v1.PublisherClient()
    topic_path = publisher.topic_path(project_id, topic_id)

    def get_callback(f, data):
        def callback(f):
            try:
                futures.pop(data)
            except KeyError:
                print("Please handle {} for {}.".format(f.exception(), data))
        return callback

    for row in rows:
        # When you publish a message, the client returns a future. Data must be a bytestring.
        # ...
        # construct a message in var json_data
        # ...
        message = json.dumps(json_data).encode("utf-8")
        future = publisher.publish(topic_path, message)
        futures_key = str(message)
        futures[futures_key] = future
        future.add_done_callback(get_callback(future, futures_key))

    # Wait for all the publish futures to resolve before exiting.
    while futures:
        time.sleep(1)
When we run this test in our CI pipeline it has started failing intermittently with this error:
21:38:55: AuthMetadataPluginCallback "<google.auth.transport.grpc.AuthMetadataPlugin object at 0x7f5247407220>" raised exception!
Traceback (most recent call last):
File "/opt/conda/envs/py3/lib/python3.8/site-packages/grpc/_plugin_wrapping.py", line 89, in __call__
self._metadata_plugin(
File "/opt/conda/envs/py3/lib/python3.8/site-packages/google/auth/transport/grpc.py", line 101, in __call__
callback(self._get_authorization_headers(context), None)
File "/opt/conda/envs/py3/lib/python3.8/site-packages/google/auth/transport/grpc.py", line 87, in _get_authorization_headers
self._credentials.before_request(
File "/opt/conda/envs/py3/lib/python3.8/site-packages/google/auth/credentials.py", line 134, in before_request
self.apply(headers)
File "/opt/conda/envs/py3/lib/python3.8/site-packages/google/auth/credentials.py", line 110, in apply
_helpers.from_bytes(token or self.token)
File "/opt/conda/envs/py3/lib/python3.8/site-packages/google/auth/_helpers.py", line 130, in from_bytes
raise ValueError("***0!r*** could not be converted to unicode".format(value))
ValueError: None could not be converted to unicode
Error: The operation was canceled.
Unfortunately this only fails in our CI pipeline, and even then it fails intermittently (only on a small percentage of all CI pipeline runs). If I run the same test locally it succeeds every time. When running in the CI pipeline the code authenticates as a service account, whereas when I run it locally it authenticates as myself.
I know from the error message that it is failing on this code:
if isinstance(result, six.text_type):
return result
else:
raise ValueError("{0!r} could not be converted to unicode".format(value))
https://github.com/googleapis/google-auth-library-python/blob/3c3fbf40b07e090f2be7fac5b304dbf438b5cd6c/google/auth/_helpers.py#L127-L130
which is in a python library from google that we install using pip.
Clearly the expression:
isinstance(result, six.text_type)
is evaluating to False. I put a breakpoint on that code when I ran it locally and discovered that under normal circumstances (i.e. when it works) the value of result is a long string that looks like some sort of auth token.
Given the error message:
ValueError: None could not be converted to unicode
it seems that whatever the google authentication libraries are doing, they are passing None through to the code shown above.
I am at the bounds of my knowledge here. Given this is only failing in a CI pipeline I don't have the opportunity to put a breakpoint in my code and debug it. Given the call stack in the error message this is something to do with authentication.
I'm hoping someone can advise on a course of action.
Can anyone explain a means by which I can discover why None is being passed through to the code that is raising an error?
We had the same error. Finally solved it by using a JSON Web Token for authentication per Google's Quickstart. Like so:
import json

from google.cloud import pubsub_v1
from google.auth import jwt

def post_messages(credentials_path, topic, list_of_message_dicts):
    credentials_dict = json.load(open(credentials_path, 'r'))
    audience = "https://pubsub.googleapis.com/google.pubsub.v1.Publisher"
    credentials_ob = jwt.Credentials.from_service_account_info(
        credentials_dict, audience=audience
    )
    publisher = pubsub_v1.PublisherClient(credentials=credentials_ob)
    for message_dict in list_of_message_dicts:
        message = json.dumps(message_dict, default=str).encode("utf-8")
        future = publisher.publish(topic, message)
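For reference, the topic argument here is the full topic path rather than the bare topic id. A hypothetical call (placeholder project, topic, and key file names) would look like:
post_messages(
    "service-account.json",                 # hypothetical key file path
    "projects/my-project/topics/my-topic",  # full topic path, not just the id
    [{"id": 1, "payload": "test"}],
)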
We also updated our environment but it didn't fix the ValueError until we changed to jwt. Here's the environment in any case:
google-api-core==2.4.0
google-api-python-client==2.36.0
google-auth==2.3.2
google-auth-httplib2==0.1.0
google-auth-oauthlib==0.4.6
google-cloud-core==2.1.0
google-cloud-pubsub==2.9.0
Tried the jwt solution above and though it solved the issue, it drastically degraded my write throughput.
Offering another workaround that solved this issue for me.
My GOOGLE_APPLICATION_CREDENTIALS env var was set to the location of my key file. Instead, unset that env variable and, at the start of your process, run:
gcloud auth activate-service-account {account_name} --key-file {location_of_key_file}
This allows google auth to bypass the key file and use the default service account setup (which is now the original, intended service account). Works with normal throughput and zero errors. :)

S3 Boto3 Stubber doesn't have a mapping for download_file?

Currently writing tests and trying to make use of the Stubber provided by botocore.
I'm trying:
client = boto3.client("s3")
response = {'Body': 'content'}
expected_params = {'Bucket': 'a_bucket_name', 'Key': 'a_path', 'Filename': 'a_target'}
with Stubber(client) as stubber:
stubber.add_response('download_file', response, expected_params)
download_file(client, "a_bucket_name", "a_path", "a_target")
where that download_file is my own function that just wraps the client's download_file call. It works in practice.
However, the test fails on stubber.add_response with an 'OperationNotFound' error. I stepped through using the debugger, and the issue appears here in the stub API:
if not hasattr(self.client, method):
    raise ValueError(
        "Client %s does not have method: %s"
        % (self.client.meta.service_model.service_name, method))
# Create a successful http response
http_response = AWSResponse(None, 200, {}, None)
operation_name = self.client.meta.method_to_api_mapping.get(method)  # <------- Error here
self._validate_response(operation_name, service_response)
There doesn't seem to be a mapping between the two in the dictionary. Is this a failure of the stub API, or am I missing something?
I've just found this issue, so it looks like for once it really is the library and not me:
https://github.com/boto/botocore/issues/974
That's because download_file and upload_file are customizations which live in boto3. They call out to one or many requests under the hood. Right now there's not a great story for supporting customizations other than recording the underlying commands they use and adding them to the stubber. There's an external library that can handle that for you, though we don't support it ourselves.
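To illustrate the "record the underlying commands" approach: with the default transfer configuration, a small-object download_file issues a HeadObject followed by a single GetObject, so stubbing those two calls can stand in for it. This is only a sketch under that assumption, not an officially supported pattern, reusing the bucket/key/target names from the question:
import io

import boto3
from botocore.response import StreamingBody
from botocore.stub import Stubber

client = boto3.client("s3")
data = b"content"

with Stubber(client) as stubber:
    # download_file first issues a HeadObject to learn the object size...
    stubber.add_response(
        "head_object",
        {"ContentLength": len(data), "ContentType": "binary/octet-stream"},
        {"Bucket": "a_bucket_name", "Key": "a_path"},
    )
    # ...then, for objects below the multipart threshold, a single GetObject.
    stubber.add_response(
        "get_object",
        {"Body": StreamingBody(io.BytesIO(data), len(data))},
        {"Bucket": "a_bucket_name", "Key": "a_path"},
    )
    client.download_file("a_bucket_name", "a_path", "a_target")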

Accessing a database using zeep

I am trying to programmatically retrieve information from a database (BRENDA) using Zeep.
The following is the code:
import zeep
import hashlib
wsdl = "https://www.brenda-enzymes.org/soap/brenda.wsdl"
password = hashlib.sha256("xx".encode('utf-8')).hexdigest()
parameters = "xxx," + password + ",ecNumber*{}#organism*{}#".format("2.7.1.2", "Homo sapiens")
client = zeep.Client(wsdl=wsdl)
print(client)
km_string = client.getKmValue(parameters)
However, I get the following error:
AttributeError: 'Client' object has no attribute 'getKmValue'
Could someone help me with this?
The above code works fine with the SOAPpy library in Python 2. However, I couldn't successfully install SOAPpy in Python 3, so I tried Zeep.
The sample code that shows SOAP implementation is available here
We fixed the webservice. It should work now. Please have a look at the SOAP documentation on our website.
Not the resolution, but some hints.
1) With zeep you need to put .service between the client and the name of the method; the correct syntax is client.service.getKmValue(parameters) (take a look at the documentation).
That said, for zeep getKmValue doesn't exist (even though it exists in the WSDL schema and SoapUI sees it).
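For reference, the corrected call would look like the sketch below (same placeholder credentials as in the question), although as noted zeep may still fail to resolve the operation:
import hashlib

import zeep

wsdl = "https://www.brenda-enzymes.org/soap/brenda.wsdl"
client = zeep.Client(wsdl=wsdl)

password = hashlib.sha256("xx".encode("utf-8")).hexdigest()
parameters = "xxx," + password + ",ecNumber*{}#organism*{}#".format("2.7.1.2", "Homo sapiens")

# Operations are exposed on client.service, not on the client object itself.
km_string = client.service.getKmValue(parameters)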
2) You can also try py-suds, but for some reason I obtain a 403 when calling the WSDL:
from suds.client import Client
import hashlib
client = Client("https://www.brenda-enzymes.org/soap/brenda.wsdl")

sesame http repository connection - begin()

I have the following code for creating a connection and starting a transaction:
org.openrdf.repository.RepositoryConnection con = repo.getConnection();
con.begin();
The line con.begin() produces the following error:
No signature of method: org.openrdf.repository.http.HTTPRepositoryConnection.begin() is applicable for argument types: () values: []
Possible solutions: wait(), find(), wait(long), is(java.lang.Object), print(java.io.PrintWriter), print(java.lang.Object)
The call is legitimate, and I don't know how I could fix this. I considered not using the call, but was told it is necessary to keep commits from becoming automatic. I'm not sure what the best solution is here; any help is much appreciated.
I fixed it by declaring the connection with def con = instead of the explicit type org.openrdf.repository.RepositoryConnection.
Also, I had a conflict between three different jar files (httpclient, httpcore, httpmime); removing older copies of them solved the issue.

Magento SOAP API: Error in retrieving catalogCategoryTree

Currently I'm using Magento 1.9.01 and PHP 5.3.28. In ASP.NET I'm trying to retrieve the catalog tree through the SOAP API using the following code:
var magentoService = new MagentoService.Mage_Api_Model_Server_Wsi_HandlerPortTypeClient();
var sessionId = magentoService.login(userName, apiKey);
var categoryTree = magentoService.catalogCategoryTree(sessionId, "", "");
The error I get is "Internal Error. Please see log for details."
And in the logs I can see the following:
Argument 1 passed to Mage_Catalog_Model_Category_Api::_nodeToArray() must be an instance of Varien_Data_Tree_Node, null given
From what I've read it can be a bug with PHP 5.4 or greater, but that's not the version I'm using... So if someone has any idea how to solve this, it would be greatly appreciated.
Seems pretty straightforward, though the error thrown suggests a much bigger problem. First, make sure that the variables are exactly as specified in your Magento installation (pay attention to caps). Second, you can't pass empty strings; instead try "Null".
Good luck