Exception "Resource temporarily unavailable" while firing 50 requests to check whether a snapshot exists or not - ssl

I am creating snapshots of 50 disks, and because gcloud does not allow snapshots with the same name, before firing each snapshot create I check whether the snapshot already exists in GCP. I fired 50 of these checks simultaneously, and 5-6 of the requests failed with the exception below.
snapshots().get(project=self.project, snapshot=name).execute()
Exception:
File "/tmp/cloudpoint/libs/gcp/lib/oauth2client/_helpers.py", line 133, in positional_wrapper return wrapped(*args, **kwargs)
File "/tmp/cloudpoint/libs/gcp/lib/googleapiclient/http.py", line 837, in execute method=str(self.method), body=self.body, headers=self.headers)
File "/tmp/cloudpoint/libs/gcp/lib/googleapiclient/http.py", line 163, in _retry_request resp, content = http.request(uri, method, *args, **kwargs)
File "/tmp/cloudpoint/libs/gcp/lib/oauth2client/transport.py", line 175, in new_request redirections, connection_type)
File "/tmp/cloudpoint/libs/gcp/lib/oauth2client/transport.py", line 282, in request connection_type=connection_type)
File "/usr/lib/python2.7/dist-packages/httplib2/__init__.py", line 1607, in request (response, content) = self._request(conn, authority, uri, request_uri, method, body, headers, redirections, cachekey)
File "/usr/lib/python2.7/dist-packages/httplib2/__init__.py", line 1349, in _request(response, content) = self._conn_request(conn, request_uri, method, body, headers)
File "/usr/lib/python2.7/dist-packages/httplib2/__init__.py", line 1305, in _conn_request response = conn.getresponse()
File "/usr/lib/python2.7/httplib.py", line 1136, in getresponse response.begin()
File "/usr/lib/python2.7/httplib.py", line 453, in begin version, status, reason = self._read_status()
File "/usr/lib/python2.7/httplib.py", line 409, in _read_status line = self.fp.readline(_MAXLINE + 1)
File "/usr/lib/python2.7/socket.py", line 480, in readline data = self._sock.recv(self._rbufsize)
File "/usr/lib/python2.7/ssl.py", line 756, in recv return self.read(buflen)
File "/usr/lib/python2.7/ssl.py", line 643, in read v = self._sslobj.read(len)
error: [Errno 11] Resource temporarily unavailable

The error message "Resource temporarily unavailable" means that the Compute Engine API was not able to fulfill the request. Since you made 50 simultaneous requests to check whether the snapshots exist, the API could not handle all 50 at the same time and timed out for 5-6 of them.
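One way to cope with this is to throttle the existence checks and retry the ones that hit a transient error instead of firing all 50 at once. Below is a minimal sketch, assuming a compute service object built with googleapiclient.discovery; the snapshot_exists helper and the retry/backoff parameters are illustrative, not from the original code:

import random
import socket
import time

from googleapiclient.errors import HttpError


def snapshot_exists(compute, project, name, retries=5):
    """Illustrative helper: checks for a snapshot, retrying transient
    network/SSL failures with exponential backoff."""
    for attempt in range(retries):
        try:
            compute.snapshots().get(project=project, snapshot=name).execute()
            return True
        except HttpError as err:
            if err.resp.status == 404:
                return False          # snapshot genuinely does not exist
            raise                     # other API errors are not retried here
        except (socket.error, IOError):
            # transient network/SSL hiccup: back off and try again
            time.sleep((2 ** attempt) + random.random())
    raise RuntimeError("snapshot check for %r kept failing" % name)

Limiting how many of these checks run concurrently (for example with a small worker pool) further reduces the chance of hitting the transient error at all.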

Related

Selenium"Can't connect to HTTPS URL because the SSL module is not available

I have an Anaconda environment with Selenium installed. When I try to run my script I get this error:
Traceback (most recent call last):
File "c:\Users\Nick\Desktop\Code\product-scraper\sephora-scraper\scraper.py", line 31, in <module>
ChromeDriverManager().install(), options=options)
File "C:\Users\Nick\anaconda3\envs\web-scraper\lib\site-packages\webdriver_manager\chrome.py", line 34, in install
driver_path = self._get_driver_path(self.driver)
File "C:\Users\Nick\anaconda3\envs\web-scraper\lib\site-packages\webdriver_manager\manager.py", line 21, in _get_driver_path
driver_version = driver.get_version()
File "C:\Users\Nick\anaconda3\envs\web-scraper\lib\site-packages\webdriver_manager\driver.py", line 40, in get_version
return self.get_latest_release_version()
File "C:\Users\Nick\anaconda3\envs\web-scraper\lib\site-packages\webdriver_manager\driver.py", line 63, in get_latest_release_version
resp = requests.get(f"{self._latest_release_url}_{self.browser_version}")
File "C:\Users\Nick\anaconda3\envs\web-scraper\lib\site-packages\requests\api.py", line 76, in get
return request('get', url, params=params, **kwargs)
File "C:\Users\Nick\anaconda3\envs\web-scraper\lib\site-packages\requests\api.py", line 61, in request
return session.request(method=method, url=url, **kwargs)
File "C:\Users\Nick\anaconda3\envs\web-scraper\lib\site-packages\requests\sessions.py", line 542, in request
resp = self.send(prep, **send_kwargs)
File "C:\Users\Nick\anaconda3\envs\web-scraper\lib\site-packages\requests\sessions.py", line 655, in send
r = adapter.send(request, **kwargs)
File "C:\Users\Nick\anaconda3\envs\web-scraper\lib\site-packages\requests\adapters.py", line 514, in send
raise SSLError(e, request=request)
requests.exceptions.SSLError: HTTPSConnectionPool(host='chromedriver.storage.googleapis.com', port=443): Max retries exceeded with url: /LATEST_RELEASE_88.0.4324 (Caused by SSLError("Can't connect to HTTPS URL because the SSL module is not available."))
I'm new to Anaconda, so I don't know what else to provide. Please leave a comment if I need to add anything and I will add it right away. Thanks.
Try adding these paths to your PATH environment variable:
..\Anaconda3
..\Anaconda3\scripts
..\Anaconda3\Library\bin
You might need to restart Windows after setting the environment path.
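To confirm the fix took effect, a quick sanity check is to import the ssl module from the same conda environment (a minimal sketch; the printed OpenSSL version will differ per machine):

# Run inside the activated environment (e.g. "web-scraper").
# If the DLLs are now found on PATH, this imports cleanly and prints the OpenSSL version.
import ssl
print(ssl.OPENSSL_VERSION)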

Socket error when connecting to Google BigQuery from my local PC

For the past two days the following steps worked fine to connect to BigQuery from PyCharm on my local PC:
step 1: gcloud auth application-default login
step 2: connect to BigQuery from PyCharm on my local PC.
However, when I tried the same method today, the error below occurred.
Since I am behind the "Great Wall" in China, I can only use a VPN to log in to Google Cloud, so I am not sure whether this is caused by the VPN or by the settings of my Google account. I also tried with a "service-account-key", and it turned out to be the same issue. Log:
Traceback (most recent call last):
File "C:/Users/emma/PycharmProjects/GCP/INIT.py", line 134, in <module>
explicit()
File "C:/Users/emma/PycharmProjects/GCP/INIT.py", line 54, in explicit
for dataset in bigquery_client.list_datasets():
File "C:\Python27\lib\site-packages\google\cloud\iterator.py", line 218, in _items_iter
for page in self._page_iter(increment=False):
File "C:\Python27\lib\site-packages\google\cloud\iterator.py", line 247, in _page_iter
page = self._next_page()
File "C:\Python27\lib\site-packages\google\cloud\iterator.py", line 347, in _next_page
response = self._get_next_page_response()
File "C:\Python27\lib\site-packages\google\cloud\iterator.py", line 396, in _get_next_page_response
query_params=params)
File "C:\Python27\lib\site-packages\google\cloud\_http.py", line 299, in api_request
headers=headers, target_object=_target_object)
File "C:\Python27\lib\site-packages\google\cloud\_http.py", line 193, in _make_request
return self._do_request(method, url, headers, data, target_object)
File "C:\Python27\lib\site-packages\google\cloud\_http.py", line 223, in _do_request
body=data)
File "C:\Python27\lib\site-packages\google_auth_httplib2.py", line 187, in request
self._request, method, uri, request_headers)
File "C:\Python27\lib\site-packages\google\auth\credentials.py", line 121, in before_request
self.refresh(request)
File "C:\Python27\lib\site-packages\google\oauth2\service_account.py", line 310, in refresh
request, self._token_uri, assertion)
File "C:\Python27\lib\site-packages\google\oauth2\_client.py", line 143, in jwt_grant
response_data = _token_endpoint_request(request, token_uri, body)
File "C:\Python27\lib\site-packages\google\oauth2\_client.py", line 104, in _token_endpoint_request
method='POST', url=token_uri, headers=headers, body=body)
File "C:\Python27\lib\site-packages\google_auth_httplib2.py", line 116, in __call__
url, method=method, body=body, headers=headers, **kwargs)
File "C:\Python27\lib\site-packages\httplib2\__init__.py", line 1609, in request
(response, content) = self._request(conn, authority, uri, request_uri, method, body, headers, redirections, cachekey)
File "C:\Python27\lib\site-packages\httplib2\__init__.py", line 1351, in _request
(response, content) = self._conn_request(conn, request_uri, method, body, headers)
File "C:\Python27\lib\site-packages\httplib2\__init__.py", line 1272, in _conn_request
conn.connect()
File "C:\Python27\lib\site-packages\httplib2\__init__.py", line 1075, in connect
raise socket.error, msg
socket.error: [Errno 10060]
I updated the Google Cloud Python library, and everything works fine now.
Hope it stays that way :(
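For reference, the equivalent dataset listing with a current release of the google-cloud-bigquery client looks roughly like this (a sketch; it assumes application-default credentials are already configured, e.g. via gcloud auth application-default login):

# pip install --upgrade google-cloud-bigquery
from google.cloud import bigquery

client = bigquery.Client()  # picks up application-default credentials
for dataset in client.list_datasets():
    print(dataset.dataset_id)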

SSL3_GET_RECORD:wrong version number when loading to a BigQuery table from Google Cloud Storage

When loading a BigQuery table from files in Google Cloud Storage, I kept getting this SSL3_GET_RECORD:wrong version number exception.
Yet when I later check the job history on the BigQuery job history web page, the load job shows that it succeeded.
Could you please help? Thank you.
Here is the error message I am getting:
========================================
== Platform ==
CPython:2.7.6:Linux-2.6.18-194.32.1.el5-x86_64-with-redhat-5.5-Final
== bq version ==
2.0.18
== Command line ==
['/opt/google-cloud-sdk/platform/bq/bq.py', '--credential_file', '/offworld/hornet/.config/gcloud/legacy_credentials/clok#vindicotech.com/singlestore.json', '--project', 'formal-cascade-571', 'load', '--source_format=NEWLINE_DELIMITED_JSON', 'dw_sandbox.impressions_20140603', 'gs://dw_sandbox/impressions/20140603/20140604175042285_20140604195938608_20140603_0_*', '/offworld/specificmedia/logsTobq/schemas/impressionsSchema.txt']
== UTC timestamp ==
2014-06-05 01:19:06
== Error trace ==
File "/opt/google-cloud-sdk/platform/bq/bq.py", line 779, in RunSafely
return_value = self.RunWithArgs(*args, **kwds)
File "/opt/google-cloud-sdk/platform/bq/bq.py", line 1020, in RunWithArgs
job = client.Load(table_reference, source, schema=schema, **opts)
File "/opt/google-cloud-sdk/platform/bq/bigquery_client.py", line 2011, in Load
upload_file=upload_file, **kwds)
File "/opt/google-cloud-sdk/platform/bq/bigquery_client.py", line 1611, in ExecuteJob
job_id=job_id)
File "/opt/google-cloud-sdk/platform/bq/bigquery_client.py", line 1599, in RunJobSynchronously
result = self.WaitJob(job_reference)
File "/opt/google-cloud-sdk/platform/bq/bigquery_client.py", line 1713, in WaitJob
done, job = self.PollJob(job_reference, status=status, wait=wait)
File "/opt/google-cloud-sdk/platform/bq/bigquery_client.py", line 1752, in PollJob
job = self.apiclient.jobs().get(**dict(job_reference)).execute()
File "/opt/google-cloud-sdk/platform/bq/bigquery_client.py", line 307, in execute
return super(BigqueryHttp, self).execute(**kwds)
File "/opt/google-cloud-sdk/bin/bootstrapping/../../lib/oauth2client/util.py", line 132, in positional_wrapper
return wrapped(*args, **kwargs)
File "/opt/google-cloud-sdk/bin/bootstrapping/../../lib/apiclient/http.py", line 716, in execute
body=self.body, headers=self.headers)
File "/opt/google-cloud-sdk/bin/bootstrapping/../../lib/oauth2client/util.py", line 132, in positional_wrapper
return wrapped(*args, **kwargs)
File "/opt/google-cloud-sdk/bin/bootstrapping/../../lib/oauth2client/client.py", line 490, in new_request
redirections, connection_type)
File "/opt/google-cloud-sdk/bin/bootstrapping/../../lib/httplib2/__init__.py", line 1586, in request
(response, content) = self._request(conn, authority, uri, request_uri, method, body, headers, redirections, cachekey)
File "/opt/google-cloud-sdk/bin/bootstrapping/../../lib/httplib2/__init__.py", line 1333, in _request
(response, content) = self._conn_request(conn, request_uri, method, body, headers)
File "/opt/google-cloud-sdk/bin/bootstrapping/../../lib/httplib2/__init__.py", line 1289, in _conn_request
response = conn.getresponse()
File "/usr/lib64/python2.7/httplib.py", line 1045, in getresponse
response.begin()
File "/usr/lib64/python2.7/httplib.py", line 409, in begin
version, status, reason = self._read_status()
File "/usr/lib64/python2.7/httplib.py", line 365, in _read_status
line = self.fp.readline(_MAXLINE + 1)
File "/usr/lib64/python2.7/socket.py", line 476, in readline
data = self._sock.recv(self._rbufsize)
File "/usr/lib64/python2.7/ssl.py", line 241, in recv
return self.read(buflen)
File "/usr/lib64/python2.7/ssl.py", line 160, in read
return self._sslobj.read(len)
========================================
Unexpected exception in load operation: [Errno 1] _ssl.c:1426:
error:1408F10B:SSL routines:SSL3_GET_RECORD:wrong version number
Is there any chance that you're using the same HTTP object in multiple threads, i.e. the thread you create the job in is not necessarily the same thread you poll for completion in? If that is the case, this came up internally within Google today, and this was the fix:
class _HTTPFactoryWrapper(object):
  """Wraps a request factory so that each request returns a new http object.

  API client's Http object is not threadsafe since calls to the same domain will
  reuse the same HTTPConnection. If one API call is outstanding then a second
  will try to send a request over the same domain. This causes chaos that
  seems to surface itself as SSLErrors during processing.
  """

  def __init__(self, factory):
    self.factory = factory

  def request(self, *args, **kwargs):
    return self.factory.Create().request(*args, **kwargs)
Then change the creation of the bigquery stub from:
return discovery.build(api_name, api_version, http=http_factory.Create())
to:
http_wrapper = _HTTPFactoryWrapper(http_factory)
return discovery.build(api_name, api_version, http=http_wrapper)
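For completeness, here is a sketch of the kind of factory the wrapper expects; the _SimpleHttpFactory name is illustrative, and the original code presumably already has its own http_factory with a Create() method:

import httplib2

class _SimpleHttpFactory(object):
  """Illustrative factory: Create() returns a fresh httplib2.Http each time,
  so no HTTPConnection is ever shared between threads."""

  def Create(self):
    return httplib2.Http()

http_wrapper = _HTTPFactoryWrapper(_SimpleHttpFactory())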

BadStatusLine: '' error when deleting a Google Calendar event from Python code

I've successfully created and updated calendar events from Python. Below is my code to delete an event:
def delete_google_event(self, cr, uid, task, user):
    g_client = gtools.gcal.google_calendar_interface()
    g_client.connect(user.google_email, user.google_password)
    g_client.delete(task.google_event_id)
    message = "Google event deleted, old id: %s" % (task.google_event_id)
I get the error below when using the above code. From the message BadStatusLine: '' I understand that the server returned a response the client does not understand, but I am not sure how to solve it. The error also seems to come from the Google Calendar API; could it be a versioning problem? (I run this inside OpenERP, and I don't think OpenERP itself is the problem.)
{/usr/lib/python2.7/dist-packages/gtools/gcal.py} deleting http://www.google.com/calendar/feeds/default/private/full/fpdoqrq4q5rroggkn2uaamojb0
{/usr/lib/python2.7/dist-packages/gtools/gcal.py} quering element uri: http://www.google.com/calendar/feeds/default/private/full/fpdoqrq4q5rroggkn2uaamojb0
!!!!http://localhost:9888/
!!!!
!!!!http://localhost:9888/
!!!!http://localhost:9888/
!!!!
!!!!http://localhost:9888/
2013-09-02 12:21:16,945 17720 ERROR jul-16-7575-t1 openerp.osv.osv: Uncaught exception
Traceback (most recent call last):
File "/opt/workspace/openerp space/openerp-7.0-20130716-231027/openerp/osv/osv.py", line 131, in wrapper
return f(self, dbname, *args, **kwargs)
File "/opt/workspace/openerp space/openerp-7.0-20130716-231027/openerp/osv/osv.py", line 197, in execute
res = self.execute_cr(cr, uid, obj, method, *args, **kw)
File "/opt/workspace/openerp space/openerp-7.0-20130716-231027/openerp/osv/osv.py", line 185, in execute_cr
return getattr(object, method)(cr, uid, *args, **kw)
File "/opt/workspace/openerp space/openerp-7.0-20130716-231027/openerp/addons/google_calendar_task_sync/project_google_calendar.py", line 67, in unlink
self.delete_google_event(cr, uid, task, goog_uid)
File "/opt/workspace/openerp space/openerp-7.0-20130716-231027/openerp/addons/google_calendar_task_sync/project_google_calendar.py", line 92, in delete_google_event
g_client.delete(task.google_event_id)
File "/usr/lib/python2.7/dist-packages/gtools/gcal.py", line 78, in delete
self.cal_srv.DeleteEvent(event_uri)
File "/usr/lib/pymodules/python2.7/gdata/calendar/service.py", line 313, in DeleteEvent
url_params=url_params, escape_params=escape_params)
File "/usr/lib/pymodules/python2.7/gdata/service.py", line 1429, in Delete
headers=extra_headers, url_params=url_params)
File "/usr/lib/pymodules/python2.7/atom/__init__.py", line 92, in optional_warn_function
return f(*args, **kwargs)
File "/usr/lib/pymodules/python2.7/atom/service.py", line 185, in request
data=data, headers=all_headers)
File "/usr/lib/pymodules/python2.7/gdata/auth.py", line 725, in perform_request
return http_client.request(operation, url, data=data, headers=headers)
File "/usr/lib/pymodules/python2.7/atom/http.py", line 174, in request
return connection.getresponse()
File "/usr/lib/python2.7/httplib.py", line 1030, in getresponse
response.begin()
File "/usr/lib/python2.7/httplib.py", line 407, in begin
version, status, reason = self._read_status()
File "/usr/lib/python2.7/httplib.py", line 371, in _read_status
raise BadStatusLine(line)
BadStatusLine: ''
I've already referred to this link: Why am I getting this error in python ? (httplib). Still not sure of the problem. Kindly give me some clues to fix this. Thanks a lot for your time.
Hurrah!! It works fine over a direct connection, but not through the proxy.
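If running behind a proxy is unavoidable, one thing worth trying is to expose the proxy to the client before connecting. This is only a hypothetical sketch: it assumes your gdata/atom version honours the standard http_proxy/https_proxy environment variables (older releases did; check yours), and proxy.example.com:3128 is a placeholder, not a value from the original setup.

import os

# Hypothetical: set standard proxy variables before the gdata client connects.
# Replace the placeholder with your actual proxy host and port.
os.environ['http_proxy'] = 'http://proxy.example.com:3128'
os.environ['https_proxy'] = 'http://proxy.example.com:3128'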

Unable to load data into BigQuery (BadStatusLine)

I am trying to load a local file into an existing BigQuery table. I have tried three times on different days; the file has 1.1M rows. I can't spot any specific error being reported. Following are the details spit out...
== Platform ==
CPython:2.7.4:Linux-2.6.18-308.11.1.el5.centos.plus-x86_64-with-redhat-5.8-Final
== bq version ==
v2.0.12
== Command line ==
['/opt/./python2.7.4/bin/bq', 'load', '395733598146:apache_l1.sjc_web_201304', 'x.2013-04-23']
== UTC timestamp ==
2013-05-01 18:48:17
== Error trace ==
File "build/bdist.linux-x86_64/egg/bq.py", line 652, in RunSafely
return_value = self.RunWithArgs(*args, **kwds)
File "build/bdist.linux-x86_64/egg/bq.py", line 880, in RunWithArgs
job = client.Load(table_reference, source, schema=schema, **opts)
File "build/bdist.linux-x86_64/egg/bigquery_client.py", line 1634, in Load
upload_file=upload_file, **kwds)
File "build/bdist.linux-x86_64/egg/bigquery_client.py", line 1366, in ExecuteJob
job_id=job_id)
File "build/bdist.linux-x86_64/egg/bigquery_client.py", line 1352, in RunJobSynchronously
upload_file=upload_file, job_id=job_id)
File "build/bdist.linux-x86_64/egg/bigquery_client.py", line 1346, in StartJob
projectId=project_id).execute()
File "build/bdist.linux-x86_64/egg/bigquery_client.py", line 274, in execute
return super(BigqueryHttp, self).execute(**kwds)
File "build/bdist.linux-x86_64/egg/oauth2client/util.py", line 120, in positional_wrapper
return wrapped(*args, **kwargs)
File "build/bdist.linux-x86_64/egg/apiclient/http.py", line 656, in execute
_, body = self.next_chunk(http=http)
File "build/bdist.linux-x86_64/egg/oauth2client/util.py", line 120, in positional_wrapper
return wrapped(*args, **kwargs)
File "build/bdist.linux-x86_64/egg/apiclient/http.py", line 784, in next_chunk
headers=headers)
File "build/bdist.linux-x86_64/egg/oauth2client/util.py", line 120, in positional_wrapper
return wrapped(*args, **kwargs)
File "build/bdist.linux-x86_64/egg/oauth2client/client.py", line 428, in new_request
redirections, connection_type)
File "/opt/python2.7.4/lib/python2.7/site-packages/httplib2-0.8-py2.7.egg/httplib2/__init__.py", line 1570, in request
(response, content) = self._request(conn, authority, uri, request_uri, method, body, headers, redirections, cachekey)
File "/opt/python2.7.4/lib/python2.7/site-packages/httplib2-0.8-py2.7.egg/httplib2/__init__.py", line 1317, in _request
(response, content) = self._conn_request(conn, request_uri, method, body, headers)
File "/opt/python2.7.4/lib/python2.7/site-packages/httplib2-0.8-py2.7.egg/httplib2/__init__.py", line 1286, in _conn_request
response = conn.getresponse()
File "/opt/python2.7.4/lib/python2.7/httplib.py", line 1045, in getresponse
response.begin()
File "/opt/python2.7.4/lib/python2.7/httplib.py", line 409, in begin
version, status, reason = self._read_status()
File "/opt/python2.7.4/lib/python2.7/httplib.py", line 373, in _read_status
raise BadStatusLine(line)
BigQuery doesn't like large local files being uploaded directly. Try uploading the file to a Google Cloud Storage bucket (gs://) first, and then importing it into BigQuery from there.
Install gsutil for the command line, or use the Google Developers Console in your web browser.
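With the current Python client libraries (rather than the bq CLI), that two-step path looks roughly like this; the bucket, dataset and table names below are placeholders, not values from the original question:

from google.cloud import bigquery, storage

# 1. Stage the local file in a Cloud Storage bucket.
storage.Client().bucket("my-staging-bucket").blob("x.2013-04-23").upload_from_filename("x.2013-04-23")

# 2. Load it into the existing table from gs://.
bq_client = bigquery.Client()
job = bq_client.load_table_from_uri(
    "gs://my-staging-bucket/x.2013-04-23",
    "mydataset.mytable",
    job_config=bigquery.LoadJobConfig(source_format=bigquery.SourceFormat.CSV),
)
job.result()  # blocks until the load job finishes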
You can also load a local file into an existing BigQuery table:
All Rows:
bq load --source_format=CSV mydataset.mytable myfile.csv col1:INTEGER,col2:STRING
Skip First Row:
bq load --skip_leading_rows=1 --source_format=CSV mydataset.mytable myfile.csv col1:INTEGER,col2:STRING