SSL3_GET_RECORD:wrong version number when loading to bigquery table from google cloud storage - google-bigquery

When loading data into a BigQuery table from files in Google Cloud Storage, I kept getting this SSL3_GET_RECORD:wrong version number exception.
However, when I later check the job history on the Google BigQuery job history web page, the load job shows that it succeeded.
Could you please help? Thank you.
Here is the error message I am getting:
========================================
== Platform ==
CPython:2.7.6:Linux-2.6.18-194.32.1.el5-x86_64-with-redhat-5.5-Final
== bq version ==
2.0.18
== Command line ==
['/opt/google-cloud-sdk/platform/bq/bq.py', '--credential_file', '/offworld/hornet/.config/gcloud/legacy_credentials/clok#vindicotech.com/singlestore.json', '--project', 'formal-cascade-571', 'load', '--source_format=NEWLINE_DELIMITED_JSON', 'dw_sandbox.impressions_20140603', 'gs://dw_sandbox/impressions/20140603/20140604175042285_20140604195938608_20140603_0_*', '/offworld/specificmedia/logsTobq/schemas/impressionsSchema.txt']
== UTC timestamp ==
2014-06-05 01:19:06
== Error trace ==
File "/opt/google-cloud-sdk/platform/bq/bq.py", line 779, in RunSafely
return_value = self.RunWithArgs(*args, **kwds)
File "/opt/google-cloud-sdk/platform/bq/bq.py", line 1020, in RunWithArgs
job = client.Load(table_reference, source, schema=schema, **opts)
File "/opt/google-cloud-sdk/platform/bq/bigquery_client.py", line 2011, in Load
upload_file=upload_file, **kwds)
File "/opt/google-cloud-sdk/platform/bq/bigquery_client.py", line 1611, in ExecuteJob
job_id=job_id)
File "/opt/google-cloud-sdk/platform/bq/bigquery_client.py", line 1599, in RunJobSynchronously
result = self.WaitJob(job_reference)
File "/opt/google-cloud-sdk/platform/bq/bigquery_client.py", line 1713, in WaitJob
done, job = self.PollJob(job_reference, status=status, wait=wait)
File "/opt/google-cloud-sdk/platform/bq/bigquery_client.py", line 1752, in PollJob
job = self.apiclient.jobs().get(**dict(job_reference)).execute()
File "/opt/google-cloud-sdk/platform/bq/bigquery_client.py", line 307, in execute
return super(BigqueryHttp, self).execute(**kwds)
File "/opt/google-cloud-sdk/bin/bootstrapping/../../lib/oauth2client/util.py", line 132, in positional_wrapper
return wrapped(*args, **kwargs)
File "/opt/google-cloud-sdk/bin/bootstrapping/../../lib/apiclient/http.py", line 716, in execute
body=self.body, headers=self.headers)
File "/opt/google-cloud-sdk/bin/bootstrapping/../../lib/oauth2client/util.py", line 132, in positional_wrapper
return wrapped(*args, **kwargs)
File "/opt/google-cloud-sdk/bin/bootstrapping/../../lib/oauth2client/client.py", line 490, in new_request
redirections, connection_type)
File "/opt/google-cloud-sdk/bin/bootstrapping/../../lib/httplib2/__init__.py", line 1586, in request
(response, content) = self._request(conn, authority, uri, request_uri, method, body, headers, redirections, cachekey)
File "/opt/google-cloud-sdk/bin/bootstrapping/../../lib/httplib2/__init__.py", line 1333, in _request
(response, content) = self._conn_request(conn, request_uri, method, body, headers)
File "/opt/google-cloud-sdk/bin/bootstrapping/../../lib/httplib2/__init__.py", line 1289, in _conn_request
response = conn.getresponse()
File "/usr/lib64/python2.7/httplib.py", line 1045, in getresponse
response.begin()
File "/usr/lib64/python2.7/httplib.py", line 409, in begin
version, status, reason = self._read_status()
File "/usr/lib64/python2.7/httplib.py", line 365, in _read_status
line = self.fp.readline(_MAXLINE + 1)
File "/usr/lib64/python2.7/socket.py", line 476, in readline
data = self._sock.recv(self._rbufsize)
File "/usr/lib64/python2.7/ssl.py", line 241, in recv
return self.read(buflen)
File "/usr/lib64/python2.7/ssl.py", line 160, in read
return self._sslobj.read(len)
========================================
Unexpected exception in load operation: [Errno 1] _ssl.c:1426:
error:1408F10B:SSL routines:SSL3_GET_RECORD:wrong version number

Is there any chance that you're using the same HTTP object in multiple threads? That is, the thread you create the job in is not necessarily the same thread you poll for completion in? If this is the case: this came up internally at Google today, and this was the fix:
class _HTTPFactoryWrapper(object):
    """Wraps a request factory so that each request returns a new http object.

    The API client's Http object is not thread-safe, since calls to the same
    domain reuse the same HTTPConnection. If one API call is outstanding, a
    second call will try to send a request over the same connection, which
    causes the chaos that surfaces as SSLErrors during processing.
    """

    def __init__(self, factory):
        self.factory = factory

    def request(self, *args, **kwargs):
        return self.factory.Create().request(*args, **kwargs)
Then change the creation of the BigQuery stub from:
return discovery.build(api_name, api_version, http=http_factory.Create())
to:
http_wrapper = _HTTPFactoryWrapper(http_factory)
return discovery.build(api_name, api_version, http=http_wrapper)
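For context, a minimal standalone sketch of the same idea: give each thread its own Http object so that two threads never share one HTTPConnection. The helper name get_bigquery_service and the thread-local cache are illustrative assumptions, not part of the bq tool:

import threading

import httplib2
from apiclient import discovery  # google-api-python-client

# Illustrative thread-local cache: each thread builds and reuses its own
# service object, so no two threads ever share one HTTPConnection.
_local = threading.local()

def get_bigquery_service(credentials):
    """Return a per-thread BigQuery service stub (hypothetical helper)."""
    if getattr(_local, 'service', None) is None:
        # A fresh httplib2.Http per thread avoids interleaving two requests
        # on one SSL connection, which is what surfaces as
        # SSL3_GET_RECORD:wrong version number.
        http = credentials.authorize(httplib2.Http())
        _local.service = discovery.build('bigquery', 'v2', http=http)
    return _local.service

With that in place, the thread that creates the job and the thread that polls it each talk over their own connection.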

Related

The security token included in the request is expired, when I try to update credentials

I am using Celery with SQS as a broker, and I am trying to renew my credentials ("AWS_ACCESS_KEY_ID" and "AWS_SECRET_ACCESS_KEY") before they expire. The first time I run the task it succeeds, but after 15 minutes the credentials expire even though they have been renewed. The function to update the credentials is as follows:
import os
import boto3
from celery import Celery
from kombu.utils.url import safequote
def update_aws_credentials():
    role_info = {
        'RoleArn': f"arn:aws:iam::{os.environ['AWS_ACCOUNT_NUMER']}:role/my_role_execution",
        'RoleSessionName': 'roleExecution',
        'DurationSeconds': 900
    }
    sts_client = boto3.client('sts', region_name='eu-central-1')
    credentials = sts_client.assume_role(**role_info)

    aws_access_key_id = credentials["Credentials"]['AccessKeyId']
    aws_secret_access_key = credentials["Credentials"]['SecretAccessKey']
    aws_session_token = credentials["Credentials"]["SessionToken"]

    os.environ["AWS_ACCESS_KEY_ID"] = aws_access_key_id
    os.environ["AWS_SECRET_ACCESS_KEY"] = aws_secret_access_key
    os.environ["AWS_DEFAULT_REGION"] = 'eu-central-1'
    os.environ["AWS_SESSION_TOKEN"] = aws_session_token

    return aws_access_key_id, aws_secret_access_key

def get_celery(aws_access_key_id, aws_secret_access_key):
    broker = f"sqs://{safequote(aws_access_key_id)}:{safequote(aws_secret_access_key)}@"
    backend = 'redis://redis-service:6379/0'

    celery = Celery("my_task", broker=broker, backend=backend)
    celery.conf["broker_transport_options"] = {
        'polling_interval': 30,
        'region': 'eu-central-1',
        'predefined_queues': {
            "my_queue": {
                'url': f"https://sqs.eu-central-1.amazonaws.com/{os.environ['AWS_ACCOUNT_NUMER']}/my_queue"
            }
        }
    }
    celery.conf["task_default_queue"] = "my_queue"
    return celery

def refresh_sqs_credentials():
    access, secret = update_aws_credentials()
    return get_celery(access, secret)
Running refresh_sqs_credentials, new credentials are created:
celery = worker.refresh_sqs_credentials()
And then I run my task with celery:
task = celery.send_task('my_task.code_of_my_task', args=[content], task_id=task_id)
All tasks that I run before 15 minutes finish successfully, but after 15 minutes the error is the following:
[2021-12-14 14:08:15,637] ERROR in app: Exception on /tasks/run [POST]
Traceback (most recent call last):
File "/api/app.py", line 87, in post
task = celery.send_task('glgt_ap35080_dev_sqs_runalgo.allocation_alg_task', args=[content], task_id=task_id)
File "/usr/local/lib/python3.6/site-packages/celery/app/base.py", line 717, in send_task
amqp.send_task_message(P, name, message, **options)
File "/usr/local/lib/python3.6/site-packages/celery/app/amqp.py", line 547, in send_task_message
**properties
File "/usr/local/lib/python3.6/site-packages/kombu/messaging.py", line 178, in publish
exchange_name, declare,
File "/usr/local/lib/python3.6/site-packages/kombu/connection.py", line 525, in _ensured
return fun(*args, **kwargs)
File "/usr/local/lib/python3.6/site-packages/kombu/messaging.py", line 200, in _publish
mandatory=mandatory, immediate=immediate,
File "/usr/local/lib/python3.6/site-packages/kombu/transport/virtual/base.py", line 605, in basic_publish
return self._put(routing_key, message, **kwargs)
File "/usr/local/lib/python3.6/site-packages/kombu/transport/SQS.py", line 294, in _put
c.send_message(**kwargs)
File "/usr/local/lib/python3.6/site-packages/botocore/client.py", line 337, in _api_call
File "/usr/local/lib/python3.6/site-packages/botocore/client.py", line 656, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (ExpiredToken) when calling the SendMessage operation: The security token included in the request is expired
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/flask/app.py", line 1813, in full_dispatch_request
rv = self.dispatch_request()
File "/usr/local/lib/python3.6/site-packages/flask/app.py", line 1799, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/usr/local/lib/python3.6/site-packages/flask_restplus/api.py", line 325, in wrapper
resp = resource(*args, **kwargs)
File "/usr/local/lib/python3.6/site-packages/flask/views.py", line 88, in view
return self.dispatch_request(*args, **kwargs)
File "/usr/local/lib/python3.6/site-packages/flask_restplus/resource.py", line 44, in dispatch_request
resp = meth(*args, **kwargs)
File "/api/app.py", line 90, in post
abort(500)
File "/usr/local/lib/python3.6/site-packages/werkzeug/exceptions.py", line 774, in abort
return _aborter(status, *args, **kwargs)
File "/usr/local/lib/python3.6/site-packages/werkzeug/exceptions.py", line 755, in __call__
raise self.mapping[code](*args, **kwargs)
werkzeug.exceptions.InternalServerError: 500 Internal Server Error: The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.
10.142.95.217 - - [14/Dec/2021 14:08:15] "POST /tasks/run HTTP/1.1" 500 -
I'm storing the credentials in environment variables, so I don't understand why they expire after 15 minutes. Can someone help me, please?
The versions of the packages used are:
boto3==1.14.54
celery==5.0.0
kombu==5.0.2
pycurl==7.43.0.6
Thank you
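For illustration, a minimal sketch with hypothetical helper names: since assume_role is called with DurationSeconds=900, the credentials it returns carry an Expiration timestamp roughly 15 minutes out, and the broker URL built by get_celery embeds the key and secret at creation time, so changing os.environ afterwards does not affect an already-built app. One could track that Expiration and rebuild the Celery app shortly before the token lapses:

import datetime
import os

import boto3

def assume_role_with_expiry():
    """Call STS assume_role and also return the Expiration datetime."""
    sts = boto3.client('sts', region_name='eu-central-1')
    resp = sts.assume_role(
        RoleArn=f"arn:aws:iam::{os.environ['AWS_ACCOUNT_NUMER']}:role/my_role_execution",
        RoleSessionName='roleExecution',
        DurationSeconds=900,
    )
    c = resp['Credentials']
    return c['AccessKeyId'], c['SecretAccessKey'], c['SessionToken'], c['Expiration']

# Hypothetical cache of the current Celery app and its credential expiry.
_cache = {'celery': None, 'expires': None}

def get_fresh_celery():
    """Rebuild the Celery app (and its broker URL) shortly before expiry."""
    now = datetime.datetime.now(datetime.timezone.utc)
    if _cache['celery'] is None or now > _cache['expires'] - datetime.timedelta(minutes=2):
        key, secret, token, expires = assume_role_with_expiry()
        os.environ['AWS_SESSION_TOKEN'] = token
        # get_celery is the function from the question above; the broker URL
        # it builds embeds key/secret, so the app must be rebuilt, not mutated.
        _cache['celery'] = get_celery(key, secret)
        _cache['expires'] = expires
    return _cache['celery']

Each send would then go through get_fresh_celery().send_task(...), so the broker credentials are never older than their Expiration.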

Exception Resource Temporarily unavailable while firing 50 requests for checking snapshot exits or not

I am creating snapshots of 50 disks. Because gcloud does not allow snapshots with the same name, before firing each snapshot-create request I check whether the snapshot already exists in GCP. I fired the 50 existence checks simultaneously, and about 5-6 of the requests failed with the exception below.
snapshots().get(project=self.project, snapshot=name).execute()
Exception:
File "/tmp/cloudpoint/libs/gcp/lib/oauth2client/_helpers.py", line 133, in positional_wrapper
return wrapped(*args, **kwargs)
File "/tmp/cloudpoint/libs/gcp/lib/googleapiclient/http.py", line 837, in execute
method=str(self.method), body=self.body, headers=self.headers)
File "/tmp/cloudpoint/libs/gcp/lib/googleapiclient/http.py", line 163, in _retry_request
resp, content = http.request(uri, method, *args, **kwargs)
File "/tmp/cloudpoint/libs/gcp/lib/oauth2client/transport.py", line 175, in new_request
redirections, connection_type)
File "/tmp/cloudpoint/libs/gcp/lib/oauth2client/transport.py", line 282, in request
connection_type=connection_type)
File "/usr/lib/python2.7/dist-packages/httplib2/__init__.py", line 1607, in request
(response, content) = self._request(conn, authority, uri, request_uri, method, body, headers, redirections, cachekey)
File "/usr/lib/python2.7/dist-packages/httplib2/__init__.py", line 1349, in _request
(response, content) = self._conn_request(conn, request_uri, method, body, headers)
File "/usr/lib/python2.7/dist-packages/httplib2/__init__.py", line 1305, in _conn_request
response = conn.getresponse()
File "/usr/lib/python2.7/httplib.py", line 1136, in getresponse
response.begin()
File "/usr/lib/python2.7/httplib.py", line 453, in begin
version, status, reason = self._read_status()
File "/usr/lib/python2.7/httplib.py", line 409, in _read_status
line = self.fp.readline(_MAXLINE + 1)
File "/usr/lib/python2.7/socket.py", line 480, in readline
data = self._sock.recv(self._rbufsize)
File "/usr/lib/python2.7/ssl.py", line 756, in recv
return self.read(buflen)
File "/usr/lib/python2.7/ssl.py", line 643, in read
v = self._sslobj.read(len)
error: [Errno 11] Resource temporarily unavailable
The error message "Resource temporarily unavailable" means that the Compute Engine API was not available to fulfill the request. Since you made 50 simultaneous requests to check whether the snapshots exist, the Compute Engine API could not handle all 50 requests at once, so 5-6 of them timed out.
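One common mitigation is to retry the existence checks that fail transiently, with exponential backoff and a little jitter, rather than letting the whole batch fail. A minimal sketch, assuming a googleapiclient Compute Engine service object named compute (the helper name and backoff policy are illustrative choices, not from the original answer):

import random
import time

from googleapiclient.errors import HttpError

def snapshot_exists(compute, project, name, max_attempts=5):
    """Check whether a snapshot exists, retrying transient failures.

    Hypothetical helper: `compute` is a googleapiclient discovery service
    for Compute Engine.
    """
    for attempt in range(max_attempts):
        try:
            compute.snapshots().get(project=project, snapshot=name).execute()
            return True
        except HttpError as e:
            if e.resp.status == 404:
                return False  # snapshot definitively does not exist
            if attempt == max_attempts - 1:
                raise
        except (IOError, OSError):
            # Covers transient socket errors such as
            # "[Errno 11] Resource temporarily unavailable".
            if attempt == max_attempts - 1:
                raise
        # Exponential backoff with jitter before the next attempt.
        time.sleep((2 ** attempt) + random.random())

Limiting how many checks run concurrently (for example, a small worker pool instead of 50 parallel calls) also helps keep the failure rate down.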

BadStatusLine: '' error on deleting google calendar event from python code

I can successfully create and update calendar events from Python. Below is my code to delete an event.
def delete_google_event(self, cr, uid, task, user):
    g_client = gtools.gcal.google_calendar_interface()
    g_client.connect(user.google_email, user.google_password)
    g_client.delete(task.google_event_id)
    message = "Google event deleted, old id: %s" % (task.google_event_id)
I get the error below when using the above code. From the message BadStatusLine: '' I understand that I am receiving a response from the server that the system does not understand, but I am not sure how to solve it. The error also seems to come from the Google Calendar API; could it be a versioning problem? (I am doing this in OpenERP, and I don't think OpenERP itself is the problem.)
{/usr/lib/python2.7/dist-packages/gtools/gcal.py} deleting http://www.google.com/calendar/feeds/default/private/full/fpdoqrq4q5rroggkn2uaamojb0
{/usr/lib/python2.7/dist-packages/gtools/gcal.py} quering element uri: http://www.google.com/calendar/feeds/default/private/full/fpdoqrq4q5rroggkn2uaamojb0
!!!!http://localhost:9888/
!!!!
!!!!http://localhost:9888/
!!!!http://localhost:9888/
!!!!
!!!!http://localhost:9888/
2013-09-02 12:21:16,945 17720 ERROR jul-16-7575-t1 openerp.osv.osv: Uncaught exception
Traceback (most recent call last):
File "/opt/workspace/openerp space/openerp-7.0-20130716-231027/openerp/osv/osv.py", line 131, in wrapper
return f(self, dbname, *args, **kwargs)
File "/opt/workspace/openerp space/openerp-7.0-20130716-231027/openerp/osv/osv.py", line 197, in execute
res = self.execute_cr(cr, uid, obj, method, *args, **kw)
File "/opt/workspace/openerp space/openerp-7.0-20130716-231027/openerp/osv/osv.py", line 185, in execute_cr
return getattr(object, method)(cr, uid, *args, **kw)
File "/opt/workspace/openerp space/openerp-7.0-20130716-231027/openerp/addons/google_calendar_task_sync/project_google_calendar.py", line 67, in unlink
self.delete_google_event(cr, uid, task, goog_uid)
File "/opt/workspace/openerp space/openerp-7.0-20130716-231027/openerp/addons/google_calendar_task_sync/project_google_calendar.py", line 92, in delete_google_event
g_client.delete(task.google_event_id)
File "/usr/lib/python2.7/dist-packages/gtools/gcal.py", line 78, in delete
self.cal_srv.DeleteEvent(event_uri)
File "/usr/lib/pymodules/python2.7/gdata/calendar/service.py", line 313, in DeleteEvent
url_params=url_params, escape_params=escape_params)
File "/usr/lib/pymodules/python2.7/gdata/service.py", line 1429, in Delete
headers=extra_headers, url_params=url_params)
File "/usr/lib/pymodules/python2.7/atom/__init__.py", line 92, in optional_warn_function
return f(*args, **kwargs)
File "/usr/lib/pymodules/python2.7/atom/service.py", line 185, in request
data=data, headers=all_headers)
File "/usr/lib/pymodules/python2.7/gdata/auth.py", line 725, in perform_request
return http_client.request(operation, url, data=data, headers=headers)
File "/usr/lib/pymodules/python2.7/atom/http.py", line 174, in request
return connection.getresponse()
File "/usr/lib/python2.7/httplib.py", line 1030, in getresponse
response.begin()
File "/usr/lib/python2.7/httplib.py", line 407, in begin
version, status, reason = self._read_status()
File "/usr/lib/python2.7/httplib.py", line 371, in _read_status
raise BadStatusLine(line)
BadStatusLine: ''
I've already referred to this link: Why am I getting this error in python? (httplib). I'm still not sure of the problem. Kindly give me some clues to fix this. Thanks a lot for your time.
Hurrah!! It works fine over a direct connection, but not behind the proxy.
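Since it works directly but fails behind the proxy, one thing worth trying is exporting the proxy settings before connecting. This is a sketch under the assumption that the gdata/atom client in use honors the http_proxy/https_proxy (and proxy-username/proxy-password) environment variables, as gdata-python-client's ProxiedHttpClient does; the proxy host, port, and credentials below are placeholders:

import os

# Hypothetical proxy settings; replace with your real proxy host and port.
os.environ['http_proxy'] = 'http://proxy.example.com:8080'
os.environ['https_proxy'] = 'http://proxy.example.com:8080'
# Only needed if the proxy requires authentication.
os.environ['proxy-username'] = 'proxyuser'
os.environ['proxy-password'] = 'proxypass'

# Then connect and delete as in the question (gtools, user and task come
# from the surrounding OpenERP code).
g_client = gtools.gcal.google_calendar_interface()
g_client.connect(user.google_email, user.google_password)
g_client.delete(task.google_event_id)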

Unable to load data into bigquery (BadStatusLine)

I am trying to load a local file into an existing table in BigQuery. I have tried three times on different days. The file has 1.1M rows. I can't spot any specific error being encountered. Here are the details that were printed:
== Platform ==
CPython:2.7.4:Linux-2.6.18-308.11.1.el5.centos.plus-x86_64-with-redhat-5.8-Final
== bq version ==
v2.0.12
== Command line ==
['/opt/./python2.7.4/bin/bq', 'load', '395733598146:apache_l1.sjc_web_201304', 'x.2013-04-23']
== UTC timestamp ==
2013-05-01 18:48:17
== Error trace ==
File "build/bdist.linux-x86_64/egg/bq.py", line 652, in RunSafely
return_value = self.RunWithArgs(*args, **kwds)
File "build/bdist.linux-x86_64/egg/bq.py", line 880, in RunWithArgs
job = client.Load(table_reference, source, schema=schema, **opts)
File "build/bdist.linux-x86_64/egg/bigquery_client.py", line 1634, in Load
upload_file=upload_file, **kwds)
File "build/bdist.linux-x86_64/egg/bigquery_client.py", line 1366, in ExecuteJob
job_id=job_id)
File "build/bdist.linux-x86_64/egg/bigquery_client.py", line 1352, in RunJobSynchronously
upload_file=upload_file, job_id=job_id)
File "build/bdist.linux-x86_64/egg/bigquery_client.py", line 1346, in StartJob
projectId=project_id).execute()
File "build/bdist.linux-x86_64/egg/bigquery_client.py", line 274, in execute
return super(BigqueryHttp, self).execute(**kwds)
File "build/bdist.linux-x86_64/egg/oauth2client/util.py", line 120, in positional_wrapper
return wrapped(*args, **kwargs)
File "build/bdist.linux-x86_64/egg/apiclient/http.py", line 656, in execute
_, body = self.next_chunk(http=http)
File "build/bdist.linux-x86_64/egg/oauth2client/util.py", line 120, in positional_wrapper
return wrapped(*args, **kwargs)
File "build/bdist.linux-x86_64/egg/apiclient/http.py", line 784, in next_chunk
headers=headers)
File "build/bdist.linux-x86_64/egg/oauth2client/util.py", line 120, in positional_wrapper
return wrapped(*args, **kwargs)
File "build/bdist.linux-x86_64/egg/oauth2client/client.py", line 428, in new_request
redirections, connection_type)
File "/opt/python2.7.4/lib/python2.7/site-packages/httplib2-0.8-py2.7.egg/httplib2/__init__.py", line 1570, in request
(response, content) = self._request(conn, authority, uri, request_uri, method, body, headers, redirections, cachekey)
File "/opt/python2.7.4/lib/python2.7/site-packages/httplib2-0.8-py2.7.egg/httplib2/__init__.py", line 1317, in _request
(response, content) = self._conn_request(conn, request_uri, method, body, headers)
File "/opt/python2.7.4/lib/python2.7/site-packages/httplib2-0.8-py2.7.egg/httplib2/__init__.py", line 1286, in _conn_request
response = conn.getresponse()
File "/opt/python2.7.4/lib/python2.7/httplib.py", line 1045, in getresponse
response.begin()
File "/opt/python2.7.4/lib/python2.7/httplib.py", line 409, in begin
version, status, reason = self._read_status()
File "/opt/python2.7.4/lib/python2.7/httplib.py", line 373, in _read_status
raise BadStatusLine(line)
BigQuery doesn't like uploading large local files directly. Try first uploading the file to a Google Cloud Storage bucket (gs://), and then importing it into BigQuery from there.
Install gsutil to upload from the command line, or use the Google Developers Console in your web browser.
You can load a local file into an existing BigQuery table like this:
All Rows:
bq load --source_format=CSV mydataset.mytable myfile.csv col1:INTEGER,col2:STRING
Skip First Row:
bq load --skip_leading_rows=1 --source_format=CSV mydataset.mytable myfile.csv col1:INTEGER,col2:STRING
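For the Cloud Storage route suggested above, the two-step version of the failing load would look roughly like the following; gs://mybucket is a placeholder bucket name, and the table reference is taken from the command line in the question:
gsutil cp x.2013-04-23 gs://mybucket/x.2013-04-23
bq load 395733598146:apache_l1.sjc_web_201304 gs://mybucket/x.2013-04-23
Loading from gs:// also avoids the long resumable upload from the local machine, which is where the BadStatusLine appears in the trace.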

hg push error and username not specified in .hg/hgrc. Keyring will not be used

I did the following:
hg clone ...somelink.to.repo.in.hg... Giga
cd Giga
ls (it shows that giga.txt exists in the Giga directory)
vi giga.txt (made some changes)
hg commit -m "byte"
hg out (got the following error)
** unknown exception encountered, details follow
** report bug details to http://mercurial.selenic.com/bts/
** or mercurial#selenic.com
** Mercurial Distributed SCM (version 1.5)
** Extensions loaded: acl, bugzilla, children, churn, color, convert, extdiff, fetch, gpg, graphlog, hgcia, hgk, highlight, interhg, keyword, mercurial_keyring, mq, notify, pager, patchbomb, progress, purge, rebase, record, relink, schemes, share, transplant, zeroconf
Traceback (most recent call last):
File "/usr/bin/hg", line 27, in <module>
mercurial.dispatch.run()
File "/usr/lib/python2.6/site-packages/mercurial/dispatch.py", line 16, in run
sys.exit(dispatch(sys.argv[1:]))
File "/usr/lib/python2.6/site-packages/mercurial/dispatch.py", line 30, in dispatch
return _runcatch(u, args)
File "/usr/lib/python2.6/site-packages/mercurial/dispatch.py", line 47, in _runcatch
return _dispatch(ui, args)
File "/usr/lib/python2.6/site-packages/mercurial/dispatch.py", line 466, in _dispatch
return runcommand(lui, repo, cmd, fullargs, ui, options, d)
File "/usr/lib/python2.6/site-packages/mercurial/dispatch.py", line 336, in runcommand
ret = _runcommand(ui, options, cmd, d)
File "/usr/lib/python2.6/site-packages/mercurial/extensions.py", line 128, in wrap
return wrapper(origfn, *args, **kwargs)
File "/usr/lib/python2.6/site-packages/hgext/pager.py", line 66, in pagecmd
return orig(ui, options, cmd, cmdfunc)
File "/usr/lib/python2.6/site-packages/mercurial/dispatch.py", line 517, in _runcommand
return checkargs()
File "/usr/lib/python2.6/site-packages/mercurial/dispatch.py", line 471, in checkargs
return cmdfunc()
File "/usr/lib/python2.6/site-packages/mercurial/dispatch.py", line 465, in <lambda>
d = lambda: util.checksignature(func)(ui, *args, **cmdoptions)
File "/usr/lib/python2.6/site-packages/mercurial/util.py", line 401, in check
return func(*args, **kwargs)
File "/usr/lib/python2.6/site-packages/mercurial/extensions.py", line 116, in wrap
util.checksignature(origfn), *args, **kwargs)
File "/usr/lib/python2.6/site-packages/mercurial/util.py", line 401, in check
return func(*args, **kwargs)
File "/usr/lib/python2.6/site-packages/hgext/color.py", line 352, in nocolor
return orig(*args, **opts)
File "/usr/lib/python2.6/site-packages/mercurial/util.py", line 401, in check
return func(*args, **kwargs)
File "/usr/lib/python2.6/site-packages/mercurial/extensions.py", line 116, in wrap
util.checksignature(origfn), *args, **kwargs)
File "/usr/lib/python2.6/site-packages/mercurial/util.py", line 401, in check
return func(*args, **kwargs)
File "/usr/lib/python2.6/site-packages/hgext/mq.py", line 2648, in mqcommand
return orig(ui, repo, *args, **kwargs)
File "/usr/lib/python2.6/site-packages/mercurial/util.py", line 401, in check
return func(*args, **kwargs)
File "/usr/lib/python2.6/site-packages/mercurial/extensions.py", line 116, in wrap
util.checksignature(origfn), *args, **kwargs)
File "/usr/lib/python2.6/site-packages/mercurial/util.py", line 401, in check
return func(*args, **kwargs)
File "/usr/lib/python2.6/site-packages/hgext/graphlog.py", line 365, in graph
return orig(*args, **kwargs)
File "/usr/lib/python2.6/site-packages/mercurial/util.py", line 401, in check
return func(*args, **kwargs)
File "/usr/lib/python2.6/site-packages/mercurial/commands.py", line 2275, in outgoing
other = hg.repository(cmdutil.remoteui(repo, opts), dest)
File "/usr/lib/python2.6/site-packages/mercurial/hg.py", line 82, in repository
repo = _lookup(path).instance(ui, path, create)
File "/usr/lib/python2.6/site-packages/mercurial/httprepo.py", line 271, in instance
inst.between([(nullid, nullid)])
File "/usr/lib/python2.6/site-packages/mercurial/httprepo.py", line 190, in between
d = self.do_read("between", pairs=n)
File "/usr/lib/python2.6/site-packages/mercurial/httprepo.py", line 134, in do_read
fp = self.do_cmd(cmd, **args)
File "/usr/lib/python2.6/site-packages/mercurial/httprepo.py", line 85, in do_cmd
resp = self.urlopener.open(req)
File "/usr/lib/python2.6/urllib2.py", line 397, in open
response = meth(req, response)
File "/usr/lib/python2.6/urllib2.py", line 510, in http_response
'http', request, response, code, msg, hdrs)
File "/usr/lib/python2.6/urllib2.py", line 429, in error
result = self._call_chain(*args)
File "/usr/lib/python2.6/urllib2.py", line 369, in _call_chain
result = func(*args)
File "/usr/lib/python2.6/urllib2.py", line 855, in http_error_401
url, req, headers)
File "build/bdist.linux-i686/egg/mercurial_keyring.py", line 339, in basic_http_error_auth_reqed
File "/usr/lib/python2.6/urllib2.py", line 833, in http_error_auth_reqed
return self.retry_http_basic_auth(host, req, realm)
File "/usr/lib/python2.6/urllib2.py", line 836, in retry_http_basic_auth
user, pw = self.passwd.find_user_password(realm, host)
File "build/bdist.linux-i686/egg/mercurial_keyring.py", line 333, in find_user_password
File "build/bdist.linux-i686/egg/mercurial_keyring.py", line 184, in find_auth
File "build/bdist.linux-i686/egg/mercurial_keyring.py", line 67, in get_http_password
File "/usr/local/lib/python2.6/site-packages/keyring/core.py", line 37, in get_password
return _keyring_backend.get_password(service_name, username)
File "/usr/local/lib/python2.6/site-packages/keyring/backend.py", line 143, in get_password
items = gnomekeyring.find_network_password_sync(username, service)
gnomekeyring.IOError
My ~/.hgrc (OpenSUSE machine)
[ui]
username=c123456 <Arun.Sangal#MyCompany.com>
[extensions]
mercurial_keyring = /root/mercurial_keyring.py
#[trusted]
#users = *
#groups = *
[extensions]
acl =
bugzilla =
children =
churn =
color =
convert =
eol = !
extdiff =
factotum = !
fetch =
gpg =
graphlog =
hgcia =
hgcr-gui-qt = !
hgk =
highlight =
interhg =
keyword =
largefiles = !
mercurial_keyring =
mq =
notify =
pager =
patchbomb =
perfarce = !
progress =
projrc = !
purge =
rebase =
record =
relink =
schemes =
....
........etc
My local repository's hgrc (inside the cloned folder on OpenSUSE: /Giga/.hg/hgrc) is:
[paths]
default = http://the.hg.server.com/hg/TestHgRepo1/
myrepo = http://the.hg.server.com/hg/TestHgRepo1/
[auth]
myrepo.schemes = http https
myrepo.prefix = the.hg.server.com/hg
myrepo.username = c123456
I tried everything, but this keyring thing is not working. I get prompted for credentials every time I run:
hg out
hg push
or any other remote operation, but not when I run:
hg commit
Can someone please tell me what I'm missing here? I tried the same exercise on Windows with TortoiseHg, with C:...\mercurial.ini (the Windows equivalent of the Unix ~/.hgrc file), and made sure the cloned folder's .hg/hgrc file contains the same three [auth] lines, but Mercurial is not working with the keyring on either OpenSUSE or Windows with TortoiseHg.
It keeps prompting me to enter user credentials again and again :((
Can someone please tell me what I should do to get this resolved?
If you are prompted multiple times for user credentials in Mercurial: set up mercurial_keyring, and then you run into the question that nobody has explained in an easy way:
How do you make [auth] xx.prefix = servername/hg_or_something work for all repositories under the servername/hg location, whether you use the server name, the server's IP, or the server's FQDN?
Final ANSWER:
OK, I put this in ~/.hgrc (the hidden .hgrc file in your Linux/Unix home directory) or, for Windows users, in %UserProfile%/mercurial.ini or %HOME%/mercurial.ini:
[auth]
default1.schemes = http https
default1.prefix = hg_merc_server/hg
default1.username = c123456
default2.schemes = http https
default2.prefix = hg_merc_server.company.com/hg
default2.username = c123456
default3.schemes = http https
default3.prefix = 10.211.222.321/hg
default3.username = c123456
Now I can check out using either the server name, its IP, or its FQDN.