My Scrapy settings.py:
from datetime import datetime
file_name = datetime.today().strftime('%Y-%m-%d_%H%M_')
save_name = file_name + 'Mobile_Nshopping'
FEED_URI = 'ftp://myusername:mypassword#ftp.mymail.com/uploads/%(save_name)s.csv'
When I run my spider with scrapy crawl my_project_name, I get the error below.
Do I have to create a pipeline?
\scrapy\extensions\feedexport.py:247: ScrapyDeprecationWarning: The `FEED_URI` and `FEED_FORMAT` settings have been deprecated in favor of the `FEEDS` setting. Please see the `FEEDS` setting docs for more details
exporter = cls(crawler)
Traceback (most recent call last):
File "c:\users\viren\appdata\local\programs\python\python38\lib\runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "c:\users\viren\appdata\local\programs\python\python38\lib\runpy.py", line 87, in _run_code
exec(code, run_globals)
File "C:\Users\viren\AppData\Local\Programs\Python\Python38\Scripts\scrapy.exe\__main__.py", line 7, in <module>
File "c:\users\viren\appdata\local\programs\python\python38\lib\site-packages\scrapy\cmdline.py", line 145, in execute
_run_print_help(parser, _run_command, cmd, args, opts)
File "c:\users\viren\appdata\local\programs\python\python38\lib\site-packages\scrapy\cmdline.py", line 100, in _run_print_help
func(*a, **kw)
File "c:\users\viren\appdata\local\programs\python\python38\lib\site-packages\scrapy\cmdline.py", line 153, in _run_command
cmd.run(args, opts)
File "c:\users\viren\appdata\local\programs\python\python38\lib\site-packages\scrapy\commands\crawl.py", line 22, in run
crawl_defer = self.crawler_process.crawl(spname, **opts.spargs)
File "c:\users\viren\appdata\local\programs\python\python38\lib\site-packages\scrapy\crawler.py", line 191, in crawl
crawler = self.create_crawler(crawler_or_spidercls)
File "c:\users\viren\appdata\local\programs\python\python38\lib\site-packages\scrapy\crawler.py", line 224, in create_crawler
return self._create_crawler(crawler_or_spidercls)
File "c:\users\viren\appdata\local\programs\python\python38\lib\site-packages\scrapy\crawler.py", line 229, in _create_crawler
return Crawler(spidercls, self.settings)
File "c:\users\viren\appdata\local\programs\python\python38\lib\site-packages\scrapy\crawler.py", line 72, in __init__
self.extensions = ExtensionManager.from_crawler(self)
File "c:\users\viren\appdata\local\programs\python\python38\lib\site-packages\scrapy\middleware.py", line 53, in from_crawler
return cls.from_settings(crawler.settings, crawler)
File "c:\users\viren\appdata\local\programs\python\python38\lib\site-packages\scrapy\middleware.py", line 35, in from_settings
mw = create_instance(mwcls, settings, crawler)
File "c:\users\viren\appdata\local\programs\python\python38\lib\site-packages\scrapy\utils\misc.py", line 167, in create_instance
instance = objcls.from_crawler(crawler, *args, **kwargs)
File "c:\users\viren\appdata\local\programs\python\python38\lib\site-packages\scrapy\extensions\feedexport.py", line 247, in from_crawler
exporter = cls(crawler)
File "c:\users\viren\appdata\local\programs\python\python38\lib\site-packages\scrapy\extensions\feedexport.py", line 282, in __init__
if not self._storage_supported(uri, feed_options):
File "c:\users\viren\appdata\local\programs\python\python38\lib\site-packages\scrapy\extensions\feedexport.py", line 427, in _storage_supported
self._get_storage(uri, feed_options)
File "c:\users\viren\appdata\local\programs\python\python38\lib\site-packages\scrapy\extensions\feedexport.py", line 458, in _get_storage
instance = build_instance(feedcls.from_crawler, crawler)
File "c:\users\viren\appdata\local\programs\python\python38\lib\site-packages\scrapy\extensions\feedexport.py", line 455, in build_instance
return build_storage(builder, uri, feed_options=feed_options, preargs=preargs)
File "c:\users\viren\appdata\local\programs\python\python38\lib\site-packages\scrapy\extensions\feedexport.py", line 46, in build_storage
return builder(*preargs, uri, *args, **kwargs)
File "c:\users\viren\appdata\local\programs\python\python38\lib\site-packages\scrapy\extensions\feedexport.py", line 201, in from_crawler
return build_storage(
File "c:\users\viren\appdata\local\programs\python\python38\lib\site-packages\scrapy\extensions\feedexport.py", line 46, in build_storage
return builder(*preargs, uri, *args, **kwargs)
File "c:\users\viren\appdata\local\programs\python\python38\lib\site-packages\scrapy\extensions\feedexport.py", line 192, in __init__
self.port = int(u.port or '21')
File "c:\users\viren\appdata\local\programs\python\python38\lib\urllib\parse.py", line 174, in port
raise ValueError(message) from None
ValueError: Port could not be cast to integer value as 'Edh=)9sd'
I don't know how to store the CSV on the FTP server.
Is the error coming because my password is being read as an integer?
Is there anything I forgot to write?
Do I have to create a pipeline?
Yes, you probably should create a pipeline. As shown in the Scrapy Architecture Diagram, the basic concept is this: requests are sent, responses come back and are processed by the spider, and finally the pipeline does something with the items the spider returns. In your case, you could create a pipeline that saves the data in a CSV file and uploads it to an FTP server. See Scrapy's Item Pipeline documentation for more information.
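For illustration, here is a minimal sketch of such a pipeline. The file name, FTP host, and credentials are placeholders, and it assumes each item can be converted to a flat dict:

import csv
import ftplib

class CsvFtpExportPipeline:
    """Write scraped items to a local CSV file, then upload it over FTP."""

    def open_spider(self, spider):
        self.path = 'Mobile_Nshopping.csv'
        self.file = open(self.path, 'w', newline='', encoding='utf-8')
        self.writer = None

    def process_item(self, item, spider):
        row = dict(item)
        if self.writer is None:
            # Use the first item's fields as the CSV header.
            self.writer = csv.DictWriter(self.file, fieldnames=list(row))
            self.writer.writeheader()
        self.writer.writerow(row)
        return item

    def close_spider(self, spider):
        self.file.close()
        # Placeholder FTP details -- replace with your own server and credentials.
        with ftplib.FTP('ftp.mymail.com', 'myusername', 'mypassword') as ftp:
            with open(self.path, 'rb') as f:
                ftp.storbinary(f'STOR uploads/{self.path}', f)

You would then enable it via ITEM_PIPELINES in settings.py, e.g. {'myproject.pipelines.CsvFtpExportPipeline': 300} (the module path here is an assumption about your project layout).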
I don't know how to store the CSV on the FTP server. Is the error coming because my password is being read as an integer? Is there anything I forgot to write?
I believe this is related to the deprecation warning below (also shown at the top of the output you provided):
ScrapyDeprecationWarning: The FEED_URI and FEED_FORMAT settings have been deprecated in favor of the FEEDS setting. Please see the FEEDS setting docs for more details.
Try replacing FEED_URI with FEEDS; see the Scrapy documentation on FEEDS.
You also need to specify the port; you can do this in the settings.
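For example, here is a minimal sketch of the newer FEEDS setting (Scrapy 2.1+) in settings.py, reusing the date-stamped name from the question. The host, credentials, and port are placeholders; if the real password contains characters such as '@', ':' or '#', it would also need to be percent-encoded (e.g. with urllib.parse.quote) for the URI to parse:

from datetime import datetime

file_name = datetime.today().strftime('%Y-%m-%d_%H%M_')
save_name = file_name + 'Mobile_Nshopping'

FEEDS = {
    f'ftp://myusername:mypassword@ftp.mymail.com:21/uploads/{save_name}.csv': {
        'format': 'csv',
    },
}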
See also the class definition from the Scrapy docs:
class FTPFilesStore:

    FTP_USERNAME = None
    FTP_PASSWORD = None
    USE_ACTIVE_MODE = None

    def __init__(self, uri):
        if not uri.startswith("ftp://"):
            raise ValueError(f"Incorrect URI scheme in {uri}, expected 'ftp'")
        u = urlparse(uri)
        self.port = u.port
        self.host = u.hostname
        self.port = int(u.port or 21)
        self.username = u.username or self.FTP_USERNAME
        self.password = u.password or self.FTP_PASSWORD
        self.basedir = u.path.rstrip('/')
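As an aside, if the real URI keeps the '#' shown in the question (instead of '@' before the host), then everything after the '#' becomes the URL fragment, the host never makes it into the netloc, and whatever follows the ':' is treated as the port. A small sketch with a made-up hostname (the password-looking string is copied from the traceback) shows the same ValueError:

from urllib.parse import urlparse

# Hypothetical URI in the same shape as the question's FEED_URI.
u = urlparse("ftp://myusername:Edh=)9sd#ftp.example.com/uploads/out.csv")
print(u.netloc)    # 'myusername:Edh=)9sd' -- the host is lost to the fragment
try:
    print(u.port)
except ValueError as exc:
    print(exc)     # Port could not be cast to integer value as 'Edh=)9sd'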
Related
Unhandled error in Deferred:
2020-07-24 09:12:40 [twisted] CRITICAL: Unhandled error in Deferred:
Traceback (most recent call last):
  File "/home/baku/Dev/workspace/moje-python/scrape_linkedin/venv/lib/python3.8/site-packages/scrapy/crawler.py", line 192, in crawl
    return self._crawl(crawler, *args, **kwargs)
  File "/home/baku/Dev/workspace/moje-python/scrape_linkedin/venv/lib/python3.8/site-packages/scrapy/crawler.py", line 196, in _crawl
    d = crawler.crawl(*args, **kwargs)
  File "/home/baku/Dev/workspace/moje-python/scrape_linkedin/venv/lib/python3.8/site-packages/twisted/internet/defer.py", line 1613, in unwindGenerator
    return _cancellableInlineCallbacks(gen)
  File "/home/baku/Dev/workspace/moje-python/scrape_linkedin/venv/lib/python3.8/site-packages/twisted/internet/defer.py", line 1529, in _cancellableInlineCallbacks
    _inlineCallbacks(None, g, status)
--- <exception caught here> ---
  File "/home/baku/Dev/workspace/moje-python/scrape_linkedin/venv/lib/python3.8/site-packages/twisted/internet/defer.py", line 1418, in _inlineCallbacks
    result = g.send(result)
  File "/home/baku/Dev/workspace/moje-python/scrape_linkedin/venv/lib/python3.8/site-packages/scrapy/crawler.py", line 87, in crawl
    self.engine = self._create_engine()
  File "/home/baku/Dev/workspace/moje-python/scrape_linkedin/venv/lib/python3.8/site-packages/scrapy/crawler.py", line 101, in _create_engine
    return ExecutionEngine(self, lambda _: self.stop())
  File "/home/baku/Dev/workspace/moje-python/scrape_linkedin/venv/lib/python3.8/site-packages/scrapy/core/engine.py", line 69, in __init__
    self.downloader = downloader_cls(crawler)
  File "/home/baku/Dev/workspace/moje-python/scrape_linkedin/venv/lib/python3.8/site-packages/scrapy/core/downloader/__init__.py", line 83, in __init__
    self.middleware = DownloaderMiddlewareManager.from_crawler(crawler)
  File "/home/baku/Dev/workspace/moje-python/scrape_linkedin/venv/lib/python3.8/site-packages/scrapy/middleware.py", line 53, in from_crawler
    return cls.from_settings(crawler.settings, crawler)
  File "/home/baku/Dev/workspace/moje-python/scrape_linkedin/venv/lib/python3.8/site-packages/scrapy/middleware.py", line 35, in from_settings
    mw = create_instance(mwcls, settings, crawler)
  File "/home/baku/Dev/workspace/moje-python/scrape_linkedin/venv/lib/python3.8/site-packages/scrapy/utils/misc.py", line 150, in create_instance
    instance = objcls.from_crawler(crawler, *args, **kwargs)
  File "/home/baku/Dev/workspace/moje-python/scrape_linkedin/venv/lib/python3.8/site-packages/scrapy_selenium/middlewares.py", line 67, in from_crawler
    middleware = cls(
  File "/home/baku/Dev/workspace/moje-python/scrape_linkedin/venv/lib/python3.8/site-packages/scrapy_selenium/middlewares.py", line 43, in __init__
    for argument in driver_arguments:
builtins.TypeError: 'NoneType' object is not iterable

2020-07-24 09:12:40 [twisted] CRITICAL:
Traceback (most recent call last):
  File "/home/baku/Dev/workspace/moje-python/scrape_linkedin/venv/lib/python3.8/site-packages/twisted/internet/defer.py", line 1418, in _inlineCallbacks
    result = g.send(result)
  File "/home/baku/Dev/workspace/moje-python/scrape_linkedin/venv/lib/python3.8/site-packages/scrapy/crawler.py", line 87, in crawl
    self.engine = self._create_engine()
  File "/home/baku/Dev/workspace/moje-python/scrape_linkedin/venv/lib/python3.8/site-packages/scrapy/crawler.py", line 101, in _create_engine
    return ExecutionEngine(self, lambda _: self.stop())
  File "/home/baku/Dev/workspace/moje-python/scrape_linkedin/venv/lib/python3.8/site-packages/scrapy/core/engine.py", line 69, in __init__
    self.downloader = downloader_cls(crawler)
  File "/home/baku/Dev/workspace/moje-python/scrape_linkedin/venv/lib/python3.8/site-packages/scrapy/core/downloader/__init__.py", line 83, in __init__
    self.middleware = DownloaderMiddlewareManager.from_crawler(crawler)
  File "/home/baku/Dev/workspace/moje-python/scrape_linkedin/venv/lib/python3.8/site-packages/scrapy/middleware.py", line 53, in from_crawler
    return cls.from_settings(crawler.settings, crawler)
  File "/home/baku/Dev/workspace/moje-python/scrape_linkedin/venv/lib/python3.8/site-packages/scrapy/middleware.py", line 35, in from_settings
    mw = create_instance(mwcls, settings, crawler)
  File "/home/baku/Dev/workspace/moje-python/scrape_linkedin/venv/lib/python3.8/site-packages/scrapy/utils/misc.py", line 150, in create_instance
    instance = objcls.from_crawler(crawler, *args, **kwargs)
  File "/home/baku/Dev/workspace/moje-python/scrape_linkedin/venv/lib/python3.8/site-packages/scrapy_selenium/middlewares.py", line 67, in from_crawler
    middleware = cls(
  File "/home/baku/Dev/workspace/moje-python/scrape_linkedin/venv/lib/python3.8/site-packages/scrapy_selenium/middlewares.py", line 43, in __init__
    for argument in driver_arguments:
TypeError: 'NoneType' object is not iterable
My settings.py:
from shutil import which
SELENIUM_DRIVER_NAME = 'firefox'
SELENIUM_DRIVER_EXECUTABLE_PATH = which('geckodriver')
SELENIUM_BROWSER_EXECUTABLE_PATH = which('firefox')
...
'scrapy_selenium.SeleniumMiddleware': 800,
The permissions for the driver look good:
:/usr/local/bin$ ll | grep gecko
-rwxrwxrwx 1 baku baku 7008696 lip 24 09:09 geckodriver*
Crawler code:
# (imports implied by the snippet)
import scrapy
from scrapy_selenium import SeleniumRequest
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC


class LinkedInProfileSeleniumSpider(scrapy.Spider):
    name = 'lips'
    allowed_domains = ['www.linkedin.com']

    def start_requests(self):
        yield SeleniumRequest(
            url="https://www.linkedin.com/login/",
            callback=self.proceed_login,
            wait_until=(
                EC.presence_of_element_located(
                    (By.CSS_SELECTOR, "#username")
                )
            ),
            script='window.scrollTo(0, document.body.scrollHeight);',
            wait_time=30
        )

    def proceed_login(self, response):
        # AFTER LOGIN
        driver = response.request.meta['driver']
        ...
Can you please help me figure out why it's failing? Thanks!
(By the way, it works with the Chrome drivers but fails with gecko.)
I had the same problem on a Mac; this time I am trying on an Ubuntu machine.
I'm not sure what the issue could be or where to start debugging.
It does not even get into self.proceed_login; it fails on the first request.
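One thing that stands out in the traceback: as far as I can tell from the scrapy-selenium source, the middleware's __init__ iterates over the SELENIUM_DRIVER_ARGUMENTS setting (the for argument in driver_arguments: line the error points at), and if that setting is not defined it comes through as None, which would produce exactly this 'NoneType' object is not iterable. A sketch of a settings block that defines it (the values are examples, not taken from the original project):

from shutil import which

SELENIUM_DRIVER_NAME = 'firefox'
SELENIUM_DRIVER_EXECUTABLE_PATH = which('geckodriver')
SELENIUM_BROWSER_EXECUTABLE_PATH = which('firefox')
SELENIUM_DRIVER_ARGUMENTS = ['-headless']  # or [] -- but defined, not left unset

DOWNLOADER_MIDDLEWARES = {
    'scrapy_selenium.SeleniumMiddleware': 800,
}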
I am trying to deploy a Scrapy project via scrapyd-deploy to a remote scrapyd server. The project itself is functional and works perfectly on my local machine, and on the remote server when I deploy it there via git push prod.
With scrapyd-deploy I get this error:
% scrapyd-deploy example -p apo
{ "node_name": "spider1",
"status": "error",
"message": "/usr/local/lib/python3.8/dist-packages/scrapy/utils/project.py:90: ScrapyDeprecationWarning: Use of environment variables prefixed with SCRAPY_ to override settings is deprecated. The following environment variables are currently defined: EGG_VERSION\n warnings.warn(\nTraceback (most recent call last):\n File \"/usr/lib/python3.8/runpy.py\", line 193, in _run_module_as_main\n return _run_code(code, main_globals, None,\n File \"/usr/lib/python3.8/runpy.py\", line 86, in _run_code\n exec(code, run_globals)\n File \"/usr/local/lib/python3.8/dist-packages/scrapyd/runner.py\", line 40, in <module>\n main()\n File \"/usr/local/lib/python3.8/dist-packages/scrapyd/runner.py\", line 37, in main\n execute()\n File \"/usr/local/lib/python3.8/dist-packages/scrapy/cmdline.py\", line 142, in execute\n cmd.crawler_process = CrawlerProcess(settings)\n File \"/usr/local/lib/python3.8/dist-packages/scrapy/crawler.py\", line 280, in __init__\n super(CrawlerProcess, self).__init__(settings)\n File \"/usr/local/lib/python3.8/dist-packages/scrapy/crawler.py\", line 152, in __init__\n self.spider_loader = self._get_spider_loader(settings)\n File \"/usr/local/lib/python3.8/dist-packages/scrapy/crawler.py\", line 146, in _get_spider_loader\n return loader_cls.from_settings(settings.frozencopy())\n File \"/usr/local/lib/python3.8/dist-packages/scrapy/spiderloader.py\", line 60, in from_settings\n return cls(settings)\n File \"/usr/local/lib/python3.8/dist-packages/scrapy/spiderloader.py\", line 24, in __init__\n self._load_all_spiders()\n File \"/usr/local/lib/python3.8/dist-packages/scrapy/spiderloader.py\", line 46, in _load_all_spiders\n for module in walk_modules(name):\n File \"/usr/local/lib/python3.8/dist-packages/scrapy/utils/misc.py\", line 77, in walk_modules\n submod = import_module(fullpath)\n File \"/usr/lib/python3.8/importlib/__init__.py\",
line 127, in import_module\n return _bootstrap._gcd_import(name[level:], package, level)\n File \"<frozen importlib._bootstrap>\",
line 1014, in _gcd_import\n File \"<frozen importlib._bootstrap>\",
line 991, in _find_and_load\n File \"<frozen importlib._bootstrap>\",
line 975, in _find_and_load_unlocked\n File \"<frozen importlib._bootstrap>\",
line 655, in _load_unlocked\n File \"<frozen importlib._bootstrap>\",
line 618, in _load_backward_compatible\n File \"<frozen zipimport>\",
line 259, in load_module\n File \"/tmp/apo-v1.0.2-114-g8a2f218-master-5kgzxesk.egg/bid/spiders/allaboutwatches.py\",
line 31, in <module>\n File \"/tmp/apo-v1.0.2-114-g8a2f218-master-5kgzxesk.egg/bid/spiders/allaboutwatches.py\",
line 36, in GetbidSpider\n File \"/tmp/apo-v1.0.2-114-g8a2f218-master-5kgzxesk.egg/bid/act_functions.py\",
line 10, in create_image_dir\nNotADirectoryError: [Errno 20]
Not a directory: '/tmp/apo-v1.0.2-114-g8a2f218-master-5kgzxesk.egg/bid/../images/allaboutwatches'\n"}
My guess is that it has something to do with this method I am calling, since part of the error disappears once I comment out the method call:
# function will create a custom name directory to hold the images of each crawl
def create_image_dir(name):
    project_dir = os.path.dirname(__file__) + '/../'  # <-- absolute dir the script is in
    img_dir = project_dir + "images/" + name
    if not os.path.exists(img_dir):
        os.mkdir(img_dir)
    custom_settings = {
        'IMAGES_STORE': img_dir,
    }
    return custom_settings
Same goes for this method:
def brandnames():
    brands = dict()
    script_dir = os.path.dirname(__file__)  # <-- absolute dir the script is in
    rel_path = "imports/brand_names.csv"
    abs_file_path = os.path.join(script_dir, rel_path)

    with open(abs_file_path, newline='') as csvfile:
        reader = csv.DictReader(csvfile, delimiter=';', quotechar='"')
        for row in reader:
            brands[row['name'].lower()] = row['name']

    return brands
How can I change the method or the deploy config and keep my functionality as it is?
os.mkdir might be failing because it cannot create nested directories; you can use os.makedirs(img_dir, exist_ok=True) instead.
Also, under scrapyd os.path.dirname(__file__) will point to /tmp/..., inside the packaged .egg file. The egg is an archive rather than a real directory on disk, which is why os.mkdir fails with NotADirectoryError. I am not sure that is what you want; I would use an absolute path without calling os.path.dirname(__file__) if you don't want images to end up under /tmp.
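A minimal sketch along those lines (the base directory is an assumption; point it wherever the scrapyd host should store images):

import os

IMAGES_BASE_DIR = '/var/lib/scrapy/images'  # hypothetical absolute path outside the egg

# creates a per-crawl directory for images and returns the matching setting
def create_image_dir(name):
    img_dir = os.path.join(IMAGES_BASE_DIR, name)
    os.makedirs(img_dir, exist_ok=True)  # creates parents too; no error if it already exists
    return {'IMAGES_STORE': img_dir}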
(P6Svenv)malikarumi#Tetuoan2:~/Projects/P6/P6Svenv/test2/test2/spiders$ scrapy crawl zomd
Traceback (most recent call last):
File "/usr/bin/scrapy", line 9, in <module>
load_entry_point('Scrapy==1.0.3.post6-g2d688cd', 'console_scripts', 'scrapy')()
File "/usr/lib/pymodules/python2.7/scrapy/cmdline.py", line 142, in execute
cmd.crawler_process = CrawlerProcess(settings)
File "/usr/lib/pymodules/python2.7/scrapy/crawler.py", line 209, in __init__
super(CrawlerProcess, self).__init__(settings)
File "/usr/lib/pymodules/python2.7/scrapy/crawler.py", line 115, in __init__
self.spider_loader = _get_spider_loader(settings)
File "/usr/lib/pymodules/python2.7/scrapy/crawler.py", line 296, in _get_spider_loader
return loader_cls.from_settings(settings.frozencopy())
File "/usr/lib/pymodules/python2.7/scrapy/spiderloader.py", line 30, in from_settings
return cls(settings)
File "/usr/lib/pymodules/python2.7/scrapy/spiderloader.py", line 21, in __init__
for module in walk_modules(name):
File "/usr/lib/pymodules/python2.7/scrapy/utils/misc.py", line 71, in walk_modules
submod = import_module(fullpath)
File "/usr/lib/python2.7/importlib/__init__.py", line 37, in import_module
__import__(name)
File "/home/malikarumi/Projects/P6/P6Svenv/test2/test2/spiders/t350_crawl.py", line 36
def parse_item(self, response):
^
IndentationError: unindent does not match any outer indentation level
Do you see it? Scrapy isn't even calling the spider I specified on the command line!
I see the super in the traceback, but all my t350 spiders are derived from CrawlSpider, while zomd is subclassed from scrapy.Spider. Why is this happening, and what do I do about it?
A spider's name is not the same as its file name. It is defined inside the spider file by the name attribute, as in the second line below:
class CAPjobSpider(Spider):
    name = "spider_name"
The above spider's name is "spider_name", even if the file is called New_York.py.
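Two notes. First, the traceback shows walk_modules importing every module listed in SPIDER_MODULES, so a syntax or indentation error in any spider file (here t350_crawl.py) will break scrapy crawl for every spider, including zomd. Second, as a minimal, self-contained illustration of the name-versus-file-name point (the URL and field are placeholders), the spider below could live in a file called New_York.py and would still be run with scrapy crawl spider_name:

# spiders/New_York.py
import scrapy

class CAPjobSpider(scrapy.Spider):
    name = "spider_name"          # <- this is what `scrapy crawl` matches on
    start_urls = ["https://example.com"]

    def parse(self, response):
        yield {"title": response.css("title::text").get()}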
I've successfully created and updated calendar events from Python, and below is my code to delete an event:
def delete_google_event(self, cr, uid, task, user):
    g_client = gtools.gcal.google_calendar_interface()
    g_client.connect(user.google_email, user.google_password)
    g_client.delete(task.google_event_id)
    message = "Google event deleted, old id: %s" % (task.google_event_id)
I get the error below when using the above code. From the message BadStatusLine: '' I understand that I am receiving a response from the server that the system does not understand, but I am not sure how to solve it. The error also seems to come from the Google Calendar API; could there be a versioning problem? (I am doing this in OpenERP, and I don't think it is a problem with OpenERP.)
{/usr/lib/python2.7/dist-packages/gtools/gcal.py} deleting http://www.google.com/calendar/feeds/default/private/full/fpdoqrq4q5rroggkn2uaamojb0
{/usr/lib/python2.7/dist-packages/gtools/gcal.py} quering element uri: http://www.google.com/calendar/feeds/default/private/full/fpdoqrq4q5rroggkn2uaamojb0
!!!!http://localhost:9888/
!!!!
!!!!http://localhost:9888/
!!!!http://localhost:9888/
!!!!
!!!!http://localhost:9888/
2013-09-02 12:21:16,945 17720 ERROR jul-16-7575-t1 openerp.osv.osv: Uncaught exception
Traceback (most recent call last):
File "/opt/workspace/openerp space/openerp-7.0-20130716-231027/openerp/osv/osv.py", line 131, in wrapper
return f(self, dbname, *args, **kwargs)
File "/opt/workspace/openerp space/openerp-7.0-20130716-231027/openerp/osv/osv.py", line 197, in execute
res = self.execute_cr(cr, uid, obj, method, *args, **kw)
File "/opt/workspace/openerp space/openerp-7.0-20130716-231027/openerp/osv/osv.py", line 185, in execute_cr
return getattr(object, method)(cr, uid, *args, **kw)
File "/opt/workspace/openerp space/openerp-7.0-20130716-231027/openerp/addons/google_calendar_task_sync/project_google_calendar.py", line 67, in unlink
self.delete_google_event(cr, uid, task, goog_uid)
File "/opt/workspace/openerp space/openerp-7.0-20130716-231027/openerp/addons/google_calendar_task_sync/project_google_calendar.py", line 92, in delete_google_event
g_client.delete(task.google_event_id)
File "/usr/lib/python2.7/dist-packages/gtools/gcal.py", line 78, in delete
self.cal_srv.DeleteEvent(event_uri)
File "/usr/lib/pymodules/python2.7/gdata/calendar/service.py", line 313, in DeleteEvent
url_params=url_params, escape_params=escape_params)
File "/usr/lib/pymodules/python2.7/gdata/service.py", line 1429, in Delete
headers=extra_headers, url_params=url_params)
File "/usr/lib/pymodules/python2.7/atom/__init__.py", line 92, in optional_warn_function
return f(*args, **kwargs)
File "/usr/lib/pymodules/python2.7/atom/service.py", line 185, in request
data=data, headers=all_headers)
File "/usr/lib/pymodules/python2.7/gdata/auth.py", line 725, in perform_request
return http_client.request(operation, url, data=data, headers=headers)
File "/usr/lib/pymodules/python2.7/atom/http.py", line 174, in request
return connection.getresponse()
File "/usr/lib/python2.7/httplib.py", line 1030, in getresponse
response.begin()
File "/usr/lib/python2.7/httplib.py", line 407, in begin
version, status, reason = self._read_status()
File "/usr/lib/python2.7/httplib.py", line 371, in _read_status
raise BadStatusLine(line)
BadStatusLine: ''
I've referred to this link: Why am I getting this error in python? (httplib). I am still not sure of the problem. Kindly give me some clues to fix this. Thanks a lot for your time.
Hurrah!! It works fine over a direct connection, but not under a proxy.
I did the following:
hg clone ...somelink.to.repo.in.hg... Giga
cd Giga
ls (...it shows that the giga.txt file exists in the Giga directory)
vi giga.txt (...made some changes..)
hg commit -m "byte"
hg out (got the following error)
** unknown exception encountered, details follow
** report bug details to http://mercurial.selenic.com/bts/
** or mercurial#selenic.com
** Mercurial Distributed SCM (version 1.5)
** Extensions loaded: acl, bugzilla, children, churn, color, convert, extdiff, fetch, gpg, graphlog, hgcia, hgk, highlight, interhg, keyword, mercurial_keyring, mq, notify, pager, patchbomb, progress, purge, rebase, record, relink, schemes, share, transplant, zeroconf
Traceback (most recent call last):
File "/usr/bin/hg", line 27, in <module>
mercurial.dispatch.run()
File "/usr/lib/python2.6/site-packages/mercurial/dispatch.py", line 16, in run
sys.exit(dispatch(sys.argv[1:]))
File "/usr/lib/python2.6/site-packages/mercurial/dispatch.py", line 30, in dispatch
return _runcatch(u, args)
File "/usr/lib/python2.6/site-packages/mercurial/dispatch.py", line 47, in _runcatch
return _dispatch(ui, args)
File "/usr/lib/python2.6/site-packages/mercurial/dispatch.py", line 466, in _dispatch
return runcommand(lui, repo, cmd, fullargs, ui, options, d)
File "/usr/lib/python2.6/site-packages/mercurial/dispatch.py", line 336, in runcommand
ret = _runcommand(ui, options, cmd, d)
File "/usr/lib/python2.6/site-packages/mercurial/extensions.py", line 128, in wrap
return wrapper(origfn, *args, **kwargs)
File "/usr/lib/python2.6/site-packages/hgext/pager.py", line 66, in pagecmd
return orig(ui, options, cmd, cmdfunc)
File "/usr/lib/python2.6/site-packages/mercurial/dispatch.py", line 517, in _runcommand
return checkargs()
File "/usr/lib/python2.6/site-packages/mercurial/dispatch.py", line 471, in checkargs
return cmdfunc()
File "/usr/lib/python2.6/site-packages/mercurial/dispatch.py", line 465, in <lambda>
d = lambda: util.checksignature(func)(ui, *args, **cmdoptions)
File "/usr/lib/python2.6/site-packages/mercurial/util.py", line 401, in check
return func(*args, **kwargs)
File "/usr/lib/python2.6/site-packages/mercurial/extensions.py", line 116, in wrap
util.checksignature(origfn), *args, **kwargs)
File "/usr/lib/python2.6/site-packages/mercurial/util.py", line 401, in check
return func(*args, **kwargs)
File "/usr/lib/python2.6/site-packages/hgext/color.py", line 352, in nocolor
return orig(*args, **opts)
File "/usr/lib/python2.6/site-packages/mercurial/util.py", line 401, in check
return func(*args, **kwargs)
File "/usr/lib/python2.6/site-packages/mercurial/extensions.py", line 116, in wrap
util.checksignature(origfn), *args, **kwargs)
File "/usr/lib/python2.6/site-packages/mercurial/util.py", line 401, in check
return func(*args, **kwargs)
File "/usr/lib/python2.6/site-packages/hgext/mq.py", line 2648, in mqcommand
return orig(ui, repo, *args, **kwargs)
File "/usr/lib/python2.6/site-packages/mercurial/util.py", line 401, in check
return func(*args, **kwargs)
File "/usr/lib/python2.6/site-packages/mercurial/extensions.py", line 116, in wrap
util.checksignature(origfn), *args, **kwargs)
File "/usr/lib/python2.6/site-packages/mercurial/util.py", line 401, in check
return func(*args, **kwargs)
File "/usr/lib/python2.6/site-packages/hgext/graphlog.py", line 365, in graph
return orig(*args, **kwargs)
File "/usr/lib/python2.6/site-packages/mercurial/util.py", line 401, in check
return func(*args, **kwargs)
File "/usr/lib/python2.6/site-packages/mercurial/commands.py", line 2275, in outgoing
other = hg.repository(cmdutil.remoteui(repo, opts), dest)
File "/usr/lib/python2.6/site-packages/mercurial/hg.py", line 82, in repository
repo = _lookup(path).instance(ui, path, create)
File "/usr/lib/python2.6/site-packages/mercurial/httprepo.py", line 271, in instance
inst.between([(nullid, nullid)])
File "/usr/lib/python2.6/site-packages/mercurial/httprepo.py", line 190, in between
d = self.do_read("between", pairs=n)
File "/usr/lib/python2.6/site-packages/mercurial/httprepo.py", line 134, in do_read
fp = self.do_cmd(cmd, **args)
File "/usr/lib/python2.6/site-packages/mercurial/httprepo.py", line 85, in do_cmd
resp = self.urlopener.open(req)
File "/usr/lib/python2.6/urllib2.py", line 397, in open
response = meth(req, response)
File "/usr/lib/python2.6/urllib2.py", line 510, in http_response
'http', request, response, code, msg, hdrs)
File "/usr/lib/python2.6/urllib2.py", line 429, in error
result = self._call_chain(*args)
File "/usr/lib/python2.6/urllib2.py", line 369, in _call_chain
result = func(*args)
File "/usr/lib/python2.6/urllib2.py", line 855, in http_error_401
url, req, headers)
File "build/bdist.linux-i686/egg/mercurial_keyring.py", line 339, in basic_http_error_auth_reqed
File "/usr/lib/python2.6/urllib2.py", line 833, in http_error_auth_reqed
return self.retry_http_basic_auth(host, req, realm)
File "/usr/lib/python2.6/urllib2.py", line 836, in retry_http_basic_auth
user, pw = self.passwd.find_user_password(realm, host)
File "build/bdist.linux-i686/egg/mercurial_keyring.py", line 333, in find_user_password
File "build/bdist.linux-i686/egg/mercurial_keyring.py", line 184, in find_auth
File "build/bdist.linux-i686/egg/mercurial_keyring.py", line 67, in get_http_password
File "/usr/local/lib/python2.6/site-packages/keyring/core.py", line 37, in get_password
return _keyring_backend.get_password(service_name, username)
File "/usr/local/lib/python2.6/site-packages/keyring/backend.py", line 143, in get_password
items = gnomekeyring.find_network_password_sync(username, service)
gnomekeyring.IOError
My ~/.hgrc (OpenSUSE machine)
[ui]
username=c123456 <Arun.Sangal#MyCompany.com>
[extensions]
mercurial_keyring = /root/mercurial_keyring.py
#[trusted]
#users = *
#groups = *
[extensions]
acl =
bugzilla =
children =
churn =
color =
convert =
eol = !
extdiff =
factotum = !
fetch =
gpg =
graphlog =
hgcia =
hgcr-gui-qt = !
hgk =
highlight =
interhg =
keyword =
largefiles = !
mercurial_keyring =
mq =
notify =
pager =
patchbomb =
perfarce = !
progress =
projrc = !
purge =
rebase =
record =
relink =
schemes =
....
........etc
My local repository's hgrc
(in the cloned folder on OpenSUSE, inside /Giga/.hg/hgrc) is:
[paths]
default = http://the.hg.server.com/hg/TestHgRepo1/
myrepo = http://the.hg.server.com/hg/TestHgRepo1/
[auth]
myrepo.schemes = http https
myrepo.prefix = the.hg.server.com/hg
myrepo.username = c123456
I tried everything, but this keyring thing is not working. I get prompted for credentials every time I run:
hg out
hg push
and other remote hg operations, but not when I run:
hg commit
Can someone please tell me what I'm missing here? I tried the same exercise on Windows with TortoiseHg, using C:...\mercurial.ini (the Windows equivalent of the Unix ~/.hgrc file), and made sure the cloned local repository's .hg/hgrc file contains the same three [auth] lines, but Mercurial on Linux (OpenSUSE) and on Windows with TortoiseHg is still not working with the keyring.
It keeps prompting me for user credentials again and again :((
Can someone please tell me what I should do to get this resolved?
If you are prompted multiple times for user credentials in Mercurial, set up mercurial_keyring, and then this question comes up, which nobody seems to explain in an easy way:
How do I make [auth] xx.prefix = servername/hg_or_something work for all repositories under the servername/hg location, whether I use the server name, the server's IP, or the server's FQDN?
Final answer:
OK, I put this in ~/.hgrc (the hidden .hgrc file in the home directory on Linux/Unix); Windows users would put it in %UserProfile%/mercurial.ini or %HOME%/mercurial.ini.
[auth]
default1.schemes = http https
default1.prefix = hg_merc_server/hg
default1.username = c123456
default2.schemes = http https
default2.prefix = hg_merc_server.company.com/hg
default2.username = c123456
default3.schemes = http https
default3.prefix = 10.211.222.321/hg
default3.username = c123456
Now I can check out using either the server name, its IP, or its FQDN.