gcloud auth login: Properties Parse Error - authentication

I installed the Google Cloud SDK on Ubuntu 14.04, and when I tried to log in it showed this error:
krish#jarvis:~$ gcloud auth login
Traceback (most recent call last):
File "/usr/bin/../lib/google-cloud-sdk/./lib/googlecloudsdk/gcloud/gcloud.py", line 87, in
from googlecloudsdk.calliope import base
File "/usr/bin/../lib/google-cloud-sdk/./lib/googlecloudsdk/calliope/base.py", line 8, in
from googlecloudsdk.core import log
File "/usr/bin/../lib/google-cloud-sdk/./lib/googlecloudsdk/core/log.py", line 413, in
_log_manager = _LogManager()
File "/usr/bin/../lib/google-cloud-sdk/./lib/googlecloudsdk/core/log.py", line 195, in init
self.console_formatter = _ConsoleFormatter(sys.stderr)
File "/usr/bin/../lib/google-cloud-sdk/./lib/googlecloudsdk/core/log.py", line 172, in init
use_color = not properties.VALUES.core.disable_color.GetBool()
File "/usr/bin/../lib/google-cloud-sdk/./lib/googlecloudsdk/core/properties.py", line 782, in GetBool
value = _GetBoolProperty(self, PropertiesFile.Load(), required)
File "/usr/bin/../lib/google-cloud-sdk/./lib/googlecloudsdk/core/properties.py", line 1141, in Load
PropertiesFile._PROPERTIES = PropertiesFile(paths)
File "/usr/bin/../lib/google-cloud-sdk/./lib/googlecloudsdk/core/properties.py", line 1160, in init
self.__Load(properties_path)
File "/usr/bin/../lib/google-cloud-sdk/./lib/googlecloudsdk/core/properties.py", line 1174, in __Load
raise PropertiesParseError(e.message)
googlecloudsdk.core.properties.PropertiesParseError: File contains no section headers.
file: /home/krish/.config/gcloud/properties, line: 1
'h4\xaf\xe3\xda^\xa6\xe8\xb2\xdb`$?\x11\x7f\xce\xc1\x1f\x88\xcd"\x82c\x13Bj\x07\xc3\xe3\x9ds\xdd d\xe1\n'

It does look like you have a corrupted gcloud config file at /home/krish/.config/gcloud/properties. Check its contents: maybe it's just one bad line that you can fix, or simply move/remove the file (you will need to set your configuration again if you had changed anything).
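A rough sketch of how you might check and reset it (paths are taken from the traceback; gcloud recreates the file on the next run, and the project ID below is just a placeholder):
cat -v ~/.config/gcloud/properties            # inspect the contents; non-printable bytes are shown escaped
mv ~/.config/gcloud/properties ~/.config/gcloud/properties.bak   # or delete it
gcloud auth login                             # re-authenticate
gcloud config set project YOUR_PROJECT_ID     # restore any settings you had before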

Related

Django tests pass locally but not on Github Actions push

My tests pass locally, and in fact GitHub Actions also says "Ran 8 tests" and then "OK" (I have 8 tests). However, the test stage fails due to a strange error in the traceback.
Traceback (most recent call last):
File "/opt/hostedtoolcache/Python/3.9.9/x64/lib/python3.9/site-packages/django/db/backends/utils.py", line 82, in _execute
return self.cursor.execute(sql)
File "/opt/hostedtoolcache/Python/3.9.9/x64/lib/python3.9/site-packages/django/db/backends/sqlite3/base.py", line 421, in execute
return Database.Cursor.execute(self, query)
sqlite3.OperationalError: near "SCHEMA": syntax error
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/runner/work/store/store/manage.py", line 22, in <module>
main()
File "/home/runner/work/store/store/manage.py", line 18, in main
execute_from_command_line(sys.argv)
File "/opt/hostedtoolcache/Python/3.9.9/x64/lib/python3.9/site-packages/django/core/management/__init__.py", line 419, in execute_from_command_line
utility.execute()
File "/opt/hostedtoolcache/Python/3.9.9/x64/lib/python3.9/site-packages/django/core/management/__init__.py", line 413, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/opt/hostedtoolcache/Python/3.9.9/x64/lib/python3.9/site-packages/django/core/management/commands/test.py", line 23, in run_from_argv
super().run_from_argv(argv)
File "/opt/hostedtoolcache/Python/3.9.9/x64/lib/python3.9/site-packages/django/core/management/base.py", line 354, in run_from_argv
self.execute(*args, **cmd_options)
File "/opt/hostedtoolcache/Python/3.9.9/x64/lib/python3.9/site-packages/django/core/management/base.py", line 398, in execute
output = self.handle(*args, **options)
File "/opt/hostedtoolcache/Python/3.9.9/x64/lib/python3.9/site-packages/django/core/management/commands/test.py", line 55, in handle
failures = test_runner.run_tests(test_labels)
File "/opt/hostedtoolcache/Python/3.9.9/x64/lib/python3.9/site-packages/django/test/runner.py", line 736, in run_tests
self.teardown_databases(old_config)
File "/opt/hostedtoolcache/Python/3.9.9/x64/lib/python3.9/site-packages/django_heroku/core.py", line 41, in teardown_databases
self._wipe_tables(connection)
File "/opt/hostedtoolcache/Python/3.9.9/x64/lib/python3.9/site-packages/django_heroku/core.py", line 26, in _wipe_tables
cursor.execute(
File "/opt/hostedtoolcache/Python/3.9.9/x64/lib/python3.9/site-packages/django/db/backends/utils.py", line 66, in execute
return self._execute_with_wrappers(sql, params, many=False, executor=self._execute)
File "/opt/hostedtoolcache/Python/3.9.9/x64/lib/python3.9/site-packages/django/db/backends/utils.py", line 75, in _execute_with_wrappers
return executor(sql, params, many, context)
File "/opt/hostedtoolcache/Python/3.9.9/x64/lib/python3.9/site-packages/django/db/backends/utils.py", line 84, in _execute
return self.cursor.execute(sql, params)
File "/opt/hostedtoolcache/Python/3.9.9/x64/lib/python3.9/site-packages/django/db/utils.py", line 90, in __exit__
raise dj_exc_value.with_traceback(traceback) from exc_value
File "/opt/hostedtoolcache/Python/3.9.9/x64/lib/python3.9/site-packages/django/db/backends/utils.py", line 82, in _execute
return self.cursor.execute(sql)
File "/opt/hostedtoolcache/Python/3.9.9/x64/lib/python3.9/site-packages/django/db/backends/sqlite3/base.py", line 421, in execute
return Database.Cursor.execute(self, query)
django.db.utils.OperationalError: near "SCHEMA": syntax error
Error: Process completed with exit code 1.
These are all just default Django files and I haven't messed with any of them. I don't really know what to do about it and internet searches yield nothing helpful.
I just had the same issue. In my case I had django_heroku in my project's settings.py, and based on the comments it looks like your app is using it too.
I had something like this in settings.py:
import django_heroku
django_heroku.settings(locals())
Change it to the following so django_heroku isn't used on GitHub Actions (make sure os is imported):
import os

if os.environ.get('ENVIRONMENT') != 'github':
    import django_heroku
    django_heroku.settings(locals())
and then declare an environment variable in the workflow file, I have this:
jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      max-parallel: 4
      matrix:
        python-version: [3.7, 3.8, 3.9, '3.10']
    env:
      DJANGO_SECRET_KEY: "someKey"
      ENVIRONMENT: github
Note: this assumes you have environment variables set up. There are different ways of doing this, so depending on how yours are set up you may need to change how they are accessed.
Edit: as per the comments, you don't have to declare an additional environment variable, because GitHub Actions provides a default variable called GITHUB_ACTIONS, so you can do:
if os.environ.get('GITHUB_ACTIONS') != 'true':
    import django_heroku
    django_heroku.settings(locals())

Unable to load web page with seleniumwire

I am unable to load the web page using seleniumwire; I see this error in the browser:
This page isn't working
xxx.xyz didn't send any data.
ERR_EMPTY_RESPONSE
When I replace seleniumwire with selenium while initializing the webdriver, the issue is no longer observed.
Seleniumwire had been working fine until the error below started occurring a couple of days ago.
Seleniumwire version: 4.4.0
Python 3.9
MacOS Big Sur
AttributeError: module 'lib' has no attribute 'SSL_CTX_get0_param'
ERROR:seleniumwire.server:127.0.0.1:61095: Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/seleniumwire/thirdparty/mitmproxy/server/server.py", line 113, in handle
root_layer()
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/seleniumwire/thirdparty/mitmproxy/server/modes/http_proxy.py", line 9, in __call__
layer()
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/seleniumwire/thirdparty/mitmproxy/server/protocol/tls.py", line 285, in __call__
layer()
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/seleniumwire/thirdparty/mitmproxy/server/protocol/http1.py", line 100, in __call__
layer()
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/seleniumwire/thirdparty/mitmproxy/server/protocol/http.py", line 206, in __call__
if not self._process_flow(flow):
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/seleniumwire/thirdparty/mitmproxy/server/protocol/http.py", line 285, in _process_flow
return self.handle_regular_connect(f)
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/seleniumwire/thirdparty/mitmproxy/server/protocol/http.py", line 224, in handle_regular_connect
layer()
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/seleniumwire/thirdparty/mitmproxy/server/protocol/tls.py", line 278, in __call__
self._establish_tls_with_client_and_server()
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/seleniumwire/thirdparty/mitmproxy/server/protocol/tls.py", line 358, in _establish_tls_with_client_and_server
self._establish_tls_with_server()
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/seleniumwire/thirdparty/mitmproxy/server/protocol/tls.py", line 445, in _establish_tls_with_server
self.server_conn.establish_tls(
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/seleniumwire/thirdparty/mitmproxy/connections.py", line 295, in establish_tls
self.convert_to_tls(cert=client_cert, sni=sni, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/seleniumwire/thirdparty/mitmproxy/net/tcp.py", line 382, in convert_to_tls
context = tls.create_client_context(
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/seleniumwire/thirdparty/mitmproxy/net/tls.py", line 285, in create_client_context
param = SSL._lib.SSL_CTX_get0_param(context._context)
AttributeError: module 'lib' has no attribute 'SSL_CTX_get0_param'
This looks like you are using an outdated version of the cryptography library.
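A likely fix, assuming pip manages the Python 3.9 environment shown in the traceback, is to upgrade cryptography together with its pyOpenSSL binding and then restart the script:
pip3 install --upgrade cryptography pyOpenSSL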

Scrapyd-Deploy: SPIDER_MODULES not found

I am trying to deploy a Scrapy 2.1.0 project with scrapyd-deploy 1.2 and get this error:
scrapyd-deploy example
/Library/Frameworks/Python.framework/Versions/3.8/bin/scrapyd-deploy:23: ScrapyDeprecationWarning: Module `scrapy.utils.http` is deprecated, Please import from `w3lib.http` instead.
from scrapy.utils.http import basic_auth_header
fatal: No names found, cannot describe anything.
Packing version r1-master
Deploying to project "crawler" in http://myip:6843/addversion.json
Server response (200):
{"node_name": "spider1", "status": "error", "message": "/usr/local/lib/python3.8/dist-packages/scrapy/utils/project.py:90: ScrapyDeprecationWarning: Use of environment variables prefixed with SCRAPY_ to override settings is deprecated. The following environment variables are currently defined: EGG_VERSION\n warnings.warn(\nTraceback (most recent call last):\n File \"/usr/lib/python3.8/runpy.py\", line 193, in _run_module_as_main\n return _run_code(code, main_globals, None,\n File \"/usr/lib/python3.8/runpy.py\", line 86, in _run_code\n exec(code, run_globals)\n File \"/usr/local/lib/python3.8/dist-packages/scrapyd/runner.py\", line 40, in <module>\n main()\n File \"/usr/local/lib/python3.8/dist-packages/scrapyd/runner.py\", line 37, in main\n execute()\n File \"/usr/local/lib/python3.8/dist-packages/scrapy/cmdline.py\", line 142, in execute\n cmd.crawler_process = CrawlerProcess(settings)\n File \"/usr/local/lib/python3.8/dist-packages/scrapy/crawler.py\", line 280, in __init__\n super(CrawlerProcess, self).__init__(settings)\n File \"/usr/local/lib/python3.8/dist-packages/scrapy/crawler.py\", line 152, in __init__\n self.spider_loader = self._get_spider_loader(settings)\n File \"/usr/local/lib/python3.8/dist-packages/scrapy/crawler.py\", line 146, in _get_spider_loader\n return loader_cls.from_settings(settings.frozencopy())\n File \"/usr/local/lib/python3.8/dist-packages/scrapy/spiderloader.py\", line 60, in from_settings\n return cls(settings)\n File \"/usr/local/lib/python3.8/dist-packages/scrapy/spiderloader.py\", line 24, in __init__\n self._load_all_spiders()\n File \"/usr/local/lib/python3.8/dist-packages/scrapy/spiderloader.py\", line 46, in _load_all_spiders\n for module in walk_modules(name):\n File \"/usr/local/lib/python3.8/dist-packages/scrapy/utils/misc.py\", line 69, in walk_modules\n mod = import_module(path)\n File \"/usr/lib/python3.8/importlib/__init__.py\", line 127, in import_module\n return _bootstrap._gcd_import(name[level:], package, level)\n File \"<frozen importlib._bootstrap>\", line 1014, in _gcd_import\n File \"<frozen importlib._bootstrap>\", line 991, in _find_and_load\n File \"<frozen importlib._bootstrap>\", line 973, in _find_and_load_unlocked\nModuleNotFoundError: No module named 'crawler.spiders_prod'\n"}
crawler.spiders_prod is the first module defined in SPIDER_MODULES
Part of crawler.settings.py:
SPIDER_MODULES = ['crawler.spiders_prod', 'crawler.spiders_dev']
NEWSPIDER_MODULE = 'crawler.spiders_dev'
The crawler works locally, but when deployed it fails to import whatever I name the folder my spiders live in.
scrapyd-deploy setup.py:
# Automatically created by: scrapyd-deploy

from setuptools import setup, find_packages

setup(
    name         = 'project',
    version      = '1.0',
    packages     = find_packages(),
    entry_points = {'scrapy': ['settings = crawler.settings']},
)
scrapy.cfg:
[deploy:example]
url = http://myip:6843/
username = test
password = whatever.
project = crawler
version = GIT
Is this possibly a bug or am I missing something?
Modules have to be initialised as Python packages. This is done by simply placing the following file into each folder defined as a module:
__init__.py
This solved the problem I described.
Learning:
If you want to split your spiders into folders, it is not enough to simply create a folder and list it as a module in the settings file; you also need to place this file into the new folder. Funnily enough, the crawler works locally without the file; only the deployment to scrapyd fails.
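For the layout implied by the SPIDER_MODULES setting above, that would look roughly like this (folder names taken from the question):
touch crawler/spiders_prod/__init__.py
touch crawler/spiders_dev/__init__.py
scrapyd-deploy example    # re-package and deploy; the spider modules should now be importable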

Using python-rq with Zope

I'm trying to use Python-RQ with Zope by calling an external method (for background tasks) from the ZMI after a certain operation. The file called by the external method resides in Extensions. It initialises the connection to Redis and imports a module that runs the background tasks. The question is: where should this imported file be placed? Python-RQ does not seem to recognise it if I put it inside the Products directory; it throws "no module named Products.xyz". Below is the code snippet:
from redis import Redis
from rq import Queue

from Products.def_update_company_status import ae_update_company_status

q = Queue(connection=Redis())

def rq_worker(context):
    q.enqueue(ae_update_company_status)
    return 'DONE'
The rq_worker function is invoked by the external method.
Below is the error
18:12:40 default: Products.def_update_company_status.ae_update_company_status() (4b2b5c81-e329-4031-a3e7-b9b1bb198278)
18:12:40 ImportError: No module named Products.def_update_company_status
Traceback (most recent call last):
File "/home/zope/ams/lib/python2.6/site-packages/rq-0.6.0-py2.6.egg/rq/worker.py", line 588, in perform_job
rv = job.perform()
File "/home/zope/ams/lib/python2.6/site-packages/rq-0.6.0-py2.6.egg/rq/job.py", line 498, in perform
try:
File "/home/zope/ams/lib/python2.6/site-packages/rq-0.6.0-py2.6.egg/rq/job.py", line 206, in func
File "/home/zope/ams/lib/python2.6/site-packages/rq-0.6.0-py2.6.egg/rq/utils.py", line 150, in import_attribute
module = importlib.import_module(module_name)
File "build/bdist.linux-x86_64/egg/importlib/__init__.py", line 37, in import_module
__import__(name)
ImportError: No module named Products.def_update_company_status
18:12:40 Moving job to u'failed' queue
18:12:40
18:12:40 *** Listening on default...

GSUTIL traceback-Linux Mint

I'm trying to install gsutil; after installation it gives the following output for every command:
Traceback (most recent call last):
File "/usr/local/bin/gsutil", line 5, in <module>
from pkg_resources import load_entry_point
File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 2749, in <module>
working_set = WorkingSet._build_master()
File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 446, in _build_master
return cls._build_from_requirements(__requires__)
File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 459, in _build_from_requirements
dists = ws.resolve(reqs, Environment())
File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 632, in resolve
raise VersionConflict(dist,req) # XXX put more info here
pkg_resources.VersionConflict: (httplib2 0.8 (/usr/lib/python2.7/dist-packages), Requirement.parse('httplib2>=0.9.1'))
That means you need to update the version of httplib2 installed on your system to at least v 0.9.1.
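One way to do that, assuming pip is available for the system Python 2.7 that gsutil runs under:
sudo pip install --upgrade 'httplib2>=0.9.1'
gsutil version    # verify the command now runs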