AWX CLI export command inside GitLab runner throws 'ApiV2' object has no attribute 'execution_environments' - gitlab-ci

I am trying to export all AWX data to a JSON file with the following command. The command is part of a GitLab CI/CD pipeline, so a self-hosted GitLab runner executes it. I tried running the same command on another machine, where it works fine. The Python version is the same on both sides.
awx --conf.host http://{AWX_URL} --conf.token {AWX_TOKEN} --conf.insecure export -k --job-template > job_template.json;
DEBUG:awxkit.api.pages.page:get_page: /api/v2/workflow_job_templates/
DEBUG:awxkit.api.pages.page:set_page: <class 'awxkit.api.pages.workflow_job_templates.WorkflowJobTemplates'> /api/v2/workflow_job_templates/
Traceback (most recent call last):
  File "/usr/lib/python3.9/site-packages/awxkit/cli/__init__.py", line 25, in run
    cli.parse_resource()
  File "/usr/lib/python3.9/site-packages/awxkit/cli/client.py", line 152, in parse_resource
    self.resource = parse_resource(self, skip_deprecated=skip_deprecated)
  File "/usr/lib/python3.9/site-packages/awxkit/cli/resource.py", line 220, in parse_resource
    response = command.handle(client, parser)
  File "/usr/lib/python3.9/site-packages/awxkit/cli/resource.py", line 179, in handle
    data = client.v2.export_assets(**kwargs)
  File "/usr/lib/python3.9/site-packages/awxkit/api/pages/api.py", line 201, in export_assets
    endpoint = getattr(self, resource)
  File "/usr/lib/python3.9/site-packages/awxkit/api/pages/page.py", line 115, in __getattr__
    raise AttributeError("{!r} object has no attribute {!r}".format(self.__class__.__name__, name))
AttributeError: 'ApiV2' object has no attribute 'execution_environments'

Newer awxkit releases also try to export the execution_environments endpoint, which older AWX servers do not expose, so the attribute lookup fails when the client is newer than the server. Downgrade the awxkit version to 17.1.0 to match the server:
pip install awxkit==17.1.0
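A minimal sketch of pinning the client inside the GitLab CI job itself (the job name and image are illustrative placeholders, not from the original pipeline):

awx-export:
  image: python:3.9
  script:
    - pip install awxkit==17.1.0
    - awx --conf.host "$AWX_URL" --conf.token "$AWX_TOKEN" --conf.insecure export -k --job-template > job_template.json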

Related

I can't manage files from s3 using ansible aws_s3 module

Running the following task in my playbook:
- name: "Check if hold file exists..."
amazon.aws.aws_s3:
region: <region>
bucket: <bucket>
prefix: "hold_jobs_{{ env }}"
mode: list
register: s3_files
I got the following error:
{
  "exception": "Traceback (most recent call last):\n File \"/tmp/ansible_amazon.aws.aws_s3_payload_b92r42e3/ansible_amazon.aws.aws_s3_payload.zip/ansible_collections/amazon/aws/plugins/modules/aws_s3.py\", line 427, in bucket_check\n File \"/var/lib/awx/venv/ansible/lib/python3.6/site-packages/botocore/client.py\", line 357, in _api_call\n return self._make_api_call(operation_name, kwargs)\n File \"/var/lib/awx/venv/ansible/lib/python3.6/site-packages/botocore/client.py\", line 661, in _make_api_call\n raise error_class(parsed_response, operation_name)\nbotocore.exceptions.ClientError: An error occurred (400) when calling the HeadBucket operation: Bad Request\n",
  "boto3_version": "1.9.223",
  "botocore_version": "1.12.253",
  "error": {
    "code": "400",
    "message": "Bad Request"
  }
}
I am using this configuration:
ansible [core 2.11.7]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/var/lib/awx/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.6/site-packages/ansible
ansible collection location = /var/lib/awx/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.6.8 (default, Apr 16 2020, 01:36:27) [GCC 8.3.1 20191121 (Red Hat 8.3.1-5)]
jinja version = 2.10.1
libyaml = True
I need help to solve this; thanks in advance. For the moment I am using a palliative solution with the shell module.
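For reference, a sketch of that kind of shell-module workaround, assuming the AWS CLI is installed on the target host (the task name and failed_when handling are my own; aws s3 ls exits with 1 when nothing matches the prefix):

- name: "Check if hold file exists (workaround)"
  ansible.builtin.shell: "aws s3 ls s3://<bucket>/hold_jobs_{{ env }}"
  register: s3_files
  changed_when: false
  failed_when: s3_files.rc > 1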

Nsupdate module in ansible causes an error

When I try to run the following .yml role I get an error with nsupdate.
I am using CentOS 7 and the machine is running BIND.
When I run nsupdate by hand, from either the original DNS server or the Ansible master, I can update the records; only the nsupdate module doesn't work. Any help? Thank you!
tasks/main.yml
This is the part with the relevant code:
- name: "Add or modify ansible.example.org A to 192.168.1.1"
  community.general.nsupdate:
    server: "10.0.0.40"
    zone: "ben.com."
    record: "ansible"
    value: "192.168.1.1"
  when: ansible_eth1.ipv4.address == '10.0.0.40'
The error:
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: SyntaxError: invalid syntax
fatal: [10.0.0.40]: FAILED! => {"changed": false, "module_stderr": "Shared connection to 10.0.0.40 closed.\r\n", "module_stdout": "Traceback (most recent call last):\r\n File \"/root/.ansible/tmp/ansible-tmp-1624977590.7-4712-16053022547656/AnsiballZ_nsupdate.py\", line 102, in <module>\r\n _ansiballz_main()\r\n File \"/root/.ansible/tmp/ansible-tmp-1624977590.7-4712-16053022547656/AnsiballZ_nsupdate.py\", line 94, in _ansiballz_main\r\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\r\n File \"/root/.ansible/tmp/ansible-tmp-1624977590.7-4712-16053022547656/AnsiballZ_nsupdate.py\", line 40, in invoke_module\r\n runpy.run_module(mod_name='ansible_collections.community.general.plugins.modules.nsupdate', init_globals=None, run_name='__main__', alter_sys=True)\r\n File \"/usr/lib64/python2.7/runpy.py\", line 176, in run_module\r\n fname, loader, pkg_name)\r\n File \"/usr/lib64/python2.7/runpy.py\", line 82, in _run_module_code\r\n mod_name, mod_fname, mod_loader, pkg_name)\r\n File \"/usr/lib64/python2.7/runpy.py\", line 72, in _run_code\r\n exec code in run_globals\r\n File \"/tmp/ansible_community.general.nsupdate_payload_xAhaGd/ansible_community.general.nsupdate_payload.zip/ansible_collections/community/general/plugins/modules/nsupdate.py\", line 189, in <module>\r\n File \"build/bdist.linux-x86_64/egg/dns/update.py\", line 21, in <module>\r\n File \"/usr/lib/python2.7/site-packages/dnspython-2.1.1.dev77+gf61a939-py2.7.egg/dns/message.py\", line 201\r\n s.write(f';{name}\\n')\r\n ^\r\nSyntaxError: invalid syntax\r\n", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
The relevant lines in the traceback:
File "build/bdist.linux-x86_64/egg/dns/update.py", line 21, in <module>
File "/usr/lib/python2.7/site-packages/dnspython-2.1.1.dev77+gf61a939-py2.7.egg/dns/message.py", line 201
s.write(f';{name}\n')
^
SyntaxError: invalid syntax
The problem was that the module was probably running on the wrong interpreter on the host machine, as @Patrick mentioned: dnspython 2.x uses f-strings (the s.write(f';{name}\n') line above), which are a SyntaxError under the host's Python 2.7.
Fixed it by adding host group vars like so:
[DNS_Master:vars]
ansible_python_interpreter=/usr/local/bin/python3.9
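Equivalently, the interpreter can be pinned at the play level instead of in the inventory (a sketch; the role name dns_update is hypothetical):

- hosts: DNS_Master
  vars:
    ansible_python_interpreter: /usr/local/bin/python3.9
  roles:
    - dns_update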

Does QPython2 support Selenium?

I installed Selenium on my QPython2 by using:
pip install selenium
And it downloaded Selenium.
I have the Firefox browser on my Android phone, so I tested whether it would work like it did on my PC:
from selenium import webdriver
browser=webdriver.Firefox()
browser.get('http://www.somewebsite.com/')
print 'done'
But it gives me an error like this:
ipts/.last_tmp.py" && exit <
Traceback (most recent call last):
  File "/storage/sdcard0/qpython/scripts/.last_tmp.py", line 3, in <module>
    browser=webdriver.Firefox()
  File "/data/data/org.qpython.qpy/files/lib/python2.7/site-packages/selenium-3.3.1-py2.7.egg/selenium/webdriver/firefox/webdriver.py", line 144, in __init__
    self.service = Service(executable_path, log_path=log_path)
  File "/data/data/org.qpython.qpy/files/lib/python2.7/site-packages/selenium-3.3.1-py2.7.egg/selenium/webdriver/firefox/service.py", line 44, in __init__
    log_file = open(log_path, "a+") if log_path is not None and log_path != "" else None
IOError: [Errno 30] Read-only file system: 'geckodriver.log'
1|u0_a125@A536:/ $
[UPD]: I was wondering whether my mistake was not installing geckodriver on Android? But I cannot find geckodriver for Android.
Can anyone help me with this?
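The immediate IOError is just Selenium trying to create geckodriver.log in the current, read-only directory; in Selenium 3.x this can be redirected with the log_path argument (a sketch; the writable path is an assumption about your device):

from selenium import webdriver

# Hypothetical writable location on the device's storage.
browser = webdriver.Firefox(log_path='/storage/sdcard0/qpython/geckodriver.log')

Even with the log redirected, Firefox automation still needs a geckodriver binary, and as noted above there is no geckodriver build for Android, so the next step will likely still fail.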

Can't deploy scrapy to scrapyd server

I'm trying to deploy a spider created via Portia. Portia and scrapyd are both the latest versions.
I'm running the scrapyd server simply with the command:
scrapyd
I'm getting this result on my machine locally:
$> cd PROJECT_PATH_HERE;
$> scrapyd-deploy
Packing version 1463420166
Deploying to project "retail" in http://X.X.X.X:6800/addversion.json
Server response (200):
{"status": "error", "message": "AttributeError: 'list' object has no attribute 'iteritems'", "node_name": "vaua0048313.online-vm.com"}
On the server side (Ubuntu 12.04.5 LTS):
root@server:~# scrapyd
2016-05-16 17:35:42+0000 [-] Log opened.
2016-05-16 17:35:42+0000 [-] twistd 16.1.1 (/usr/bin/python 2.7.3) starting up.
2016-05-16 17:35:42+0000 [-] reactor class: twisted.internet.epollreactor.EPollReactor.
2016-05-16 17:35:42+0000 [-] Site starting on 6800
2016-05-16 17:35:42+0000 [-] Starting factory <twisted.web.server.Site instance at 0x2f5d950>
2016-05-16 17:35:42+0000 [Launcher] Scrapyd 1.1.0 started: max_proc=12, runner='scrapyd.runner'
2016-05-16 17:36:06+0000 [_GenericHTTPChannelProtocol,0,78.111.181.96] Unhandled Error
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/twisted/web/http.py", line 1767, in allContentReceived
    req.requestReceived(command, path, version)
  File "/usr/local/lib/python2.7/dist-packages/twisted/web/http.py", line 768, in requestReceived
    self.process()
  File "/usr/local/lib/python2.7/dist-packages/twisted/web/server.py", line 183, in process
    self.render(resrc)
  File "/usr/local/lib/python2.7/dist-packages/twisted/web/server.py", line 234, in render
    body = resrc.render(self)
--- <exception caught here> ---
  File "/usr/local/lib/python2.7/dist-packages/scrapyd/webservice.py", line 17, in render
    return JsonResource.render(self, txrequest)
  File "/usr/local/lib/python2.7/dist-packages/scrapyd/utils.py", line 19, in render
    r = resource.Resource.render(self, txrequest)
  File "/usr/local/lib/python2.7/dist-packages/twisted/web/resource.py", line 250, in render
    return m(request)
  File "/usr/local/lib/python2.7/dist-packages/scrapyd/webservice.py", line 79, in render_POST
    spiders = get_spider_list(project, version=version)
  File "/usr/local/lib/python2.7/dist-packages/scrapyd/utils.py", line 116, in get_spider_list
    raise RuntimeError(msg.splitlines()[-1])
exceptions.RuntimeError: AttributeError: 'list' object has no attribute 'iteritems'
2016-05-16 17:36:06+0000 [-] "78.111.181.96" - - [16/May/2016:17:36:05 +0000] "POST /addversion.json HTTP/1.1" 200 135 "-" "Python-urllib/2.7"
What am I doing wrong?
I think the problem comes from your settings.py. On recent Scrapy versions, ITEM_PIPELINES must be specified in the dict format below, not as a list; that mismatch is exactly what the "'list' object has no attribute 'iteritems'" error is complaining about:
ITEM_PIPELINES = {
    'coolscrapy.pipelines.ArticleDataBasePipeline': 5,
}
Modify ITEM_PIPELINES in Portia_Projects/spiders/settings.py to:
ITEM_PIPELINES = {'slybot.dupefilter.DupeFilterPipeline':1}

Projects were not shown in scrapyd

I am new to scrapyd.
I have inserted the code below into my scrapy.cfg file.
[settings]
default = uk.settings
[deploy:scrapyd]
url = http://localhost:6800/
project=ukmall
[deploy:scrapyd2]
url = http://scrapyd.mydomain.com/api/scrapyd/
username = john
password = secret
If I run the command below:
$ scrapyd-deploy -l
I get:
scrapyd2 http://scrapyd.mydomain.com/api/scrapyd/
scrapyd http://localhost:6800/
To see all available projects:
scrapyd-deploy -L scrapyd
But it shows nothing on my machine?
Ref: http://scrapyd.readthedocs.org/en/latest/deploy.html#deploying-a-project
If I did:
$ scrapy deploy scrapyd2
anandhakumar@MMTPC104:~/ScrapyProject/mall_uk$ scrapy deploy scrapyd2
Packing version 1412322816
Traceback (most recent call last):
  File "/usr/bin/scrapy", line 4, in <module>
    execute()
  File "/usr/lib/pymodules/python2.7/scrapy/cmdline.py", line 142, in execute
    _run_print_help(parser, _run_command, cmd, args, opts)
  File "/usr/lib/pymodules/python2.7/scrapy/cmdline.py", line 88, in _run_print_help
    func(*a, **kw)
  File "/usr/lib/pymodules/python2.7/scrapy/cmdline.py", line 149, in _run_command
    cmd.run(args, opts)
  File "/usr/lib/pymodules/python2.7/scrapy/commands/deploy.py", line 103, in run
    egg, tmpdir = _build_egg()
  File "/usr/lib/pymodules/python2.7/scrapy/commands/deploy.py", line 228, in _build_egg
    retry_on_eintr(check_call, [sys.executable, 'setup.py', 'clean', '-a', 'bdist_egg', '-d', d], stdout=o, stderr=e)
  File "/usr/lib/pymodules/python2.7/scrapy/utils/python.py", line 276, in retry_on_eintr
    return function(*args, **kw)
  File "/usr/lib/python2.7/subprocess.py", line 540, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['/usr/bin/python', 'setup.py', 'clean', '-a', 'bdist_egg', '-d', '/tmp/scrapydeploy-VLM6W7']' returned non-zero exit status 1
anandhakumar@MMTPC104:~/ScrapyProject/mall_uk$
If I do this for another project, it shows:
$ scrapy deploy scrapyd
Packing version 1412325181
Deploying to project "project2" in http://localhost:6800/addversion.json
Server response (200):
{"status": "error", "message": "[Errno 13] Permission denied: 'eggs'"}
You'll only be able to list the spiders that have been deployed. If you haven't deployed anything yet, then to deploy your spider you simply use scrapy deploy:
scrapy deploy [ <target:project> | -l <target> | -L ]
vagrant@portia:~/takeovertheworld$ scrapy deploy scrapyd2
Packing version 1410145736
Deploying to project "takeovertheworld" in http://ec2-xx-xxx-xx-xxx.compute-1.amazonaws.com:6800/addversion.json
Server response (200):
{"status": "ok", "project": "takeovertheworld", "version": "1410145736", "spiders": 1}
Verify that the project was installed correctly by accessing the scrapyd API:
vagrant@portia:~/takeovertheworld$ curl http://ec2-xx-xxx-xx-xxx.compute-1.amazonaws.com:6800/listprojects.json
{"status": "ok", "projects": ["takeovertheworld"]}
I had the same error too. As @hugsbrugs said, it was because a folder inside the scrapy project had root rights, so I did this:
sudo scrapy deploy scrapyd2
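Rather than deploying as root every time, it may be cleaner to hand the project directory back to your own user first (a sketch; run it from the project root and adjust the user/group as needed):

sudo chown -R "$USER":"$USER" .
scrapy deploy scrapyd2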