When I try to run the following .yml role I get an error from nsupdate.
I am using CentOS 7 and the machine is running BIND.
When I run nsupdate manually, against either the original DNS server or from the Ansible master, I can update the records; it only fails when I use the nsupdate module. Any help? Thanks!
tasks/main.yml
This is the relevant part of the code:
- name: Add or modify ansible.example.org A to 192.168.1.1
  community.general.nsupdate:
    server: "10.0.0.40"
    zone: "ben.com."
    record: "ansible"
    value: "192.168.1.1"
  when: ansible_eth1.ipv4.address == '10.0.0.40'
The error:
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: SyntaxError: invalid syntax
fatal: [10.0.0.40]: FAILED! => {"changed": false, "module_stderr": "Shared connection to 10.0.0.40 closed.\r\n", "module_stdout": "Traceback (most recent call last):\r\n File \"/root/.ansible/tmp/ansible-tmp-1624977590.7-4712-16053022547656/AnsiballZ_nsupdate.py\", line 102, in <module>\r\n _ansiballz_main()\r\n File \"/root/.ansible/tmp/ansible-tmp-1624977590.7-4712-16053022547656/AnsiballZ_nsupdate.py\", line 94, in _ansiballz_main\r\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\r\n File \"/root/.ansible/tmp/ansible-tmp-1624977590.7-4712-16053022547656/AnsiballZ_nsupdate.py\", line 40, in invoke_module\r\n runpy.run_module(mod_name='ansible_collections.community.general.plugins.modules.nsupdate', init_globals=None, run_name='__main__', alter_sys=True)\r\n File \"/usr/lib64/python2.7/runpy.py\", line 176, in run_module\r\n fname, loader, pkg_name)\r\n File \"/usr/lib64/python2.7/runpy.py\", line 82, in _run_module_code\r\n mod_name, mod_fname, mod_loader, pkg_name)\r\n File \"/usr/lib64/python2.7/runpy.py\", line 72, in _run_code\r\n exec code in run_globals\r\n File \"/tmp/ansible_community.general.nsupdate_payload_xAhaGd/ansible_community.general.nsupdate_payload.zip/ansible_collections/community/general/plugins/modules/nsupdate.py\", line 189, in <module>\r\n File \"build/bdist.linux-x86_64/egg/dns/update.py\", line 21, in <module>\r\n File \"/usr/lib/python2.7/site-packages/dnspython-2.1.1.dev77+gf61a939-py2.7.egg/dns/message.py\", line 201\r\n s.write(f';{name}\\n')\r\n ^\r\nSyntaxError: invalid syntax\r\n", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
The relevant lines in the traceback:
File "build/bdist.linux-x86_64/egg/dns/update.py", line 21, in <module>
File "/usr/lib/python2.7/site-packages/dnspython-2.1.1.dev77+gf61a939-py2.7.egg/dns/message.py", line 201
s.write(f';{name}\n')
^
SyntaxError: invalid syntax
The problem was that the module was probably running under the wrong interpreter on the host machine, as @Patrick mentioned: dnspython 2.x uses f-strings, which the CentOS 7 default Python 2.7 cannot parse.
Fixed it by adding host group vars like so:
[DNS_Master:vars]
ansible_python_interpreter=/usr/local/bin/python3.9
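For anyone hitting this: the root cause is visible in the traceback itself. dnspython 2.x uses f-strings, which only exist in Python 3.6+, so importing it under the CentOS 7 default /usr/bin/python (2.7) dies at parse time. A minimal stdlib-only sanity check (my own sketch, not part of the module) you can run on the managed host:

```python
import sys

# dnspython >= 2.0 uses f-strings (added in Python 3.6), so merely importing
# it under the CentOS 7 default Python 2.7 fails with SyntaxError before any
# DNS work happens. A guard like this makes the mismatch obvious up front.
if sys.version_info < (3, 6):
    raise RuntimeError(
        "dnspython 2.x requires Python 3.6+, but this is %s.%s"
        % sys.version_info[:2]
    )
print("interpreter OK:", sys.version.split()[0])
```

Pointing ansible_python_interpreter at a Python 3 binary makes Ansible run its modules, and therefore the dnspython import, under an interpreter that can parse them.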
Related
I got an issue when running OpenStack Horizon.
I failed to log in to Horizon after setting up all the requirements, and I got a SyntaxError in /var/log/apache/error.log.
mod_wsgi (pid=5342): Failed to exec Python script file '/usr/share/openstack-dashboard/openstack_dashboard/wsgi.py'.
mod_wsgi (pid=5342): Exception occurred processing WSGI script '/usr/share/openstack-dashboard/openstack_dashboard/wsgi.py'.
File "/usr/lib/python3/dist-packages/openstack_dashboard/settings.py", line 239, in <module>
from local.local_settings import * # noqa: F403,H303
File "/usr/lib/python3/dist-packages/openstack_dashboard/local/local_settings.py", line 137
'enable_router': False,
SyntaxError: invalid syntax
Why does the SyntaxError: invalid syntax occur?
I solved the problem after checking /etc/openstack-dashboard/local_settings.py.
OPENSTACK_NEUTRON_NETWORK = {
...
'enable_router': False,
'enable_quotas': False,
'enable_ipv6': False,
'enable_distributed_router': False,
'enable_ha_router': False,
'enable_fip_topology_check': False,
}
I hadn't removed the ... in OPENSTACK_NEUTRON_NETWORK.
After removing it, Horizon works fine.
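To see why this is a syntax error rather than a settings problem, here is a small sketch (mine, not from the Horizon code) that compiles the offending fragment in isolation: the ... left over from the sample config is parsed as Python's Ellipsis literal, and an Ellipsis followed directly by a 'key': value pair is not valid inside a dict display, so importing the file fails immediately.

```python
# Reproduce the Horizon failure in isolation: the "..." placeholder left in
# local_settings.py is parsed as the Ellipsis literal, and Ellipsis followed
# by a 'key': value pair is invalid Python, so the module fails to import
# with SyntaxError: invalid syntax.
broken = (
    "OPENSTACK_NEUTRON_NETWORK = {\n"
    "    ...\n"
    "    'enable_router': False,\n"
    "}\n"
)
try:
    compile(broken, "local_settings.py", "exec")
    result = "compiled"
except SyntaxError:
    result = "SyntaxError"
print(result)  # -> SyntaxError
```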
When I run the train.py script from https://github.com/tensorflow/models/tree/master/official/nlp, I got a 403 permission error.
python3 official/nlp/train.py --tpu=con-bert1 --experiment=bert/pretraining --mode=train --model_dir=gs://con_bioberturk/general/ --config_file=gs://con_bioberturk/bert_base.yaml --config_file=gs://con_bioberturk/pretrain.yaml --params_override="task.init_checkpoint=gs://con_bioberturk/bert-base-turkish-cased-tf/model.ckpt"
and my output is below:
I1115 07:49:02.847452 139877506112576 train_utils.py:368] Saving experiment configuration to gs://con_bioberturk/general/params.yaml
Traceback (most recent call last):
File "/usr/share/tpu/models/official/modeling/hyperparams/params_dict.py", line 349, in save_params_dict_to_yaml
yaml.dump(params.as_dict(), f, default_flow_style=False)
File "/usr/local/lib/python3.8/dist-packages/yaml/__init__.py", line 290, in dump
return dump_all([data], stream, Dumper=Dumper, **kwds)
File "/usr/local/lib/python3.8/dist-packages/yaml/__init__.py", line 278, in dump_all
dumper.represent(data)
File "/usr/local/lib/python3.8/dist-packages/yaml/representer.py", line 28, in represent
self.serialize(node)
File "/usr/local/lib/python3.8/dist-packages/yaml/serializer.py", line 55, in serialize
self.emit(DocumentEndEvent(explicit=self.use_explicit_end))
File "/usr/local/lib/python3.8/dist-packages/yaml/emitter.py", line 115, in emit
self.state()
File "/usr/local/lib/python3.8/dist-packages/yaml/emitter.py", line 220, in expect_document_end
self.flush_stream()
File "/usr/local/lib/python3.8/dist-packages/yaml/emitter.py", line 790, in flush_stream
self.stream.flush()
File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/lib/io/file_io.py", line 219, in flush
self._writable_file.flush()
tensorflow.python.framework.errors_impl.PermissionDeniedError: Error executing an HTTP request: HTTP response code 403 with body '{
"error": {
"code": 403,
"message": "Access denied.",
"errors": [
{
"message": "Access denied.",
"domain": "global",
"reason": "forbidden"
}
]
}
}
when initiating an upload to gs://con_bioberturk/general/params.yaml
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "official/nlp/train.py", line 82, in <module>
app.run(main)
File "/usr/local/lib/python3.8/dist-packages/absl/app.py", line 308, in run
_run_main(main, args)
File "/usr/local/lib/python3.8/dist-packages/absl/app.py", line 254, in _run_main
sys.exit(main(argv))
File "official/nlp/train.py", line 47, in main
train_utils.serialize_config(params, model_dir)
File "/usr/share/tpu/models/official/core/train_utils.py", line 370, in serialize_config
hyperparams.save_params_dict_to_yaml(params, params_save_path)
File "/usr/share/tpu/models/official/modeling/hyperparams/params_dict.py", line 349, in save_params_dict_to_yaml
yaml.dump(params.as_dict(), f, default_flow_style=False)
File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/lib/io/file_io.py", line 197, in __exit__
self.close()
File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/lib/io/file_io.py", line 239, in close
self._writable_file.close()
tensorflow.python.framework.errors_impl.PermissionDeniedError: Error executing an HTTP request: HTTP response code 403 with body '{
"error": {
"code": 403,
"message": "Access denied.",
"errors": [
{
"message": "Access denied.",
"domain": "global",
"reason": "forbidden"
}
]
}
}
'
Here are my settings:
tpu-vm name:con-bert1
TPU software version: tpu-vm-tf-2.10.0-pod
cloud bucket (con_bioberturk) and tpu-vm are in the same location
Looks like you need to add the service account that is currently active on your TPU VM to the GCS IAM. Instructions here - https://github.com/google-research/text-to-text-transfer-transformer/issues/1003
If that fails, try running gcloud auth login --update-adc on your TPU VM to add your credentials.
Hope this resolves your issue.
I am trying to export all data of AWX to a JSON file with the following command. The command is part of a GitLab CI/CD pipeline, so a self-hosted GitLab runner executes it. I tried running the same command on another machine, and it works fine there. The Python version is the same on both sides.
awx --conf.host http://{AWX_URL} --conf.token {AWX_TOKEN} --conf.insecure export -k --job-template > job_template.json;
DEBUG:awxkit.api.pages.page:get_page: /api/v2/workflow_job_templates/
DEBUG:awxkit.api.pages.page:set_page: <class 'awxkit.api.pages.workflow_job_templates.WorkflowJobTemplates'> /api/v2/workflow_job_templates/
Traceback (most recent call last):
File "/usr/lib/python3.9/site-packages/awxkit/cli/__init__.py", line 25, in run
cli.parse_resource()
File "/usr/lib/python3.9/site-packages/awxkit/cli/client.py", line 152, in parse_resource
self.resource = parse_resource(self, skip_deprecated=skip_deprecated)
File "/usr/lib/python3.9/site-packages/awxkit/cli/resource.py", line 220, in parse_resource
response = command.handle(client, parser)
File "/usr/lib/python3.9/site-packages/awxkit/cli/resource.py", line 179, in handle
data = client.v2.export_assets(**kwargs)
File "/usr/lib/python3.9/site-packages/awxkit/api/pages/api.py", line 201, in export_assets
endpoint = getattr(self, resource)
File "/usr/lib/python3.9/site-packages/awxkit/api/pages/page.py", line 115, in __getattr__
raise AttributeError("{!r} object has no attribute {!r}".format(self.__class__.__name__, name))
AttributeError: 'ApiV2' object has no attribute 'execution_environments'
Downgrade the awxkit version to 17.1.0:
pip install awxkit==17.1.0
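To illustrate why the downgrade helps (a simplified sketch, not the real awxkit code; the class and endpoint names are stand-ins): newer awxkit walks every resource it knows how to export, including execution_environments, but an older AWX server does not expose that endpoint, so the attribute lookup on the API page object falls through to __getattr__ and raises the AttributeError seen above.

```python
# Simplified illustration (not the real awxkit implementation) of the failure:
# the client asks the API page object for an endpoint the server never
# advertised, and the page object's __getattr__ raises AttributeError.
class ApiV2:
    # endpoints an older AWX server actually exposes (hypothetical subset)
    _endpoints = {"job_templates", "inventory", "credentials"}

    def __getattr__(self, name):
        if name in self._endpoints:
            return "/api/v2/{}/".format(name)
        raise AttributeError("{!r} object has no attribute {!r}".format(
            self.__class__.__name__, name))

api = ApiV2()
print(api.job_templates)           # resolves fine on old and new servers
try:
    api.execution_environments     # mirrors the failing lookup in the traceback
except AttributeError as err:
    print("AttributeError:", err)
```

Pinning awxkit to a release that predates the execution_environments endpoint keeps the client's list of exportable resources in sync with what the server offers.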
Running the following task in my playbook:
- name: "Check if hold file exists..."
  amazon.aws.aws_s3:
    region: <region>
    bucket: <bucket>
    prefix: "hold_jobs_{{ env }}"
    mode: list
  register: s3_files
I got the following error:
{
"exception": "Traceback (most recent call last):\n File \"/tmp/ansible_amazon.aws.aws_s3_payload_b92r42e3/ansible_amazon.aws.aws_s3_payload.zip/ansible_collections/amazon/aws/plugins/modules/aws_s3.py\", line 427, in bucket_check\n File \"/var/lib/awx/venv/ansible/lib/python3.6/site-packages/botocore/client.py\", line 357, in _api_call\n return self._make_api_call(operation_name, kwargs)\n File \"/var/lib/awx/venv/ansible/lib/python3.6/site-packages/botocore/client.py\", line 661, in _make_api_call\n raise error_class(parsed_response, operation_name)\nbotocore.exceptions.ClientError: An error occurred (400) when calling the HeadBucket operation: Bad Request\n",
"boto3_version": "1.9.223",
"botocore_version": "1.12.253",
"error": {
"code": "400",
"message": "Bad Request"
    }
}
I am using this configuration:
ansible [core 2.11.7]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/var/lib/awx/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.6/site-packages/ansible
ansible collection location = /var/lib/awx/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.6.8 (default, Apr 16 2020, 01:36:27) [GCC 8.3.1 20191121 (Red Hat 8.3.1-5)]
jinja version = 2.10.1
libyaml = True
I need help to solve this; thanks in advance. For the moment I am using a palliative workaround with the shell module.
I am new to scrapyd.
I have inserted the code below into my scrapy.cfg file.
[settings]
default = uk.settings
[deploy:scrapyd]
url = http://localhost:6800/
project=ukmall
[deploy:scrapyd2]
url = http://scrapyd.mydomain.com/api/scrapyd/
username = john
password = secret
If I run the command below:
$ scrapyd-deploy -l
I get:
scrapyd2 http://scrapyd.mydomain.com/api/scrapyd/
scrapyd http://localhost:6800/
To see all available projects:
scrapyd-deploy -L scrapyd
But it shows nothing on my machine.
Ref: http://scrapyd.readthedocs.org/en/latest/deploy.html#deploying-a-project
If I do:
$ scrapy deploy scrapyd2
anandhakumar@MMTPC104:~/ScrapyProject/mall_uk$ scrapy deploy scrapyd2
Packing version 1412322816
Traceback (most recent call last):
File "/usr/bin/scrapy", line 4, in <module>
execute()
File "/usr/lib/pymodules/python2.7/scrapy/cmdline.py", line 142, in execute
_run_print_help(parser, _run_command, cmd, args, opts)
File "/usr/lib/pymodules/python2.7/scrapy/cmdline.py", line 88, in _run_print_help
func(*a, **kw)
File "/usr/lib/pymodules/python2.7/scrapy/cmdline.py", line 149, in _run_command
cmd.run(args, opts)
File "/usr/lib/pymodules/python2.7/scrapy/commands/deploy.py", line 103, in run
egg, tmpdir = _build_egg()
File "/usr/lib/pymodules/python2.7/scrapy/commands/deploy.py", line 228, in _build_egg
retry_on_eintr(check_call, [sys.executable, 'setup.py', 'clean', '-a', 'bdist_egg', '-d', d], stdout=o, stderr=e)
File "/usr/lib/pymodules/python2.7/scrapy/utils/python.py", line 276, in retry_on_eintr
return function(*args, **kw)
File "/usr/lib/python2.7/subprocess.py", line 540, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['/usr/bin/python', 'setup.py', 'clean', '-a', 'bdist_egg', '-d', '/tmp/scrapydeploy-VLM6W7']' returned non-zero exit status 1
anandhakumar@MMTPC104:~/ScrapyProject/mall_uk$
If I do this for another project, it shows:
$ scrapy deploy scrapyd
Packing version 1412325181
Deploying to project "project2" in http://localhost:6800/addversion.json
Server response (200):
{"status": "error", "message": "[Errno 13] Permission denied: 'eggs'"}
You'll only be able to list the spiders that have been deployed. If you haven't deployed anything yet, then to deploy your spider you simply use scrapy deploy:
scrapy deploy [ <target:project> | -l <target> | -L ]
vagrant@portia:~/takeovertheworld$ scrapy deploy scrapyd2
Packing version 1410145736
Deploying to project "takeovertheworld" in http://ec2-xx-xxx-xx-xxx.compute-1.amazonaws.com:6800/addversion.json
Server response (200):
{"status": "ok", "project": "takeovertheworld", "version": "1410145736", "spiders": 1}
Verify that the project was installed correctly by accessing the scrapyd API:
vagrant@portia:~/takeovertheworld$ curl http://ec2-xx-xxx-xx-xxx.compute-1.amazonaws.com:6800/listprojects.json
{"status": "ok", "projects": ["takeovertheworld"]}
I had the same error too. As @hugsbrugs said, it was because a folder inside the scrapy project had root rights. So, I did this:
sudo scrapy deploy scrapyd2
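Instead of reaching for sudo on every deploy, you can first confirm that the eggs directory is really the problem. Below is a small sketch with a hypothetical can_write helper (not part of scrapy or scrapyd) that checks whether the current user can create files in a directory; the "[Errno 13] Permission denied: 'eggs'" response means the build step could not write there.

```python
# "[Errno 13] Permission denied: 'eggs'" means the egg-build step cannot
# write into the eggs/ directory, typically because an earlier sudo run left
# it owned by root. can_write is a hypothetical helper, not a scrapy API.
import os
import tempfile

def can_write(path):
    """Return True if the current user can create a file inside path."""
    if not os.path.isdir(path):
        return False
    try:
        with tempfile.TemporaryFile(dir=path):
            return True
    except OSError:
        return False

print(can_write(tempfile.gettempdir()))  # a writable directory -> True
```

If it returns False for the eggs directory, changing its ownership back to your user once (for example with chown) is cleaner than running every deploy as root.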