HTTP error 400 during authentication when adding a group on a Check Point R81 firewall - automation

When I run ansible-playbook -i Inventory/host_file createGroup.yml I get the output:
PLAY [Create Group in SMS] ********************************************************************************************************
TASK [Gathering Facts] *************************************************************************************************************
ok: [192.168.19.5]
TASK [set-group] *******************************************************************************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ansible.module_utils.connection.ConnectionError: Server returned response without token info during connection authentication: 400
fatal: [192.168.19.5]: FAILED! => {"changed": false, "module_stderr": "Traceback (most recent call last):\n File \"/root/.ansible/tmp/ansible-local-38146d7ypqb5/ansible-tmp-1672395718.4953148-3880-262337703017219/AnsiballZ_cp_mgmt_group.py\", line 107, in <module>\n _ansiballz_main()\n File \"/root/.ansible/tmp/ansible-local-38146d7ypqb5/ansible-tmp-1672395718.4953148-3880-262337703017219/AnsiballZ_cp_mgmt_group.py\", line 99, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/root/.ansible/tmp/ansible-local-38146d7ypqb5/ansible-tmp-1672395718.4953148-3880-262337703017219/AnsiballZ_cp_mgmt_group.py\", line 47, in invoke_module\n runpy.run_module(mod_name='ansible_collections.check_point.mgmt.plugins.modules.cp_mgmt_group', init_globals=dict(_module_fqn='ansible_collections.check_point.mgmt.plugins.modules.cp_mgmt_group', _modlib_path=modlib_path),\n File \"/usr/lib64/python3.9/runpy.py\", line 225, in run_module\n return _run_module_code(code, init_globals, run_name, mod_spec)\n File \"/usr/lib64/python3.9/runpy.py\", line 97, in _run_module_code\n _run_code(code, mod_globals, init_globals,\n File \"/usr/lib64/python3.9/runpy.py\", line 87, in _run_code\n exec(code, run_globals)\n File \"/tmp/ansible_cp_mgmt_group_payload_h8pp29fy/ansible_cp_mgmt_group_payload.zip/ansible_collections/check_point/mgmt/plugins/modules/cp_mgmt_group.py\", line 139, in <module>\n File \"/tmp/ansible_cp_mgmt_group_payload_h8pp29fy/ansible_cp_mgmt_group_payload.zip/ansible_collections/check_point/mgmt/plugins/modules/cp_mgmt_group.py\", line 134, in main\n File \"/tmp/ansible_cp_mgmt_group_payload_h8pp29fy/ansible_cp_mgmt_group_payload.zip/ansible_collections/check_point/mgmt/plugins/module_utils/checkpoint.py\", line 317, in api_call\n File \"/tmp/ansible_cp_mgmt_group_payload_h8pp29fy/ansible_cp_mgmt_group_payload.zip/ansible_collections/check_point/mgmt/plugins/module_utils/checkpoint.py\", line 71, in send_request\n File \"/tmp/ansible_cp_mgmt_group_payload_h8pp29fy/ansible_cp_mgmt_group_payload.zip/ansible/module_utils/connection.py\", line 200, in __rpc__\nansible.module_utils.connection.ConnectionError: Server returned response without token info during connection authentication: 400\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
PLAY RECAP *************************************************************************************************************************
192.168.19.5 : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
This is the playbook:
---
- name: Create Group in SMS
  hosts: gvSMS
  connection: httpapi
  gather_facts: yes
  vars:
    ansible_network_os: checkpoint
    mgmt_user: ansible
    mgmt_password: *****
    mgmt_server: 192.168.19.5
    mgmt_fingerprint: RITE OS NELL WEAR SOY FLAW LOY DENT DENY SOUR PIE PEN
    policy_name: standard
    ansible_httpapi_validate_certs: no
    ansible_httpapi_use_ssl: yes
  tasks:
    - name: set-group
      cp_mgmt_group:
        name: EXT-PermittedToCOM
        state: present
        auto_publish_session: yes
      notify: publishPolicy
  handlers:
    - name: publishPolicy
      cp_mgmt_publish:
This is my host file (Inventory/host_file):
[gvSMS]
192.168.19.5 ansible_user=ansible ansible_password=*****
These are the vars for my object (Inventory/group_vars/gvSMS.yml):
#group_vars/gvSMS.yml
ansible_httpapi_validate_certs: no
ansible_httpapi_use_ssl: yes
ansible_network_os: checkpoint
mgmt_server: 192.168.19.5
mgmt_user: ansible
mgmt_password: *****
mgmt_fingerprint: RITE OS NELL WEAR SOY FLAW LOY DENT DENY SOUR PIE PEN
policy_name: Standard
This is my ansible.cfg file:
# Since Ansible 2.12 (core):
# To generate an example config file (a "disabled" one with all default settings, commented out):
# $ ansible-config init --disabled > ansible.cfg
#
# Also you can now have a more complete file by including existing plugins:
# ansible-config init --disabled -t all > ansible.cfg
# For previous versions of Ansible you can check for examples in the 'stable' branches of each version
# Note that this file was always incomplete and lagging changes to configuration settings
# for example, for 2.9: https://github.com/ansible/ansible/blob/stable-2.9/examples/ansible.cfg
[defaults]
inventory = Inventory/host_file
host_key_checking = true
retry_files_enabled = false
interpreter_python = /usr/bin/python3
[galaxy]
server=https://galaxy.ansible.com
There is connectivity between the firewall and the server, and I have enabled API management on the firewall.
I have tried different playbooks, but I get the same result. The Ansible ping works, so I think the problem is with the playbook.
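A quick way to narrow this down would be to call the Check Point management API login endpoint directly with the same credentials; the 400 above is raised during connection authentication (the login that should return a session token), so if the call below also fails, the problem is on the API/credentials side rather than in the group task. This is only a sketch for illustration, not part of the setup above; the localhost play and the uri module usage are assumptions:
# One-off check playbook (sketch, run from the control node)
- name: Verify Check Point management API login
  hosts: localhost
  gather_facts: false
  tasks:
    - name: POST the same credentials to /web_api/login
      ansible.builtin.uri:
        url: "https://192.168.19.5/web_api/login"
        method: POST
        validate_certs: false
        body_format: json
        return_content: true
        body:
          user: ansible
          password: "*****"   # redacted placeholder, as in the question
      register: login_result
      # The task fails here if the login is rejected, which already answers the question.

    - name: A successful login returns a session token (sid)
      ansible.builtin.debug:
        var: login_result.json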

Related

Ansible: How do you properly skip ssh first connection to fresh host?

Context: I'm trying to automate the provisioning of a fresh new server, but when a new machine is spawned and my Ansible playbook is run against it from my provisioning server, the usual message pops up:
The authenticity of host '192.168.1.25 (192.168.1.25)' can't be established.
ECDSA key fingerprint is SHA256:QF/AyFhYXaz5bjZ1O+kvceoOjBzmI8M1PYmg3lukYmE.
Are you sure you want to continue connecting (yes/no/[fingerprint])?
I am aware this question has been answered a couple of times already, but I do not want to add this line to my .cfg file or pass the corresponding argument when I launch an ansible-playbook command.
Problem: This answer came to my attention: https://stackoverflow.com/a/54735937/18647199
I copy-pasted the two tasks into my playbook, and when they run by themselves the playbook works properly, skipping the aforementioned prompt (even though it skips it on one server to which I still have to make the first connection). See:
TASK [Check known_hosts for 192.168.1.14] **************************************
ok: [192.168.1.16 -> localhost]
ok: [192.168.1.14 -> localhost]
ok: [192.168.1.25 -> localhost]
TASK [Ignore host key for 192.168.1.14 on first run] ***************************
skipping: [192.168.1.14]
skipping: [192.168.1.16]
skipping: [192.168.1.25]
PLAY RECAP *********************************************************************
192.168.1.14 : ok=1 changed=0 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
192.168.1.16 : ok=1 changed=0 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
192.168.1.25 : ok=1 changed=0 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
But if I add just one more task, the authentication prompt I am trying to skip appears again.
P.S. I am using OpenSSH, latest current version.
What I'm trying to run:
---
# all
- hosts: all
  # connection: local
  become: true
  gather_facts: false  # otherwise the ssh prompt appears
  tasks:
    - name: Check known_hosts
      local_action: shell ssh-keygen -F "{{ inventory_hostname }}"
      register: is_known
      failed_when: false
      changed_when: false
      ignore_errors: yes

    - name: debug message
      debug:
        msg: the "{{ inventory_hostname }}"" was tested with output "{{ is_known }}"

    - name: Ignore host key for "{{ inventory_hostname }}" on first run
      when: is_known.rc == 1
      set_fact:
        ansible_ssh_common_args: '-o StrictHostKeyChecking=no'

    - name: Bootstrap check
      stat:
        path: /home/bot/bootstrapped-ok
      register: bootstrap_result

[..] more code
Debug output:
ansible-playbook debug-bootstrap.yml
PLAY [all] *********************************************************************
TASK [Check known_hosts] *******************************************************
ok: [192.168.1.16 -> localhost]
ok: [192.168.1.14 -> localhost]
ok: [192.168.1.25 -> localhost]
TASK [debug message] ***********************************************************
ok: [192.168.1.14] => {
"msg": "the \"192.168.1.14\"\" was tested with output \"{'msg': 'non-zero return code', 'cmd': 'ssh-keygen -F \"192.168.1.14\"', 'stdout': '', 'stderr': 'do_known_hosts: hostkeys_foreach failed: No such file or directory', 'rc': 255, 'start': '2022-04-02 12:30:50.940041', 'end': '2022-04-02 12:30:50.943287', 'delta': '0:00:00.003246', 'changed': False, 'failed': False, 'stdout_lines': [], 'stderr_lines': ['do_known_hosts: hostkeys_foreach failed: No such file or directory'], 'failed_when_result': False}\""
}
ok: [192.168.1.16] => {
"msg": "the \"192.168.1.16\"\" was tested with output \"{'msg': 'non-zero return code', 'cmd': 'ssh-keygen -F \"192.168.1.16\"', 'stdout': '', 'stderr': 'do_known_hosts: hostkeys_foreach failed: No such file or directory', 'rc': 255, 'start': '2022-04-02 12:30:50.937097', 'end': '2022-04-02 12:30:50.941015', 'delta': '0:00:00.003918', 'changed': False, 'failed': False, 'stdout_lines': [], 'stderr_lines': ['do_known_hosts: hostkeys_foreach failed: No such file or directory'], 'failed_when_result': False}\""
}
ok: [192.168.1.25] => {
"msg": "the \"192.168.1.25\"\" was tested with output \"{'msg': 'non-zero return code', 'cmd': 'ssh-keygen -F \"192.168.1.25\"', 'stdout': '', 'stderr': 'do_known_hosts: hostkeys_foreach failed: No such file or directory', 'rc': 255, 'start': '2022-04-02 12:30:50.978944', 'end': '2022-04-02 12:30:50.982119', 'delta': '0:00:00.003175', 'changed': False, 'failed': False, 'stdout_lines': [], 'stderr_lines': ['do_known_hosts: hostkeys_foreach failed: No such file or directory'], 'failed_when_result': False}\""
}
TASK [Ignore host key for "192.168.1.14" on first run] *************************
skipping: [192.168.1.14]
skipping: [192.168.1.16]
skipping: [192.168.1.25]
TASK [Bootstrap check] *********************************************************
The authenticity of host '192.168.1.25 (192.168.1.25)' can't be established.
ECDSA key fingerprint is SHA256:QF/AyFhYXaz5bjZ1O+kvceoOjBzmI8M1PYmg3lukYmE.
Are you sure you want to continue connecting (yes/no/[fingerprint])? ok: [192.168.1.16]
ok: [192.168.1.14]
So it seems like the command ssh-keygen -F "{{ inventory_hostname }}" isn't doing what it's supposed to do, compared to running it manually in a terminal.
Question: Does anyone know how to implement that "one-time skip" or has a better way to do this for a fully automated provisioning / deploy?
(I tried to create a single .yml file with little success; I have hit a wall and do not have many ideas left on how to achieve a fully automated provisioning.)
I just added my answer to How to ignore ansible SSH authenticity checking?, which lists lots of options.
This is what we are using for stable hosts (when running the playbook from Jenkins and you simply want to accept the host key when connecting to the host for the first time), in the inventory file:
[all:vars]
ansible_ssh_common_args='-o StrictHostKeyChecking=accept-new'
And this is what we have for temporary hosts (in the end this ignores the host key entirely):
[all:vars]
ansible_ssh_common_args='-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null'
There is also an environment variable, or you can add it to a group/host variables file. There is no need to have it in the inventory; it was just convenient in our case.
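For example, a group_vars sketch with the same effect as the [all:vars] block above (the file path is an assumption; any group or host vars file that applies to the hosts works):
# group_vars/all.yml (assumed location)
ansible_ssh_common_args: '-o StrictHostKeyChecking=accept-new'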
Maybe this could help?

I can't manage files from s3 using ansible aws_s3 module

Running the following task in my playbook:
- name: "Check if hold file exists..."
amazon.aws.aws_s3:
region: <region>
bucket: <bucket>
prefix: "hold_jobs_{{ env }}"
mode: list
register: s3_files
I got the following error:
{
"exception": "Traceback (most recent call last):\n File \"/tmp/ansible_amazon.aws.aws_s3_payload_b92r42e3/ansible_amazon.aws.aws_s3_payload.zip/ansible_collections/amazon/aws/plugins/modules/aws_s3.py\", line 427, in bucket_check\n File \"/var/lib/awx/venv/ansible/lib/python3.6/site-packages/botocore/client.py\", line 357, in _api_call\n return self._make_api_call(operation_name, kwargs)\n File \"/var/lib/awx/venv/ansible/lib/python3.6/site-packages/botocore/client.py\", line 661, in _make_api_call\n raise error_class(parsed_response, operation_name)\nbotocore.exceptions.ClientError: An error occurred (400) when calling the HeadBucket operation: Bad Request\n",
"boto3_version": "1.9.223",
"botocore_version": "1.12.253",
"error": {
"code": "400",
"message": "Bad Request"
}
I am using this configuration:
ansible [core 2.11.7]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/var/lib/awx/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.6/site-packages/ansible
ansible collection location = /var/lib/awx/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.6.8 (default, Apr 16 2020, 01:36:27) [GCC 8.3.1 20191121 (Red Hat 8.3.1-5)]
jinja version = 2.10.1
libyaml = True
I need help to solve this; thanks in advance. For the moment I am using a palliative workaround with the shell module.
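For reference, a minimal sketch of that kind of shell-module workaround, assuming the AWS CLI is installed on the host and using the same placeholder bucket/region as above (this is an illustration, not the exact task in use):
- name: "Check if hold file exists (AWS CLI workaround)"
  ansible.builtin.shell: >
    aws s3api list-objects-v2
    --bucket <bucket>
    --prefix "hold_jobs_{{ env }}"
    --region <region>
  register: s3_files
  changed_when: false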

Nsupdate module in ansible causes an error

When I try to run the following .yml role, I get an error with nsupdate.
I am using CentOS 7 and the machine is running BIND.
When I run nsupdate manually from either the original DNS server or the Ansible master I can update the records; it only fails when I use the nsupdate module. Any help? Thank you!
tasks/main.yml
This is the part with the relevant code
- name: Add or modify ansible.example.org A to 192.168.1.1
  community.general.nsupdate:
    server: "10.0.0.40"
    zone: "ben.com."
    record: "ansible"
    value: "192.168.1.1"
  when: ansible_eth1.ipv4.address == '10.0.0.40'
The error:
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: SyntaxError: invalid syntax
fatal: [10.0.0.40]: FAILED! => {"changed": false, "module_stderr": "Shared connection to 10.0.0.40 closed.\r\n", "module_stdout": "Traceback (most recent call last):\r\n File \"/root/.ansible/tmp/ansible-tmp-1624977590.7-4712-16053022547656/AnsiballZ_nsupdate.py\", line 102, in <module>\r\n _ansiballz_main()\r\n File \"/root/.ansible/tmp/ansible-tmp-1624977590.7-4712-16053022547656/AnsiballZ_nsupdate.py\", line 94, in _ansiballz_main\r\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\r\n File \"/root/.ansible/tmp/ansible-tmp-1624977590.7-4712-16053022547656/AnsiballZ_nsupdate.py\", line 40, in invoke_module\r\n runpy.run_module(mod_name='ansible_collections.community.general.plugins.modules.nsupdate', init_globals=None, run_name='__main__', alter_sys=True)\r\n File \"/usr/lib64/python2.7/runpy.py\", line 176, in run_module\r\n fname, loader, pkg_name)\r\n File \"/usr/lib64/python2.7/runpy.py\", line 82, in _run_module_code\r\n mod_name, mod_fname, mod_loader, pkg_name)\r\n File \"/usr/lib64/python2.7/runpy.py\", line 72, in _run_code\r\n exec code in run_globals\r\n File \"/tmp/ansible_community.general.nsupdate_payload_xAhaGd/ansible_community.general.nsupdate_payload.zip/ansible_collections/community/general/plugins/modules/nsupdate.py\", line 189, in <module>\r\n File \"build/bdist.linux-x86_64/egg/dns/update.py\", line 21, in <module>\r\n File \"/usr/lib/python2.7/site-packages/dnspython-2.1.1.dev77+gf61a939-py2.7.egg/dns/message.py\", line 201\r\n s.write(f';{name}\\n')\r\n ^\r\nSyntaxError: invalid syntax\r\n", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
The relevant lines from the traceback:
File "build/bdist.linux-x86_64/egg/dns/update.py", line 21, in <module>
File "/usr/lib/python2.7/site-packages/dnspython-2.1.1.dev77+gf61a939-py2.7.egg/dns/message.py", line 201
s.write(f';{name}\n')
^
SyntaxError: invalid syntax
The problem was that it was probably running with the wrong interpreter on the host machine, as @Patrick mentioned.
I fixed it by adding group vars for the host like so:
[DNS_Master:vars]
ansible_python_interpreter=/usr/local/bin/python3.9
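The same variable can also be set at the play level instead of in the inventory; a minimal sketch, assuming the interpreter exists at that path on the target host (the role name is hypothetical):
- hosts: DNS_Master
  vars:
    ansible_python_interpreter: /usr/local/bin/python3.9   # same path as in the group vars above
  roles:
    - dns_update   # hypothetical role name; the nsupdate task from the question would live here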

Meld error with Datastax Enterprise

Provisioning a DSE cluster with the Lifecycle Manager fails consistently. The master node (also the one OpsCenter is running on) installed correctly. Each one of the other nodes fails the install (also config) task. I have double-checked the SSH credentials and ports. Any ideas on how to investigate further and fix the issue would be great.
Please excuse the length - trying to provide all of the relevant info.
Ubuntu 14.04.4,
JRE: 1.8.0.91,
DSE 5.0.0
job events:
...
"results": [
{
"event-subtype": "start",
"event-type": "milestone",
"message": "job started...",
...
},
{
"event-subtype": "invocation",
"event-type": "shell-command",
"message": "Invoked command: if [ -x $(which yum) ] && [ -f /etc/redhat-release -o -f /etc/SuSE-release ]; then echo -n yum; elif [ -x $(which apt-get) ]; then echo -n apt; fi"
...
},
{
"event-subtype": "uploaded-facts",
"event-type": "milestone",
"message": "Uploaded facts to OpsCenter server",
...
},
{
"event-subtype": "meld-error",
"event-type": "error",
"message": "Unexpected error executing meld",
...
},
{
"event-subtype": "MeldError",
"event-type": "error",
"message": "Meld failed on: name=\"NODE-2\" ssh-management-address=\"<IP>\" node-id=\"<node-id>\" job-id=\"<job-id>\" stdout=\"\r\n\" stderr=\"\"",
...
}
]
opscenterd.log
/var/log/opscenter/opscenterd.log-2016-07-02 16:34:16,848 [opscenterd] INFO: Install job started for node name="NODE-2" ssh-management-address="<IP>" node-id="<node-id>" (async-thread-macro-53)
/var/log/opscenter/opscenterd.log-2016-07-02 16:34:16,850 [opscenterd] INFO: using ssh-private-key (async-thread-macro-53)
/var/log/opscenter/opscenterd.log-2016-07-02 16:34:18,478 [opscenterd] INFO: Received milestone from node name="NODE-2" ssh-management-address="<IP>" node-id="<node-id>" message="Uploaded facts to OpsCenter server" job-id="a630c081-6ac1-4b00-ac08-18fef320e0d5" (MainThread)
/var/log/opscenter/opscenterd.log:2016-07-02 16:34:18,675 [opscenterd] ERROR: Received error from node event-subtype="meld-error" job-id="a630c081-6ac1-4b00-ac08-18fef320e0d5" name="NODE-2" traceback="Traceback (most recent call last):
/var/log/opscenter/opscenterd.log: File \"meld.py\", line 3313, in run
/var/log/opscenter/opscenterd.log- rc = engine.go()
/var/log/opscenter/opscenterd.log: File \"meld.py\", line 2991, in go
/var/log/opscenter/opscenterd.log- self.file_manager.get_config_files()
/var/log/opscenter/opscenterd.log: File \"meld.py\", line 1280, in get_config_files
/var/log/opscenter/opscenterd.log- {\"accept\": \"application/json\"})
/var/log/opscenter/opscenterd.log: File \"meld.py\", line 598, in get
/var/log/opscenter/opscenterd.log- return json.loads(response.read())
/var/log/opscenter/opscenterd.log- File \"/usr/lib/python2.7/socket.py\", line 351, in read
/var/log/opscenter/opscenterd.log- data = self._sock.recv(rbufsize)
/var/log/opscenter/opscenterd.log- File \"/usr/lib/python2.7/httplib.py\", line 549, in read
/var/log/opscenter/opscenterd.log- return self._read_chunked(amt)
/var/log/opscenter/opscenterd.log- File \"/usr/lib/python2.7/httplib.py\", line 609, in _read_chunked
/var/log/opscenter/opscenterd.log- value.append(self._safe_read(amt))
/var/log/opscenter/opscenterd.log- File \"/usr/lib/python2.7/httplib.py\", line 666, in _safe_read
/var/log/opscenter/opscenterd.log- raise IncompleteRead(''.join(s), amt)
/var/log/opscenter/opscenterd.log:IncompleteRead: IncompleteRead(4153 bytes read, 4039 more expected)" ssh-management-address="<IP>" node-id="<node-id>" event-type="error" message="Unexpected error executing meld" (MainThread)
/var/log/opscenter/opscenterd.log-2016-07-02 16:34:18,892 [opscenterd] ERROR: Install job a630c081-6ac1-4b00-ac08-18fef320e0d5 failed! (async-thread-macro-54)
/var/log/opscenter/opscenterd.log:2016-07-02 16:34:19,105 [opscenterd] ERROR: Meld failed on: name="NODE-2" ssh-management-address="<IP>" node-id="<node-id>" job-id="a630c081-6ac1-4b00-ac08-18fef320e0d5" stdout="
/var/log/opscenter/opscenterd.log-" stderr="" (async-thread-macro-53)
Thank you
EDIT: I captured the HTTP traffic between NODE-2 and the master. The error occurs while transferring config files. One of them is not transferred completely for some reason. The JSON looks reasonable until some gibberish appears:
{"filename": "dse.yaml", "contents": {"internode_messaging_options": {"client_worker_threads": 16, "port": 8609, "server_worker_threads": 16, "server_acceptor_thread
Yvatv+~UK{.kMI4^QOrqQTDX_3"DPm,v!"H&M$!1M7
LRYCs{l>-df;cj
W6C9dq
The config files are valid and do work on the master node. Only the replication fails.
OpsCenter LCM developer here. Your issue is caused by OPSC-8851 in the LCM known issues list: http://docs.datastax.com/en/opscenter/6.0/opsc/release_notes/opscReleaseNotes600.html
This is only triggered under certain network conditions and was discovered too close to release to get fixed in 6.0.0. It's a high priority though, and will be fixed in a subsequent release soon. Unfortunately, I don't think there's anything you can do to work around this in the field. If you're a DataStax customer, you could contact support and potentially get a patch now to work around the issue... otherwise the only thing I can suggest is to watch the upcoming release notes.
Edit: I should also note that in our tests the issue is intermittent. LCM is designed so you can rerun failed jobs safely (aka it's idempotent) so in all but the most extreme cases you can also work around this just by rerunning your job.
You can specify the private IP for Listen Address and 0.0.0.0 for broadcast address and LCM should be able to provision appropriately.

Projects were not shown in scrapyd

I am new to scrapyd.
I have inserted the configuration below into my scrapy.cfg file.
[settings]
default = uk.settings
[deploy:scrapyd]
url = http://localhost:6800/
project=ukmall
[deploy:scrapyd2]
url = http://scrapyd.mydomain.com/api/scrapyd/
username = john
password = secret
If I run the command below:
$ scrapyd-deploy -l
I get:
scrapyd2             http://scrapyd.mydomain.com/api/scrapyd/
scrapyd              http://localhost:6800/
To see all available projects:
scrapyd-deploy -L scrapyd
But it shows nothing on my machine.
Ref: http://scrapyd.readthedocs.org/en/latest/deploy.html#deploying-a-project
If I do:
$ scrapy deploy scrapyd2
anandhakumar@MMTPC104:~/ScrapyProject/mall_uk$ scrapy deploy scrapyd2
Packing version 1412322816
Traceback (most recent call last):
File "/usr/bin/scrapy", line 4, in <module>
execute()
File "/usr/lib/pymodules/python2.7/scrapy/cmdline.py", line 142, in execute
_run_print_help(parser, _run_command, cmd, args, opts)
File "/usr/lib/pymodules/python2.7/scrapy/cmdline.py", line 88, in _run_print_help
func(*a, **kw)
File "/usr/lib/pymodules/python2.7/scrapy/cmdline.py", line 149, in _run_command
cmd.run(args, opts)
File "/usr/lib/pymodules/python2.7/scrapy/commands/deploy.py", line 103, in run
egg, tmpdir = _build_egg()
File "/usr/lib/pymodules/python2.7/scrapy/commands/deploy.py", line 228, in _build_egg
retry_on_eintr(check_call, [sys.executable, 'setup.py', 'clean', '-a', 'bdist_egg', '-d', d], stdout=o, stderr=e)
File "/usr/lib/pymodules/python2.7/scrapy/utils/python.py", line 276, in retry_on_eintr
return function(*args, **kw)
File "/usr/lib/python2.7/subprocess.py", line 540, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['/usr/bin/python', 'setup.py', 'clean', '-a', 'bdist_egg', '-d', '/tmp/scrapydeploy-VLM6W7']' returned non-zero exit status 1
anandhakumar@MMTPC104:~/ScrapyProject/mall_uk$
If I do this for another project, it works:
$ scrapy deploy scrapyd
Packing version 1412325181
Deploying to project "project2" in http://localhost:6800/addversion.json
Server response (200):
{"status": "error", "message": "[Errno 13] Permission denied: 'eggs'"}
You'll only be able to list the spiders that have been deployed. If you haven't deployed anything yet, then to deploy your spider you simply use scrapy deploy:
scrapy deploy [ <target:project> | -l <target> | -L ]
vagrant@portia:~/takeovertheworld$ scrapy deploy scrapyd2
Packing version 1410145736
Deploying to project "takeovertheworld" in http://ec2-xx-xxx-xx-xxx.compute-1.amazonaws.com:6800/addversion.json
Server response (200):
{"status": "ok", "project": "takeovertheworld", "version": "1410145736", "spiders": 1}
Verify that the project was installed correctly by accessing the scrapyd API:
vagrant@portia:~/takeovertheworld$ curl http://ec2-xx-xxx-xx-xxx.compute-1.amazonaws.com:6800/listprojects.json
{"status": "ok", "projects": ["takeovertheworld"]}
I had the same error too. As @hugsbrugs said, it was because a folder inside the scrapy project had root rights. So I did this:
sudo scrapy deploy scrapyd2
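A less drastic alternative, assuming the root-owned files live inside the project directory (the path below is only an example), is to reset ownership and then deploy without sudo:
sudo chown -R $USER:$USER ~/ScrapyProject/mall_uk
scrapy deploy scrapyd2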