OpenStack Cinder replication with Ceph

I have set up two Ceph clusters (version 12.2.9 Luminous). The first cluster is named "primary", the second "secondary". Two-way replication between the two clusters is configured using rbd-mirror, and images are created and replicated successfully. Now I want to configure volume replication in Cinder, but I'm having trouble with the Cinder configuration. Please help.
My settings:
File /etc/cinder/cinder.conf:
.....
[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
replication_device = backend_id:secondary,conf:/etc/ceph/secondary.conf,user:cinder,pool:volumes
rbd_cluster_name = primary
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/primary.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
volume_backend_name = ceph
rbd_user = cinder
rbd_secret_uuid = 5e54739d-37cf-49e8-93f5-5489f51d8ef0
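As a sanity check (my addition, not part of the original post), it may be worth confirming that the cinder cephx user can actually reach both clusters with the conf and keyring files referenced above, for example:
# suggested checks, not from the original post
$ rbd --cluster primary --id cinder ls volumes
$ rbd --cluster secondary --id cinder ls volumes
$ rbd --cluster primary --id cinder mirror pool info volumes
$ rbd --cluster secondary --id cinder mirror pool info volumes
The --cluster name makes the rbd client pick up /etc/ceph/&lt;cluster&gt;.conf and /etc/ceph/&lt;cluster&gt;.client.cinder.keyring, which matches the file names listed below.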
Listing of /etc/ceph/:
$ ls /etc/ceph
primary.client.cinder.keyring primary.client.glance.keyring primary.conf secondary.client.cinder.keyring secondary.client.glance.keyring secondary.conf
Cinder Type Configuration
$ cinder type-create replicated
+--------------------------------------+------------+-------------+-----------+
| ID | Name | Description | Is_Public |
+--------------------------------------+------------+-------------+-----------+
| f839f331-4af0-4626-b377-e9baa9789c9d | replicated | - | True |
+--------------------------------------+------------+-------------+-----------+
$ cinder type-key replicated set volume_backend_name=ceph
$ cinder type-key replicated set replication_enabled='<is> True'
$ cinder extra-specs-list
+--------------------------------------+------------+---------------------------------------------------------------------+
| ID | Name | extra_specs |
+--------------------------------------+------------+---------------------------------------------------------------------+
| 1fef1cf4-3cbc-4f8a-9f06-ef07444c2570 | type2 | {} |
| 709a93bc-b8fb-4b45-8af4-d4422d10fc73 | type1 | {} |
| f839f331-4af0-4626-b377-e9baa9789c9d | replicated | {'replication_enabled': '<is> True', 'volume_backend_name': 'ceph'} |
+--------------------------------------+------------+---------------------------------------------------------------------+
Replicated Volume Creation
$ cinder create --volume-type replicated --name fingers-crossed 1
$ cinder list
+--------------------------------------+--------+-----------------+------+-------------+----------+-------------+
| ID | Status | Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+--------+-----------------+------+-------------+----------+-------------+
| e2ed712d-fd47-4c5b-ae97-75e5834120e4 | error | fingers-crossed | 1 | replicated | false | |
+--------------------------------------+--------+-----------------+------+-------------+----------+-------------+
Volume log
$ cat /var/log/cinder/volume.log
.......
2018-11-08 13:34:47.369 2348 WARNING cinder.volume.manager [req-7f374fa5-7d95-4dae-81c1-81b839beb083 e92f563fa9 e92f59855c986 - default default] Task 'cinder.volume.flows.manager.create_volume.CreateVolumeFromSpecTask;volume:create' (62ad1df9-9f38-4e4a-af1d-688f8fd793b1) transitioned into state 'FAILURE' from state 'RUNNING'
5 predecessors (most recent first):
Atom 'cinder.volume.flows.manager.create_volume.NotifyVolumeActionTask;volume:create, create.start' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'volume': Volume(_name_id=None,admin_metadata={},attach_status='detached',availability_zone='nova',bootable=False,cluster=<?>,cluster_name=None,consistencygroup=<?>,consistencygroup_id=None,created_at=2018-11-08T10:34:44Z,deleted=False,deleted_at=None,display_description=None,display_name='fingers-crossed',ec2_id=None,encryption_key_id=None,glance_metadata=<?>,group=<?>,group_id=None,host='server.loc#ceph#ceph',id=e2ed712d-fd47-4c5b-ae97-75e5834120e4,launched_at=None,metadata={},migration_status=None,multiattach=False,previous_status=None,project_id='e92f59855c986',provider_auth=None,provider_geometry=None,provider_id=None,provider_location=None,replication_driver_data=None,replication_extended_status=None,replication_status=None,scheduled_at=2018-11-08T10:34:46Z,service_uuid=None,shared_targets=True,size=1,snapshot_id=None,snapshots=<?>,source_volid=None,status='creating',terminated_at=None,updated_at=2018-11-08T10:34:46Z,user_id='e92f563fa9',volume_attachment=<?>,volume_type=VolumeType(f839f331-4af0-4626-b377-e9baa9789c9d),volume_type_id=f839f331-4af0-4626-b377-e9baa9789c9d), 'context': <cinder.context.RequestContext object at 0x7fc0fe180390>}, 'provides': None}
|__Atom 'cinder.volume.flows.manager.create_volume.ExtractVolumeSpecTask;volume:create' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'volume': Volume(_name_id=None,admin_metadata={},attach_status='detached',availability_zone='nova',bootable=False,cluster=<?>,cluster_name=None,consistencygroup=<?>,consistencygroup_id=None,created_at=2018-11-08T10:34:44Z,deleted=False,deleted_at=None,display_description=None,display_name='fingers-crossed',ec2_id=None,encryption_key_id=None,glance_metadata=<?>,group=<?>,group_id=None,host='server.loc#ceph#ceph',id=e2ed712d-fd47-4c5b-ae97-75e5834120e4,launched_at=None,metadata={},migration_status=None,multiattach=False,previous_status=None,project_id='e92f59855c986',provider_auth=None,provider_geometry=None,provider_id=None,provider_location=None,replication_driver_data=None,replication_extended_status=None,replication_status=None,scheduled_at=2018-11-08T10:34:46Z,service_uuid=None,shared_targets=True,size=1,snapshot_id=None,snapshots=<?>,source_volid=None,status='creating',terminated_at=None,updated_at=2018-11-08T10:34:46Z,user_id='e92f563fa9',volume_attachment=<?>,volume_type=VolumeType(f839f331-4af0-4626-b377-e9baa9789c9d),volume_type_id=f839f331-4af0-4626-b377-e9baa9789c9d), 'request_spec': RequestSpec(CG_backend=<?>,backup_id=None,cgsnapshot_id=None,consistencygroup_id=None,group_backend=<?>,group_id=None,image_id=None,resource_backend=<?>,snapshot_id=None,source_replicaid=<?>,source_volid=None,volume=Volume(e2ed712d-fd47-4c5b-ae97-75e5834120e4),volume_id=e2ed712d-fd47-4c5b-ae97-75e5834120e4,volume_properties=VolumeProperties,volume_type=VolumeType(f839f331-4af0-4626-b377-e9baa9789c9d)), 'context': <cinder.context.RequestContext object at 0x7fc0fe180390>}, 'provides': {'status': u'creating', 'volume_size': 1, 'volume_name': u'volume-e2ed712d-fd47-4c5b-ae97-75e5834120e4', 'type': 'raw', 'volume_id': u'e2ed712d-fd47-4c5b-ae97-75e5834120e4'}}
|__Atom 'cinder.volume.flows.manager.create_volume.OnFailureRescheduleTask;volume:create' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'volume': Volume(_name_id=None,admin_metadata={},attach_status='detached',availability_zone='nova',bootable=False,cluster=<?>,cluster_name=None,consistencygroup=<?>,consistencygroup_id=None,created_at=2018-11-08T10:34:44Z,deleted=False,deleted_at=None,display_description=None,display_name='fingers-crossed',ec2_id=None,encryption_key_id=None,glance_metadata=<?>,group=<?>,group_id=None,host='server.loc#ceph#ceph',id=e2ed712d-fd47-4c5b-ae97-75e5834120e4,launched_at=None,metadata={},migration_status=None,multiattach=False,previous_status=None,project_id='e92f59855c986',provider_auth=None,provider_geometry=None,provider_id=None,provider_location=None,replication_driver_data=None,replication_extended_status=None,replication_status=None,scheduled_at=2018-11-08T10:34:46Z,service_uuid=None,shared_targets=True,size=1,snapshot_id=None,snapshots=<?>,source_volid=None,status='creating',terminated_at=None,updated_at=2018-11-08T10:34:46Z,user_id='e92f563fa9',volume_attachment=<?>,volume_type=VolumeType(f839f331-4af0-4626-b377-e9baa9789c9d),volume_type_id=f839f331-4af0-4626-b377-e9baa9789c9d), 'filter_properties': {u'config_options': {}, u'request_spec': {u'backup_id': None, u'volume_properties': {u'status': u'creating', u'volume_type_id': u'f839f331-4af0-4626-b377-e9baa9789c9d', u'project_id': u'e92f59855c986', u'user_id': u'e92f563fa9', u'availability_zone': u'nova', u'reservations': [u'c2f86481-cc7f-47e2-91f5-6081cf4fc75d', u'e58874c0-7e52-4af2-9b92-1f14b27ceb1e', u'f30eee62-9f37-4d01-9459-5e61255dfef1', u'f2e6d81d-4728-42e3-87cc-df2687947cab'], u'multiattach': False, u'attach_status': u'detached', u'source_volid': None, u'cgsnapshot_id': None, u'metadata': {}, u'qos_specs': None, u'encryption_key_id': None, u'display_description': None, u'snapshot_id': None, u'display_name': u'fingers-crossed', u'group_id': None, u'consistencygroup_id': None, u'size': 1}, u'source_volid': None, u'cgsnapshot_id': None, u'volume': {u'migration_status': None, u'provider_id': None, u'availability_zone': u'nova', u'terminated_at': None, u'updated_at': None, u'provider_geometry': None, u'replication_extended_status': None, u'replication_status': None, u'snapshot_id': None, u'ec2_id': None, u'deleted_at': None, u'id': u'e2ed712d-fd47-4c5b-ae97-75e5834120e4', u'size': 1, u'display_name': u'fingers-crossed', u'display_description': None, u'cluster_name': None, u'name_id': u'e2ed712d-fd47-4c5b-ae97-75e5834120e4', u'volume_admin_metadata': [], u'project_id': u'e92f59855c986', u'launched_at': None, u'scheduled_at': None, u'status': u'creating', u'volume_type_id': u'f839f331-4af0-4626-b377-e9baa9789c9d', u'multiattach': False, u'deleted': False, u'service_uuid': None, u'provider_location': None, u'volume_glance_metadata': [], u'admin_metadata': {}, u'host': None, u'glance_metadata': {}, u'consistencygroup_id': None, u'source_volid': None, u'provider_auth': None, u'previous_status': None, u'group_id': None, u'name': u'volume-e2ed712d-fd47-4c5b-ae97-75e5834120e4', u'user_id': u'e92f563fa9', u'bootable': False, u'shared_targets': True, u'attach_status': u'detached', u'volume_metadata': [], u'_name_id': None, u'encryption_key_id': None, u'replication_driver_data': None, u'metadata': {}, u'created_at': u'2018-11-08T10:34:44.000000'}, u'image_id': None, u'snapshot_id': None, u'consistencygroup_id': None, u'volume_type': {u'name': u'replicated', u'qos_specs_id': None, u'deleted': False, 
u'created_at': u'2018-11-08T10:33:34.000000', u'updated_at': None, u'extra_specs': {u'replication_enabled': u'<is> True', u'volume_backend_name': u'ceph'}, u'is_public': True, u'deleted_at': None, u'id': u'f839f331-4af0-4626-b377-e9baa9789c9d', u'projects': [], u'description': None}, u'volume_id': u'e2ed712d-fd47-4c5b-ae97-75e5834120e4', u'resource_properties': {u'status': u'creating', u'volume_type_id': u'f839f331-4af0-4626-b377-e9baa9789c9d', u'project_id': u'e92f59855c986', u'user_id': u'e92f563fa9', u'availability_zone': u'nova', u'reservations': [u'c2f86481-cc7f-47e2-91f5-6081cf4fc75d', u'e58874c0-7e52-4af2-9b92-1f14b27ceb1e', u'f30eee62-9f37-4d01-9459-5e61255dfef1', u'f2e6d81d-4728-42e3-87cc-df2687947cab'], u'multiattach': False, u'attach_status': u'detached', u'source_volid': None, u'cgsnapshot_id': None, u'metadata': {}, u'qos_specs': None, u'encryption_key_id': None, u'display_description': None, u'snapshot_id': None, u'display_name': u'fingers-crossed', u'group_id': None, u'consistencygroup_id': None, u'size': 1}, u'group_id': None}, u'user_id': u'e92f563fa9', u'availability_zone': u'nova', u'volume_type': VolumeType(created_at=2018-11-08T10:33:34Z,deleted=False,deleted_at=None,description=None,extra_specs={replication_enabled='<is> True',volume_backend_name='ceph'},id=f839f331-4af0-4626-b377-e9baa9789c9d,is_public=True,name='replicated',projects=[],qos_specs=<?>,qos_specs_id=None,updated_at=None), u'qos_specs': None, u'retry': {u'num_attempts': 3, u'backends': [u'server.loc#ceph#ceph', u'server.loc#ceph#ceph', u'server.loc#ceph#ceph'], u'hosts': [u'server.loc#ceph#ceph', u'server.loc#ceph#ceph', u'server.loc#ceph#ceph']}, u'metadata': {}, u'resource_type': VolumeType(created_at=2018-11-08T10:33:34Z,deleted=False,deleted_at=None,description=None,extra_specs={replication_enabled='<is> True',volume_backend_name='ceph'},id=f839f331-4af0-4626-b377-e9baa9789c9d,is_public=True,name='replicated',projects=[],qos_specs=<?>,qos_specs_id=None,updated_at=None), u'size': 1}, 'context': <cinder.context.RequestContext object at 0x7fc0fe180390>, 'request_spec': RequestSpec(CG_backend=<?>,backup_id=None,cgsnapshot_id=None,consistencygroup_id=None,group_backend=<?>,group_id=None,image_id=None,resource_backend=<?>,snapshot_id=None,source_replicaid=<?>,source_volid=None,volume=Volume(e2ed712d-fd47-4c5b-ae97-75e5834120e4),volume_id=e2ed712d-fd47-4c5b-ae97-75e5834120e4,volume_properties=VolumeProperties,volume_type=VolumeType(f839f331-4af0-4626-b377-e9baa9789c9d))}, 'provides': None}
|__Atom 'cinder.volume.flows.manager.create_volume.ExtractVolumeRefTask;volume:create' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'volume': Volume(_name_id=None,admin_metadata={},attach_status='detached',availability_zone='nova',bootable=False,cluster=<?>,cluster_name=None,consistencygroup=<?>,consistencygroup_id=None,created_at=2018-11-08T10:34:44Z,deleted=False,deleted_at=None,display_description=None,display_name='fingers-crossed',ec2_id=None,encryption_key_id=None,glance_metadata=<?>,group=<?>,group_id=None,host='server.loc#ceph#ceph',id=e2ed712d-fd47-4c5b-ae97-75e5834120e4,launched_at=None,metadata={},migration_status=None,multiattach=False,previous_status=None,project_id='e92f59855c986',provider_auth=None,provider_geometry=None,provider_id=None,provider_location=None,replication_driver_data=None,replication_extended_status=None,replication_status=None,scheduled_at=2018-11-08T10:34:46Z,service_uuid=None,shared_targets=True,size=1,snapshot_id=None,snapshots=<?>,source_volid=None,status='creating',terminated_at=None,updated_at=2018-11-08T10:34:46Z,user_id='e92f563fa9',volume_attachment=<?>,volume_type=VolumeType(f839f331-4af0-4626-b377-e9baa9789c9d),volume_type_id=f839f331-4af0-4626-b377-e9baa9789c9d), 'context': <cinder.context.RequestContext object at 0x7fc0fe180390>}, 'provides': Volume(_name_id=None,admin_metadata={},attach_status='detached',availability_zone='nova',bootable=False,cluster=<?>,cluster_name=None,consistencygroup=<?>,consistencygroup_id=None,created_at=2018-11-08T10:34:44Z,deleted=False,deleted_at=None,display_description=None,display_name='fingers-crossed',ec2_id=None,encryption_key_id=None,glance_metadata=<?>,group=<?>,group_id=None,host='server.loc#ceph#ceph',id=e2ed712d-fd47-4c5b-ae97-75e5834120e4,launched_at=None,metadata={},migration_status=None,multiattach=False,previous_status=None,project_id='e92f59855c986',provider_auth=None,provider_geometry=None,provider_id=None,provider_location=None,replication_driver_data=None,replication_extended_status=None,replication_status=None,scheduled_at=2018-11-08T10:34:46Z,service_uuid=None,shared_targets=True,size=1,snapshot_id=None,snapshots=<?>,source_volid=None,status='creating',terminated_at=None,updated_at=2018-11-08T10:34:46Z,user_id='e92f563fa9',volume_attachment=<?>,volume_type=VolumeType(f839f331-4af0-4626-b377-e9baa9789c9d),volume_type_id=f839f331-4af0-4626-b377-e9baa9789c9d)}
|__Flow 'volume_create_manager': ReplicationError: \u041e\u0448\u0438\u0431\u043a\u0430 \u0440\u0435\u043f\u043b\u0438\u043a\u0430\u0446\u0438\u0438 \u0442\u043e\u043c\u0430 e2ed712d-fd47-4c5b-ae97-75e5834120e4: Failed to enable image replication
2018-11-08 13:34:47.369 2348 ERROR cinder.volume.manager Traceback (most recent call last):
2018-11-08 13:34:47.369 2348 ERROR cinder.volume.manager File "/usr/lib/python2.7/site-packages/taskflow/engines/action_engine/executor.py", line 53, in _execute_task
2018-11-08 13:34:47.369 2348 ERROR cinder.volume.manager result = task.execute(**arguments)
2018-11-08 13:34:47.369 2348 ERROR cinder.volume.manager File "/usr/lib/python2.7/site-packages/cinder/volume/flows/manager/create_volume.py", line 1011, in execute
2018-11-08 13:34:47.369 2348 ERROR cinder.volume.manager model_update = self._create_raw_volume(volume, **volume_spec)
2018-11-08 13:34:47.369 2348 ERROR cinder.volume.manager File "/usr/lib/python2.7/site-packages/cinder/volume/flows/manager/create_volume.py", line 978, in _create_raw_volume
2018-11-08 13:34:47.369 2348 ERROR cinder.volume.manager ret = self.driver.create_volume(volume)
2018-11-08 13:34:47.369 2348 ERROR cinder.volume.manager File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/rbd.py", line 795, in create_volume
2018-11-08 13:34:47.369 2348 ERROR cinder.volume.manager volume_id=volume.id)
2018-11-08 13:34:47.369 2348 ERROR cinder.volume.manager ReplicationError: \u041e\u0448\u0438\u0431\u043a\u0430 \u0440\u0435\u043f\u043b\u0438\u043a\u0430\u0446\u0438\u0438 \u0442\u043e\u043c\u0430 e2ed712d-fd47-4c5b-ae97-75e5834120e4: Failed to enable image replication
2018-11-08 13:34:47.369 2348 ERROR cinder.volume.manager
2018-11-08 13:34:47.376 2348 WARNING cinder.volume.manager [req-7f374fa5-7d95-4dae-81c1-81b839beb083 e92f563fa9 e92f59855c986 - default default] Task 'cinder.volume.flows.manager.create_volume.CreateVolumeFromSpecTask;volume:create' (62ad1df9-9f38-4e4a-af1d-688f8fd793b1) transitioned into state 'REVERTED' from state 'REVERTING'
2018-11-08 13:34:47.379 2348 WARNING cinder.volume.manager [req-7f374fa5-7d95-4dae-81c1-81b839beb083 e92f563fa9 e92f59855c986 - default default] Task 'cinder.volume.flows.manager.create_volume.NotifyVolumeActionTask;volume:create, create.start' (df155d8f-7861-4859-b3e0-927388c786d9) transitioned into state 'REVERTED' from state 'REVERTING'
2018-11-08 13:34:47.382 2348 WARNING cinder.volume.manager [req-7f374fa5-7d95-4dae-81c1-81b839beb083 e92f563fa9 e92f59855c986 - default default] Task 'cinder.volume.flows.manager.create_volume.ExtractVolumeSpecTask;volume:create' (1d6823ac-a749-4af8-be3b-4bab16f402b4) transitioned into state 'REVERTED' from state 'REVERTING'
2018-11-08 13:34:47.485 2348 INFO cinder.volume.drivers.rbd [req-7f374fa5-7d95-4dae-81c1-81b839beb083 e92f563fa9 e92f59855c986 - default default] volume volume-e2ed712d-fd47-4c5b-ae97-75e5834120e4 no longer exists in backend
2018-11-08 13:34:47.491 2348 WARNING cinder.volume.manager [req-7f374fa5-7d95-4dae-81c1-81b839beb083 e92f563fa9 e92f59855c986 - default default] Task 'cinder.volume.flows.manager.create_volume.OnFailureRescheduleTask;volume:create' (dbf019d7-c936-4c03-a5da-32800d91f80e) transitioned into state 'REVERTED' from state 'REVERTING'
2018-11-08 13:34:47.494 2348 WARNING cinder.volume.manager [req-7f374fa5-7d95-4dae-81c1-81b839beb083 e92f563fa9 e92f59855c986 - default default] Task 'cinder.volume.flows.manager.create_volume.ExtractVolumeRefTask;volume:create' (c4b82318-880b-4d64-912e-3786e4b1784f) transitioned into state 'REVERTED' from state 'REVERTING'
2018-11-08 13:34:47.496 2348 WARNING cinder.volume.manager [req-7f374fa5-7d95-4dae-81c1-81b839beb083 e92f563fa9 e92f59855c986 - default default] Flow 'volume_create_manager' (756bc982-2823-4d78-a803-b680e419ce01) transitioned into state 'REVERTED' from state 'RUNNING'
2018-11-08 13:34:47.497 2348 ERROR oslo_messaging.rpc.server [req-7f374fa5-7d95-4dae-81c1-81b839beb083 e92f563fa9 e92f59855c986 - default default] Exception during message handling: ReplicationError: \u041e\u0448\u0438\u0431\u043a\u0430 \u0440\u0435\u043f\u043b\u0438\u043a\u0430\u0446\u0438\u0438 \u0442\u043e\u043c\u0430 e2ed712d-fd47-4c5b-ae97-75e5834120e4: Failed to enable image replication
2018-11-08 13:34:47.497 2348 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
2018-11-08 13:34:47.497 2348 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 163, in _process_incoming
2018-11-08 13:34:47.497 2348 ERROR oslo_messaging.rpc.server res = self.dispatcher.dispatch(message)
2018-11-08 13:34:47.497 2348 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 220, in dispatch
2018-11-08 13:34:47.497 2348 ERROR oslo_messaging.rpc.server return self._do_dispatch(endpoint, method, ctxt, args)
2018-11-08 13:34:47.497 2348 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 190, in _do_dispatch
2018-11-08 13:34:47.497 2348 ERROR oslo_messaging.rpc.server result = func(ctxt, **new_args)
2018-11-08 13:34:47.497 2348 ERROR oslo_messaging.rpc.server File "<string>", line 2, in create_volume
2018-11-08 13:34:47.497 2348 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/cinder/objects/cleanable.py", line 207, in wrapper
2018-11-08 13:34:47.497 2348 ERROR oslo_messaging.rpc.server result = f(*args, **kwargs)
2018-11-08 13:34:47.497 2348 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/cinder/volume/manager.py", line 681, in create_volume
2018-11-08 13:34:47.497 2348 ERROR oslo_messaging.rpc.server _run_flow()
2018-11-08 13:34:47.497 2348 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/cinder/volume/manager.py", line 673, in _run_flow
2018-11-08 13:34:47.497 2348 ERROR oslo_messaging.rpc.server flow_engine.run()
2018-11-08 13:34:47.497 2348 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/taskflow/engines/action_engine/engine.py", line 247, in run
2018-11-08 13:34:47.497 2348 ERROR oslo_messaging.rpc.server for _state in self.run_iter(timeout=timeout):
2018-11-08 13:34:47.497 2348 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/taskflow/engines/action_engine/engine.py", line 340, in run_iter
2018-11-08 13:34:47.497 2348 ERROR oslo_messaging.rpc.server failure.Failure.reraise_if_any(er_failures)
2018-11-08 13:34:47.497 2348 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/taskflow/types/failure.py", line 336, in reraise_if_any
2018-11-08 13:34:47.497 2348 ERROR oslo_messaging.rpc.server failures[0].reraise()
2018-11-08 13:34:47.497 2348 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/taskflow/types/failure.py", line 343, in reraise
2018-11-08 13:34:47.497 2348 ERROR oslo_messaging.rpc.server six.reraise(*self._exc_info)
2018-11-08 13:34:47.497 2348 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/taskflow/engines/action_engine/executor.py", line 53, in _execute_task
2018-11-08 13:34:47.497 2348 ERROR oslo_messaging.rpc.server result = task.execute(**arguments)
2018-11-08 13:34:47.497 2348 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/cinder/volume/flows/manager/create_volume.py", line 1011, in execute
2018-11-08 13:34:47.497 2348 ERROR oslo_messaging.rpc.server model_update = self._create_raw_volume(volume, **volume_spec)
2018-11-08 13:34:47.497 2348 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/cinder/volume/flows/manager/create_volume.py", line 978, in _create_raw_volume
2018-11-08 13:34:47.497 2348 ERROR oslo_messaging.rpc.server ret = self.driver.create_volume(volume)
2018-11-08 13:34:47.497 2348 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/rbd.py", line 795, in create_volume
2018-11-08 13:34:47.497 2348 ERROR oslo_messaging.rpc.server volume_id=volume.id)
2018-11-08 13:34:47.497 2348 ERROR oslo_messaging.rpc.server ReplicationError: \u041e\u0448\u04 e2ed712d-fd47-4c5b-ae97-75e5834120e4: Failed to enable image replication
2018-11-08 13:34:47.497 2348 ERROR oslo_messaging.rpc.server
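If I read the RBD driver correctly, "Failed to enable image replication" comes from the step where it enables the journaling (and exclusive-lock) image features and then turns on per-image mirroring for the new image. One way to isolate whether this is a Ceph-side problem (a hypothetical debugging sketch of mine, not taken from the post; repl-test is a throwaway image name) is to repeat those steps by hand with the same cephx user:
# hypothetical debugging sketch; repl-test is a throwaway image
$ rbd --cluster primary --id cinder create volumes/repl-test --size 1024
$ rbd --cluster primary --id cinder feature enable volumes/repl-test journaling
$ rbd --cluster primary --id cinder mirror image enable volumes/repl-test
$ rbd --cluster primary --id cinder mirror image disable volumes/repl-test
$ rbd --cluster primary --id cinder rm volumes/repl-test
If "mirror image enable" fails here, the usual suspects are the pool mirror mode (explicit per-image enabling requires "image" mode rather than "pool" mode) or the journaling/exclusive-lock features not being available on the image.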
Scheduler log
$ cat scheduler.log
2018-11-08 13:34:45.991 889 ERROR cinder.scheduler.filter_scheduler [req-7f374fa5-7d95-4dae-81c1-81b839beb083 e92f563fa9 e92f59855c986 - default default] Error scheduling None from last vol-service: server.loc#ceph#ceph : [u'Traceback (most recent call last):\n', u' File "/usr/lib/python2.7/site-packages/taskflow/engines/action_engine/executor.py", line 53, in _execute_task\n result = task.execute(**arguments)\n', u' File "/usr/lib/python2.7/site-packages/cinder/volume/flows/manager/create_volume.py", line 1011, in execute\n model_update = self._create_raw_volume(volume, **volume_spec)\n', u' File "/usr/lib/python2.7/site-packages/cinder/volume/flows/manager/create_volume.py", line 978, in _create_raw_volume\n ret = self.driver.create_volume(volume)\n', u' File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/rbd.py", line 795, in create_volume\n volume_id=volume.id)\n', u'ReplicationError: \\u043b\\u0438\\u043a\\u0430\\u0446\\u0438\\u0438 \\u0442\\u043e\\u043c\\u0430 e2ed712d-fd47-4c5b-ae97-75e5834120e4: Failed to enable image replication\n']
2018-11-08 13:34:46.006 889 WARNING cinder.scheduler.host_manager [req-7f374fa5-7d95-4dae-81c1-81b839beb083 e92f563fa9 e92f59855c986 - default default] volume service is down. (host: server.loc#lvm)
2018-11-08 13:34:46.006 889 WARNING cinder.scheduler.host_manager [req-7f374fa5-7d95-4dae-81c1-81b839beb083 e92f563fa9 e92f59855c986 - default default] volume service is down. (host: server.loc#lvm2)
2018-11-08 13:34:46.007 889 WARNING cinder.scheduler.host_manager [req-7f374fa5-7d95-4dae-81c1-81b839beb083 e92f563fa9 e92f59855c986 - default default] volume service is down. (host: server.loc#ceph_2)
2018-11-08 13:34:46.680 889 ERROR cinder.scheduler.filter_scheduler [req-7f374fa5-7d95-4dae-81c1-81b839beb083 e92f563fa9 e92f59855c986 - default default] Error scheduling None from last vol-service: server.loc#ceph#ceph : [u'Traceback (most recent call last):\n', u' File "/usr/lib/python2.7/site-packages/taskflow/engines/action_engine/executor.py", line 53, in _execute_task\n result = task.execute(**arguments)\n', u' File "/usr/lib/python2.7/site-packages/cinder/volume/flows/manager/create_volume.py", line 1011, in execute\n model_update = self._create_raw_volume(volume, **volume_spec)\n', u' File "/usr/lib/python2.7/site-packages/cinder/volume/flows/manager/create_volume.py", line 978, in _create_raw_volume\n ret = self.driver.create_volume(volume)\n', u' File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/rbd.py", line 795, in create_volume\n volume_id=volume.id)\n', u'ReplicationError: \\u041e\\u0448\\u0438\\u0431\\u043a\\u0430 \\u0440\\u0435\\u043f\\u043b\\u0438\\u043a\\u0430\\u0446\\u0438\\u0438 \\u0442\\u043e\\u043c\\u0430 e2ed712d-fd47-4c5b-ae97-75e5834120e4: Failed to enable image replication\n']
2018-11-08 13:34:46.700 889 WARNING cinder.scheduler.host_manager [req-7f374fa5-7d95-4dae-81c1-81b839beb083 e92f563fa9 e92f59855c986 - default default] volume service is down. (host: server.loc#lvm)
2018-11-08 13:34:46.701 889 WARNING cinder.scheduler.host_manager [req-7f374fa5-7d95-4dae-81c1-81b839beb083 e92f563fa9 e92f59855c986 - default default] volume service is down. (host: server.loc#lvm2)
2018-11-08 13:34:46.701 889 WARNING cinder.scheduler.host_manager [req-7f374fa5-7d95-4dae-81c1-81b839beb083 e92f563fa9 e92f59855c986 - default default] volume service is down. (host: server.loc#ceph_2)
2018-11-08 13:34:47.490 889 ERROR cinder.scheduler.filter_scheduler [req-7f374fa5-7d95-4dae-81c1-81b839beb083 e92f563fa9 e92f59855c986 - default default] Error scheduling None from last vol-service: server.loc#ceph#ceph : [u'Traceback (most recent call last):\n', u' File "/usr/lib/python2.7/site-packages/taskflow/engines/action_engine/executor.py", line 53, in _execute_task\n result = task.execute(**arguments)\n', u' File "/usr/lib/python2.7/site-packages/cinder/volume/flows/manager/create_volume.py", line 1011, in execute\n model_update = self._create_raw_volume(volume, **volume_spec)\n', u' File "/usr/lib/python2.7/site-packages/cinder/volume/flows/manager/create_volume.py", line 978, in _create_raw_volume\n ret = self.driver.create_volume(volume)\n', u' File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/rbd.py", line 795, in create_volume\n volume_id=volume.id)\n', u'ReplicationError: \\u041e\\u0448\\u0438\\u0431\\u043a\\u0430 \\u0440\\u0435\\u043f\\u043b\\u0438\\u043a\\u043 e2ed712d-fd47-4c5b-ae97-75e5834120e4: Failed to enable image replication\n']
2018-11-08 13:34:47.491 889 INFO cinder.message.api [req-7f374fa5-7d95-4dae-81c1-81b839beb083 e92f563fa9 e92f59855c986 - default default] Creating message record for request_id = req-7f374fa5-7d95-4dae-81c1-81b839beb083
2018-11-08 13:34:47.545 889 ERROR cinder.scheduler.flows.create_volume [req-7f374fa5-7d95-4dae-81c1-81b839beb083 e92f563fa9 e92f59855c986 - default default] Failed to run task cinder.scheduler.flows.create_volume.ScheduleCreateVolumeTask;volume:create: No valid backend was found. Exceeded maximum number of scheduling attempts 3 for volume None: NoValidBackend: No valid backend was found. \u041f\u0440e \u043c\u043d\u043e\u0435

Related

Can't create a database with Odoo: Database creation error: OperationalError()

I'm trying to create my first database here, but I get this error: (error image)
I also checked the log and this is what I found:
2022-08-14 23:10:15,799 6576 INFO ? odoo.sql_db: Connection to the database failed
2022-08-14 23:10:15,839 6576 INFO None odoo.service.db: Create database `odoo_db`.
2022-08-14 23:10:15,874 6576 INFO None odoo.sql_db: Connection to the database failed
2022-08-14 23:10:15,875 6576 ERROR None odoo.http:
Traceback (most recent call last):
File "C:\Program Files\Odoo\server\odoo\http.py", line 141, in dispatch_rpc
result = dispatch(method, params)
File "C:\Program Files\Odoo\server\odoo\service\db.py", line 462, in dispatch
return g[exp_method_name](*params)
File "<decorator-gen-14>", line 2, in exp_create_database
File "C:\Program Files\Odoo\server\odoo\service\db.py", line 41, in if_db_mgt_enabled
return method(self, *args, **kwargs)
File "C:\Program Files\Odoo\server\odoo\service\db.py", line 130, in exp_create_database
_create_empty_database(db_name)
File "C:\Program Files\Odoo\server\odoo\service\db.py", line 99, in _create_empty_database
with closing(db.cursor()) as cr:
File "C:\Program Files\Odoo\server\odoo\sql_db.py", line 709, in cursor
return Cursor(self.__pool, self.dbname, self.dsn, serialized=serialized)
File "C:\Program Files\Odoo\server\odoo\sql_db.py", line 259, in __init__
self._cnx = pool.borrow(dsn)
File "C:\Program Files\Odoo\server\odoo\sql_db.py", line 592, in _locked
return fun(self, *args, **kwargs)
File "C:\Program Files\Odoo\server\odoo\sql_db.py", line 660, in borrow
**connection_info)
File "C:\Program Files\Odoo\python\lib\site-packages\psycopg2\__init__.py", line 127, in connect
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
psycopg2.OperationalError
2022-08-14 23:10:15,939 6576 INFO None odoo.sql_db: Connection to the database failed
2022-08-14 23:10:15,964 6576 INFO None werkzeug: 127.0.0.1 - - [14/Aug/2022 23:10:15] "POST /web/database/create HTTP/1.1" 200 - 0 0.000 5.881
2022-08-14 23:10:16,162 6576 INFO ? odoo.sql_db: Connection to the database failed
2022-08-14 23:10:16,165 6576 INFO ? werkzeug: 127.0.0.1 - - [14/Aug/2022 23:10:16] "GET /web/database/Roboto-Regular.ttf HTTP/1.1" 404 - 0 0.000 0.047
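Since the traceback ends in a bare psycopg2.OperationalError, the connection or authentication to PostgreSQL itself is failing before Odoo can create anything. A quick check (my suggestion, not from the post; the odoo/odoo credentials below are placeholders for whatever is set in odoo.conf) is to connect with psql using the same settings:
# placeholder credentials; use the db_user/db_password from odoo.conf
psql -h localhost -p 5432 -U odoo -d postgres -c "SELECT version();"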

Error in CelebA dataset download by tensorflow_datasets

I am trying to download CelebA with tensorflow_datasets (version: 4.5.2) and getting an API error. How can I fix it?
I have updated tensorflow_datasets, but the issue is still not fixed.
My code is:
import tensorflow_datasets as tfds
dataset_builder = tfds.builder('celeb_a')
dataset_builder.download_and_prepare()
I am getting the following error:
Downloading and preparing dataset 1.38 GiB (download: 1.38 GiB, generated: 1.62 GiB, total: 3.00 GiB) to /root/tensorflow_datasets/celeb_a/2.0.1...
Dl Size...: 0 MiB [00:00, ? MiB/s] | 0/4 [00:00<?, ? url/s]
Dl Completed...: 0%| | 0/4 [00:00<?, ? url/s]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/miniconda/lib/python3.7/site-packages/tensorflow_datasets/core/dataset_builder.py", line 464, in download_and_prepare
download_config=download_config,
File "/miniconda/lib/python3.7/site-packages/tensorflow_datasets/core/dataset_builder.py", line 1158, in _download_and_prepare
dl_manager, **optional_pipeline_kwargs)
File "/miniconda/lib/python3.7/site-packages/tensorflow_datasets/image/celeba.py", line 129, in _split_generators
"landmarks_celeba": LANDMARKS_DATA,
File "/miniconda/lib/python3.7/site-packages/tensorflow_datasets/core/download/download_manager.py", line 549, in download
return _map_promise(self._download, url_or_urls)
File "/miniconda/lib/python3.7/site-packages/tensorflow_datasets/core/download/download_manager.py", line 767, in _map_promise
res = tf.nest.map_structure(lambda p: p.get(), all_promises) # Wait promises
File "/miniconda/lib/python3.7/site-packages/tensorflow/python/util/nest.py", line 867, in map_structure
structure[0], [func(*x) for x in entries],
File "/miniconda/lib/python3.7/site-packages/tensorflow/python/util/nest.py", line 867, in <listcomp>
structure[0], [func(*x) for x in entries],
File "/miniconda/lib/python3.7/site-packages/tensorflow_datasets/core/download/download_manager.py", line 767, in <lambda>
res = tf.nest.map_structure(lambda p: p.get(), all_promises) # Wait promises
File "/miniconda/lib/python3.7/site-packages/promise/promise.py", line 512, in get
return self._target_settled_value(_raise=True)
File "/miniconda/lib/python3.7/site-packages/promise/promise.py", line 516, in _target_settled_value
return self._target()._settled_value(_raise)
File "/miniconda/lib/python3.7/site-packages/promise/promise.py", line 226, in _settled_value
reraise(type(raise_val), raise_val, self._traceback)
File "/miniconda/lib/python3.7/site-packages/six.py", line 703, in reraise
raise value
File "/miniconda/lib/python3.7/site-packages/promise/promise.py", line 844, in handle_future_result
resolve(future.result())
File "/miniconda/lib/python3.7/concurrent/futures/_base.py", line 428, in result
return self.__get_result()
File "/miniconda/lib/python3.7/concurrent/futures/_base.py", line 384, in __get_result
raise self._exception
File "/miniconda/lib/python3.7/concurrent/futures/thread.py", line 57, in run
result = self.fn(*self.args, **self.kwargs)
File "/miniconda/lib/python3.7/site-packages/tensorflow_datasets/core/download/downloader.py", line 216, in _sync_download
with _open_url(url, verify=verify) as (response, iter_content):
File "/miniconda/lib/python3.7/contextlib.py", line 112, in __enter__
return next(self.gen)
File "/miniconda/lib/python3.7/site-packages/tensorflow_datasets/core/download/downloader.py", line 276, in _open_with_requests
url = _get_drive_url(url, session)
File "/miniconda/lib/python3.7/site-packages/tensorflow_datasets/core/download/downloader.py", line 298, in _get_drive_url
_assert_status(response)
File "/miniconda/lib/python3.7/site-packages/tensorflow_datasets/core/download/downloader.py", line 310, in _assert_status
response.url, response.status_code))
tensorflow_datasets.core.download.downloader.DownloadError: Failed to get url https://drive.google.com/uc?export=download&id=0B7EVK8r0v71pZjFTYXZWM3FlRnM. HTTP code: 404.
It seems the link is broken, hence this error is shown while fetching the celeb_a TensorFlow dataset. However, you can download this dataset manually using this link until we fix that error.

Unable to convert VGG-16 to IR

I have a truncated version of VGG-16 in .pb format. I am unable to convert it to IR using the OpenVINO Model Optimizer; I get the following error:
[ ANALYSIS INFO ] It looks like there is IteratorGetNext as input
Run the Model Optimizer with:
--input "IteratorGetNext:0[-1 224 224 3]"
And replace all negative values with positive values
[ ERROR ] Exception occurred during running replacer "REPLACEMENT_ID" (): Graph contains 0 node after executing . It considered as error because resulting IR will be empty which is not usual
python3 /opt/intel/openvino_2020.3.194/deployment_tools/model_optimizer/mo_tf.py --input_model model.pb
With *.meta
python3 /opt/intel/openvino_2020.3.194/deployment_tools/model_optimizer/mo_tf.py --input_meta_graph model.meta --log_level DEBUG
[ 2020-06-11 10:59:34,182 ] [ DEBUG ] [ main:213 ] Placeholder shapes : None
'extensions.back.ScalarConstNormalize.RangeInputNormalize'>
| 310 | True | <class 'extensions.back.AvgPool.AvgPool'>
| 311 | True | <class 'extensions.back.ReverseInputChannels.ApplyReverseChannels'>
| 312 | True | <class 'extensions.back.split_normalizer.SplitNormalizer'>
| 313 | True | <class 'extensions.back.ParameterToPlaceholder.ParameterToInput'>
| 314 | True | <class 'extensions.back.GroupedConvWeightsNormalize.GroupedConvWeightsNormalize'>
| 315 | True | <class 'extensions.back.ConvolutionNormalizer.DeconvolutionNormalizer'>
| 316 | True | <class 'extensions.back.StridedSliceMasksNormalizer.StridedSliceMasksNormalizer'>
| 317 | True | <class 'extensions.back.ConvolutionNormalizer.ConvolutionWithGroupsResolver'>
| 318 | True | <class 'extensions.back.ReshapeMutation.ReshapeMutation'>
| 319 | True | <class 'extensions.back.ForceStrictPrecision.ForceStrictPrecision'>
| 320 | True | <class 'extensions.back.I64ToI32.I64ToI32'>
| 321 | True | <class 'extensions.back.ReshapeMutation.DisableReshapeMutationInTensorIterator'>
| 322 | True | <class 'extensions.back.ActivationsNormalizer.ActivationsNormalizer'>
| 323 | True | <class 'extensions.back.pass_separator.BackFinish'>
| 324 | False | <class 'extensions.back.SpecialNodesFinalization.RemoveConstOps'>
| 325 | False | <class 'extensions.back.SpecialNodesFinalization.CreateConstNodesReplacement'>
| 326 | True | <class 'extensions.back.kaldi_remove_memory_output.KaldiRemoveMemoryOutputBackReplacementPattern'>
| 327 | False | <class 'extensions.back.SpecialNodesFinalization.RemoveOutputOps'>
| 328 | True | <class 'extensions.back.blob_normalizer.BlobNormalizer'>
| 329 | False | <class 'extensions.middle.MulFakeQuantizeFuse.MulFakeQuantizeFuse'>
| 330 | False | <class 'extensions.middle.AddFakeQuantizeFuse.AddFakeQuantizeFuse'>
[ 2020-06-11 10:59:34,900 ] [ DEBUG ] [ class_registration:282 ] Run replacer <class 'extensions.load.tf.loader.TFLoader'>
[ INFO ] Restoring parameters from %s
[ WARNING ] From %s: %s (from %s) is deprecated and will be removed %s.
Instructions for updating:
%s
[ WARNING ] From %s: %s (from %s) is deprecated and will be removed %s.
Instructions for updating:
%s
[ FRAMEWORK ERROR ] Cannot load input model: Attempting to use uninitialized value metrics/accuracy/total
[[{{node _retval_metrics/accuracy/total_0_54}}]]
[ 2020-06-11 10:59:35,760 ] [ DEBUG ] [ main:328 ] Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 1365, in _do_call
return fn(*args)
File "/usr/local/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 1350, in _run_fn
target_list, run_metadata)
File "/usr/local/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 1443, in _call_tf_sessionrun
run_metadata)
tensorflow.python.framework.errors_impl.FailedPreconditionError: Attempting to use uninitialized value metrics/accuracy/total
[[{{node _retval_metrics/accuracy/total_0_54}}]]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/intel/openvino_2020.3.194/deployment_tools/model_optimizer/mo/front/tf/loader.py", line 220, in load_tf_graph_def
outputs)
File "/usr/local/lib/python3.7/site-packages/tensorflow_core/python/util/deprecation.py", line 324, in new_func
return func(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/tensorflow_core/python/framework/graph_util_impl.py", line 330, in convert_variables_to_constants
returned_variables = sess.run(variable_names)
File "/usr/local/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 956, in run
run_metadata_ptr)
File "/usr/local/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 1180, in _run
feed_dict_tensor, options, run_metadata)
File "/usr/local/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 1359, in _do_run
run_metadata)
File "/usr/local/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 1384, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.FailedPreconditionError: Attempting to use uninitialized value metrics/accuracy/total
[[{{node _retval_metrics/accuracy/total_0_54}}]]
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/intel/openvino_2020.3.194/deployment_tools/model_optimizer/mo/utils/class_registration.py", line 288, in apply_transform
for_graph_and_each_sub_graph_recursively(graph, replacer.find_and_replace_pattern)
File "/opt/intel/openvino_2020.3.194/deployment_tools/model_optimizer/mo/middle/pattern_match.py", line 58, in for_graph_and_each_sub_graph_recursively
func(graph)
File "/opt/intel/openvino_2020.3.194/deployment_tools/model_optimizer/extensions/load/loader.py", line 27, in find_and_replace_pattern
self.load(graph)
File "/opt/intel/openvino_2020.3.194/deployment_tools/model_optimizer/extensions/load/tf/loader.py", line 58, in load
saved_model_tags=argv.saved_model_tags)
File "/opt/intel/openvino_2020.3.194/deployment_tools/model_optimizer/mo/front/tf/loader.py", line 231, in load_tf_graph_def
raise FrameworkError('Cannot load input model: {}', e) from e
mo.utils.error.FrameworkError: Cannot load input model: Attempting to use uninitialized value metrics/accuracy/total
[[{{node _retval_metrics/accuracy/total_0_54}}]]
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/intel/openvino_2020.3.194/deployment_tools/model_optimizer/mo/main.py", line 312, in main
ret_code = driver(argv)
File "/opt/intel/openvino_2020.3.194/deployment_tools/model_optimizer/mo/main.py", line 273, in driver
ret_res = emit_ir(prepare_ir(argv), argv)
File "/opt/intel/openvino_2020.3.194/deployment_tools/model_optimizer/mo/main.py", line 238, in prepare_ir
graph = unified_pipeline(argv)
File "/opt/intel/openvino_2020.3.194/deployment_tools/model_optimizer/mo/pipeline/unified.py", line 29, in unified_pipeline
class_registration.ClassType.BACK_REPLACER
File "/opt/intel/openvino_2020.3.194/deployment_tools/model_optimizer/mo/utils/class_registration.py", line 334, in apply_replacements
apply_replacements_list(graph, replacers_order)
File "/opt/intel/openvino_2020.3.194/deployment_tools/model_optimizer/mo/utils/class_registration.py", line 324, in apply_replacements_list
num_transforms=len(replacers_order))
File "/opt/intel/openvino_2020.3.194/deployment_tools/model_optimizer/mo/utils/logger.py", line 124, in wrapper
function(*args, **kwargs)
File "/opt/intel/openvino_2020.3.194/deployment_tools/model_optimizer/mo/utils/class_registration.py", line 306, in apply_transform
raise FrameworkError('{}'.format(str(err))) from err
mo.utils.error.FrameworkError: Cannot load input model: Attempting to use uninitialized value metrics/accuracy/total
[[{{node _retval_metrics/accuracy/total_0_54}}]]
The problem is that models trained in TensorFlow can have some shapes undefined. In your case, it looks like the batch dimension of the input is not defined. To fix it, add an additional argument to the command line: -b 1. That option sets the batch size to 1 and should fix this particular issue.
After that you may encounter other issues, so I would leave the following link: Converting a TensorFlow Model.
It has some tips on how to convert a TensorFlow model to IR.
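For completeness, here is a sketch of what the resulting Model Optimizer invocations could look like under the two suggestions above (the script path and the IteratorGetNext input name are taken from the question's own output; this is not a verified command line for this exact model):
# Option A: follow the analysis hint and replace the negative batch value
python3 /opt/intel/openvino_2020.3.194/deployment_tools/model_optimizer/mo_tf.py --input_model model.pb --input "IteratorGetNext:0[1 224 224 3]"
# Option B: keep the original command and only set the batch explicitly
python3 /opt/intel/openvino_2020.3.194/deployment_tools/model_optimizer/mo_tf.py --input_model model.pb -b 1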

ODOO Installation on mac osx

I have tried different ways to install Odoo 10 on my Mac OS X (10.11.6) and always receive the same error.
I did the installation using a virtual env.
The installation finished fine, but I haven't been able to run it.
It seems to be an issue with the DB connection, but where and how?
(odoo-env)Nelsons-MacBook-Air:odoo nelsondiaz$ ./odoo-bin
2017-08-29 14:44:22,963 18435 INFO ? odoo: Odoo version 10.0
2017-08-29 14:44:22,963 18435 INFO ? odoo: addons paths: ['/Users/nelsondiaz/Library/Application Support/Odoo/addons/10.0', u'/Users/nelsondiaz/Sites/odoo-env/odoo/odoo/addons', u'/Users/nelsondiaz/Sites/odoo-env/odoo/addons']
2017-08-29 14:44:22,963 18435 INFO ? odoo: database: default#default:default
2017-08-29 14:44:22,978 18435 INFO ? odoo.service.server: HTTP service (werkzeug) running on 0.0.0.0:8069
This is the error after opening http://127.0.0.1:8069/ or http://localhost:8069/.
I tried updating pg_hba.conf and odoo.conf; nothing works, I get the same error:
2017-08-29 14:50:25,948 18435 INFO ? odoo.sql_db: Connection to the database failed
2017-08-29 14:50:25,952 18435 INFO ? werkzeug: 127.0.0.1 - - [29/Aug/2017 14:50:25] "GET / HTTP/1.1" 500 -
2017-08-29 14:50:25,965 18435 ERROR ? werkzeug: Error on request:
Traceback (most recent call last):
File "/Library/Python/2.7/site-packages/werkzeug/serving.py", line 193, in run_wsgi
execute(self.server.app)
File "/Library/Python/2.7/site-packages/werkzeug/serving.py", line 181, in execute
application_iter = app(environ, start_response)
File "/Users/nelsondiaz/Sites/odoo-env/odoo/odoo/service/server.py", line 249, in app
return self.app(e, s)
File "/Users/nelsondiaz/Sites/odoo-env/odoo/odoo/service/wsgi_server.py", line 186, in application
return application_unproxied(environ, start_response)
File "/Users/nelsondiaz/Sites/odoo-env/odoo/odoo/service/wsgi_server.py", line 172, in application_unproxied
result = handler(environ, start_response)
File "/Users/nelsondiaz/Sites/odoo-env/odoo/odoo/http.py", line 1308, in __call__
return self.dispatch(environ, start_response)
File "/Users/nelsondiaz/Sites/odoo-env/odoo/odoo/http.py", line 1282, in __call__
return self.app(environ, start_wrapped)
File "/Library/Python/2.7/site-packages/werkzeug/wsgi.py", line 599, in __call__
return self.app(environ, start_response)
File "/Users/nelsondiaz/Sites/odoo-env/odoo/odoo/http.py", line 1446, in dispatch
self.setup_db(httprequest)
File "/Users/nelsondiaz/Sites/odoo-env/odoo/odoo/http.py", line 1368, in setup_db
httprequest.session.db = db_monodb(httprequest)
File "/Users/nelsondiaz/Sites/odoo-env/odoo/odoo/http.py", line 1524, in db_monodb
dbs = db_list(True, httprequest)
File "/Users/nelsondiaz/Sites/odoo-env/odoo/odoo/http.py", line 1498, in db_list
dbs = odoo.service.db.list_dbs(force)
File "/Users/nelsondiaz/Sites/odoo-env/odoo/odoo/service/db.py", line 325, in list_dbs
with closing(db.cursor()) as cr:
File "/Users/nelsondiaz/Sites/odoo-env/odoo/odoo/sql_db.py", line 635, in cursor
return Cursor(self.__pool, self.dbname, self.dsn, serialized=serialized)
File "/Users/nelsondiaz/Sites/odoo-env/odoo/odoo/sql_db.py", line 177, in __init__
self._cnx = pool.borrow(dsn)
File "/Users/nelsondiaz/Sites/odoo-env/odoo/odoo/sql_db.py", line 518, in _locked
return fun(self, *args, **kwargs)
File "/Users/nelsondiaz/Sites/odoo-env/odoo/odoo/sql_db.py", line 586, in borrow
**connection_info)
File "/Library/Python/2.7/site-packages/psycopg2/__init__.py", line 130, in connect
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
OperationalError: fe_sendauth: no password supplied
2017-08-29 14:50:26,238 18435 INFO ? odoo.sql_db: Connection to the database failed
2017-08-29 14:50:26,242 18435 INFO ? werkzeug: 127.0.0.1 - - [29/Aug/2017 14:50:26] "GET /favicon.ico HTTP/1.1" 500 -
2017-08-29 14:50:26,249 18435 ERROR ? werkzeug: Error on request:
Traceback (most recent call last):
File "/Library/Python/2.7/site-packages/werkzeug/serving.py", line 193, in run_wsgi
execute(self.server.app)
File "/Library/Python/2.7/site-packages/werkzeug/serving.py", line 181, in execute
application_iter = app(environ, start_response)
File "/Users/nelsondiaz/Sites/odoo-env/odoo/odoo/service/server.py", line 249, in app
return self.app(e, s)
File "/Users/nelsondiaz/Sites/odoo-env/odoo/odoo/service/wsgi_server.py", line 186, in application
return application_unproxied(environ, start_response)
File "/Users/nelsondiaz/Sites/odoo-env/odoo/odoo/service/wsgi_server.py", line 172, in application_unproxied
result = handler(environ, start_response)
File "/Users/nelsondiaz/Sites/odoo-env/odoo/odoo/http.py", line 1308, in __call__
return self.dispatch(environ, start_response)
File "/Users/nelsondiaz/Sites/odoo-env/odoo/odoo/http.py", line 1282, in __call__
return self.app(environ, start_wrapped)
File "/Library/Python/2.7/site-packages/werkzeug/wsgi.py", line 599, in __call__
return self.app(environ, start_response)
File "/Users/nelsondiaz/Sites/odoo-env/odoo/odoo/http.py", line 1446, in dispatch
self.setup_db(httprequest)
File "/Users/nelsondiaz/Sites/odoo-env/odoo/odoo/http.py", line 1368, in setup_db
httprequest.session.db = db_monodb(httprequest)
File "/Users/nelsondiaz/Sites/odoo-env/odoo/odoo/http.py", line 1524, in db_monodb
dbs = db_list(True, httprequest)
File "/Users/nelsondiaz/Sites/odoo-env/odoo/odoo/http.py", line 1498, in db_list
dbs = odoo.service.db.list_dbs(force)
File "/Users/nelsondiaz/Sites/odoo-env/odoo/odoo/service/db.py", line 325, in list_dbs
with closing(db.cursor()) as cr:
File "/Users/nelsondiaz/Sites/odoo-env/odoo/odoo/sql_db.py", line 635, in cursor
return Cursor(self.__pool, self.dbname, self.dsn, serialized=serialized)
File "/Users/nelsondiaz/Sites/odoo-env/odoo/odoo/sql_db.py", line 177, in __init__
self._cnx = pool.borrow(dsn)
File "/Users/nelsondiaz/Sites/odoo-env/odoo/odoo/sql_db.py", line 518, in _locked
return fun(self, *args, **kwargs)
File "/Users/nelsondiaz/Sites/odoo-env/odoo/odoo/sql_db.py", line 586, in borrow
**connection_info)
File "/Library/Python/2.7/site-packages/psycopg2/__init__.py", line 130, in connect
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
OperationalError: fe_sendauth: no password supplied
I couldn't run the server without errors using only the command ./odoo-bin.
For some reason I don't know, it does not find the conf file.
What I did was pass the conf file as a parameter, and it works:
Option 1:
./odoo-bin --addons-path=/Users/myuser/Sites/odoo-env/odoo/addons -c debian/odoo.conf
Option 2:
./odoo-bin -d odoo --db_user odoo --db_password odoo
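One follow-up worth noting (my assumption, not part of the original answer): Option 2 only authenticates if a matching PostgreSQL role exists, so it may be necessary to create it once:
# creates the role used by --db_user/--db_password in Option 2
psql -d postgres -c "CREATE ROLE odoo LOGIN PASSWORD 'odoo' CREATEDB;"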

Numpy error in printing a RDD in Spark with Ipython

I am trying to print an RDD using Spark in IPython, and when I do that I get this error:
---------------------------------------------------------------------------
Py4JJavaError Traceback (most recent call last)
<ipython-input-4-77015cd18335> in <module>()
---> 24 print inputData.collect()
25
26
/home/vagrant/spark-1.5.0-bin-hadoop2.6/python/pyspark/rdd.pyc in collect(self)
771 """
772 with SCCallSiteSync(self.context) as css:
--> 773 port = self.ctx._jvm.PythonRDD.collectAndServe(self._jrdd.rdd())
774 return list(_load_from_socket(port, self._jrdd_deserializer))
775
/home/vagrant/spark-1.5.0-bin-hadoop2.6/python/lib/py4j-0.8.2.1-src.zip/py4j/java_gateway.py in __call__(self, *args)
536 answer = self.gateway_client.send_command(command)
537 return_value = get_return_value(answer, self.gateway_client,
--> 538 self.target_id, self.name)
539
540 for temp_arg in temp_args:
/home/vagrant/spark-1.5.0-bin-hadoop2.6/python/pyspark/sql/utils.pyc in deco(*a, **kw)
34 def deco(*a, **kw):
35 try:
---> 36 return f(*a, **kw)
37 except py4j.protocol.Py4JJavaError as e:
38 s = e.java_exception.toString()
/home/vagrant/spark-1.5.0-bin-hadoop2.6/python/lib/py4j-0.8.2.1-src.zip/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
298 raise Py4JJavaError(
299 'An error occurred while calling {0}{1}{2}.\n'.
--> 300 format(target_id, '.', name), value)
301 else:
302 raise Py4JError(
Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 7.0 failed 1 times, most recent failure: Lost task 0.0 in stage 7.0 (TID 56, localhost): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
File "/home/vagrant/spark-1.5.0-bin-hadoop2.6/python/lib/pyspark.zip/pyspark/worker.py", line 98, in main
command = pickleSer._read_with_length(infile)
File "/home/vagrant/spark-1.5.0-bin-hadoop2.6/python/lib/pyspark.zip/pyspark/serializers.py", line 164, in _read_with_length
return self.loads(obj)
File "/home/vagrant/spark-1.5.0-bin-hadoop2.6/python/lib/pyspark.zip/pyspark/serializers.py", line 421, in loads
return pickle.loads(obj)
File "/home/vagrant/spark-1.5.0-bin-hadoop2.6/python/lib/pyspark.zip/pyspark/mllib/__init__.py", line 25, in <module>
ImportError: No module named numpy
at org.apache.spark.api.python.PythonRDD$$anon$1.read(PythonRDD.scala:138)
at org.apache.spark.api.python.PythonRDD$$anon$1.<init>(PythonRDD.scala:179)
at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:97)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:297)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:88)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1280)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1268)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1267)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1267)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:697)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1493)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1455)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1444)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:567)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1813)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1826)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1839)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1910)
at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:905)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:108)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:306)
at org.apache.spark.rdd.RDD.collect(RDD.scala:904)
at org.apache.spark.api.python.PythonRDD$.collectAndServe(PythonRDD.scala:373)
at org.apache.spark.api.python.PythonRDD.collectAndServe(PythonRDD.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:379)
at py4j.Gateway.invoke(Gateway.java:259)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:207)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.spark.api.python.PythonException: Traceback (most recent call last):
File "/home/vagrant/spark-1.5.0-bin-hadoop2.6/python/lib/pyspark.zip/pyspark/worker.py", line 98, in main
command = pickleSer._read_with_length(infile)
File "/home/vagrant/spark-1.5.0-bin-hadoop2.6/python/lib/pyspark.zip/pyspark/serializers.py", line 164, in _read_with_length
return self.loads(obj)
File "/home/vagrant/spark-1.5.0-bin-hadoop2.6/python/lib/pyspark.zip/pyspark/serializers.py", line 421, in loads
return pickle.loads(obj)
File "/home/vagrant/spark-1.5.0-bin-hadoop2.6/python/lib/pyspark.zip/pyspark/mllib/__init__.py", line 25, in <module>
ImportError: No module named numpy
at org.apache.spark.api.python.PythonRDD$$anon$1.read(PythonRDD.scala:138)
at org.apache.spark.api.python.PythonRDD$$anon$1.<init>(PythonRDD.scala:179)
at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:97)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:297)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:88)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
... 1 more
My current code is:
from pyspark.mllib.regression import LabeledPoint
from pyspark.mllib.tree import DecisionTree, DecisionTreeModel
from pyspark.mllib.util import MLUtils
import os.path
import numpy as np

print np.version.version

def extract(line):
    return (line[1])

inputPath = os.path.join('file1.csv')
fileName = os.path.join(inputPath)

# drop the header row, keep the raw lines
Data = sc.textFile(fileName).zipWithIndex().filter(lambda (line, rownum): rownum > 0).map(lambda (line, rownum): line)

inputData = (Data
             .map(lambda line: line.split(";"))   # split each line on ';'
             .filter(lambda line: len(line) > 1)  # drop empty/short rows
             .map(extract))                       # map to the second column
# error comes at this line
print inputData.collect()
I already have numpy installed (sudo apt-get install python-numpy) and I can print the numpy version in IPython using numpy.version.version.
Why does this error occur, and how can I resolve it?
NOTE 1: My current bash_profile:
# Set the Spark Home as an environment variable.
export SPARK_HOME="$HOME/spark-1.5.0-bin-hadoop2.6"
# Define your Spark arguments for when running Spark.
export PYSPARK_SUBMIT_ARGS="--master local[2]"
# IPython alias for the use with SPARK.
alias IPYSPARK='PYSPARK_DRIVER_PYTHON=ipython PYSPARK_DRIVER_PYTHON_OPTS="notebook --profile=pyspark --ip=0.0.0.0" $SPARK_HOME/bin/pyspark'
I have also added the following to my spark-env.sh.template file:
#!/usr/bin/env bash
# This file is sourced when running various Spark programs.
export PYSPARK_PYTHON=/usr/bin/python2.7
export PYSPARK_DRIVER_PYTHON=/usr/bin/ipython
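One detail worth noting (my addition, not in the original post): Spark only sources conf/spark-env.sh, not the .template file, so those exports only take effect after copying or renaming it:
# the template itself is never sourced by Spark's launch scripts
cp $SPARK_HOME/conf/spark-env.sh.template $SPARK_HOME/conf/spark-env.sh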
NOTE 2: I am launching the IPython notebook from inside a virtual environment.
NOTE 3: I have Spark 1.5.0 and numpy 1.8.2
UPDATE: output from sc.parallelize([],1).mapPartitions(lambda _: [(sys.executable, sys.path)]).first()
('/home/vagrant/pyEnv/bin/python2.7', ['', u'/tmp/spark-dbbcfd0b-413e-4406-8bd5-37de29d3fcc5/userFiles-6296ba2d-4ec5-4956-9904-828bda0c6424', '/home/vagrant/spark-1.5.0-bin-hadoop2.6/python/lib/pyspark.zip', '/home/vagrant/spark-1.5.0-bin-hadoop2.6/python/lib/py4j-0.8.2.1-src.zip', '/home/vagrant/spark-1.5.0-bin-hadoop2.6/lib/spark-assembly-1.5.0-hadoop2.6.0.jar', '/home/vagrant/spark-1.5.0-bin-hadoop2.6/python', '/home/vagrant', '/home/vagrant/pyEnv/lib/python2.7', '/home/vagrant/pyEnv/lib/python2.7/plat-x86_64-linux-gnu', '/home/vagrant/pyEnv/lib/python2.7/lib-tk', '/home/vagrant/pyEnv/lib/python2.7/lib-old', '/home/vagrant/pyEnv/lib/python2.7/lib-dynload', '/usr/lib/python2.7', '/usr/lib/python2.7/plat-x86_64-linux-gnu', '/usr/lib/python2.7/lib-tk', '/home/vagrant/pyEnv/local/lib/python2.7/site-packages', '/home/vagrant/pyEnv/lib/python2.7/site-packages'])
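Reading the UPDATE output above, the executors are running the virtualenv interpreter /home/vagrant/pyEnv/bin/python2.7, while the apt-get package only installs numpy for the system Python. A likely fix (my interpretation, not confirmed in the post) is to install numpy into that virtualenv and restart the notebook/SparkContext:
# install numpy into the interpreter the Spark workers actually use
/home/vagrant/pyEnv/bin/pip install numpy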