Problem with connecting the Storage Domain (Host host2 cannot access the Storage Domain(s) <UNKNOWN>)

Problem with connecting the Storage Domain (Host host2 cannot access the Storage Domain(s) )
Hello, everyone! I need specialist help, because I'm already desperate. My company has four hosts connected to the storage. Each host has its own pair of IPs for reaching the storage: host 1 has 10.42.0.10 and 10.42.1.10, host 2 has 10.42.0.20 and 10.42.1.20, and so on. Host 1 cannot ping the address 10.42.0.20. I have tried to describe the hardware in more detail.
Host 1 has oVirt Node 4.3.9 installed and the hosted engine deployed.
When I try to add host 2 to the cluster it is installed, but not activated. The oVirt manager shows the error "Host host2 cannot access the Storage Domain(s) <UNKNOWN>" and host 2 goes to "Non Operational" status. On host 2 the log repeats "connect to 10.42.1.10:3260 failed (No route to host)" indefinitely. I manually connected host 2 to the storage using iscsiadm via ip 10.42.0.20, but the error does not go away. While the host is still trying to activate, I can run virtual machines on it until the error message appears, and VMs that were started on host 2 keep running even after the host goes Non Operational.
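For reference, the manual iscsiadm login looks roughly like this (IQN and portal address taken from the logs below; the exact portal reachable from host 2 may differ):
# discover targets on one portal and log in manually
iscsiadm -m discovery -t sendtargets -p 10.42.0.10:3260
iscsiadm -m node -T iqn.2002-09.com.lenovo:01.array.00c0ff3bfcb0 -p 10.42.0.10:3260 -l
# confirm the session is established
iscsiadm -m session -P 1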
I assume that when host 2 is added to the cluster, oVirt tries to connect it to the same storage that host 1 is connected to, through the address 10.42.1.10. Is there a way to make oVirt connect host 2 through a different IP address instead of the one recorded for the storage domain on host 1? I'm attaching logs:
/var/log/ovirt-engine/engine.log
2020-03-31 09:13:03,866+03 WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-90) [7fa128f4] EVENT_ID: VDS_SET_NONOPERATIONAL_DOMAIN(522), Host host2.school34.local cannot access the Storage Domain(s) <UNKNOWN> attached to the Data Center DataCenter. Setting Host state to Non-Operational.
2020-03-31 10:40:04,883+03 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-12) [7a48ebb7] START, ConnectStorageServerVDSCommand(HostName = host2.school34.local, StorageServerConnectionManagementVDSParameters:{hostId='d82c3a76-e417-4fe4-8b08-a29414e3a9c1', storagePoolId='6052cc0a-71b9-11ea-ba5a-00163e10c7e7', storageType='ISCSI', connectionList='[StorageServerConnections:{id='c8a05dc2-f8a2-4354-96ed-907762c29761', connection='10.42.0.10', iqn='iqn.2002-09.com.lenovo:01.array.00c0ff3bfcb0', vfsType='null', mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null', iface='null', netIfaceName='null'}, StorageServerConnections:{id='0ec6f34e-01c8-4ecc-9bd4-7e2a250d589d', connection='10.42.1.10', iqn='iqn.2002-09.com.lenovo:01.array.00c0ff3bfcb0', vfsType='null', mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null', iface='null', netIfaceName='null'}]', sendNetworkEventOnFailure='true'}), log id: 2c1a22b5
2020-03-31 10:43:05,061+03 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-12) [7a48ebb7] EVENT_ID: VDS_BROKER_COMMAND_FAILURE(10,802), VDSM host2.school34.local command ConnectStorageServerVDS failed: Message timeout which can be caused by communication issues
vdsm.log
2020-03-31 09:34:07,264+0300 ERROR (jsonrpc/5) [storage.HSM] Could not connect to storageServer (hsm:2420)
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 2417, in connectStorageServer
conObj.connect()
File "/usr/lib/python2.7/site-packages/vdsm/storage/storageServer.py", line 488, in connect
iscsi.addIscsiNode(self._iface, self._target, self._cred)
File "/usr/lib/python2.7/site-packages/vdsm/storage/iscsi.py", line 217, in addIscsiNode
iscsiadm.node_login(iface.name, target.address, target.iqn)
File "/usr/lib/python2.7/site-packages/vdsm/storage/iscsiadm.py", line 337, in node_login
raise IscsiNodeError(rc, out, err)
IscsiNodeError: (8, ['Logging in to [iface: default, target: iqn.2002-09.com.lenovo:01.array.00c0ff3bfcb0, portal: 10.42.1.10,3260] (multiple)'], ['iscsiadm: Could not login to [iface: default, target: iqn.2002-09.com.lenovo:01.array.00c0ff3bfcb0, portal: 10.42.1.10,3260].', 'iscsiadm: initiator reported error (8 - connection timed out)', 'iscsiadm: Could not log into all portals'])
2020-03-31 09:36:01,583+0300 WARN (vdsm.Scheduler) [Executor] Worker blocked: <Worker name=jsonrpc/0 running <Task <JsonRpcTask {'params': {u'connectionParams': [{u'port': u'3260', u'connection': u'10.42.0.10', u'iqn': u'iqn.2002-09.com.lenovo:01.array.00c0ff3bfcb0', u'user': u'', u'tpgt': u'2', u'ipv6_enabled': u'false', u'password': '********', u'id': u'c8a05dc2-f8a2-4354-96ed-907762c29761'}, {u'port': u'3260', u'connection': u'10.42.1.10', u'iqn': u'iqn.2002-09.com.lenovo:01.array.00c0ff3bfcb0', u'user': u'', u'tpgt': u'1', u'ipv6_enabled': u'false', u'password': '********', u'id': u'0ec6f34e-01c8-4ecc-9bd4-7e2a250d589d'}], u'storagepoolID': u'6052cc0a-71b9-11ea-ba5a-00163e10c7e7', u'domainType': 3}, 'jsonrpc': '2.0', 'method': u'StoragePool.connectStorageServer', 'id': u'64cc0385-3a11-474b-98f0-b0ecaa6c67c8'} at 0x7fe1ac1ff510> timeout=60, duration=60.00 at 0x7fe1ac1ffb10> task#=316 at 0x7fe1f0041ad0>, traceback:
File: "/usr/lib64/python2.7/threading.py", line 785, in __bootstrap
self.__bootstrap_inner()
File: "/usr/lib64/python2.7/threading.py", line 812, in __bootstrap_inner
self.run()
File: "/usr/lib64/python2.7/threading.py", line 765, in run
self.__target(*self.__args, **self.__kwargs)
File: "/usr/lib/python2.7/site-packages/vdsm/common/concurrent.py", line 260, in run
ret = func(*args, **kwargs)
File: "/usr/lib/python2.7/site-packages/vdsm/executor.py", line 301, in _run
self._execute_task()
File: "/usr/lib/python2.7/site-packages/vdsm/executor.py", line 315, in _execute_task
task()
File: "/usr/lib/python2.7/site-packages/vdsm/executor.py", line 391, in __call__
self._callable()
File: "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 262, in __call__
self._handler(self._ctx, self._req)
File: "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 305, in _serveRequest
response = self._handle_request(req, ctx)
File: "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 345, in _handle_request
res = method(**params)
File: "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 194, in _dynamicMethod
result = fn(*methodArgs)
File: "/usr/lib/python2.7/site-packages/vdsm/API.py", line 1102, in connectStorageServer
connectionParams)
File: "/usr/lib/python2.7/site-packages/vdsm/storage/dispatcher.py", line 74, in wrapper
result = ctask.prepare(func, *args, **kwargs)
File: "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 108, in wrapper
return m(self, *a, **kw)
File: "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 1179, in prepare
result = self._run(func, *args, **kwargs)
File: "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in _run
return fn(*args, **kargs)
File: "<string>", line 2, in connectStorageServer
File: "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 50, in method
ret = func(*args, **kwargs)
File: "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 2417, in connectStorageServer
conObj.connect()
File: "/usr/lib/python2.7/site-packages/vdsm/storage/storageServer.py", line 488, in connect
iscsi.addIscsiNode(self._iface, self._target, self._cred)
File: "/usr/lib/python2.7/site-packages/vdsm/storage/iscsi.py", line 217, in addIscsiNode
iscsiadm.node_login(iface.name, target.address, target.iqn)
File: "/usr/lib/python2.7/site-packages/vdsm/storage/iscsiadm.py", line 327, in node_login
portal, "-l"])
File: "/usr/lib/python2.7/site-packages/vdsm/storage/iscsiadm.py", line 122, in _runCmd
return misc.execCmd(cmd, printable=printCmd, sudo=True, sync=sync)
File: "/usr/lib/python2.7/site-packages/vdsm/common/commands.py", line 213, in execCmd
(out, err) = p.communicate(data)
File: "/usr/lib64/python2.7/site-packages/subprocess32.py", line 924, in communicate
stdout, stderr = self._communicate(input, endtime, timeout)
File: "/usr/lib64/python2.7/site-packages/subprocess32.py", line 1706, in _communicate
orig_timeout)
File: "/usr/lib64/python2.7/site-packages/subprocess32.py", line 1779, in _communicate_with_poll
ready = poller.poll(self._remaining_time(endtime)) (executor:363)
Thanks a lot!

I'm struggling to figure out your architecture. It seems you've configured your Storage Domain with 10.42.1.10:3260 as the iSCSI portal.
If I've understood correctly, your cluster should be something like:
            +-------------+
            |HostedEngine |
            +------+------+
                   |
                   | Management network
           +-------+-------+  10.42.0.0/24
           |               |
           +               +
      10.42.0.10      10.42.0.20
      +--------+      +--------+
      | host1  |      | host2  |
      +--------+      +--------+
      10.42.1.10      10.42.1.20
           +               +
           |               |
           +-------+-------+
                   |
                   | Storage network
               +---+---+  10.42.1.0/24
               |Storage|
               +-------+
Provided my guess is correct, it seems you've either configured your iSCSI target on host1 instead of a proper, external storage device, or you've messed up the IP addressing.
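If the addressing is supposed to be correct, a quick way to narrow this down is to test both portals directly from host2; a rough sketch (portal addresses and IQN taken from the logs above):
# from host2: check reachability and routing to each storage portal
ping -c 3 10.42.0.10
ping -c 3 10.42.1.10
ip route get 10.42.1.10
# check that each portal answers iSCSI discovery
iscsiadm -m discovery -t sendtargets -p 10.42.0.10:3260
iscsiadm -m discovery -t sendtargets -p 10.42.1.10:3260
If 10.42.1.10 is unreachable from host2 while 10.42.0.10 works, the storage network wiring or addressing for host2 is the place to look.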

Related

Celery erroring out at runtime due to RabbitMQ password rotation using HashiCorp Vault

I'm trying to use RabbitMQ as a broker for Celery and HashiCorp Vault for root credential rotation. How do I use Vault's RabbitMQ secrets engine with Celery?
import requests

def get_broker_url():
    url = "http://localhost:8200/v1/rabbitmq/creds/dev-role"
    headers = {
        'X-Vault-Token': 'VAULT_TOKEN'
    }
    # fetch a dynamically generated RabbitMQ credential from Vault
    response = requests.get(url, headers=headers).json()
    return "amqp://{username}:{password}@127.0.0.1:5672/".format(
        username=response["data"]["username"],
        password=response["data"]["password"])
And while initialising Celery:
app = Celery(app_name, broker=get_broker_url(), backend="rpc://")
I'm getting this error when the lease expires:
[2022-11-24 13:05:54,489: WARNING/MainProcess] consumer: Connection to broker lost. Trying to re-establish the connection...
Traceback (most recent call last):
File "/Users/vg/celery-mq-demo/venv/lib/python3.7/site-packages/celery/worker/consumer/consumer.py", line 332, in start
blueprint.start(self)
File "/Users/vg/celery-mq-demo/venv/lib/python3.7/site-packages/celery/bootsteps.py", line 116, in start
step.start(parent)
File "/Users/vg/celery-mq-demo/venv/lib/python3.7/site-packages/celery/worker/consumer/consumer.py", line 628, in start
c.loop(*c.loop_args())
File "/Users/vg/celery-mq-demo/venv/lib/python3.7/site-packages/celery/worker/loops.py", line 97, in asynloop
next(loop)
File "/Users/vg/celery-mq-demo/venv/lib/python3.7/site-packages/kombu/asynchronous/hub.py", line 362, in create_loop
cb(*cbargs)
File "/Users/vg/celery-mq-demo/venv/lib/python3.7/site-packages/kombu/transport/base.py", line 235, in on_readable
reader(loop)
File "/Users/vg/celery-mq-demo/venv/lib/python3.7/site-packages/kombu/transport/base.py", line 217, in _read
drain_events(timeout=0)
File "/Users/vg/celery-mq-demo/venv/lib/python3.7/site-packages/amqp/connection.py", line 525, in drain_events
while not self.blocking_read(timeout):
File "/Users/vg/celery-mq-demo/venv/lib/python3.7/site-packages/amqp/connection.py", line 531, in blocking_read
return self.on_inbound_frame(frame)
File "/Users/vg/celery-mq-demo/venv/lib/python3.7/site-packages/amqp/method_framing.py", line 53, in on_frame
callback(channel, method_sig, buf, None)
File "/Users/vg/celery-mq-demo/venv/lib/python3.7/site-packages/amqp/connection.py", line 538, in on_inbound_method
method_sig, payload, content,
File "/Users/vg/celery-mq-demo/venv/lib/python3.7/site-packages/amqp/abstract_channel.py", line 156, in dispatch_method
listener(*args)
File "/Users/vg/celery-mq-demo/venv/lib/python3.7/site-packages/amqp/connection.py", line 668, in _on_close
(class_id, method_id), ConnectionError)
amqp.exceptions.ConnectionForced: (0, 0): (320) CONNECTION_FORCED - user 'root-3a978f79-e2fb-2889-08d4-0c69d2bcbffd' is deleted
[2022-11-24 13:05:54,498: WARNING/MainProcess] /Users/vg/celery-mq-demo/venv/lib/python3.7/site-packages/celery/worker/consumer/consumer.py:367: CPendingDeprecationWarning:
In Celery 5.1 we introduced an optional breaking change which
on connection loss cancels all currently executed tasks with late acknowledgement enabled.
These tasks cannot be acknowledged as the connection is gone, and the tasks are automatically redelivered back to the queue.
You can enable this behavior using the worker_cancel_long_running_tasks_on_connection_loss setting.
In Celery 5.1 it is set to False by default. The setting will be set to True by default in Celery 6.0.
warnings.warn(CANCEL_TASKS_BY_DEFAULT, CPendingDeprecationWarning)
[2022-11-24 13:05:54,505: CRITICAL/MainProcess] Unrecoverable error: AccessRefused(403, 'ACCESS_REFUSED - Login was refused using authentication mechanism AMQPLAIN. For details see the broker logfile.', (0, 0), '')
Traceback (most recent call last):
File "/Users/vg/celery-mq-demo/venv/lib/python3.7/site-packages/celery/worker/worker.py", line 203, in start
self.blueprint.start(self)
File "/Users/vg/celery-mq-demo/venv/lib/python3.7/site-packages/celery/bootsteps.py", line 116, in start
step.start(parent)
File "/Users/vg/celery-mq-demo/venv/lib/python3.7/site-packages/celery/bootsteps.py", line 365, in start
return self.obj.start()
File "/Users/vg/celery-mq-demo/venv/lib/python3.7/site-packages/celery/worker/consumer/consumer.py", line 332, in start
blueprint.start(self)
File "/Users/vg/celery-mq-demo/venv/lib/python3.7/site-packages/celery/bootsteps.py", line 116, in start
step.start(parent)
File "/Users/vg/celery-mq-demo/venv/lib/python3.7/site-packages/celery/worker/consumer/connection.py", line 21, in start
c.connection = c.connect()
File "/Users/vg/celery-mq-demo/venv/lib/python3.7/site-packages/celery/worker/consumer/consumer.py", line 428, in connect
conn = self.connection_for_read(heartbeat=self.amqheartbeat)
File "/Users/vg/celery-mq-demo/venv/lib/python3.7/site-packages/celery/worker/consumer/consumer.py", line 435, in connection_for_read
self.app.connection_for_read(heartbeat=heartbeat))
File "/Users/vg/celery-mq-demo/venv/lib/python3.7/site-packages/celery/worker/consumer/consumer.py", line 462, in ensure_connected
callback=maybe_shutdown,
File "/Users/vg/celery-mq-demo/venv/lib/python3.7/site-packages/kombu/connection.py", line 381, in ensure_connection
self._ensure_connection(*args, **kwargs)
File "/Users/vg/celery-mq-demo/venv/lib/python3.7/site-packages/kombu/connection.py", line 437, in _ensure_connection
callback, timeout=timeout
File "/Users/vg/celery-mq-demo/venv/lib/python3.7/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
return fun(*args, **kwargs)
File "/Users/vg/celery-mq-demo/venv/lib/python3.7/site-packages/kombu/connection.py", line 877, in _connection_factory
self._connection = self._establish_connection()
File "/Users/vg/celery-mq-demo/venv/lib/python3.7/site-packages/kombu/connection.py", line 812, in _establish_connection
conn = self.transport.establish_connection()
File "/Users/vg/celery-mq-demo/venv/lib/python3.7/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
conn.connect()
File "/Users/vg/celery-mq-demo/venv/lib/python3.7/site-packages/amqp/connection.py", line 329, in connect
self.drain_events(timeout=self.connect_timeout)
File "/Users/vg/celery-mq-demo/venv/lib/python3.7/site-packages/amqp/connection.py", line 525, in drain_events
while not self.blocking_read(timeout):
File "/Users/vg/celery-mq-demo/venv/lib/python3.7/site-packages/amqp/connection.py", line 531, in blocking_read
return self.on_inbound_frame(frame)
File "/Users/vg/celery-mq-demo/venv/lib/python3.7/site-packages/amqp/method_framing.py", line 53, in on_frame
callback(channel, method_sig, buf, None)
File "/Users/vg/celery-mq-demo/venv/lib/python3.7/site-packages/amqp/connection.py", line 538, in on_inbound_method
method_sig, payload, content,
File "/Users/vg/celery-mq-demo/venv/lib/python3.7/site-packages/amqp/abstract_channel.py", line 156, in dispatch_method
listener(*args)
File "/Users/vg/celery-mq-demo/venv/lib/python3.7/site-packages/amqp/connection.py", line 668, in _on_close
(class_id, method_id), ConnectionError)
amqp.exceptions.AccessRefused: (0, 0): (403) ACCESS_REFUSED - Login was refused using authentication mechanism AMQPLAIN. For details see the broker logfile.
Since the broker root user's lease expires based on the TTL, how do I give Celery a new broker credential to use?
Celery version: 5.2.7

Selenium"Can't connect to HTTPS URL because the SSL module is not available

I have an Anaconda environment with Selenium installed. When I try to run my script I get this error:
Traceback (most recent call last):
File "c:\Users\Nick\Desktop\Code\product-scraper\sephora-scraper\scraper.py", line 31, in <module>
ChromeDriverManager().install(), options=options)
File "C:\Users\Nick\anaconda3\envs\web-scraper\lib\site-packages\webdriver_manager\chrome.py", line 34, in install
driver_path = self._get_driver_path(self.driver)
File "C:\Users\Nick\anaconda3\envs\web-scraper\lib\site-packages\webdriver_manager\manager.py", line 21, in _get_driver_path
driver_version = driver.get_version()
File "C:\Users\Nick\anaconda3\envs\web-scraper\lib\site-packages\webdriver_manager\driver.py", line 40, in get_version
return self.get_latest_release_version()
File "C:\Users\Nick\anaconda3\envs\web-scraper\lib\site-packages\webdriver_manager\driver.py", line 63, in get_latest_release_version
resp = requests.get(f"{self._latest_release_url}_{self.browser_version}")
File "C:\Users\Nick\anaconda3\envs\web-scraper\lib\site-packages\requests\api.py", line 76, in get
return request('get', url, params=params, **kwargs)
File "C:\Users\Nick\anaconda3\envs\web-scraper\lib\site-packages\requests\api.py", line 61, in request
return session.request(method=method, url=url, **kwargs)
File "C:\Users\Nick\anaconda3\envs\web-scraper\lib\site-packages\requests\sessions.py", line 542, in request
resp = self.send(prep, **send_kwargs)
File "C:\Users\Nick\anaconda3\envs\web-scraper\lib\site-packages\requests\sessions.py", line 655, in send
r = adapter.send(request, **kwargs)
File "C:\Users\Nick\anaconda3\envs\web-scraper\lib\site-packages\requests\adapters.py", line 514, in send
raise SSLError(e, request=request)
requests.exceptions.SSLError: HTTPSConnectionPool(host='chromedriver.storage.googleapis.com', port=443): Max retries exceeded with url: /LATEST_RELEASE_88.0.4324 (Caused by SSLError("Can't connect to HTTPS URL because the SSL module is not available."))
I'm new to Anaconda so I don't know what else to provide. Please leave a comment if I need to add anything and I will add it right away. Thanks.
Try adding these paths to your PATH environment variable:
..\Anaconda3
..\Anaconda3\scripts
..\Anaconda3\Library\bin
You might need to restart Windows after setting up the environment path.

Lyft Flyte errors on startup following the docs?

Getting the following when running the example from the docs. Anyone have ideas why?
Looks like gRPC. Maybe this is related? https://github.com/grpc/grpc/issues/15933
docker run --network host -e FLYTE_PLATFORM_URL='127.0.0.1:30081' lyft/flytesnacks:v0.1.0 pyflyte -p flytesnacks -d development -c sandbox.config register workflows
Using configuration file at /app/sandbox.config
Flyte Admin URL 127.0.0.1:30081
Running task, workflow, and launch plan registration for flytesnacks, development, ['workflows'] with version 46045e6383611da1cb763d64d846508806fce1a4
Registering Task: workflows.edges.edge_detection_canny
Traceback (most recent call last):
File "/app/venv/bin/pyflyte", line 11, in <module>
sys.exit(main())
File "/app/venv/lib/python3.6/site-packages/click/core.py", line 764, in __call__
return self.main(*args, **kwargs)
File "/app/venv/lib/python3.6/site-packages/click/core.py", line 717, in main
rv = self.invoke(ctx)
File "/app/venv/lib/python3.6/site-packages/click/core.py", line 1137, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/app/venv/lib/python3.6/site-packages/click/core.py", line 1137, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/app/venv/lib/python3.6/site-packages/click/core.py", line 956, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/app/venv/lib/python3.6/site-packages/click/core.py", line 555, in invoke
return callback(*args, **kwargs)
File "/app/venv/lib/python3.6/site-packages/click/decorators.py", line 17, in new_func
return f(get_current_context(), *args, **kwargs)
File "/app/venv/lib/python3.6/site-packages/flytekit/clis/sdk_in_container/register.py", line 86, in workflows
register_all(project, domain, pkgs, test, version)
File "/app/venv/lib/python3.6/site-packages/flytekit/clis/sdk_in_container/register.py", line 24, in register_all
o.register(project, domain, name, version)
File "/app/venv/lib/python3.6/site-packages/flytekit/common/exceptions/scopes.py", line 158, in system_entry_point
return wrapped(*args, **kwargs)
File "/app/venv/lib/python3.6/site-packages/flytekit/common/tasks/task.py", line 141, in register
_engine_loader.get_engine().get_task(self).register(id_to_register)
File "/app/venv/lib/python3.6/site-packages/flytekit/engines/flyte/engine.py", line 234, in register
self.sdk_task
File "/app/venv/lib/python3.6/site-packages/flytekit/clients/friendly.py", line 50, in create_task
spec=task_spec.to_flyte_idl()
File "/app/venv/lib/python3.6/site-packages/flytekit/clients/raw.py", line 12, in handler
return fn(*args, **kwargs)
File "/app/venv/lib/python3.6/site-packages/flytekit/clients/raw.py", line 77, in create_task
return self._stub.CreateTask(task_create_request)
File "/app/venv/lib/python3.6/site-packages/grpc/_channel.py", line 604, in __call__
return _end_unary_response_blocking(state, call, False, None)
File "/app/venv/lib/python3.6/site-packages/grpc/_channel.py", line 506, in _end_unary_response_blocking
raise _Rendezvous(state, None, None, deadline)
grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with:
status = StatusCode.UNAVAILABLE
details = "failed to connect to all addresses"
debug_error_string = "{"created":"@1578482148.715995870","description":"Failed to pick subchannel","file":"src/core/ext/filters/client_channel/client_channel.cc","file_line":3876,"referenced_errors":[{"created":"@1578482148.715993301","description":"failed to connect to all addresses","file":"src/core/ext/filters/client_channel/lb_policy/pick_first/pick_first.cc","file_line":395,"grpc_status":14}]}"
I think the problem is in FLYTE_PLATFORM_URL='127.0.0.1:30081'. Minikube runs in a VM on OS X.
When you run docker run --network host, the docker daemon shares the networking namespace with this host - https://docs.docker.com/network/host/
Thus if you have the docker daemon and Kubernetes running on the same host (not a VM), you should be able to refer to services (nodeport) or ingress on Kubernetes using 127.0.0.1.
But, since Minikube runs in a VM on OS X, you will need to use minikube ip to find the IP address of the Minikube machine and then set
FLYTE_PLATFORM_URL='<minikube ip>:30081'
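For example, something like this should work (a sketch reusing the docker run command from the question, assuming the NodePort is still 30081):
docker run --network host -e FLYTE_PLATFORM_URL="$(minikube ip):30081" lyft/flytesnacks:v0.1.0 pyflyte -p flytesnacks -d development -c sandbox.config register workflows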

RQ Redis: Connection refused after a successful connection

My Redis server is running under Ubuntu 16.04 and I have RQ Dashboard running to monitor the queue. The Redis server has a password, which I supply for the initial connection. Here's my code:
from rq import Queue, Connection, Worker
from redis import Redis
from dblogger import DbLogger

def _redisCon():
    redis_host = "192.168.1.169"
    redis_port = "6379"
    redis_password = "SecretPassword"
    return Redis(host=redis_host, port=redis_port, password=redis_password)

rcon = _redisCon()
if rcon is not None:
    with Connection(rcon):
        DbLogger.log("rqworker", 0, "Launching Worker", "launching an RQ Worker - default Queue")
        worker = Worker(list(map(Queue, 'default')))  # this works - I see the worker registered in RQ Dashboard
        worker.work()  # this eventually fails with the connection error:
"""
16:28:49 RQ worker 'rq:worker:steve-imac.95379' started, version 0.12.0
16:28:49 *** Listening on default...
16:28:49 Cleaning registries for queue: default
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/redis/connection.py", line 177, in _read_from_socket
raise socket.error(SERVER_CLOSED_CONNECTION_ERROR)
OSError: Connection closed by server.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/redis/client.py", line 668, in execute_command
return self.parse_response(connection, command_name, **options)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/redis/client.py", line 680, in parse_response
response = connection.read_response()
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/redis/connection.py", line 624, in read_response
response = self._parser.read_response()
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/redis/connection.py", line 284, in read_response
response = self._buffer.readline()
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/redis/connection.py", line 216, in readline
self._read_from_socket()
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/redis/connection.py", line 191, in _read_from_socket
(e.args,))
redis.exceptions.ConnectionError: Error while reading from socket: ('Connection closed by server.',)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/redis/connection.py", line 489, in connect
raise ConnectionError(self._error_message(e))
redis.exceptions.ConnectionError: Error 61 connecting to 192.168.1.169:6379. Connection refused.
"""
I've tried removing the password and enabling the unix socket in redis.conf; neither seemed to help. This seems to be some sort of timeout, since in other testing the worker actually loads a task and executes it before eventually dying with this error.
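Given the timeout suspicion above, one thing worth checking is whether the Redis server is closing idle client connections; a rough check with redis-cli (host, port and password as in the snippet above):
redis-cli -h 192.168.1.169 -p 6379 -a SecretPassword config get timeout
# a non-zero value means idle clients are dropped after that many seconds; 0 disables the idle timeout
redis-cli -h 192.168.1.169 -p 6379 -a SecretPassword config set timeout 0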

How to configure scrapy-redis to connect to a remote Redis server with a password?

I have tried two solutions:
1)
REDIS_HOST = '111.111.111.111'
REDIS_PORT = 12000
REDIS_PASSWORD = 'aaaaaaaa'
but it will raise:
2017-11-23 15:03:13 [twisted] CRITICAL:
Traceback (most recent call last):
File "/home/yuyanggo/.local/lib/python3.6/site-packages/twisted/internet/defer.py", line 1386, in _inlineCallbacks
result = g.send(result)
File "/home/yuyanggo/.local/lib/python3.6/site-packages/scrapy/crawler.py", line 79, in crawl
yield self.engine.open_spider(self.spider, start_requests)
redis.exceptions.ResponseError: NOAUTH Authentication required.
2)
REDIS_URL = 'redis://:aaaaaaaa@111.111.111.111:12000/0'
but I find that the Redis data is saved to localhost, not to the remote Redis server.
Try putting the following code in your settings.py file:
REDIS_URL = 'redis://:{psw}@{host}:{port}'.format(
    host='xx.xx.xx.xx',  # your server ip
    port='xxx',
    psw='xxxx',
)
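If the spider still writes to localhost, it can help to first confirm that the remote server is reachable and the password is accepted, e.g. with redis-cli (host, port and password taken from the question; adjust as needed):
redis-cli -h 111.111.111.111 -p 12000 -a aaaaaaaa ping
# expected reply: PONG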