I'm working on a project that is mostly a freshly generated web app from phx.new and phx.gen.auth. I have an index page that is not a LiveView. After login, the user is redirected to the main page, which is a LiveView.
The expectation: After clicking the generated Log out link, the user should be redirected to the / index page, which is not a liveview. This behavior is specified by the generated authentication.
The experience: When I click the generated Log out link, instead of being redirected to the logged-out index splash page as the generated authentication is written to do, I'm redirected to the login page, where I see two flash messages: one :info flash indicating successful logout, and a second :error flash complaining "You must be logged in to access this page." I don't want users to see that :error flash on the login page. Worse, I think, is why it appears: the PageLive LiveView, which is not present on the index page, is running its mount/3 function again (a third time), which causes the LiveView authentication to run again and triggers a second redirect. Importantly, this issue occurs intermittently: sometimes the redirect works correctly and sends the user to the index page without issue, and other times the second, redundant redirect happens and the mistaken flash message is displayed. I think this indicates some kind of race condition.
I have a relatively newly generated project with these routes (among others):
router.ex
scope "/", MyappWeb do
pipe_through :browser
live_session :default do
live "/dash", PageLive, :index
end
end
scope "/", MyappWeb do
pipe_through [:browser, :redirect_if_user_is_authenticated]
get "/", PageController, :index
end
scope "/", MyappWeb do
pipe_through [:browser]
delete "/users/log_out", UserSessionController, :delete
end
The authentication was generated by phx.gen.auth. The delete action in the generated UserSessionController calls the generated UserAuth.log_out_user/1:
user_session_controller.ex
def delete(conn, _params) do
  conn
  |> put_flash(:info, "Logged out successfully.")
  |> UserAuth.log_out_user()
end
user_auth.ex
def log_out_user(conn) do
  user_token = get_session(conn, :user_token)
  user_token && Accounts.delete_session_token(user_token)

  if live_socket_id = get_session(conn, :live_socket_id) do
    MyappWeb.Endpoint.broadcast(live_socket_id, "disconnect", %{})
  end

  conn
  |> renew_session()
  |> delete_resp_cookie(@remember_me_cookie)
  |> redirect(to: "/")
end
The live route to /dash in the router goes through a LiveView called PageLive, which simply runs an authentication hook on mount, as recommended in the LiveView docs:
page_live.ex
defmodule MyappWeb.PageLive do
  use MyappWeb, :live_view

  alias MyappWeb.Live.Components.PackageSearch
  alias MyappWeb.Live.Components.Tabs
  alias MyappWeb.Live.Components.Tabs.TabItem

  on_mount MyappWeb.UserLiveAuth
end
user_live_auth.ex
defmodule MyappWeb.UserLiveAuth do
  import Phoenix.LiveView, only: [assign_new: 3, put_flash: 3, redirect: 2]

  alias Myapp.Accounts
  alias Myapp.Accounts.User
  alias MyappWeb.Router.Helpers, as: Routes

  def mount(_params, session, socket) do
    socket =
      assign_new(socket, :current_user, fn ->
        find_current_user(session)
      end)

    case socket.assigns.current_user do
      %User{} ->
        {:cont, socket}

      _ ->
        socket =
          socket
          |> put_flash(:error, "You must be logged in to access this page.")
          |> redirect(to: Routes.user_session_path(socket, :new))

        {:halt, socket}
    end
  end

  defp find_current_user(session) do
    with user_token when not is_nil(user_token) <- session["user_token"],
         %User{} = user <- Accounts.get_user_by_session_token(user_token),
         do: user
  end
end
Here's the log of the process after the user clicks log out:
**[info] POST /users/log_out**
[debug] Processing with MyappWeb.UserSessionController.delete/2
Parameters: %{"_csrf_token" => "ET8xMSU5KSEedycKEAcJfX0JCl45LmcF_VEHANhinNqHcaz6MFRkIqWu", "_method" => "delete"}
Pipelines: [:browser]
[debug] QUERY OK source="users_tokens" db=1.8ms idle=389.7ms
SELECT u1."id", u1."email", u1."hashed_password", u1."confirmed_at", u1."first_name", u1."last_name", u1."username", u1."inserted_at", u1."updated_at" FROM "users_tokens" AS u0 INNER JOIN "users" AS u1 ON u1."id" = u0."user_id" WHERE ((u0."token" = $1) AND (u0."context" = $2)) AND (u0."inserted_at" > $3::timestamp + (-(60)::numeric * interval '1 day')) [<<159, 144, 113, 83, 223, 12, 183, 119, 50, 248, 83, 234, 128, 237, 129, 112, 138, 147, 148, 100, 67, 163, 50, 244, 127, 26, 254, 184, 102, 74, 11, 52>>, "session", ~U[2021-10-06 22:13:44.080128Z]]
[debug] QUERY OK source="users_tokens" db=1.7ms idle=391.8ms
DELETE FROM "users_tokens" AS u0 WHERE ((u0."token" = $1) AND (u0."context" = $2)) [<<159, 144, 113, 83, 223, 12, 183, 119, 50, 248, 83, 234, 128, 237, 129, 112, 138, 147, 148, 100, 67, 163, 50, 244, 127, 26, 254, 184, 102, 74, 11, 52>>, "session"]
**[info] Sent 302 in 6ms**
**[info] CONNECTED TO Phoenix.LiveView.Socket in 64µs**
Transport: :websocket
Serializer: Phoenix.Socket.V2.JSONSerializer
Parameters: %{"_csrf_token" => "ET8xMSU5KSEedycKEAcJfX0JCl45LmcF_VEHANhinNqHcaz6MFRkIqWu", "_mounts" => "0", "_track_static" => %{"0" => "http://localhost:4000/assets/app.css", "1" => "http://localhost:4000/assets/app.js"}, "vsn" => "2.0.0"}
[debug] QUERY OK source="users_tokens" db=1.6ms idle=422.5ms
SELECT u1."id", u1."email", u1."hashed_password", u1."confirmed_at", u1."first_name", u1."last_name", u1."username", u1."inserted_at", u1."updated_at" FROM "users_tokens" AS u0 INNER JOIN "users" AS u1 ON u1."id" = u0."user_id" WHERE ((u0."token" = $1) AND (u0."context" = $2)) AND (u0."inserted_at" > $3::timestamp + (-(60)::numeric * interval '1 day')) [<<159, 144, 113, 83, 223, 12, 183, 119, 50, 248, 83, 234, 128, 237, 129, 112, 138, 147, 148, 100, 67, 163, 50, 244, 127, 26, 254, 184, 102, 74, 11, 52>>, "session", ~U[2021-10-06 22:13:44.110158Z]]
**[info] GET /users/log_in**
[debug] Processing with MyappWeb.UserSessionController.new/2
Parameters: %{}
Pipelines: [:browser, :redirect_if_user_is_authenticated]
[info] Sent 200 in 6ms
Notice how in the logs above, the 302 redirect occurs, and then immediately the socket reconnects and mount/3 runs, which then triggers another redirect, this time to the /users/log_in route. As far as I understand, the socket should not be trying to reconnect here, and I can't see what's triggering this.
Why is the PageLive mount being triggered again after the 302 redirect to a non-liveview page upon logout, thus triggering a second redirect to the login page?
The key is this code
MyappWeb.Endpoint.broadcast(live_socket_id, "disconnect", %{})
in log_out_user/1. Here you disconnect the LiveView socket via an Erlang message.
This triggers the socket to shut down server-side (see Phoenix.Socket):
def __info__(%Broadcast{event: "disconnect"}, state) do
  {:stop, {:shutdown, :disconnected}, state}
end
But this is a LiveView, so after the disconnect the client-side JavaScript reconnects and remounts the LiveView, which causes MyappWeb.UserLiveAuth to add the flash message again.
For reference, check the LiveView docs:
Once a LiveView is disconnected, the client will attempt to reestablish the connection and re-execute the mount/3 callback. In this case, if the user is no longer logged in or it no longer has access to the current resource, mount/3 will fail and the user will be redirected.
A potential solution could be to do the flash + redirect in a plug inside the routing pipeline instead of in the mount, so users who are not logged in are redirected when loading the page, and to just {:halt, socket} in the LiveView mount, so there is no redirect on logout. Alternatively, send the disconnect broadcast only after the logout request has already redirected (spinning off an async Task could help).
So maybe wrap the broadcast like this, to make the browser close the liveview itself (while still redirecting all other open live views):
Task.async(fn ->
  # Give the browser some time to process the response and close the LV
  :timer.sleep(1000)
  MyappWeb.Endpoint.broadcast(live_socket_id, "disconnect", %{})
end)
In user_auth.ex, I have these lines, which delete the user session from the db:
def log_out_user(conn) do
  user_token = get_session(conn, :user_token)
  user_token && Accounts.delete_session_token(user_token)
The reason the problem occurs is that the live mount fails authentication, since the session token no longer exists in the database. This triggers a redirect before the regular UserAuth.log_out_user/1 redirect has a chance to fire.
Inspired by @smallbutton's solution, I'm ensuring that the session token is not deleted until after the log_out_user/1 redirect, to avoid the race condition:
def log_out_user(conn) do
  if live_socket_id = get_session(conn, :live_socket_id) do
    MyappWeb.Endpoint.broadcast(live_socket_id, "disconnect", %{})
  end

  # TODO is there a better way to handle this issue?
  Task.async(fn ->
    :timer.sleep(1000)
    user_token = get_session(conn, :user_token)
    user_token && Accounts.delete_session_token(user_token)
  end)

  conn
  |> renew_session()
  |> delete_resp_cookie(@remember_me_cookie)
  |> redirect(to: "/")
end
This is an unsatisfying workaround though, because I don't like having to manually coordinate timing like this.
I'm trying to connect to AWS ElastiCache (Redis in cluster mode) with TLS enabled. The library versions and Django cache settings are below.
==== Dependencies ====
redis==3.0.0
redis-py-cluster==2.0.0
django-redis==4.11.0
==== Settings ====
CACHES = {
    'default': {
        'BACKEND': 'django_redis.cache.RedisCache',
        'LOCATION': "redis://xxxxxxx.mc-redis-cache-v2.zzzzz.usw2.cache.amazonaws.com:6379/0",
        'OPTIONS': {
            'PASSWORD': '<password>',
            'REDIS_CLIENT_CLASS': 'rediscluster.RedisCluster',
            'CONNECTION_POOL_CLASS': 'rediscluster.connection.ClusterConnectionPool',
            'CONNECTION_POOL_KWARGS': {
                'skip_full_coverage_check': True,
                "ssl_cert_reqs": False,
                "ssl": True
            }
        }
    }
}
It doesn't seem to be a problem with the client class (provided by redis-py-cluster), since I'm able to access the cluster directly:
from rediscluster import RedisCluster

startup_nodes = [{"host": "redis://xxxxxxx.mc-redis-cache-v2.zzzzz.usw2.cache.amazonaws.com", "port": "6379"}]
rc = RedisCluster(startup_nodes=startup_nodes, ssl=True, ssl_cert_reqs=False, decode_responses=True, skip_full_coverage_check=True, password='<password>')
rc.set("foo", "bar")
rc.get('foo')  # => 'bar'
but I'm seeing this error when the Django service tries to access the cache. Is there any configuration detail that I might be missing?
File "/usr/lib/python3.6/site-packages/django_redis/cache.py", line 32, in _decorator
return method(self, *args, **kwargs)
File "/usr/lib/python3.6/site-packages/django_redis/cache.py", line 81, in get
client=client)
File "/usr/lib/python3.6/site-packages/django_redis/client/default.py", line 194, in get
client = self.get_client(write=False)
File "/usr/lib/python3.6/site-packages/django_redis/client/default.py", line 90, in get_client
self._clients[index] = self.connect(index)
File "/usr/lib/python3.6/site-packages/django_redis/client/default.py", line 103, in connect
return self.connection_factory.connect(self._server[index])
File "/usr/lib/python3.6/site-packages/django_redis/pool.py", line 64, in connect
connection = self.get_connection(params)
File "/usr/lib/python3.6/site-packages/django_redis/pool.py", line 75, in get_connection
pool = self.get_or_create_connection_pool(params)
File "/usr/lib/python3.6/site-packages/django_redis/pool.py", line 94, in get_or_create_connection_pool
self._pools[key] = self.get_connection_pool(params)
File "/usr/lib/python3.6/site-packages/django_redis/pool.py", line 107, in get_connection_pool
pool = self.pool_cls.from_url(**cp_params)
File "/usr/lib/python3.6/site-packages/redis/connection.py", line 916, in from_url
return cls(**kwargs)
File "/usr/lib/python3.6/site-packages/rediscluster/connection.py", line 146, in __init__
self.nodes.initialize()
File "/usr/lib/python3.6/site-packages/rediscluster/nodemanager.py", line 172, in initialize
raise RedisClusterException("ERROR sending 'cluster slots' command to redis server: {0}".format(node))
rediscluster.exceptions.RedisClusterException: ERROR sending 'cluster slots' command to redis server: {'host': 'xxxxxxx.mc-redis-cache-v2.zzzzz.usw2.cache.amazonaws.com', 'port': '6379'}
I also tried passing "ssl_ca_certs": "/etc/ssl/certs/ca-certificates.crt" in CONNECTION_POOL_KWARGS and setting the LOCATION scheme to rediss://, still no luck.
You need to change ssl_cert_reqs=False to ssl_cert_reqs=None; redis-py treats None as ssl.CERT_NONE, whereas False is not a value it documents.
Here's the link to the section of the redis-py GitHub README that covers this:
https://github.com/andymccurdy/redis-py#ssl-connections
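Applied to the settings in the question, that would look like the sketch below (same placeholder endpoint and password as above; only ssl_cert_reqs changes):

CACHES = {
    'default': {
        'BACKEND': 'django_redis.cache.RedisCache',
        'LOCATION': "redis://xxxxxxx.mc-redis-cache-v2.zzzzz.usw2.cache.amazonaws.com:6379/0",
        'OPTIONS': {
            'PASSWORD': '<password>',
            'REDIS_CLIENT_CLASS': 'rediscluster.RedisCluster',
            'CONNECTION_POOL_CLASS': 'rediscluster.connection.ClusterConnectionPool',
            'CONNECTION_POOL_KWARGS': {
                'skip_full_coverage_check': True,
                'ssl_cert_reqs': None,  # None, not False, disables certificate verification
                'ssl': True
            }
        }
    }
}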
I am trying to automate the deployment of a rabbitmq processing chain with Ansible.
The rabbitmq_user setup below works perfectly:
rabbitmq_user:
  user: admin
  password: supersecret
  read_priv: .*
  write_priv: .*
  configure_priv: .*
But the rabbitmq_queue setup below crashes:
rabbitmq_queue:
  name: feedfiles
  login_host: 127.0.0.1
  login_user: admin
  login_password: admin
  login_port: 5672
The crash log looks like this:
{"changed": false, "module_stderr": "Shared connection to 127.0.0.1 closed.
", "module_stdout": "
Traceback (most recent call last):
File "/tmp/ansible_i8T24e/ansible_module_rabbitmq_queue.py", line 285, in <module>
main()
File "/tmp/ansible_i8T24e/ansible_module_rabbitmq_queue.py", line 178, in main
r = requests.get(url, auth=(module.params['login_user'], module.params['login_password']))
File "/usr/lib/python2.7/dist-packages/requests/api.py", line 70, in get
return request('get', url, params=params, **kwargs)
File "/usr/lib/python2.7/dist-packages/requests/api.py", line 56, in request
return session.request(method=method, url=url, **kwargs)
File "/usr/lib/python2.7/dist-packages/requests/sessions.py", line 488, in request
resp = self.send(prep, **send_kwargs)
File "/usr/lib/python2.7/dist-packages/requests/sessions.py", line 609, in send
r = adapter.send(request, **kwargs)
File "/usr/lib/python2.7/dist-packages/requests/adapters.py", line 487,in send
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPConnectionPool(host='127.0.0.1', port=15672): Max retries exceeded with url: /api/queues/%2F/feedfiles (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7fcfdaf8e7d0>: Failed to establish a new connection: [Errno 111] Connection refused',))
", "msg": "MODULE FAILURE", "rc": 1}
I have voluntarily removed the guest/guest default setup, which is why I am using the admin credentials.
Any idea where the issue could come from?
EDIT:
setting the admin user tag "administrator" doesn't help
I am getting the following error when trying to add a new consumer (queue) by issuing the control command on current_app imported from celery.
The details of the logged error are as follows:
reply = current_app.control.add_consumer(queue_name, destination=WORKER_PROCESSES, reply=True)
  File "/opt/msx/python-env/lib/python2.7/site-packages/celery/app/control.py", line 232, in add_consumer
    **kwargs
  File "/opt/msx/python-env/lib/python2.7/site-packages/celery/app/control.py", line 307, in broadcast
    limit, callback, channel=channel,
  File "/opt/msx/python-env/lib/python2.7/site-packages/kombu/pidbox.py", line 300, in _broadcast
    channel=chan)
  File "/opt/msx/python-env/lib/python2.7/site-packages/kombu/pidbox.py", line 336, in _collect
    with consumer:
  File "/opt/msx/python-env/lib/python2.7/site-packages/kombu/messaging.py", line 396, in __enter__
    self.consume()
  File "/opt/msx/python-env/lib/python2.7/site-packages/kombu/messaging.py", line 445, in consume
    self._basic_consume(T, no_ack=no_ack, nowait=False)
  File "/opt/msx/python-env/lib/python2.7/site-packages/kombu/messaging.py", line 567, in _basic_consume
    no_ack=no_ack, nowait=nowait)
  File "/opt/msx/python-env/lib/python2.7/site-packages/kombu/entity.py", line 611, in consume
    nowait=nowait)
  File "/opt/msx/python-env/lib/python2.7/site-packages/librabbitmq/__init__.py", line 81, in basic_consume
    no_local, no_ack, exclusive, arguments or {},
ChannelError: basic.consume: server channel error 404, message: NOT_FOUND - no queue '2795c73e-2b6a-34d6-bd1f-13de0d1e5497.reply.celery.pidbox' in vhost '/'
I don't understand the error: I am passing a queue name different from the one mentioned in the logs.
Any help will be appreciated. Thanks.
Note: this issue started occurring after setting the MAX_TASK_PER_CHILD value. Is this related to the error?
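For reference, a minimal self-contained version of the call I'm issuing (queue_name and WORKER_PROCESSES are hypothetical stand-ins for my real values):

from celery import current_app

# Stand-ins for my real configuration
queue_name = "my_queue"
WORKER_PROCESSES = ["worker1@myhost"]

# Ask the named workers to start consuming from queue_name and wait for their
# replies. With reply=True the replies are collected over a temporary
# <uuid>.reply.celery.pidbox queue - which appears to be the queue named in
# the 404 above, not the queue I'm adding.
reply = current_app.control.add_consumer(
    queue_name,
    destination=WORKER_PROCESSES,
    reply=True,
)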
The question is straightforward, but some context may help.
I'm trying to deploy a Scrapy project that uses Selenium and PhantomJS as the downloader, but the deploy keeps failing with a permission-denied error. So I want to change the path of ghostdriver.log, or just disable it. Looking at phantomjs -h and the GhostDriver GitHub page, I couldn't find the answer, and my friend Google let me down as well.
$ scrapy deploy
Building egg of crawler-1370960743
'build/scripts-2.7' does not exist -- can't clean it
zip_safe flag not set; analyzing archive contents...
tests.fake_responses.__init__: module references __file__
Deploying crawler-1370960743 to http://localhost:6800/addversion.json
Server response (200):
Traceback (most recent call last):
  File "/usr/lib/pymodules/python2.7/scrapyd/webservice.py", line 18, in render
    return JsonResource.render(self, txrequest)
  File "/usr/lib/pymodules/python2.7/scrapy/utils/txweb.py", line 10, in render
    r = resource.Resource.render(self, txrequest)
  File "/usr/lib/python2.7/dist-packages/twisted/web/resource.py", line 216, in render
    return m(request)
  File "/usr/lib/pymodules/python2.7/scrapyd/webservice.py", line 66, in render_POST
    spiders = get_spider_list(project)
  File "/usr/lib/pymodules/python2.7/scrapyd/utils.py", line 65, in get_spider_list
    raise RuntimeError(msg.splitlines()[-1])
RuntimeError: IOError: [Errno 13] Permission denied: 'ghostdriver.log'
When using the PhantomJS driver, add the following parameter:

from selenium import webdriver

driver = webdriver.PhantomJS(service_log_path='/var/log/phantomjs/ghostdriver.log')
Related code is below; it would be nice to have an option to turn off logging, though it seems that's not supported:
selenium/webdriver/phantomjs/service.py
class Service(object):
    """
    Object that manages the starting and stopping of PhantomJS / Ghostdriver
    """

    def __init__(self, executable_path, port=0, service_args=None, log_path=None):
        """
        Creates a new instance of the Service

        :Args:
         - executable_path : Path to PhantomJS binary
         - port : Port the service is running on
         - service_args : A List of other command line options to pass to PhantomJS
         - log_path: Path for PhantomJS service to log to
        """
        self.port = port
        self.path = executable_path
        self.service_args = service_args
        if self.port == 0:
            self.port = utils.free_port()
        if self.service_args is None:
            self.service_args = []
        self.service_args.insert(0, self.path)
        self.service_args.append("--webdriver=%d" % self.port)
        if not log_path:
            log_path = "ghostdriver.log"
        self._log = open(log_path, 'w')
# Reduce the logging level
driver = webdriver.PhantomJS(service_args=["--webdriver-loglevel=SEVERE"])

# Remove logging entirely
import os
driver = webdriver.PhantomJS(service_log_path=os.devnull)
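Putting the two together, a minimal self-contained sketch (assuming the phantomjs binary is on your PATH; the URL is just an example):

import os
from selenium import webdriver

# Send GhostDriver's log to the null device and only log SEVERE messages,
# so no ghostdriver.log file is ever created in the working directory.
driver = webdriver.PhantomJS(
    service_log_path=os.devnull,
    service_args=["--webdriver-loglevel=SEVERE"],
)
try:
    driver.get("http://example.com")
    print(driver.title)
finally:
    driver.quit()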