I am working on a Laravel project that uses the Dingo package to manage some APIs. I changed the CACHE_DRIVER=array variable in my .env file to CACHE_DRIVER=redis because Dingo no longer supports array as the CACHE_DRIVER. I then installed Redis on my system and included the package in my Laravel project by adding "predis/predis": "~1.0" to my composer.json and updating with composer update. Up until now everything works just fine. However, when I try to create the database tables and seed them using php artisan migrate --seed, I get the error:
[Predis\Connection\ConnectionException]
SELECT failed: NOAUTH Authentication required. [tcp://127.0.0.1:6379]
Note: when I was installing Redis, I set a password. I also authenticated using the two commands redis-cli (to open the Redis CLI) and then AUTH mypassword. Yet when I try to seed, it still throws the same error. What am I doing wrong?
Thanks for any help.
I would start by setting the Redis password in the REDIS_PASSWORD environment variable (e.g. in your .env file). See https://laravel.com/docs/5.3/redis#configuration for more details about Redis configuration in Laravel.
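For example, assuming the password you set during installation was mypassword (a placeholder here), the relevant .env entries would be:
CACHE_DRIVER=redis
REDIS_HOST=127.0.0.1
REDIS_PORT=6379
REDIS_PASSWORD=mypassword
Laravel's default config/database.php reads REDIS_PASSWORD into the Redis connection settings; if your configuration is cached, run php artisan config:clear so the new value is picked up.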
Also make sure your Redis server is actually running when you seed.
Run: redis-server
I am a newbie with Pulumi and I am having an issue. When I run pulumi login against a GCP backend, I get this error:
stderr: error: getting secrets manager: passphrase must be set with
PULUMI_CONFIG_PASSPHRASE or PULUMI_CONFIG_PASSPHRASE_FILE environment
variables
When I do pulumi logout, the deployment works (I am using the Pulumi Automation API). Does anyone have an idea how to fix this?
I have tried setting PULUMI_CONFIG_PASSPHRASE.
When using a self-managed backend for Pulumi, you need to provide a passphrase to encrypt secret values.
This can be done by setting a global environment variable, which depends on the operating system you're using. In Unix-like environments (e.g. macOS or Linux) you can do:
export PULUMI_CONFIG_PASSPHRASE="<a password you can remember>"
On Windows in PowerShell this can be done using:
$env:PULUMI_CONFIG_PASSPHRASE="<a password you can remember>"
If you don't wish to use a passphrase, you can leverage the Pulumi service as your state store, or configure a cloud secrets provider.
This is done when initializing your stack; more information on that can be found here.
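As a sketch of both options (the stack name and KMS key path below are placeholders, not values from your project):
# Option 1: supply the passphrase only for the command that needs it
PULUMI_CONFIG_PASSPHRASE="<a password you can remember>" pulumi up
# Option 2: initialize the stack with a cloud secrets provider instead (GCP KMS shown here)
pulumi stack init my-stack --secrets-provider="gcpkms://projects/my-project/locations/global/keyRings/my-ring/cryptoKeys/my-key"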
I have 2 nodes, rabbit2 and rabbit3. Everything works fine until I start clustering them.
Then I ran the command:
scp rabbit2:/var/lib/rabbitmq/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie
and after the cookie transferred successfully, the nodes still fail to join the cluster.
Maybe the cookie file is not being used because of its file permissions, or another cookie file is taking precedence.
Do you know how to run RabbitMQ in Erlang console mode? If you can, enter the console first and check the problem there, for example with the Erlang cookie check function erlang:get_cookie().
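As a minimal sketch, assuming the default cookie path and that RabbitMQ runs as the rabbitmq user:
# The cookie must be identical on all nodes and readable only by its owner
sudo chown rabbitmq:rabbitmq /var/lib/rabbitmq/.erlang.cookie
sudo chmod 400 /var/lib/rabbitmq/.erlang.cookie
# Print the value each node actually uses and compare them
sudo cat /var/lib/rabbitmq/.erlang.cookie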
I'm trying to set up a Zeek IDS cluster (v3.2.0-dev.271) on 3 Ubuntu 18.04 LTS hosts, to no avail - running the zeek deploy command fails with the following output:
fatal error: problem with interface ens3 (pcap_error: socket: Operation not permitted (pcap_activate))
I have followed the official documentation (which is pretty generic at best) and set up passwordless SSH authentication between the Zeek nodes.
I also preemptively created the /usr/local/zeek path on all hosts and gave the zeek user full permissions on that directory. The documentation says: "The Zeek user must be able to either create this directory or, where it already exists, must have write permission inside this directory on all hosts."
The documentation also says that on the worker nodes this user must have access to the target network interface in promiscuous mode.
My zeek user is a sudoer AND a member of the netdev group on all 3 nodes. Yet the cluster deployment fails. Apparently, when zeekctl establishes the SSH connection to the workers, it cannot get hold of the network interfaces or set capabilities.
Eventually I was able to run the cluster successfully by following this article - however, it requires you to set up the entire cluster as root, which I would like to avoid if at all possible.
So my question is: is there anything blatantly obvious that I am missing? To the best of my knowledge this setup should work; otherwise, I don't know how to force zeekctl to prepend sudo to every SSH command it runs on the workers, or how else to satisfy this requirement.
Any guidance will be greatly appreciated, thanks!
I was experiencing the same error with my standalone setup and found this question by googling it. More googling brought me to a few blogs, including one where the comments mentioned the same error. The author suggested giving the binaries capabilities using setcap:
sudo setcap cap_net_raw,cap_net_admin=eip /usr/local/zeek/bin/zeek
sudo setcap cap_net_raw,cap_net_admin=eip /usr/local/zeek/bin/zeekctl
After running them both, my instance of zeek is now running successfully.
Source: https://www.ericooi.com/zeekurity-zen-part-i-how-to-install-zeek-on-centos-8/#comment-1586
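To verify the capabilities actually took effect, you can read them back with getcap (shipped alongside setcap in the libcap tools):
getcap /usr/local/zeek/bin/zeek
# should print something like: /usr/local/zeek/bin/zeek cap_net_admin,cap_net_raw=eip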
So, just in case someone else stumbles upon the same issue - I figured out what was happening.
I had streamlined the cluster deployment with Ansible (using the 'become' directive at task level) but did not elevate when running the handlers responsible for issuing the zeekctl deploy command.
Once I did, the Zeek Cluster deployment succeeded.
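For illustration, a minimal sketch of the fix (the handler name here is hypothetical); the point is that 'become' set on individual tasks does not carry over to handlers, so the handler needs its own:
handlers:
  - name: deploy zeek cluster
    command: /usr/local/zeek/bin/zeekctl deploy
    become: true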
I installed Cassandra from the official image on Docker Hub and it's running successfully:
root#localhost$ docker ps | grep cassandra
2925664e3391 cassandra:2.1.14 "/docker-entrypoin..." 5 months ago Up 23 minutes 0.0.0.0:7000-7001->7000-7001/tcp, 0.0.0.0:7199->7199/tcp, 0.0.0.0:9042->9042/tcp, 0.0.0.0:9160->9160/tcp, 0.0.0.0:32779->7000/tcp, 0.0.0.0:32778->7001/tcp, 0.0.0.0:32777->7199/tcp, 0.0.0.0:32776->9042/tcp, 0.0.0.0:32775->9160/tcp
My application connects to this Cassandra instance, and I need the connection from my application to use password authentication.
To enable that, I found the /etc/cassandra/cassandra.yaml file in the Docker image, and I would have to follow the Authentication Config documentation to change it there.
Is there a way to override these settings with the docker start or docker run command?
Authentication is not covered by the part of the entrypoint script that generates the cassandra.yaml file, so no. You can submit a PR modifying the relevant piece of the generation script to allow specifying auth via environment variables.
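As a workaround sketch that avoids changing the image (assuming the config path /etc/cassandra/cassandra.yaml from the question): copy the default file out of the image, enable the authenticator, and bind-mount the edited file back in:
docker run --rm cassandra:2.1.14 cat /etc/cassandra/cassandra.yaml > cassandra.yaml
# edit cassandra.yaml and set: authenticator: PasswordAuthenticator
docker run -d --name cassandra -v "$PWD/cassandra.yaml:/etc/cassandra/cassandra.yaml" cassandra:2.1.14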
Has anyone found a way to use the GAE remote API, but connecting to localhost instead of App Engine?
For dev purposes, of course.
I was able to get this working by adding the following to the app.yaml file:
builtins:
- remote_api: on
and then from the command line you can access the db, users, urlfetch or memcache modules:
remote_api_shell.py -s localhost:8080
This will prompt you for an email and password, but these are not important right now. The remote_api_shell.py script is on my PATH, from the Google App Engine SDK directory.
Have you tried the development console? To access it, go to this URL: http://localhost:8080/_ah/admin.
If you really want to use the remote API, have a look at this article. I believe you can use the dev server by passing the localhost URL to the interactive console script.
For Java, see this document, which explains both local and remote access:
https://developers.google.com/appengine/docs/java/tools/remoteapi#Configuring_Remote_API_on_the_Client
If there are some like me who prefer to execute from a python script rather than a shell:
import os
from google.appengine.ext.remote_api import remote_api_stub

# Point the remote API stub at the dev server's API port
remote_api_stub.ConfigureRemoteApiForOAuth('localhost:8081', '/_ah/remote_api', secure=False)
os.environ['SERVER_SOFTWARE'] = 'Development'
os.environ['HTTP_HOST'] = 'localhost:8080'
# ... do stuff ...
I run the dev server with the option --api_port 8081; otherwise, just look at the port used in the dev server logs ("Starting API server at ...").
The environ tweaks are there so that the cloudstorage API can also be used against the dev server.
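For example, a hypothetical "do stuff" step exercising the memcache module mentioned above (Python 2, like the rest of the GAE SDK examples here):
from google.appengine.api import memcache
memcache.set('greeting', 'hello from the remote api')  # writes to the dev server's memcache
print memcache.get('greeting')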