Getting “db: SQLSTATE[HY000] [2002] Connection refused” error on Mac after updating MAMP

I was running an older version of MAMP and found I couldn't install a current version of WordPress because it required at least PHP 5.2, so I updated MAMP, which now runs PHP 7.4.2. Things seemed to be fine after the update (the sites are running) until I tried using interconnect/it's Search-Replace-DB on a project. Using the GUI, I got an AJAX error. The docs state that in the event of an AJAX error I should follow the CLI instructions, but when I do, I get one of two errors no matter which approach I take:
db: SQLSTATE[HY000] [2002] Connection refused
db: SQLSTATE[HY000] [2002] No such file or directory
My wp-config.php looks like:
/** MySQL database username */
define('DB_USER', 'root');
/** MySQL database password */
define('DB_PASSWORD', 'root');
/** MySQL hostname */
define('DB_HOST', '127.0.0.1');
I read an SO post about this same "db: SQLSTATE[HY000] [2002] Connection refused" error on Mac with MAMP and am trying its recommendation to replace the line '#!/usr/bin/php -q' with '#!/usr/bin/env php -q' in the srdb.cli.php file.
The CLI commands I've tried:
// posts suggest using 8889, but my MAMP seems to run on :8888?
//using localhost string
php srdb.cli.php -h localhost -n test -u root -proot -s oldname.org -r localhost:8889 -v true -z
//using ip
php srdb.cli.php -h 127.0.0.1 -n test -u root -proot -s oldname.org -r localhost:8889 -v true -z
//using localhost string
php srdb.cli.php -h localhost -n test -u root -proot -s oldname.org -r localhost:8888 -v true -z
//using ip
php srdb.cli.php -h 127.0.0.1 -n test -u root -proot -s oldname.org -r localhost:8888 -v true -z
//Using path explicitly with ip
/Applications/MAMP/bin/php/php7.4.2/bin/php srdb.cli.php -h 127.0.0.1 -u root -n test -proot -s oldname.org -r localhost:/8888
After changing that line in srdb.cli.php, I still can't connect. At this point, I don't know whether it's PHP or MySQL that's having an issue, whether the db is corrupted, or whether the environment variables/paths/links are off after updating MAMP, or how to go about determining these things. Any insights would be greatly appreciated.

For anyone else encountering this issue: specifying the full paths to both the MAMP PHP binary (which executes the script) and the Search-Replace-DB script on the command line solved the problem. I put the strings to search for and replace with in quotes. I also increased the PHP timeout limit in wp-config.php with: set_time_limit(3000);
Note that how you specify localhost should be consistent between the options passed to the script and what's in your wp-config.php file (if you use localhost in wp-config, use localhost in the script as well).
/Applications/MAMP/bin/php/php7.4.2/bin/php /Applications/MAMP/htdocs/test/Search-Replace-DB-master/srdb.cli.php -h localhost -u root -proot --port 8889 -n test -s "http://olddomain.com" -r "http://localhost:8888/test" -v true
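As a side note on which of the two errors shows up when: on macOS, a host of "localhost" makes PHP connect through the MySQL socket (hence "No such file or directory" if the socket isn't where PHP looks), while "127.0.0.1" uses TCP and gives "Connection refused" if nothing listens on that port; MAMP's MySQL normally listens on 8889 (Apache on 8888). A quick connectivity check, sketched against MAMP's default paths and ports (adjust if yours differ):
# Try TCP on MAMP's default MySQL port
/Applications/MAMP/Library/bin/mysql -h 127.0.0.1 --port=8889 -u root -proot -e "status"
# Try MAMP's MySQL socket directly
/Applications/MAMP/Library/bin/mysql --socket=/Applications/MAMP/tmp/mysql/mysql.sock -u root -proot -e "status"
Whichever of the two answers is the host/port (or socket) your wp-config.php and the srdb options should agree on.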

Related

PID recv: short read in CRIU

I am receiving a "PID recv: short read" error while using lazy pages migration with CRIU.
At the source, I run the following command:
memhog -r1000 64m
cd /tmp/dump
sudo -H -E criu dump -t $(pidof memhog) -D /tmp/dump --lazy-pages --address 10.237.23.102 --port 1234 --shell-job --display-stats -vvvv -o d.log
Then, in a separate terminal on the source machine itself:
scp -r /tmp/dump/ dst:/tmp/
Now, on the destination machine I start the daemon:
cd /tmp/dump
criu lazy-pages --page-server --address $(gethostip -d src) --port 1234 --display-stats -vvvvv
And finally, the restore command:
cd /tmp/dump
criu restore -D /tmp/dump/ --shell-job --lazy-pages -vvvv --display-stats -o restore.log
The error is thrown by the lazy server daemon on the destination machine.
Furthermore, it works fine with the memhog installed from the numactl package. However, it does not if I build memhog from source.
Any suggestions for solving this will be appreciated.
::Update:: Solved. See answer
Found the issue:
I was building memhog separately on the two machines, so the binaries' "build-id"s did not match. Solution: build it on one machine and then just scp the binary over to the other machine.
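If you want to confirm a build-id mismatch (or that copying the binary fixed it), comparing the ELF build-id note on both machines is a quick check; a rough sketch, assuming readelf from binutils is installed and memhog is on PATH:
# Print the build-id of the memhog binary on each machine; the IDs must be identical
readelf -n "$(which memhog)" | grep -i "build id"
# If they differ, copy the binary from the machine where the dump was taken, e.g.:
scp "$(which memhog)" dst:/usr/local/bin/memhog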

Why do I have to spawn a new shell when doing remote sudo ssh commands to get proper file permissions?

I'm using password-less, key-based login with sudo to execute remote commands. I have figured out that I have to spawn a new shell to execute commands that write to root-owned areas of the remote file system. But I would like a clear explanation of exactly why this is the case.
This fails:
sudo -u joe ssh example.com "sudo echo test > /root/echo_test"
with:
bash: /root/echo_test: Permission denied
This works fine:
sudo -u joe ssh example.com "sudo bash -c 'echo test > /root/echo_test'"
It's the same reason that a local sudo echo test >/root/echo_test will fail (if you are not root) -- the redirection is done by the shell (not the sudo or echo command) which is running as the normal user. sudo only runs the echo command as root.
With sudo -u joe ssh example.com "sudo echo test > /root/echo_test", the remote shell is running as a normal user (probably joe) and does not have permission to write to the file. Using an extra bash invocation works because sudo then runs bash as root (rather than echo), and that root bash can open the file and perform the redirect.
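A common alternative that avoids nesting quotes inside bash -c is to keep the pipe on the unprivileged side and let a sudo-run tee open the root-owned file; a sketch using the same hypothetical host and path as above:
# tee runs as root via sudo and opens /root/echo_test itself;
# echo and the pipe still run as the normal remote user
sudo -u joe ssh example.com "echo test | sudo tee /root/echo_test > /dev/null"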

rsync not finding local directory when sending through SSH on pipeline

I'm using Bitbucket Pipelines to push to our remote server from the build that the pipeline produces.
This is a snippet of the bitbucket-pipelines.yml file
- pipe: atlassian/ssh-run:0.2.2
  variables:
    SSH_USER: $PRODUCTION_USER
    SERVER: $PRODUCTION_SERVER
    COMMAND: '''rsync -zrSlh -e "ssh -p 22007" --stats --max-delete=0 $BITBUCKET_CLONE_DIR/ $PRODUCTION_USER@$PRODUCTION_SERVER:home/$PRODUCTION_USER'''
    PORT: '22007'
The connection itself works, and the command does get issued once the pipe has connected to the server...
INFO: Executing the pipe...
INFO: Using default ssh key
INFO: Executing command on {HOST}
ssh -A -tt -i /root/.ssh/pipelines_id -o StrictHostKeyChecking=no -p 22007 {USER}@{HOST} 'rsync -zrSlh -e "ssh -p 22007" --stats --max-delete=0 /opt/atlassian/pipelines/agent/build/ {USER}@{HOST}:home/{USER}'
bash: rsync -zrSlh -e "ssh -p 22007" --stats --max-delete=0 /opt/atlassian/pipelines/agent/build/ {USER}@{HOST}:home/{USER}: No such file or directory
Connection to {HOST} closed.
I've tried to run the same command locally from the directory on my machine
ssh -A -tt -i /root/.ssh/pipelines_id -o StrictHostKeyChecking=no -p 22007 {USER}@{HOST} 'rsync -zrSlh -e "ssh -p 22007" --stats --max-delete=0 "$PWD" {USER}@{HOST}:/home/{USER}'
but it just duplicates the home directory on the remote.
It looks to me like it's looking for the source directory on the server rather than in the Docker container from Bitbucket (or, in my local test, the files pwd points at).
If I try to run the command without the '' it fails because it defaults to port 22. I've also tried moving the command into a bash script and using MODE: 'Script', which is an accepted pattern for the pipe, but I can't use my environment variables in the .sh file.
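Incidentally, the "No such file or directory" in the log is most likely a quoting effect rather than a missing path: with the doubled-up quotes in COMMAND, the remote bash receives the entire rsync invocation as one single word, and because that word contains slashes it tries to execute the whole string as a path. A minimal local repro with a made-up command string:
# Hypothetical repro of the quoting failure: the quoted string is treated as one command word
bash -c '"rsync -a /tmp/src/ deploy@example.com:/tmp/dst"'
# bash: rsync -a /tmp/src/ deploy@example.com:/tmp/dst: No such file or directory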
If all you want to do is copy the files from the pipeline to the production server, you should use the rsync-deploy pipe instead of ssh-run. Your pipe configuration is going to look pretty much like the following:
script:
  - pipe: atlassian/rsync-deploy:0.3.2
    variables:
      USER: $PRODUCTION_USER
      SERVER: $PRODUCTION_SERVER
      REMOTE_PATH: 'home/$PRODUCTION_USER'
      LOCAL_PATH: 'build'
      SSH_PORT: '22007'
Make sure to configure your SSH keys in pipelines properly (here is a link to our docs for configuring SSH keys https://confluence.atlassian.com/bitbucket/use-ssh-keys-in-bitbucket-pipelines-847452940.html)
I've found another way around this that doesn't need a pipe: instead, I'm running rsync as a script step
image: atlassian/default-image:latest
- rsync -rltDvzCh --max-delete=0 --stats --exclude-from=excludes -e 'ssh -e none -p 22007' $BITBUCKET_CLONE_DIR/ $PRODUCTION_USER@$PRODUCTION_SERVER:/home/$PRODUCTION_USER
It seems the -e none is an important addition, as is loading the Atlassian image; otherwise it fails to find the rsync command. I found this info in a post on Atlassian Community.
This seems to work pretty well for me
image: node:10.15.3

pipelines:
  default:
    - step:
        name: <project-path>
        script:
          - apt-get update && apt-get install -y rsync
          - ssh-keyscan -H $SSH_HOST >> ~/.ssh/known_hosts
          - cd $BITBUCKET_CLONE_DIR
          - rsync -r -v -e ssh . $SSH_USER@$SSH_HOST:/<project-path>
          - ssh $SSH_USER@$SSH_HOST 'cd <project-path> && npm install'
          - ssh $SSH_USER@$SSH_HOST 'pm2 restart 0'
Note: Avoid using sudo cmd in pipeline scripts
Same issue with atlassian/default-image:3:
rsync -azv ./project_path/*
bash: rsync: command not found
Solution:
apt-get update && apt-get install -y rsync

Docker HTTPS access - ONLYOFFICE

I'm following the ONLYOFFICE Docker documentation (GITHUB ONLYOFFICE docker HTTPS access) to get the ONLYOFFICE documentserver and communityserver running with HTTPS.
What I've tried:
1.
I've created the cert files (.crt, .key, .pem) as described in the documentation. After that, I created a file named env.list in my home dir /home/jw/data/ with the following content:
SSL_CERTIFICATE_PATH=/opt/onlyoffice/Data/certs/onlyoffice.crt
SSL_KEY_PATH=/opt/onlyoffice/Data/certs/onlyoffice.key
SSL_DHPARAM_PATH=/opt/onlyoffice/Data/certs/dhparam.pem
SSL_VERIFY_CLIENT=true
2.
After that I added the directory /home/jw/data/ to my $PATH environment variable:
PATH=$PATH:/home/jw/data/; export PATH
3.
On the same shell I started the docker container like this:
sudo docker run -i -t -d --name onlyoffice-document-server -p 443:443 -v /opt/onlyoffice/Data:/var/www/onlyoffice/Data --env-file /home/jw/data/env.list onlyoffice/documentserver
4.
The documentserver is running fine. After that I've started the communityserver with:
sudo docker run -i -t -d --link onlyoffice-document-server:document_server --env-file /home/jw/data/env.list onlyoffice/communityserver
5.
With the command docker ps -a I see both docker containers running fine:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4f573111f2e5 onlyoffice/communityserver "/bin/sh -c 'bash -C " 29 seconds ago Up 28 seconds 80/tcp, 443/tcp, 5222/tcp lonely_mcnulty
23543300fa51 onlyoffice/documentserver "/bin/sh -c 'bash -C " 42 seconds ago Up 41 seconds 80/tcp, 0.0.0.0:443->443/tcp onlyoffice-document-server
But when I try to access https://localhost, Firefox shows a "Secure Connection Failed" error.
Did I miss something?
Okay got it:
I've changed the environment variables in env.list so they point at the paths inside the container (the host directory /opt/onlyoffice/Data is mounted at /var/www/onlyoffice/Data in the container):
SSL_CERTIFICATE_PATH=/var/www/onlyoffice/Data/certs/onlyoffice.crt
SSL_KEY_PATH=/var/www/onlyoffice/Data/certs/onlyoffice.key
SSL_DHPARAM_PATH=/var/www/onlyoffice/Data/certs/dhparam.pem
After that I used the following command to run ONLY the documentserver:
sudo docker run -i -t -d --name onlyoffice-document-server -p 443:443 -v /opt/onlyoffice/Data:/var/www/onlyoffice/Data --env-file /home/jw/data/env.list onlyoffice/documentserver
The ONLYOFFICE OnlineEditor API is now available over HTTPS:
https://localhost/OfficeWeb/apps/api/documents/api.js
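If it still fails for someone, it can help to verify that the container actually sees the certificate files and that HTTPS responds; a quick check along these lines (same container name and mount as above, -k because the certificate is self-signed):
# List the certs from inside the running container
sudo docker exec onlyoffice-document-server ls -l /var/www/onlyoffice/Data/certs
# Fetch the API script headers over HTTPS, ignoring the self-signed cert
curl -k -I https://localhost/OfficeWeb/apps/api/documents/api.js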
If you want to use CommunityServer with HTTPS just change the run command above to:
sudo docker run -i -t -d --name onlyoffice-community-server -p 443:443 -v /opt/onlyoffice/Data:/var/www/onlyoffice/Data --env-file /home/<username>/env.list onlyoffice/communityserver
Thank you anyway!

Localhost has 403 error.

I am new to Apache, but I found a tutorial online on how to configure Apache, PHP, and MySQL. Everything was fine until I tried to download CakePHP to my localhost. I ran the following commands:
$ cd /Users/myusername/Sites/
$ curl -0 -L https://www.github.com/cakephp/archive/2.4.7.zip
$ unzip 2.4.7.zip
$ rm 2.4.7.zip
$ shopt -s dotglob nullglob
$ mv cakephp-2.4.7/* .
$ rmdir cakephp-2.4.7/
Then I decided I didn't want to use CakePHP and deleted the files from my localhost.
Now I am getting a 403 forbidden error stating "You don't have permission to access /~myusername/ on this server."
Can anyone help me get my localhost working again? Thanks!
Try removing the .htaccess file in /Users/myusername/Sites/
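If you're not sure the .htaccess is even there (CakePHP 2.x ships one in its root, and with shopt -s dotglob the mv above would have carried dotfiles along), a reversible way to check, assuming the same Sites path as above:
cd /Users/myusername/Sites/
ls -la | grep htaccess          # dotfiles only show up with -a
mv .htaccess .htaccess.bak      # move it aside instead of deleting, then reload http://localhost/~myusername/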