wal-g restore issue on PG-11

I am running PostgreSQL 11 and trying to restore a database using wal-g. I created a recovery.conf, since this is version 11, but I still get the error below: for some reason, when I bring up the database, it is still reading the recovery parameters from postgresql.conf.
$ cat recovery.conf
recovery_command='WALG_FILE_PREFIX="file://localhost/testbackup" /home/pgadmco/.local/bin/wal-g wal-fetch %f %p'
standby_mode=on
recovery_target_timeline='2022-05-31 21:30'
WALG_FILE_PREFIX="file://localhost/nas/pgbackup" wal-g backup-list
name last_modified wal_segment_backup_start
base_0000000100000001000000F7 2022-05-31T21:30:03-05:00 0000000100000001000000F7
WALG_FILE_PREFIX="file://localhost/testbackup" wal-g backup-fetch /u01/app/pgsql/11/data base_0000000100000001000000F7
INFO: 2022/06/01 10:21:29.284505 Finished extraction of part_003.tar.lz4
INFO: 2022/06/01 10:21:29.284990 Finished decompression of part_003.tar.lz4
INFO: 2022/06/01 10:21:30.765528 Finished extraction of part_001.tar.lz4
INFO: 2022/06/01 10:21:30.765971 Finished decompression of part_001.tar.lz4
INFO: 2022/06/01 10:21:30.788485 Finished extraction of pg_control.tar.lz4
INFO: 2022/06/01 10:21:30.788501 Backup extraction complete
$ pg_ctl -D /u01/app/pgsql/11/data start
waiting for server to start....2022-06-01 17:42:41.472 GMT [7645] LOG: unrecognized configuration parameter "restore_command" in file "/u01/app/pgsql/11/data/postgresql.conf" line 64
2022-06-01 17:42:41.472 GMT [7645] LOG: unrecognized configuration parameter "recovery_target_timeline" in file "/u01/app/pgsql/11/data/postgresql.conf" line 65
2022-06-01 17:42:41.472 GMT [7645] FATAL: configuration file "/u01/app/pgsql/11/data/postgresql.conf" contains errors
stopped waiting
pg_ctl: could not start server
Any suggestions?
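For context, PostgreSQL 11 only reads restore_command, standby_mode and the recovery_target_* settings from recovery.conf in the data directory; they only became postgresql.conf parameters in PostgreSQL 12, which is why lines 64-65 of postgresql.conf are rejected in the log above. A minimal recovery.conf sketch for this setup (assuming the wal-g path and backup prefix shown in the question, and that those two lines are removed from postgresql.conf) might look like:
# $PGDATA/recovery.conf (PostgreSQL 11)
# note: the parameter is restore_command, not recovery_command,
# and a timestamp target belongs in recovery_target_time, not recovery_target_timeline
restore_command = 'WALG_FILE_PREFIX="file://localhost/testbackup" /home/pgadmco/.local/bin/wal-g wal-fetch %f %p'
recovery_target_time = '2022-05-31 21:30:00'
# standby_mode = 'on'   # only if this server is meant to stay a standby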

Related

Can't backup db: libpq.so.5: cannot open shared object file: No such file or directory

Given:
Linux Mint 20.3
DB Client: DBeaver 22.1.4
I am trying to back up my PostgreSQL database (my_db) with DBeaver, but I get this error:
/run/user/1000/doc/65139af1/bin/pg_dump --verbose --host=localhost --port=5432 --username=postgres --format=p --file /home/my_user/dev/BACKUP/my_db_local/dump-my_db-202211161718.sql -n public my_db
Task 'PostgreSQL dump' started at Wed Nov 16 17:18:29 EET 2022
/run/user/1000/doc/65139af1/bin/pg_dump: error while loading shared libraries: libpq.so.5: cannot open shared object file: No such file or directory
Task 'PostgreSQL dump' finished at Wed Nov 16 17:18:29 EET 2022
2022-11-16 17:18:29.831 - IO error: Process failed (exit code = 127). See error log.
2022-11-16 17:18:29.832 - java.io.IOException: Process failed (exit code = 127). See error log.
at org.jkiss.dbeaver.tasks.nativetool.AbstractNativeToolHandler.validateErrorCode(AbstractNativeToolHandler.java:242)
at org.jkiss.dbeaver.tasks.nativetool.AbstractNativeToolHandler.executeProcess(AbstractNativeToolHandler.java:223)
at org.jkiss.dbeaver.tasks.nativetool.AbstractNativeToolHandler.doExecute(AbstractNativeToolHandler.java:262)
at org.jkiss.dbeaver.tasks.nativetool.AbstractNativeToolHandler.lambda$0(AbstractNativeToolHandler.java:83)
at org.jkiss.dbeaver.runtime.RunnableContextDelegate.lambda$0(RunnableContextDelegate.java:39)
at org.eclipse.jface.operation.ModalContext$ModalContextThread.run(ModalContext.java:122)
Check whether libpq.so.5 is installed by running:
ldconfig -p | grep libpq.so.5
If it is installed, you will see something similar to:
libpq.so.5 (libc6,x86-64) => /lib/x86_64-linux-gnu/libpq.so.5
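If nothing is listed, installing the PostgreSQL client library usually fixes it. On a Debian-based distribution such as Linux Mint, that would be roughly (a sketch, assuming apt packaging):
sudo apt update
sudo apt install libpq5
# optionally, also install a native pg_dump:
sudo apt install postgresql-client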
Also make sure you are not using the Flatpak version of DBeaver; I had the same problem with it. If you are, uninstall it, then download and install the .deb package from:
https://dbeaver.io/download/

● libvirtd.service, Active: inactive (dead), Initialization of QEMU state driver failed: invalid argument: Failed to parse user 'libvirt-qemu'

When I check the status of libvirtd with sudo systemctl status libvirtd, the output is as follows:
● libvirtd.service - Virtualization daemon
Loaded: loaded (/lib/systemd/system/libvirtd.service; enabled; vendor preset: enabled)
Active: inactive (dead) since Thu 2021-07-22 18:00:59 EDT; 1min 4s ago
TriggeredBy: ● libvirtd-admin.socket
● libvirtd.socket
● libvirtd-ro.socket
Docs: man:libvirtd(8)
https://libvirt.org
Process: 1717 ExecStart=/usr/sbin/libvirtd $libvirtd_opts (code=exited, status=0/SUCCESS)
Main PID: 1717 (code=exited, status=0/SUCCESS)
Jul 22 18:00:59 eb2-2259-lin04 systemd[1]: Starting Virtualization daemon...
Jul 22 18:00:59 eb2-2259-lin04 systemd[1]: Started Virtualization daemon.
Jul 22 18:00:59 eb2-2259-lin04 libvirtd[1717]: libvirt version: 6.0.0, package: 0ubuntu8.11 (Christian Ehrhardt <christian.ehrhardt#canonical.com> Tue, 05 Jan 2021 13:48:48 +0100)
Jul 22 18:00:59 libvirtd[1717]: hostname: eb2-2259-lin04
Jul 22 18:00:59 eb2-2259-lin04 libvirtd[1717]: invalid argument: Failed to parse user 'libvirt-qemu'
Jul 22 18:00:59 eb2-2259-lin04 libvirtd[1717]: Initialization of QEMU state driver failed: invalid argument: Failed to parse user 'libvirt-qemu'
Jul 22 18:00:59 eb2-2259-lin04 libvirtd[1717]: Driver state initialization failed
Jul 22 18:00:59 eb2-2259-lin04 systemd[1]: libvirtd.service: Succeeded.
The status is always inactive (dead), and I keep getting the lines invalid argument: Failed to parse user 'libvirt-qemu', Initialization of QEMU state driver failed: invalid argument: Failed to parse user 'libvirt-qemu', and Driver state initialization failed.
I also tried sudo systemctl start libvirtd followed by sudo systemctl status libvirtd, but the issue doesn't get resolved.
I was actually installing the KVM2 driver for GPU support within minikube, following https://help.ubuntu.com/community/KVM/Installation. According to that page, KVM is successfully installed on Ubuntu if virsh list --all does not return an error. For me, all the steps completed without error except virsh list --all, which returned the following error:
error: failed to connect to the hypervisor
error: Cannot recv data: Connection reset by peer
and sometimes returns the following error
error: failed to connect to the hypervisor
error: Failed to connect socket to '/var/run/libvirt/libvirt-sock': Connection refused
When I start minikube with kvm2 as the driver using the command minikube start --driver kvm2, I get the following error:
😄 minikube v1.15.1 on Ubuntu 20.04
💢 Exiting due to GUEST_DRIVER_MISMATCH: The existing "minikube" cluster was created using the "virtualbox" driver, which is incompatible with requested "kvm2" driver.
💡 Suggestion: Delete the existing 'minikube' cluster using: 'minikube delete', or start the existing 'minikube' cluster using: 'minikube start --driver=virtualbox'
Please suggest how I can start minikube with kvm2 as the driver.
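A rough troubleshooting sketch, assuming the "Failed to parse user" error means the libvirt-qemu system account is missing, and taking the minikube message's own suggestion at face value (package names are Ubuntu 20.04 assumptions):
# does the account libvirtd is complaining about exist?
getent passwd libvirt-qemu
# if not, reinstalling the libvirt packages normally recreates it
sudo apt install --reinstall libvirt-daemon-system
sudo systemctl restart libvirtd
virsh list --all
# the GUEST_DRIVER_MISMATCH message suggests removing the old VirtualBox-based cluster first
minikube delete
minikube start --driver=kvm2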

Making Dockerized Flask server concurrent

I have a Flask server that I'm running on AWS Fargate. My task has 2 vCPUs and 8 GB of memory, yet my server can only respond to one request at a time: if I run 2 API requests at the same time, each taking 7 seconds, the first request returns after 7 seconds and the second after 14 seconds.
This is my Dockerfile (based on this repo):
FROM tiangolo/uwsgi-nginx-flask:python3.7
COPY ./requirements.txt requirements.txt
RUN pip3 install --no-cache-dir -r requirements.txt
RUN python3 -m spacy download en
RUN apt-get update
RUN apt-get install wkhtmltopdf -y
RUN apt-get install poppler-utils -y
RUN apt-get install xvfb -y
COPY ./ /app
I have the following config file:
[uwsgi]
module = main
callable = app
enable-threads = true
These are my logs when I start the server:
Checking for script in /app/prestart.sh
Running script /app/prestart.sh
Running inside /app/prestart.sh, you could add migrations to this file, e.g.:
#! /usr/bin/env bash
# Let the DB start
sleep 10;
# Run migrations
alembic upgrade head
/usr/lib/python2.7/dist-packages/supervisor/options.py:298: UserWarning: Supervisord is running as root and it is searching for its configuration file in default locations (including its current working directory); you probably want to specify a "-c" argument specifying an absolute path to a configuration file for improved security.
'Supervisord is running as root and it is searching '
2019-10-05 06:29:53,438 CRIT Supervisor running as root (no user in config file)
2019-10-05 06:29:53,438 INFO Included extra file "/etc/supervisor/conf.d/supervisord.conf" during parsing
2019-10-05 06:29:53,446 INFO RPC interface 'supervisor' initialized
2019-10-05 06:29:53,446 CRIT Server 'unix_http_server' running without any HTTP authentication checking
2019-10-05 06:29:53,446 INFO supervisord started with pid 1
2019-10-05 06:29:54,448 INFO spawned: 'nginx' with pid 9
2019-10-05 06:29:54,450 INFO spawned: 'uwsgi' with pid 10
[uWSGI] getting INI configuration from /app/uwsgi.ini
[uWSGI] getting INI configuration from /etc/uwsgi/uwsgi.ini
;uWSGI instance configuration
[uwsgi]
cheaper = 2
processes = 16
ini = /app/uwsgi.ini
module = main
callable = app
enable-threads = true
ini = /etc/uwsgi/uwsgi.ini
socket = /tmp/uwsgi.sock
chown-socket = nginx:nginx
chmod-socket = 664
hook-master-start = unix_signal:15 gracefully_kill_them_all
need-app = true
die-on-term = true
show-config = true
;end of configuration
*** Starting uWSGI 2.0.18 (64bit) on [Sat Oct 5 06:29:54 2019] ***
compiled with version: 6.3.0 20170516 on 09 August 2019 03:11:53
os: Linux-4.14.138-114.102.amzn2.x86_64 #1 SMP Thu Aug 15 15:29:58 UTC 2019
nodename: ip-10-0-1-217.ec2.internal
machine: x86_64
clock source: unix
pcre jit disabled
detected number of CPU cores: 2
current working directory: /app
detected binary path: /usr/local/bin/uwsgi
your memory page size is 4096 bytes
detected max file descriptor number: 1024
lock engine: pthread robust mutexes
thunder lock: disabled (you can enable it with --thunder-lock)
uwsgi socket 0 bound to UNIX address /tmp/uwsgi.sock fd 3
uWSGI running as root, you can use --uid/--gid/--chroot options
*** WARNING: you are running uWSGI as root !!! (use the --uid flag) ***
Python version: 3.7.4 (default, Jul 13 2019, 14:20:24) [GCC 6.3.0 20170516]
Python main interpreter initialized at 0x55e1e2b181a0
uWSGI running as root, you can use --uid/--gid/--chroot options
*** WARNING: you are running uWSGI as root !!! (use the --uid flag) ***
python threads support enabled
your server socket listen backlog is limited to 100 connections
your mercy for graceful operations on workers is 60 seconds
mapped 1239640 bytes (1210 KB) for 16 cores
*** Operational MODE: preforking ***
2019-10-05 06:29:55,483 INFO success: nginx entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2019-10-05 06:29:55,484 INFO success: uwsgi entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
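For reference, uWSGI concurrency comes from the processes/threads/cheaper values that end up in the merged configuration printed above (the base image pulls /app/uwsgi.ini in via ini =). A sketch of an /app/uwsgi.ini that asks for several workers plus threads explicitly; the numbers are illustrative assumptions, not a confirmed fix for this deployment:
[uwsgi]
module = main
callable = app
enable-threads = true
; explicit concurrency settings (cheaper is the minimum number of workers kept alive)
processes = 4
threads = 2
cheaper = 2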

Failure when creating Content Runtime

When I tried to deploy the Content Runtime, it failed with the error:
null_resource.singlenode (remote-exec): ERROR: CONFIGURATION ERROR:Specified config file /etc/opscode/pivotal.rb does not exist
null_resource.singlenode (remote-exec): Creating admin user: chef-admin
null_resource.singlenode: Still creating... (8m30s elapsed)
null_resource.singlenode (remote-exec): ERROR: CONFIGURATION ERROR:Specified config file /etc/opscode/pivotal.rb does not exist
Error applying plan:
I see this error when I try to create a content runtime using the vSphere or Other template. What could be the cause?
This looks like an issue while installing Chef. Run this command on the failed VM:
~/advanced-content-runtime/verify-installation.sh
And check the results.
The command will indicate a failure with Chef, but will show you the location of the Chef install log:
~/advanced-content-runtime/.advanced-runtime-config/chef-install.log
Then check the log for pivotal:
egrep pivotal ~/advanced-content-runtime/.advanced-runtime-config/chef-install.log
On a system where Chef installed correctly, the results of the commands above look like this:
[2017-11-07T16:44:10-06:00] INFO: Storing updated cookbooks/private-chef/templates/default/pivotal.rb.erb in the cache.
[2017-11-07T16:44:13-06:00] INFO: Processing file[/etc/opscode/pivotal.pem] action create (private-chef::private_keys line 33)
[2017-11-07T16:44:13-06:00] INFO: file[/etc/opscode/pivotal.pem] created file /etc/opscode/pivotal.pem
[2017-11-07T16:44:13-06:00] INFO: file[/etc/opscode/pivotal.pem] updated file contents /etc/opscode/pivotal.pem
[2017-11-07T16:44:13-06:00] INFO: file[/etc/opscode/pivotal.pem] owner changed to 999
[2017-11-07T16:44:13-06:00] INFO: file[/etc/opscode/pivotal.pem] group changed to 0
[2017-11-07T16:44:13-06:00] INFO: file[/etc/opscode/pivotal.pem] mode changed to 600
[2017-11-07T16:47:51-06:00] INFO: Processing template[/etc/opscode/pivotal.rb] action create (private-chef::ctl_config line 32)
[2017-11-07T16:47:51-06:00] INFO: template[/etc/opscode/pivotal.rb] created file /etc/opscode/pivotal.rb
[2017-11-07T16:47:51-06:00] INFO: template[/etc/opscode/pivotal.rb] updated file contents /etc/opscode/pivotal.rb
[2017-11-07T16:47:51-06:00] INFO: template[/etc/opscode/pivotal.rb] owner changed to 0
[2017-11-07T16:47:51-06:00] INFO: template[/etc/opscode/pivotal.rb] group changed to 0
[2017-11-07T16:47:51-06:00] INFO: template[/etc/opscode/pivotal.rb] mode changed to 644
From a review of the Chef logs, you might find a failed Chef configuration, specifically this issue: https://github.com/chef/chef-server/issues/987.
You can clean the Chef install (chef-server-ctl cleanse) and re-run the install from the command line to complete it successfully. From the CAM perspective, you can then resubmit the Other-template create of the content runtime, and the VM will become usable and known to CAM.
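The "re-run the install from the command line" step is not spelled out above; one plausible sequence after the cleanse, assuming a standard Chef Server layout where reconfigure regenerates /etc/opscode/pivotal.rb, would be:
sudo chef-server-ctl cleanse
sudo chef-server-ctl reconfigure
# then re-check with the verification script mentioned earlier
~/advanced-content-runtime/verify-installation.sh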

Need your help to troubleshoot an Aerospike restore issue

I am new to Aerospike and need your help to troubleshoot a restore issue. I have Aerospike running on my Mac and it seems to work fine, except that it does not let me restore from a .asb file. I took a backup from an Aerospike instance running on an Ubuntu machine using the asbackup utility, but when I try to restore the .asb file with the asrestore command on my Mac instance, it throws the following errors:
asrestore -d ~
restoring: host 127.0.0.1 port 3000 bin_list (null) from directory /home/vagrant
2015-08-25 15:13:43 INFO Add node BB9A9EAAB270008 127.0.0.1:3000
Aug 25 2015 15:13:43 GMT: starting restore: filename: /home/vagrant/BB9A3F5AA1ED512_00000.asb FILE 0x7f63680008c0
put failed in restore: unusual error 20 trying again
put failed in restore: unusual error 20 trying again
put failed in restore: unusual error 20 trying again
put failed in restore: unusual error 20 trying again
put failed in restore: unusual error 20 trying again
put failed in restore: unusual error 20 trying again
put failed in restore: unusual error 20 trying again
put failed in restore: unusual error 20 trying again
put failed in restore: unusual error 20 trying again
put failed in restore: unusual error 20 trying again
put failed in restore: unusual error 20 trying again
put failed in restore: unusual error 20 trying again
restore: too many consecutive put failure
Aug 25 2015 15:13:44 GMT: expired 0 : skipped 0 : attempted 0 : [updated 0 not-updated (existed 0 gen-old 0)]
I tried using the -t option to restrict the thread count, but that didn't help.
Has anyone faced a similar issue?
Looking forward to your help.
Error 20 indicates a bad namespace parameter. Check your server error log for more details. It seems the namespace in the backup file is not defined in the configuration of the cluster into which you are trying to load with asrestore.
You have two options:
1. Create a namespace with the same name as the one in the backup file (a minimal config sketch follows below).
2. Write a script to change the namespace name in the backup files to a name that is defined in the cluster you are loading into.
The backup file format is documented at http://www.aerospike.com/docs/tools/backup/file_format.html
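For the first option, a minimal namespace stanza in aerospike.conf on the target cluster might look like the sketch below; the namespace name is a placeholder for whatever name appears in the backup file (see the file-format link above), and the sizes are illustrative. Restart the Aerospike daemon after editing the config.
# replace ns_from_backup with the namespace name recorded in the .asb file
namespace ns_from_backup {
    replication-factor 1
    memory-size 1G
    default-ttl 0
    storage-engine memory
}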