SSH key authenticated user unable to use apt-get update

Ubuntu Linux 16.04.5
I have configured a server for SSH key authentication with PuTTY.
Key authentication works fine for the user account accessing the server; the problem is that when I attempt to run "apt-get update", the system returns:
tornado@freeradius:~$ apt-get update
Reading package lists... Done
W: chmod 0700 of directory /var/lib/apt/lists/partial failed - SetupAPTPartialDirectory (1: Operation not permitted)
E: Could not open lock file /var/lib/apt/lists/lock - open (13: Permission denied)
E: Unable to lock directory /var/lib/apt/lists/
W: Problem unlinking the file /var/cache/apt/pkgcache.bin - RemoveCaches (13: Permission denied)
W: Problem unlinking the file /var/cache/apt/srcpkgcache.bin - RemoveCaches (13: Permission denied)
I have seen in multiple posts on similar topics that the solution is to use "sudo apt-get update"; the issue with this is that the user "tornado" has no password.
Additionally, I have configured "sshd_config":
ChallengeResponseAuthentication no
PasswordAuthentication no
The user "tornado" was created using "adduser tornado --disabled-password" option, so when attempting to "sudo apt-get update" I am prompted for the user "tornado's" non-existent password.
The permissions of "/home/tornado" were changed to 755, which made no difference to the issue at hand, so I reverted the chmod.
"stat /home/tornado" output
File: '/home/tornado'
Size: 4096 Blocks: 8 IO Block: 4096 directory
Device: fc00h/64512d Inode: 19267592 Links: 4
Access: (2700/drwx--S---) Uid: ( 1000/ tornado) Gid: ( 1000/ tornado)
Access: 2018-09-14 11:28:01.035536814 -0700
Modify: 2018-09-14 10:50:00.864863347 -0700
Change: 2018-09-14 11:33:13.685020607 -0700
Birth: -
"stat /home/tornado/.ssh" output
File: '/home/tornado/.ssh'
Size: 4096 Blocks: 8 IO Block: 4096 directory
Device: fc00h/64512d Inode: 19267599 Links: 2
Access: (2700/drwx--S---) Uid: ( 1000/ tornado) Gid: ( 1000/ tornado)
Access: 2018-09-14 11:28:01.035536814 -0700
Modify: 2018-09-14 10:44:34.251451611 -0700
Change: 2018-09-14 11:28:01.035536814 -0700
Birth: -
"stat /home/tornado/authorized_keys" output
File: '/home/tornado/.ssh/authorized_keys'
Size: 209 Blocks: 8 IO Block: 4096 regular file
Device: fc00h/64512d Inode: 19267590 Links: 1
Access: (0600/-rw-------) Uid: ( 1000/ tornado) Gid: ( 1000/ tornado)
Access: 2018-09-14 11:33:41.053150514 -0700
Modify: 2018-09-14 10:44:34.127451092 -0700
Change: 2018-09-14 11:28:01.035536814 -0700
Birth: -
Root privileges were granted to the user "tornado" as well, by adding the following via visudo:
# User privilege specification
tornado ALL=(ALL:ALL) ALL
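If the goal is to keep the account password-less, a sudoers NOPASSWD rule is one way to let sudo run without prompting; a minimal sketch (not what was ultimately done here):
tornado ALL=(ALL:ALL) NOPASSWD: ALL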
I also attempted: "chown tornado:root /home/tornado"
which made no change to the situation.
Solved
A password was added to the user "tornado", which allows me to sudo to root and restores full functionality.
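For reference, a minimal sketch of that fix, assuming a root shell is still available (the account itself started with no password):
passwd tornado          # as root: give tornado a password
sudo apt-get update     # then works when logged in as tornado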

Related

Can't backup db: libpq.so.5: cannot open shared object file: No such file or directory

Given:
Linux Mint 20.3
DB Client: DBeaver 22.1.4
I am trying to back up my PostgreSQL database (my_db) with DBeaver, but I get this error:
/run/user/1000/doc/65139af1/bin/pg_dump --verbose --host=localhost --port=5432 --username=postgres --format=p --file /home/my_user/dev/BACKUP/my_db_local/dump-my_db-202211161718.sql -n public my_db
Task 'PostgreSQL dump' started at Wed Nov 16 17:18:29 EET 2022
/run/user/1000/doc/65139af1/bin/pg_dump: error while loading shared libraries: libpq.so.5: cannot open shared object file: No such file or directory
Task 'PostgreSQL dump' finished at Wed Nov 16 17:18:29 EET 2022
2022-11-16 17:18:29.831 - IO error: Process failed (exit code = 127). See error log.
2022-11-16 17:18:29.832 - java.io.IOException: Process failed (exit code = 127). See error log.
at org.jkiss.dbeaver.tasks.nativetool.AbstractNativeToolHandler.validateErrorCode(AbstractNativeToolHandler.java:242)
at org.jkiss.dbeaver.tasks.nativetool.AbstractNativeToolHandler.executeProcess(AbstractNativeToolHandler.java:223)
at org.jkiss.dbeaver.tasks.nativetool.AbstractNativeToolHandler.doExecute(AbstractNativeToolHandler.java:262)
at org.jkiss.dbeaver.tasks.nativetool.AbstractNativeToolHandler.lambda$0(AbstractNativeToolHandler.java:83)
at org.jkiss.dbeaver.runtime.RunnableContextDelegate.lambda$0(RunnableContextDelegate.java:39)
at org.eclipse.jface.operation.ModalContext$ModalContextThread.run(ModalContext.java:122)
Check whether libpq.so.5 is installed by running:
ldconfig -p | grep libpq.so.5
If it is installed, you will see something similar to this:
libpq.so.5 (libc6,x86-64) => /lib/x86_64-linux-gnu/libpq.so.5
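If nothing appears, the library is missing. On Debian-based systems such as Linux Mint it is provided by the libpq5 package, so installing that should be enough (a sketch, assuming apt):
sudo apt-get install libpq5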
Make sure you are not using the Flatpak version; I had the same problem with that version. If you are using it, uninstall it, then download the .deb package and install from there:
https://dbeaver.io/download/
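To check whether the installed DBeaver is the Flatpak build, something like the following should work (io.dbeaver.DBeaverCommunity is the usual Flathub application ID, so treat it as an assumption):
flatpak list --app | grep -i dbeaver
flatpak uninstall io.dbeaver.DBeaverCommunity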

RabbitMQ is dead

I don't actually know how to describe my problem well in my title. I have a local install of RabbitMQ through Homebrew (Mac), and it suddenly just died: I became unable to send messages to the queue. Unfortunately I can't post the error message, because I tried a few other steps, including resetting, uninstalling, and reinstalling Rabbit. After the reinstall, I can't start my Rabbit server; it gets stuck after this:
sudo rabbitmq-server start
Password:
2022-08-05 11:14:17.972308-04:00 [info] <0.221.0> Feature flags: list of feature flags found:
2022-08-05 11:14:17.979492-04:00 [info] <0.221.0> Feature flags: [x] classic_mirrored_queue_version
2022-08-05 11:14:17.979531-04:00 [info] <0.221.0> Feature flags: [x] implicit_default_bindings
2022-08-05 11:14:17.979546-04:00 [info] <0.221.0> Feature flags: [x] maintenance_mode_status
2022-08-05 11:14:17.979568-04:00 [info] <0.221.0> Feature flags: [x] quorum_queue
2022-08-05 11:14:17.979583-04:00 [info] <0.221.0> Feature flags: [x] stream_queue
2022-08-05 11:14:17.979599-04:00 [info] <0.221.0> Feature flags: [x] user_limits
2022-08-05 11:14:17.979611-04:00 [info] <0.221.0> Feature flags: [x] virtual_host_metadata
2022-08-05 11:14:17.979672-04:00 [info] <0.221.0> Feature flags: feature flag states written to disk: yes
2022-08-05 11:14:18.203961-04:00 [notice] <0.44.0> Application syslog exited with reason: stopped
2022-08-05 11:14:18.204012-04:00 [notice] <0.221.0> Logging: switching to configured handler(s); following messages may not be visible in this log output
  ##  ##      RabbitMQ 3.10.7
  ##  ##
  ##########  Copyright (c) 2007-2022 VMware, Inc. or its affiliates.
  ######  ##
  ##########  Licensed under the MPL 2.0. Website: https://rabbitmq.com
Erlang: 25.0.3 [jit]
TLS Library: OpenSSL - OpenSSL 1.1.1q 5 Jul 2022
Doc guides: https://rabbitmq.com/documentation.html
Support: https://rabbitmq.com/contact.html
Tutorials: https://rabbitmq.com/getstarted.html
Monitoring: https://rabbitmq.com/monitoring.html
Logs: /usr/local/var/log/rabbitmq/rabbit@localhost.log
/usr/local/var/log/rabbitmq/rabbit@localhost_upgrade.log
<stdout>
Config file(s): (none)
Starting broker... completed with 7 plugins.
After this it just hangs forever.
I would like to completely uninstall Rabbit from my computer and reinstall it fresh; when I first installed it, it worked like a charm, but since then something has gone belly-up. Can someone help me with this?
Also, yes, the obvious thing to do is brew rm rabbitmq but that's what got me into this situation. It can't be that simple.
I got it working. Some combination of these steps worked:
Fix the ownership of the Homebrew files:
sudo chown -R $(whoami) $(brew --prefix)/*
Reload the launchctl config:
launchctl unload -w ~/Library/LaunchAgents/homebrew.mxcl.rabbitmq.plist
launchctl load -w ~/Library/LaunchAgents/homebrew.mxcl.rabbitmq.plist
Restart the service:
brew services restart rabbitmq
And then it worked.
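To verify the broker actually came back up, a couple of standard checks (a sketch, assuming the Homebrew service is named rabbitmq):
brew services list
rabbitmqctl status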

Hiveserver2 does not start after installing HDP 2.6.4.0-91 using cloudbreak on AWS

Hiveserver2 does not start after installing HDP 2.6.4.0-91 using cloudbreak on AWS.
I start HiveServer2 from the Ambari UI and check the contents of /var/log/hive/hiveserver2.log.
Below is the error log.
Any help would be appreciated.
Contents of hiveserver2.log
2018-03-08 04:41:53,345 WARN [main-EventThread]: server.HiveServer2 (HiveServer2.java:process(343)) - This instance of HiveServer2 has been removed from the list of server instances available for dynamic service discovery. The last client session has ended - will shutdown now.
2018-03-08 04:41:53,347 INFO [main]: zookeeper.ZooKeeper (ZooKeeper.java:close(684)) - Session: 0x16203aad5af0040 closed
2018-03-08 04:41:53,347 INFO [main]: server.HiveServer2 (HiveServer2.java:removeServerInstanceFromZooKeeper(361)) - Server instance removed from ZooKeeper.
2018-03-08 04:41:53,348 INFO [main-EventThread]: server.HiveServer2 (HiveServer2.java:stop(405)) - Shutting down HiveServer2
2018-03-08 04:41:53,348 INFO [main-EventThread]: server.HiveServer2 (HiveServer2.java:removeServerInstanceFromZooKeeper(361)) - Server instance removed from ZooKeeper.
2018-03-08 04:41:53,348 INFO [main-EventThread]: zookeeper.ClientCnxn (ClientCnxn.java:run(524)) - EventThread shut down
2018-03-08 04:41:53,348 WARN [main]: server.HiveServer2 (HiveServer2.java:startHiveServer2(508)) - Error starting HiveServer2 on attempt 1, will retry in 60 seconds
org.apache.tez.dag.api.SessionNotRunning: TezSession has already shutdown. Application application_1520480101488_0046 failed 2 times due to AM Container for appattempt_1520480101488_0046_000002 exited with exitCode: -1000
For more detailed output, check the application tracking page: http://ip-10-0-91-7.ap-northeast-2.compute.internal:8088/cluster/app/application_1520480101488_0046 Then click on links to logs of each attempt.
Diagnostics: ExitCodeException exitCode=2: tar: Removing leading `/' from member names
tar: Skipping to next header
gzip: /hadoopfs/fs1/yarn/nodemanager/filecache/60_tmp/tmp_tez.tar.gz: invalid compressed data--format violated
tar: Exiting with failure status due to previous errors
Failing this attempt. Failing the application.
at org.apache.tez.client.TezClient.waitTillReady(TezClient.java:699)
at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.open(TezSessionState.java:218)
at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.open(TezSessionState.java:116)
at org.apache.hadoop.hive.ql.exec.tez.TezSessionPoolManager.startPool(TezSessionPoolManager.java:76)
at org.apache.hive.service.server.HiveServer2.startHiveServer2(HiveServer2.java:488)
at org.apache.hive.service.server.HiveServer2.access$700(HiveServer2.java:87)
at org.apache.hive.service.server.HiveServer2$StartOptionExecutor.execute(HiveServer2.java:720)
at org.apache.hive.service.server.HiveServer2.main(HiveServer2.java:593)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:233)
at org.apache.hadoop.util.RunJar.main(RunJar.java:148)
I had exactly the same issue with HDP on AWS. FYI, in my case the issue was with HDP version 2.6.4.5-2. I'm going to show how I fixed it using this version, because it is the latest at this time.
As the error log shows, the problem is a corrupted tez.tar.gz that YARN is unable to decompress in the YARN container.
This tez.tar.gz file is copied from hdfs:///hdp/apps/<hdp_version>/tez/tez.tar.gz.
To reproduce the error and confirm that this file is corrupted, you can run the following commands:
sudo su
su hdfs
hdfs dfs -get /hdp/apps/2.6.4.5-2/tez/tez.tar.gz
tar -xvzf tez.tar.gz
You will get the following error:
gzip: stdin: invalid compressed data--format violated
tar: Unexpected EOF in archive
tar: Unexpected EOF in archive
tar: Error is not recoverable: exiting now
The fix is pretty simple: just replace the HDFS file with the one on your local file system by running the following commands:
hdfs dfs -rm /hdp/apps/2.6.4.5-2/tez/tez.tar.gz
hdfs dfs -put /usr/hdp/current/tez-client/lib/tez.tar.gz /hdp/apps/2.6.4.5-2/tez/tez.tar.gz
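To confirm the replacement worked, you can re-fetch the archive and test that it now unpacks cleanly (a sketch; the path assumes the same HDP version as above):
hdfs dfs -get /hdp/apps/2.6.4.5-2/tez/tez.tar.gz /tmp/tez-check.tar.gz
tar -tzf /tmp/tez-check.tar.gz > /dev/null && echo OK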
Now restart the HiveServer2 service and you are done!
NOTE: If something similar happens with other services, you can do the same thing. The following link has more details: https://community.hortonworks.com/articles/30096/foxing-broken-targz-and-jar-files-in-hdp-24.html
Hope this helps!

Ceph S3 / Swift bucket create failed / error 416

I am getting 416 errors while creating buckets using S3 or Swift. How can I solve this?
swift -A http://ceph-4:7480/auth/1.0 -U testuser:swift -K 'BKtVrq1...' upload testas testas
Warning: failed to create container 'testas': 416 Requested Range Not Satisfiable: InvalidRange
Object PUT failed: http://ceph-4:7480/swift/v1/testas/testas 404 Not Found b'NoSuchBucket'
An S3 Python test fails as well:
File "/usr/lib/python2.7/dist-packages/boto/s3/connection.py", line 621, in create_bucket
response.status, response.reason, body)
boto.exception.S3ResponseError: S3ResponseError: 416 Requested Range Not Satisfiable
<?xml version="1.0" encoding="UTF-8"?><Error><Code>InvalidRange</Code><BucketName>mybucket</BucketName><RequestId>tx00000000000000000002a-005a69b12d-1195-default</RequestId><HostId>1195-default-default</HostId></Error>
Here is my ceph status:
  cluster:
    id:     1e4bd42a-7032-4f70-8d0c-d6417da85aa6
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph-2,ceph-3,ceph-4
    mgr: ceph-1(active), standbys: ceph-2, ceph-3, ceph-4
    osd: 3 osds: 3 up, 3 in
    rgw: 2 daemons active

  data:
    pools:   7 pools, 296 pgs
    objects: 333 objects, 373 MB
    usage:   4398 MB used, 26309 MB / 30708 MB avail
    pgs:     296 active+clean
I am using Ceph Luminous with BlueStore:
ceph version 12.2.2 (cf0baeeeeba3b47f9427c6c97e2144b094b7e5ba) luminous (stable)
User created:
sudo radosgw-admin user create --uid="testuser" --display-name="First User"
sudo radosgw-admin subuser create --uid=testuser --subuser=testuser:swift --access=full
sudo radosgw-admin key create --subuser=testuser:swift --key-type=swift --gen-secret
Logs on osd:
2018-01-25 12:19:45.383298 7f03c77c4700 1 ====== starting new request req=0x7f03c77be1f0 =====
2018-01-25 12:19:47.711677 7f03c77c4700 1 ====== req done req=0x7f03c77be1f0 op status=-34 http_status=416 ======
2018-01-25 12:19:47.711937 7f03c77c4700 1 civetweb: 0x55bd9631d000: 192.168.109.47 - - [25/Jan/2018:12:19:45 +0200] "PUT /mybucket/ HTTP/1.1" 1 0 - Boto/2.38.0 Python/2.7.12 Linux/4.4.0-51-generic
Linux ubuntu, 4.4.0-51-generic
Set the default pg_num and pgp_num to a lower value (8, for example), or set mon_max_pg_per_osd to a higher value in ceph.conf. The radosgw cannot create its pools when doing so would exceed the per-OSD placement-group limit, and that failure surfaces as the 416/InvalidRange error on bucket creation.
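A minimal ceph.conf sketch of both options (the values are illustrative; pick numbers that fit your cluster, and restart the monitors and RGW daemons afterwards):
[global]
# lower the defaults used when new pools are created
osd pool default pg num = 8
osd pool default pgp num = 8
# ...or raise the per-OSD placement-group limit instead
mon max pg per osd = 800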

Vagrant - Ansible error installing Apache

I'm working on a project with Vagrant and Ansible and Virtualbox.
When I try to install Apache on an Ubuntu Precise (12.04) box, Vagrant fails. I have added more details below.
It seems a known bug, but even if I'm installing a newer version, the error shows up.
I also tried the approach stated here, but with no luck.
How can I resolve this issue?
Thank you.
UPDATED ANSWER
This is the Ansible task.
Version 1:
- name: Install Apache
  sudo: yes
  apt: pkg=apache2 state=latest
  register: apache2_apt
Output:
failed: [default] => {"failed": true}
stderr: E: Sub-process /usr/bin/dpkg returned an error code (1)
stdout: Reading package lists...
Building dependency tree...
Reading state information...
Suggested packages:
www-browser apache2-doc apache2-suexec-pristine apache2-suexec-custom
The following NEW packages will be installed:
apache2
0 upgraded, 1 newly installed, 0 to remove and 183 not upgraded.
Need to get 0 B/146 kB of archives.
After this operation, 460 kB of additional disk space will be used.
(Reading database ... 52932 files and directories currently installed.)
Unpacking apache2 (from .../apache2_2.4.12-1+deb.sury.org~precise+5_amd64.deb) ...
dpkg: error processing /var/cache/apt/archives/apache2_2.4.12-1+deb.sury.org~precise+5_amd64.deb (--unpack):
error setting ownership of `/var/www/html.dpkg-new': Operation not permitted
Processing triggers for man-db ...
Processing triggers for ureadahead ...
Errors were encountered while processing:
/var/cache/apt/archives/apache2_2.4.12-1+deb.sury.org~precise+5_amd64.deb
msg: '/usr/bin/apt-get -y -o "Dpkg::Options::=--force-confdef" -o "Dpkg::Options::=--force-confold" install 'apache2'' failed: E: Sub-process /usr/bin/dpkg returned an error code (1)
FATAL: all hosts have already failed -- aborting
Version 2:
- name: Install Apache
  command: "sudo apt-get install apache2"
  register: apache2_apt
Output:
failed: [default] => {"changed": true, "cmd": ["sudo", "apt-get", "install", "apache2"], "delta": "0:00:07.745095", "end": "2015-06-09 11:08:53.726031", "rc": 100, "start": "2015-06-09 11:08:45.980936", "warnings": []}
stderr: E: Sub-process /usr/bin/dpkg returned an error code (1)
stdout: Reading package lists...
Building dependency tree...
Reading state information...
Suggested packages:
www-browser apache2-doc apache2-suexec-pristine apache2-suexec-custom
The following NEW packages will be installed:
apache2
0 upgraded, 1 newly installed, 0 to remove and 183 not upgraded.
Need to get 0 B/146 kB of archives.
After this operation, 460 kB of additional disk space will be used.
(Reading database ... 52932 files and directories currently installed.)
Unpacking apache2 (from .../apache2_2.4.12-1+deb.sury.org~precise+5_amd64.deb) ...
dpkg: error processing /var/cache/apt/archives/apache2_2.4.12-1+deb.sury.org~precise+5_amd64.deb (--unpack):
error setting ownership of `/var/www/html.dpkg-new': Operation not permitted
Processing triggers for man-db ...
Processing triggers for ureadahead ...
ureadahead will be reprofiled on next reboot
Errors were encountered while processing:
/var/cache/apt/archives/apache2_2.4.12-1+deb.sury.org~precise+5_amd64.deb
FATAL: all hosts have already failed -- aborting
There are a few possible causes for this:
1. You need to disable AppArmor or, better, add an AppArmor rule that gives the installation script access to /var/www within the guest machine.
2. There is a problem with host-machine permissions for the /var/www folder. Check whether the user has access to the local folder mapped as a shared folder from host to guest; you may need to add permissions for the local user on the host machine.
3. Try ansible-galaxy and search for an existing role that already works around both of the previous issues (a sketch follows below).
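For the third option, the search and install might look something like this (geerlingguy.apache is just one widely used community role, named here as an illustration rather than a specific recommendation):
ansible-galaxy search apache    # search available in newer Ansible releases
ansible-galaxy install geerlingguy.apache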