Hi!
I have Ubuntu 16.04.1 LTS
And my bacula services:
$ service --status-all | grep bacula
[ + ] bacula-director
[ + ] bacula-fd
[ + ] bacula-sd
Bacula release 5.2.6 (21 February 2012) -- ubuntu 14.04
Bacula Director config:
Director { # define myself
Name = ubuntu-dir
DIRport = 9101 # where we listen for UA connections
QueryFile = "/etc/bacula/scripts/query.sql"
WorkingDirectory = "/var/lib/bacula"
PidDirectory = "/var/run/bacula"
Maximum Concurrent Jobs = 10
Password = "password-bacula-dir" # Console password
Messages = Daemon
DirAddress = localhost
# DirAddresses = {
# ip = { addr = 127.0.0.1; port = 9101; }
# ip = { addr = 10.0.5.71; port = 9101; }
# }
}
bconsole config:
Director {
Name = ubuntu-dir
DIRport = 9101
address = localhost
Password = "password-bacula-dir"
}
$ netstat -anp | grep LISTEN | grep bacula
tcp 0 0 127.0.0.1:9101 0.0.0.0:* LISTEN 5532/bacula-dir
tcp 0 0 127.0.0.1:9102 0.0.0.0:* LISTEN 1091/bacula-fd
tcp 0 0 127.0.0.1:9103 0.0.0.0:* LISTEN 1072/bacula-sd
So, when I run
$ bconsole
I get this error:
Connecting to Director localhost:9101
Director authorization problem.
Most likely the passwords do not agree.
If you are using TLS, there may have been a certificate validation error during the TLS handshake.
Please see http://www.bacula.org/en/rel-manual/Bacula_Freque_Asked_Questi.html#SECTION00260000000000000000 for help.
Where could the error be?
P.S. ufw is disabled
My commands relate to RHEL/CentOS and Bacula 5.2.13 (19 February 2013); I don't have an Ubuntu instance right now.
First, check your configs:
bacula-dir -tc path-to-bacula-dir.config
bconsole -tc path-to-bacula-console.config
Then restart the Bacula Director service:
systemctl restart bacula-dir
and check bconsole again.
I suspect your Bacula Director instance is running with a stale configuration (not the current contents of the file).
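On Ubuntu the stock paths are /etc/bacula/bacula-dir.conf and /etc/bacula/bconsole.conf, so the checks would look roughly like this (a sketch assuming the default package layout; no output means the syntax is fine):
sudo bacula-dir -tc /etc/bacula/bacula-dir.conf
bconsole -tc /etc/bacula/bconsole.conf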
I had a similar problem, and the reason was that the passwords in the Director sections of bconsole.conf and bacula-dir.conf were not the same.
Edit the passwords so they match, then restart the services. It should be OK now.
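A quick way to compare them (paths assume the stock Debian/Ubuntu layout; note that bacula-dir.conf contains several Password lines, and the relevant one is inside its Director resource):
sudo grep Password /etc/bacula/bacula-dir.conf /etc/bacula/bconsole.conf
sudo systemctl restart bacula-director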
I have had this problem before; the solution I found was to remove the quotes from the password in the bacula-dir.conf configuration:
Director {
Name = ubuntu-dir
DIRport = 9101
QueryFile = "/etc/bacula/scripts/query.sql"
WorkingDirectory = "/var/lib/bacula"
PidDirectory = "/var/run/bacula"
Maximum Concurrent Jobs = 10
Password = password-bacula-dir
Messages = Daemon
DirAddress = localhost
}
Then, in Bacula's bconsole.conf, leave the quotes around the password:
Director {
Name = ubuntu-dir
DIRport = 9101
address = localhost
Password = "password-bacula-dir"
}
Finally, restart all the Bacula services:
sudo systemctl restart bacula-director
sudo systemctl restart bacula-sd
sudo systemctl restart bacula-fd
After the restart, you should be able to connect to bconsole using:
sudo bconsole
This worked for me (Ubuntu 16.04).
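For reference, a successful connection prints something like this instead of the authorization error (version string taken from the question; exact wording may vary):
Connecting to Director localhost:9101
1000 OK: ubuntu-dir Version: 5.2.6 (21 February 2012)
Enter a period to cancel a command.
*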
Related
I have a problem with my Gitea version 1.15.5 running on my Raspberry Pi. It appears that the built-in SSH server is not starting:
ssh -p 2222 git@myaddress.com
ssh: connect to host myaddress.com port 2222: Connection refused
I already made sure that "myaddress.com" points to the correct machine and that the firewall rules are adjusted. The web interface works just fine.
When I checked whether the port is actually used by Gitea, I realized the built-in SSH server is not running:
sudo lsof -i -P -n | grep LISTEN
sshd [...] root [...] TCP *:22 (LISTEN)
sshd [...] root [...] TCP *:22 (LISTEN)
[...]
gitea [...] git [...] TCP *:3000 (LISTEN)
As you can see, there is no process listening on port 2222.
I have an internal sshd server running on that machine on port 22, and I would like to keep those two separate, if possible. Or does the problem lie there, and you can't use the built-in Gitea SSH server together with an sshd server?
Here is an excerpt of my app.ini configuration:
APP_NAME = gitea
RUN_USER = git
RUN_MODE = prod
[server]
SSH_DOMAIN = myaddress.com
DOMAIN = myaddress.com
HTTP_PORT = 3000
ROOT_URL = https://myaddress.com/
DISABLE_SSH = false
SSH_PORT = 2222
After some more googling, I found the solution myself:
If there is an sshd server running, Gitea does not automatically start its built-in SSH server. Instead, you have to force it by adding this line under [server] in the app.ini configuration:
[server]
START_SSH_SERVER = true
This matches the Gitea config cheat sheet:
START_SSH_SERVER: false: When enabled, use the built-in SSH server.
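After restarting Gitea, the same lsof check as above should now show the built-in server bound to port 2222 (output abridged; a sketch, not verbatim):
sudo lsof -i -P -n | grep LISTEN
gitea [...] git [...] TCP *:2222 (LISTEN)
gitea [...] git [...] TCP *:3000 (LISTEN)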
I've posted this in case anyone ever runs into the same problem.
I am working on CentOS 6.8 with Apache 2.4. I had no trouble installing Varnish, and I configured everything correctly for my Magento 2 site.
I have installed an SSL certificate, so I added -p feature=+esi_ignore_https to my sysconfig file.
Everything looks good at first:
> service varnish restart
> Stopping Varnish Cache: [ OK ]
> Starting Varnish Cache: [ OK ]
Then, when I start the Varnish CLI with
> varnishd -d -f /etc/varnish/default.vcl
>
> Type 'help' for command list.
> Type 'quit' to close CLI session.
> Type 'start' to launch worker process.
>
> **socket(): Address family not supported by protocol**
and when I enter start, I get an endless loop of this:
Child cleanup complete
socket(): Address family not supported by protocol
child (30530) Started
Child (30530) said Child starts
Child (30530) died signal=6
Child (30530) Panic message:
Assert error in vca_acct(), cache/cache_acceptor.c line 386:
Condition((listen(ls->sock, cache_param->listen_depth)) == 0) not true.
errno = 98 (Address already in use)
thread = (cache-acceptor)
version = varnish-4.0.4 revision 386f712
ident = Linux,2.6.32-042stab113.11,x86_64,-smalloc,-smalloc,-hcritbit,epoll
Backtrace:
0x432425: varnishd() [0x432425]
0x40d71d: varnishd() [0x40d71d]
0x7f105722aaa1: /lib64/libpthread.so.0(+0x7aa1) [0x7f105722aaa1]
0x7f1056f77bcd: /lib64/libc.so.6(clone+0x6d) [0x7f1056f77bcd]
Also, when I try to run varnishlog, I get:
Can't open VSM file (Abandoned VSM file (Varnish not running?)
/var/lib/varnish/patriciasouths.com/_.vsm )
service varnish start
Starting Varnish Cache: Error: Cannot open socket: :6081: Address family not supported by protocol
So this is really dumb... the default install is this:
DAEMON_OPTS="-a ${VARNISH_LISTEN_ADDRESS}:${VARNISH_LISTEN_PORT} \
-f ${VARNISH_VCL_CONF} \
-T ${VARNISH_ADMIN_LISTEN_ADDRESS}:${VARNISH_ADMIN_LISTEN_PORT} \
-p thread_pool_min=${VARNISH_MIN_THREADS} \
-p thread_pool_max=${VARNISH_MAX_THREADS} \
-S ${VARNISH_SECRET_FILE} \
-s ${VARNISH_STORAGE}"
Here we have a variable reference, ${VARNISH_LISTEN_ADDRESS}... but it's NOT DEFINED PREVIOUSLY!
# Telnet admin interface listen address and port
VARNISH_ADMIN_LISTEN_PORT=6082
DOH! So add it (and pin the admin interface to an IPv4 address as well, since the bare ":6081" is what triggers the "Address family not supported by protocol" error on a kernel without IPv6):
# Default address and port to bind to
VARNISH_LISTEN_ADDRESS=127.0.0.1
# Telnet admin interface listen address and port
VARNISH_ADMIN_LISTEN_ADDRESS=127.0.0.1
VARNISH_ADMIN_LISTEN_PORT=6082
Starting Varnish Cache: [ OK ]
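With the addresses defined, a quick check should show varnishd bound to plain IPv4 sockets (PID and exact output are illustrative):
# netstat -anp | grep varnishd
tcp 0 0 127.0.0.1:6081 0.0.0.0:* LISTEN 30530/varnishd
tcp 0 0 127.0.0.1:6082 0.0.0.0:* LISTEN 30530/varnishd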
I'm using two Vagrant VMs to test some things with Puppet, but when I go to request a cert, I get a cryptic error message that I can't find any information about.
I should note that, in keeping with good Linux server administration, I use /var/ and /opt/ for storing sensitive cert info, but otherwise it's a standard Puppet setup.
# Client node details
IP: 192.168.250.10
Hostname: client.example.com
Puppet version: 4.3.2
OS: CentOS Linux release 7.0.1406 (on Vagrant)
# Puppet server details
IP: 192.168.250.6
Hostname: puppet-server.example.com
Puppet version: 4.3.2
OS: CentOS Linux release 7.0.1406 (on Vagrant)
# client's and server's /etc/hosts files are identical
192.168.250.5 puppetmaster.example.com
192.168.250.6 puppet.example.com puppet-server.example.com
192.168.250.7 dashserver.example.com dashboard.example.com
192.168.250.10 client.example.com
192.168.250.20 webserver.example.com
# /etc/puppetlabs/puppet/puppet.conf on both client and server
[main]
logdest = syslog
[user]
bucketdir = $clientbucketdir
vardir = /var/opt/puppetlabs/server
ssldir = $vardir/ssl
[agent]
server = puppet.example.com
[master]
certname = puppet.example.com
vardir = /var/opt/puppetlabs/puppetserver
ssldir = $vardir/ssl
logdir = /var/log/puppetlabs/puppetserver
rundir = /var/run/puppetlabs/puppetserver
pidfile = /var/run/puppetlabs/puppetserver/puppetserver.pid
trusted_server_facts = true
reports = store
cacert = /var/opt/puppetlabs/puppetserver/ssl/certs/ca.pem
cacrl = /var/opt/puppetlabs/puppetserver/ssl/crl.pem
hostcert = /var/opt/puppetlabs/puppetserver/ssl/certs/{puppet, client}.example.com.pem # respectively, obviously
hostprivkey = /var/opt/puppetlabs/puppetserver/ssl/private_keys/{puppet, client}.example.com.pem # respectively, obviously
Finally, the error I get:
$ sudo puppet resource service puppet ensure=stopped enable=false
Notice: /Service[puppet]/ensure: ensure changed 'running' to 'stopped'
service { 'puppet':
ensure => 'stopped',
enable => 'false',
}
$ sudo puppet resource service puppet ensure=running enable=true
Notice: /Service[puppet]/ensure: ensure changed 'stopped' to 'running'
service { 'puppet':
ensure => 'running',
enable => 'true',
}
$ puppet agent --test --server=puppet.example.com
Error: Could not request certificate: Permission denied @ dir_initialize - /etc/puppetlabs/puppet/ssl/private_keys
Exiting; failed to retrieve certificate and waitforcert is disabled
First of all, with this setup Puppet should not be using /etc/puppetlabs/puppet/ssl/private_keys. It's not using my configuration file correctly:
$ puppet config print ssldir
/etc/puppetlabs/puppet/ssl
Next, I went through and regenerated the keys on BOTH the server and the client nodes as prescribed in the Puppet docs. However, I still got the same error, and both the client AND the server still think my $ssldir is /etc/puppetlabs/puppet/ssl when it should be /var/opt/puppetlabs/puppetserver/ssl.
Any thoughts?
You need to specify the ssldir and vardir settings in the [agent] section as well as in [master].
The [user] section only applies to commands such as puppet apply.
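For example, mirroring the paths from the question (a sketch; adjust to your own layout):
[agent]
server = puppet.example.com
vardir = /var/opt/puppetlabs/server
ssldir = $vardir/ssl
You can then verify what the agent will actually use with: puppet config print ssldir --section agent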
I am trying to enable an SSH connection to SUSE Linux. I have the sshd service running:
peeyush@linux-pohb:~/gccgo.work> systemctl status sshd.service
sshd.service - OpenSSH Daemon
Loaded: loaded (/usr/lib/systemd/system/sshd.service; enabled)
Active: active (running) since Thu 2015-03-19 18:36:05 IST; 3h 50min ago
Process: 5702 ExecStartPre=/usr/sbin/sshd-gen-keys-start (code=exited, status=0/SUCCESS)
Main PID: 6035 (sshd)
CGroup: /system.slice/sshd.service
└─6035 /usr/sbin/sshd -D
Mar 19 18:36:01 linux-pohb sshd-gen-keys-start[5702]: Checking for missing se...
Mar 19 18:36:05 linux-pohb sshd-gen-keys-start[5702]: ssh-keygen: generating ...
Mar 19 18:36:06 linux-pohb sshd[6035]: Server listening on 0.0.0.0 port 22.
Mar 19 18:36:06 linux-pohb sshd[6035]: Server listening on :: port 22.
Hint: Some lines were ellipsized, use -l to show in full.
It is listening on port 22 fine:
peeyush@linux-pohb:~/gccgo.work> netstat -an | grep :22
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN
tcp 0 0 :::22 :::* LISTEN
But I am not able to connect to it.
[root@lep8a peeyush]# ssh root@192.168.122.19
ssh: connect to host 192.168.122.19 port 22: Connection timed out
My head aches from searching for solutions on the internet. Nothing is working.
Could you guys please help me out?
Check if your firewall accepts incoming TCP connections on port 22:
# iptables -nL | grep 22
If the result is empty, you have to add a rule in your firewall.
Open Yast and firewall configuration:
# yast firewall
Go to "Allowed Services" and add "Secure Shell Server". Save, quit YaST, and try to connect.
Comment: if you have disabled your firewall completely (not recommended), this answer does not apply.
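If you would rather not use YaST, an equivalent one-off rule from the shell would be roughly this (a sketch; not persistent across reboots):
# iptables -I INPUT -p tcp --dport 22 -j ACCEPT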
Run this command:
systemctl enable sshd.service
Then make the necessary changes in your /etc/ssh/sshd_config file, and start sshd via:
systemctl start sshd.service
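For example, since the question logs in as root, the relevant /etc/ssh/sshd_config lines might look like this (illustrative values, not SUSE defaults):
Port 22
PermitRootLogin yes
Restart sshd after editing the file: systemctl restart sshd.service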
I was dealing with the same problem on SUSE Linux Enterprise Server 15 x86-64. Within the system I was able to run # ssh 127.0.0.1 (so the sshd service was working correctly), but from other nodes I got a "Timed out" message.
First, I checked the firewall rules (see answer from xloto):
# iptables -nL | grep 22
This returned an empty result, so we need to set an additional rule.
To set the firewall rule for SSH's standard port 22, I followed another tutorial (as I do not have a GUI):
# firewall-cmd --permanent --add-service=ssh
# firewall-cmd --reload
It worked for my case, but I'm not sure whether this is best practice.
I've set up a VM on Fedora 17 with KVM and have configured a bridge network for it. Both the host and the VM use manual IP configuration, with the host's IP as 192.168.0.2 and the VM's as 192.168.0.10.
From the VM I can connect to the host without any problems, but from the host I can't SSH to the VM, even though I can still ping the VM from the host. Trying to SSH just gives me "no route to host".
Oh, and I have iptables disabled, so I don't think the firewall is the problem.
Also ensure that the kernel is configured for IP forwarding:
$ sudo sysctl -a | grep net.ipv4.ip_forward
net.ipv4.ip_forward = 1
It should have a value of 1, not 0. If needed, enable with these commands:
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
sudo sysctl -p /etc/sysctl.conf
There are two ways:
* Use an SSH proxy tunnel to create a channel between guest and host:
From the guest, run the following command:
ssh -L 2000:localhost_ip:2000 username@hostip
Explore the ssh man page for the details (see also the sketch after this list).
* Harder to set up, but the proper configuration is to give the guest real network connectivity while it runs; follow:
http://www.cse.iitd.ernet.in/~prathmesh/random.html#Connecting_qemu_guest_to_real_network
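As a variant of the tunnel approach: since only the guest-to-host direction works here, a remote forward (-R) opened from the guest can expose the guest's SSH port on the host (IPs taken from the question; port 2222 is arbitrary):
# on the guest:
ssh -R 2222:localhost:22 user@192.168.0.2
# then on the host:
ssh -p 2222 user@localhost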