I'm trying to set up Capistrano to access a server whose SSH runs on a non-standard port (222), but even though I have the following in my deploy.rb:
set :application, 'billing'
set :repo_url, 'git@github.com:random/stat.git'
set :keep_releases, 5
set :ssh_options, {
forward_agent: true,
port: 222
}
SSHKit.config.command_map[:rake] = "bundle exec rake"
SSHKit.config.command_map[:rails] = "bundle exec rails"
I still get this error:
SSHKit::Runner::ExecuteError: Exception while executing on host IP:
Operation timed out - connect(2) for "IP" port 22
How can I solve this error? What am I doing wrong?
I was also having issues with the ssh_options option.
I switched to using the server method instead, so it looks something like this:
# config/deploy/production.rb
server "#{server_ip_here}", user: "deploy", roles: %w{web app db}, port: 222
Another option is to put the user and port directly in the role definition:
role :web, %w{deploy@123.456.78.9:222}
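Putting the two together, a full stage file might look something like this (a minimal sketch, assuming Capistrano 3; the IP address, user name, and roles are placeholders you would adjust):
# config/deploy/production.rb (sketch)
server "123.456.78.9",
  user: "deploy",
  port: 222,
  roles: %w{web app db},
  ssh_options: {
    forward_agent: true
  }
The port given to server is applied when Capistrano connects to that particular host, so the server form is usually the more reliable place for a non-standard port than a global set :ssh_options.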
I want to install PrestaShop with DDEV, but I can't connect to the database.
I tried 127.0.0.1:32775 and localhost:32775, with "db" as user/database/password, but I get this error:
Database Server is not found. Please verify the login, password and server fields (DbPDO)
The database is up and running, and connecting via the command line works:
mysql --host=127.0.0.1 --port=32775 --user=db --password=db --database=db
Project information:
PrestaShop 1.7.6.2 Installer (I first tried github/composer installation - error, then zip download with wizard - same error)
ddev version v1.11.2
DDEV project type: php
Host: MacOS 10.15.1
DDEV config.yaml - changes to default: router_http(s)_port
APIVersion: v1.11.2
name: prestatest
type: php
docroot: ""
php_version: "7.2"
webserver_type: nginx-fpm
router_http_port: "880"
router_https_port: "8443"
xdebug_enabled: false
additional_hostnames: []
additional_fqdns: []
mariadb_version: "10.2"
nfs_mount_enabled: false
provider: default
use_dns_when_possible: true
timezone: ""
ddev describe will show you the DB connection information. From inside the web container, which is where the PrestaShop installer actually runs, the database is reached by hostname rather than via the port published on 127.0.0.1:
Host: db
User: db
Password: db
Database: db
People mostly forget the hostname configuration: the database server field has to be db, not 127.0.0.1:32775 or localhost.
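To double-check the in-container connection before re-running the installer, you can open a shell in the web container with ddev ssh and connect from there (a quick sanity check; the db host/user/password/database values are the defaults listed above, and 3306 is the standard in-container MariaDB port):
ddev ssh
mysql -h db -P 3306 -u db -pdb db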
I'm starting to learn Ansible but the documentation is not too helpful.
I have installed the control machine on RHEL and created the necessary hosts file and windows.yml.
But when trying to connect to the remote Windows server to get a pong back I get the following error:
[root@myd666 ansible_test]# ansible windows -i hosts -m win_ping
hostname | UNREACHABLE! => {
"changed": false,
"msg": "ssl: the specified credentials were rejected by the server",
"unreachable": true
}
After installing the python-kerberos dependencies, I now get this error:
hostname | UNREACHABLE! => {
"changed": false,
"msg": "Kerberos auth failure: kinit: KDC reply did not match expectations while getting initial credentials",
"unreachable": true
}
My windows.yml file contains:
# it is suggested that these be encrypted with ansible-vault:
# ansible-vault edit group_vars/windows.yml
ansible_ssh_user: user@MYDOMAIN.NET
ansible_ssh_pass: password
ansible_ssh_port: 5986
ansible_connection: winrm
ansible_winrm_server_cert_validation: ignore
Am I doing anything wrong with the syntax of Domain\user? Maybe I forgot to install something on the Windows machine? I only ran the ConfigureRemotingForAnsible.ps1 script, and Python is not installed there.
This is my krb5.conf file:
[libdefaults]
default_realm = MYDOMAIN.NET
#dns_lookup_realm = true
#dns_lookup_kdc = true
[realms]
MYDOMAIN.NET = {
kdc = dc1.mydomain.net
default_domain = hpeswlab.net
}
[domain_realm]
.mydomain.net = MYDOMAIN.NET
mydomain.net = MYDOMAIN.NET
And I do get a ticket using kinit:
kinit -C user@MYDOMAIN.NET
klist
Klist output:
Valid starting Expires Service principal
01/31/2017 11:25:33 01/31/2017 21:25:33 krbtgt/MYDOMAIN.NET#MYDOMAIN.NET
renew until 02/01/2017 11:25:29
In windows.yml, please double-check that the ansible_ssh_user: user@MYDOMAIN.NET line really does have the realm MYDOMAIN.NET in upper case. Somewhere, the realm in the request to the KDC is being sent in lower case instead of upper case, which causes the 'KDC reply did not match expectations' error.
In krb5.conf, case sensitivity also matters. First, note that the KDC name is the name of an IP host, so it needs to be specified as a fully-qualified host name, as in the example shown below, which assumes your KDC is named "dc1.mydomain.net". Next, the domain name should be in lower case only, while Kerberos realm names must be in upper case - a realm name incorrectly written in lower case in this file is another way to get this error message. Please modify your entire krb5.conf to look like the version below (changing only "dc1" to the actual name) and it should work.
Side note: you do not necessarily need the two dns_lookup_ lines in your krb5.conf, so please comment them out as shown. Those are fallback mechanisms only, per the MIT Kerberos documentation, and may actually cause issues in your simple use case. After modifying either configuration file, make sure to restart the Ansible engine before testing again.
[libdefaults]
default_realm = MYDOMAIN.NET
#dns_lookup_realm = true
#dns_lookup_kdc = true
[realms]
MYDOMAIN.NET = {
kdc = dc1.mydomain.net
default_domain = mydomain.net
}
[domain_realm]
.mydomain.net = MYDOMAIN.NET
mydomain.net = MYDOMAIN.NET
Please refer to this MIT reference for how to properly set up the krb5.conf: Sample krb5.conf File
In the hosts file, check that your IP-to-name mappings are correct. Per the RFCs, Kerberos requires properly functioning DNS, and you risk undermining that if your hosts file has outdated entries in it.
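A correct entry for the KDC in /etc/hosts would look something like this (the IP address here is only a placeholder for illustration):
192.168.1.10    dc1.mydomain.net    dc1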
Finally, though I wasn't able to tell which version of Ansible you were using, I did some research and found that "Ansible 2.0 has deprecated the 'ssh' from ansible_ssh_user, ansible_ssh_host, and ansible_ssh_port to become ansible_user, ansible_host, and ansible_port." This could certainly be part of the problem. See: Ansible on Windows Documentation
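In other words, on Ansible 2.0+ your group_vars/windows.yml would look roughly like this (a sketch based on your original file; ansible_winrm_transport: kerberos is an assumption made explicit here, and the credentials are of course placeholders):
ansible_user: user@MYDOMAIN.NET
ansible_password: password
ansible_port: 5986
ansible_connection: winrm
ansible_winrm_transport: kerberos
ansible_winrm_server_cert_validation: ignore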
I have a set of parameters that need to be initialized for ElasticMQ (SQS). Right now I have added them in the controller as below:
sqs = RightAws::SqsGen2.new("ABCD","DEFG",{:server=>"localhost",:port=>9324,:protocol=>"http"})
What is a better way to set this up in the config folder and access it in the controller, and how do I do it? Please help.
Create a config file config/config.yml that will store the config variables for the different environments and load it in config/application.rb.
development:
elasticmq:
server: localhost
port: 9324
protocol: 'http'
production:
elasticmq:
server:
port:
protocol:
test:
In config/application.rb:
CONFIG = YAML.load_file("config/config.yml")[Rails.env]
The CONFIG constant is now available in the controller, so you can do the following:
sqs = RightAws::SqsGen2.new("ABCD","DEFG",{:server=>"#{CONFIG['elasticmq']['server']}",:port=> "#{CONFIG['elasticmq']['port']}",:protocol=>"#{CONFIG['elasticmq']['protocol']}"})
I have been trying to deploy a simple Rails 3 app from my Mac (OS X Lion) to an Amazon EC2 instance using Capistrano. When I do cap deploy:setup, I get: connection failed for: http://ec2-xxx-xx-xx-xxx.compute-1.amazonaws.com/ (Errno::ETIMEDOUT: Operation timed out - connect(2))
Here is my config/deploy.rb:
set :application, "paperclip_sample_app"
set :deploy_to, "/mnt/#{application}"
set :deploy_via, :copy
set :scm, :git
set :repository, "."
default_run_options[:pty] = true
set :location, "http://ec2-xxx-xx-xx-xxx.compute-1.amazonaws.com/"
role :web, location # Your HTTP server, Apache/etc
role :app, location # This may be the same as your `Web` server
role :db, location, :primary => true # This is where Rails migrations will run
#role :db, "your slave db-server here"
set :user, "root"
ssh_options[:keys] = [File.join(ENV["HOME"], ".ssh", "id_rsa")]
I have also enabled SSH on the Mac: in System Preferences, under 'Internet & Networking', I opened the 'Sharing' pane and checked the 'Remote Login' option.
Also, the security group on the EC2 instance has port 22 enabled, so I am able to SSH into the instance.
Is there anything that I am missing? Any help would be greatly appreciated.
Thanks
I needed to change
set :location, "http://ec2-xxx-xx-xx-xxx.compute-1.amazonaws.com/"
to
set :location, "ec2-xxx-xx-xx-xxx.compute-1.amazonaws.com"
This fixed the problem.
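Capistrano treats the role value as a host to open an SSH connection to, not as a URL, so the http:// scheme and the trailing slash have to go. With that change, the relevant part of deploy.rb ends up looking like this (same placeholder hostname as above):
set :location, "ec2-xxx-xx-xx-xxx.compute-1.amazonaws.com"
role :web, location
role :app, location
role :db,  location, :primary => true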
I have disabled Redis listening on port 6379 and enabled the unix socket. It works wonderfully from my application, but when I launch resque-web it still tries to reach Redis over the network interface and fails with the message:
Can't connect to Redis! (redis://127.0.0.1:6379/0)
Does anyone know if it's possible to make resque-web use the socket instead of the network?
Thanks in advance
I've been reading resque-web's code and realized that it internally loads any file path you pass as a parameter to the command. So I created a plain Ruby script that connects to Redis via the redis gem and assigns that instance to Resque.redis.
I just created a file called 'resque-web-hack.rb':
require 'redis'
require 'resque'
$redis = Redis.new(:path => '/tmp/redis.sock')
Resque.redis = $redis
And then used it like this:
$ resque-web /path/to/my/file/resque-web-hack.rb
It's just a hack, but it works for me for now...
I just fixed the same problem :) So here is the solution.
In my ./config/resque.yml I have this line:
development: /tmp/redis.sock
This is my RAILS_ROOT/config/initializers/resque.rb
rails_root = ENV['RAILS_ROOT'] || File.dirname(__FILE__) + '/../..'
rails_env = ENV['RAILS_ENV'] || 'development'
resque_config = YAML.load_file(rails_root + '/config/resque.yml')
if resque_config[rails_env] =~ /^\// # value starts with "/", so treat it as a unix socket path
  Resque.redis = Redis.new(:path => resque_config[rails_env])
else
  Resque.redis = resque_config[rails_env] # e.g. a "host:port" string
end
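If you also want resque-web itself to use the socket, you can pass this same file to it, just like the hack in the previous answer (this assumes the file loads cleanly outside of Rails; you may need explicit require 'yaml' / require 'redis' / require 'resque' lines at the top in that case):
$ resque-web ./config/initializers/resque.rb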