I developed a backend for a mobile app using Laravel Homestead. The domain is http://myapp.local:1050 and it works perfectly in a browser, but when I try to call this link from Appcelerator Titanium Studio I get this error:
{
"code": -1,
"error": "Unable to resolve host \"myapp.local\": No address associated with hostname",
"source": {
"_events": {
"disposehandle": {
}
},
"allResponseHeaders": "",
"apiName": "Ti.Network.HTTPClient",
"autoEncodeUrl": true,
"autoRedirect": true,
"bubbleParent": true,
"connected": false,
"connectionType": "GET",
"domain": null,
"location": "http://myapp.local:1050/index.php?route=api%2Fsetting&api_key=XbUP6ggYuYzbpTJ7YAsqcyoMdEglHxbfPv...",
"password": null,
"readyState": 1,
"responseData": null,
"responseText": "",
"responseXML": null,
"status": 0,
"statusText": null,
"timeout": 10000,
"username": null,
"validatesSecureCertificate": false
},
"success": false
}
Unable to resolve host "myapp.local": No address associated with hostname
Why can't the hostname myapp.local be resolved?
Edit:
Homestead.yaml file
---
ip: "192.168.10.10"
memory: 2048
cpus: 1
authorize: /Users/abdellatifhenno/.ssh/id_rsa.pub
keys:
    - /Users/abdellatifhenno/.ssh/id_rsa
folders:
    - map: /Users/abdellatifhenno/projects
      to: /home/vagrant/Code
sites:
    - map: myapp.local
      to: /home/vagrant/Code/myapp
variables:
    - key: APP_ENV
      value: local
My machine's (OS X 10.9.5) /etc/hosts file:
127.0.0.1 localhost
127.0.0.1 activate.adobe.com
127.0.0.1 practivate.adobe.com
127.0.0.1 ereg.adobe.com
127.0.0.1 activate.wip3.adobe.com
127.0.0.1 wip3.adobe.com
127.0.0.1 3dns-3.adobe.com
127.0.0.1 3dns-2.adobe.com
127.0.0.1 adobe-dns.adobe.com
127.0.0.1 adobe-dns-2.adobe.com
127.0.0.1 adobe-dns-3.adobe.com
127.0.0.1 ereg.wip3.adobe.com
127.0.0.1 activate-sea.adobe.com
127.0.0.1 wwis-dubc1-vip60.adobe.com
127.0.0.1 activate-sjc0.adobe.com
127.0.0.1 hl2rcv.adobe.com
127.0.0.1 myapp.local
255.255.255.255 broadcasthost
::1 localhost
fe80::1%lo0 localhost
Edit the port your application listens on; this way you can access your app using the IP address.
sudo vim /etc/nginx/sites-available/myapp.local
Change port 80 to 9080, for example, save your changes, and restart nginx:
sudo service nginx restart
Now try to access it via the IP address: 127.0.0.1:9080
You should be able to access your site using the new port.
Good luck!
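The underlying cause can be illustrated with a short sketch: the myapp.local entry lives only in the Mac's /etc/hosts, so any resolver that never reads that file (such as the one on the emulator or device running the Titanium app) fails with exactly this error, while an IP literal needs no lookup at all. A minimal Python illustration (the hostname is just the example from the question):

```python
import socket

def resolves(host):
    """Return True if this machine's resolver can turn host into an IP."""
    try:
        socket.gethostbyname(host)
        return True
    except socket.gaierror:
        return False

# An IP literal bypasses DNS entirely, which is why switching the app's
# URL to http://<ip>:<port> works where http://myapp.local:<port> fails.
print(resolves("127.0.0.1"))   # True
print(resolves("myapp.local")) # False unless /etc/hosts (or DNS) defines it
```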
I have Nextcloud running on bare metal on 2 nodes:
node1: 192.168.1.10
node2: 192.168.1.11
In Consul I have defined the nextcloud service as follows on both nodes:
{
"service": {
"name": "nextcloud",
"tags": ["nextcloud", "traefik"],
"port": 80,
"check": {
"tcp": "localhost:80",
"args": ["ping", "-c1", "127.0.0.1"],
"interval": "10s",
"status": "passing",
"success_before_passing": 3,
"failures_before_critical": 3
}
}
}
Now this shows up in Consul fine.
Static config, traefik.yaml:
global:
  # Send anonymous usage data
  sendAnonymousUsage: true
api:
  dashboard: true
  debug: true
log:
  level: DEBUG
entryPoints:
  http:
    address: ":80"
  https:
    address: ":443"
serversTransport:
  insecureSkipVerify: true
providers:
  docker:
    endpoint: "unix:///var/run/docker.sock"
    exposedByDefault: false
  file:
    directory: "/config/"
    watch: true
  consulCatalog:
    defaultRule: "Host(`{{ .Name }}.sub.mydomain.com`)"
    endpoint:
      address: http://127.0.0.1:8500
certificatesResolvers:
  linode:
    acme:
      caServer: https://acme-staging-v02.api.letsencrypt.org/directory
      email: myemail@domain.com
      storage: acme.json
      dnsChallenge:
        provider: linode
        resolvers:
          - "1.1.1.1:53"
          - "1.0.0.1:53"
And then the dynamic /config/config.yaml:
http:
  routers:
    nextcloud@consulCatalog:
      entryPoints:
        - "https"
      rule: "Host(`home.sub.mydomain.com`) && Path(`/nextcloud`)"
      tls:
        certResolver: linode
      service: nextcloud
  services:
    nextcloud:
      loadBalancer:
        servers:
          - url: http://192.168.1.10
          - url: http://192.168.1.11
        passHostHeader: true
But this shows up as a file provider with TLS, in addition to the existing consulcatalog provider, and it is not mapped to an IP or domain.
The actual consulcatalog provider shows up, but with no TLS.
I am wondering why my dynamic configuration in http did not update nextcloud@consulcatalog and set the https entrypoint.
Any help would be greatly appreciated; I am struggling to get this working.
I have tried following the Traefik docs, but they are very confusing, especially the consulCatalog part.
Your configuration is showing up as being defined via the file provider because you are statically defining it in the file at /config/config.yaml.
In order to dynamically retrieve this configuration from Consul, you should not be defining the static config file and instead configure tags on the Consul service registrations that will instruct Traefik to route traffic to your service.
For example:
{
"service": {
"name": "nextcloud",
"tags": [
"nextcloud",
"traefik.enable=true",
"traefik.http.routers.nextcloud.entrypoints=https",
"traefik.http.routers.nextcloud.rule=(Host(`home.sub.mydomain.com`) && Path(`/nextcloud`))",
"traefik.http.routers.nextcloud.tls.certresolver=linode",
"traefik.http.services.nextcloud.loadbalancer.passhostheader=true"
],
"port": 80,
"check": {
"tcp": "localhost:80",
"args": [
"ping",
"-c1",
"127.0.0.1"
],
"interval": "10s",
"status": "passing",
"success_before_passing": 3,
"failures_before_critical": 3
}
}
}
More info can be found in the Routing Configuration docs for Traefik's Consul catalog provider.
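As a sanity check before re-registering, the tag-based definition can be generated and validated locally. The file name nextcloud.json is illustrative, the tag values simply mirror the example above, and a real setup would then load the file into the agent (for example with consul services register on a host running Consul):

```python
import json

# Assemble the tag-based service definition shown above.
definition = {
    "service": {
        "name": "nextcloud",
        "tags": [
            "traefik.enable=true",
            "traefik.http.routers.nextcloud.entrypoints=https",
            "traefik.http.routers.nextcloud.rule=(Host(`home.sub.mydomain.com`) && Path(`/nextcloud`))",
            "traefik.http.routers.nextcloud.tls.certresolver=linode",
            "traefik.http.services.nextcloud.loadbalancer.passhostheader=true",
        ],
        "port": 80,
    }
}

# Write it out and round-trip it to confirm the JSON is well-formed
# before handing it to Consul.
with open("nextcloud.json", "w") as f:
    json.dump(definition, f, indent=2)

with open("nextcloud.json") as f:
    loaded = json.load(f)
```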
I'm trying to create a custom Ansible role to install nginx.
I defined this defaults/main.yml:
---
defaults:
  user: nginx
  group: nginx
  version: "1.19.2-1"
  download_path: "/tmp/nginx-1.19.2-1"
  rpm: "/tmp/nginx-1.19.2-1.el7.ngx.x86_64.rpm"
  directories:
    log: /var/log/nginx
    config: /etc/nginx
    custom_config: /etc/nginx/conf.d
    pid: /var/run
  config:
    - name: main
      content: |
        upstream backend {
          ip_hash;
          server localhost:9090;
          server 127.0.0.1:9090;
        }
        server {
          listen 9443 ssl;
          ssl_certificate /etc/ssl/certs/cert.crt;
          ssl_certificate_key /etc/ssl/private/cert.key;
          location / {
            proxy_pass http://backend;
          }
        }
  server:
    port:
      listen:
        - 9443
And this is my tasks/main.yml
---
- set_fact:
    default_vars: "{{ defaults }}"
    host_vars: "{{ hostvars[ansible_host]['nginx'] | default({}) }}"
    install_nginx: true

- set_fact:
    combined_vars: "{{ default_vars | combine(host_vars, recursive=True) }}"

- name: Gather package facts
  package_facts:
    manager: auto

- set_fact:
    install_nginx: false
  when: "'nginx' in ansible_facts.packages"

- name: Install NginX
  yum:
    name: "{{ combined_vars.rpm }}"
    state: present
    disable_gpg_check: true
  become: true
  when:
    - install_nginx

- name: Make sure Ports Open
  community.general.seport:
    ports: "{{ port.listen }}"
    proto: tcp
    setype: http_port_t
    state: present
  loop_control:
    loop_var: "port"
  when: 'port.listen is defined'
  with_items: "{{ combined_vars.config.server }}"
  become: true
  ignore_errors: true
Now I receive the error:
nginx: [emerg] bind() to 0.0.0.0:9443 failed (13: Permission denied)
when I try to start nginx. This is because my playbook skips the Make sure Ports Open task, where I open port 9443 (read from the config); nginx won't start on a non-default port unless that port is allowed in SELinux (this is the command to allow port 9443: semanage port -a -t http_port_t -p tcp 9443).
This is part of my log:
ok: [10.x.x.8] => {
"ansible_facts": {
"combined_vars": {
"config": [
{
"content": "upstream backend {\n ip_hash;\n server 10.x.x.:9090;\n server 10.x.x.10:9090;\n}\n\nserver {\n listen 9443 ssl;\n ssl_certificate /etc/ssl/certs/cert.crt;\n ssl_certificate_key /etc/ssl/private/cert.key;\n location / {\n proxy_pass http://backend;\n }\n}\n",
"name": "main"
}
],
"directories": {
"config": "/etc/nginx",
"custom_config": "/etc/nginx/conf.d",
"log": "/var/log/nginx",
"pid": "/var/run"
},
"download_path": "/tmp/nginx-1.19.2-1",
"group": "nginx",
"rpm": "/tmp/nginx-1.19.2-1.el7.ngx.x86_64.rpm",
"server": {
"port": {
"listen": [
9443
]
}
},
"user": "nginx",
"version": "1.19.2-1"
}
},
"changed": false
}
...
fatal: [10.x.x.8]: FAILED! => {
"msg": "The task includes an option with an undefined variable. The error was: 'port' is undefined\n\nThe error appears to be in '/opt/Developments/GitLab/harrisburg-infrastructure/roles/nginx/tasks/main.yml': line 95, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: Make sure Ports Open Mod\n ^ here\n"
}
I solved it this way:
tasks/main.yml:
- name: Make sure Ports Open Mod
  community.general.seport:
    ports: "{{ combined_vars.server.port.listen }}"
    proto: tcp
    setype: http_port_t
    state: present
  loop_control:
    loop_var: "listen"
  when: 'combined_vars.server.port.listen is defined'
  with_items: "{{ combined_vars.server }}"
  become: true
  ignore_errors: true
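For reference, the combine(host_vars, recursive=True) filter used in these tasks merges nested dicts key by key instead of replacing whole branches. A simplified Python re-implementation (not Ansible's actual code; the host_vars override below is hypothetical) shows why combined_vars.server.port.listen survives the merge even when a host overrides other keys:

```python
def combine(base, override):
    """Simplified sketch of Ansible's combine(..., recursive=True) filter."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = combine(merged[key], value)  # recurse into nested dicts
        else:
            merged[key] = value  # scalars and lists are replaced outright
    return merged

defaults = {"user": "nginx", "server": {"port": {"listen": [9443]}}}
host_vars = {"user": "custom-nginx"}  # hypothetical per-host override

merged = combine(defaults, host_vars)
print(merged["user"])                      # custom-nginx
print(merged["server"]["port"]["listen"])  # [9443]
```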
The WebdriverIO test runner has an option:
- if you are using a private Selenium backend, you should define the hostname, port, and path here.
hostname: 'localhost',
port: 4444,
path: '/',
Since version "@wdio/selenium-standalone-service": "^6.0.0", this "hostname" is unchangeable and always stays localhost. It seems to autodetect that it should be localhost and does not refer to the config at all, i.e. even if I update it manually in wdio.conf.js as
hostname: 'selenium-hub',
port: 4445,
path: '/',
Upon execution, the hostname still stays 'localhost' instead of 'selenium-hub', and the port stays '4444' instead of '4445'.
In previous versions, the command-line value passed with --hostname was applied successfully, i.e.
./node_modules/.bin/wdio wdio.conf.js --hostname 'selenium-hub'
would pass selenium-hub as the hostname.
Is anyone experiencing a similar issue?
Add the hostname, port, and path to the capabilities array.
Instead of:
hostname: '{ unique ip address }',
port: { port number },
path: '/',
protocol: 'http', // or 'https'
capabilities: [{
    maxInstances: 5,
    browserName: 'chrome',
}],
Do this:
capabilities: [{
    maxInstances: 5,
    browserName: 'chrome',
    hostname: '{ unique ip address }',
    port: { port number },
    path: '/',
    protocol: 'http', // or 'https'
}],
I'm trying to enable HTTPS on my AWS EC2 instance that is deployed using Elastic Beanstalk. The documentation requires you to add a snippet in a file at .ebextensions/https-instance.config in the root directory of your app. I had to replace the certificate file contents and private key contents with my certificate and key, respectively. I initially received an incorrect-format error, so I converted the snippet they provided to JSON and re-uploaded it:
{
"files": {
"/etc/httpd/conf.d/ssl.conf": {
"owner": "root",
"content": "LoadModule ssl_module modules/mod_ssl.so\nListen 443\n<VirtualHost *:443>\n <Proxy *>\n Order deny,allow\n Allow from all\n </Proxy>\n\n SSLEngine on\n SSLCertificateFile \"/etc/pki/tls/certs/server.crt\"\n SSLCertificateKeyFile \"/etc/pki/tls/certs/server.key\"\n SSLCipherSuite EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH\n SSLProtocol All -SSLv2 -SSLv3\n SSLHonorCipherOrder On\n \n Header always set Strict-Transport-Security \"max-age=63072000; includeSubdomains; preload\"\n Header always set X-Frame-Options DENY\n Header always set X-Content-Type-Options nosniff\n \n ProxyPass / http://localhost:8080/ retry=0\n ProxyPassReverse / http://localhost:8080/\n ProxyPreserveHost on\n \n</VirtualHost>\n",
"group": "root",
"mode": "000644"
},
"/etc/pki/tls/certs/server.crt": {
"owner": "root",
"content": "-----BEGIN CERTIFICATE-----\ncertificate file contents\n-----END CERTIFICATE-----\n",
"group": "root",
"mode": "000400"
},
"/etc/pki/tls/certs/server.key": {
"owner": "root",
"content": "-----BEGIN RSA PRIVATE KEY-----\nprivate key contents # See note below.\n-----END RSA PRIVATE KEY-----\n",
"group": "root",
"mode": "000400"
}
},
"container_commands": {
"killhttpd": {
"command": "killall httpd"
},
"waitforhttpddeath": {
"command": "sleep 3"
}
},
"packages": {
"yum": {
"mod_ssl": []
}
}
}
The deployment aborts with this error:
[Instance: i-0x012x0123x012xyz] Command failed on instance. Return code: 1 Output: httpd: no process found. container_command killhttpd in my-app-name/.ebextensions/https-instance.config failed. For more detail, check /var/log/eb-activity.log using console or EB CLI.
I can tell that the error is caused by the container_commands key, which stops httpd after the configuration is applied so that the new ssl.conf and certificate can be used. It tells me that it's trying to kill httpd but can't find any such process running. However, service httpd status shows that httpd.worker (pid 0123) is running, and I can also access my app online. /var/log/eb-activity.log also has nothing logged in it.
I've seen a few others post the same problem online, but I could not find any solutions. Is there something I'm doing wrong here?
Your ebextensions config is trying to execute killall httpd, but your process is called httpd.worker.
Change the line in the ebextensions to killall httpd.worker.
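A quick way to confirm which name killall should target is to list the running process names on the instance. A small /proc-based sketch (Linux-only; returns an empty set elsewhere):

```python
import os

def process_names():
    """Collect command names of running processes from /proc (Linux)."""
    names = set()
    if not os.path.isdir("/proc"):
        return names
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            with open(f"/proc/{pid}/comm") as f:
                names.add(f.read().strip())
        except OSError:
            pass  # process exited between listdir and open
    return names

# On the affected instance this set would contain "httpd.worker" rather
# than "httpd", which is why "killall httpd" reports "no process found".
print(sorted(process_names())[:5])
```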
See https://github.com/arunoda/meteor-up/issues/171
I am trying to deploy my Meteor app from my Nitrous box to a remote server on Linode.
I followed the instructions in Meteor Up and got:
Invalid mup.json file: Server username does not exist
mup.json
// Server authentication info
"servers": [
  {
    "host": "123.456.78.90",
    // "username": "root",
    // or pem file (ssh based authentication)
    "pem": "~/.ssh/id_rsa",
    "sshOptions": { "Port": 1024 }
  }
]
So I uncommented the "username": "root" line in mup.json, ran mup logs -n 300, and got the following error:
[123.456.78.90] ssh: connect to host 123.456.78.90 port 1024: Connection refused
I suspect I may have done something wrong in setting up the SSH key. I can access my remote server without a password after adding my SSH key to ~/.ssh/authorized_keys.
The content of authorized_keys looks like this:
ssh-rsa XXXXXXXXXX..XXXX== root@apne1.nitrousbox.com
Does anyone have any idea what went wrong?
Problem solved by uncommenting the username and changing the port to 22:
// Server authentication info
"servers": [
  {
    "host": "123.456.78.90",
    "username": "root",
    // or pem file (ssh based authentication)
    "pem": "~/.ssh/id_rsa",
    "sshOptions": { "Port": 22 }
  }
]
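The "Connection refused" message can be reproduced independently of mup with a simple TCP probe; the host and ports below are placeholders, and the point is that sshd here was listening on 22, not 1024:

```python
import socket

def port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, timed out, unreachable, ...
        return False

# mup failed because nothing listened on port 1024; the same probe
# against port 22 (the default sshd port) would succeed on the server.
print(port_open("127.0.0.1", 1))  # port 1 is almost never in use
```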