This works:
$ dotnet bin/Debug/netcoreapp1.0/aspnetcoreapp.dll &
[1] 42350
Hosting environment: Production
Content root path: /home/foo/aspnetcoreapp
Now listening on: http://localhost:8080
Application started. Press Ctrl+C to shut down.
$ curl http://localhost:8080
(returns html)
Following https://learn.microsoft.com/en-us/aspnet/core/publishing/linuxproduction, I wrote /etc/supervisord.conf as follows:
[program:aspnetcoreapp]
directory=/home/foo/aspnetcoreapp
command=/usr/local/bin/dotnet /home/foo/aspnetcoreapp/bin/Debug/netcoreapp1.0/aspnetcoreapp.dll
environment=ASPNETCORE_ENVIRONMENT=Production
user=foo
stdout_logfile=/var/tmp/aspnetcoreapp.log
stderr_logfile=/var/tmp/aspnetcoreapp.log
autostart=true
Then I start the app under supervisord:
$ supervisorctl start aspnetcoreapp
The app seems to be running.
$ cat /var/tmp/aspnetcoreapp.log
Hosting environment: Production
Content root path: /home/foo/aspnetcoreapp
Now listening on: http://localhost:8080
Application started. Press Ctrl+C to shut down.
But the app returns a 500 error.
$ curl -v http://localhost:8080
* About to connect() to localhost port 8080 (#0)
* Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 8080 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.29.0
> Host: localhost:8080
> Accept: */*
>
< HTTP/1.1 500 Internal Server Error
< Date: Mon, 26 Dec 2016 19:38:41 GMT
< Content-Length: 0
< Server: Kestrel
<
* Connection #0 to host localhost left intact
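To narrow down what differs between the shell-launched and the supervisord-launched process, one option (a sketch; it assumes pgrep and the /proc filesystem, both present on CentOS 7) is to compare the environment and working directory of the running process with those of your shell:
$ sudo cat /proc/$(pgrep -f aspnetcoreapp.dll)/environ | tr '\0' '\n' | grep ASPNETCORE
$ sudo ls -l /proc/$(pgrep -f aspnetcoreapp.dll)/cwd
A mismatch in either would explain behavior that only appears under supervisord.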
Additional information:
$ dotnet --info
.NET Command Line Tools (1.0.0-preview4-004233)
Product Information:
Version: 1.0.0-preview4-004233
Commit SHA-1 hash: 8cec61c6f7
Runtime Environment:
OS Name: centos
OS Version: 7
OS Platform: Linux
RID: centos.7-x64
Base Path: /opt/dotnet1.0.0-preview4-004233/sdk/1.0.0-preview4-004233
We run a multi-node, multi-master Jenkins setup with a project that triggers a project on another Jenkins instance via a cURL call.
Job A on Jenkins Alpha (all CentOS 6/7/8) calls Job B on Jenkins Beta (CentOS 6) like this:
curl -v -k --noproxy awesomehost.awesome.net -X POST https://usernick:API_TOKEN@awesomehost.awesome.net:8443/job/Example_XY/build -F file0=@${WORKSPACE}/beautifulzip.zip -F json='{"parameter": [{"name":"myinputzip.zip", "file":"file0"}]}'
The triggering job runs on multiple nodes, and when using https:// the call fails with:
Initializing NSS with certpath: sql:/etc/pki/nssdb
* skipping SSL peer certificate verification
* SSL connection using TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
* Server certificate:
* subject: CN=awesomehost.awesome.net,OU=redacted,O=company,C=yeawhat
* start date: Mar 03 10:05:01 2021 GMT
* expire date: Mar 03 10:05:01 2022 GMT
* common name: awesomehost.awesome.net
* issuer: CN=nothingtoseehere,OU=movealong,O=evilcorp,L=raccooncity,ST=solid,C=yeawhat
* Server auth using Basic with user 'nick'
> POST /job/Example_XY/build HTTP/1.1
> Authorization: Basic cut==
> User-Agent: curl/7.29.0
> Host: awesomehost.awesome.net:8443
> Accept: */*
> Content-Length: 12479660
> Expect: 100-continue
> Content-Type: multipart/form-data; boundary=----------------------------34737a99beef
>
< HTTP/1.1 100 Continue
} [data not shown]
* SSL write: error -5961 (PR_CONNECT_RESET_ERROR)
* TCP connection reset by peer
Now, if I run the same cURL as http://, it works every time, but using https:// results in a failure most of the time. So it's most likely an HTTPS issue (wild guess).
But: while trying to debug, I used --trace and mysteriously everything works. Every. Time. --trace-time alone is not sufficient, but --trace - fixed the issue.
curl -v -k --trace - --noproxy awesomehost.awesome.net -X POST https://usernick:API_TOKEN@awesomehost.awesome.net:8443/job/Example_XY/build -F file0=@${WORKSPACE}/beautifulzip.zip -F json='{"parameter": [{"name":"myinputzip.zip", "file":"file0"}]}'
doesn't show the same error. Presuming some I/O-related issue (all the systems share an NFS-exported setup), I was curious whether logfile I/O was the culprit, but running:
curl -v -k --trace - --noproxy awesomehost.awesome.net -X POST https://usernick:API_TOKEN@awesomehost.awesome.net:8443/job/Example_XY/build -F file0=@${WORKSPACE}/beautifulzip.zip -F json='{"parameter": [{"name":"myinputzip.zip", "file":"file0"}]}' 1>/dev/null
also works every time. Writing a long logfile doesn't seem to be the issue. Maybe some race condition?
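To put a number on the race-condition hypothesis, a small sketch (the same call as above, repeated) measures the failure rate:
failures=0
for i in $(seq 1 20); do
  curl -sS -k --noproxy awesomehost.awesome.net -X POST "https://usernick:API_TOKEN@awesomehost.awesome.net:8443/job/Example_XY/build" \
    -F file0=@${WORKSPACE}/beautifulzip.zip \
    -F json='{"parameter": [{"name":"myinputzip.zip", "file":"file0"}]}' || failures=$((failures+1))
done
echo "failed $failures out of 20"
Running it once plainly and once with --trace - added would at least quantify how reliably the debug flag masks the problem.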
Now, I don't have a real problem, as I have two ways to get things to work, but fixing things by turning on debug output feels like a cheat.
cURL SSL connect error 35 with NSS error -5961 doesn't really seem to apply, as turning on debug fixes my issue.
Does anyone have a good idea how to debug the issue further? I can't promise that I can try out everything, as I am limited to non-root access. I would have to convince the - rightfully paranoid - admins to let me tinker with their farm, which I would rather not do, as my Jenkins is not the most important part of software running there.
Any ideas?
I'm developing tests with Selenium. Currently I'm using the official selenium/standalone-chrome:3.11.0 image. I'm running only Selenium inside a Docker container. The project itself is compiled on the host machine (tests connect to the container's exposed port):
$ docker run -p 4444:4444 selenium/standalone-chrome:3.11.0
$ curl -v localhost:4444
* Rebuilt URL to: localhost:4444/
* Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 4444 (#0)
> GET / HTTP/1.1
> Host: localhost:4444
> User-Agent: curl/7.47.0
> Accept: */*
>
< HTTP/1.1 200 OK
...
But I would like to compile and test the project entirely inside a Docker container. So I created my own image based on selenium/standalone-chrome:3.11.0. My (simplified) Dockerfile looks like this:
FROM selenium/standalone-chrome:3.11.0
RUN sudo apt-get --assume-yes --quiet update
RUN sudo apt-get --assume-yes --quiet install curl
CMD ["curl", "-v", "localhost:4444"]
As can be seen from the file, I'm trying to connect to port 4444 within container. When I run the image, e.g.:
docker build -t test . && docker run test
I get:
* Rebuilt URL to: localhost:4444/
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* Trying 127.0.0.1...
* connect to 127.0.0.1 port 4444 failed: Connection refused
* Trying ::1...
* Immediate connect fail for ::1: Cannot assign requested address
* Trying ::1...
* Immediate connect fail for ::1: Cannot assign requested address
* Failed to connect to localhost port 4444: Connection refused
* Closing connection 0
curl: (7) Failed to connect to localhost port 4444: Connection refused
Why am I not able to connect to Selenium, which runs inside the container, from the same container?
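One way to make the cause visible (a sketch; it assumes ps is available in the image, which it can be installed into like curl above) is to override the command and list what is actually running when the container starts:
$ docker run test ps -ef
You would likely see only ps itself: a custom CMD replaces the image's default command, so the Selenium server never starts and nothing listens on 4444.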
I've found the solution at last (sorry for my stupidity).
Building an image upon selenium/standalone-chrome:3.11.0 is not sufficient, because a custom CMD replaces the image's default command. You need to start Selenium explicitly.
The Dockerfile:
FROM selenium/standalone-chrome:3.11.0
WORKDIR /app
COPY . /app
RUN sudo apt-get --assume-yes --quiet update
RUN sudo apt-get --assume-yes --quiet install curl
CMD ["./acceptance.sh"]
The acceptance.sh wrapper script:
#!/bin/bash
set -x
set -e
/opt/bin/entry_point.sh &
# It would be better to watch the log and wait for the
# line 'Selenium Server is up and running on port 4444',
# but this script uses a simplified approach, just for
# the sake of brevity.
sleep 30
curl -v localhost:4444
The result:
...
+ set -e
+ sleep 30
+ /opt/bin/entry_point.sh
07:51:35.092 INFO [GridLauncherV3.launch] - Selenium build info: version: '3.11.0', revision: 'e59cfb3'
07:51:35.095 INFO [GridLauncherV3$1.launch] - Launching a standalone Selenium Server on port 4444
2018-05-15 07:51:35.661:INFO::main: Logging initialized #2436ms to org.seleniumhq.jetty9.util.log.StdErrLog
07:51:36.448 INFO [SeleniumServer.boot] - Welcome to Selenium for Workgroups....
07:51:36.450 INFO [SeleniumServer.boot] - Selenium Server is up and running on port 4444
+ curl -v localhost:4444
* Rebuilt URL to: localhost:4444/
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 4444 (#0)
> GET / HTTP/1.1
> Host: localhost:4444
> User-Agent: curl/7.47.0
> Accept: */*
>
< HTTP/1.1 200 OK
...
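As the comment in the script says, polling beats a fixed sleep. A minimal sketch of that variant, waiting on the Selenium 3 status endpoint:
# Wait up to 30 s for the Selenium server to answer
for i in $(seq 1 30); do
  curl -sf localhost:4444/wd/hub/status > /dev/null && break
  sleep 1
done
This replaces the sleep 30 line and proceeds as soon as the server is actually up.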
Cheers!
I'm trying to set up a CouchDB 2.0 instance on my CentOS 7 server.
I've got it installed and running as a systemd service, and it responds with its friendly hello-world message when I access it from the server using 127.0.0.1 or 0.0.0.0:
$ curl 127.0.0.1:5984
{"couchdb":"Welcome","version":"2.0.0","vendor":{"name":"The Apache Software Foundation"}}
$ curl 0.0.0.0:5984
{"couchdb":"Welcome","version":"2.0.0","vendor":{"name":"The Apache Software Foundation"}}
In my local.ini file I've configured the bind_address to 0.0.0.0:
[httpd]
bind_address = 0.0.0.0
My understanding was that with this bind address I could connect to port 5984 from any IP address allowed through my firewall.
I'm using firewalld for my firewall, and I've configured it to open port 5984.
This config is confirmed by listing the configuration of the public zone:
$ sudo firewall-cmd --zone=public --list-all
public (active)
target: default
icmp-block-inversion: no
interfaces: eth0
sources:
services: couchdb2 dhcpv6-client http https ssh
ports: 443/tcp 5984/tcp
protocols:
masquerade: no
forward-ports:
sourceports:
icmp-blocks:
rich rules:
I've also created a service called couchdb2 at /etc/firewalld/services/couchdb2.xml with XML:
<service>
<short>couchdb2</short>
<description>CouchDB 2.0 Instance</description>
<port protocol="tcp" port="5984"/>
</service>
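After editing the service file, firewalld must be reloaded before the change takes effect; a quick check that the rule is live (a sketch):
$ sudo firewall-cmd --reload
$ sudo firewall-cmd --zone=public --query-port=5984/tcp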
From what I know about firewalld, I should be able to receive connections on 5984 now.
But when I curl from my laptop, the connection is refused:
$ curl my-server:5984 --verbose
* Rebuilt URL to: my-server:5984/
* Trying <my-ip>...
* connect to <my-ip> port 5984 failed: Connection refused
* Failed to connect to my-server port 5984: Connection refused
* Closing connection 0
When I connect to the CouchDB instance locally via either 127.0.0.1 or 0.0.0.0, I can see the 200 response in my CouchDB log:
$ sudo journalctl -u couchdb2
...
[notice] 2017-06-06T00:35:01.159244Z couchdb@localhost <0.3328.0> 222d655c69 0.0.0.0:5984 127.0.0.1 undefined GET / 200 ok 28
[notice] 2017-06-06T00:37:21.819298Z couchdb@localhost <0.5598.0> 2f8986d14b 127.0.0.1:5984 127.0.0.1 undefined GET / 200 ok 1
But when I curl from my laptop, nothing shows up in the CouchDB log for the refused connection.
This suggests to me that the problem may be the firewall and not CouchDB, but I'm not sure about that.
Is Connection Refused always the firewall? Would I be getting some other error if this were CouchDB itself having a problem?
To the best of my knowledge both CouchDB and firewalld are configured correctly, but it's not working as I expected.
Any help would be appreciated, whether you know the problem or can just help me discern whether the problem lies with CouchDB or firewalld.
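For what it's worth, a refused connection means a TCP RST came back, which usually points to either a firewall REJECT rule or nothing listening on that address; a DROP rule would typically produce a timeout instead. A sketch to tell those apart, run on the server itself:
$ sudo ss -tlnp | grep 5984
$ curl -v http://<my-ip>:5984/
If ss shows the listener bound only to 127.0.0.1 rather than 0.0.0.0, the bind_address setting is not reaching the interface that serves port 5984.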
I configured Apache with SSL.
Server version: Apache/2.2.15 (Unix)
When I need to deploy my app on port 443, I get this error:
...
Play server process ID is 5941
[info] application - Application v2.8 - started on date 2016-03-30 11:51:08.332
[info] play - Application started (Prod)
Oops, cannot start the server.
If I start the app on a different port, it works fine. For example:
sudo nohup app_path -Dhttp.port=9000 -Dconfig.file=config_file 2> /dev/null
But I get an error when I start the app on 443:
sudo nohup app_path -Dhttps.port=443 -Dconfig.file=config_file 2> /dev/null
My questions are:
- Am I missing something? Is there an easy fix for this?
- How can I see a log of the error? The output is not descriptive at all.
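A first step, since both commands above send stderr to /dev/null: keep the error output, and check whether something, such as Apache itself, is already bound to port 443. A sketch (the log file name is arbitrary):
sudo netstat -tlnp | grep ':443'
sudo nohup app_path -Dhttps.port=443 -Dconfig.file=config_file 2> start-error.log
If Apache is serving SSL on 443, a second process cannot bind the same port and would fail at startup much like this.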
I have a project that includes a Vagrant dev box that works great on two (Win 7) computers at the office. However, when I try it at home (Win 8.1) I can't connect to Apache from the host.
Here's the blow-by-blow:
The project, including the Vagrantfile and Apache config, is stored in Git
the VM boots fine with no errors; I have tried reloading the VM and restarting the host
SSH to the VM works fine
shared folders between VM and host work
VM box is chef/centos-6.5
VM SELinux is set to permissive
sudo service iptables status says the firewall is disabled
disabling the Windows firewall does not fix the issue
a wget on the VM to itself gets the expected response
I normally use the address mydomain.127.0.0.1.xip.io:65000 to connect, but 127.0.0.1:65000 doesn't work either
the failure to reach the webpage is quick (< 2 s), and shows ERR_CONNECTION_REFUSED in Chrome
on my work PC I can telnet to port 65000, but at home I get connection refused
reload output:
C:\HashiCorp\Vagrant\bin\vagrant.exe reload
==> default: Attempting graceful shutdown of VM...
==> default: Checking if box 'chef/centos-6.5' is up to date...
==> default: Clearing any previously set forwarded ports...
==> default: Clearing any previously set network interfaces...
==> default: Preparing network interfaces based on configuration...
default: Adapter 1: nat
default: Adapter 2: hostonly
==> default: Forwarding ports...
default: 80 => 65000 (adapter 1)
default: 22 => 2222 (adapter 1)
==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
default: SSH address: 127.0.0.1:2222
default: SSH username: vagrant
default: SSH auth method: private key
default: Warning: Connection timeout. Retrying...
default: Warning: Remote connection disconnect. Retrying...
==> default: Machine booted and ready!
GuestAdditions 4.3.12 running --- OK.
==> default: Checking for guest additions in VM...
==> default: Configuring and enabling network interfaces...
==> default: Mounting shared folders...
default: /vagrant => F:/Work/sites/4.0
==> default: Machine already provisioned. Run `vagrant provision` or use the `--provision`
==> default: to force provisioning. Provisioners marked to run always will still run.
I am leaning towards it being something to do with the host, as the VM/Apache config should be identical since it comes via Git; but I am at a complete loss as to what it could be.
Update - Extra Detail:
Running curl from the Windows host:
$ curl -Iv --connect-timeout 10 http://127.0.0.1:65000/robots.txt
* STATE: INIT =CONNECT handle 0x60002e1b0; line 1028 (connection #-5000)
* Hostname was NOT found in DNS cache
* Trying 127.0.0.1...
* STATE: CONNECT =WAITCONNECT handle 0x60002e1b0; line 1076 (connection #0)
* Connection timed out after 10000 milliseconds
* Closing connection 0
* The cache now contains 0 members
curl: (28) Connection timed out after 10000 milliseconds
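Another host-side check (a sketch; the VM name will differ, so list it first) is to confirm VirtualBox actually holds the port forward and whether anything on the host answers on 65000:
"C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" list vms
"C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" showvminfo "<vm-name>" | findstr Rule
netstat -ano | findstr :65000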
Disabling the Windows firewall does not fix the issue.
And from the VM it works:
[vagrant@localhost ~]$ time curl -Iv http://127.0.0.1/robots.txt
* About to connect() to 127.0.0.1 port 80 (#0)
* Trying 127.0.0.1... connected
* Connected to 127.0.0.1 (127.0.0.1) port 80 (#0)
> HEAD /robots.txt HTTP/1.1
> User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.15.3 zlib/1.2.3 libidn/1.18 libssh2/1.4.2
> Host: 127.0.0.1
> Accept: */*
>
< HTTP/1.1 200 OK
HTTP/1.1 200 OK
< Date: Sat, 02 Aug 2014 16:26:29 GMT
Date: Sat, 02 Aug 2014 16:26:29 GMT
< Server: Apache/2.2.15 (CentOS)
Server: Apache/2.2.15 (CentOS)
< Last-Modified: Sat, 26 Jul 2014 16:20:14 GMT
Last-Modified: Sat, 26 Jul 2014 16:20:14 GMT
< ETag: "3f-278-4ff1b10953009"
ETag: "3f-278-4ff1b10953009"
< Accept-Ranges: bytes
Accept-Ranges: bytes
< Content-Length: 632
Content-Length: 632
< Connection: close
Connection: close
< Content-Type: text/plain; charset=UTF-8
Content-Type: text/plain; charset=UTF-8
iptables:
[vagrant@localhost ~]$ sudo service iptables status
iptables: Firewall is not running.
[vagrant@localhost ~]$ sudo iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination
Chain FORWARD (policy ACCEPT)
target     prot opt source               destination
Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
First, check whether port 80 is correctly bound by Apache; run the following within the VM to confirm:
netstat -nap | grep :80
Also check whether any iptables rules are in place:
iptables -L
Second, if you have Cygwin on Windows, run the following and see what you get. I reckon the Windows firewall may be playing dirty ;-D
curl -Is http://127.0.0.1:65000
If you don't want to troubleshoot further and just want the service to be accessible from the host, an easy workaround is to add a second NIC (network interface) in bridged mode (a public network in Vagrant terms) and do a vagrant reload. Once the VM is up, vagrant ssh into it and get its IP address (it should be in the same network as the host). You should then be able to access the service via PUBLIC_IP:PORT:
Vagrant.configure("2") do |config|
config.vm.network "public_network"
end
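After the vagrant reload, the bridged address can be read from inside the guest; a sketch (eth1 is the usual name for the second adapter on this box, but it may differ):
vagrant ssh -c "ip addr show eth1"
That address plus the service port is what you would then use from the host.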
I have given Terry the accepted answer because that suggestion worked, and I believe it will help most people.
However, in my case I believe the culprit was some combination of Skype, pending Windows updates, and my preference for hibernating my PC rather than shutting down. I have had various networking issues since, and all of them have been solved by stopping Skype from autoloading, installing whatever Windows updates were pending at the time of the error, and restarting!
Even if I haven't used Skype in days, it still seems to gum up my system between proper restarts/shutdowns.
Not the most highbrow contribution to a Stack Exchange site ever, and dangerously close to superstition, but I am convinced of the connection.
Here's an off-the-wall answer: check your DHCP server for another machine taking that IP address before your Vagrant box does. You may actually be connecting to a friend's phone or laptop instead.
It worked for me after setting auto_correct: true. Just follow the tutorial in the HashiCorp docs:
config.vm.box = "ubuntu/trusty64"
config.vm.provision :shell, path: "bootstrap.sh"
config.vm.network :forwarded_port, guest: 80, host: 4567,
auto_correct: true
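Note that with auto_correct: true, Vagrant resolves a collision by moving the forward to another host port and reports the fix during vagrant up, so check that output for the port actually in use.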