I'm trying to enable HTTPS on my AWS EC2 instance that is deployed using Elastic Beanstalk. The documentation for this requires you to add a snippet in a file at .ebextensions/https-instance.config in the root directory of your app. I had to replace certificate file contents and private key contents with my certificate and key, respectively. I initially received an incorrect-format error, so I converted the snippet they provided to JSON and re-uploaded it:
{
  "files": {
    "/etc/httpd/conf.d/ssl.conf": {
      "owner": "root",
      "content": "LoadModule ssl_module modules/mod_ssl.so\nListen 443\n<VirtualHost *:443>\n <Proxy *>\n Order deny,allow\n Allow from all\n </Proxy>\n\n SSLEngine on\n SSLCertificateFile \"/etc/pki/tls/certs/server.crt\"\n SSLCertificateKeyFile \"/etc/pki/tls/certs/server.key\"\n SSLCipherSuite EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH\n SSLProtocol All -SSLv2 -SSLv3\n SSLHonorCipherOrder On\n \n Header always set Strict-Transport-Security \"max-age=63072000; includeSubdomains; preload\"\n Header always set X-Frame-Options DENY\n Header always set X-Content-Type-Options nosniff\n \n ProxyPass / http://localhost:8080/ retry=0\n ProxyPassReverse / http://localhost:8080/\n ProxyPreserveHost on\n \n</VirtualHost>\n",
      "group": "root",
      "mode": "000644"
    },
    "/etc/pki/tls/certs/server.crt": {
      "owner": "root",
      "content": "-----BEGIN CERTIFICATE-----\ncertificate file contents\n-----END CERTIFICATE-----\n",
      "group": "root",
      "mode": "000400"
    },
    "/etc/pki/tls/certs/server.key": {
      "owner": "root",
      "content": "-----BEGIN RSA PRIVATE KEY-----\nprivate key contents # See note below.\n-----END RSA PRIVATE KEY-----\n",
      "group": "root",
      "mode": "000400"
    }
  },
  "container_commands": {
    "killhttpd": {
      "command": "killall httpd"
    },
    "waitforhttpddeath": {
      "command": "sleep 3"
    }
  },
  "packages": {
    "yum": {
      "mod_ssl": []
    }
  }
}
The deployment aborts with this error:
[Instance: i-0x012x0123x012xyz] Command failed on instance. Return code: 1 Output: httpd: no process found. container_command killhttpd in my-app-name/.ebextensions/https-instance.config failed. For more detail, check /var/log/eb-activity.log using console or EB CLI.
I can tell that the error is caused by the container_commands key, which stops httpd after the configuration so that the new ssl.conf and certificate can be used. It tells me that it's trying to kill httpd but can't find any such process running. However, service httpd status shows that httpd.worker (pid 0123) is running, and I can also access my app online. /var/log/eb-activity.log also has nothing logged in it.
I've seen a few others post the same problem online, but I could not find any solution. Is there something I'm doing wrong here?
Your .ebextensions config is trying to execute killall httpd, but your process is called httpd.worker.
Change the command in your .ebextensions file to killall httpd.worker.
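For completeness, the corrected container_commands block from the config above would then look like this (only the process name changes):
"container_commands": {
  "killhttpd": {
    "command": "killall httpd.worker"
  },
  "waitforhttpddeath": {
    "command": "sleep 3"
  }
}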
I have an ASP.NET Core (3.1) application which is self-hosted and running as a service. I would like to expose an HTTPS endpoint for it. On the same machine, IIS is installed with HTTPS already configured, together with a certificate.
The certificate seems to be stored in the local computer certificate store.
I can also list it via PowerShell:
> get-childitem cert:\LocalMachine\My\ | format-table NotAfter, Subject
NotAfter            Subject
--------            -------
27.10.2023 07:38:45 <irrelevant>
08.03.2022 09:52:44 CN=a7642e58-2cdf-4e9b-a277-60fad84d7c64, DC=3336d6b0-b132-47ee-a49b-3ab470a5336e
23.02.2022 21:51:53 CN=a7642e58-2cdf-4e9b-a277-60fad84d7c64, DC=3336d6b0-b132-47ee-a49b-3ab470a5336e
27.10.2031 06:48:06 CN=a7642e58-2cdf-4e9b-a277-60fad84d7c64
26.10.2024 10:41:03 E=****.com, CN=****, OU=IT, O=****, L=****, S=***, C=**
I changed the appsettings.json to use the certificate from the store:
{
  "Logging": {
    "LogLevel": {
      "Default": "Debug",
      "System": "Information",
      "Microsoft": "Warning"
    }
  },
  "AllowedHosts": "*",
  "Kestrel": {
    "EndPoints": {
      "Http": {
        "Url": "http://*:5000"
      },
      "HttpsDefaultCert": {
        "Url": "https://*:5001"
      }
    },
    "Certificates": {
      "Default": {
        "Subject": "E=****.com, CN=****, OU=IT, O=****, L=****, S=***, C=**",
        "Store": "My",
        "Location": "LocalMachine",
        "AllowInvalid": "true"
      }
    }
  }
}
However, this does not seem to work. I always get the following error:
System.InvalidOperationException: The requested certificate E=****.com, CN=****, OU=IT, O=****, L=****, S=***, C=** could not be found in LocalMachine/My with AllowInvalid setting: True
I do not know what the problem could be. The only thing I think might be problematic is that the certificate subject actually contains newlines.
I do not know if this is the problem, and I do not know how to enter it in appsettings.json, as multiline values cannot be entered.
I've managed to track down the issue. Kestrel uses FindBySubjectName when searching for the certificate.
FindBySubjectName does a substring search and will not match the full Subject of the certificate. If your certificate subject is something like 'CN=my-certificate', then searching for 'CN=my-certificate' will not find anything. Searching only for 'my-certificate' will work.
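For example, with the store configuration from the question, dropping the key prefixes and searching by the common name alone should work. A minimal sketch (the value 'my-certificate' is a placeholder for whatever your certificate's CN actually is):
"Kestrel": {
  "Certificates": {
    "Default": {
      "Subject": "my-certificate",
      "Store": "My",
      "Location": "LocalMachine",
      "AllowInvalid": "true"
    }
  }
}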
Additional note: in addition to specifying the correct search expression, make sure that the account under which you are running the application has sufficient permissions to read the certificate from the certificate store. Certificates have ACLs, so you do not have to run your app as an administrator; you can grant the account read access to the certificate instead.
I refer to the documentation for configuring SSL certificates for an ASP.NET Core app running on Kestrel.
I noticed that some URL and port settings also get stored in the Properties/launchSettings.json file.
See here: Configure endpoints for the ASP.NET Core Kestrel web server
Further, I noticed that you have put the certificate under Certificates:Default. There are other ways to configure the certificate; you could try them as well.
In the following appsettings.json example:
Set AllowInvalid to true to permit the use of invalid certificates (for example, self-signed certificates).
Any HTTPS endpoint that doesn't specify a certificate (HttpsDefaultCert in the example that follows) falls back to the cert defined under Certificates:Default or the development certificate.
{
  "Kestrel": {
    "Endpoints": {
      "Http": {
        "Url": "http://localhost:5000"
      },
      "HttpsInlineCertFile": {
        "Url": "https://localhost:5001",
        "Certificate": {
          "Path": "<path to .pfx file>",
          "Password": "$CREDENTIAL_PLACEHOLDER$"
        }
      },
      "HttpsInlineCertAndKeyFile": {
        "Url": "https://localhost:5002",
        "Certificate": {
          "Path": "<path to .pem/.crt file>",
          "KeyPath": "<path to .key file>",
          "Password": "$CREDENTIAL_PLACEHOLDER$"
        }
      },
      "HttpsInlineCertStore": {
        "Url": "https://localhost:5003",
        "Certificate": {
          "Subject": "<subject; required>",
          "Store": "<certificate store; required>",
          "Location": "<location; defaults to CurrentUser>",
          "AllowInvalid": "<true or false; defaults to false>"
        }
      },
      "HttpsDefaultCert": {
        "Url": "https://localhost:5004"
      }
    },
    "Certificates": {
      "Default": {
        "Path": "<path to .pfx file>",
        "Password": "$CREDENTIAL_PLACEHOLDER$"
      }
    }
  }
}
Schema notes:
Endpoint names are case-insensitive. For example, HTTPS and Https are equivalent.
The Url parameter is required for each endpoint. The format for this parameter is the same as the top-level Urls configuration parameter except that it's limited to a single value.
These endpoints replace those defined in the top-level Urls configuration rather than adding to them. Endpoints defined in code via Listen are cumulative with the endpoints defined in the configuration section.
The Certificate section is optional. If the Certificate section isn't specified, the defaults defined in Certificates:Default are used. If no defaults are available, the development certificate is used. If there are no defaults and the development certificate isn't present, the server throws an exception and fails to start.
The Certificate section supports multiple certificate sources.
Any number of endpoints may be defined in Configuration as long as they don't cause port conflicts.
Reference: Replace the default certificate from configuration
Ubuntu already has self-signed certs: /etc/ssl/certs/ssl-cert-snakeoil.pem and /etc/ssl/private/ssl-cert-snakeoil.key.
Is it possible to use those with a .NET Core app on Kestrel instead of generating a PFX?
If so, what's the required config?
Kestrel added support for PEM files, so you can specify them in configuration as documented here:
https://learn.microsoft.com/en-us/aspnet/core/fundamentals/servers/kestrel/endpoints?view=aspnetcore-6.0#replace-the-default-certificate-from-configuration-1
{
  "HttpsInlineCertAndKeyFile": {
    "Url": "https://localhost:5002",
    "Certificate": {
      "Path": "<path to .pem/.crt file>",
      "KeyPath": "<path to .key file>",
      "Password": "$CREDENTIAL_PLACEHOLDER$"
    }
  }
}
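For the snakeoil files from the question, a minimal appsettings.json sketch might look like the following (the endpoint name HttpsSnakeoil and port 5001 are arbitrary, the Password entry is omitted on the assumption that the key is unencrypted, and the account running the app must be able to read /etc/ssl/private):
{
  "Kestrel": {
    "Endpoints": {
      "HttpsSnakeoil": {
        "Url": "https://localhost:5001",
        "Certificate": {
          "Path": "/etc/ssl/certs/ssl-cert-snakeoil.pem",
          "KeyPath": "/etc/ssl/private/ssl-cert-snakeoil.key"
        }
      }
    }
  }
}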
Authentication is not enabled for the Mesos APIs by default.
After installing DC/OS, I want to configure Mesos API authentication on it.
I'm going to set up authentication for Mesos APIs such as register_frameworks, run_tasks, ...
The problem is that after my configuration, the DC/OS GUI and Marathon don't work correctly.
I configured DC/OS as follows:
Mesos environment variable config:
path:/opt/mesosphere/etc/mesos-master
#Authentication part
MESOS_LOG_DIR=/var/log/mesos
#Framework authentication
MESOS_AUTHENTICATORS="crammd5"
MESOS_AUTHENTICATE_FRAMEWORKS=true
MESOS_AUTHENTICATE_HTTP_FRAMEWORKS=true
MESOS_HTTP_FRAMEWORK_AUTHENTICATORS="basic"
MESOS_ACLS=/opt/mesosphere/etc/acls
MESOS_AUTHENTICATE=true
MESOS_CREDENTIALS=/opt/mesosphere/etc/mesos_credentials_auth.json
MESOS_ROLE=foo
Marathon environment variable config:
path:/opt/mesosphere/marathon
#authentication section
MARATHON_MESOS_AUTHENTICATION=enabled
#MARATHON_HTTP_CREDENTIALS=marathon:123456
MARATHON_MESOS_AUTHENTICATION_PRINCIPAL=marathon
MARATHON_MESOS_ROLE=foo
MARATHON_MESOS_AUTHENTICATION_SECRET_file=/opt/mesosphere/etc/marathon.secret
Metronome environment variable config: path:/opt/mesosphere/metronome
METRONOME_MESOS_AUTHENTICATION_ENABLED=true
METRONOME_MESOS_AUTHENTICATION_PRINCIPAL=metronome
METRONOME_MESOS_ROLE=foo
METRONOME_MESOS_AUTHENTICATION_SECRET_FILE= /opt/mesosphere/etc/metronome.secret
/opt/mesosphere/etc/metronome.secret (contains the Metronome secret without a trailing newline)
123456
/opt/mesosphere/etc/marathon.secret (contains the Marathon secret without a trailing newline)
123456
/opt/mesosphere/etc/acls
{
  "run_tasks": [
    {
      "principals": {
        "type": "ANY"
      },
      "users": {
        "type": "ANY"
      }
    }
  ],
  "register_frameworks": [
    {
      "principals": {
        "type": "ANY"
      },
      "roles": {
        "type": "ANY"
      }
    }
  ]
}
/opt/mesosphere/etc/mesos_credentials_auth.json
{
  "credentials": [
    {
      "principal": "principal1",
      "secret": "secret1"
    },
    {
      "principal": "principal2",
      "secret": "secret2"
    },
    {
      "principal": "marathon",
      "secret": "123456"
    },
    {
      "principal": "metronome",
      "secret": "123456"
    }
  ]
}
When I enable this configuration and stop and start the services:
systemctl stop dcos-mesos-master.service
systemctl start dcos-mesos-master.service
systemctl stop dcos-marathon.service
systemctl start dcos-marathon.service
systemctl stop dcos-metronome.service
systemctl start dcos-metronome.service
The http://IP/services page in DC/OS doesn't work. I think it's because the Marathon authentication is not set up correctly, since this address doesn't work after enabling the authentication configuration:
http://IP/service/marathon/v2/deployments?_timestamp=1560449507192
I got these errors in the Mesos log after enabling Metronome authentication:
I0613 17:35:12.176092 305 authenticator.cpp:98] Creating new server SASL connection
I0613 17:35:12.177258 304 master.cpp:10255] Re-authenticating scheduler-aca98ea7-be34-49d1-9200-5ef8c15da153@172.17.0.2:15201; discarding outstanding authentication
I0613 17:35:12.177523 304 master.cpp:10285] Ignoring stale authentication result of scheduler-aca98ea7-be34-49d1-9200-5ef8c15da153@172.17.0.2:15201
I0613 17:35:12.177582 304 authenticator.cpp:98] Creating new server SASL connection
I0613 17:35:12.178586 302 master.cpp:10255] Re-authenticating scheduler-aca98ea7-be34-49d1-9200-5ef8c15da153@172.17.0.2:15201; discarding outstanding authentication
I0613 17:35:12.178850 302 master.cpp:10285] Ignoring stale authentication result of scheduler-aca98ea7-be34-49d1-9200-5ef8c15da153@172.17.0.2:15201
After searching, I finally got my answer:
These security features are only available in DC/OS Enterprise (Mesosphere), and you can't configure them in the open-source version.
I also opened a GitHub issue with more details (I hope it will be useful):
https://github.com/mesosphere/marathon/issues/6942
I am trying to get started with a monitoring server solution. I got the Sensu clients, RabbitMQ, and Uchiwa configured. I then tried Graphite, but there were so many parts to configure that I tried InfluxDB instead. Now I am stuck connecting Sensu to InfluxDB.
Is there a part missing in the below configuration?
Client [Sensu] > RabbitMQ <> Sensu Server <> InfluxDB <> Grafana
Any suggestions?
cat influx.json
{
  "influxdb": {
    "hosts": ["192.168.1.1"],
    "host": "192.168.1.1",
    "port": "8086",
    "database": "sensumetrics",
    "time_precision": "s",
    "use_ssl": false,
    "verify_ssl": false,
    "initial_delay": 0.01,
    "max_delay": 30,
    "open_timeout": 5,
    "read_timeout": 300,
    "retry": null,
    "prefix": "",
    "denormalize": true,
    "status": true
  }
}
cat handler.json
{
  "handlers": {
    "influxdb": {
      "type": "pipe",
      "command": "/opt/sensu/embedded/bin/metrics-influxdb.rb"
    }
  }
}
checks1:
{
  "checks": {
    "check_memory_linux": {
      "handlers": ["influxdb", "default"],
      "command": "/opt/sensu/embedded/bin/check-memory-percent.rb -w 90 -c 95",
      "interval": 60,
      "occurrences": 5,
      "subscribers": ["TEST"]
    }
  }
}
checks2:
{
  "checks": {
    "check_cpu_linux-elkctrl-pipe": {
      "type": "metric",
      "command": "/opt/sensu/embedded/bin/check-cpu.rb -w 80 -c 90",
      "subscribers": ["TEST"],
      "interval": 10,
      "handlers": ["debug", "influxdb"]
    }
  }
}
To use InfluxDB to persist your data, you must have:
InfluxDB plugin installed (also, installation and usage instructions here)
Definitions for the plugin (an influxdb.json containing at least the host, port, user, password, and database to be used by Sensu)
The definition, like other config files, must be in /etc/sensu/conf.d/
Handler configuration set properly (also in conf.d)
Mutator for InfluxDB (extensions)
Your checks must send results to the handler, so their definition must contain:
"handlers": [
"influxdb"
]
Or whatever name you gave your handler.
If the influxdb config you provided above is the full extent of your configuration, it seems to be missing the username/password attributes required by the InfluxDB configuration. If they're present but just not included in the post, no big deal. However, I'd recommend running the following against your Sensu logs:
grep -i influxdb /var/log/sensu/sensu-server.log
and seeing whether the check results are getting sent to your InfluxDB instance. If they are, you should be seeing an error that points a bit more at what's going on.
You can also check your InfluxDB logs to see whether they're receiving a POST from your Sensu server:
journalctl -u influxdb.service -f
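If you want to double-check from the InfluxDB side as well (assuming InfluxDB 1.x and the sensumetrics database from your config; add -u user:password to the curl call if InfluxDB authentication is enabled), a query like this should show whether any measurements have arrived:
curl -G "http://192.168.1.1:8086/query" --data-urlencode "db=sensumetrics" --data-urlencode "q=SHOW MEASUREMENTS"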
But yeah, if the username/password is missing from the configuration, that's the first place I'd start.
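For reference, a minimal sketch of influx.json with credentials added; the exact key names are an assumption here (some plugin versions use user rather than username), so check the README of the handler you installed:
{
  "influxdb": {
    "host": "192.168.1.1",
    "port": "8086",
    "username": "<influxdb user>",
    "password": "<influxdb password>",
    "database": "sensumetrics"
  }
}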
The following API call to RabbitMQ:
http -a USER:PASS localhost:15001/api/queues/
Returns a list of queues:
[
  {
    ...
    "messages_unacknowledged_ram": 0,
    "name": "foo_queue",
    "node": "rabbit@queue-monster-01",
    "policy": "",
    "state": "running",
    "vhost": "/"
  },
  ...
]
Note that the vhost parameter is /.
How do I use a / vhost for the /api/queues/vhost/name call, which returns the details for a specific queue?
I have tried:
localhost:15001/api/queues/\//foo_queue
localhost:15001/api/queues///foo_queue
But both failed with 404 Object Not Found.
URL Encoding did the trick. The URL should be:
localhost:15001/api/queues/%2F/foo_queue
⬆⬆⬆
For the record, I think that REST resources should not be named /, especially not by default.
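Putting it together with the HTTPie call from the question (same USER:PASS and port as above), the default vhost / is simply percent-encoded as %2F:
http -a USER:PASS localhost:15001/api/queues/%2F/foo_queue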