Integrate Caddy with a Datadog container on AWS Fargate

I am trying to enable logging in my application, which uses a Caddyfile, and to send metrics to Datadog.
The Datadog container that I run in the AWS Fargate task definition is up, and the agent is waiting for traces but, of course, not receiving any, as logging is not enabled.
The application itself is very simple:
exec caddy run --config ./Caddyfile --adapter caddyfile
with caddyfile:
{
    admin off
    auto_https off
    http_port 8080
    https_port 8433
    log {
        output stdout
        format json
    }
}
:8080 {
    file_server browse
    root * /mnt/cvdupdate/databases
}
I have read that there is a Datadog plugin for Caddy
that sends traces to Datadog, but I cannot manage to set it up.
What is the correct way?
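One detail worth noting while debugging this: in Caddy v2 the global log directive above configures Caddy's own runtime log, not HTTP access logs. Access logging is enabled per site, with a log directive inside the site block. A minimal sketch of that (assuming stdout/JSON is what the log collection on Fargate is set up to pick up):
:8080 {
    file_server browse
    root * /mnt/cvdupdate/databases
    # per-site access log; JSON to stdout so the container log
    # driver (and from there the Datadog agent) can collect it
    log {
        output stdout
        format json
    }
}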

Related

In Cloud Foundry, how do I create a service to run my Apache web server?

I'm on Ubuntu 18, running the following version of Cloud Foundry ...
$ cf -v
cf version 7.4.0+e55633fed.2021-11-15
I would like to set up several containers, each running off a Docker image. The first is an Apache web server. I have the following Dockerfile:
FROM httpd:2.4
COPY ./my-httpd.conf /usr/local/apache2/conf/httpd.conf
COPY ./my-vhosts.conf /usr/local/apache2/conf/extra/httpd-vhosts.conf
COPY ./directory /usr/local/apache2/htdocs/directory
How do I set this up in Cloud Foundry? I tried creating a service but got these errors:
$ cf cups apache-service -p "localhost, 80"
FAILED
No API endpoint set. Use 'cf login' or 'cf api' to target an endpoint.
When I tried to create this API endpoint I got
$ cf api "http://my_ip_address"
Setting API endpoint to http://my_ip_address...
Request error: Get "http://my_ip_address": dial tcp my_ip_address:80: connect: connection refused
TIP: If you are behind a firewall and require an HTTP proxy, verify the https_proxy environment variable is correctly set. Else, check your network connection.
I'm thinking I'm missing something rather substantial but don't know what the right questions to ask are.
The error message you are providing (dial tcp my_ip_address:80: connect: connection refused) means the address given to cf api is not responding.
Ensure that your Cloud Foundry API endpoint is still active and that no firewall is preventing you from accessing the API (the port is open, the process is running, and the firewall allows traffic from your IP, if applicable).
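Note also that cf cups (create-user-provided-service) injects credentials into apps; it does not run containers. A Docker image is deployed with cf push. A rough sketch, assuming a reachable endpoint at https://api.example.com and the image pushed to a registry as myuser/my-httpd (both placeholders):
$ cf api https://api.example.com
$ cf login
# requires the diego_docker feature flag to be enabled by a platform admin
$ cf push my-apache --docker-image myuser/my-httpd:latest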

ResourceInitializationError: failed to validate logger args: : signal: killed

Suddenly I am getting the message "ResourceInitializationError: failed to validate logger args: : signal: killed" while starting an AWS ECS Fargate service. The same service was running fine a couple of days back.
Following is the log driver configuration in the related AWS task:
Log driver: awslogs
awslogs-group: /ecs/analytics
awslogs-region: us-east-1
awslogs-stream-prefix: ecs
Any idea or help?
I finally found the root cause:
The error appears if the Fargate service is not able to connect to the CloudWatch API endpoint.
This can happen if you have Fargate running in a private subnet without internet access.
You can either add a CloudWatch Logs VPC endpoint to your private subnet or add internet connectivity.
I recently spent hours on this same issue. It turns out that the log group and stream prefix specified in my container definition didn't exist.
It would be wonderful if AWS could provide helpful error messages...
Came across this issue today. The issue was that the log group I specified didn't exist yet. If you don't want to create it manually, make sure to add the awslogs-create-group option and set it to "true". You'll also have to grant your ECS task execution role the logs:CreateLogGroup permission; see the sketch after the reference link below.
"logConfiguration": {
"logDriver": "awslogs",
"secretOptions": null,
"options": {
"awslogs-create-group": "true",
"awslogs-group": "/ecs/app",
"awslogs-region": "ap-southeast-2",
"awslogs-stream-prefix": "ecs"
}
}
Reference: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_awslogs.html
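For reference, a minimal sketch of the statement to add to the task execution role's policy (the region, account ID, and resource scope are placeholders to adapt):
{
    "Effect": "Allow",
    "Action": "logs:CreateLogGroup",
    "Resource": "arn:aws:logs:ap-southeast-2:123456789012:*"
}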
I just experienced this. I have ECS Fargate running, and I had just added a VPC endpoint for CloudWatch Logs (com.amazonaws.REGION.logs) in my account. When I added the VPC endpoint, my logs stopped appearing.
In order to remedy this without deleting the VPC endpoint again, for my setup with Fargate running with internet access, I had to ensure that:
My ECS service had a security group rule that allows HTTPS traffic outbound:
{
type: egress
port_to: 443
port_from: 443
protocol: TCP
}
My new VPC endpoint had a security group rule to allow HTTPS traffic inbound from my ECS security group:
{
type: ingress
port_to: 443
port_from: 443
protocol: TCP
source_security_group_id: [Your ECS SECURITY GROUP ID]
}
I got this error, checked my NAT and internet gateway, and all was good. I also found that an interface endpoint was set up as com.amazonaws.us-east-1.logs.
Nothing seemed to need changing. Finally, I deleted the interface endpoint and the error went away.
But I am still confused about what happened.

AWS CLI S3 list with a default endpoint

I'm using the following command on some EC2 instances in order to get some configuration files from an S3 bucket. The EC2 instances have an instance role attached with full S3 permissions:
aws s3 cp s3://bucket-name/file ./ --region eu-west-1
This works as expected on some instances provisioned by me with a default AMI, but on some existing instances in the same region and AZ, with the same instance role, I'm facing the following error:
Connect timeout on endpoint URL: "https://bucket-name.eu-west-1.amazonaws.com/?list-type=2&delimiter=%2F&prefix=&encoding-type=url"
failed to run commands: exit status 255
My question is: why is the S3 URI not prefixed with s3://, and why does the error return an https:// URL? It's clear that this AWS CLI version tries to reach S3 over https, not via the s3:// endpoint I provided in the command. Is there any way to override this?
My question is: why is the S3 URI not prefixed with s3://, and why
does the error return an https:// URL?
Behind the scenes, the AWS CLI calls AWS services over HTTPS, so that is why on a timeout you see https://bucket-name.eu-west-1... instead of s3://.
By default, the AWS CLI sends requests to AWS services by using HTTPS on TCP port 443. To use the AWS CLI successfully, you must be able to make outbound connections on TCP port 443.
(from the AWS CLI User Guide)
The timeout on some instances might be because they are in a private subnet without a NAT gateway.
You can quickly verify this by running ping google.com; if there is no response, the instance is in a private subnet without NAT or has no outbound traffic allowed.
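Since ICMP is often blocked even when HTTPS is allowed, a more direct check is to test TCP 443 against the regional S3 endpoint itself; a quick sketch (bash-only, using its /dev/tcp feature with a 5-second timeout):
$ timeout 5 bash -c 'cat < /dev/null > /dev/tcp/s3.eu-west-1.amazonaws.com/443' \
    && echo "443 reachable" || echo "443 blocked: check route table / NAT / security groups"
If this fails while the instance role is correct, the problem is network routing, not permissions.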

How to configure Kafka server with SASL_SSL and GSSAPI protocols

I am new to Apache Kafka, and here is what I have done so far:
Downloaded kafka_2.12-2.1.0
Made a batch file for Zookeeper to run the Zookeeper server:
start kafka_2.12-2.1.0\bin\windows\zookeeper-server-start.bat kafka_2.12-2.1.0\config\zookeeper.properties
Made a batch file for the Apache Kafka server:
start kafka_2.12-2.1.0\bin\windows\kafka-server-start.bat kafka_2.12-2.1.0\config\server.properties
Started a producer using a batch file:
start kafka_2.12-2.1.0\bin\windows\kafka-console-producer.bat --broker-list localhost:9092 --topic 3drocket-player
It is running fine, but now I am looking into authentication, as I have to implement a consumer with specific auth settings (a requirement from the client): the security protocol is SASL_SSL and the SASL mechanism is GSSAPI.
For this reason I tried to search the Confluent documentation, but the problem is that it is too abstract about how to take each and every step.
I am looking for detailed configuration steps for my setup: how to configure my Kafka server with SASL_SSL and the GSSAPI mechanism. Initially I found that GSSAPI/Kerberos has a separate server; do I need to install another server? Is there any built-in solution within Confluent Kafka?
Configure a SASL port in server.properties, e.g.:
listeners=SASL_SSL://host.name:port
security.inter.broker.protocol=SASL_SSL
sasl.mechanism.inter.broker.protocol=GSSAPI
sasl.enabled.mechanisms=GSSAPI
sasl.kerberos.service.name=kafka
ssl.keystore.location=/path/to/keystore.jks
ssl.keystore.password=keystore_password
ssl.truststore.location=/path/to/truststore.jks
ssl.truststore.password=truststore_password
ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1
https://kafka.apache.org/documentation/#security_configbroker
https://kafka.apache.org/documentation/#security_sasl_config
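The broker itself also needs a JAAS entry (the JAAS file shown further below only covers the KafkaClient side). A minimal sketch, assuming a keytab issued for the broker's principal (path, host, and realm are placeholders):
KafkaServer {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    keyTab="/etc/security/keytabs/kafka.keytab"
    principal="kafka/broker1.example.com@EXAMPLE.COM";
};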
Client:
When you run the Kafka client, you need to set these properties.
security.protocol=SASL_SSL
ssl.truststore.location=/path/to/truststore.jks
ssl.truststore.password=truststore_password
sasl.mechanism=GSSAPI
sasl.kerberos.service.name=kafka
https://kafka.apache.org/documentation/#security_configclients
https://kafka.apache.org/documentation/#security_sasl_kerberos_clientconfig
Then configure JAAS:
KafkaClient {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    keyTab="path/to/kafka_client.keytab"
    storeKey=true
    useTicketCache=false
    principal="kafka-client-1@EXAMPLE.COM";
};
SASL/GSSAPI is for organizations using Kerberos (for example, by using Active Directory). You don’t need to install a new server just for Apache Kafka®. Ask your Kerberos administrator for a principal for each Kafka broker in your cluster and for every operating system user that will access Kafka with Kerberos authentication (via clients and tools).
https://docs.confluent.io/current/kafka/authentication_sasl/authentication_sasl_gssapi.html#kafka-sasl-auth-gssapi
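The JAAS file is picked up through a JVM system property; for the console tools it is usually passed via KAFKA_OPTS before launching (the path is a placeholder; shown for the Windows setup above):
set KAFKA_OPTS=-Djava.security.auth.login.config=C:\path\to\client_jaas.conf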

Restcomm Olympus WebRTC WSS error

We are trying to use Restcomm Olympus, with a few customizations, as part of our application. The main customization is that we have deployed the Olympus WAR on our Apache Tomcat web server, and the OUTBOUND PROXY is pointed at the same server where Restcomm is running.
So far all is good, but recently we hit the "getUserMedia()" deprecation issue, caused by Chromium's insecure-origin fix.
So we need to use HTTPS and WSS. I can see that just around 7 days back the Olympus code was updated on GitHub to use WSS if HTTPS is used in the browser location bar.
So first we installed a self-signed cert and enabled the SSL config on Tomcat, so that our customized Olympus UI is accessed via https from Tomcat. Then we used the WSS protocol to connect to the OUTBOUND PROXY. But we got the error below:
"WebSocket connection to 'wss://<host>:5082/' failed: Error in connection establishment: net::ERR_TIMED_OUT
WSMessageChannel:createWebSocket(): websocket connection has failed:[object Event]"
Then we thought that, in addition to Tomcat (where the WAR is deployed), we needed to install a self-signed cert and SSL config on Restcomm as well. We did so by following http://docs.telestax.com/restcomm-enable-https-secure-connector-on-jboss-as-7-or-eap-6/ and again used the WSS protocol.
But this time we also got an error, though with a different error code:
"WebSocket connection to 'wss://<host>:5083/' failed: Error in connection establishment: net::ERR_CONNECTION_CLOSED
WSMessageChannel:createWebSocket(): websocket connection has failed:[object Event]"
Can I ask the forums to explain if we are missing anything here?
Thanks in advance
I would suggest using the Mobicents RestComm docker image instead of the zip bundle, because with the docker image all settings are handled automatically and https/wss should work out of the box. Here are some quick steps to get you started:
Install docker on your Ubuntu machine if it is not already there
Download the RestComm docker image:
$ docker pull mobicents/restcomm:latest
Start docker image:
$ docker run -e SECURE="true" -e SSL_MODE="allowall" -e USE_STANDARD_PORTS="true" -e VOICERSS_KEY="VOICERSS_KEY_HERE" --name=restcomm -d -p 80:80 -p 443:443 -p 9990:9990 -p 5060:5060 -p 5061:5061 -p 5062:5062 -p 5063:5063 -p 5060:5060/udp -p 65000-65535:65000-65535/udp mobicents/restcomm:latest
Now you should be able to reach your RestComm instance's Admin UI at:
https://<host ip address>/
Make sure that you don't have any servers on your host listening on the ports used by the docker container above, or you'll have to use different ports (please refer to the Docker Hub page for such options)
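As a quick sanity check once the container is up (the host is a placeholder), you can confirm the TLS handshake succeeds before testing from the browser:
$ openssl s_client -connect <host ip address>:443 < /dev/null
With a self-signed cert, a "self signed certificate" verify error is expected; a completed handshake still means HTTPS is being served.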
Best regards,
Antonis Tsakiridis