Declaring service UI in DCOS results in broken links - dcos

I need a mini-HDFS service that can run on a single agent, so I started building one in a Docker container. I then deployed it to DCOS. The NameNode UI comes up, but unstyled. It turns out that the references inside the UI were not prefixed.
My service is at http://m1.dcos/service/small-hdfs/dfshealth.html
The browser generates requests such as http://m1.dcos/static/bootstrap-3.0.2/css/bootstrap.min.css instead of http://m1.dcos/service/small-hdfs/static/bootstrap-3.0.2/css/bootstrap.min.css.
This is my marathon.json - very basic for now - I'll expose the volumes after I get it basically working ...
How do I fix this? If I can pass the prefix into the container, I may be able to configure a Hadoop property with it, but I'm not sure that is possible. I also did not see any documented way of passing this prefix.
{
  "id": "small-hdfs",
  "cmd": "/root/docker_entrypoint.sh",
  "cpus": 1.5,
  "mem": 4096.0,
  "instances": 1,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "slowenthal/small-hdfs",
      "network": "BRIDGE",
      "portMappings": [
        { "containerPort": 9000, "hostPort": 0, "protocol": "tcp" },
        { "containerPort": 50070, "hostPort": 0, "protocol": "tcp" }
      ]
    }
  },
  "labels": {
    "DCOS_SERVICE_NAME": "small-hdfs",
    "DCOS_SERVICE_PORT_INDEX": "1",
    "DCOS_SERVICE_SCHEME": "http"
  }
}
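For what it's worth, Marathon app definitions do accept an env block, so one avenue (a sketch, not a confirmed fix; SERVICE_PREFIX is a hypothetical name, and whether the NameNode UI can actually be told about such a prefix is exactly the open question above) would be to pass the proxy path into the container and have docker_entrypoint.sh template it into the Hadoop configuration:

"env": {
  "SERVICE_PREFIX": "/service/small-hdfs"
}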

Related

dotnet-monitor and OpenTelemetry?

I'm learning OpenTelemetry and I wonder how dotnet-monitor is connected with OpenTelemetry (Meter). Are those things somehow connected, or is dotnet-monitor just a custom Microsoft tool that does not use the OpenTelemetry standards (API, SDK, and exporters)?
If you run dotnet-monitor on your machine, it exposes the dotnet metrics in Prometheus format, which means you can configure an OpenTelemetry Collector to scrape those metrics.
For example, in an OpenTelemetry Collector Contrib configuration:
receivers:
  prometheus_exec:
    exec: dotnet monitor collect
    port: 52325
Please note that for dotnet-monitor to run, you need to create a settings.json in this path:
$XDG_CONFIG_HOME/dotnet-monitor/settings.json
If $XDG_CONFIG_HOME is not defined, create the file in this path:
$HOME/.config/dotnet-monitor/settings.json
If you want to identify the process by its PID, write this into settings.json (change Value to your PID):
{
  "DefaultProcess": {
    "Filters": [{
      "Key": "ProcessId",
      "Value": "1"
    }]
  }
}
If you want to identify the process by its name, write this into settings.json (change Value to your process name):
{
  "DefaultProcess": {
    "Filters": [{
      "Key": "ProcessName",
      "Value": "iisexpress"
    }]
  }
}
In my example I used this configuration:
{
  "DefaultProcess": {
    "Filters": [{
      "Key": "ProcessId",
      "Value": "1"
    }]
  },
  "Metrics": {
    "Providers": [
      {
        "ProviderName": "System.Net.Http"
      },
      {
        "ProviderName": "Microsoft-AspNetCore-Server-Kestrel"
      }
    ]
  }
}
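To sanity-check the setup before pointing the collector at it, you can run dotnet-monitor by hand and curl the metrics endpoint yourself (a quick sketch, assuming the default metrics port 52325):

# terminal 1: start dotnet-monitor (this blocks while it runs)
dotnet monitor collect

# terminal 2: metrics in Prometheus exposition format
curl http://localhost:52325/metrics

If the curl returns Prometheus-format text, the prometheus_exec receiver above should be able to scrape it.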

How to Consume Kafka Topic from ZeroCode Framework

I want to consume one Kafka topic via the ZeroCode framework. I can consume from my localhost Kafka server in a ZeroCode scenario. I can also consume the topic from my actual remote Kafka server using kafka-consumer.bat on the command line, but I cannot consume that same topic from ZeroCode.
Do I need to add some special configuration?
{
  "name": "Consume Message From doob-ship-topic",
  "url": "kafka-topic:my-sample-topic",
  "operation": "consume",
  "request": {
    "consumerLocalConfigs": {
      "recordType": "JSON",
      "commitSync": false,
      "showRecordsConsumed": true,
      "maxNoOfRetryPollsOrTimeouts": 3
    }
  },
  "assertions": {
    "size": 1,
    "records": [
      {
        "value": {
          "key": "99930000000000260001"
        }
      }
    ]
  }
}
I tried different configurations but couldn't get it to work.
I solved it.
"consumerLocalConfigs": {
"recordType": "RAW",
"commitSync": true,
"showRecordsConsumed": true,
"maxNoOfRetryPollsOrTimeouts": 2,
"pollingTime": 1595 // I added this.
}
Adding pollingTime solved my problem.
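In case anyone else hits the remote-broker variant of this: the broker address does not live in the scenario JSON at all, but in the properties file referenced by @TargetEnv on the test class. A sketch of that file, following the naming used in the ZeroCode hello-world Kafka samples (treat the path and key as assumptions to verify against your version):

# kafka_servers/kafka_test_server.properties
kafka.bootstrap.servers=remote-broker.example.com:9092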

Swagger UI and Docker Container Communication

I have a Docker container running Swagger UI on port 80, and another API running in a separate container on port 32788.
http://127.0.0.1:80/ >>> returns swagger UI
http://127.0.0.1:32788/swagger.json >>> returns swagger API def
But when I put the json file into the Swagger UI field and hit explore, it says
NetworkError when attempting to fetch resource. http://127.0.0.1:32788/swagger.json
Any ideas on how to solve this? The docs say that the containers should automatically be connected to the bridge network.
Below is the result of the network inspection
docker network inspect bridge
[
  {
    "Name": "bridge",
    "Id": "4b5cc1526055297df70dc9adc4959fcee93384c412fbf90500c041b5b83ed43a",
    "Created": "2018-01-17T03:48:39.2325461Z",
    "Scope": "local",
    "Driver": "bridge",
    "EnableIPv6": false,
    "IPAM": {
      "Driver": "default",
      "Options": null,
      "Config": [
        {
          "Subnet": "172.17.0.0/16",
          "Gateway": "172.17.0.1"
        }
      ]
    },
    "Internal": false,
    "Attachable": false,
    "Ingress": false,
    "ConfigFrom": {
      "Network": ""
    },
    "ConfigOnly": false,
    "Containers": {
      "257a15af9ab9b25c6c5622fb0ebe599e5703b2ca5f2e4eaa97a8745a21e7f9a9": {
        "Name": "pensive_neumann",
        "EndpointID": "22be4b781f75e071bcb0098b917b81b16ca493e9080848188dd7a811c27070ec",
        "MacAddress": "02:42:ac:11:00:02",
        "IPv4Address": "172.17.0.2/16",
        "IPv6Address": ""
      },
      "30de904a599a19075d5e20ef5d974a11be9d7e58a68d984a24f4af9e22c4d92b": {
        "Name": "naughty_mirzakhani",
        "EndpointID": "f704b3e103a82ca5c56d5955ac27845d8951cfe13f0bc3e1ccc8717ea9c28d39",
        "MacAddress": "02:42:ac:11:00:03",
        "IPv4Address": "172.17.0.3/16",
        "IPv6Address": ""
      }
    },
    "Options": {
      "com.docker.network.bridge.default_bridge": "true",
      "com.docker.network.bridge.enable_icc": "true",
      "com.docker.network.bridge.enable_ip_masquerade": "true",
      "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
      "com.docker.network.bridge.name": "docker0",
      "com.docker.network.driver.mtu": "1500"
    },
    "Labels": {}
  }
]
Edit to explain how each was started:
The API is part of Azure Machine Learning, so it's hard to say how exactly it gets started (unless there is some command I can run in Docker):
az ml service create realtime
Swagger UI was started as follows:
docker run -p 80:8080 swaggerapi/swagger-ui
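One thing worth checking: Swagger UI fetches swagger.json from the browser, not from inside its own container, so container-to-container bridge networking is not actually in play here, and a NetworkError on that fetch is very often CORS rather than routing. A quick way to see whether the API sends the required header (same URL as in the question):

curl -i http://127.0.0.1:32788/swagger.json | grep -i access-control

If no Access-Control-Allow-Origin header comes back, the API container needs CORS enabled for the Swagger UI origin.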

Create env fails when using a daemonset to create processes in Kubernetes

I want to deploy a piece of software to nodes with a DaemonSet, but it is not a Docker app. I created a DaemonSet JSON like this:
"template": {
"metadata": {
"creationTimestamp": null,
"labels": {
"app": "uniagent"
},
"annotations": {
"scheduler.alpha.kubernetes.io/tolerations": "[{\"key\":\"beta.k8s.io/accepted-app\",\"operator\":\"Exists\", \"effect\":\"NoSchedule\"}]"
},
"enable": true
},
"spec": {
"restartPolicy": "Always",
"terminationGracePeriodSeconds": 30,
"dnsPolicy": "ClusterFirst",
"securityContext": {},
"processes": [
{
"name": "foundation",
"package": "xxxxx",
"resources": {
"limits": {
"cpu": "100m",
"memory": "1Gi"
}
},
"lifecyclePlan": {
"kind": "ProcessLifecycle",
"namespace": "engb",
"name": "app-plc"
},
"env": [
{
"name": "SECRET_USERNAME",
"valueFrom": {
"secretKeyRef": {
"name": "key-secret",
"key": "uniagentuser"
}
}
},
{
"name": "SECRET_PASSWORD",
"valueFrom": {
"secretKeyRef": {
"name": "key-secret",
"key": "uniagenthash"
}
}
}
]
},
When the app deploys successfully, the env variables do not exist at all.
What should I do to solve this problem?
Thanks
DaemonSets have to run Docker containers; you can't run non-containerized programs as a DaemonSet. https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/ Kubernetes only launches containers.
Also, in your manifest file I see a "processes" key, and I have reason to believe it's not a valid manifest, so I doubt you deployed it successfully.
You have not pasted the "full" YAML file, but I'm guessing the "template" key at the beginning is the spec.template key of the file.
Run kubectl explain daemonset.spec.template.spec and you'll see that there is no "processes" field.
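For reference, here is a minimal sketch of a valid DaemonSet pod template once the software is packaged into an image (uniagent:latest is a hypothetical image name wrapping your agent; the env/secretKeyRef part then works exactly as you wrote it):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: uniagent
spec:
  selector:
    matchLabels:
      app: uniagent
  template:
    metadata:
      labels:
        app: uniagent
    spec:
      containers:
      - name: foundation
        image: uniagent:latest  # hypothetical image wrapping the non-Docker agent
        env:
        - name: SECRET_USERNAME
          valueFrom:
            secretKeyRef:
              name: key-secret
              key: uniagentuser
        - name: SECRET_PASSWORD
          valueFrom:
            secretKeyRef:
              name: key-secret
              key: uniagenthash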

How to correctly deploy multi meteor instances WITH SSL on one digitalocean droplet using mup?

My mup.json config for the first Meteor instance:
{
  "servers": [
    {
      "host": "111.222.333.444",
      "username": "root",
      "password": "mypass"
    }
  ],
  "setupMongo": true,
  "setupNode": true,
  "nodeVersion": "0.10.40",
  "setupPhantom": false,
  "enableUploadProgressBar": true,
  "appName": "myapp1",
  "app": "../myapp1",
  "env": {
    "PORT": 3001,
    "ROOT_URL": "https://my.domain.com"
  },
  "ssl": {
    "pem": "./ssl.pem"
  },
  "deployCheckWaitTime": 15
}
So after deployment I want to access this instance at https://my.domain.com:3001. Then, with a similar configuration, I want to deploy a second instance to the same droplet and access it at https://my.domain.com:3002.
The problem is that after deployment, accessing over HTTPS gives ERR_CONNECTION_CLOSED, while accessing over HTTP is OK.
How can I make it work?
Finally, I did it.
First of all, I switched to mupx. But I had trouble there too. Later I found that my mistake was using the same ports for different apps or protocols. So here are the working configurations for the first and second apps:
{
  "servers": [{
    "host": "111.222.333.444",
    "username": "root",
    "password": "mypass",
    "env": {}
  }],
  "setupMongo": true,
  "appName": "myapp1",
  "app": "../myapp1",
  "env": {
    "PORT": 8000,
    "ROOT_URL": "http://my.domain.com"
  },
  "deployCheckWaitTime": 15,
  "enableUploadProgressBar": true,
  "ssl": {
    "certificate": "../ssl/bundle.crt",
    "key": "../ssl/private.key",
    "port": 8001
  }
}
{
  "servers": [{
    "host": "111.222.333.444",
    "username": "root",
    "password": "mypass",
    "env": {}
  }],
  "setupMongo": true,
  "appName": "myapp2",
  "app": "../myapp2",
  "env": {
    "PORT": 8100,
    "ROOT_URL": "http://my.domain.com"
  },
  "deployCheckWaitTime": 15,
  "enableUploadProgressBar": true,
  "ssl": {
    "certificate": "../ssl/bundle.crt",
    "key": "../ssl/private.key",
    "port": 8101
  }
}
bundle.crt and private.key are common for all apps.
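(If you are wondering how to produce bundle.crt: it is typically the domain certificate concatenated with the CA intermediate chain; the input file names here are hypothetical:)

cat my_domain.crt intermediate_ca.crt > bundle.crt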
Don't forget to use mupx.
So after
mupx setup
mupx deploy
we can access the first app at
http://my.domain.com:8000
https://my.domain.com:8001
and the second app at
http://my.domain.com:8100
https://my.domain.com:8101
EDIT: accessing over HTTP is not working. I don't know why; maybe it's just my configuration. But I don't need that feature, I only need HTTPS. If you know how to fix it, please write.
EDIT2: it's alright, HTTP access works. The cause was the Chrome browser, which always redirects my domain from HTTP to HTTPS. After clearing the browser history, it all works.