I have a Docker container running Swagger UI on port 80, and another container running an API on port 32788:
http://127.0.0.1:80/ >>> returns the Swagger UI
http://127.0.0.1:32788/swagger.json >>> returns the Swagger API definition
But when I put the JSON file's URL into the Swagger UI field and hit Explore, it says:
NetworkError when attempting to fetch resource. http://127.0.0.1:32788/swagger.json
Any ideas on how to solve this? The docs say that the containers should automatically be connected to the bridge network.
Below is the result of inspecting the network:
docker network inspect bridge
[
    {
        "Name": "bridge",
        "Id": "4b5cc1526055297df70dc9adc4959fcee93384c412fbf90500c041b5b83ed43a",
        "Created": "2018-01-17T03:48:39.2325461Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.0.0/16",
                    "Gateway": "172.17.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "257a15af9ab9b25c6c5622fb0ebe599e5703b2ca5f2e4eaa97a8745a21e7f9a9": {
                "Name": "pensive_neumann",
                "EndpointID": "22be4b781f75e071bcb0098b917b81b16ca493e9080848188dd7a811c27070ec",
                "MacAddress": "02:42:ac:11:00:02",
                "IPv4Address": "172.17.0.2/16",
                "IPv6Address": ""
            },
            "30de904a599a19075d5e20ef5d974a11be9d7e58a68d984a24f4af9e22c4d92b": {
                "Name": "naughty_mirzakhani",
                "EndpointID": "f704b3e103a82ca5c56d5955ac27845d8951cfe13f0bc3e1ccc8717ea9c28d39",
                "MacAddress": "02:42:ac:11:00:03",
                "IPv4Address": "172.17.0.3/16",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]
Edit, to explain how each was started:
The API is part of Azure Machine Learning, so it's hard to say exactly how it gets started (unless there is some Docker command I can run to find out):
az ml service create realtime
Swagger UI was started as follows:
docker run -p 80:8080 swaggerapi/swagger-ui
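For anyone debugging the same setup: a quick way to tell a container-networking problem from a browser-side (CORS) problem is to fetch the spec from inside the Swagger UI container, using the names and bridge IPs from the inspect output above. A rough sketch, assuming pensive_neumann is the Swagger UI container and 172.17.0.3 is the API container (swap them if the roles are reversed; the Alpine-based image ships busybox wget rather than curl):

# Fetch the spec container-to-container over the docker0 bridge.
docker exec pensive_neumann wget -qO- http://172.17.0.3:32788/swagger.json

If that succeeds, the containers can reach each other, and the NetworkError is being raised by the browser itself, which points at missing CORS headers on the API rather than at Docker networking.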
My Copy Activity is set up to use a REST GET API call as my source. I keep getting Error Code 2200, Invalid PaginationRule RuleKey=supportRFC5988.
I can call the GET REST URL using the Web Activity, but that isn't optimal, as I then have to pass the output to a stored procedure to load the data into the table. I would much rather use the Copy Activity.
Any ideas why I would get an Invalid PaginationRule error on a call?
I'm using a REST Linked Service with the following properties:
Name: Workday
Connect via integration runtime: link-unknown-self-hosted-ir
Base URL: https://wd2-impl-services1.workday.com/ccx/service
Authentication type: Basic
User name: Not telling
Azure Key Vault for password
Server Certificate Validation is enabled
Parameters: Name: format, Type: String, Default value: json
Datasource:
"name": "Workday_Test_REST_Report",
"properties": {
"linkedServiceName": {
"referenceName": "Workday",
"type": "LinkedServiceReference",
"parameters": {
"format": "json"
}
},
"folder": {
"name": "Workday"
},
"annotations": [],
"type": "RestResource",
"typeProperties": {
"relativeUrl": "/customreport2/company1/person%40company.com/HIDDEN_BI_RaaS_Test_Outbound"
},
"schema": []
}
}
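For reference, the request that this linked service and dataset describe can be reproduced outside ADF with curl (password redacted; the base URL and relativeUrl are taken from above, and I'm assuming the format parameter ends up as a query string, which isn't shown):

# Hypothetical reconstruction of the call ADF makes:
curl -u 'username:REDACTED' \
  'https://wd2-impl-services1.workday.com/ccx/service/customreport2/company1/person%40company.com/HIDDEN_BI_RaaS_Test_Outbound?format=json'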
Copy Activity:
{
    "name": "Copy Test Workday REST API output to a table",
    "properties": {
        "activities": [
            {
                "name": "Copy data1",
                "type": "Copy",
                "dependsOn": [],
                "policy": {
                    "timeout": "7.00:00:00",
                    "retry": 0,
                    "retryIntervalInSeconds": 30,
                    "secureOutput": false,
                    "secureInput": false
                },
                "userProperties": [],
                "typeProperties": {
                    "source": {
                        "type": "RestSource",
                        "httpRequestTimeout": "00:01:40",
                        "requestInterval": "00.00:00:00.010",
                        "requestMethod": "GET",
                        "paginationRules": {
                            "supportRFC5988": "true"
                        }
                    },
                    "sink": {
                        "type": "SqlMISink",
                        "tableOption": "autoCreate"
                    },
                    "enableStaging": false
                },
                "inputs": [
                    {
                        "referenceName": "Workday_Test_REST_Report",
                        "type": "DatasetReference"
                    }
                ],
                "outputs": [
                    {
                        "referenceName": "Destination_db",
                        "type": "DatasetReference",
                        "parameters": {
                            "schema": "ELT",
                            "tableName": "WorkdayTestReportData"
                        }
                    }
                ]
            }
        ],
        "folder": {
            "name": "Workday"
        },
        "annotations": []
    }
}
Well, after posting this, I noticed that in the Copy Activity code there is a nugget about "supportRFC5988": "true". I switched the true to false, and everything just worked for me. I don't see a way to change this in the Copy Activity GUI.
Editing the source code and setting this option to false helped!
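For anyone looking for the exact change: in the pipeline's JSON source view (there is no toggle for it in the GUI), the source block from above ends up like this, with only the supportRFC5988 value flipped:

"source": {
    "type": "RestSource",
    "httpRequestTimeout": "00:01:40",
    "requestInterval": "00.00:00:00.010",
    "requestMethod": "GET",
    "paginationRules": {
        "supportRFC5988": "false"
    }
},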
I'm trying to use the Chaos Toolkit Istio extension; my problem is as follows:
I have an experiment.json file which contains a single probe to retrieve a virtual service. The file looks similar to the following:
{
    "version": "1.0.0",
    "title": "test",
    "description": "N/A",
    "tags": [],
    "secrets": {
        "istio": {
            "KUBERNETES_CONTEXT": {
                "type": "env",
                "key": "KUBERNETES_CONTEXT"
            }
        }
    },
    "method": [
        {
            "type": "probe",
            "name": "get_virtual_service",
            "provider": {
                "type": "python",
                "module": "chaosistio.fault.probes",
                "func": "get_virtual_service",
                "arguments": {
                    "virtual_service_name": "test",
                    "ns": "test-ns"
                }
            }
        }
    ]
}
I have set KUBERNETES_CONTEXT and http/https proxy as env vars. My authorisation is using $HOME/.kube/config.
When playing the experiment, it validates the file fine and tries to perform the probe, but it gets stuck and just hangs until it times out.
The error I see in the logs is an HTTPSConnectionPool error (failed to establish a new connection, operation timed out).
Am I missing any settings? All help appreciated.
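In case it helps anyone hitting the same hang: since the probe goes through the Kubernetes API, it's worth ruling out the proxy env vars capturing API-server traffic. A rough sketch, with a placeholder host (the virtualservices resource assumes the Istio CRDs are installed):

# Can kubectl reach the same cluster and resource directly?
kubectl --context "$KUBERNETES_CONTEXT" get virtualservices -n test-ns

# If the API server is on the internal network, exempt it from the proxy
# before re-running the experiment:
export NO_PROXY="$NO_PROXY,your-api-server.internal.example.com"
chaos run experiment.json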
I'm trying to get the currently running containers of a service to visualize them like Portainer.io does.
Portainer shows the currently running containers and desired replicas, like 5/8.
I can get the desired replica number using the Engine API's /services endpoint.
What I couldn't find is the currently running containers of a service.
The /services endpoint returns a result like:
{
    "ID": "frf43534t43543t43gt435",
    "Version": {
        "Index": 10936
    },
    "CreatedAt": "2019-12-11T14:36:03.361254384Z",
    "UpdatedAt": "2019-12-11T14:40:19.911714617Z",
    "Spec": {
        "Name": "connector-service",
        "Labels": {
            "com.docker.stack.image": "connector",
            "com.docker.stack.namespace": "conn"
        },
        "TaskTemplate": {
            "ContainerSpec": {
                "Image": "connector:latest",
                "Labels": {
                    "com.docker.stack.namespace": "conn"
                },
                "Hostname": "connector-service{{.Task.Slot}}",
                "Env": [
                    "CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR=3",
                    "CONNECT_STATUS_STORAGE_REPLICATION_FACTOR=3"
                ],
                "Privileges": {
                    "CredentialSpec": null,
                    "SELinuxContext": null
                },
                "Isolation": "default"
            },
            "Resources": {},
            "Placement": {},
            "Networks": [
                {
                    "Target": "sfer32432fr4ewt4r3g4tr54",
                    "Aliases": [
                        "connector-service"
                    ]
                }
            ],
            "ForceUpdate": 0,
            "Runtime": "container"
        },
        "Mode": {
            "Replicated": {
                "Replicas": 6
            }
        },
        "EndpointSpec": {
            "Mode": "vip",
            "Ports": [
                {
                    "Protocol": "tcp",
                    "TargetPort": 8083,
                    "PublishedPort": 8083,
                    "PublishMode": "ingress"
                }
            ]
        }
    },
    "Endpoint": {
        "Spec": {
            "Mode": "vip",
            "Ports": [
                {
                    "Protocol": "tcp",
                    "TargetPort": 8083,
                    "PublishedPort": 8083,
                    "PublishMode": "ingress"
                }
            ]
        },
        "Ports": [
            {
                "Protocol": "tcp",
                "TargetPort": 8083,
                "PublishedPort": 8083,
                "PublishMode": "ingress"
            }
        ],
        "VirtualIPs": [
            {
                "NetworkID": "safcedsvcsg4425r32dsf",
                "Addr": "10.0.0.55/24"
            },
            {
                "NetworkID": "sfsfe4233fr3g435432greg43",
                "Addr": "10.0.3.11/24"
            }
        ]
    }
}
I've realized that in the Engine API, containers can be retrieved via two endpoints: /containers and /tasks. To get the running containers of a service, the /tasks endpoint can be used with two filters, for example: http://192.168.4.142:1777/v1.40/tasks?filters={"service":{"my-service":true},"desired-state":{"running":true}}
This endpoint returns the running containers of a service, while the /services endpoint returns the desired number, so one can work out how many of the desired containers are running.
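Putting the two calls together, a rough sketch (host, port, and service name from the example above; jq is assumed to be available, and curl URL-encodes the filters JSON here):

API=http://192.168.4.142:1777/v1.40
FILTERS='{"service":{"connector-service":true},"desired-state":{"running":true}}'

# Number of running tasks vs. the desired replica count, e.g. "5/6".
running=$(curl -s --get "$API/tasks" --data-urlencode "filters=$FILTERS" | jq length)
desired=$(curl -s "$API/services/connector-service" | jq '.Spec.Mode.Replicated.Replicas')
echo "$running/$desired"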
I want to deploy software onto nodes with a DaemonSet, but it is not a Docker app. I created a DaemonSet JSON like this:
"template": {
"metadata": {
"creationTimestamp": null,
"labels": {
"app": "uniagent"
},
"annotations": {
"scheduler.alpha.kubernetes.io/tolerations": "[{\"key\":\"beta.k8s.io/accepted-app\",\"operator\":\"Exists\", \"effect\":\"NoSchedule\"}]"
},
"enable": true
},
"spec": {
"restartPolicy": "Always",
"terminationGracePeriodSeconds": 30,
"dnsPolicy": "ClusterFirst",
"securityContext": {},
"processes": [
{
"name": "foundation",
"package": "xxxxx",
"resources": {
"limits": {
"cpu": "100m",
"memory": "1Gi"
}
},
"lifecyclePlan": {
"kind": "ProcessLifecycle",
"namespace": "engb",
"name": "app-plc"
},
"env": [
{
"name": "SECRET_USERNAME",
"valueFrom": {
"secretKeyRef": {
"name": "key-secret",
"key": "uniagentuser"
}
}
},
{
"name": "SECRET_PASSWORD",
"valueFrom": {
"secretKeyRef": {
"name": "key-secret",
"key": "uniagenthash"
}
}
}
]
},
When the app deploys successfully, the env variables do not exist at all.
What should I do to solve this problem?
Thanks
DaemonSets have to be Docker containers. You can't run non-containerized programs as DaemonSets (https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/); Kubernetes only launches containers.
Also, in your manifest file I see a "processes" key, and I have reason to believe it's not a valid manifest, so I doubt you deployed it successfully.
You have not pasted the full file, but I'm guessing the "template" key at the beginning is the spec.template key of the file.
Run kubectl explain daemonset.spec.template.spec and you'll see that there is no "processes" field.
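For example:

# Lists the valid fields under the pod template spec; there is a
# "containers" field but no "processes" field.
kubectl explain daemonset.spec.template.spec

# Drill into the field that actually defines workloads:
kubectl explain daemonset.spec.template.spec.containers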
I need a mini-HDFS service that can run on a single agent, so I started building one in a Docker container and deployed it to DCOS. The Namenode UI comes up, but unstyled. It turns out that the references inside the UI were not prefixed.
My service is at http://m1.dcos/service/small-hdfs/dfshealth.html
The browser generates requests such as http://m1.dcos/static/bootstrap-3.0.2/css/bootstrap.min.css
instead of http://m1.dcos/service/small-hdfs/static/bootstrap-3.0.2/css/bootstrap.min.css.
This is my marathon.json - very basic for now - I'll expose the volumes after I get it basically working ...
How do I fix this? If I can pass the prefix into the container, I may be able to configure a Hadoop property with it, but I'm not sure that is possible. I also did not see any documented way of passing this prefix.
{
    "id": "small-hdfs",
    "cmd": "/root/docker_entrypoint.sh",
    "cpus": 1.5,
    "mem": 4096.0,
    "instances": 1,
    "container": {
        "type": "DOCKER",
        "docker": {
            "image": "slowenthal/small-hdfs",
            "network": "BRIDGE",
            "portMappings": [
                { "containerPort": 9000, "hostPort": 0, "protocol": "tcp" },
                { "containerPort": 50070, "hostPort": 0, "protocol": "tcp" }
            ]
        }
    },
    "labels": {
        "DCOS_SERVICE_NAME": "small-hdfs",
        "DCOS_SERVICE_PORT_INDEX": "1",
        "DCOS_SERVICE_SCHEME": "http"
    }
}
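For what it's worth, Marathon app definitions accept a top-level env block, so the prefix can at least be injected into the container; whether the Namenode UI can be made to honor it is the open question. The variable name here is made up:

"env": {
    "SERVICE_PREFIX": "/service/small-hdfs"
}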