Env variables missing when using a DaemonSet to create processes in Kubernetes

I want to deploy software onto nodes with a DaemonSet, but it is not a Docker app. I created a DaemonSet JSON like this:
"template": {
"metadata": {
"creationTimestamp": null,
"labels": {
"app": "uniagent"
},
"annotations": {
"scheduler.alpha.kubernetes.io/tolerations": "[{\"key\":\"beta.k8s.io/accepted-app\",\"operator\":\"Exists\", \"effect\":\"NoSchedule\"}]"
},
"enable": true
},
"spec": {
"restartPolicy": "Always",
"terminationGracePeriodSeconds": 30,
"dnsPolicy": "ClusterFirst",
"securityContext": {},
"processes": [
{
"name": "foundation",
"package": "xxxxx",
"resources": {
"limits": {
"cpu": "100m",
"memory": "1Gi"
}
},
"lifecyclePlan": {
"kind": "ProcessLifecycle",
"namespace": "engb",
"name": "app-plc"
},
"env": [
{
"name": "SECRET_USERNAME",
"valueFrom": {
"secretKeyRef": {
"name": "key-secret",
"key": "uniagentuser"
}
}
},
{
"name": "SECRET_PASSWORD",
"valueFrom": {
"secretKeyRef": {
"name": "key-secret",
"key": "uniagenthash"
}
}
}
]
},
When the app deploys successfully, the env variables do not exist at all.
What should I do to solve this problem?
Thanks

DaemonSets have to run containers. You can't run non-containerized programs as DaemonSets: https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/ Kubernetes only launches containers.
Also, your manifest contains a "processes" key, and I have reason to believe it's not a valid manifest, so I doubt you deployed it successfully.
You have not pasted the full file, but I'm guessing the "template" key at the beginning is the spec.template key of the file.
Run kubectl explain daemonset.spec.template.spec and you'll see that there is no "processes" field; containers belong under the "containers" field.
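For illustration, a minimal sketch of a valid DaemonSet manifest that sets the same env variables through a container; the image name is a placeholder, and the secret/key names are taken from the question:

{
  "apiVersion": "apps/v1",
  "kind": "DaemonSet",
  "metadata": { "name": "uniagent" },
  "spec": {
    "selector": { "matchLabels": { "app": "uniagent" } },
    "template": {
      "metadata": { "labels": { "app": "uniagent" } },
      "spec": {
        "containers": [
          {
            "name": "foundation",
            "image": "your-registry/uniagent:latest",
            "env": [
              {
                "name": "SECRET_USERNAME",
                "valueFrom": {
                  "secretKeyRef": { "name": "key-secret", "key": "uniagentuser" }
                }
              },
              {
                "name": "SECRET_PASSWORD",
                "valueFrom": {
                  "secretKeyRef": { "name": "key-secret", "key": "uniagenthash" }
                }
              }
            ]
          }
        ]
      }
    }
  }
}

Apply it with kubectl apply -f daemonset.json and verify the variables inside the pod with kubectl exec <pod-name> -- env.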

Related

REST dataset for Copy Activity source gives me error Invalid PaginationRule

My Copy Activity is set up to use a REST GET API call as my source. I keep getting Error Code 2200 Invalid PaginationRule RuleKey=supportRFC5988.
I can call the GET REST URL using the Web Activity, but this isn't optimal as I then have to pass the output to a stored procedure to load the data into the table. I would much rather use the Copy Activity.
Any ideas why I would get an Invalid PaginationRule error on this call?
I'm using a REST Linked Service with the following properties:
Name: Workday
Connect via integration runtime: link-unknown-self-hosted-ir
Base URL: https://wd2-impl-services1.workday.com/ccx/service
Authentication type: Basic
User name: Not telling
Azure Key Vault for password
Server Certificate Validation is enabled
Parameters: Name: format, Type: String, Default value: json
Datasource:
"name": "Workday_Test_REST_Report",
"properties": {
"linkedServiceName": {
"referenceName": "Workday",
"type": "LinkedServiceReference",
"parameters": {
"format": "json"
}
},
"folder": {
"name": "Workday"
},
"annotations": [],
"type": "RestResource",
"typeProperties": {
"relativeUrl": "/customreport2/company1/person%40company.com/HIDDEN_BI_RaaS_Test_Outbound"
},
"schema": []
}
}
Copy Activity
{
  "name": "Copy Test Workday REST API output to a table",
  "properties": {
    "activities": [
      {
        "name": "Copy data1",
        "type": "Copy",
        "dependsOn": [],
        "policy": {
          "timeout": "7.00:00:00",
          "retry": 0,
          "retryIntervalInSeconds": 30,
          "secureOutput": false,
          "secureInput": false
        },
        "userProperties": [],
        "typeProperties": {
          "source": {
            "type": "RestSource",
            "httpRequestTimeout": "00:01:40",
            "requestInterval": "00.00:00:00.010",
            "requestMethod": "GET",
            "paginationRules": {
              "supportRFC5988": "true"
            }
          },
          "sink": {
            "type": "SqlMISink",
            "tableOption": "autoCreate"
          },
          "enableStaging": false
        },
        "inputs": [
          {
            "referenceName": "Workday_Test_REST_Report",
            "type": "DatasetReference"
          }
        ],
        "outputs": [
          {
            "referenceName": "Destination_db",
            "type": "DatasetReference",
            "parameters": {
              "schema": "ELT",
              "tableName": "WorkdayTestReportData"
            }
          }
        ]
      }
    ],
    "folder": {
      "name": "Workday"
    },
    "annotations": []
  }
}
Well, after posting this, I noticed that the Copy Activity code contains a nugget: "supportRFC5988": "true". I switched the true to false, and everything just worked for me. I don't see a way to change this in the Copy Activity GUI; I had to edit the JSON source directly, as shown below.
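For reference, a sketch of the changed fragment inside the Copy Activity source (all other values exactly as in the pipeline JSON above):

"source": {
  "type": "RestSource",
  "httpRequestTimeout": "00:01:40",
  "requestInterval": "00.00:00:00.010",
  "requestMethod": "GET",
  "paginationRules": {
    "supportRFC5988": "false"
  }
}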
Editing the source code and setting this option to false helped!

Set Subnet ID and EC2 Key Name in EMR Cluster Config via Step Functions

As of November 2019, AWS Step Functions has native support for orchestrating EMR clusters, so we are trying to configure a cluster and run some jobs on it.
We could not find any documentation on how to set the SubnetId or the key name used for the EC2 instances in the cluster. Is there any such possibility?
As of now, our create-cluster step looks as follows:
"States": {
"Create an EMR cluster": {
"Type": "Task",
"Resource": "arn:aws:states:::elasticmapreduce:createCluster.sync",
"Parameters": {
"Name": "TestCluster",
"VisibleToAllUsers": true,
"ReleaseLabel": "emr-5.26.0",
"Applications": [
{ "Name": "spark" }
],
"ServiceRole": "SomeRole",
"JobFlowRole": "SomeInstanceProfile",
"LogUri": "s3://some-logs-bucket/logs",
"Instances": {
"KeepJobFlowAliveWhenNoSteps": true,
"InstanceFleets": [
{
"Name": "MasterFleet",
"InstanceFleetType": "MASTER",
"TargetOnDemandCapacity": 1,
"InstanceTypeConfigs": [
{
"InstanceType": "m3.2xlarge"
}
]
},
{
"Name": "CoreFleet",
"InstanceFleetType": "CORE",
"TargetSpotCapacity": 2,
"InstanceTypeConfigs": [
{
"InstanceType": "m3.2xlarge",
"BidPriceAsPercentageOfOnDemandPrice": 100 }
]
}
]
}
},
"ResultPath": "$.cluster",
"End": "true"
}
}
As soon as we try to add a "SubnetId" key to any of the subobjects in Parameters, or to Parameters itself, we get the error:
Invalid State Machine Definition: 'SCHEMA_VALIDATION_FAILED: The field "SubnetId" is not supported by Step Functions at /States/Create an EMR cluster/Parameters' (Service: AWSStepFunctions; Status Code: 400; Error Code: InvalidDefinition;
Referring to the Step Functions docs on the EMR integration, we can see that createCluster.sync uses the EMR API RunJobFlow. In RunJobFlow we can specify Ec2KeyName and Ec2SubnetId at the paths $.Instances.Ec2KeyName and $.Instances.Ec2SubnetId.
With that said, I managed to create a state machine with the following definition (on a side note, your definition had a syntax error: "End": "true" should be "End": true):
{
  "Comment": "A Hello World example of the Amazon States Language using Pass states",
  "StartAt": "Create an EMR cluster",
  "States": {
    "Create an EMR cluster": {
      "Type": "Task",
      "Resource": "arn:aws:states:::elasticmapreduce:createCluster.sync",
      "Parameters": {
        "Name": "TestCluster",
        "VisibleToAllUsers": true,
        "ReleaseLabel": "emr-5.26.0",
        "Applications": [
          {
            "Name": "spark"
          }
        ],
        "ServiceRole": "SomeRole",
        "JobFlowRole": "SomeInstanceProfile",
        "LogUri": "s3://some-logs-bucket/logs",
        "Instances": {
          "Ec2KeyName": "ENTER_EC2KEYNAME_HERE",
          "Ec2SubnetId": "ENTER_EC2SUBNETID_HERE",
          "KeepJobFlowAliveWhenNoSteps": true,
          "InstanceFleets": [
            {
              "Name": "MasterFleet",
              "InstanceFleetType": "MASTER",
              "TargetOnDemandCapacity": 1,
              "InstanceTypeConfigs": [
                {
                  "InstanceType": "m3.2xlarge"
                }
              ]
            },
            {
              "Name": "CoreFleet",
              "InstanceFleetType": "CORE",
              "TargetSpotCapacity": 2,
              "InstanceTypeConfigs": [
                {
                  "InstanceType": "m3.2xlarge",
                  "BidPriceAsPercentageOfOnDemandPrice": 100
                }
              ]
            }
          ]
        }
      },
      "ResultPath": "$.cluster",
      "End": true
    }
  }
}

How to get currently running containers of a service using Docker Engine API?

I'm trying to get the currently running containers of a service so I can visualize them like Portainer.io does.
Portainer shows the currently running containers and replicas, e.g. 5/8.
I can get the desired replica number using the Engine API's /services endpoint, as sketched below.
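(A sketch, assuming the Engine API is exposed over TCP and jq is available; host and port are placeholders:)

# desired replica count per service, read from the service spec
curl -s "http://<docker-host>:<port>/v1.40/services" \
  | jq '.[] | {name: .Spec.Name, desired: .Spec.Mode.Replicated.Replicas}'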
What I couldn't find is the currently running containers of a service.
The /services endpoint returns a result like:
{
  "ID": "frf43534t43543t43gt435",
  "Version": {
    "Index": 10936
  },
  "CreatedAt": "2019-12-11T14:36:03.361254384Z",
  "UpdatedAt": "2019-12-11T14:40:19.911714617Z",
  "Spec": {
    "Name": "connector-service",
    "Labels": {
      "com.docker.stack.image": "connector",
      "com.docker.stack.namespace": "conn"
    },
    "TaskTemplate": {
      "ContainerSpec": {
        "Image": "connector:latest",
        "Labels": {
          "com.docker.stack.namespace": "conn"
        },
        "Hostname": "connector-service{{.Task.Slot}}",
        "Env": [
          "CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR=3",
          "CONNECT_STATUS_STORAGE_REPLICATION_FACTOR=3"
        ],
        "Privileges": {
          "CredentialSpec": null,
          "SELinuxContext": null
        },
        "Isolation": "default"
      },
      "Resources": {},
      "Placement": {},
      "Networks": [
        {
          "Target": "sfer32432fr4ewt4r3g4tr54",
          "Aliases": [
            "connector-service"
          ]
        }
      ],
      "ForceUpdate": 0,
      "Runtime": "container"
    },
    "Mode": {
      "Replicated": {
        "Replicas": 6
      }
    },
    "EndpointSpec": {
      "Mode": "vip",
      "Ports": [
        {
          "Protocol": "tcp",
          "TargetPort": 8083,
          "PublishedPort": 8083,
          "PublishMode": "ingress"
        }
      ]
    }
  },
  "Endpoint": {
    "Spec": {
      "Mode": "vip",
      "Ports": [
        {
          "Protocol": "tcp",
          "TargetPort": 8083,
          "PublishedPort": 8083,
          "PublishMode": "ingress"
        }
      ]
    },
    "Ports": [
      {
        "Protocol": "tcp",
        "TargetPort": 8083,
        "PublishedPort": 8083,
        "PublishMode": "ingress"
      }
    ],
    "VirtualIPs": [
      {
        "NetworkID": "safcedsvcsg4425r32dsf",
        "Addr": "10.0.0.55/24"
      },
      {
        "NetworkID": "sfsfe4233fr3g435432greg43",
        "Addr": "10.0.3.11/24"
      }
    ]
  }
}
I've realized that in the Engine API containers can be retrieved through two endpoints: /containers and /tasks. To get the running containers of a service, the /tasks endpoint can be used with two filters, for example: http://192.168.4.142:1777/v1.40/tasks?filters={"service":{"my-service":true},"desired-state":{"running":true}}
This endpoint returns the running tasks of a service, while the /services endpoint returns the desired number, so one can work out how many of the desired containers are actually running.
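As a sketch, the same call with the filters URL-encoded by curl (assuming the daemon is reachable at the host/port above and jq is available for counting):

# count the running tasks (containers) of service "my-service"
curl -sG "http://192.168.4.142:1777/v1.40/tasks" \
  --data-urlencode 'filters={"service":{"my-service":true},"desired-state":{"running":true}}' \
  | jq length

Comparing that count with .Spec.Mode.Replicated.Replicas from /services gives the Portainer-style 5/8 display.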

AWS CodeBuild - cached DOWNLOAD_SOURCE taking too long

I am trying to improve the build speed of one of my projects on CodeBuild. The project uses a GitHub source provider and local source caching is enabled.
The first time I ran the build it took 103 seconds. I ran it again immediately after the first one finished, expecting it to complete in a few seconds thanks to source caching, but it took 60 seconds.
What am I missing here? Is the cache not working? If it is working, why does the second run take that long?
Thanks
Project Details:
{
  "projectsNotFound": [],
  "projects": [
    {
      "environment": {
        "computeType": "BUILD_GENERAL1_LARGE",
        "imagePullCredentialsType": "SERVICE_ROLE",
        "privilegedMode": true,
        "image": "111669150171.dkr.ecr.us-east-1.amazonaws.com/***********/ep-build-env:latest",
        "environmentVariables": [
          {
            "type": "PLAINTEXT",
            "name": "NEXUS_URI",
            "value": "http://***************"
          },
          {
            "type": "PLAINTEXT",
            "name": "REGISTRY",
            "value": "111669150171.dkr.ecr.us-east-1.amazonaws.com/*********"
          }
        ],
        "type": "LINUX_CONTAINER"
      },
      "timeoutInMinutes": 60,
      "name": "StorefrontApi",
      "serviceRole": "arn:aws:iam::111669150171:role/CodeBuild-ECRReadOnly",
      "tags": [],
      "artifacts": {
        "type": "NO_ARTIFACTS"
      },
      "lastModified": 1571227097.581,
      "cache": {
        "type": "LOCAL",
        "modes": [
          "LOCAL_DOCKER_LAYER_CACHE",
          "LOCAL_SOURCE_CACHE",
          "LOCAL_CUSTOM_CACHE"
        ]
      },
      "vpcConfig": {
        "subnets": [
          "subnet-fd7f958b"
        ],
        "vpcId": "vpc-71e3f414",
        "securityGroupIds": [
          "sg-19b65e6c",
          "sg-9e28e9f9"
        ]
      },
      "created": 1571082681.262,
      "sourceVersion": "refs/heads/ep-mysql",
      "source": {
        "buildspec": "version: 0.2\n\nphases:\n build:\n commands:\n - env\n - cd extensions\n - mvn --settings $CODEBUILD_SRC_DIR_DEVOPS_WINE/pipelines/storefront/build-war/settings.xml --projects storefront/ext-storefront-webapp -am -DskipAllTests clean install\n\nartifacts:\n secondary-artifacts:\n storefront-war:\n base-directory: $CODEBUILD_SRC_DIR/extensions/storefront/ext-storefront-webapp/target\n files:\n - \"*.war\"\n\ncache:\n paths:\n - '/root/.m2/**/*'\n - '/root/.npm/**/*'",
        "insecureSsl": false,
        "gitSubmodulesConfig": {
          "fetchSubmodules": false
        },
        "location": "https://github.com/*****************.git",
        "gitCloneDepth": 1,
        "type": "GITHUB",
        "reportBuildStatus": false
      },
      "badge": {
        "badgeEnabled": false
      },
      "queuedTimeoutInMinutes": 480,
      "secondaryArtifacts": [],
      "logsConfig": {
        "s3Logs": {
          "status": "DISABLED",
          "encryptionDisabled": false
        },
        "cloudWatchLogs": {
          "status": "ENABLED"
        }
      },
      "secondarySources": [
        {
          "insecureSsl": false,
          "gitSubmodulesConfig": {
            "fetchSubmodules": false
          },
          "location": "https://github.com/*****************.git",
          "sourceIdentifier": "DEVOPS_WINE",
          "gitCloneDepth": 1,
          "type": "GITHUB",
          "reportBuildStatus": false
        }
      ],
      "encryptionKey": "arn:aws:kms:us-east-1:111669150171:alias/aws/s3",
      "arn": "arn:aws:codebuild:us-east-1:111669150171:project/StorefrontApi",
      "secondarySourceVersions": [
        {
          "sourceVersion": "refs/heads/staging",
          "sourceIdentifier": "DEVOPS_WINE"
        }
      ]
    }
  ]
}
Apparently, at the time of writing, CodeBuild does not use the native git client to fetch the source from GitHub. I understand the CodeBuild team has an internal feature request to move to the native git client to improve performance.
Does your repository, by chance, have lots of large files in its history? You can use this answer for a command to analyze your repository.
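One common variant of such an analysis command is sketched below; it lists the 20 largest blobs anywhere in the repository's history (sizes in bytes):

# list the 20 largest blobs in history: <size> <path>
git rev-list --objects --all \
  | git cat-file --batch-check='%(objecttype) %(objectname) %(objectsize) %(rest)' \
  | awk '/^blob/ {print $3, $4}' \
  | sort -rn \
  | head -20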
If your history contains lots of large files that you're able to remove, you can use a tool like BFG Repo Cleaner to rewrite history. That should speed up the DOWNLOAD_SOURCE phase.
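A typical BFG run looks roughly like the sketch below (the repository URL and size threshold are placeholders; this rewrites history, so read the BFG docs and coordinate with your team first):

# mirror-clone, strip large blobs, expire reflogs, repack, push
git clone --mirror https://github.com/your-org/your-repo.git
java -jar bfg.jar --strip-blobs-bigger-than 10M your-repo.git
cd your-repo.git
git reflog expire --expire=now --all
git gc --prune=now --aggressive
git push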
Also, if you have a dedicated support plan with AWS, reach out to your TAM to upvote the feature request to move to native git for GitHub source downloads.

Swagger UI and Docker Container Communication

I have a Docker container running Swagger UI on port 80, and another API running in a second container on port 32788.
http://127.0.0.1:80/ >>> returns swagger UI
http://127.0.0.1:32788/swagger.json >>> returns swagger API def
But when I put the JSON file's URL into the Swagger UI field and hit Explore, it says:
NetworkError when attempting to fetch resource. http://127.0.0.1:32788/swagger.json
Any ideas on how to solve this? The docs say that the containers should automatically be connected to the bridge network.
Below is the result of the network inspection
docker network inspect bridge
[
  {
    "Name": "bridge",
    "Id": "4b5cc1526055297df70dc9adc4959fcee93384c412fbf90500c041b5b83ed43a",
    "Created": "2018-01-17T03:48:39.2325461Z",
    "Scope": "local",
    "Driver": "bridge",
    "EnableIPv6": false,
    "IPAM": {
      "Driver": "default",
      "Options": null,
      "Config": [
        {
          "Subnet": "172.17.0.0/16",
          "Gateway": "172.17.0.1"
        }
      ]
    },
    "Internal": false,
    "Attachable": false,
    "Ingress": false,
    "ConfigFrom": {
      "Network": ""
    },
    "ConfigOnly": false,
    "Containers": {
      "257a15af9ab9b25c6c5622fb0ebe599e5703b2ca5f2e4eaa97a8745a21e7f9a9": {
        "Name": "pensive_neumann",
        "EndpointID": "22be4b781f75e071bcb0098b917b81b16ca493e9080848188dd7a811c27070ec",
        "MacAddress": "02:42:ac:11:00:02",
        "IPv4Address": "172.17.0.2/16",
        "IPv6Address": ""
      },
      "30de904a599a19075d5e20ef5d974a11be9d7e58a68d984a24f4af9e22c4d92b": {
        "Name": "naughty_mirzakhani",
        "EndpointID": "f704b3e103a82ca5c56d5955ac27845d8951cfe13f0bc3e1ccc8717ea9c28d39",
        "MacAddress": "02:42:ac:11:00:03",
        "IPv4Address": "172.17.0.3/16",
        "IPv6Address": ""
      }
    },
    "Options": {
      "com.docker.network.bridge.default_bridge": "true",
      "com.docker.network.bridge.enable_icc": "true",
      "com.docker.network.bridge.enable_ip_masquerade": "true",
      "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
      "com.docker.network.bridge.name": "docker0",
      "com.docker.network.driver.mtu": "1500"
    },
    "Labels": {}
  }
]
Edit to explain how each was started:
The API is part of Azure Machine Learning, so it's hard to say exactly how it gets started (unless there is some command I can run in Docker):
az ml service create realtime
Swagger UI was started as follows:
docker run -p 80:8080 swaggerapi/swagger-ui