Kubernetes e2e tests fail with spec.configSource: Invalid value

We are running Kubernetes (1.15.3) e2e tests via Sonobuoy and 3 of them fail with the same error:
/go/src/k8s-tests/test/e2e/framework/framework.go:674
error setting labels on node
Expected error:
<*errors.StatusError | 0xc001e6def0>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: "", Continue: ""},
Status: "Failure",
Message: "Node \"nightly-e2e-rhel76-1vm\" is invalid: spec.configSource: Invalid value: core.NodeConfigSource{ConfigMap:(*core.ConfigMapNodeConfigSource)(nil)}: exactly one reference subfield must be non-nil",
Reason: "Invalid",
Details: {
Name: "nightly-e2e-rhel76-1vm",
Group: "",
Kind: "Node",
UID: "",
Causes: [
{
Type: "FieldValueInvalid",
Message: "Invalid value: core.NodeConfigSource{ConfigMap:(*core.ConfigMapNodeConfigSource)(nil)}: exactly one reference subfield must be non-nil",
Field: "spec.configSource",
},
],
RetryAfterSeconds: 0,
},
Code: 422,
},
}
Node "nightly-e2e-rhel76-1vm" is invalid: spec.configSource: Invalid value: core.NodeConfigSource{ConfigMap:(*core.ConfigMapNodeConfigSource)(nil)}: exactly one reference subfield must be non-nil
not to have occurred
/go/src/k8s-tests/test/e2e/apps/daemon_set.go:170
kubectl get nodes -o yaml gives these fields, and yes, we do have dynamic kubelet config enabled:
spec:
  configSource:
    configMap:
      kubeletConfigKey: kubelet
      name: kubelet-config-1.15.3-1581671888
      namespace: kube-system
[cloud-user@nightly-e2e-rhel76-1vm ~]$ kubectl get no nightly-e2e-rhel76-1vm -o json | jq .status.config
{
"active": {
"configMap": {
"kubeletConfigKey": "kubelet",
"name": "kubelet-config-1.15.3-1581671888",
"namespace": "kube-system",
"resourceVersion": "508",
"uid": "71961ed7-2fff-41f5-80b7-78167fb056fc"
}
},
"assigned": {
"configMap": {
"kubeletConfigKey": "kubelet",
"name": "kubelet-config-1.15.3-1581671888",
"namespace": "kube-system",
"resourceVersion": "508",
"uid": "71961ed7-2fff-41f5-80b7-78167fb056fc"
}
},
"lastKnownGood": {
"configMap": {
"kubeletConfigKey": "kubelet",
"name": "kubelet-config-1.15.3-1581671888",
"namespace": "kube-system",
"resourceVersion": "508",
"uid": "71961ed7-2fff-41f5-80b7-78167fb056fc"
}
}
}
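Reading the error literally, the update sent by the test carried a spec.configSource whose configMap reference was nil, which is exactly what the "exactly one reference subfield must be non-nil" check rejects. Roughly, the difference between what the node has and what the rejected update effectively contained is the following (a sketch based only on the output and error above):
# What the node currently has (valid: the configMap reference is populated):
configSource:
  configMap:
    kubeletConfigKey: kubelet
    name: kubelet-config-1.15.3-1581671888
    namespace: kube-system
# What the rejected update effectively carried, per the error message
# (a NodeConfigSource whose configMap reference is nil):
configSource: {}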
What are we missing?
Thanks.

Related

Invalid s3 object key in s3:ObjectCreated:Put event

I have a Lambda function that is triggered when an image is uploaded to S3 and optimises it.
Everything works when I upload a file from the console, but when I upload it from code using boto3 I receive an invalid key in the event.
Uploaded by the renderImage Lambda function:
{
"Records": [
{
"eventVersion": "2.1",
"eventSource": "aws:s3",
"awsRegion": "eu-central-1",
"eventTime": "2022-08-08T20:32:14.880Z",
"eventName": "ObjectCreated:Put",
"userIdentity": {
"principalId": "AWS::*******************:renderImage"
},
"requestParameters": {
"sourceIPAddress": "*.*.*.*"
},
"responseElements": {
"x-amz-request-id": "***********",
"x-amz-id-2": "*****************************************************************************"
},
"s3": {
"s3SchemaVersion": "1.0",
"configurationId": "s3ImageOptimizer-2386451296541264526234",
"bucket": {
"name": "myBucket",
"ownerIdentity": {
"principalId": "**************"
},
"arn": "arn:aws:s3:::myBucket"
},
"object": {
"key": "000-2022_08_08-08%3A32%3A04_PM.jpg", # <-- Invalid
"size": 558339,
"eTag": "e7e63f48Fd1849a53628255353f7027b",
"sequencer": "0062F172CED5CE84E2"
}
}
}
]
}
Uploaded by the console:
{
"Records": [
{
"eventVersion": "2.1",
"eventSource": "aws:s3",
"awsRegion": "eu-central-1",
"eventTime": "2022-08-08T20:33:43.584Z",
"eventName": "ObjectCreated:Put",
"userIdentity": {
"principalId": "AWS:*******************"
},
"requestParameters": {
"sourceIPAddress": "*.*.*.*"
},
"responseElements": {
"x-amz-request-id": "***********",
"x-amz-id-2": "*****************************************************************************"
},
"s3": {
"s3SchemaVersion": "1.0",
"configurationId": "s3ImageOptimizer-2386451296541264526234",
"bucket": {
"name": "myBucket",
"ownerIdentity": {
"principalId": "**************"
},
"arn": "arn:aws:s3:::myBucket"
},
"object": {
"key": "000-2022_08_08-08-32-04_PM.jpg", # <-- Correct
"size": 558339,
"eTag": "e7e63f48Fd1849a53628255353f7027b",
"sequencer": "0062F173278112713C"
}
}
}
]
}
The thing is that when I upload using the Lambda function, the key in the event is invalid; it returns
000-2022_08_08-08%3A32%3A04_PM.jpg
instead of
000-2022_08_08-08-32-04_PM.jpg
It seems that two of the characters are being percent-encoded and replaced with %3A (the URL encoding of a colon).
Why is this happening?
Here is my serverless.yml definition:
service: s3-optimizer
package:
  individually: true
plugins:
  - serverless-iam-roles-per-function
provider:
  name: aws
  deploymentMethod: direct
  runtime: nodejs16.x
  region: eu-central-1
  stage: develop
  memorySize: 4096
  apiGateway:
    minimumCompressionSize: 1024
    shouldStartNameWithService: true
  httpApi:
    cors: true
functions:
  s3ImageOptimizer:
    handler: src/functions/s3/image-optimizer/handler.main
    timeout: 90
    iamRoleStatements:
      - Effect: Allow
        Action:
          - s3:GetObject
          - s3:PutObject
        Resource:
          - arn:aws:s3:::myBucket/*
    events:
      - s3:
          bucket: myBucket
          event: s3:ObjectCreated:*
          rules:
            - prefix: original/
            - suffix: .jpg
          existing: true
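Not part of the original question, but a small Python sketch may make the behaviour clearer: S3 event notifications deliver object keys URL-encoded, so a ':' in the uploaded key shows up as %3A in the event record and has to be decoded before the object is fetched. The file name below is an assumption about what the boto3 upload used:
# Illustration of the encoding seen in the event record.
# Assumption: the key used by the boto3 upload contained colons (08:32:04).
from urllib.parse import quote, unquote_plus

uploaded_key = "000-2022_08_08-08:32:04_PM.jpg"  # assumed original key
event_key = quote(uploaded_key)                  # keys arrive URL-encoded in S3 event records
print(event_key)                                 # 000-2022_08_08-08%3A32%3A04_PM.jpg

# Decoding the key taken from the event recovers the real object name.
print(unquote_plus(event_key))                   # 000-2022_08_08-08:32:04_PM.jpg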

Error with IPFS CORS

When trying to use IPFS from my localhost I am having trouble accessing the IPFS service. I tried setting my config to accept localhost and the other relevant origins, but nothing seems to work.
The error:
Failed to load http://127.0.0.1:5001/api/v0/files/stat?arg=0x6db883c6f3b2824d26f3b2e9c30256b490d125b10a3942f49a1ac715dd2def89&stream-channels=true: No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://localhost:63342' is therefore not allowed access. The response had HTTP status code 403. If an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled.
IPFS Config:
{
"API": {
"HTTPHeaders": {
"Access-Control-Allow-Origin": [
"*"
]
}
},
"Addresses": {
"API": "/ip4/127.0.0.1/tcp/5001",
"Announce": [],
"Gateway": "/ip4/127.0.0.1/tcp/8080",
"NoAnnounce": [],
"Swarm": [
"/ip4/0.0.0.0/tcp/4001",
"/ip6/::/tcp/4001"
]
},
"Bootstrap": [
"/dnsaddr/bootstrap.libp2p.io/ipfs/QmNnooDu7bfjPFoTZYxMNLWUQJyrVwtbZg5gBMjTezGAJN",
"/dnsaddr/bootstrap.libp2p.io/ipfs/QmQCU2EcMqAqQPR2i9bChDtGNJchTbq5TbXJJ16u19uLTa",
"/dnsaddr/bootstrap.libp2p.io/ipfs/QmbLHAnMoJPWSCR5Zhtx6BHJX9KiKNN6tpvbUcqanj75Nb",
"/dnsaddr/bootstrap.libp2p.io/ipfs/QmcZf59bWwK5XFi76CZX8cbJ4BhTzzA3gU1ZjYZcYW3dwt",
"/ip4/104.131.131.82/tcp/4001/ipfs/QmaCpDMGvV2BGHeYERUEnRQAwe3N8SzbUtfsmvsqQLuvuJ",
"/ip4/104.236.179.241/tcp/4001/ipfs/QmSoLPppuBtQSGwKDZT2M73ULpjvfd3aZ6ha4oFGL1KrGM",
"/ip4/128.199.219.111/tcp/4001/ipfs/QmSoLSafTMBsPKadTEgaXctDQVcqN88CNLHXMkTNwMKPnu",
"/ip4/104.236.76.40/tcp/4001/ipfs/QmSoLV4Bbm51jM9C4gDYZQ9Cy3U6aXMJDAbzgu2fzaDs64",
"/ip4/178.62.158.247/tcp/4001/ipfs/QmSoLer265NRgSp2LA3dPaeykiS1J6DifTC88f5uVQKNAd",
"/ip6/2604:a880:1:20::203:d001/tcp/4001/ipfs/QmSoLPppuBtQSGwKDZT2M73ULpjvfd3aZ6ha4oFGL1KrGM",
"/ip6/2400:6180:0:d0::151:6001/tcp/4001/ipfs/QmSoLSafTMBsPKadTEgaXctDQVcqN88CNLHXMkTNwMKPnu",
"/ip6/2604:a880:800:10::4a:5001/tcp/4001/ipfs/QmSoLV4Bbm51jM9C4gDYZQ9Cy3U6aXMJDAbzgu2fzaDs64",
"/ip6/2a03:b0c0:0:1010::23:1001/tcp/4001/ipfs/QmSoLer265NRgSp2LA3dPaeykiS1J6DifTC88f5uVQKNAd"
],
"Datastore": {
"BloomFilterSize": 0,
"GCPeriod": "1h",
"HashOnRead": false,
"Spec": {
"mounts": [
{
"child": {
"path": "blocks",
"shardFunc": "/repo/flatfs/shard/v1/next-to-last/2",
"sync": true,
"type": "flatfs"
},
"mountpoint": "/blocks",
"prefix": "flatfs.datastore",
"type": "measure"
},
{
"child": {
"compression": "none",
"path": "datastore",
"type": "levelds"
},
"mountpoint": "/",
"prefix": "leveldb.datastore",
"type": "measure"
}
],
"type": "mount"
},
"StorageGCWatermark": 90,
"StorageMax": "10GB"
},
"Discovery": {
"MDNS": {
"Enabled": true,
"Interval": 10
}
},
"Experimental": {
"FilestoreEnabled": false,
"Libp2pStreamMounting": false,
"ShardingEnabled": false
},
"Gateway": {
"HTTPHeaders": {
"Access-Control-Allow-Headers": [
"X-Requested-With",
"Range"
],
"Access-Control-Allow-Methods": [
"GET"
],
"Access-Control-Allow-Origin": [
"localhost:63342"
]
},
"PathPrefixes": [],
"RootRedirect": "",
"Writable": false
},
"Identity": {
"PeerID": "QmRgQdig4Z4QNEqs5kp45bmq6gTtWi2qpN2WFBX7hFsenm"
},
"Ipns": {
"RecordLifetime": "",
"RepublishPeriod": "",
"ResolveCacheSize": 128
},
"Mounts": {
"FuseAllowOther": false,
"IPFS": "/ipfs",
"IPNS": "/ipns"
},
"Reprovider": {
"Interval": "12h",
"Strategy": "all"
},
"Swarm": {
"AddrFilters": null,
"ConnMgr": {
"GracePeriod": "20s",
"HighWater": 900,
"LowWater": 600,
"Type": "basic"
},
"DisableBandwidthMetrics": false,
"DisableNatPortMap": false,
"DisableRelay": false,
"EnableRelayHop": false
}
}
Ben, try replacing 127.0.0.1 with localhost. go-ipfs whitelists localhost only. Also check https://github.com/ipfs/js-ipfs-api/#cors
My answer might come very late; however, I have been working through some CORS issues with IPFS on my end, so I might have a solution for you.
By running:
# please update origin according to your setup...
origin=http://localhost:63342
ipfs config --json API.HTTPHeaders.Access-Control-Allow-Origin '["'"$origin"'", "http://127.0.0.1:8080","http://localhost:3000", "http://127.0.0.1:48084", "https://gateway.ipfs.io", "https://webui.ipfs.io"]'
ipfs config API.HTTPHeaders.Access-Control-Allow-Origin
and restarting your ipfs daemon, it might be fixed.
If the "fetch" button on the following linked page works, you are all set! https://gateway.ipfs.io/ipfs/QmXkhGQNruk3XcGsidCzQbcNQ5a8oHWneHZXkPvWB26RbP/
This command works for me:
ipfs config --json API.HTTPHeaders.Access-Control-Allow-Origin '["'"$origin"'", "http://127.0.0.1:8080", "http://localhost:3000"]'
You can allow requests from multiple origins.
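If the API still responds with 403 after the origin is allowed, the remaining CORS headers can be set through the same mechanism. This is a sketch based on the usual go-ipfs CORS setup, not something from the answers above; adjust the values to your setup and restart the daemon afterwards:
ipfs config --json API.HTTPHeaders.Access-Control-Allow-Methods '["PUT", "POST", "GET"]'
ipfs config --json API.HTTPHeaders.Access-Control-Allow-Credentials '["true"]'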

Creating env variables fails when using a DaemonSet to create processes in Kubernetes

I want to deploy some software onto nodes with a DaemonSet, but it is not a Docker app. I created a DaemonSet JSON like this:
"template": {
"metadata": {
"creationTimestamp": null,
"labels": {
"app": "uniagent"
},
"annotations": {
"scheduler.alpha.kubernetes.io/tolerations": "[{\"key\":\"beta.k8s.io/accepted-app\",\"operator\":\"Exists\", \"effect\":\"NoSchedule\"}]"
},
"enable": true
},
"spec": {
"restartPolicy": "Always",
"terminationGracePeriodSeconds": 30,
"dnsPolicy": "ClusterFirst",
"securityContext": {},
"processes": [
{
"name": "foundation",
"package": "xxxxx",
"resources": {
"limits": {
"cpu": "100m",
"memory": "1Gi"
}
},
"lifecyclePlan": {
"kind": "ProcessLifecycle",
"namespace": "engb",
"name": "app-plc"
},
"env": [
{
"name": "SECRET_USERNAME",
"valueFrom": {
"secretKeyRef": {
"name": "key-secret",
"key": "uniagentuser"
}
}
},
{
"name": "SECRET_PASSWORD",
"valueFrom": {
"secretKeyRef": {
"name": "key-secret",
"key": "uniagenthash"
}
}
}
]
},
When the app deployment succeeds, the env variables do not exist at all.
What should I do to solve this problem?
Thanks
DaemonSets have to run Docker containers. You can't have non-containerized programs run as DaemonSets. https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/ Kubernetes only launches containers.
Also, in your manifest file I see a "processes" key, and I have reason to believe it's not a valid manifest, so I doubt you deployed it successfully.
You have not pasted the "full" YAML file, but I'm guessing the "template" key at the beginning is the spec.template key of the file.
Run kubectl explain daemonset.spec.template.spec and you'll see that there is no "processes" field.
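For illustration only, here is a minimal sketch of what a containers-based pod template carrying the same secret-backed environment variables could look like once the software is packaged as a container image; the image name is a placeholder, not something from the question:
# Hypothetical sketch: a DaemonSet's spec.template must describe containers.
# The env entries mirror the secretKeyRef values from the question.
template:
  metadata:
    labels:
      app: uniagent
  spec:
    containers:
      - name: foundation
        image: registry.example.com/uniagent:latest  # placeholder image
        resources:
          limits:
            cpu: 100m
            memory: 1Gi
        env:
          - name: SECRET_USERNAME
            valueFrom:
              secretKeyRef:
                name: key-secret
                key: uniagentuser
          - name: SECRET_PASSWORD
            valueFrom:
              secretKeyRef:
                name: key-secret
                key: uniagenthash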

CloudFormation S3 bucket creation

Here's the CloudFormation template I wrote to create a simple S3 bucket. How do I specify the name of the bucket? Is this the right way?
{
"AWSTemplateFormatVersion": "2010-09-09",
"Description": "Simple S3 Bucket",
"Parameters": {
"OwnerService": {
"Type": "String",
"Default": "CloudOps",
"Description": "Owner or service name. Used to identify the owner of the vpc stack"
},
"ProductCode": {
"Type": "String",
"Default": "cloudops",
"Description": "Lowercase version of the product code (i.e. jem). Used for tagging"
},
"StackEnvironment": {
"Type": "String",
"Default": "stage",
"Description": "Lowercase version of the environment name (i.e. stage). Used for tagging"
}
},
"Mappings": {
"RegionMap": {
"us-east-1": {
"ShortRegion": "ue1"
},
"us-west-1": {
"ShortRegion": "uw1"
},
"us-west-2": {
"ShortRegion": "uw2"
},
"eu-west-1": {
"ShortRegion": "ew1"
},
"ap-southeast-1": {
"ShortRegion": "as1"
},
"ap-northeast-1": {
"ShortRegion": "an1"
},
"ap-northeast-2": {
"ShortRegion": "an2"
}
}
},
"Resources": {
"JenkinsBuildBucket": {
"Type": "AWS::S3::Bucket",
"Properties": {
"BucketName": {
"Fn::Join": [
"-",
[
{
"Ref": "ProductCode"
},
{
"Ref": "StackEnvironment"
},
"deployment",
{
"Fn::FindInMap": [
"RegionMap",
{
"Ref": "AWS::Region"
},
"ShortRegion"
]
}
]
]
},
"AccessControl": "Private"
},
"DeletionPolicy": "Delete"
}
},
"Outputs": {
"DeploymentBucket": {
"Description": "Bucket Containing Chef files",
"Value": {
"Ref": "DeploymentBucket"
}
}
}
}
Here's a really simple CloudFormation template that creates an S3 bucket, including defining the bucket name.
AWSTemplateFormatVersion: '2010-09-09'
Description: create a single S3 bucket
Resources:
  SampleBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: sample-bucket-0827-cc
You can also leave the "Properties: BucketName" lines off if you want AWS to name the bucket for you. Then it will look like $StackName-SampleBucket-$uniqueIdentifier.
Hope this helps.
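If it helps, a template like this can be checked and created with the standard AWS CLI calls; the file and stack names here are placeholders:
aws cloudformation validate-template --template-body file://simple-s3.yaml
aws cloudformation deploy --template-file simple-s3.yaml --stack-name simple-s3-demo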
Your code has the BucketName already specified:
"BucketName": {
"Fn::Join": [
"-",
[
{
"Ref": "ProductCode"
},
{
"Ref": "StackEnvironment"
},
"deployment",
{
"Fn::FindInMap": [
"RegionMap",
{
"Ref": "AWS::Region"
},
"ShortRegion"
]
}
]
]
},
The BucketName is a string, and since you are using Fn::Join, it will be built from the values you are joining.
"The intrinsic function Fn::Join appends a set of values into a single value, separated by the specified delimiter. If a delimiter is the empty string, the set of values are concatenated with no delimiter."
Your bucket name, if you don't change the defaults, is:
cloudops-stage-deployment-<ShortRegion> (for example, cloudops-stage-deployment-ue1 in us-east-1)
If you change the default parameters, then both cloudops and stage can be changed; deployment is hard-coded; the region is taken from where the stack is running and converted to its short form via the mapping.
To extend cloudquiz's answer, this is what it'd look like in YAML format:
Resources:
  SomeS3Bucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName:
        Fn::Join: ["-", ["yourbucketname", {'Fn::Sub': '${AWS::Region}'}, {'Fn::Sub': '${Stage}'}]]

Unable to resolve RAML file errors

I have been given the attached RAML file to use in Mule, but I am having problems working out how to clean up the errors in the file, and I'm not even sure this RAML file conforms to the standard. The errors I am getting are for missing {}, and another is a missing block entry when I remove the version. I can't figure out how to resolve them.
Below is a cut-down version of the RAML:
#%RAML 0.8
---
title: Databox
version: v1
protocols: [HTTPS]
baseUri: https://databox/v1/{version}
mediaType: application/json
traits:
  - http-data: !include http-data.raml
resourceTypes: !include types.raml
documentation:
  - title: Home
    content: |
      Databox 1st draft
/stores:
  type:
    store:
      description: Stores
      dataSchema: !include stores.json
The traits (http-data.raml):
responses:
  200:
    description: |
      Success
The resourceType (types.raml):
- store:
    head:
      description: Retrieve data for <<description>>.
      is: [ http-data ]
    get:
      description: Retrieve data for <<description>>.
      responses:
        200:
          body:
            application/json:
              schema: |
                {
                  "type": "object",
                  "properties": {
                    "meta": {
                      "title": "Data",
                      "type": "object",
                      "properties": {
                        "createdOn": {
                          "type": "string",
                          "format": "date-time"
                        }
                      },
                      "required": [
                        "createdOn"
                      ]
                    },
                    "data": {
                      "type": "array",
                      "items": <<dataSchema>>
                    }
                  },
                  "required": [
                    "data"
                  ]
                }
          description: |
            Success. Returns a JSON object containing all <<description>>.
The schema (stores.json):
{
"id": "http://localhost:8000/stores.json#",
"$schema": "http://json-schema.org/draft-04/schema",
"title": "Databox Store Schema",
"type": "object",
"properties": {
"storeId": {
"type": "string"
},
"storeDescription": {
"type": "string"
}
},
"required": [
"storeId"
],
"additionalProperties": false
}
Thanks
The RAML is valid except for that <<dataSchema>> parameter used in the JSON schema; I'm not sure that's a valid use of parameters.
I would start by replacing that <<dataSchema>> with the JSON from stores.json and trying again.
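Concretely, that substitution would inline the contents of stores.json where the parameter appears; a sketch, trimmed to the relevant part of the resource type's schema:
"data": {
  "type": "array",
  "items": {
    "id": "http://localhost:8000/stores.json#",
    "$schema": "http://json-schema.org/draft-04/schema",
    "title": "Databox Store Schema",
    "type": "object",
    "properties": {
      "storeId": { "type": "string" },
      "storeDescription": { "type": "string" }
    },
    "required": [ "storeId" ],
    "additionalProperties": false
  }
}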
Let me know if that works or what errors you get.
UPDATE:
MuleSoft's Anypoint portal validates your RAML with just that single change; you can see it here.