Multiple subnets in same AZ with eksctl - amazon-eks

In the eksctl ClusterConfig file, how do I map multiple subnets (with different CIDR ranges) that sit in the same AZ?
eu-west-1a:
  id: "subnet-02a3dacfd211d0870"
  cidr: "192.168.32.0/22"
eu-west-1b:
  id: "subnet-07a7b2710e102cc03"
  cidr: "192.168.36.0/22"
eu-west-1c:
  id: "subnet-00b560c1f99779a6d"
  cidr: "192.168.40.0/22"
eu-west-1a:
  id: "subnet-0c5e28e892372ebf4"
  cidr: "192.168.47.0/25"

This is supported by eksctl since version 0.32.0:
eksctl 0.32.0 introduced further subnet topology customisation with the ability to list multiple subnets per AZ in the VPC configuration.
https://eksctl.io/usage/vpc-networking/#custom-subnet-topology
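A minimal ClusterConfig sketch for this layout, reusing the subnet IDs from the question. The cluster name, VPC ID and subnet key names (subnet-a-1 etc.) are placeholders; with this syntax the keys are arbitrary names rather than AZs, which is what allows two subnets in eu-west-1a:
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-cluster            # placeholder cluster name
  region: eu-west-1
vpc:
  id: vpc-0123456789abcdef0   # placeholder: your existing VPC
  subnets:
    private:
      subnet-a-1:
        id: "subnet-02a3dacfd211d0870"  # eu-west-1a, 192.168.32.0/22
      subnet-b-1:
        id: "subnet-07a7b2710e102cc03"  # eu-west-1b, 192.168.36.0/22
      subnet-c-1:
        id: "subnet-00b560c1f99779a6d"  # eu-west-1c, 192.168.40.0/22
      subnet-a-2:
        id: "subnet-0c5e28e892372ebf4"  # second subnet in eu-west-1a, 192.168.47.0/25
Node groups can then pick specific subnets by these key names in their subnets list (see the linked docs).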

Based on an earlier discussion on eksctl's Slack channel, multiple subnets per AZ were not supported at the time. From the discussion:
isn't supported currently because that field takes a map which does not permit duplicate keys (availability zones here)
There are open feature requests for this as well:
https://github.com/weaveworks/eksctl/issues/475#issuecomment-498995479
https://github.com/weaveworks/eksctl/issues/806

Related

Calls between 2 APIs on the same Kubernetes cluster

I have two APIs on the same cluster, and when I run kubectl get services I get the following:
dh-service ClusterIP 10.233.48.45 <none> 15012/TCP 70d
api-service ClusterIP 10.233.54.208 <none> 15012/TCP
Now I want to make an API call from one API to the other. When I do it using the Ingress address for the two images, I get 404 Not Found.
What address should I use for my POST calls? Will the cluster IP work?
I want to make an API call from one API to the other
If they are in the same namespace and you use http, you can use:
http://dh-service
http://api-service
to access them.
If the api-service is located in a different namespace, e.g. blue-namespace, you can access it with:
http://api-service.blue-namespace
See more in DNS for Services and Pods.
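As a rough sketch of why this works: the in-cluster DNS name comes from the Service's metadata.name and namespace. Only the name and namespace matter for addressing here; the selector and target port below are placeholders:
apiVersion: v1
kind: Service
metadata:
  name: api-service          # reachable as http://api-service from within blue-namespace
  namespace: blue-namespace  # and as http://api-service.blue-namespace from other namespaces
spec:
  type: ClusterIP
  selector:
    app: api                 # placeholder pod label
  ports:
    - port: 15012            # service port from the question's listing
      targetPort: 8080       # placeholder container port
The fully qualified form http://api-service.blue-namespace.svc.cluster.local:15012 also works, assuming the default cluster.local cluster domain.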

Spinnaker - SQL backend for front50

I am trying to set up the SQL backend for front50 using the document below.
https://www.spinnaker.io/setup/productionize/persistence/front50-sql/
I have front50-local.yaml for the MySQL config.
But I am not sure how to disable persistent storage in the Halyard config. I cannot completely remove persistentStorage, and persistentStoreType must be one of azs, gcs, redis, s3, oracle.
There is no option to disable persistent storage here.
persistentStorage:
  persistentStoreType: s3
  azs: {}
  gcs:
    rootFolder: front50
  redis: {}
  s3:
    bucket: spinnaker
    rootFolder: front50
    maxKeys: 1000
  oracle: {}
So within your front50-local.yaml you will want to disable the storage service you previously used, e.g.:
spinnaker:
  gcs:
    enabled: false
  s3:
    enabled: false
You may need/want to remove the section from your halconfig and run your apply with
hal deploy apply --no-validate
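For reference, the SQL side of front50-local.yaml looks roughly like the sketch below, following the linked persistence doc; the JDBC URL, user names and password are placeholders:
sql:
  enabled: true
  connectionPools:
    default:
      default: true
      jdbcUrl: jdbc:mysql://your-db-host:3306/front50  # placeholder endpoint
      user: front50_service                            # placeholder user
      password: ${FRONT50_DB_PASSWORD}                 # placeholder secret
  migration:
    user: front50_migrate                              # placeholder user
    jdbcUrl: jdbc:mysql://your-db-host:3306/front50    # placeholder endpoint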
There are a number of users dealing with these same issues and some more help might be found on the Slack: https://join.spinnaker.io/
I've noticed the same issue just recently. Maybe this is because, for example, Kayenta (an optional component to enable) is still missing non-object-storage persistence support, or...
I've created a GitHub issue on this here: https://github.com/spinnaker/spinnaker/issues/5447

Restrict Log Analytics logging per deployment or container

We've seen our Log Analytics costs spike and found that the ContainerLog table had grown drastically. This appears to be all stdout/stderr logs from the containers.
Is it possible to restrict logging to this table, at least for some deployments or containers, without disabling Log Analytics on the cluster? We still want performance logging and insights.
AFAIK, the stdout and stderr logs in the ContainerLog table are basically the logs you see when you manually run kubectl logs. So it is possible to restrict logging to the ContainerLog table, without disabling Log Analytics on the cluster, by having the deployment write logs to a log file inside the container, something like the following:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: xxxxxxx
spec:
  selector:
    matchLabels:
      app: xxxxxxx
  template:
    metadata:
      labels:
        app: xxxxxxx
    spec:
      containers:
      - name: xxxxxxx
        image: xxxxxxx/xxxxxxx:latest
        command: ["sh", "-c", "./xxxxxxx.sh &> /logfile"]
However, the best practice is for applications running in a container to send log messages to stdout, so the approach above is not preferable.
So you may create an alert when data collection is higher than expected, as explained in this article, and/or occasionally delete unwanted data, as explained in this article, by leveraging the purge REST API (but make sure you purge only unwanted data, because deletes in Log Analytics are irreversible!).
Hope this helps!!
Recently faced a similar problem in one of our Azure Clusters. Due to some incessant logging in the code the container logs went berserk. It is possible to restrict logging per namespace at the level of STDOUT or STDERR.
You have to configure this by deploying a ConfigMap in the kube-system namespace, through which log ingestion to the Log Analytics workspace can be disabled or restricted per namespace.
The omsagent pods in kube-system namespace will absorb these new configs in a few mins.
Download the file below and apply it on your Azure Kubernetes cluster:
container-azm-ms-agentconfig.yaml
The file contains the flags to enable/disable log collection, and namespaces can be excluded in the rules.
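A trimmed sketch of the relevant section of that ConfigMap; the excluded namespace name is a placeholder, and the downloaded file contains the full set of settings:
kind: ConfigMap
apiVersion: v1
metadata:
  name: container-azm-ms-agentconfig
  namespace: kube-system
data:
  schema-version: v1
  config-version: ver1
  log-data-collection-settings: |-
    [log_collection_settings]
       [log_collection_settings.stdout]
          enabled = true
          exclude_namespaces = ["kube-system", "my-noisy-namespace"]
       [log_collection_settings.stderr]
          enabled = true
          exclude_namespaces = ["kube-system", "my-noisy-namespace"]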
# kubectl apply -f <path to container-azm-ms-agentconfig.yaml>
This only prevents log collection into the Log Analytics workspace; it does not stop log generation in the individual containers.
Details on each config flag in the file are available here.

How do I connect to Neptune using Java

I have the following code based on the docs...
@Controller
@RequestMapping("neptune")
public class NeptuneEndpoint {

    @GetMapping("")
    @ResponseBody
    public String test() {
        Cluster.Builder builder = Cluster.build();
        builder.addContactPoint("...endpoint...");
        builder.port(8182);
        Cluster cluster = builder.create();

        GraphTraversalSource g = EmptyGraph.instance()
                .traversal()
                .withRemote(DriverRemoteConnection.using(cluster));

        GraphTraversal t = g.V().limit(2).valueMap();
        t.forEachRemaining(e -> System.out.println(e));

        cluster.close();
        return "Neptune Up";
    }
}
But when I try to run I get ...
java.util.concurrent.TimeoutException: Timed out while waiting for an available host - check the client configuration and connectivity to the server if this message persists
Also, how would I add the secret key from an AWS IAM account?
Neptune doesn't allow you to connect to the DB instance from your local machine. You can only connect to Neptune from an EC2 instance inside the same VPC as Neptune (AWS documentation).
Try building a runnable JAR of this code and running it inside an EC2 instance; the code should work fine. If you're trying to debug something from your local system, use SSH tunneling (e.g. with PuTTY) to connect to the EC2 instance, which then forwards traffic to the Neptune cluster.
Have you created an instance with IAM auth enabled?
If yes, you will have to sign your request using SigV4. More information (and examples) on how to connect using SigV4 is available at https://docs.aws.amazon.com/neptune/latest/userguide/iam-auth-connecting-gremlin-java.html
The examples given in the documentation above also contain information on how to use your IAM credentials to connect to a Neptune cluster.
I just had the same issue, and the root cause was a dependency version conflict with Netty, which is unfortunately a very pervasive dependency. Gremlin 3.3.2 uses io.netty/netty-all version 4.0.56.Final. Your project might pull in another Netty jar, such as io.netty/netty or io.netty/netty-handler, both of which can cause issues, so you will need to exclude them from other dependencies in your POM or use dependency management to set a project-level Netty version.
Another option is to use an AWS SigV4 signing proxy that acts as a bridge between Neptune and your local development environment. One such proxy is https://github.com/monken/aws4-proxy
npm install --global aws4-proxy
# have your credentials exported as AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY
aws4-proxy --service neptune-db --endpoint cluster-die4eenu.cluster-eede5pho.eu-west-1.neptune.amazonaws.com --region eu-west-1
wscat localhost:3000/gremlin
Refer to this.
Note: you need to be in the same VPC to access the Neptune cluster.

openshift concrete builder quota

How does OpenShift count the resource quota consumed by a specific builder image (there may be multiple images)?
The build pod is created by the S2I builder, not by the OpenShift cluster (i.e. Kubernetes) itself.
I know the quota applies to the S2I builder, but I would like to know how it is counted if we customize the quota (and whether I can do that). It looks like the cluster can't count the resource quota (CPU/memory, etc.).
Together with a quota you can define a scope. See OpenShift Origin: quota scopes.
The relevant scope for build and deployment pods is NotTerminating.
Adding this scope to the quota definition constrains it to build and deployment pods only (pods whose spec.activeDeadlineSeconds is nil, according to the docs).
Example definition:
apiVersion: v1
kind: ResourceQuota
metadata:
  name: slow-builds-and-deployments
spec:
  hard:
    pods: "2"
    limits.cpu: "1"
    limits.memory: "1Gi"
  scopes:
    - NotTerminating
The Terminating scope, on the other hand, applies to the remaining pods (pods with spec.activeDeadlineSeconds >= 0), as in the sketch below.
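For comparison, a minimal sketch of a quota scoped to those Terminating pods; the name and limits are arbitrary:
apiVersion: v1
kind: ResourceQuota
metadata:
  name: terminating-pods
spec:
  hard:
    pods: "4"
    limits.cpu: "2"
    limits.memory: "2Gi"
  scopes:
    - Terminating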