Can anyone tell me where I can find a way to monitor Docker containers with Falco? Right now I'm using Ubuntu for testing purposes, but in the end I want to use it in an AWS Fargate environment.
Thanks in advance; any help from the community is appreciated.
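For the Ubuntu test setup, Falco's container-install route is probably the easiest starting point. Below is a rough sketch based on Falco's Docker instructions; the falcosecurity/falco image name comes from the project, but the exact mounts and tag vary by Falco version:

    # Run Falco as a privileged container with the host paths it inspects
    # mounted in (mount list per older Falco docs; adjust for your version)
    docker run --rm -i -t \
        --privileged \
        -v /var/run/docker.sock:/host/var/run/docker.sock \
        -v /dev:/host/dev \
        -v /proc:/host/proc:ro \
        -v /boot:/host/boot:ro \
        -v /lib/modules:/host/lib/modules:ro \
        -v /usr:/host/usr:ro \
        falcosecurity/falco

Note that this approach relies on a privileged container with host access, which works on an Ubuntu Docker host but does not carry over directly to Fargate, since Fargate does not expose the host kernel the same way.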
How do I set up EKS for new infrastructure? We are currently using KOPS to manage Kubernetes, which has become a big problem for us. We would like to move to AWS EKS.
How do we go about doing this?
Kubernetes was originally developed by Google; it has been open source since its launch and is maintained by a large community of contributors. So it's better to use GKE to deploy it.
Google Cloud Platform supports an easy way to deploy it using Deployment Manager + Helm. You can also use Terraform to deploy if you want.
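For a concrete flavor of how little ceremony this takes, creating a small GKE cluster from the gcloud CLI looks roughly like this; the cluster name, zone, and node count below are placeholders:

    # Create a three-node GKE cluster (name and zone are placeholders)
    gcloud container clusters create demo-cluster \
        --zone us-central1-a \
        --num-nodes 3

    # Fetch credentials so kubectl talks to the new cluster
    gcloud container clusters get-credentials demo-cluster --zone us-central1-a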
To understand GKE deployment step by step, follow the three topics below:
https://codeburst.io/google-kubernetes-engine-by-example-part-1-358dc84d425b
https://codeburst.io/google-kubernetes-engine-by-example-part-2-ee1f519a32f9
https://codeburst.io/google-kubernetes-engine-by-example-part-3-9b7205ad502f
If you want to use Deployment Manager together with Helm, you can follow the topic below:
https://medium.com/google-cloud/gitlab-continuous-deployment-pipeline-to-gke-with-helm-69d8a15ed910
Welcome to Stack Overflow!
You will find a super guide here: https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html
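If you'd rather drive it from the command line than the console, eksctl (a separate CLI for EKS) gets a cluster up in one command. A sketch; the cluster name, region, and node count below are placeholders:

    # Create an EKS cluster plus a default node group (all values are placeholders)
    eksctl create cluster \
        --name demo-cluster \
        --region us-east-1 \
        --nodes 3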
I just checked the KOPS docs, and apparently it supports AWS as well. I haven't worked with that tool before, though.
Could you please describe the challenge in a bit more detail? You can set up an EKS cluster and then point your pipeline at the new cluster instead of the existing one.
I'd like to find out the current best practice for setting up a Kubernetes cluster on a Dell Alienware Aurora workstation running Ubuntu 18.04 LTS for GPU-based TensorFlow workloads. This will be a staging ground for my services/containers before I deploy them to a full-blown k8s cluster. I am not sure what the correct strategy for such a setup looks like. Here are some possibilities:
Minikube with virtualbox driver, worker node in VM
Minikube with --vm-driver=none, relying on the host's Docker (see the sketch after this list)
Kubeadm with scheduling pods on master enabled
Kubeadm-dind (docker in docker)
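For concreteness, option 2 boils down to something like this; the none driver runs the Kubernetes components directly on the host's Docker and requires root, and the flag spelling follows the Minikube releases current for Ubuntu 18.04:

    # Run Kubernetes directly on the host's Docker daemon, no VM
    sudo minikube start --vm-driver=none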
Update: added the kubeadm options. Can someone also comment on the Docker-in-Docker solution? Will services/pods carry over seamlessly from a Docker-in-Docker setup to multi-node remote machines or cloud instances?
Would love to hear from the Kubernetes experts or someone familiar with TensorFlow/GPU workloads on a single physical machine.
I'd go with 2 or 3 VMs and use kubeadm. You'll have a real cluster to play with. There are some Vagrant/Ansible playbooks out there. GPU/TensorFlow support is still fairly new, so experiment ;)
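The kubeadm flow on those VMs is short. A sketch; the pod CIDR below assumes Flannel as the network plugin:

    # On the first VM (becomes the control plane / master)
    sudo kubeadm init --pod-network-cidr=10.244.0.0/16

    # Make kubectl usable for your regular user
    mkdir -p $HOME/.kube
    sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config

    # On each worker VM, paste the `kubeadm join ...` line that init printed.
    # If you also want pods scheduled on the master (option 3 in the question):
    kubectl taint nodes --all node-role.kubernetes.io/master-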
Can I run docker-compose through the Docker daemon's remote socket?
I wasn't able to find anything in the Engine API: https://docs.docker.com/engine/api/v1.24/#310-tasks
In case Docker does not support that, are you aware of any docker-compose remote API?
Thanks in advance.
Docker Compose is just a utility that delegates commands to the Docker daemon. Docker Compose does not have a client-server architecture like Docker; it is only a client-side tool.
Thus there are no separate docker-compose APIs. You can achieve everything by talking directly to the Docker daemon's exposed API.
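In practice, that means you can point the docker-compose client at a remote daemon the same way you point the docker client at one, via the DOCKER_HOST environment variable. The host, port, and cert path below are placeholders, and the TLS variables only apply if the remote daemon is secured with TLS:

    # Aim the local docker-compose client at a remote Docker daemon
    export DOCKER_HOST=tcp://remote-host:2376
    export DOCKER_TLS_VERIFY=1           # only if the daemon uses TLS
    export DOCKER_CERT_PATH=~/.docker    # path to client certs (placeholder)
    docker-compose up -d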
Currently, all our nodes are on Ubuntu, but I'm considering switching to CentOS. I want to stagger the switchover, though.
Short answer: Yes.
See Introducing Docker Cloud
You can also provide your own node or nodes. This means you can use any Linux host connected to the Internet as a Docker Cloud node as long as you can install a Cloud agent. The agent registers itself with your Docker account, and allows you to use Docker Cloud to deploy containerized applications.
We are currently following the default deployment instructions for Spinnaker, which state using m4.xlarge as the instance type.
http://www.spinnaker.io/v1.0/docs/creating-a-spinnaker-instance#section-amazon-web-services
We did make an unsuccessful attempt to deploy it on an m4.large, but the services didn't start.
Has anyone tried something similar and succeeded?
It really depends on the size of your cloud.
There are 4 core services that you need: gate, deck, orca, and clouddriver. You can shut the other ones off if, say, you don't care about automated triggers, baking, or Jenkins integration.
I'm able to run this locally with the Docker images and about 8 GB of RAM, and it works. Using S3 instead of Cassandra also helps here.
You can play around with the settings in the baked Spinnaker image, but for internal demos and the like, I've been able to just spin up a VM, install Docker, and run the Docker Compose config on an m4.large.
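If you go the Docker Compose route, you can start just the four core services and leave the rest stopped. The service names below are assumptions and must match whatever the Compose file you use actually defines:

    # Bring up only the core Spinnaker services
    # (service names assumed to match the compose file)
    docker-compose up -d clouddriver orca gate deck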