Trying to find a docker-compose remote API

Can I run docker-compose through the Docker daemon's remote socket?
I wasn't able to find anything in the Engine API: https://docs.docker.com/engine/api/v1.24/#310-tasks
In case Docker does not support that, are you aware of any docker-compose remote API?
Thanks in advance.

Docker Compose is just a utility that delegates commands to the Docker daemon. Docker Compose does not have a client-server architecture like Docker; it is only a client tool.
Thus there is no docker-compose API. You can achieve everything by talking directly to the API exposed by the Docker daemon.
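For example, here is a minimal sketch of talking to the Engine API directly over the default Unix socket, assuming curl 7.40+ and API version v1.24 as in the link above; the image and container names are placeholders:
# List containers, the Engine API equivalent of `docker ps`
curl --unix-socket /var/run/docker.sock http://localhost/v1.24/containers/json
# Create and start a container, which is roughly what docker-compose does per service
curl --unix-socket /var/run/docker.sock -H "Content-Type: application/json" \
  -d '{"Image": "nginx:alpine"}' \
  -X POST "http://localhost/v1.24/containers/create?name=web"
curl --unix-socket /var/run/docker.sock -X POST http://localhost/v1.24/containers/web/start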

Related

Falco Docker container monitoring

Can any one of you please tell me where I can find instructions for monitoring Docker containers with Falco? Right now I'm using Ubuntu for testing purposes, but in the end I want to use it in an AWS Fargate environment.
Thanks, any help from the community is appreciated.

Do Docker Cloud bring-your-own nodes all need to have the same OS?

Currently, all our nodes are on Ubuntu, but I'm considering switching to CentOS, and I want to stagger the switchover.
Short answer: Yes.
See Introducing Docker Cloud
You can also provide your own node or nodes. This means you can use any Linux host connected to the Internet as a Docker Cloud node as long as you can install a Cloud agent. The agent registers itself with your Docker account, and allows you to use Docker Cloud to deploy containerized applications.

Is it possible to deploy Spinnaker to an instance smaller than m4.xlarge on AWS?

We are currently following the default deployment instructions for Spinnaker, which state using m4.xlarge as the instance type:
http://www.spinnaker.io/v1.0/docs/creating-a-spinnaker-instance#section-amazon-web-services
We made an unsuccessful attempt to deploy it to m4.large, but the services didn't start.
Has anyone tried something similar and succeeded?
It really depends on the size of your cloud.
There are four core services that you need: gate, deck, orca, and clouddriver. You can shut the other ones off if, say, you don't care about automated triggers, baking, or Jenkins integration.
I'm able to run this locally with the Docker images with about 8 GB of RAM, and it works. Using S3 instead of Cassandra also helps here.
You can play around with the settings in the baked Spinnaker image, but for internal demos and whatnot, I've been able to just spin up a VM, install Docker, and run the Docker Compose config on an m4.large; see the sketch below.
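As a rough illustration only (not the official deployment method), a trimmed-down setup with just the four core services could be run like this; the spinnaker/* image names and default ports are assumptions based on the public Docker Hub images, so adjust them to whatever your compose config actually uses:
# Hypothetical minimal Spinnaker: core services only, no baking/triggers/Jenkins integration
docker run -d --name clouddriver -p 7002:7002 spinnaker/clouddriver
docker run -d --name orca -p 8083:8083 spinnaker/orca
docker run -d --name gate -p 8084:8084 spinnaker/gate
docker run -d --name deck -p 9000:9000 spinnaker/deck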

The best way to develop with OpenShift Origin: VM or local installation

What is the best way to develop with OpenShift Origin: using the VM or installing it locally? I have tried installing the VM but could not log in to it. What are the default credentials used to log in to the Fedora VM?
Default credentials
Depending on which route you follow (see below), there might or might not be real authentication in place.
If you have the AllowAllPasswordIdentityProvider in place, you can get away with test/test or whatever.
If you take the binary version (see below), this is what you'll have by default. I changed it to HTPasswdPasswordIdentityProvider instead.
For the other options, I think the setup comes with a user called system and the password admin.
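For example, assuming AllowAllPasswordIdentityProvider is active, any user/password pair is accepted; with the htpasswd provider you point master-config.yaml at a password file you create yourself. The URL, path, and credentials below are placeholders:
# Any credentials work under AllowAllPasswordIdentityProvider
oc login https://127.0.0.1:8443 -u test -p test
# With HTPasswdPasswordIdentityProvider, create the file referenced in master-config.yaml
htpasswd -c -b /path/to/htpasswd developer secret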
Docker container version
You can quickly get OpenShift running in a Docker container using
images from Docker Hub on a Linux system. This method is supported on
Fedora, CentOS, and Red Hat Enterprise Linux (RHEL) hosts only.
Link: https://docs.openshift.org/latest/getting_started/administrators.html#running-in-a-docker-container
As per the Origin folks, this setup is not (yet) a full example, but it is very easy to get started with. You should be able to follow the instructions to get an all-in-one instance up and running in no time. However, this approach cannot teach you how to create a cluster (master(s) and node(s)).
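For reference, the command from the linked documentation looks roughly like the one below; the exact volume mounts vary between Origin releases, so treat this as a sketch and follow the docs for your version:
# All-in-one OpenShift Origin inside a single privileged container
docker run -d --name origin --privileged --pid=host --net=host \
  -v /:/rootfs:ro -v /var/run:/var/run:rw -v /sys:/sys \
  -v /var/lib/docker:/var/lib/docker:rw \
  openshift/origin start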
Vagrant VM
This image is based off of OpenShift Origin and is a fully functioning
OpenShift instance with an integrated Docker registry. The intent of
this project is to allow Web developers and other interested parties
to run OpenShift V3 on their own computer. Given the way it is
configured, the VM will appear to your local machine as if it was
running somewhere off the machine.
The OpenShift Master, Node, Docker Registry, and other pieces are running in one VM. Given its focus on application developers, it should NOT be used in production.
Link: https://www.openshift.org/vm
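Bringing the VM up is the usual Vagrant workflow once you have the Vagrantfile from the page above (the VirtualBox provider is an assumption):
vagrant up --provider=virtualbox
vagrant ssh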
Binary option
Red Hat periodically publishes binaries to GitHub, which you can
download on the OpenShift Origin Releases page.
Link: https://github.com/openshift/origin/releases
This is the option I currently follow. You download the binaries, install Go, then set up the oc client tools. As a next step, you generate the configuration files and start adding your system components (router, ...); a rough sketch follows after the link below.
Follow this page to understand the basics:
Link: https://github.com/openshift/origin/blob/master/examples/sample-app/README.md
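As a sketch of that "generate configuration, then add components" step, the commands in that era of Origin looked roughly like the following; flags and paths may differ in your release, so check the docs linked above:
# Generate master and node configuration files
openshift start --write-config=openshift.local.config
# Add the integrated registry and the router as system components
oadm registry
oadm router --service-account=router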
Ansible route
For a production installation you probably want to install your cluster via Ansible.
My humble advice is to do this once you have gained a bit of experience configuring things by hand (see the previous point). Let's hear from people with more experience, though.
Link: https://docs.openshift.org/latest/install_config/install/index.html
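The Ansible route boils down to cloning the openshift-ansible repository and running its main playbook against your inventory; the playbook path below reflects the project layout of that period and may have moved since:
git clone https://github.com/openshift/openshift-ansible
cd openshift-ansible
ansible-playbook -i /path/to/inventory playbooks/byo/config.yml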
Documentation in general
Link: https://docs.openshift.org/latest/install_config/master_node_configuration.html#creating-new-configuration-files
Spin up a CentOS 7 VM and download the latest Origin client tools:
wget https://github.com/openshift/origin/releases/download/v1.3.0-alpha.2/openshift-origin-client-tools-v1.3.0-alpha.2-983578e-linux-64bit.tar.gz
tar xzvf openshift-origin-client-tools-v1.3.0-alpha.2-983578e-linux-64bit.tar.gz
ln -s /root/openshift-origin-client-tools-v1.3.0-alpha.2-983578e-linux-64bit/oc /usr/local/bin/oc
chmod 755 /root/openshift-origin-client-tools-v1.3.0-alpha.2-983578e-linux-64bit/oc
Bring up your single-node Origin cluster:
oc cluster up --use-existing-config --host-data-dir=/var/tmp/etcd
Log in using the instructions provided.

Is there any way to run MobileFirst Platform Foundation Docker (IBM Containers) images on a local Docker instance?

I followed the steps below and created the MobileFirst Platform Foundation image:
Run IBM MobileFirst Platform Foundation on IBM Containers
The above steps push the image to Bluemix and start it. But I'd like to use the image on my local docker-machine, especially for troubleshooting (the ic/ice commands are limited compared to docker commands, and sometimes I cannot access the IBM container via SSH).
However, the MobileFirst Foundation image uses a Bluemix database service, so perhaps we need to set some environment variables like VCAP?
If your image uses a Bluemix database service, I'm not sure what you can do. Perhaps you should switch to a local database for the duration of the local image run.
Last time this was attempted, the following steps were taken:
Run docker images to list the available images in the repository and their ID, tag, etc.
To start your image in a local container, run: docker run -d -p 9080:9080 -p 9443:9443 <image ID>
To verify that the image is properly configured and the MobileFirst project runtime is available, launch the MobileFirst Console by loading the following URL: http://192.168.59.103:9080/worklightconsole
Again, these commands may differ. Hopefully it'll work in your case.
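Putting those steps together on a docker-machine host might look like the following; the machine name default is an assumption, and <image ID> stays a placeholder for whatever docker images shows:
# List local images and note the MobileFirst image ID
docker images
# Run it locally, mapping the usual HTTP/HTTPS ports
docker run -d -p 9080:9080 -p 9443:9443 <image ID>
# The console is served from the docker-machine VM's IP, not localhost
echo "http://$(docker-machine ip default):9080/worklightconsole"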