I have a gen3 AWS rack.
My Convox build is very slow (30+ minutes).
I am using a custom Dockerfile in my convox.yml to build my services.
When I run convox build I can see that the Docker image is being built from scratch without any Docker layer caching.
My rack node_type is t3.large.
Is there something I can configure in Convox to make my builds faster / enable layer caching?
How many instances are in your Rack? On v3, the build can happen on a random instance, and unfortunately the Docker cache is not shared between them, so if you build on a 'new' instance with no cache it will take longer. If you build on an instance that has previously run a build, it should reuse the layer cache from before.
Convox is actively looking into utilising buildx and its options to open up more build options and quicker builds, so keep an eye out for that!
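Purely as an illustration of the kind of cache sharing buildx opens up (this is plain docker buildx, not a Convox setting), a registry-backed layer cache looks roughly like this, with the repository names being placeholders:

    # Export the layer cache to a registry and reuse it from any instance
    # (repository names are placeholders, not Convox configuration;
    # requires a buildx builder that supports registry cache export)
    docker buildx build \
      --cache-from type=registry,ref=registry.example.com/app:buildcache \
      --cache-to type=registry,ref=registry.example.com/app:buildcache,mode=max \
      -t registry.example.com/app:latest .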
Thanks,
I have 9 instances in the prod rack at the moment, but that can scale a bit at times.
It would be great to be able to stick the builds to an instance so I can hit the cache.
:)
In my development environment, I have my IDE, a database, a web server and so on installed.
A script exists for it: it runs 80 different commands.
Then, at delivery time (integration, acceptance), I have a big mess: I execute a script that creates many Docker containers, each with its own purpose: database, web server, etc.
Their scripts are subsets of the big one I use on my own local developer computer, but adapted.
It's very difficult to manage the transition between my standalone - "flat", so to speak - dev computer and the containerized version prepared for delivery.
I wonder if there is a way to develop an application that is containerized from the very beginning:
With its whole tree of containers ready (and not a single one containing everything: that would be cheating...)
As soon as I compile my sources in my IDE, the resulting binaries and files would go into their respective containers,
and it's in these containers that my application would be executed, even in development mode.
Is it possible? Is it already done by some of you?
Or does it have too many drawbacks to be worth attempting?
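To make the workflow described above concrete, here is a minimal sketch under a few assumptions: the IDE drops its build output into ./build on the host, and the image, container and script names are all hypothetical:

    # Create a network so the app container can reach the database by name
    docker network create devnet
    # Start the supporting containers once (a database here, as an example)
    docker run -d --name dev-db --network devnet postgres:13
    # Run the application from a container, bind-mounting the compiler's output
    # directory so every rebuild in the IDE is immediately visible inside it
    # (image name and start script are assumptions)
    docker run -d --name dev-app --network devnet \
      -v "$PWD/build":/app \
      my-runtime-image /app/start.sh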
I have the following use case:
We have one solution that contains 5-10 different services (.NET Framework web apps of various versions). We have to set up CI/CD in Azure DevOps to automate the deployment of each service separately (or all services at once). There will be around 5 different environments for each service.
Challenges:
We are trying to avoid having (# of services X # of environments) separate builds and releases (~50 builds / ~50 releases).
We do have to be able to deploy one service alone without others being affected.
We do have to be able to deploy ALL services all at once for mass deployments.
P.S. We are currently using trunk-based development, but I am thinking about moving to gitflow to have branch-based triggers, as I feel it would be easier to manage in this case.
CI - handled by your build server (e.g. TeamCity). Responsibility: build, test, obfuscate, create packages and lastly push packages to a NuGet server (.NET specific). Traditionally, besides the app code, you also need at least 2 other packages: DB migrations and infra migrations.
You build packages once and deploy the exact version everywhere else you want it to go.
https://gist.github.com/leblancmeneses/1d352bb79447cd7a486598c4dc796ef1
This script works in conjunction with https://github.com/leblancmeneses/RobustHaven.DevOps
CD - handled by something like Octopus Deploy. Responsibility: orchestrate the deployment process across your cluster. Octopus pulls packages from the NuGet server and moves them to whatever environment you want and to whatever machines make up that environment.
https://www.robusthaven.com/presentations/DevOps
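As a rough sketch of the "build packages once, push to NuGet, deploy the same version everywhere" flow described above, assuming nuget.exe and placeholder names for the project, version and feed:

    # Build and package once on the CI server (project name and version are placeholders)
    nuget pack MyService.nuspec -Version 1.4.2
    # Push the app package (plus the db/infra migration packages) to the feed
    nuget push MyService.1.4.2.nupkg -Source https://nuget.example.com/feed -ApiKey %NUGET_API_KEY%
    # The CD tool (e.g. Octopus) then pulls exactly this version into each environment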
You don't really need 50 builds; you can use a single build per service (assuming builds for different environments are identical) and build from different branches. Technically you can get away with a single release for 50 environments if you create your triggers/phases properly, but that would be a mess; just create a single one for each environment. I can't see how managing 50 environments on a single release would be manageable.
When YAML release pipelines arrive, this becomes trivial; right now it's not, unfortunately.
As part of my job I evaluate a lot of software and applications.
I need to have an environment that is easy to clean (so the previous apps are not bloating my system) and always light.
One idea is to create isolated environments (either by Docker or Virtual machines) and fire up a new environment every time I need to start over with new software to evaluate.
Questions:
1. Does Docker support this? Can I use it to create a new environment every few days and test software in it?
2. If not, which VM system would be suitable for this particular need?
Thanks
This is exactly what all the Continuous Integration systems do: get fresh code, build your project and run tests inside a freshly created container. This is how clean testing is done nowadays. So Docker fits your needs perfectly.
Each fresh container is a clean environment that you can run your tests in. Then you can parse the results and remove the container, for example: docker run --rm -it my-image ./tests.sh
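A slightly fuller sketch of that loop (the image name and the ./tests.sh entry point are just placeholders):

    # Build a fresh image for this revision of the code (image name is a placeholder)
    docker build -t my-image .
    # Run the tests in a throwaway container; --rm removes it as soon as they finish
    docker run --rm my-image ./tests.sh
    # docker run propagates the test script's exit code, so CI can fail the build on it
    echo "tests exited with $?"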
Documentation on this is quite rare, but are there any tips on how to speed up builds on CloudBees, especially using the Workflow plugin?
Usually, when using the very same machine for subsequent builds, you can make use of caches or reuse previous computations.
Some steps - like downloading dependencies with SBT, Maven or Gradle, the initial npm install, or the Gemfile cache - are quite expensive in time and computation but are great candidates for caching.
On CloudBees you will most probably get a random (new) node for your builds, so there's no cache.
We are also using Snap-CI - there we have a persistent CACHE_DIR that allows that. Is there anything similar on CloudBees?
If you are referring to DEV@cloud, CloudBees' hosted Jenkins, there is a cached workspace system, though it is not used for every build. (It depends on the details of hardware allocation in the cloud.) If you run a number of builds, over time you should see most of them picking up an existing workspace, and thus being able to use Maven local repository caches, etc.
Use of the Workflow plugin as opposed to freestyle or other project types should not matter in this regard.
I have read what Docker is, but I'm having a hard time finding out what the real scenarios for using Docker are.
It would be great to see your usages here.
I'm replicating the production environment with it: on each commit to the project, Jenkins builds the binaries, then I deploy them there, launch the required daemons and run the integration tests, all in a very short time (a few seconds on top of what the integration tests themselves take). Having no need to boot, and little overhead on memory/CPU/disk, is great for that kind of thing.
I could extend that use to development (just adding a volume mapping the code from my git repository, at least for scripting languages) to have the production environment running the code I'm actually editing, at a fraction of what VirtualBox would require.
I also needed to test how to integrate some 3rd-party code, which modified the DB, into a production system. I cloned the DB in a container, installed the production system in another, launched both and iterated on the integration until I got it right, going back to zero to try again in seconds - faster, cheaper and more scriptable than doing it with VMs + snapshots.
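A rough sketch of that reset-to-zero loop, with container and image names made up for illustration:

    # Freeze the freshly cloned database container as a baseline image
    docker commit db-clone db-baseline
    # Try one integration attempt against a disposable copy of the baseline
    docker run -d --name db-attempt db-baseline
    # ... run the 3rd-party integration against db-attempt ...
    # Back to zero in seconds: discard the attempt and start a fresh copy
    docker rm -f db-attempt
    docker run -d --name db-attempt db-baseline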
I also run several desktop browser instances in containers, each with its own plugins, cookies, data storage and so on kept separate. The Docker repository example for desktop integration is a good start for it, but I'm planning to test subuser to extend this kind of usage.
I've used Docker to implement a virtualized build server which any user could ask to run a build off their personal git branch in our canonical environment.
Each SSH connection made to the server was connected to a new container, ensuring that all builds were isolated from each other (a major pain point in the past), ensuring that the container's state couldn't be corrupted (since changes were all isolated to that single instance), and ensuring that even developers on platforms such as Windows where Docker (and other tools in our canonical build environment) couldn't be run locally would be able to run builds.
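One possible wiring for that per-connection isolation (purely illustrative; the image name, options and key are assumptions, not necessarily what was actually used) is an OpenSSH forced command in authorized_keys:

    # Each accepted SSH connection is dropped straight into its own throwaway
    # build container; when the session ends, --rm discards the container
    command="docker run --rm -it build-env:latest /bin/bash",no-port-forwarding ssh-rsa AAAA... builder@example.com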
We use it for the following uses:
We have a Jenkins container which we can use to bring up our Jenkins server. We mount the workspace using volumes so we can migrate the server easily just by copying the files and launching the container somewhere else (see the sketch after this list).
We use a Jetty container to easily deploy our war files in our production and development environment.
We use a whole host of other monitoring tools such as Uptime which we have containers for so that we can bring them up and down on various hosts with a single command.
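For the Jenkins container mentioned above, a minimal sketch of what that could look like (the host path and image tag are assumptions):

    # Keep Jenkins' home on the host, so migrating the server is just copying
    # this directory and starting the container on another machine
    docker run -d --name jenkins \
      -p 8080:8080 \
      -v /srv/jenkins_home:/var/jenkins_home \
      jenkins/jenkins:lts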
I use docker to build and test our software on several different Linux distributions (RHEL 4/5/6/7, Ubuntu 12.04, 14.04).
Docker makes it easy and fast to create minimalistic and consistent build environments.
Docker gives you the benefits that other virtualization solutions give you at a fraction of the resources needed.
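For the multi-distribution builds mentioned above, a hedged sketch of the idea (image tags and the build entry point are placeholders; the older RHEL releases would need locally maintained base images):

    # Run the same build script inside a series of distro containers
    for img in centos:6 centos:7 ubuntu:12.04 ubuntu:14.04; do
      docker run --rm -v "$PWD":/src -w /src "$img" ./build.sh
    done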