Tekton to use local image

I am starting to use Tekton pipelines for my project. I have imported the required images into my environment and checked this by running docker images. Can I make the steps in tasks use the local images instead of pulling from docker.io or a private registry?
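One common approach (standard Kubernetes behaviour, not confirmed by this thread) is to set the step's image pull policy: Tekton steps are ordinary containers, so imagePullPolicy: IfNotPresent tells the kubelet to use an image already present on the node instead of pulling it. A minimal sketch, with hypothetical task and image names:

# Task whose step runs a locally imported image; names are placeholders
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: use-local-image
spec:
  steps:
    - name: run
      image: my-local-image:1.0       # must already exist on the node running the pod
      imagePullPolicy: IfNotPresent   # use Never to fail instead of falling back to a pull
      script: |
        echo "running from the locally imported image"

Note that docker images only reflects the local Docker daemon; on a multi-node cluster the image has to be present on whichever node the TaskRun pod is scheduled to.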

Related

How to deploy s3 bucket within a docker container in CDK?

I am deploying a static site with AWS CDK. This works but as the site has grown, the deployments are failing due to
No space left on device
I am looking for solutions to this problem. One suggestion I have seen is to deploy within a docker container.
How can I do this in CDK and are there any other solutions?
I would advise that you use cdk-pipelines to manage your deployment - that's the best way forward.
But if you have to use a Docker container, then I have done something similar (in Jenkins).
Steps...
Create a Dockerfile in your project; this will be your custom build environment. It should look like this:
# custom build environment for CDK
FROM node:14.17.3-slim
# account/region picked up by the CDK CLI; values here are the answer's examples
ENV CDK_DEFAULT_ACCOUNT=1234 \
    CDK_DEFAULT_REGION=ap-southeast-2
RUN npm install -g typescript
Make sure your pipeline installs any npm packages you need.
'Build' your project: npx cdk synth
'Deploy' your project: npx cdk deploy --require-approval never
Lastly, you'll need a way to authenticate with AWS so Bitbucket Pipelines, and specifically the Docker container, can 'talk' to CloudFormation; a sketch of how the pieces could fit together is below.
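For illustration, a hedged sketch of the Bitbucket Pipelines config (the image name is a placeholder for wherever you push the Dockerfile above, and AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY are assumed to be set as repository variables, which the CDK CLI picks up from the environment):

# hypothetical bitbucket-pipelines.yml
image:
  name: your-registry/cdk-build-env:latest   # built from the Dockerfile above
pipelines:
  branches:
    main:
      - step:
          name: Synth and deploy
          script:
            - npm ci
            - npx cdk synth
            - npx cdk deploy --require-approval never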
But like I said, cdk-pipelines is the best solution; here is a good tutorial.

convox docker build is very slow

I have a gen3 AWS rack.
My convox build is very slow. (30 minutes+)
I am using a custom docker file in my convox.yml to build my services.
When I run convox build I can see that the docker image is being built from scratch without any docker layer caching.
My rack node_type is t3.large
Is there something I can configure in Convox to make my builds faster or enable layer caching?
How many instances are in your Rack? On v3, the build can happen on a random instance, and unfortunately the Docker layer cache is not shared between them, so if you build on a 'new' instance with no cache, it will take longer. If you build on an instance that has previously run a build, it should re-use the layer cache from before.
Convox is actively looking into utilising buildx and its options to open up more build options and quicker builds, so keep an eye out for that!
Thanks,
I have 9 instances in the prod rack at the moment, but that can scale a bit at times.
It would be great to be able to stick the builds to an instance so I can hit the cache.
:)

How to change the local folder for an App Engine project?

TIA for your help.
I recently started experimenting with Google App Engine, and I have been able to set up a project successfully.
However, I made a mistake with the location of my local files and I would like to change it.
This is the output from my console when I deploy:
jnkrois@dev:~/Development/My_Project$ gcloud app deploy
Initializing App Engine resources...done.
You are about to deploy the following services:
My_Project/default/1234567890 (from [/home/jnkrois/Development/My_Project/app.yaml])
Notice that the local path is /home/jnkrois/Development/My_Project/app.yaml
I want to change the gcloud settings in order to pull the files from my /var/www/html/My_Project/
That way I can run the project locally via my Apache server.
Thanks for your help.
That way I can run the project locally via my Apache server.
In the vast majority of cases you won't be able to run your GAE project through apache. Except, maybe, for a totally static website with a very particular config.
The proper way to run your GAE project locally is using the development server, see Using the Local Development Server
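For reference, launching it is a single command (assuming the standard environment; the path is taken from the question's output):

# start the local development server against the project's app.yaml
dev_appserver.py /home/jnkrois/Development/My_Project/app.yaml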
But to answer your question: there is no dependency of the project on anything outside the project directory, so just move the project directory to wherever you want (it's up to you to check and address any permission issues; the example below assumes all permissions are met) and run the gcloud command from the new project location:
mv /home/jnkrois/Development/My_Project /var/www/html
cd /var/www/html/My_Project/
gcloud app deploy
Again, I don't know whether this will help you run it through Apache or not.

How to Use Docker (or alternative) as Test Environment

As part of my job I evaluate many software and applications.
I need to have an environment that is easy to clean (so the previous apps are not bloating my system) and always light.
One idea is to create isolated environments (either by Docker or Virtual machines) and fire up a new environment every time I need to start over with new software to evaluate.
Questions:
1. Does Docker support this? Can I use it to create a new environment every few days and test software in it?
2. If not, which VM system would be suitable for this particular need?
Thanks
This is exactly what all the Continuous Integration systems do: get fresh code, build your project, and run tests inside the freshly created container. This is how clean testing is done nowadays, so Docker fits your needs perfectly.
Each fresh container is a clean environment you can run your tests in. Afterwards you can parse the results and remove the container, for example: docker run --rm -it my-image ./tests.sh
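To make that concrete, a minimal throwaway evaluation cycle might look like this (the image name and test script are placeholders, not from the answer):

# build a disposable image for the software under evaluation
docker build -t eval-env .
# run the tests interactively; --rm deletes the container on exit, keeping the host clean
docker run --rm -it eval-env ./tests.sh
# once the evaluation is over, remove the image as well
docker rmi eval-env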

Steps to get angular 2 universal starter to deploy to an external server host (Google Cloud, Azure, etc)?

I cloned universal-starter (webpack version) and have it up and running on my local machine using npm start and npm run watch, per the instructions.
Now I'm stuck after npm run build and attempting to deploy to Azure (and Google Cloud) via the GitHub integration - I can't figure out how to set up either to work.
Does anyone have a recipe for getting the webpack-bundled files to fire up on an external host with express.js? Do I need to run commands via a CI integration? The files in /dist don't seem to stand on their own.
At Netlify you can connect your git repo and tell them what build commands you want them to use. If you specify the "dist" directory, then they will deploy anything that gets in there (after they have compiled your application).
Edit: the lowest tier is free.
Edit2: I am not associated with Netlify. I just used them in my latest deploy, and found the process extremely easy.
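For illustration, those build settings can also live in a netlify.toml at the repo root; a minimal sketch (the build command is an assumption for this starter):

# hypothetical netlify.toml
[build]
  command = "npm run build"
  publish = "dist"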
Note: This has changed dramatically since Angular 2. While I've now moved on to SSR, Docker, and all kinds of other things, the simplest answer was to:
1) Production build
ng build --prod
2) Transfer the files to a static web host (i.e., I used awscli to connect to an S3 bucket when it was just a static site...I now use SSR, so I need a node server like express); see the sketch after this list
3) Serve the files (there are some complexities around redirecting errors and 404s to index.html, and of course setting the status for both redirects to 200)
4) Put something in front for performance, SSL, etc.; nginx or a CDN would make sense.
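For step 2, a sketch of the awscli upload (the bucket name is a placeholder; note that the S3 website error document still returns a 404 status, so forcing 200 as mentioned in step 3 typically needs something like CloudFront custom error responses in front):

# upload the compiled bundle, deleting files that no longer exist locally
aws s3 sync dist/ s3://my-angular-site --delete
# serve index.html as both index and error document (SPA fallback)
aws s3 website s3://my-angular-site --index-document index.html --error-document index.html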