Spinnaker deploy failed: installation issue with Local Debian deployment - spinnaker

Spinnaker installation failed:
A Spinnaker installation on AWS EC2 is failing. The selected deployment type is Local Debian. Halyard cannot complete the deployment and shows an error during execution.
Environment:
AWS EC2
Storage: S3
Deployment type: Local Debian / Git
What could be the reason for this failure in my deployment?
Halyard version: 1.10.1-20180912131447
Spinnaker version: 1.9.4
hal deploy apply fails with the error below:
/opt/halyard/bin/hal: 22: /etc/default/spinnaker: [[: not found
Picked up _JAVA_OPTIONS: -Xmx8G
+ Get current deployment
Success
+ Prep deployment
Success
Problems in default.provider.aws.spin-install:
- WARNING No validation for the AWS provider has been
implemented.
+ Preparation complete... deploying Spinnaker
~/dev/spinnaker/clouddriver ~/dev/spinnaker ~
git@github.com:spinnaker/clouddriver.git
No changes to stash in clouddriver
HEAD is now at 41f14e1... fix(config): Move core config to clouddriver.config package (#2994)
~/dev/spinnaker/deck ~/dev/spinnaker ~
git@github.com:spinnaker/deck.git
No changes to stash in deck
HEAD is now at f208cbf... fix(core): Fix error when changing execution grouping (#5793)
~/dev/spinnaker/echo ~/dev/spinnaker ~
git@github.com:spinnaker/echo.git
No changes to stash in echo
HEAD is now at b20d3d0... chore(pubsub): add a global enable flag for pubsub (#345)
~/dev/spinnaker/fiat ~/dev/spinnaker ~
git@github.com:spinnaker/fiat.git
No changes to stash in fiat
HEAD is now at 4045c08... chore(dependencies): Bump spinnaker dependencies to 1.0.13 (#262)
~/dev/spinnaker/front50 ~/dev/spinnaker ~
git@github.com:spinnaker/front50.git
No changes to stash in front50
HEAD is now at f7f83f8... refactor(gcs): Update doRetry and bump clouddriver version (#355)
~/dev/spinnaker/gate ~/dev/spinnaker ~
git@github.com:spinnaker/gate.git
No changes to stash in gate
HEAD is now at 47440fb... chore(dependencies): Update gradle plugin to 4.3.0 (#602)
~/dev/spinnaker/igor ~/dev/spinnaker ~
git@github.com:spinnaker/igor.git
No changes to stash in igor
HEAD is now at 2a3d239... chore(dependencies): Bump spinnaker dependencies to 1.0.13 (#303)
~/dev/spinnaker/kayenta ~/dev/spinnaker ~
git@github.com:spinnaker/kayenta.git
No changes to stash in kayenta
HEAD is now at c423808... fix(stackdriver): Drop project_id from response tags. (#382)
~/dev/spinnaker/orca ~/dev/spinnaker ~
git@github.com:spinnaker/orca.git
No changes to stash in orca
HEAD is now at e57fbce... feat(queue): Add `stageType` and `taskType` to MDC while task executing (#2428)
~/dev/spinnaker ~
~/dev/spinnaker ~
~/dev/spinnaker/rosco ~/dev/spinnaker ~
git@github.com:spinnaker/rosco.git
No changes to stash in rosco
HEAD is now at f785bf2... fix(bake/oracle): scrape image id as well as name. (#278)
~/dev/spinnaker ~
+ Get current deployment
Success
- Apply deployment
Failure
Problems in Global:
! ERROR Unexpected exception: java.lang.IndexOutOfBoundsException:
Index: 0, Size: 0
- Failed to deploy Spinnaker.

The Local Debian option should only be used for plugin and Spinnaker development.
If you require a production installation, use Kubernetes, or get started by using Minnaker.
For this issue I suggest the following (sketched as commands below):
Upgrade Halyard.
Make sure your configuration is appropriate by running hal config.
Set the deployment type with hal config deploy edit --type localdebian.
Deploy by running hal deploy apply.
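A minimal sketch of those steps as commands, assuming Halyard was installed with the standard Debian install script (which provides the update-halyard helper); adjust for your setup:
sudo update-halyard                          # upgrade Halyard
hal config                                   # review the current configuration
hal config deploy edit --type localdebian    # confirm the Local Debian deployment type
hal deploy apply                             # re-run the deployment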

Related

kubectl version error: exec plugin is configured to use API version client.authentication.k8s.io/v1alpha1

I was setting up my new Mac for my EKS environment.
After installing kubectl and aws-iam-authenticator and placing the kubeconfig file in the default location, I ran a kubectl command and got the error shown in the command block below.
My cluster uses the v1alpha1 client auth API version, so I wanted to use the same one on my Mac as well.
I also tried the latest version of kubectl (1.23.0) and got the same error. With aws-iam-authenticator (version 0.5.5) I was not able to download a lower version.
Can someone help me resolve this?
% kubectl version
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.0", GitCommit:"af46c47ce925f4c4ad5cc8d1fca46c7b77d13b38", GitTreeState:"clean", BuildDate:"2020-12-08T17:59:43Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"darwin/amd64"}
Unable to connect to the server: getting credentials: exec plugin is configured to use API version client.authentication.k8s.io/v1alpha1, plugin returned version client.authentication.k8s.io/v1beta1
Thanks and Regards,
Saravana
I have the same problem
You're using aws-iam-authenticator 0.5.5; AWS changed its behavior in 0.5.4 to require v1beta1.
It depends on your configuration, but you can try switching the Kubernetes context you're using to v1beta1
by editing your kubeconfig file (usually ~/.kube/config) and changing client.authentication.k8s.io/v1alpha1 to client.authentication.k8s.io/v1beta1.
Otherwise, switch back to aws-iam-authenticator 0.5.3. You might need to build it from source if you're on the M1 architecture, as there is no darwin-arm64 binary published for it (a build sketch follows).
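A rough sketch of building v0.5.3 from source, assuming Go is installed (the cmd path below matches the upstream kubernetes-sigs repository layout):
git clone https://github.com/kubernetes-sigs/aws-iam-authenticator.git
cd aws-iam-authenticator
git checkout v0.5.3
# builds a native arm64 binary in the current directory; move it onto your PATH afterwards
go build -o aws-iam-authenticator ./cmd/aws-iam-authenticator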
This worked for me on an M1 chip:
sed -i .bak -e 's/v1alpha1/v1beta1/' ~/.kube/config
I fixed the issue with the command below:
aws eks update-kubeconfig --name mycluster
I also solved this by updating the apiVersion value in my kube config file (~/.kube/config).
from client.authentication.k8s.io/v1alpha1 to client.authentication.k8s.io/v1beta1
Also make sure the AWS CLI version is up-to-date. Otherwise, AWS IAM Authenticator might not work with v1beta1:
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install --update
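After updating, you can confirm which CLI is on your PATH and regenerate the kubeconfig so the exec section picks up the newer apiVersion (the cluster name here is illustrative):
aws --version
aws eks update-kubeconfig --name mycluster   # rewrites the exec block in ~/.kube/config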
This might help fix the issue for those using GitHub Actions.
In my case I was using kodermax/kubectl-aws-eks with GitHub Actions.
I added the KUBECTL_VERSION and IAM_VERSION environment variables to each step using kodermax/kubectl-aws-eks to pin them to fixed versions:
- name: deploy to cluster
  uses: kodermax/kubectl-aws-eks@master
  env:
    KUBE_CONFIG_DATA: ${{ secrets.KUBE_CONFIG_DATA_STAGING }}
    ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
    ECR_REPOSITORY: my-app
    IMAGE_TAG: ${{ github.sha }}
    KUBECTL_VERSION: "v1.23.6"
    IAM_VERSION: "0.5.3"
Using kubectl 1.21.9 fixed it for me, with asdf:
asdf plugin-add kubectl https://github.com/asdf-community/asdf-kubectl.git
asdf install kubectl 1.21.9
I would also recommend having a .tool-versions file with:
kubectl 1.21.9
This question is a duplicate of error: exec plugin: invalid apiVersion "client.authentication.k8s.io/v1alpha1" (CircleCI).
Please change the authentication apiVersion from v1alpha1 to v1beta1.
Old
apiVersion: client.authentication.k8s.io/v1alpha1
New
apiVersion: client.authentication.k8s.io/v1beta1
Sometimes this can happen if the kube cache is corrupted (which happened in my case).
Deleting and recreating the folder below worked for me. Note that this also removes your kubeconfig, so you will need to regenerate it afterwards (e.g. with aws eks update-kubeconfig).
sudo rm -rf $HOME/.kube && mkdir -p $HOME/.kube

How to build container serving Vue SPA using Cloud Native Buildpacks

Currently I'm trying to build a container that serves a VueJS application via Cloud Native Buildpacks (CNB).
I already have a working Dockerfile that builds the VueJS app in production mode and then copies the result into an nginx image, but I would like to try CNB instead.
For a test I created an empty VueJS project via vue create vue-tutorial and am trying to do something like what is described at https://cli.vuejs.org/guide/deployment.html#heroku, but with CNB.
Does anyone know a working recipe for doing that with CNB?
P.S. Currently I'm trying to build it with
pack build spa --path . \
--buildpack gcr.io/paketo-buildpacks/nodejs \
--buildpack gcr.io/paketo-buildpacks/nginx
but I get the following error (and I'm not sure I'm on the right track):
===> DETECTING
ERROR: No buildpack groups passed detection.
ERROR: Please check that you are running against the correct path.
ERROR: failed to detect: no buildpacks participating
ERROR: failed to build: executing lifecycle: failed with status code: 100
Update: my current Dockerfile
# build stage
FROM node:lts-alpine as build-stage
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
# production stage
FROM nginx:1.19-alpine as production-stage
COPY --from=build-stage /app/dist /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
We chatted about this in Slack, but I wanted to capture it here too:
pack build --buildpack heroku/nodejs --buildpack https://cnb-shim.herokuapp.com/v1/heroku-community/static yourimage
This command may do what you want. The static buildpack used in that example is not yet converted to a cloud native buildpack, but the shim may allow you to build a workable artifact. Then run your image with something like docker run -it -e PORT=5000 -p 5000:5000 yourimagename
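Putting that together, a minimal sketch (the image name spa and port 5000 are just examples; depending on your project, the static buildpack may also expect its own configuration, e.g. a static.json):
pack build spa \
  --buildpack heroku/nodejs \
  --buildpack https://cnb-shim.herokuapp.com/v1/heroku-community/static
docker run -it -e PORT=5000 -p 5000:5000 spa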

Deploying Symfony 4 Application to AWS Elasticbeanstalk

I have a working Symfony 4.0.1 application running on PHP 7.1.14 (locally) that I would like to deploy to AWS Elastic Beanstalk using the EB CLI.
I have a dist package of the application on my master git branch configured for production (vendor folder removed etc) that I am able to successfully deploy to Heroku. Now I need to deploy to AWS EB.
The AWS EB environment has already been set up (although I don't have access to the console). Some environment details are as follows:
Platform: arn:aws:elasticbeanstalk:us-east-2::platform/Tomcat 8 with Java 8 running on 64bit Amazon Linux/2.7.7
Tier: WebServer-Standard-1.0
At first, I was able to successfully deploy the application, but accessing the URL gave a 404 error for every page.
I did some googling and found a few articles describing the use of .config files. I added one named 03_main.config with the following contents:
commands:
  300-composer-update:
    command: "export COMPOSER_HOME=/root && composer.phar self-update -n"
container_commands:
  300-run-composer:
    command: "composer.phar install --no-dev --optimize-autoloader --prefer-dist --no-interaction"
  600-update-cache:
    command: "source .ebextensions/bin/update-cache.sh"
  700-remove-dev-app:
    command: "rm web/app_dev.php"
Deploying with this .config file gives the following deployment failure error:
ERROR: [Instance: i-0c5f61f41d55a18bc] Command failed on instance. Return code: 127 Output: /bin/sh: composer.phar: command not found. command 300-composer-update in .ebextensions/03-main.config failed. For more detail, check /var/log/eb-activity.log using console or EB CLI.
I understand the purpose of .config files but do not understand what additional configuration is needed to get this Symfony app running.
I guess you should use the full path to composer, like below:
100-update-composer:
  command: export COMPOSER_HOME=/root && /usr/bin/composer.phar self-update -n
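If you're not sure where composer actually lives on the instance, you can SSH in with eb ssh and check before hardcoding a path; for example:
# locate the composer binary on the EC2 instance (paths are illustrative)
which composer composer.phar
ls -l /usr/bin/composer* /usr/local/bin/composer* 2>/dev/null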

Spinnaker "enable server group" stage failing

I am getting an exception in the Determine Target Server Group task:
Path parameter "app" value must not be null.
when enabling a server group. Can anyone tell me what I could be doing wrong? I can enable the server group manually, but when I put it in a stage it fails with this error.
Please upgrade to Spinnaker version 1.17. That version solves issues with the Enable Server Group stage.
To upgrade Spinnaker:
Access the Halyard pod.
Get the pod name:
export HALYARD=$(kubectl -n spinnaker get pod -l app=halyard -oname | cut -d'/' -f 2)
Open a bash shell in the Halyard pod:
kubectl -n spinnaker exec -it ${HALYARD} /bin/bash
Obtain the version information by running the Halyard command:
hal version bom
Set the version you want to use (refer to the Spinnaker releases page):
export UPGRADE_VERSION=1.17.6
hal config version edit --version $UPGRADE_VERSION
Deploy and apply the new version with hal:
hal deploy apply

Cannot use apache flink in amazon emr

I cannot start a YARN session of Apache Flink on Amazon EMR. The error message I get is:
$ tar xvfj flink-0.9.0-bin-hadoop26.tgz
$ cd flink-0.9.0
$ ./bin/yarn-session.sh -n 4 -jm 1024 -tm 4096
...
Diagnostics: File file:/home/hadoop/.flink/application_1439466798234_0008/flink-conf.yaml does not exist
java.io.FileNotFoundException: File file:/home/hadoop/.flink/application_1439466798234_0008/flink-conf.yaml does not exist
...
I am using Flink version 0.9 and Amazon's Hadoop version 4.0.0. Any ideas or hints?
The full log can be found here: https://gist.github.com/headmyshoulder/48279f06c1850c62c28c
From the log:
The file system scheme is 'file'. This indicates that the specified Hadoop configuration path is wrong and the system is using the default Hadoop configuration values. The Flink YARN client needs to store its files in a distributed file system.
Flink failed to read the Hadoop configuration files. They are either picked up from the environment variables, e.g. HADOOP_HOME, or you can set the configuration dir in the flink-conf.yaml before you execute your YARN command.
Flink needs to read the Hadoop configuration to know how to upload the Flink jar to the cluster file system such that the newly created YARN cluster can access it. If Flink fails to resolve the Hadoop configuration, it uses the local file system for uploading the jar. That means that the jar will be put on the machine you launch your cluster from. Thus, it won't be accessible from the Flink YARN cluster.
Please see the Flink configuration page for more information.
Edit: On Amazon EMR, export HADOOP_CONF_DIR=/etc/hadoop/conf lets Flink discover the Hadoop configuration directory.
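Putting it together, a minimal sketch of what that looks like on the EMR master node (memory settings taken from the question; adjust to your cluster):
# point Flink at the cluster's Hadoop configuration, then start the YARN session
export HADOOP_CONF_DIR=/etc/hadoop/conf
./bin/yarn-session.sh -n 4 -jm 1024 -tm 4096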
If I were you, I would try this:
./bin/yarn-session.sh -n 1 -jm 768 -tm 768