Spinnaker AWS Provider not allowing create cluster - spinnaker

Deployed Spinnaker in AWS to run a test in the same account. However, I am unable to configure server groups. If I click create, the task is queued with the account configured via hal on the CLI. Any way to troubleshoot this? The logs are looking light.

The storage backend needs to be configured correctly:
https://www.spinnaker.io/setup/install/storage/
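For example, a minimal Halyard sketch for an S3 storage backend looks something like this (the bucket name and region below are placeholders):

hal config storage s3 edit --bucket my-spinnaker-bucket --region us-east-1
hal config storage edit --type s3
hal deploy apply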

Related

Cloud Foundry on Google Cloud Platform

My application has a Jackrabbit Oak implementation, so it uses the direct binary upload feature through S3DataStore for storing files on AWS-S3. For the AWS-S3 integration, we had created a service broker instance on Cloud Foundry, which was on our on-premises server.
Now I have moved Cloud Foundry to Google Cloud Platform, but when I try to search for the AWS-S3 service broker using the cf marketplace command, I cannot see the aws-s3 service broker.
How do I get the aws-s3 service broker, and if that is not possible, is there any way to integrate AWS-S3 storage with an application deployed on Cloud Foundry on GCP in this scenario?
It's hard to know what you had deployed on your platform as we don't have any context of what was installed there. Just a guess, but it sounds like you had the Tanzu AWS Service Broker installed. It has service offerings for aws-s3.
https://docs.pivotal.io/aws-services/creating.html#view
You can still install the Tanzu AWS Service Broker when running Tanzu Application Service on top of GCP; you just need an AWS account where the broker will create your service instances. The broker creates AWS resources on behalf of users under a given AWS account, so as long as you still have an AWS account you can make it work.
That said, there's also a GCP broker that functions in the same way, so if you are trying to move off AWS to GCP entirely you could look at using the GCP broker instead. GCP has a similar cloud storage offering.
https://docs.pivotal.io/partners/gcp-sb/index.html
Once you install either broker, you'll see the service plan offerings in your marketplace. If you're still not seeing them, check cf service-access as an admin user. You may need to enable access to those services with cf enable-service-access.
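For example, assuming the broker registers its offering under the name aws-s3, the check could look like this:

cf service-access          # run as an admin to see which offerings are enabled
cf enable-service-access aws-s3
cf marketplace             # the aws-s3 plans should now appear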
Go back to the team that moved you between CF/AWS and CF/GCP and tell them you need your S3 broker back :)

Gridgain console load balance

I have a GridGain three-node cluster and I am also running the GridGain Web Console agent and Web Console on all three nodes. It is all hosted on Windows Server.
I would like to load balance my Web Console. The problem is I don't know how to share the user registration database, which it stores in a work directory. Can I use an external database to store all that information so that my cluster uses the same database?
There is a problem with the Web Console agent as well. How do I share the tokens stored in default.properties?
There is no definitive guide on how to create a cluster of Web Consoles for high availability.
Can someone please guide me on how I can form a cluster for the Web Console, sharing its user store and tokens?
Thanks
If you are looking for multi-cluster support, take a look at documentation:
https://www.gridgain.com/docs/web-console/latest/multi-cluster-support
If you are looking for agent fault tolerance: just start several agents. The first agent will process all messages; the others will be in hot-standby mode.
If you are looking for connection fault tolerance between the agent and the cluster (if the cluster node that the agent connects to fails, the Web Console will lose its connection to the cluster), just specify several node addresses as a comma-separated list for the "node-uri" parameter (in default.properties or as a command-line argument).
For example:
node-uri=http://192.168.0.1:8080,http://192.168.0.2:8080,http://192.168.0.3:8080
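As a rough sketch, a shared default.properties for every agent might then look like this (the token value and addresses are placeholders):

# Security token(s) issued by Web Console; reuse the same value on each agent
tokens=1a2b3c4d-0000-0000-0000-000000000000
# Comma-separated list of node REST addresses for connection fault tolerance
node-uri=http://192.168.0.1:8080,http://192.168.0.2:8080,http://192.168.0.3:8080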
Hope this helps.

Can't deploy marketplace object on GKE

I have a running Kubernetes cluster on Google Cloud Platform.
I want to deploy a postgres image to my cluster.
When selecting the image and my cluster, I get the error:
insufficient OAuth scope
I have been reading about it for a few hours now and couldn't get it to work.
I managed to set the scope of the VM to allow APIs:
Cloud API access scopes
Allow full access to all Cloud APIs
But from the GKE cluster details, I see that everything is disabled except Stackdriver.
Why is it so difficult to deploy an image or to change the scope?
How can I modify the cluster permissions without deleting and recreating it?
The easiest way is to delete and recreate the cluster, because there is no direct way to modify the scopes of an existing cluster. However, there is a workaround: create a new node pool with the correct scopes and delete the old node pools. The cluster's scopes will then reflect the new node pool.
More details can be found in this post.
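A rough gcloud sketch of that workaround (cluster name, pool names, and zone are placeholders):

# Create a replacement node pool with the scopes you need
gcloud container node-pools create new-pool --cluster my-cluster --zone us-central1-a --scopes cloud-platform
# After workloads have moved onto the new pool, remove the old one
gcloud container node-pools delete default-pool --cluster my-cluster --zone us-central1-a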

Retrieve application config from secure location during task start

I want to make sure I'm not storing sensitive keys and credentials in source or in Docker images. Specifically, I'd like to store my MySQL RDS application credentials and copy them in when the container/task starts. The documentation provides an example of retrieving the ecs.config file from S3, and I'd like to do something similar.
I'm using the Amazon ECS-optimized AMI with an auto scaling group that registers with my ECS cluster. I'm using the ghost Docker image without any customization. Is there a way to configure what I'm trying to do?
You can define a volume on the host and map it into the container with read-only privileges.
Please refer to the following documentation for configuring a data volume for an ECS task.
http://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_data_volumes.html
Even though the container does not have the config at build time, it will read the configs as if they are available in its own file system.
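As a sketch, the relevant pieces of such a task definition might look like this (the host path, and the config path inside the ghost image, are assumptions you would adjust):

{
  "family": "ghost",
  "volumes": [
    { "name": "ghost-config", "host": { "sourcePath": "/etc/ghost-config" } }
  ],
  "containerDefinitions": [
    {
      "name": "ghost",
      "image": "ghost",
      "memory": 512,
      "mountPoints": [
        { "sourceVolume": "ghost-config", "containerPath": "/var/lib/ghost/config", "readOnly": true }
      ]
    }
  ]
}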
There are many ways to secure the config on the host OS.
In my past projects, I have achieved the same by disabling SSH into the host and injecting the config at boot-up using cloud-init.
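As an example of that approach, a hypothetical user-data script on the ECS-optimized AMI could pull the credentials from a private S3 bucket at boot (the bucket and paths are made up; the instance profile needs s3:GetObject on the object, and the AWS CLI must be present on the host):

#!/bin/bash
# Fetch application credentials from S3 onto the host at boot
mkdir -p /etc/ghost-config
aws s3 cp s3://my-config-bucket/ghost-db-credentials.json /etc/ghost-config/credentials.json
chmod 600 /etc/ghost-config/credentials.json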

Can I test an application using both a worker role AND a VM role in the Azure emulator?

I've looked but can't see an answer to this one:
I have an application that passes Azure messages between a VM role and a worker role. Before I load this into Azure I'd like to test that both work correctly by using the Azure emulator.
Does anyone know if the Azure emulator will accept messages that originate from the VM role and will it allow me to send messages to the VM? Is there a workaround or solution to this?
Both the emulator and the VM will be running on the same host server in my case.
The queues are accessed as HTTP endpoints, so you need to ensure that both components you want to test can access the queue.
If you want to test your application using the storage emulator (an HTTP endpoint provisioned on your local machine, normally http://127.0.0.1:10001/ for queues), then you will need to ensure that the VM role can reach that address.
I would recommend testing with the real storage service. There are differences between the emulator and the actual service, so it's better to test the real deal (you can always create a test queue).
In this case the endpoint will be on the internet (e.g. http://myaccount.queue.core.windows.net/).
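For reference, the two connection strings might look like this (the account name and key are placeholders):

Storage emulator (local development account): UseDevelopmentStorage=true
Real storage service: DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=<your-account-key>;EndpointSuffix=core.windows.net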