I have a Spinnaker local deployment running on Ubuntu 14.04. I followed the notifications setup guide, but it's a bit fuzzy, and I would appreciate any help getting this integrated with Slack.
Currently, I added a custom echo.yml with the Slack config, but it's not being picked up. I also did not know where to specify the custom settings.js (Deck) mentioned in the setup guide (https://www.spinnaker.io/setup/features/notifications/).
I would appreciate any hints/tips. Thanks!
cat /home/user/.hal/default/service-settings/echo.yml
slack:
  enabled: true
  token: some-token
  botName: spinnaker
Distributor ID: Ubuntu
Description: Ubuntu 14.04.5 LTS
Release: 14.04
echo.yml
slack:
  enabled: true
  token: some-token
settings.js
notifications: {
  slack: {
    enabled: true,
    botName: 'Spinnaker'
  }
},
Under features at the bottom of settings.js, make sure notifications is set to true.
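For reference, a minimal sketch of what that block can look like (the exact key name and surrounding structure here are assumptions and may differ between Deck versions, so check against your own settings.js):
feature: {
  // ...other feature flags
  notifications: true,  // must be true for the notifications UI to appear
},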
Also, you must invite the bot into the Slack room; don't worry if the bot is shown as offline.
The latest Halyard allows you to do this with hal - no need for file overrides:
https://www.spinnaker.io/reference/halyard/commands/#hal-config-notification-slack
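For example, something along these lines (flag names taken from that reference page; verify against your Halyard version):
hal config notification slack enable
echo $SLACK_TOKEN | hal config notification slack edit --bot-name spinnaker --token
hal deploy apply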
I'm having a bit of a headache getting past basic auth for our UAT environments. We have switched to using Appium, and I am trying to configure CodeceptJS to let me pass credentials through.
My current config looks like this:
exports.config = {
  output: './output',
  helpers: {
    Appium: {
      path: '/wd/hub',
      port: 4723,
      platform: 'Android',
      browserName: 'Chrome',
      url: 'https://test.co.uk/',
      basicAuth: {username: 'user', password: 'pass'},
      show: true,
    },
  },
};
I have looked through the CodeceptJS docs and this should pass my credentials through; does Appium work differently? All I am doing is hitting a website on a mobile device, so any help would be greatly appreciated.
Forgot to mention I am testing on Android/Chrome. Also, at the moment the username/password get posted in the URL, which obviously results in a "site cannot be reached" error.
Finally figured it out: it seems special characters in the username/password were causing the problem. Now resolved.
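In case it helps anyone else, one possible way to sidestep that (my assumption, not something from the CodeceptJS docs) is to percent-encode the credentials before they end up in the URL:
// Hypothetical sketch: percent-encode special characters in basic-auth credentials
// before embedding them in the URL (the values here are made up).
const user = encodeURIComponent('my.user@example');
const pass = encodeURIComponent('p@ss:word!');
const url = `https://${user}:${pass}@test.co.uk/`;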
I have 5 profiles for my Spring Boot application
application.yml
application-prod.yml
application-stg.yml
application-dev.yml
application-local.yml
One default config and 4 for different environments.
application.yml looks like this
spring:
  cloud:
    vault:
      enabled: ${VAULT_ENABLED:false}
      host: ${VAULT_HOST}
      port: ${VAULT_PORT}
      authentication: aws_iam
      aws-iam:
        role: ${VAULT_POLICY}
        server-name: ${VAULT_HOST}
      kv:
        backend: kv
        enabled: true
Some of the properties are provided by the host via environment variables.
To support local development, I am overriding the authentication in the local profile like this:
spring:
  cloud:
    vault:
      enabled: true
      authentication: token
      token: ${VAULT_TOKEN}
Now the question is how to import the config correctly.
If I put spring.config.import: "vault:" in application.yml, it fails when running with the local profile: the ConfigData API tries to resolve the vault: import immediately after the default profile is processed, before the auth info from the profile-specific file has been loaded. Since the local profile is supposed to use a different auth method, it cannot access Vault and fails.
Another question is how to disable Vault in some cases. I could set spring.cloud.vault.enabled=false, but this again causes a failure because ConfigData cannot resolve vault:.
Yes, I could use the legacy bootstrap mode, which would work fine for my scenario, but in the longer run that wouldn't be ideal...
The only thing that comes to mind is to create an additional profile, e.g. vault, which would be loaded last. By enabling/disabling this profile I could control whether the config from Vault is imported or not, for example:
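Just a sketch of that idea, assuming your Spring Boot version allows spring.config.import inside a profile-specific file (verify before relying on it):
# application-vault.yml -- hypothetical profile-specific file, only picked up when the
# "vault" profile is active, e.g. --spring.profiles.active=prod,vault
spring:
  config:
    import: "vault:"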
Any other ideas?
We have the same problem, but we found a workaround: override Spring Boot's default import order by also importing the profile-specific configuration files explicitly via spring.config.import in application.yml, like this:
spring:
  profiles:
    active: ${STAGE:local}
  config:
    import:
      - optional:classpath:application-${STAGE:local}.yml
      - vault://secret/our-secret
Note that the STAGE environment variable corresponds to the profile used per stage. We made the import of the profile-specific configuration file optional, as we don't have a dedicated file for every stage.
By providing the import for the profile-specific configuration files explicitly before the vault config, we can override the default vault settings before the vault is accessed.
Still, this approach feels a bit awkward, but it's the only way we have found so far to work around the issue, so better solutions would be appreciated.
I'm setting up a Strapi installation in my Apache cPanel (WHM on CentOS 7) and can't find a proper way to deploy it. I've managed to get it running, but when I try to access the dashboard (/admin), it just shows the index page (the one in public/index).
Is this the proper way to deploy Strapi to an Apache server?
Is the "--quickstart" setting only for testing purposes, or can this be used in Production? If so, what are the pre-deployment steps I need to take?
This is for a simple project that requires easy-to-edit content that will be fetched manually via the API from another cPanel installation.
Reading through the Strapi docs, I could only find deployment information for Heroku, Netlify, and other third-party services like these; nothing on hosting it yourself on Apache/cPanel.
I've tried setting up a "--quickstart" project locally, getting it working, and then deploying via Bitbucket Pipelines. After that, I just go into the cPanel terminal and start it, but the aforementioned problem occurs: I can't access the admin dashboard.
Here's my server.json configuration:
Production
{
"host": "api.example.com",
"port": 1337,
"production": true,
"proxy": {
"enabled": false
},
"cron": {
"enabled": false
},
"admin": {
"autoOpen": false
}
}
Development
{
"host": "localhost",
"port": 1337,
"proxy": {
"enabled": false
},
"cron": {
"enabled": false
},
"admin": {
"autoOpen": false
}
}
There are no console errors, nor 404s when trying to access it.
Edit
Regarding deployment with the --quickstart setting:
there are many features (mainly related to searching) that don't work properly with SQLite (due to its lack of proper index support), not to mention possible slowness caused by disk speed and the raw IOPS of the disk.
A suggestion on how to implement:
Respectfully, to deploy Strapi you likely need to do one of the following:
1. build a docker container for it
2. make a script to deploy it
3. use SSH and do it manually
4. use a CI/CD platform and a script to deploy it
In summary:
Strapi is not your typical "copy the files and start Apache" flat-file setup; Strapi itself is designed to be run as a service, similar to Apache/Nginx/MySQL etc. They are all services (though Strapi does need Apache/Nginx/Traefik in front of it to handle SSL via proxying).
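To illustrate the proxying part, a minimal Apache vhost sketch (hostname and port assumed from the question's server.json; mod_proxy and mod_proxy_http need to be enabled):
<VirtualHost *:80>
    ServerName api.example.com
    ProxyPreserveHost On
    # Forward everything to the Strapi service listening on 127.0.0.1:1337
    ProxyPass / http://127.0.0.1:1337/
    ProxyPassReverse / http://127.0.0.1:1337/
</VirtualHost>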
If you see the index page when you visit /admin, it's because the admin panel has not been built.
Please run yarn build before starting your application.
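For example (these are the standard Strapi scripts; adjust if your project uses npm instead of yarn):
NODE_ENV=production yarn build
NODE_ENV=production yarn start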
I would like to know how the YARN Web UI running at port 8088 consolidates the health status of DataNodes, NameNodes, and other cluster components.
For example, this is what I see when I open the Web UI:
Hi guy, your all datanodes are healthy.
The ResourceManager REST APIs allow the user to get information about the cluster: status of the cluster, metrics on the cluster, scheduler information, information about nodes in the cluster, and information about applications on the cluster.
The example below is taken from the official documentation (the Cluster Nodes API, which reports per-node health).
Request:
GET http://<rm http address:port>/ws/v1/cluster/nodes
Response:
{
"nodes":
{
"node":
[
{
"rack":"\/default-rack",
"state":"NEW",
"id":"h2:1235",
"nodeHostName":"h2",
"nodeHTTPAddress":"h2:2",
"healthStatus":"Healthy",
"lastHealthUpdate":1324056895432,
"healthReport":"Healthy",
"numContainers":0,
"usedMemoryMB":0,
"availMemoryMB":8192,
"usedVirtualCores":0,
"availableVirtualCores":8
},
{
"rack":"\/default-rack",
"state":"NEW",
"id":"h1:1234",
"nodeHostName":"h1",
"nodeHTTPAddress":"h1:2",
"healthStatus":"Healthy",
"lastHealthUpdate":1324056895092,
"healthReport":"Healthy",
"numContainers":0,
"usedMemoryMB":0,
"availMemoryMB":8192,
"usedVirtualCores":0,
"availableVirtualCores":8
}
]
}
}
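To query this against a running cluster, something like the following should work (the hostname is a placeholder; 8088 is the default ResourceManager web port):
# RM_HOST is a placeholder for your ResourceManager hostname
curl -s "http://${RM_HOST}:8088/ws/v1/cluster/nodes"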
More information can be found at the link below:
https://hadoop.apache.org/docs/r2.6.0/hadoop-yarn/hadoop-yarn-site/ResourceManagerRest.html
I hope this helps.
I am trying to set up the RestComm Web SDK demo application on my local system; I just want to create an application for audio/video, chat, IVR, etc. (RestComm provides a perfect solution for my needs). I have set up the RestComm Web SDK locally, and whenever I try to make a SIP call it throws WebRTCommClient:call(): catched exception: NotSupportedError: Failed to construct 'RTCPeerConnection': Unsatisfiable constraint IceTransports in the browser console.
My WebRTC configuration is as below:
// setup WebRTClient
wrtcConfiguration = {
  communicationMode: WebRTCommClient.prototype.SIP,
  sip: {
    sipUserAgent: 'TelScale RestComm Web Client 1.0.0 BETA4',
    sipRegisterMode: register,
    sipOutboundProxy: parameters['registrar'],
    sipDomain: parameters['domain'],
    sipDisplayName: parameters['username'],
    sipUserName: parameters['username'],
    sipLogin: parameters['username'],
    sipPassword: parameters['password'],
  },
  RTCPeerConnection: {
    iceServers: undefined,
    stunServer: 'stun.l.google.com:19302',
    turnServer: undefined,
    turnLogin: undefined,
    turnPassword: undefined,
  }
};
Meanwhile, I can use Olympus without any issue in the Chrome browser. I am stuck with this exception; any suggestions would be highly appreciated.
I think the problem here is that the version of the webrtcomm library bundled with the demo application you are using is outdated and doesn't include a fix for the latest Chrome version. So please replace samples/hello-world/scripts/WebRTComm.js within your repository with:
https://github.com/RestComm/webrtcomm/blob/master/build/WebRTComm.js
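For example, from the repository root (the URL below is just the raw counterpart of the link above):
curl -L -o samples/hello-world/scripts/WebRTComm.js \
  https://raw.githubusercontent.com/RestComm/webrtcomm/master/build/WebRTComm.js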
That should fix your issue.
Best regards,
Antonis Tsakiridis