I have a k8s cluster on Azure, and push all logs to OMS. I've noticed there is a table ContainerInventory with a column EnvironmentVar which stores each of a container's environment variables in plain text. Some of these variables are sensitive credentials passed to the container through k8s secrets. Is there any way to restrict OMS to not store this information?
AFAIK you can disable collection of environment variables for a new or existing container by setting the variable AZMON_COLLECT_ENV to False in your Kubernetes deployment YAML configuration file. For more information, please refer to this document. Hope this helps!
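A minimal sketch of what that could look like in a deployment spec (the names and image are placeholders; the AZMON_COLLECT_ENV entry is the relevant part):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                  # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: myregistry.azurecr.io/my-app:latest   # placeholder image
          env:
            # Tells the Azure Monitor/OMS agent not to collect this
            # container's environment variables into ContainerInventory.
            - name: AZMON_COLLECT_ENV
              value: "False"
```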
In short, my scenario is to test a remote API for changes in the called endpoints, e.g. a parameter being removed or something like that.
To get this information I need a token.
My problem is that I can't store the token in the database and use Windows credentials, because in the Azure Pipeline the build agents have no access or connection to the database. And if I pass the token through pipeline variables, then I won't have it when I run the code locally.
appsettings.json is stored in Git, so it is not safe.
Any idea on this?
Thanks!
UserSecrets + environment variables are the key here. appsettings.json is for configuration that is non-sensitive, but there is a concept called user secrets (see link below) that lets you keep an appsettings.json equivalent just on your machine and not in git. Anything you specify in it overrides or adds to what is in appsettings.json.
If that info is also needed in production, then environment variables should be used. Instead of a file, individual settings can be specified or overridden using environment variables.
This all works ONLY if the ASP.NET Core host configuration is set up to read from all of these sources. The default setup from the template should accomplish this, but read the links below to make sure that your setup works.
All the configuration and best practices can be found here.
Non-sensitive info / defaults: appsettings.json
Sensitive info for devs / dev-specific info: UserSecrets
Sensitive info for prod: environment variables
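A rough sketch of that workflow (the ApiSettings:Token key is just an example name, not something from your project):

```bash
# Local development: keep the token in user secrets, which live under your
# user profile and never end up in git.
dotnet user-secrets init
dotnet user-secrets set "ApiSettings:Token" "<token-value>"

# Pipeline / production: supply the same setting as an environment variable.
# A double underscore stands in for the ":" in the configuration key.
export ApiSettings__Token="<token-value>"
```

In Azure Pipelines you would typically map a secret pipeline variable onto that environment variable, so the token never appears in the repository.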
I am trying to deploy a RavenDB instance on a Kubernetes cluster. The deployment should be fully automated, i.e. there should be no need to access the UI to configure something.
I have found plenty of documentation on how Raven can be configured in a container, e.g. with command-line args via RAVEN_ARGS, environment variables (e.g. RAVEN_License_Eula_Accepted), or a custom settings.json file in a mounted volume.
I have tried all the options above, and they all work, except when trying to set a license. I have tried to set either License directly as a JSON string or License.Path pointing to a license.json file mounted in a volume. Yet whenever I access the UI after deploying the container, I get a notification telling me I need to set a license.
Can anyone tell me how I can get Raven to use the license I provide via the approaches mentioned above?
Thanks
You need to bootstrap the cluster with some kind of operation for the license to be picked up. For example, create a database or call the /admin/cluster/bootstrap endpoint.
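For a fully automated deployment, a sketch of that bootstrap call could look like this (the server URL is a placeholder, and I'm assuming the endpoint accepts a plain POST):

```bash
curl -X POST http://ravendb.example.local:8080/admin/cluster/bootstrap
```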
I am using the Database connector component, with the Vault component to store the database credentials. As per the documentation of both components, I have created a different properties file for each environment to store that environment's encrypted credentials.
Following is the structure of my Mule project.
Now the problem with this structure is that I have to build a new deployable zip file whenever I have to update the database credentials for any environment.
I need a solution where I can keep all credentials encrypted and centralized, and I don't have to create a build every time the credentials are updated. We can afford to restart the server, but building a new zip and redeploying is really cumbersome.
The second problem with this approach is that a developer needs to know the production DB credentials to update them in the properties file, which is also a security issue.
Please suggest an alternate approach for credentials management in Mule projects.
I'm going to recommend you do NOT try to change the secure solution provided to you by MuleSoft. To remove the need for packaging and deployment you would have to move the properties files outside of the deployment, and that would be a huge risk. Regardless of where you store the property files within the deployment, if you change the files you have to package and re-deploy. I see the only solution to your problem as moving the files outside of the deployment and storing them securely. The solution Mule provides may be cumbersome, but it secures these files first with encryption and second within the server container. You can move the property files out, but you would have to provide a custom implementation, and you would be assuming great risk to your protected resources.
Set a VM argument, e.g. -Denvironment.type=local, for your local machine in Anypoint Studio.
Read this variable wherever you read your properties file, so that the environment type is resolved dynamically, as below:
" location="classpath:properties/sample-app-${environment.type}.properties" doc:name="Secure Property Placeholder"/>
In order to set the environment type on your production server (or wherever you are running the Mule runtime), open conf\wrapper.conf in your Mule installation and add the argument wrapper.java.additional.<n>=-Denvironment.type=production. If you already have additional properties in this file, you may need to choose the value of <n> accordingly, for example 13 or 14.
This way you don't need to generate different deployment artefacts for different environments, because the correct properties file is picked up using the environment-specific VM argument.
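Putting the two pieces together, a sketch (the index 13 is just an example of an unused slot in wrapper.conf):

```
# Anypoint Studio -> Run Configurations -> Arguments -> VM arguments:
-Denvironment.type=local

# conf/wrapper.conf in the Mule installation on the server:
wrapper.java.additional.13=-Denvironment.type=production
```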
I have used the AWS Community AMI for configuring Spinnaker. I am able to get the lists of ELBs, AMIs and Security Groups while creating a Server Group, but I am not getting the instance types in the custom drop-down list. Any idea what could be going wrong?
Spinnaker Cluster Error
It looks like you do not have the correct IAM role assigned to the user whose access keys you are using for the Spinnaker integration with AWS.
Check whether that user has enough rights in AWS.
If not, then grant AWS PowerUserAccess to your user and try the integration again.
Spinnaker is a tool that needs at least AWS EC2 full access, as it directly accesses EC2 to spin up its server groups.
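If you go the PowerUserAccess route, a sketch with the AWS CLI (the user name spinnaker is hypothetical):

```bash
aws iam attach-user-policy \
  --user-name spinnaker \
  --policy-arn arn:aws:iam::aws:policy/PowerUserAccess
```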
Instance types are cached in the browser's local storage. You can explicitly refresh the cache via the 'Refresh all caches' link:
If you show the network tab of your browser's console (prior to clicking 'Refresh all caches'), you should see a request to http://localhost:8084/instanceTypes.
If the response contains your instance types, you should be good to go.
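You can also query that endpoint directly to see what the UI would receive, e.g.:

```bash
curl http://localhost:8084/instanceTypes
```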
So, a common practice these days is to put connection strings & passwords in environment variables to avoid placing them in a file. This is all fine and dandy, but I'm not sure how to make this work when trying to set up a continuous deployment workflow with a configuration management tool such as Salt/Ansible or Chef/Puppet.
Specifically, I have the following questions in environments using the above mentioned configuration management tools:
Where do you store connection strings/passwords/keys separate from codebases?
Do you keep those items in a code-repo of some type (git, etc.)?
Do you use some structure built-in to your tool?
How do you keep those same items secure?
Do you track changes/back-up these items, and if so, how?
In Chef you can
store passwords or API tokens in either encrypted data bags or chef-vault. They are then decrypted while Chef does the provisioning (encrypted data bags use a shared secret; chef-vault uses the existing PKI of the Chef client).
set environment variables when calling external software using the environment parameter of e.g. the execute resource.
not sure what to write here -- I'd say you don't really manage them. This way you set the variables only for the command that needs them, not e.g. for the whole Chef run.
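A small sketch of both points in a recipe (the bag, item and command names are made up):

```ruby
# Decrypt an encrypted data bag item; the shared secret is picked up from
# the path configured for the Chef client (encrypted_data_bag_secret).
db_password = data_bag_item('credentials', 'database')['password']

# Expose the secret only to this one command via the execute resource's
# `environment` parameter, instead of the whole Chef run.
execute 'run-db-migration' do
  command 'bundle exec rake db:migrate'
  environment 'DB_PASSWORD' => db_password
  sensitive true
end
```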
With Puppet, the preferred way is probably to store the secrets in Hiera files, which are just plain YAML files. That means that all secrets are stored on the master, separate from the manifest files.
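For example, a Hiera data file might look roughly like this (the key name and path are illustrative):

```yaml
# data/common.yaml (the location depends on your hiera.yaml hierarchy)
profiles::db::password: 'changeme'
```

In a manifest you would then read it with something like lookup('profiles::db::password').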
TrueCrypt virtual encrypted disks are cross-platform and independent of tooling. Mount one read-write to change the secrets in the files it contains, unmount it, and then commit/push the encrypted disk image into version control. Mount it read-only for automation.
ansible-vault can be used to encrypt sensitive data files. A CI server like Jenkins, however, is not the safest place to store access credentials. If you add HashiCorp Vault and Ansible Tower/AWX, you can provide a secure solution for several teams.
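A minimal sketch of the ansible-vault part (file names are examples):

```bash
# Encrypt a vars file in place; Ansible decrypts it at runtime.
ansible-vault encrypt group_vars/all/secrets.yml

# Edit the encrypted file later without leaving plaintext on disk.
ansible-vault edit group_vars/all/secrets.yml

# Run a playbook and prompt for the vault password.
ansible-playbook site.yml --ask-vault-pass
```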