I am writing a k8s operator. In my CR, I want to accept a password and store it in a Secret.
Everything works fine, except that the password gets printed to the screen when I describe my object with kubectl describe myKind myObject. Is there any way to hide a particular property from the spec, or at least show *** instead of the actual value, just like a Secret, which only shows the byte count and not the actual value?
I added the line // +kubebuilder:validation:Format=password before my property. This adds format: password to the CRD, but when I describe myObject it still prints all spec values to the console.
Edit: to shed some more light on this:
My _types.go snippet is:
// DB username
DbUser string `json:"dbUser,required"`
// DB password
// +kubebuilder:validation:Format=password
DbPassword string `json:"dbPassword,required"`
So I am making a k8s Secret out of dbUser and dbPassword.
Another option is to ask users to create the secret as a prerequisite, but I am not happy with that approach.
Thanks in advance.
You should NOT store passwords, tokens, etc. in plain text in the CR. They will be visible to anyone with permission to read the CR (regardless of whether kubectl describe shows them or not).
I would recommend changing the CRD spec so users can reference their secret by name. Users will need to create a secret of type Opaque and then create a CR that looks something like this:
apiVersion: "grp/v1"
kind: "mykind"
metadata:
  name: "my-kind-cr"
  namespace: "default"
spec:
  secretName: mysecret
where the secret would look like this:
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
  namespace: default
type: Opaque
stringData:
  dbPassword: "my password"
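Inside the operator, the reconciler then reads the referenced Secret instead of a plain-text spec field, so the password never appears in the CR. A minimal controller-runtime sketch; the MyKind type, Spec.SecretName field, and module path are placeholder names standing in for your own CRD:

package controllers

import (
    "context"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/types"
    ctrl "sigs.k8s.io/controller-runtime"
    "sigs.k8s.io/controller-runtime/pkg/client"

    mygroupv1 "example.com/my-operator/api/v1" // hypothetical module path for your CRD types
)

// MyKindReconciler is a placeholder name for your operator's reconciler.
type MyKindReconciler struct {
    client.Client
}

func (r *MyKindReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
    // Fetch the custom resource; it now carries only the Secret's name, not the password.
    var obj mygroupv1.MyKind
    if err := r.Get(ctx, req.NamespacedName, &obj); err != nil {
        return ctrl.Result{}, client.IgnoreNotFound(err)
    }

    // Fetch the user-created Secret referenced by spec.secretName.
    var secret corev1.Secret
    key := types.NamespacedName{Name: obj.Spec.SecretName, Namespace: obj.Namespace}
    if err := r.Get(ctx, key, &secret); err != nil {
        return ctrl.Result{}, err // requeues until the Secret exists
    }

    // Secret.Data values arrive as already-decoded byte slices.
    dbPassword := string(secret.Data["dbPassword"])
    _ = dbPassword // use it to configure the database connection; never log or print it

    return ctrl.Result{}, nil
}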
I am trying to fetch some certificates from HashiCorp Vault using a Terraform data source.
This is how the cert path looks in Vault:
serverA:
  dev-cert: <base64 encoded cert>
  qa-cert: <base64 encoded cert>
  test-cert: <base64 encoded cert>
This cert is used in another resource block, which works fine, as shown below:
resource <somegcpresource> <xyz> {
  certificate = base64decode(data.vault_generic_secret.server_cryptoobjects.data["dev-cert"])
}
Can I use a custom variable to fetch the value of the certificate, like this:
certificate = base64decode(data.vault_generic_secret.server_cryptoobjects.data["var.env-cert"])
Or a local variable to reference the key name from the Vault data source, like this:
certificate = base64decode(data.vault_generic_secret.server_cryptoobjects.data[local.certname])
Yes, data.vault_generic_secret.server_cryptoobjects.data is an object, so you can access its values by their corresponding keys. If you declare a variable env-cert:
variable "env-cert" {
  type    = string
  default = "dev-cert"
}
then you can use it as the key:
certificate = base64decode(data.vault_generic_secret.server_cryptoobjects.data[var.env-cert])
Note that the variable reference goes inside the brackets without quotes; "var.env-cert" in quotes would be looked up as a literal key name.
Yes, you can also use a local as the key:
locals {
  certname = "dev-cert"
}
certificate = base64decode(data.vault_generic_secret.server_cryptoobjects.data[local.certname])
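For completeness, here is a minimal sketch of how the pieces could fit together; the KV path below is an assumption, and the variable key inside the brackets is written without quotes:

# The mount/path is an assumption; adjust it to where serverA actually lives in Vault.
data "vault_generic_secret" "server_cryptoobjects" {
  path = "secret/serverA"
}

variable "env-cert" {
  type    = string
  default = "dev-cert"
}

# Looks up the key dynamically: var.env-cert (no quotes), not "var.env-cert".
output "selected_cert" {
  value     = base64decode(data.vault_generic_secret.server_cryptoobjects.data[var.env-cert])
  sensitive = true
}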
I would also suggest looking at the Vault PKI secrets engine for your overall use case if you have not already, since this example in the question is using the KV2 secrets engine.
I'm working with Flux2. I'm new to Flux, and I'm trying to set up the Image Reflector controller to find the latest image tag in my image registry, but I'm getting an error on my image policy: 'unable to determine latest version from provided list'.
In my registry I have the following tags:
16
rc-9.20.7975.18473
Flux is reporting that it's connecting to my image registry and says 'successful scan, found 2 tags'. Based on my image policy below, I was expecting only 1 tag to match.
Here is my Image Policy:
apiVersion: image.toolkit.fluxcd.io/v1beta1
kind: ImagePolicy
metadata:
  name: xxxxxxxx
spec:
  imageRepositoryRef:
    name: xxxxxxxx
  filterTags:
    pattern: '^rc-(?P<ts>.*)'
    extract: '$ts'
  policy:
    semver:
      range: '^9.20.x.x'
I would like it to update on new 'rc' images. Any thoughts on why the Image Reflector says it found 2 tags when '16' isn't matched by the filter pattern? What should I change in my Image Policy to determine the latest version? Thanks!
Your range is not correct; it should be '>=9.20.0.0'. For more details, check https://fluxcd.io/flux/components/image/imagepolicies/
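Applied to the policy in the question, that would look something like this (only the range changes; the names remain your placeholders):

apiVersion: image.toolkit.fluxcd.io/v1beta1
kind: ImagePolicy
metadata:
  name: xxxxxxxx
spec:
  imageRepositoryRef:
    name: xxxxxxxx
  filterTags:
    pattern: '^rc-(?P<ts>.*)'
    extract: '$ts'
  policy:
    semver:
      range: '>=9.20.0.0'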
Can someone help me with how I can update an authentication data entry using wsadmin, without logging into the WAS console? I have too many data sources, and doing them manually is a time-consuming procedure. Below is as far as I can get; I'm not sure how to use the arguments. Thanks for your help in advance.
wsadmin>$AdminTask help modifyAuthDataEntry
WASX8006I: Detailed help for command: modifyAuthDataEntry
Description: Modify an authentication data entry
Target object: None
Arguments:
securityDomainName - Name used to uniquely identify the security domain.
*alias - The alias of the auth data.
user - The username of the auth data.
password - The password of the auth data.
description - The description of the auth data.
Steps:
None
wsadmin>
Modify the auth data entry with:
AdminTask.modifyAuthDataEntry('[-alias myAlias -user myUser -password myPassword -description "my alias description" ]')
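Since you have many data sources to update, you can also loop over the aliases in a single wsadmin Jython script and save the configuration once at the end; a sketch with placeholder alias, user, and password values:

# wsadmin Jython sketch: update several auth data entries in one run (all values are placeholders).
entries = [
    ('dsAlias1', 'dbUser1', 'newPassword1'),
    ('dsAlias2', 'dbUser2', 'newPassword2'),
]

for alias, user, password in entries:
    AdminTask.modifyAuthDataEntry('[-alias %s -user %s -password %s ]' % (alias, user, password))

# Persist the changes to the configuration repository.
AdminConfig.save()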
In general, to learn the wsadmin command for an Admin Console operation that you know how to perform, you can use the command assistance function, which captures the wsadmin scripting command equivalent to the last console action.
I am attempting to upload a file to S3 following the examples provided in your documentation and source files. Unfortunately, I'm receiving the following errors when attempting an upload:
[Fine Uploader 5.3.2] Invalid policy document or request headers!
[Fine Uploader 5.3.2] Policy signing failed. Invalid policy document or request headers!
I found a few posts on here with similar errors, but those solutions didn't help me.
Here is my jQuery:
<script>
    $('#fine-uploader').fineUploaderS3({
        request: {
            endpoint: "http://mybucket.s3.amazonaws.com",
            accessKey: "changeme"
        },
        signature: {
            endpoint: "endpoint.php"
        },
        uploadSuccess: {
            endpoint: "success.html"
        },
        template: 'qq-template'
    });
</script>
(Please note that I changed the keys/bucket names for security's sake.)
I used your endpoint-cors.php as a model and have included the portions that I modified here:
require 'assets/aws/aws-autoloader.php';
use Aws\S3\S3Client;
// These assume you have the associated AWS keys stored in
// the associated system environment variables
$clientPrivateKey = $_ENV['changeme'];
// These two keys are only needed if the delete file feature is enabled
// or if you are, for example, confirming the file size in a successEndpoint
// handler via S3's SDK, as we are doing in this example.
$serverPublicKey = $_ENV['AWS_SERVER_PUBLIC_KEY'];
$serverPrivateKey = $_ENV['AWS_SERVER_PRIVATE_KEY'];
// The following variables are used when validating the policy document
// sent by the uploader.
$expectedBucketName = $_ENV['mybucket'];
// $expectedMaxSize is the value you set the sizeLimit property of the
// validation option. We assume it is `null` here. If you are performing
// validation, then change this to match the integer value you specified
// otherwise your policy document will be invalid.
// http://docs.fineuploader.com/branch/develop/api/options.html#validation-option
$expectedMaxSize = (isset($_ENV['S3_MAX_FILE_SIZE']) ? $_ENV['S3_MAX_FILE_SIZE'] : null);
I also changed this:
// Only needed in cross-origin setups
function handleCorsRequest() {
    // If you are relying on CORS, you will need to adjust the allowed domain here.
    header('Access-Control-Allow-Origin: http://test.mydomain.com');
}
The POST seems to work:
POST http://test.mydomain.com/somepath/endpoint.php 200 OK
318ms
...but that's where the success ends.
I think part of the problem is that I'm not sure what to enter for "clientPrivateKey". Is that my "Secret Access Key" I set up with IAM?
And I'm definitely unclear on where I get the serverPublicKey and serverPrivateKey. Where am I generating a key pair for S3? I've combed through the docs, and perhaps I missed it.
Thank you in advance for your assistance!
First off, you are using endpoint-cors.php in a non-CORS environment. Communication between the browser and your endpoint appears to be same-origin, based on the URL of your signature endpoint. Switch to the endpoint.php example.
Regarding your questions about the keys: you should create two distinct IAM users, one for client-side operations (heavily restricted) and one for server-side operations (an admin user). For each user, you'll have an access key (public) and a secret key (private). You always supply Fine Uploader with your client-side public key, and use your client-side private key to sign requests server-side. To perform other, more restricted operations (such as deleting files), you should use your server user's keys.
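In terms of the PHP signature endpoint, the mapping could look roughly like this; the environment variable names are assumptions here, so use whatever names you actually export on your server:

<?php
// Secret key of the restricted *client* IAM user: used only to sign the
// policy documents / requests coming from the browser.
$clientPrivateKey = $_ENV['AWS_CLIENT_SECRET_KEY'];

// Keys of the *server* (admin) IAM user: only needed for server-side work
// such as deleting files or verifying the uploaded file size.
$serverPublicKey  = $_ENV['AWS_SERVER_PUBLIC_KEY'];
$serverPrivateKey = $_ENV['AWS_SERVER_PRIVATE_KEY'];

// The bucket name is not a key: validate against the literal bucket name
// (or an environment variable that contains it), rather than $_ENV['mybucket'].
$expectedBucketName = 'mybucket';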
I am setting up OpenLDAP for one of my Java applications. Usernames and passwords are stored in OpenLDAP, and users are able to update their passwords via the application (using the javax.naming.directory API). I imported our users from our existing Sun Directory Server into OpenLDAP. The import was successful and the passwords were encrypted in SSHA format. I noticed that when I update a password from the application, it is stored in plain-text format: I can unhide the password when I view it via Apache Directory Studio. After a lot of googling, I tried setting "password-hash {SSHA}" in the slapd.conf file, and that didn't help either. I am on a Windows environment. I am passing the password to OpenLDAP in plain text; there is no encryption going on in the code. I know I can encrypt it in the application, but I would prefer OpenLDAP to do it for me. Please let me know if I can do anything on the OpenLDAP side.
This is the Java code I use today to modify passwords. It has been working fine in our existing environment for the past 7 years.
ModificationItem[] newAttribs = new ModificationItem[1];
Attribute passwordAttrib = new BasicAttribute(DirectoryConstants.USER_PASSWORD, password);
ModificationItem passwordItem = new ModificationItem(DirContext.REPLACE_ATTRIBUTE, passwordAttrib);
newAttribs[0] = passwordItem;
.....
DirContext ctx = this.getContext();
ctx.modifyAttributes( DirectoryConstants.USER_UID + "=" + userId + "," + ou, newAttribs);
So, the default password hash format in OpenLDAP is SSHA, which is nice.
Unfortunately, the default password policy in OpenLDAP is 'do not enforce password hashing'.
You will want to add an overlay to the database you're storing users in.
In the cn=config version, this looks approximately like:
dn: olcOverlay={X}ppolicy,olcDatabase={Y}bdb,cn=config
objectClass: olcPPolicyConfig
olcOverlay: {X}ppolicy
olcPPolicyHashCleartext: TRUE
(where Y is your database number in cn=config, X is the overlay number you want it to be)
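If your slapd build loads overlays as dynamic modules, the ppolicy module also has to be loaded before that overlay entry can be added; a hedged sketch of doing both against cn=config over ldapi:/// (adjust the module entry number and file names to your setup):

# load-ppolicy.ldif (only needed when ppolicy is built as a dynamic module)
dn: cn=module{0},cn=config
changetype: modify
add: olcModuleLoad
olcModuleLoad: ppolicy

# Apply with, for example:
#   ldapmodify -Y EXTERNAL -H ldapi:/// -f load-ppolicy.ldif
#   ldapadd -Y EXTERNAL -H ldapi:/// -f ppolicy-overlay.ldif    (the overlay entry above)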
The slapd.conf version is similar; you need an:
overlay ppolicy
ppolicy_hash_cleartext
entry inside the relevant database definition (you don't need to provide a value for ppolicy_hash_cleartext; its presence indicates TRUE).