We upgraded our Couchbase from 4.6 Community Edition to 5.0.0-2873 Enterprise Edition for testing purposes, and our software using the java-client started throwing InvalidPasswordException when trying to open a bucket.
As I've found, every newly created bucket has authType='sasl' and a randomly generated saslPassword.
I've tried creating a bucket using the CLI instead of the GUI:
couchbase-cli bucket-create -c localhost:8091 -u Administrator -p password --bucket=general --bucket-ramsize=1300 --bucket-type=couchbase --bucket-password=
I got the following error:
ERROR: unrecognized arguments: --bucket-password=password
I also tried the bucket-edit function with the same result.
According to the documentation the argument should be valid.
I also tried using the REST API to change the bucket authentication (and similarly the password), but even though this didn't throw any errors, the authType and the password remained the same.
curl -X POST -u Administrator:password -d 'authType=none' http://<host>:8091/pools/default/buckets/general
Again, according to the documentation this should work.
If I query the bucket information for the SASL password and provide that to the openBucket function, the connection works; however, we really don't want to rely on this feature in our system.
So, are there any other ways to remove the bucket authentication in our 5.0 EE Couchbase setup?
In Couchbase 5.0 we no longer support bucket passwords and have moved to using role-based access control when connecting to buckets. This means that in 5.0 the standard (pre-production) way to connect to a bucket is by using the Administrator user and password that you created when setting up the cluster. In case you're unsure what the Administrator user is, it is the user you create when you first go through the Couchbase setup wizard, or the username and password you specify on the command line when running the couchbase-cli cluster-init command.
One thing to note is that using the Administrator user/password is the standard pre-production workflow. I would recommend that when you go into production you create separate users for your application which have access only to the cluster resources they need. You can do this by going to the Users tab in the Administration Console, creating a new user, and giving them the Full Bucket Access role, which is the standard role that applications should have.
You might now be saying to yourself that this all sounds great, but when you use the Administrator user/password you still have issues. If this is the case, the reason is that you have Couchbase 5.0, but your SDK is not new enough to handle the new RBAC authentication mechanism in 5.0. The workaround is to create a user in the Users tab with the same name as the bucket and give that user the Full Bucket Access role. You can then use this user to authenticate.
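If you prefer the command line over the UI for that workaround, here's a hedged sketch using the 5.0 REST API; the bucket name general and the password are placeholders:
curl -X PUT -u Administrator:password -d 'password=bucketpassword&roles=bucket_full_access[general]' http://localhost:8091/settings/rbac/users/local/general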
One last thing to mention is that during an upgrade from a pre-5.0 cluster to a 5.0 cluster, Couchbase will automatically create a user for each bucket. Each user will have the same name as its bucket, and the password for that user will correspond to the bucket password. This is done mainly to ensure that there is no application downtime during an upgrade. After upgrading the cluster, the next step should ideally be to upgrade the Couchbase client library so it starts using RBAC authentication.
If you need to stay with the old approach and no password, you can use couchbase-cli with --rbac-username and --rbac-password "", but you need to specify the password as "", e.g.
./couchbase-cli user-manage -c localhost:8091 -u Admin -p password --set --rbac-username <UserForBucket> --roles bucket_full_access[<BucketName>] --rbac-password "" --auth-domain local
How to get the password of an existing user in RabbitMQ from the CLI?
I got the name of the user with the CLI command "sudo rabbitmqctl list_users", and the output is as follows:
Listing users ...
guest       [administrator]
openstack   []
I want to know the 'openstack' user password.
The user passwords are stored using a one-way hash so there is no way to retrieve their value. You should use the rabbitmqctl change_password command to change that user's password to a known value.
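For example, to set the openstack user's password to a placeholder value of your choosing:
sudo rabbitmqctl change_password openstack 'new-secret-password'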
NOTE: the RabbitMQ team monitors the rabbitmq-users mailing list and only sometimes answers questions on StackOverflow.
If you used a deployment solution, you might find the password in its configuration. For example, when you deploy OpenStack using kolla-ansible, you can find the password in /etc/kolla/passwords.yml.
user@deployhost:~$ grep -E ^rabbitmq_password: /etc/kolla/passwords.yml
rabbitmq_password: haH2ZPjVVKmiqoXdRPCYJcdQyzP2cqeU
It might also be stored in some secure way, for example in a vault if Ansible is used for deployment; in that case you need to check the deployment framework on how to retrieve it.
I am having trouble working through the Compute Engine Quickstart: Build a to-do app with MongoDB tutorial. (edit: I am running the tutorial from within the Compute Engine console; i.e. https://console.cloud.google.com/compute/instances?project=&tutorial=compute_quickstart)
I SSH into the backend instance. I enter the "gcloud compute" command as copied from the tutorial. I am prompted to enter a passphrase. The following is returned:
WARNING: The public SSH key file for gcloud does not exist.
WARNING: The private SSH key file for gcloud does not exist.
WARNING: You do not have an SSH key for gcloud.
WARNING: SSH keygen will be executed to generate a key.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in
...
<< Identifying detail omitted >>
...
ERROR: (gcloud.compute.ssh) Could not fetch resource:
- Insufficient Permission
I had run through this stage of the tutorial on a previous occasion with no problems.
I am working from a Windows 10 PC with the google-cloud-sdk installed. I am using Google Chrome and have tried both regular and incognito modes.
Any help or advice gratefully received!
DaveDub
It looks like the attempt to SSH is recognising the instance in your project, but the user doesn't have the required permissions to access the machine.
Have you tried running:
gcloud auth login
and completing the web-based authorization, to ensure you are attempting to access the machine as the correct (authenticated) user? This process ensures the Cloud SDK you are running inherits the permissions of the user specified in the web-based authorization. See the Cloud SDK authorization documentation for more information.
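To check which account the Cloud SDK is currently using, and re-run the login flow if it isn't the one you expect:
gcloud auth list
gcloud auth login
The first command lists the credentialed accounts and marks the active one; the second opens the web-based flow to authenticate as a different user.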
It's also worth adding the link to the tutorial you are following to your question.
Besides the accepted answer, make sure you are in the correct gcloud project:
gcloud projects list
Then
gcloud config set project <your-project>
I just ran into this for yet another reason. Google has always had poor handling of multi-user auth conflicts with their business products. Whatever account you sign into a clean Chrome session with 'first' gets a special, invisible role. I've noticed with G Suite that I get 'forced' into that first user when I try to access the admin panel, and the only way to escape is to make sure that whatever Google user I use for the G Suite admin is 'first', or to open an incognito window. I've seen this bug for years and can't believe it still exists.
Anyways, I ran into a similar issue. Somehow I was signed in as the wrong Google user, so the link I got when copy/pasting out of 'connect with gcloud command' implied the wrong Google user. I only noticed later, after I gave up and used the terminal, that I was not my normal user... So, you might look into that.
I've just created a new cluster using Google Container Engine running Kubernetes 1.7.5, with the new RBAC permissions enabled. I've run into a problem allocating permissions for some of my services, which led me to the following:
The docs for using container engine with RBAC state that the user must be granted the ability to create authorization roles by running the following command:
kubectl create clusterrolebinding cluster-admin-binding --clusterrole=cluster-admin [--user=<user-name>]
However, this fails due to lack of permissions (which I would assume are the very same permissions which we are attempting to grant by running the above command).
Error from server (Forbidden):
User "<user-name>" cannot create clusterrolebindings.rbac.authorization.k8s.io at the cluster scope.:
"Required \"container.clusterRoleBindings.create\" permission."
(post clusterrolebindings.rbac.authorization.k8s.io)
Any help would be much appreciated as this is blocking me from creating the permissions needed by my cluster services.
Janos's answer will work for GKE clusters that have been created with a password, but I'd recommend avoiding using that password wherever possible (or creating your GKE clusters without a password).
Using IAM: To create that ClusterRoleBinding, the caller must have the container.clusterRoleBindings.create permission. Only the OWNER and Kubernetes Engine Admin IAM Roles contain that permission (because it allows modification of access control on your GKE clusters).
So, to allow person@company.com to run that command, they must be granted one of those roles. E.g.:
gcloud projects add-iam-policy-binding $PROJECT \
--member=user:person@company.com \
--role=roles/container.admin
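Once that role has been granted, the user should be able to create the binding from the question, e.g.:
kubectl create clusterrolebinding cluster-admin-binding --clusterrole=cluster-admin --user=person@company.com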
If your kubeconfig was created automatically by gcloud, then your user is not the all-powerful admin user which you are trying to create a binding for.
Use gcloud container clusters describe <clustername> --zone <zone> on the cluster and look for the password field.
Thereafter execute kubectl --username=admin --password=FROMABOVE create clusterrolebinding ...
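Putting those two steps together, a hedged sketch with placeholder cluster name, zone, and password:
gcloud container clusters describe my-cluster --zone us-central1-a
# copy the password field from the masterAuth section of the output, then:
kubectl --username=admin --password=<password-from-describe> create clusterrolebinding cluster-admin-binding --clusterrole=cluster-admin --user=<user-name>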
I am unable to create a project in OpenShift. I created a project previously and deleted it. It looks like the project still exists, but I am unable to access or delete it. It seems like I am stuck. Also, logging into the console at https://console.preview.openshift.com/console/ doesn't show any existing projects.
I ran the following oc commands from the terminal.
Any suggestions on how to resolve this issue?
Thanks
XX:~ XX$ oc new-project test
Error from server: projectrequests "test" is forbidden: user XX cannot create more than 1 project(s).
XX:~ XX$ oc delete project test
Error from server: User "XX" cannot delete projects in project "test"
XX:~ XX$ oc status
Error from server: User "XX" cannot get projects in project "default"
XX:~ XX$ oc get projects
You need to give your user privileges/policies that allow the actions you want to perform.
If you are just in a proof-of-concept environment, I would recommend making your user cluster-admin in the whole cluster. This will give all possible privileges to your user. Of course this isn't recommended for every user in a 'real' environment.
First you need to authenticate as the 'default admin', which is created during the installation. This default admin user doesn't use the normal user/password authentication; it uses a client certificate.
oc login -u system:admin --config=/etc/origin/master/admin.kubeconfig
Now you will see a list of the available projects (default, openshift management, etc.), and you're able to give cluster roles to other users.
To make your user cluster-admin over the whole cluster:
oadm policy add-cluster-role-to-user cluster-admin <youruser>
Now you have cluster-admin privileges inside the whole cluster. You can also give privileges to a user in one specific project rather than in the whole cluster. In that case use:
oadm policy add-role-to-user <role> <username> (in the current project)
This will give the role to the user, but only inside the project from which you've performed the command, as in the sketch below.
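For example, a hedged sketch giving a hypothetical user alice the edit role in the test project only:
oadm policy add-role-to-user edit alice -n test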
For more information about the available cluster roles and policies, see the official documentation.
I raised a defect with the OpenShift team, as pointed out in the support link:
https://docs.openshift.com/online/getting_started/devpreview_faq.html#devpreview-faq-support
Here is the response I received from the support team:
It seems that you have filed a bug and followed up on this already:
https://bugzilla.redhat.com/show_bug.cgi?id=1368862
After the cause is investigated, our operations team will be sure to clean up the project manually for you, to allow you to continue working with the developer preview.
Latest update:
The project has now been cleaned up and you should be able to create a new project.
I am able to create a project in OpenShift now.
I have a sqlcmd command which writes its results to a file placed in a shared folder.
exec xp_cmdshell 'sqlcmd -S $dataSource -d $dbName -i $inputFilePath -o $outputFilePath'
Now, what if the shared drive is protected and requires a username and password?
How can I provide credentials in the sqlcmd call to get past the authentication?
xp_cmdshell will execute under the NT (Windows) credentials of:
the impersonated login, if logged in using Windows credentials
the service account, if logged in using SQL credentials and no explicit credential object exists
the explicit credential, if logged in using SQL credentials associated with a credential (see CREATE CREDENTIAL and the sketch below)
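For that third case, one way to set up the credential xp_cmdshell uses for non-sysadmin SQL logins is the proxy account; a hedged sketch with a placeholder account and password:
EXEC sp_xp_cmdshell_proxy_account 'DOMAIN\svcaccount', 'P@ssw0rd';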
If you insist on accessing a remote resource (file share) using the default context, you're up shit creek without a paddle, as impersonated access to remote resources is a 'double hop' and requires constrained delegation for at least one of the cases (logged in using NT).
A better option is to explicitly map the remote share \\server\share locally as a drive X: and then access drive X: instead. Mapping a drive locally allows for persisted credentials to be stored, but you have to be careful to make sure the mapping is visible in the service account session. Which is... basically impossible, see Map a network drive to be used by a service.
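A hedged sketch of that mapping, run through xp_cmdshell so it happens in the service account's session; the drive letter, share, and credentials are placeholders:
exec xp_cmdshell 'net use X: \\server\share P@ssw0rd /user:DOMAIN\svcaccount /persistent:yes'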
Now that you know why you cannot do this properly, and that you'll be pulling your hair out and turning white from constantly fighting hard-to-troubleshoot failures, stand back and look at the problem from a different angle: why do you want to use xp_cmdshell to call sqlcmd at all? Call sqlcmd directly, from a job/process. SQL Agent has all the support you need for this; just set the job to run under a proxy account with appropriate credentials to connect to both the remote share and the destination $dataSource.
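A hedged sketch of that Agent setup, with hypothetical names ShareCred and ShareProxy:
CREATE CREDENTIAL ShareCred WITH IDENTITY = 'DOMAIN\svcaccount', SECRET = 'P@ssw0rd';
EXEC msdb.dbo.sp_add_proxy @proxy_name = 'ShareProxy', @credential_name = 'ShareCred';
EXEC msdb.dbo.sp_grant_proxy_to_subsystem @proxy_name = 'ShareProxy', @subsystem_name = 'CmdExec';
A CmdExec job step can then run as ShareProxy and call sqlcmd directly, with access to the share.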