I have two issues on two different environments, created the same way with local admin credentials and added with an account that has domain admin rights.
Environment #1:
One of them is that, looking at the secondary node, I see a ? (question mark) under Availability Replicas for the primary node, and I do not see the text "Primary" as I do on the primary node. That's new for me. What access rights are missing?
Environment #2 (which fails when failing over):
When trying to add the listener I get this:
Cluster network name resource 'Listenername, clustername' encountered an error enabling the network name on this node. The reason for the failure was:
'Unable to obtain a logon token'.
I set up a peer-to-peer replication topology on two IBM LDAP servers (version 6.4). It works both ways with simple attribute modifications, like changing the description or displayName attributes, but it blocks when I add a new entry on either server. I checked the logs and see an error 50 (insufficient access) for the change. The audit logs show that an "extra" operational attribute, ibm-entryuuid, is added on the other server, which may be causing the error.
It also blocks when I try to log in to an account with an invalid password: I get an error 65 (object class violation). This may be because the password policy mechanism modifies/adds/deletes certain operational attributes (e.g. PWDFAILURETIME).
The schema files are the same on both servers, and both servers are cryptographically synched.
I use JXplorer to test. I use admin credentials.
What should I do to allow these operations to replicate? Thanks in advance for any help.
Update:
I have checked the supplier credentials, and when I tried to change ibm-slapdmasterdn and ibm-slapdmasterpw I got an "Already exists" error. What do I do?
I found the problem. I didn't quite understand what the credentials attributes meant until I re-read the IBM tutorial. I was trying to change the replica DN to the admin DN; that's why I got the error.
It replicates smoothly now.
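For anyone who hits the same wall: the thing to confirm is that ibm-slapdmasterdn on the consumer holds the supplier's replication bind DN, not the admin DN. A rough sketch of the check I did (host, port and bind credentials are placeholders for my environment, so treat anything not mentioned above as an assumption):
# On the consumer, read back the master DN the server expects the supplier to bind with
ldapsearch -h consumer.example.com -p 389 -D cn=root -w adminpw -b cn=configuration "(ibm-slapdMasterDN=*)" ibm-slapdMasterDN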
There is an Azure VM with a BitLocker-encrypted disk in North Europe. Everything has replicated fine to West Europe, but while doing a test failover I get the error below.
Failover Error: ID28031
Error Message: Virtual machine XXX-AZ-WEB01-test' could not be created under the resource group 'XXXX-Destination-RG'. Azure error message: 'Key Vault https://XXX-keyvault-ne.vault.azure.net/keys/Bitlocker/XXXX either has not been enabled for Volume Encryption or the vault id provided does not match /subscriptions/XXXX-XX-XXXX-XXX-XXXX/resourceGroups/XXX-Destination-RG/providers/Microsoft.KeyVault/vaults/XXX-KEYVAULT-WE's true resource id. (Provisioning failed)'.
The things the error mentions are already in place:
Volume encryption is enabled on both the source and destination key vaults.
The user has been assigned all the permissions as per this doc.
Thanks in advance.
Based on the error message, the failover failed with error ID 28031, which is often quota-related. Also check the following:
Are you trying to fail over to a different resource group or key vault? When the VM is restored and encrypted again with the existing keys, it tries to store the keys in the target key vault.
Cross-check that the user has the required Key Vault permissions as mentioned in https://learn.microsoft.com/en-us/azure/site-recovery/azure-to-azure-how-to-enable-replication-ade-vms#required-user-permissions.
While enabling the mentioned Key Vault permissions (on both the primary and recovery vaults) under access policies, also enable volume encryption under the advanced access policies (for failover to work).
Also try creating the resource group and storage account manually, after which Enable Replication was successful.
There is a limitation in Key Vault that can make the failover fail: https://github.com/Azure/azure-cli/issues/4318
Kindly let us know if the above helps or you need further assistance on this issue.
The mistake was that the destination Key Vault had been created, and the keys imported, manually. The destination Key Vault must be created by the script linked below:
https://learn.microsoft.com/en-us/azure/site-recovery/azure-to-azure-how-to-enable-replication-ade-vms#copy-disk-encryption-keys-to-the-dr-region-by-using-the-powershell-script
Once I created the destination Key Vault with the script, everything went smoothly.
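For completeness, one sanity check that would have caught my mistake earlier (a sketch using the AzureRM cmdlets; the vault and resource group names below are the redacted ones from the error message, so substitute your own):
# Confirm the DR vault is enabled for disk/volume encryption
Get-AzureRmKeyVault -VaultName 'XXX-KEYVAULT-WE' -ResourceGroupName 'XXX-Destination-RG' | Select-Object VaultUri, EnabledForDiskEncryption
# If it is not, enable the advanced access policy
Set-AzureRmKeyVaultAccessPolicy -VaultName 'XXX-KEYVAULT-WE' -EnabledForDiskEncryption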
The goal here is to assist the client in configuring his Key Vault so that he can enable TDE and access it over the government portal URL.
Customer Verbatim:
"I am running into an issue when trying to enable TDE for SQL Server 2016. I have attached a few files with show the problem. Basically the problem is when SQL tries to connect to the Azure Key Vault it is using the public suffix (azure.net) instead of the the govcloud suffix (usgovcloudapi.net).
How do I force it to use the correct URL?"
https://vant4gekeyvault.vault.usgovcloudapi.net/
I think the issue is that this is a gov tenant and he's stuck using the commercial URL, and we were unable to force the correct one. I sent him instructions on how to use
Set-AzureRmEnvironment to set AzureKeyVaultServiceEndpointResourceId for *.vault.usgovcloudapi.net (it should be https://vault.usgovcloudapi.net),
but that didn't seem to work. I may be way off on this assumption too, as I'm not really that great with Key Vault. Any ideas or a known fix?
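For reference, what I sent was roughly along these lines (a sketch, not verified; it assumes the built-in AzureUSGovernment environment can be updated in place, and as noted it did not resolve his issue):
Set-AzureRmEnvironment -Name AzureUSGovernment -AzureKeyVaultDnsSuffix 'vault.usgovcloudapi.net' -AzureKeyVaultServiceEndpointResourceId 'https://vault.usgovcloudapi.net'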
Here is his error message:
---SQL
Msg 33049, Level 16, State 2, Line 17
Key with name 'SqlTDEKey' does not exist in the provider or access is denied. Provider error code: 2058. (Provider Error - No explanation is available, consult EKM Provider for details)
---EVENT LOG
The description for Event ID 2 from source SQL Server Connector for Microsoft Azure Key Vault cannot be found. Either the component that raises this event is not installed on your local computer or the installation is corrupted. You can install or repair the component on the local computer.
If the event originated on another computer, the display information had to be saved with the event.
The following information was included with the event:
Vault Name: EKM Operation
Operation: SqlCryptGetKeyInfoByName
Key Name: N/A
Message: Error when accessing registry:5
Read the message again: the account doesn't have permission to modify the registry. It's an issue introduced in the February release of the connector. I ran into a similar issue; the provider tries to create a registry key but doesn't have permission to do so, and therefore it fails. Try the following steps, taken from this blog post [1] (a scripted equivalent is sketched after the list):
1. Open regedit.
2. Navigate to HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft.
3. Create a new key called "SQL Server Cryptographic Provider" (without quotes).
4. Right-click the key and, from the context menu, select 'Permissions'.
5. Give Full Control permissions on this key to the Windows service account that runs SQL Server.
[1] https://www.visualstudiogeeks.com/devops/SqlServerKeyVaultConnectorProviderError2058RegistryConsultEKMProvider
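If you would rather script it than click through regedit, here is a rough PowerShell equivalent of the steps above (run elevated; 'CONTOSO\sqlsvc' is a placeholder for whatever Windows account runs your SQL Server service):
# Create the key the connector expects and grant the SQL Server service account full control
$keyPath = 'HKLM:\SOFTWARE\Microsoft\SQL Server Cryptographic Provider'
New-Item -Path $keyPath -Force | Out-Null
$acl  = Get-Acl -Path $keyPath
$rule = New-Object System.Security.AccessControl.RegistryAccessRule('CONTOSO\sqlsvc', 'FullControl', 'ContainerInherit,ObjectInherit', 'None', 'Allow')
$acl.SetAccessRule($rule)
Set-Acl -Path $keyPath -AclObject $acl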
I am working for an IBM Business Partner and I am trying to complete a first PoC ICP installation. The basic installation has worked. I did not configure LDAP during the deployment, but I am now trying to add an LDAP connection in the console after the fact.
Unfortunately, I always fail, and there seem to be a number of limitations and/or bugs in ICP's LDAP connection, to the point of making it unusable.
First, I would like to connect to an IBM Domino Directory as my LDAP server. Anyone who has worked with a Domino directory before knows that many Domino deployments have an O=Org suffix where Org is a company name containing spaces. For example, in our case it is "O=ARS GmbH". I would normally need to use this as the base DN (search base). However, ICP does not allow spaces in this field ... that needs to be fixed! Every other LDAP client product I have tried to connect to our Domino directory over many years has been able to deal with spaces in the base DN.
Next, in a Domino directory the groups usually have a different suffix (i.e. search base) than the users, but ICP only offers ONE base DN field rather than separate base DN fields for users and groups. Every other LDAP client ... DOES offer this. This needs to be fixed in ICP as well.
Next, the bind DN field does not allow some commonly used special characters which are often found in account names, such as the - character. This needs to be fixed as well (as it happens, the special user ID we have in our Domino directory which we use for LDAP binding is named dir-client ...).
Well, after hitting all those blocking problems, I finally tried to connect to our Microsoft Active Directory. This time I could successfully complete the LDAP connection. After doing so, I turned to "Users" and discovered I need to "Import group". However, no matter what I try to enter as (correct) values into the CN and OU fields, I only end up with an "internal server error".
Furthermore, after I saved the LDAP connection to Active Directory, I could no longer log in to the console with the built-in admin account! And since I could not import any users/groups, I could not assign that role to an LDAP account ... luckily, I had a VM snapshot of the master server and could revert to the earlier state.
This is really frustrating ...
I ran into an identical issue when hooking up to an OpenLDAP server running in a Docker container. It took me a while to figure out which ICP pod and container hold the log file, in order to get more information than "Internal Server Error".
Here is how to find the relevant ICP pod/container log:
Look for the "auth-idp" pods in the kube-system namespace. I use:
kubectl get pods --namespace=kube-system | grep auth-idp
If you are running an HA cluster, you will have a pod on each master node.
In my case I have 3 master nodes. If you are running only a single master, then you will have only one auth-idp pod.
Again, in an HA scenario, you need to figure out which is your current master node. (The easiest, if crude, way to do that is to ssh to your master VIP and see which node you land on.)
Now figure out which pod is running on the current master node. On each pod I use:
kubectl describe pod auth-idp-vq5bl --namespace=kube-system | grep IP
or
kubectl get pod auth-idp-vq5bl --namespace=kube-system -o wide
The one on the IP that is the current master node is where the log of interest will be.
The container in the pod that has the log of interest is: platform-identity-mgmt
To actually see the log file use:
kubectl logs auth-idp-vq5bl --namespace=kube-system --container=platform-identity-mgmt
At that point you will be able to scroll through the log and see a more detailed error message.
In the case of my error, the log indicated my search filter for the group was not working properly. I decided to mess with the user ID map and user filter, so I used a user ID map of *:cn and a user filter of (&(cn=%v)(objectclass=inetOrgPerson)). Once I changed those in the ICP LDAP configuration, the user import succeeded.
However, later I realized the logins were not working, because the login is based on a search on userid or uid. So I changed the user ID map back to *:uid and the user filter back to (&(uid=%v)(objectClass=inetOrgPerson)). That corrected the login issue. I added some users to my LDAP group and reimported the group, and the import worked as well. At this point, I'm not sure what was going on with the original import not working until I messed with the user ID map and user filter. Go figure.
In my OpenLDAP directory instance my groups are all under ou=groups, and each group member is listed as, e.g., cn=Peter Van Sickel,dc=ibm,dc=com. I had to edit the group member so that it used the full DN of an actual user.
My users are all directly under the root DN: dc=ibm,dc=com.
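To sanity-check a filter outside ICP, I would try it with plain ldapsearch first. A sketch against the layout above (host, port, admin DN and the test uid are placeholders):
# Should return exactly one entry for a known user if the filter and base DN are right
ldapsearch -x -H ldap://openldap.example.com:389 -D "cn=admin,dc=ibm,dc=com" -W -b "dc=ibm,dc=com" "(&(uid=pvansickel)(objectClass=inetOrgPerson))" dn uid cn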
As to specific issues with other LDAPs, it is my experience that each has its own set of idiosyncrasies to get things working as desired.
Whilst 'hardening' the accounts - namely removing or toning down accounts with editor permissions on the projects - I removed editor from what appears to be the Kubernetes account that Container Engine uses on the back end of gcloud commands.
Once you remove the last role from an account it vanishes - hard lesson to learn!
Removed editor
serviceAccount:386242358897@cloudservices.gserviceaccount.com
It meant I initially couldn't deploy because it couldn't access container registry.
So I deleted the cluster and recreated it, expecting the account to get recreated. That failed due to insufficient permissions.
So I manually removed the compute instances (it wouldn't have had permissions to recreate them), then the templates, and then the cluster.
As the UI now thinks you have no clusters, it looks like you are back at the beginning. So I ran my scripts, and they failed.
ERROR: (gcloud.container.clusters.create) Operation [https://container.googleapis.com/v1/projects/xxxx/zones/europe-west2-b/operations/operation-xxxx'
startTime: u'2017-10-17T17:59:41.515667863Z'
status: StatusValueValuesEnum(DONE, 3)
statusMessage: u'Deploy error: "Not all instances running in IGM. Expect 1. Current actions &{Abandoning:0 Creating:0 CreatingWithoutRetries:0 Deleting:0 None:0 Recreating:1 Refreshing:0 Restarting:0 Verifying:0 ForceSendFields:[] NullFields:[]}. Errors [https://www.googleapis.com/compute/beta/projects/xxxx/zones/europe-west2-b/instances/gke-xxxx-default-pool-xxxx:PERMISSIONS_ERROR]".'
targetLink: u'https://container.googleapis.com/v1/projects/xxxx/zones/europe-west2-b/clusters/xxxx'
zone: u'europe-west2-b'>] finished with error: Deploy error: "Not all instances running in IGM. Expect 1. Current actions &{Abandoning:0 Creating:0 CreatingWithoutRetries:0 Deleting:0 None:0 Recreating:1 Refreshing:0 Restarting:0 Verifying:0 ForceSendFields:[] NullFields:[]}. Errors [https://www.googleapis.com/compute/beta/projects/xxxx/zones/europe-west2-b/instances/xxxx:PERMISSIONS_ERROR]".
Updated property [container/cluster].
When I try to create through the UI, I get this:
Permission denied (HTTP 403): Google Compute Engine: Required 'compute.zones.get' permission for 'projects/xxxx/zones/us-central1-a'
Have done a number on it!
My problem is that I don't see a way of giving permissions back to whatever account it is trying to use (as I cannot see that account, if it even exists), nor can I see how to attach a new service account with the needed permissions to whatever is doing the work under the hood.
UPDATE:
So ...
I recreated the account at the organisation level and gave it the service account role there, because you cannot modify the domain of the accounts at project level.
I then modified it at the project level to have editor permissions.
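For anyone digging out of the same hole, the project-level grant amounted to something like this (the project ID is a placeholder; the account is the one named above):
gcloud projects add-iam-policy-binding my-project --member=serviceAccount:386242358897@cloudservices.gserviceaccount.com --role=roles/editor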
This means I can deploy a cluster, but ... it still cannot create a load balancer - insufficient permissions:
Error creating load balancer (will retry): Error getting LB for service default/bot: googleapi: Error 403: Required
'compute.forwardingRules.get' permission for 'projects/xxxx/regions/europe-west2/forwardingRules/xxxx', forbidden
The user having the problem this time is:
service-xxx@container-engine-robot.iam.gserviceaccount.com
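As far as I can tell, the fix there is to give that Google-managed agent its role back; a sketch (the project ID and project number are placeholders, since mine are redacted above, and roles/container.serviceAgent is my assumption for the role it normally holds):
gcloud projects add-iam-policy-binding my-project --member=serviceAccount:service-PROJECT_NUMBER@container-engine-robot.iam.gserviceaccount.com --role=roles/container.serviceAgent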
So ...
I played with recreating accounts etc. Eventually got Kubernetes working again.
A week later I tried to use Datastore and discovered that App Engine was dead beyond dead.
The only recourse was to start a new project from scratch.
The answer to this question is (some may laugh at its self-evidence, but we are all in a rush at some point):
DO NOT CREATE USER ACCOUNTS OR GIVE THEM PERMISSIONS BEYOND WHAT THEY NEED BECAUSE DELETING THEM LATER IS REALLY NOT WORTH THE RISK.
Thank you for listening :D