How do I automate Jenkins SSH credential creation and assignment to nodes?

I am writing an automated Jenkins machine creation script, and I have encountered a problem with SSH credentials, namely:
In Jenkins there is a file called credentials.xml (in /var/lib/jenkins) which stores the credentials for the nodes. Mine looks like this:
<?xml version='1.0' encoding='UTF-8'?>
<com.cloudbees.plugins.credentials.SystemCredentialsProvider plugin="credentials@1.18">
  <domainCredentialsMap class="hudson.util.CopyOnWriteMap$Hash">
    <entry>
      <com.cloudbees.plugins.credentials.domains.Domain>
        <specifications/>
      </com.cloudbees.plugins.credentials.domains.Domain>
      <java.util.concurrent.CopyOnWriteArrayList>
        <com.cloudbees.plugins.credentials.impl.UsernamePasswordCredentialsImpl>
          <scope>GLOBAL</scope>
          <id>8743cc14-bc2c-44a6-b6bb-c121bef4ae2d</id>
          <description>root_with_secret</description>
          <username>root</username>
          <password>2Xd4i7+8tjVXg2RHP6ggl/ZtWJp177ajXNajJxsj80o=</password>
        </com.cloudbees.plugins.credentials.impl.UsernamePasswordCredentialsImpl>
      </java.util.concurrent.CopyOnWriteArrayList>
    </entry>
  </domainCredentialsMap>
</com.cloudbees.plugins.credentials.SystemCredentialsProvider>
There is also a node (slave) configuration file for each slave, stored in /var/lib/jenkins/nodes/HOSTNAME/config.xml, which looks like this:
<?xml version='1.0' encoding='UTF-8'?>
<slave>
  <name>HOSTNAME_OF_MY_SECRET_MACHINE</name>
  <description>HOSTNAME_OF_MY_SECRET_MACHINE</description>
  <remoteFS>/root</remoteFS>
  <numExecutors>1</numExecutors>
  <mode>NORMAL</mode>
  <retentionStrategy class="hudson.slaves.RetentionStrategy$Always"/>
  <launcher class="hudson.plugins.sshslaves.SSHLauncher" plugin="ssh-slaves@1.9">
    <host>10.0.10.1</host>
    <port>22</port>
    <credentialsId>8743cc14-bc2c-44a6-b6bb-c121bef4ae2d</credentialsId>
    <maxNumRetries>0</maxNumRetries>
    <retryWaitTime>0</retryWaitTime>
  </launcher>
  <label></label>
  <nodeProperties/>
  <userId>anonymous</userId>
</slave>
The problem is that after I create the Jenkins machine and copy over credentials.xml and the config.xml for each slave, the credentials don't work. I get:
[07/26/15 16:00:39] [SSH] Opening SSH connection to 10.0.10.1:22.
ERROR: Failed to authenticate as root. Wrong password. (credentialId:8743cc14-bc2c-44a6-b6bb-c121bef4ae2d/method:password)
[07/26/15 16:00:41] [SSH] Authentication failed.
hudson.AbortException: Authentication failed.
at hudson.plugins.sshslaves.SSHLauncher.openConnection(SSHLauncher.java:1178)
at hudson.plugins.sshslaves.SSHLauncher$2.call(SSHLauncher.java:701)
at hudson.plugins.sshslaves.SSHLauncher$2.call(SSHLauncher.java:696)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
[07/26/15 16:00:41] Launch failed - cleaning up connection
[07/26/15 16:00:41] [SSH] Connection closed.
To solve this issue I can go to Jenkins -> Credentials and update the credential with the same password I would have used anyway, and then it works.
So the question is: does Jenkins use some kind of per-installation salting/hashing, so that credentials.xml will not work if copied to a new machine?

OK, I have managed to solve this with what I believe is a workaround-ish solution:
Store the password in plain text in credentials.xml and copy it over to the Jenkins machine after installing and starting the service. Jenkins will then encrypt it with its new secret (or whatever it uses for that purpose) and it will work :)
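For illustration, here is a minimal sketch of that first option as shell commands (the sed pattern, paths, and service command are assumptions for a default /var/lib/jenkins install; adapt them to your setup):
# put the clear-text password into the copied credentials.xml on the new master;
# Jenkins is expected to re-encrypt it with its own secret once it rewrites the file
sudo sed -i 's|<password>.*</password>|<password>MY_CLEAR_TEXT_PASSWORD</password>|' /var/lib/jenkins/credentials.xml
sudo chown jenkins:jenkins /var/lib/jenkins/credentials.xml
sudo service jenkins restart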
EDIT
A second option is to install Jenkins, start it, and then copy credentials.xml (with the encrypted passwords) together with the secrets directory and secret.key from the previous installation. This copies both the encryption master key and the encrypted credentials that were created using that master key.
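A rough sketch of that second option (hostnames and paths are placeholders, and exact file names can vary by Jenkins version):
# on the new machine, with Jenkins stopped, pull the encrypted credentials
# together with the keys that were used to encrypt them
sudo service jenkins stop
sudo scp old-jenkins:/var/lib/jenkins/credentials.xml /var/lib/jenkins/
sudo scp -r old-jenkins:/var/lib/jenkins/secrets /var/lib/jenkins/
sudo scp old-jenkins:/var/lib/jenkins/secret.key /var/lib/jenkins/
sudo chown -R jenkins:jenkins /var/lib/jenkins/credentials.xml /var/lib/jenkins/secrets /var/lib/jenkins/secret.key
sudo service jenkins start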

Related

mbsync authentication failed

I was able to configure mbsync and mu4e in order to use my gmail account (so far everything works fine). I am now in the process of using mu4e-context to control multiple accounts.
I cannot retrieve emails from my openmailbox account; instead I receive this error:
Reading configuration file .mbsyncrc
Channel ombx
Opening master ombx-remote...
Resolving imap.ombx.io... ok
Connecting to imap.ombx.io (*.*.10*.16*:*9*)...
Opening slave ombx-local...
Connection is now encrypted
Logging in...
IMAP command 'LOGIN <user> <pass>' returned an error: NO [AUTHENTICATIONFAILED] Authentication failed.
In other posts I've seen people suggesting AuthMechs LOGIN or PLAIN, but mbsync doesn't recognize the command. Here is my .mbsyncrc file:
IMAPAccount openmailbox
Host imap.ombx.io
User user@openmailbox.org
UseIMAPS yes
# AuthMechs LOGIN
RequireSSL yes
PassCmd "echo ${PASSWORD:-$(gpg2 --no-tty -qd ~/.authinfo.gpg | sed -n 's,^machine imap.ombx.io .*password \\([^ ]*\\).*,\\1,p')}"
IMAPStore ombx-remote
Account openmailbox
MaildirStore ombx-local
Path ~/Mail/user@openmailbox.org/
Inbox ~/Mail/user@openmailbox.org/Inbox/
Channel ombx
Master :ombx-remote:
Slave :ombx-local:
# Exclude everything under the internal [Gmail] folder, except the interesting folders
Patterns *
Create Slave
Expunge Both
Sync All
SyncState *
I am using Linux Mint and my isync is version 1.1.2
Thanks in advance for any help
EDIT: I have run with the debug option and upgraded isync to version 1.2.1.
This is what the debug returned:
Reading configuration file .mbsyncrc
Channel ombx
Opening master store ombx-remote...
Resolving imap.ombx.io... ok
Connecting to imap.ombx.io (*.*.10*.16*:*9*)...
Opening slave store ombx-local...
pattern '*' (effective '*'): Path, no INBOX
got mailbox list from slave:
Connection is now encrypted
* OK [CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE IDLE AUTH=PLAIN AUTH=LOGIN] Openmailbox is ready to
handle your requests.
Logging in...
Authenticating with SASL mechanism PLAIN...
>>> 1 AUTHENTICATE PLAIN <authdata>
1 NO [AUTHENTICATIONFAILED] Authentication failed.
IMAP command 'AUTHENTICATE PLAIN <authdata>' returned an error: NO [AUTHENTICATIONFAILED] Authentication failed.
My .mbsyncrc file now contains these options instead:
SSLType IMAPS
SSLVersions TLSv1.2
AuthMechs PLAIN
In the end, the solution was to use the correct password. Since openmailbox uses an application password for third-party e-mail clients, I was using the wrong (original) password instead of the application password.
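If you hit the same thing, a quick check (a sketch, assuming the .mbsyncrc above) is to run the PassCmd by hand and confirm it prints the application password rather than the primary account password, then do a verbose sync:
# should print the openmailbox *application* password
# (note: the backslashes are unescaped here because this runs directly in the shell)
gpg2 --no-tty -qd ~/.authinfo.gpg | sed -n 's,^machine imap.ombx.io .*password \([^ ]*\).*,\1,p'
# verbose run of the channel defined above
mbsync -V ombx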

Unable to start node on WebLogic dynamic cluster

I am trying to set up a WLS dynamic cluster on two machines.
Two nodes are up and running on the machine where the admin server is hosted, but when I try to start the node on the other machine (which I added afterwards), I see the exception below.
<Jun 7, 2016 2:13:07 AM PDT> <Critical> <Security> <BEA-090518> <Could not decrypt the username attribute value of {AES}Q64tW2ys+PviYQPkPGPc8/c79/RwfgrsoekwDFpgZKI= from the file /usr/home/devtools/Middleware/user_projects/domains/v12C_d/servers/Cluster-0-abc-4/data/nodemanager/boot.properties. If an encrypted attribute was copied from boot.properties from another domain into /usr/home/devtools/Middleware/user_projects/domains/v12C_d/servers/Cluster-0-abc-4/data/nodemanager/boot.properties, change the encrypted attribute to its clear text value, and then restart the server. The attribute will be encrypted again. Otherwise, change all encrypted attributes to their clear text values, then restart the server. All encryptable attributes will be encrypted again. The decryption failed with the exception weblogic.security.internal.encryption.EncryptionServiceException: com.rsa.jsafe.JSAFE_PaddingException: Invalid padding..>
<Jun 7, 2016 2:13:07 AM PDT> <Critical> <Security> <BEA-090518> <Could not decrypt the password attribute value of {AES}qusooByFxC/eTogSMU2YEjfnWRpY69f6MfTeqhqfIFk= from the file /usr/home/devtools/Middleware/user_projects/domains/v12C_d/servers/Cluster-0-abc-4/data/nodemanager/boot.properties. If an encrypted attribute was copied from boot.properties from another domain into /usr/home/devtools/Middleware/user_projects/domains/v12C_d/servers/Cluster-0-abc-4/data/nodemanager/boot.properties, change the encrypted attribute to its clear text value, and then restart the server. The attribute will be encrypted again. Otherwise, change all encrypted attributes to their clear text values, then restart the server. All encryptable attributes will be encrypted again. The decryption failed with the exception weblogic.security.internal.encryption.EncryptionServiceException: com.rsa.jsafe.JSAFE_PaddingException: Invalid padding..>
Enter username to boot WebLogic server:<Jun 7, 2016 2:13:09 AM PDT> <Info> <Management> <BEA-141307> <Unable to connect to the Administration Server. Waiting 5 second(s) to retry (attempt number 1 of 3).>
<Jun 7, 2016 2:13:14 AM PDT> <Info> <Management> <BEA-141307> <Unable to connect to the Administration Server. Waiting 5 second(s) to retry (attempt number 2 of 3).>
Searching the internet, I saw solutions such as putting the clear-text userid/password of the WebLogic admin in the boot.properties file; upon restart the userid/password should get encrypted and the issue should be fixed.
Well, I have tried that and it didn't fix my issue.
Please note that I am using a dynamic cluster, which means configurations are usually copied across nodes based on server templates.
Will really appreciate any input on this.
Suggestion:
1) Scale down your cluster to a single instance where the boot.properties file is known to work.
2) Change it to clear text (see the sketch below).
3) Bounce WebLogic so it gets the file encrypted once again.
4) Make sure it works.
5) Scale up your cluster again and see if the error persists.
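As an illustration of steps 2 and 3, resetting the file to clear text on the failing machine could look like this (the username/password are placeholders and the admin account name is an assumption; WebLogic re-encrypts the values on the next start):
# path taken from the error message above
cat > /usr/home/devtools/Middleware/user_projects/domains/v12C_d/servers/Cluster-0-abc-4/data/nodemanager/boot.properties <<'EOF'
username=weblogic
password=YOUR_ADMIN_PASSWORD
EOF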

Liferay 6.2 clustering issue with multicast

I am trying to cluster ehcache and lucene with a Liferay 6.2 EE sp2 bundle on 2 servers with multicast enabled. We have Apache HTTPD servers fronting the tomcat servers using a reverse proxy. A valid 6.2 license is deployed on both nodes.
We use the following properties in portal-ext.properties:
cluster.link.enabled=true
lucene.replicate.write=true
ehcache.cluster.link.replication.enabled=true
# Since we are using SSL on the frontend
web.server.protocol=https
# set this to any server that is visible to both the nodes
cluster.link.autodetect.address=dbserverip:dbport
#ports and ips we know work in our environment for multicast
multicast.group.address["cluster-link-control"]=ip
multicast.group.port["cluster-link-control"]=port1
multicast.group.address["cluster-link-udp"]=ip
multicast.group.port["cluster-link-udp"]=port2
multicast.group.address["cluster-link-mping"]=ip
multicast.group.port["cluster-link-mping"]=port3
multicast.group.address["hibernate"]=ip
multicast.group.port["hibernate"]=port4
multicast.group.address["multi-vm"]=ip
multicast.group.port["multi-vm"]=port5
We are running into issues with the ehcache and lucene clustering not working. The following test fails: moving a portlet on node 1 does not show up on node 2.
There are no errors except for a startup error with lucene:
14:19:35,771 ERROR [CLUSTER_EXECUTOR_CALLBACK_THREAD_POOL-1][LuceneHelperImpl:1186] Unable to load index for company 10157
com.liferay.portal.kernel.exception.SystemException: java.net.ConnectException: Connection refused
    at com.liferay.portal.search.lucene.LuceneHelperImpl.getLoadIndexesInputStreamFromCluster(LuceneHelperImpl.java:488)
    at com.liferay.portal.search.lucene.LuceneHelperImpl$LoadIndexClusterResponseCallback.callback(LuceneHelperImpl.java:1176)
    at com.liferay.portal.cluster.ClusterExecutorImpl$ClusterResponseCallbackJob.run(ClusterExecutorImpl.java:614)
    at com.liferay.portal.kernel.concurrent.ThreadPoolExecutor$WorkerTask._runTask(ThreadPoolExecutor.java:682)
    at com.liferay.portal.kernel.concurrent.ThreadPoolExecutor$WorkerTask.run(ThreadPoolExecutor.java:593)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:579)
    at sun.security.ssl.SSLSocketImpl.connect(SSLSocketImpl.java:625)
    at sun.security.ssl.BaseSSLSocketImpl.connect(BaseSSLSocketImpl.java:160)
    at sun.net.NetworkClient.doConnect(NetworkClient.java:180)
    at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
    at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
    at sun.net.www.protocol.https.HttpsClient.<init>(HttpsClient.java:275)
    at sun.net.www.protocol.https.HttpsClient.New(HttpsClient.java:371)
We verified that the jgroups multicast works outside of Liferay by running the following commands, using a downloaded copy of jgroups.jar and substituting the 5 multicast IPs and ports.
Testing with JGROUPS
1) McastReceiver -
java -cp ./jgroups.jar org.jgroups.tests.McastReceiverTest -mcast_addr 224.10.10.10 -port 5555
ex. java -cp jgroups-final.jar org.jgroups.tests.McastReceiverTest -mcast_addr 224.10.10.10 -port 5555
2) McastSender -
java -cp ./jgroups.jar org.jgroups.tests.McastSenderTest -mcast_addr 224.10.10.10 -port 5555
ex. java -cp jgroups-final.jar org.jgroups.tests.McastSenderTest -mcast_addr 224.10.10.10 -port 5555
From there, typing things into the McastSender will result in the Receiver printing it out.
Thanks!
After a lot of troubleshooting and help from various folks in my team and at liferay support, we switched to using unicast and it worked a lot better.
Here is what we did:
Extracted jgroups.jar from tomcat home/webapps/ROOT/WEB-INF/lib and saved it locally (see the sketch after these steps).
Unzipped the jgroups.jar file and extracted and saved the tcp.xml it contains.
As a baseline test, changed the TCPPING section in tcp.xml and saved:
<TCPPING timeout="3000"
         initial_hosts="${jgroups.tcpping.initial_hosts:servername1[7800],servername2[7800]}"
         port_range="1"
         num_initial_members="10"/>
Copy the tcp.xml to the Liferay home on both nodes.
Change portal-ext.properties to remove the multicast properties and add the following lines:
cluster.link.channel.properties.control=${liferay.home}/tcp.xml
cluster.link.channel.properties.transport.0=${liferay.home}/tcp.xml
Start node 1.
Start node 2.
Check logs.
Do the cluster cache tests:
Moving a portlet on node 1 shows up on node 2.
Under Control Panel -> License Manager, both nodes show up with valid licenses.
Searching for a user on node 2 after adding them on node 1 under Control Panel -> Users and Organizations works.
All of the above tests worked.
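For reference, the extraction and copy steps above can look roughly like this (paths are placeholders; the exact location of tcp.xml inside jgroups.jar can vary by JGroups version):
# pull the bundled jgroups.jar out of the Liferay web app and grab its tcp.xml
cp $TOMCAT_HOME/webapps/ROOT/WEB-INF/lib/jgroups.jar /tmp/
cd /tmp && unzip -o jgroups.jar tcp.xml
# after editing the TCPPING section, push it to the Liferay home on both nodes
cp /tmp/tcp.xml $LIFERAY_HOME/
scp /tmp/tcp.xml node2:$LIFERAY_HOME/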
So we shut down the servers and changed tcp.xml to use JDBC_PING rather than TCPPING, so we don't have to specify node names manually.
Steps for the JDBC config:
Create the table in the Liferay database manually:
CREATE TABLE JGROUPSPING (own_addr varchar(200) not null, cluster_name varchar(200) not null, ping_data blob default null, primary key (own_addr, cluster_name))
Change tcp.xml to remove the TCPPING section and add the following:
<JDBC_PING datasource_jndi_name="java:comp/env/jdbc/LiferayPool"
           initialize_sql="" />
Save and push the file manually to both nodes.
Start the servers and repeat tests above.
It should work seamlessly.
It was invaluable to have debug logging turned on for jgroups, as described in the following post:
https://bitsofinfo.wordpress.com/2014/05/21/clustering-liferay-globally-across-data-centers-gslb-with-jgroups-and-relay2/
Below is the tomcat home/webapps/ROOT/WEB-INF/classes/META-INF/portal-log4j-ext.xml file I used to triage various clustering issues on boot-up:
<?xml version="1.0"?>
<!DOCTYPE log4j:configuration SYSTEM "log4j.dtd">
<log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/">
<category name="com.liferay.portal.cluster">
<priority value="TRACE" />
</category>
<category name="com.liferay.portal.license">
<priority value="TRACE" />
</category>
We also found that the Lucene cluster replication startup errors were fixed in a fix pack and are getting a patch for it.
https://issues.liferay.com/browse/LPS-51714
https://issues.liferay.com/browse/LPS-51428
We added the following portal instance properties for lucene replication to work better between the 2 nodes:
portal.instance.http.port=port that the app servers listen on ex. 8080
portal.instance.protocol=http
Hope this helps someone.
Update
The lucene index load in a cluster issue was resolved by a Liferay 6.2 EE patch from support for the LPS issues mentioned above.

Spring AMQP + RabbitMQ 3.3.5 ACCESS_REFUSED - Login was refused using authentication mechanism PLAIN

I am getting the exception below:
org.springframework.amqp.AmqpAuthenticationException: com.rabbitmq.client.AuthenticationFailureException: ACCESS_REFUSED - Login was refused using authentication mechanism PLAIN. For details see the broker logfile.
Configuration: RabbitMQ 3.3.5 on Windows.
In the config file at %APPDATA%\RabbitMQ\rabbit.config
I have made the change below, as per https://www.rabbitmq.com/access-control.html:
[{rabbit, [{loopback_users, []}]}].
I also tried creating a user/password (test/test); that doesn't seem to make it work either.
I tried the steps from this post.
Other configuration details are below.
Tomcat-hosted Spring application context:
<!-- Rabbit MQ configuration Start -->
<!-- Connection Factory -->
<rabbit:connection-factory id="rabbitConnFactory" virtual-host="/" username="guest" password="guest" port="5672"/>
<!-- Spring AMQP Template -->
<rabbit:template id="rabbitTemplate" connection-factory="rabbitConnFactory" routing-key="ecl.down.queue" queue="ecl.down.queue" />
<!-- Spring AMQP Admin -->
<rabbit:admin id="admin" connection-factory="rabbitConnFactory"/>
<rabbit:queue id="ecl.down.queue" name="ecl.down.queue" />
<rabbit:direct-exchange name="ecl.down.exchange">
<rabbit:bindings>
<rabbit:binding key="ecl.down.key" queue="ecl.down.queue"/>
</rabbit:bindings>
</rabbit:direct-exchange>
In my controller class:
@Autowired
RmqMessageSender rmqMessageSender;
//Inside a method
rmqMessageSender.submitToECLDown(orderInSession.getOrderNo());
In my message sender:
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.amqp.core.AmqpTemplate;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

@Component("messageSender")
public class RmqMessageSender {

    // logger declaration added for completeness (assuming SLF4J)
    private static final Logger LOGGER = LoggerFactory.getLogger(RmqMessageSender.class);

    @Autowired
    AmqpTemplate rabbitTemplate;

    public void submitToRMQ(String orderId) {
        try {
            rabbitTemplate.convertAndSend("Hello World");
        } catch (Exception e) {
            LOGGER.error(e.getMessage());
        }
    }
}
The catch block above logs the exception below:
org.springframework.amqp.AmqpAuthenticationException: com.rabbitmq.client.AuthenticationFailureException: ACCESS_REFUSED - Login was refused using authentication mechanism PLAIN. For details see the broker logfile.
Error Log
=ERROR REPORT==== 7-Nov-2014::18:04:37 ===
closing AMQP connection <0.489.0> (10.1.XX.2XX:52298 -> 10.1.XX.2XX:5672):
{handshake_error,starting,0,
{amqp_error,access_refused,
"PLAIN login refused: user 'guest' can only connect via localhost",
'connection.start_ok'}}
Please find the pom.xml entries below:
<dependency>
<groupId>org.springframework.amqp</groupId>
<artifactId>spring-rabbit</artifactId>
<version>1.3.6.RELEASE</version>
</dependency>
<dependency>
<groupId>org.springframework.integration</groupId>
<artifactId>spring-integration-amqp</artifactId>
<version>4.0.4.RELEASE</version>
</dependency>
Please let me know if you have any thoughts/suggestions
I am sure that what Artem Bilan has explained here might be one of the reasons for this error:
Caused by: com.rabbitmq.client.AuthenticationFailureException:
ACCESS_REFUSED - Login was refused using authentication mechanism PLAIN.
For details see the broker logfile.
But the solution for me was this: I logged in to the RabbitMQ admin page (http://localhost:15672/#/users) with the default username and password (guest/guest), added a new user, enabled that user's permission to access the virtual host, and then used the new username and password instead of the default guest. That cleared the error.
To complete @cpu-100's answer:
in case you don't want to enable/use the web interface, you can create new credentials on the command line as below and use them in your code to connect to RabbitMQ.
$ rabbitmqctl add_user YOUR_USERNAME YOUR_PASSWORD
$ rabbitmqctl set_user_tags YOUR_USERNAME administrator
$ rabbitmqctl set_permissions -p / YOUR_USERNAME ".*" ".*" ".*"
user 'guest' can only connect via localhost
That's true since RabbitMQ 3.3.x. Hence you should upgrade the client library to the same version, or just upgrade Spring AMQP to the latest version (if you use a dependency management system).
Previous versions of the client used 127.0.0.1 as the default value for the host option of ConnectionFactory.
The error
ACCESS_REFUSED - Login was refused using authentication mechanism PLAIN. For details see the broker logfile.
can occur if the credentials that your application is trying to use to connect to RabbitMQ are incorrect or missing.
I had this happen when the RabbitMQ credentials stored in my ASP.NET application's web.config file had a value of "" for the password instead of the actual password string value.
To allow guest access remotely, write this
[{rabbit, [{loopback_users, []}]}].
to here
c:\Users\[your user name]\AppData\Roaming\RabbitMQ\rabbitmq.config
then restart the RabbitMQ Windows service (source: https://www.rabbitmq.com/access-control.html).
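If you prefer the command line over the Services console, restarting the service can look like this (assuming the default Windows service name RabbitMQ; run from an elevated prompt):
net stop RabbitMQ
net start RabbitMQ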
On localhost, by default, use 'amqp://guest:guest@localhost:5672'.
For a remote or hosted RabbitMQ, let's say you have the following credentials:
username: niceboy
password: notnice
host: goxha.com
port: 1597
Then the URI you should pass will be:
amqp://niceboy:notnice@goxha.com:1597
following the template amqp://user:pass@host:10000.
If you have a vhost, you can do amqp://user:pass@host:10000/vhost, where the trailing vhost is the name of your vhost.
New solution:
The node module can't handle : in a password properly. Even URL-encoded, which would normally work, it does not.
Don't use characters that are special in a URL in the password, such as any of the following: : . ? + %
Original, wrong answer:
The error message clearly complains about using PLAIN; it does not mean the credentials are wrong, it means you must use encrypted data delivery (TLS) instead of plaintext.
Changing amqp:// in the connection string to amqps:// (note the s) solves this.
Just add the login and password to connect to RabbitMQ:
CachingConnectionFactory connectionFactory =
new CachingConnectionFactory("rabbit_host");
connectionFactory.setUsername("login");
connectionFactory.setPassword("password");
For me the solution was simple: the user name is case sensitive. Failing to use the correct caps will also lead to the error.
If you use a number as your password, maybe you should try changing your password to a string. I could log in with deltaqin:000000 on the website, but had this error while running the program; I then changed the password to deltaiqn, and it works.
I did exactly what @grepit did, but I had to make some changes in my Java code.
In the Producer and Receiver projects I altered:
ConnectionFactory factory = new ConnectionFactory();
factory.setHost("your-host-ip");
factory.setUsername("username-you-created");
factory.setPassword("username-password");
Doing that, you are connecting to a specific host as the user you have created.
It works for me!
In my case I had this error because of a wrongly set port (I tried to use 5672, when the actual one on my system was 5676).
Maybe this will help someone to double-check ports...
I was facing this issue due to a trailing space at the end of the password (spring.rabbitmq.password=rabbit ) in the Spring Boot application.properties; it was resolved by removing the space. Hope this checklist helps someone facing this issue.
For C# coders: I tried the code below and it worked, so maybe it can help someone.
Scenario: the RabbitMQ queue is running on another system in the local area network, but I was getting the same error.
By default a "guest" user exists, but you cannot access a remote server's queue (RabbitMQ) using the "guest" user, so you need to create a new user. Here I created a "tester001" user to access the remote server's queue.
ConnectionFactory factory = new ConnectionFactory();
factory.UserName = "tester001";
factory.Password = "testing";
factory.VirtualHost = "/";
factory.HostName = "192.168.1.101";
factory.Port = AmqpTcpEndpoint.UseDefaultPort;
If you tried all of these answers for your issue but still get "ACCESS_REFUSED - Login was refused using authentication mechanism PLAIN", maybe you should remove RabbitMQ and install a newer version.
A newer version worked for me.
Add a user and password and connect with them. You can add a user via environment variables (useful, for example, when RabbitMQ initializes inside Docker): RABBITMQ_DEFAULT_USER and RABBITMQ_DEFAULT_PASS. See more details here:
https://stackoverflow.com/a/70676040/1200914
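For example, a minimal sketch of starting the broker in Docker with a non-guest account baked in (image tag, ports, and credentials here are assumptions):
docker run -d --name rabbitmq \
  -p 5672:5672 -p 15672:15672 \
  -e RABBITMQ_DEFAULT_USER=myuser \
  -e RABBITMQ_DEFAULT_PASS=mypassword \
  rabbitmq:3-management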
Set the ConnectionFactory or Connection hostname to localhost.

App.config connection string Protection error

I am running into an issue I had before; I can't find my reference on how to solve it.
Here is the issue: we encrypt the connection strings section in the app.config for our client application using the code below:
Dim config As Configuration = ConfigurationManager.OpenExeConfiguration(ConfigurationUserLevel.None)
If config.ConnectionStrings.SectionInformation.IsProtected = False Then
    config.ConnectionStrings.SectionInformation.ProtectSection(Nothing)
    ' We must save the changes to the configuration file.
    config.Save(ConfigurationSaveMode.Modified, True)
End If
The issue is we had a salesperson leave. The old laptop is going to a new salesperson, and under the new user's login, when it tries to do this we get an error. The error is:
Unhandled Exception: System.Configuration.ConfigurationErrorsException:
An error occurred executing the configuration section handler for connectionStrings. ---> System.Configuration.ConfigurationErrorsException: Failed to encrypt the section 'connectionStrings' using provider 'RsaProtectedConfigurationProvider'.
Error message from the provider: Object already exists.
---> System.Security.Cryptography.CryptographicException: Object already exists
http://blogs.msdn.com/mosharaf/archive/2005/11/17/protectedConfiguration.aspx#1657603
Copied and pasted from a comment by Naica (Monday, February 12, 2007) on "Encrypting configuration files using protected configuration":
Here is a list of all steps I've done to encrypt two sections on my PC and then deploy it to the WebServer. Maybe it will help someone...:
To create a machine-level RSA key container
aspnet_regiis -pc "DataProtectionConfigurationProviderKeys" -exp
Add this to web.config before connectionStrings section:
<configProtectedData>
  <providers>
    <clear />
    <add name="DataProtectionConfigurationProvider"
         type="System.Configuration.RsaProtectedConfigurationProvider, System.Configuration, Version=2.0.0.0,
               Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a, processorArchitecture=MSIL"
         keyContainerName="DataProtectionConfigurationProviderKeys"
         useMachineContainer="true" />
  </providers>
</configProtectedData>
Do not miss the <clear /> above! It is important when encrypting/decrypting many times.
Check that this is at the top of the Web.config file; if it is missing, add it:
<configuration xmlns="http://schemas.microsoft.com/.NetConfiguration/v2.0">
Save and close Web.Config file in VS (very important!)
In Command Prompt (my local PC) window go to:
C:\WINNT\Microsoft.NET\Framework\v2.0.50727
Encrypt: (Be aware to change the physical path for your app, or use the -app option and give the name of the virtual directory for the app! Because I used VS on my PC I preferred the option below. The path is the path to the folder containing the Web.config file.)
aspnet_regiis -pef "connectionStrings" "c:\Bla\Bla\Bla" -prov "DataProtectionConfigurationProvider"
aspnet_regiis -pef "system.web/membership" "c:\Bla\Bla\Bla" -prov "DataProtectionConfigurationProvider"
To Decrypt (if needed only!):
aspnet_regiis -pdf "connectionStrings" "c:\Bla\Bla\Bla"
aspnet_regiis -pdf "system.web/membership" "c:\Bla\Bla\Bla"
Delete Keys Container (if needed only!)
aspnet_regiis -pz "DataProtectionConfigurationProviderKeys"
Save the above key to an XML file in order to export it from your local PC to the web server (UAT or Production):
aspnet_regiis -px "DataProtectionConfigurationProviderKeys" \temp\mykeyfile.xml -pri
Import the key container on WebServer servers:
aspnet_regiis -pi "DataProtectionConfigurationProviderKeys" \temp\mykeyfile.xml
Grant access to the key on the web server
aspnet_regiis -pa "DataProtectionConfigurationProviderKeys" "DOMAIN\User"
See the ASP.NET user in IIS, or use:
Response.Write(System.Security.Principal.WindowsIdentity.GetCurrent().Name)
Remove Grant access to the key on the web server (Only if required!)
aspnet_regiis -pr "DataProtectionConfigurationProviderKeys" "Domain\User"
Copy and paste the encrypted Web.config file to the web server.
I found a more elegant solution than in my original answer to myself. I found that if I just logged in as the user who originally installed the application (and caused the config file connection strings to be encrypted), went to the .NET Framework directory in a command prompt, and ran
aspnet_regiis -pa "NetFrameworkConfigurationKey" "{domain}\{user}"
it gave the other user permission to access the RSA encryption key container, and it then works for the other user(s).
Just wanted to add it here: I thought I had blogged this issue on our dev blog but found it here, so in case I need to look it up again it will be here. I will add a link from our dev blog pointing at this thread as well.
So I did get it working:
removed the old user's account from the laptop
reset app.config so the section is not protected
removed the key file from the all-users machine keys
ran the app and allowed it to protect the section
But all this did was get it working for this user.
NOW I need to know what I have to do to change the code that protects the section so that multiple users on a PC can use the application. Virtual PC here I come (well, after a vacation to WDW tomorrow through next Wednesday)!
Any advice pointing me in the right direction would help, as I am not very experienced in this RSA encryption type stuff.
Sounds like a permissions issue. Does the (new) user in question have write permissions to the app.config file? Was the previous user a local admin or power user who could have masked this problem?
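If the goal is for every user of that PC to be able to decrypt the section, one option is to grant the RSA key container to a local group rather than a single account; a sketch, reusing the container name from the earlier answer (the group name is an assumption, verify it for your machines):
aspnet_regiis -pa "NetFrameworkConfigurationKey" "BUILTIN\Users"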