How to switch back to the old cluster in RabbitMQ in case of system or hostname changes? - rabbitmq

Due to some problems we changed the server name, and after doing this we restarted the server. We found that the RabbitMQ service had stopped; we started it again, but we lost all data related to RabbitMQ, and it looks like a fresh setup was created, with the cluster name changed to the new server name. Now I want to switch back to my old cluster, or at least retrieve the old data. We are using Windows Server 2012. How can I do this?

By default, RabbitMQ stores its data inside a directory named after the hostname.
The default directory on Windows is:
C:\Users\{youruser}\AppData\Roaming\RabbitMQ\db
In my case, for example, they are:
C:\Users\gabriele\AppData\Roaming\RabbitMQ\db\rabbit#windowsdev-mnesia
and
C:\Users\gabriele\AppData\Roaming\RabbitMQ\db\rabbit#windowsdev-plugins-expand
You should now have two directories inside C:\Users\{youruser}\AppData\Roaming\RabbitMQ\db: one for the old hostname and one for the new hostname.
You can (a rough sketch of these steps follows below):
stop RabbitMQ
back up your old-hostname directory
delete the new-hostname directory
rename the old-hostname directory to the new-hostname
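A minimal sketch of these steps, assuming the default db path above and hypothetical old/new node directory names; stop the RabbitMQ Windows service before running anything like this, and keep the backup until you have verified the data:

import os
import shutil

# Assumed paths and node names; adjust to the directories you actually see.
db_dir = r"C:\Users\youruser\AppData\Roaming\RabbitMQ\db"
old_node = "rabbit#old-hostname"   # hypothetical prefix of the directories left by the old hostname
new_node = "rabbit#new-hostname"   # hypothetical prefix of the directories created after the rename

for suffix in ("-mnesia", "-plugins-expand"):
    old_path = os.path.join(db_dir, old_node + suffix)
    new_path = os.path.join(db_dir, new_node + suffix)
    shutil.copytree(old_path, old_path + ".bak")   # back up the old-hostname directory
    shutil.rmtree(new_path)                        # delete the new-hostname directory
    os.rename(old_path, new_path)                  # rename old-hostname to new-hostname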
I think you still have your data on the server.
Let me know.

Related

User/Role List could not be obtained - pentaho

I am installing Pentaho 8.1 CE on Ubuntu 16.04.
I have changed the back-end database from HSQLDB to MySQL; the jackrabbit and hibernate tables have been created.
When starting the server I cannot log in; I imagine the users are missing or were not created.
The error thrown in catalina.out is:
ERROR [CompositeUserRoleListService] User / Role List could not be obtained.
java.lang.IllegalStateException: Target of Bean was never resolved: org.springframework.security.core.userdetails.UserDetailsService
at org.pentaho.platform.engine.core.system.objfac.spring.BeanBuilder$1.invoke(BeanBuilder.java:159)
at com.sun.proxy.$Proxy84.loadUserByUsername(Unknown Source)
..
..
The Jackrabbit database has no tables created.
Any idea?
When changing the back-end database, there are a few things you need to make sure you check:
repository.xml
quartz.properties
hibernate-settings.xml
[your-database].hibernate.cfg.xml
context.xml (in WEB-INF)
You'll need to confirm that the connection settings for the new database have been properly configured in all of these config files. Some additional details on how to configure these files for MySQL specifically can be found in the documentation here: https://help.pentaho.com/Documentation/8.1/Setup/Installation/Archive/MySQL_Repository
Beyond that, make sure you delete the "repository" directory inside /pentaho-solutions/system/jackrabbit, as this is an index of the repository. If you change your database back-end, this index needs to be rebuilt. The index is rebuilt automatically if the server sees that the "repository" directory doesn't exist at startup.
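For the index cleanup specifically, a minimal sketch; the install path below is an assumption for a default archive install, so adjust it, and stop the server before removing the directory:

import os
import shutil

# Assumed install location; adjust to where your pentaho-server is unpacked.
index_dir = "/opt/pentaho/pentaho-server/pentaho-solutions/system/jackrabbit/repository"

if os.path.isdir(index_dir):
    shutil.rmtree(index_dir)   # the server rebuilds this index at the next startup when it is missing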
I've found the same issue on Windows Server 2016 and MySQL 8.0, with Pentaho 8.0.2.
Starting the Pentaho server from the start-pentaho.bat command, everything was fine.
The problem arose when I was running it under Tomcat.
The solution was to make sure the Tomcat service was running under the LOCAL SYSTEM account.
If the Tomcat service runs with lower privileges, it may not be able to access the pentaho-solutions directory and so cannot load the required Java beans.
If everything is configured correctly as per the documentation, you might still be missing the connection change in applicationContext-spring-security-hibernate.properties; please check that.

RavenDB periodic backup bundle + web admin does not persist changes

I'm using the latest stable version (3.0.3660) on a VM on Windows Azure and would like to enable periodic backup. I have tried to enable both local backup and backup to Azure, but the GUI doesn't seem to persist the changes. The modal dialog says "Saving..." but nothing more.
Is there a log for this so that I can troubleshoot what doesn't work?
/Erik
I tried it too, and the database was non-responsive for several minutes (a co-worker was waiting for tens of minutes), but after a while it actually did something. I configured the Azure backup, and it failed because it couldn't upload a blob of that size. The error was logged and can be found in the Studio under status > logs.
Running the server standalone (instead of running as a service) doesn't give any additional feedback either.
I managed to get it working by setting "Raven/AnonymousAccess" to Admin and then saving the changes; not sure why, as I had connected with an API key that should have full access.

Why is WLST not recognizing the user/password in the key and config file in connect() call?

I'm trying to connect to an admin server in WLST using config and key files. There are no error messages, but I am prompted for a username and password. These files were created (by another developer who is long gone[1]) with the storeUserConfig() command. My call to connect looks something like this: connect(userConfigFile=configFile, userKeyFile=keyFile, url='t3://somehost:7031').
Is there some restriction in using these files, such as it can only be used on the host where created, or it needs access to the domain's boot.properties file?
Note: I'm trying to connect to an admin server on a different host and non-standard port (e.g. not 7001). The server I am running WLST on and the remote host are the same version of Weblogic.
Some of the things I have tried:
verified that these files appear correct, the key file being binary data and the config file having a line for "weblogic.management.username={AES}..." and "weblogic.management.password={AES}...".
verified that there is a server listening on the specified port by entering a known login and password, which succeeds
specified the admin server in the connect parameter
turned on debug(true); the only output is <wlst-debug> connect : Will check if userConfig and userKeyFile should be used to connect to the server and another line giving the path to the userConfig file
turned on Python logging in Jython with -Dpython.verbose=debug; nothing relevant to the decryption operation
munged the key and config files; this generates no error messages, and the behaviour is as above
[1]: These files are still used today by other existing WLST scripts. However, these scripts are so convoluted and deliberately obfuscated that they are very difficult to reverse-engineer how connect() is being called.
You do not need access to the domain's boot.properties file. You just need to make sure configFile and keyFile point to the right files. FYI, here is one of the commands we are using:
connect(userConfigFile='./user.secure', userKeyFile='./key.secure', url='t3://somehost:7001')
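If you suspect the files themselves, one option is to regenerate them with storeUserConfig() on the machine and OS user that will run the scripts, and then reuse them. A minimal WLST sketch, with placeholder credentials, paths, and host/port:

connect('weblogic', 'welcome1', 't3://somehost:7031')       # placeholder admin credentials, one-time interactive connect
storeUserConfig('/path/to/my.config', '/path/to/my.key')    # writes the encrypted config and key files
disconnect()

# Later runs can then connect without being prompted:
connect(userConfigFile='/path/to/my.config', userKeyFile='/path/to/my.key', url='t3://somehost:7031')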
Have you checked the network connectivity? There might be a firewall in between that is troubling you; check the traceroute from the script machine to the remote machine. I recently faced a similar issue: once the routing table was updated to allow the WebLogic admin server port, everything worked.
Hope this helps!
I had this problem too. In a script, I exported the Linux environment variables userConfigFile and userKeyFile. Then I connected by running:
url='t3://localhost:7002'
userConfigFile='$userConfigFile'
userKeyFile='$userKeyFile'
connect(userConfigFile=$userConfigFile, userKeyFile=$userKeyFile, url=url)
That all worked in a script, but would not work interactively. I changed to doing the following:
url='t3://localhost:7002'
userConfigFile='/users/me/weblogic-2014/weblogic-admin-WebLogicConfig.properties'
userKeyFile='/users/me/weblogic-2014/weblogic-admin-WebLogicKey.properties'
connect(userConfigFile=userConfigFile, userKeyFile=userKeyFile, url=url)
And that worked interactively.

SQL service could not be started

Please advise me on this issue:
I have one default SQL instance in SQL Server 2005 (SP3, x64) called instanceA.
Then I installed another two instances, called instanceB and instanceC.
After that I restored master.bak from the production server to instanceB. The SQL service for that instance has not been able to start at all since then. If I stop the default instance's service, instanceB can be started. This is because both instances are pointing to the same 'model.mdf' file in the 'MSSQL.1' folder, hence both instances cannot be started simultaneously.
I believe that on the production server the model path is configured to the default folder 'MSSQL.1'. Is there any way to change the path to the 'MSSQL.8' folder that belongs to instanceB, so that both instances A and B can be started together?
Thank you.

Windows Application ignores app.config and uses something to connect to local database

The application sits in a virtual environment, and when I remote in and run the application, it connects to the remote database. However, when I remote in with a service account and double-click the same .exe, it tries to connect to the localhost database and ignores the app.config. The code is the same; only the login name I use is different. The login I use is part of the local admin group. Any ideas?
You haven't indicated whether or not this is the case in your question, but my first suspicion is that you are storing the connection strings in settings, but the connection string has been marked as a user-specific setting.
The logic of the code compared the SQL server setting from the config (Settings), which was entered in lower case, against the list of SQL servers (all in upper case). Since it couldn't find a match, the data source was blank [datasource=;], causing the code to look at the local server. My fix was to use String.Compare and ignore case, which produced the match, and I was able to connect to the remote SQL server.
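The original fix used C#'s String.Compare with case ignored; purely as an illustration of the same idea, here is a rough Python sketch (the setting value and server names below are made up):

# Hypothetical values standing in for the config setting and the discovered server list.
configured_server = "sqlprod01\\reporting"
known_servers = ["SQLPROD01\\REPORTING", "SQLPROD02\\REPORTING"]

# Compare ignoring case so a lower-case setting still matches an upper-case entry.
match = next((s for s in known_servers if s.lower() == configured_server.lower()), None)
data_source = match if match is not None else ""   # an empty data source is what caused the local fallback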