Redis .conf file problem when running a slave Redis instance

The problem is that my Windows 10 machine does not understand Redis commands. I downloaded the CLI and server .msi files and installed them to D:/Program Files/Redis.
I run this command:
redis-server D:/Program Files/Redis/redis-slave.windows.conf
and expect to get a Redis slave instance with the configuration provided in the course's .conf file, but I get this error:
Invalid argument during startup: Failed to open the .conf file: Files/Redis/redis-slave.windows.conf CWD=D:\Program Files\Redis
The problem is not a wrong configuration, because I get the same error with a copy of the default Redis .conf file.
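The error itself points at the likely cause: the unquoted path is split at the space in "Program Files", so Redis tries to open Files/Redis/redis-slave.windows.conf as the config file. Quoting the whole path should help (a sketch based on that reading of the error):
redis-server "D:/Program Files/Redis/redis-slave.windows.conf"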
Another problem, not as important to me as the one above, but similar: when I try to run a cluster, Windows 10 does not know how to open the file. I run the command:
D:\Program Files\Redis2\redis-7.0.4\utils\create-cluster>./create-cluster start
and get a window asking me to choose a program to open the file (something that did not come up in your macOS demonstration). Anyway, this create-cluster file has no extension, so I do not know what to do to make it run.
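For what it's worth, create-cluster is a bash script (hence the missing extension), so Windows cannot run it natively; one workaround, assuming WSL or another bash environment is installed, is to invoke it through bash:
bash ./create-cluster start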

Related

How to configure multiple Redis instances on Debian

I've got a Debian server running Redis and I'd like to run a second copy using a different port. There are plenty of guides explaining how to do it on Ubuntu and other flavours of Linux but I'm having a hard time translating those to Debian.
So far I've created a copy of /etc/redis/redis.conf and renamed it /etc/redis/redis_6380.conf. In the new file I've changed the name of the PID file, the location of the log file, and the listening port (to 6380) so that they do not conflict with the existing instance of Redis.
The problem I have is knowing which changes to make so that systemd can start the new instance.
I've made a duplicate of /lib/systemd/system/redis-server.service, called it redis-server-6380.service, and changed the ExecStart and PIDFile lines to point to the new files:
ExecStart=/usr/bin/redis-server /etc/redis/redis_6380.conf
PIDFile=/var/run/redis/redis-server_6380.pid
Doing:
systemctl enable redis-server-6380.service
results in:
Failed to enable unit: File /etc/systemd/system/redis.service already exists and is a symlink to /lib/systemd/system/redis-server.service
How do I fix this? I'm guessing that I've missed a vital step, but I'm not that familiar with configuring systemd-supervised processes on Debian.
The end of the redis-server unit file contains:
[Install]
WantedBy=multi-user.target
Alias=redis.service
Either remove that Alias, or make sure it is unique.
Also: make sure your Redis instances each have their own database and log files.
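For example, the duplicated unit could end like this, with a unique alias (redis-6380.service is an illustrative name):
[Install]
WantedBy=multi-user.target
Alias=redis-6380.service
Then reload systemd and retry the enable:
systemctl daemon-reload
systemctl enable redis-server-6380.service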

Redhat with httpd24 connecting to Informix using DBI

I'm at my wits' end on this. I have 2 RHEL 7 boxes on which I just installed httpd24 (v2.4.34). They were running httpd (v2.4.6) without any connection problems. Now when I try to run Perl scripts from the browser, they fail with...
install_driver(Informix) failed: Can't load '/usr/local/lib64/perl5/auto/DBD/Informix/Informix.so' for module DBD::Informix: libifsql.so: cannot open shared object file: No such file or directory at /usr/lib64/perl5/DynaLoader.pm line 190.
at (eval 5) line 3.
Compilation failed in require at (eval 5) line 3.
Perhaps a required shared library or dll isn't installed where expected
at /var/www/html/app/cgi-bin/test_informix_odbc.cgi line 35.
But when I run the same script from the command line, as 'apache', it runs just fine. All the ENV vars are set correctly.
Anyone run into anything similar before?
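One way to confirm which shared libraries fail to resolve is to run ldd against the module path from the error (a diagnostic sketch; run it with the same environment Apache sees):
ldd /usr/local/lib64/perl5/auto/DBD/Informix/Informix.so | grep 'not found'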
Newer versions of httpd have stopped bringing the user's environment in when the service is started, so it would no longer use the LD_LIBRARY_PATH environment variable I was setting in httpd.conf. I found this little blurb in /opt/rh/httpd24/service-environment:
Services are started in a fresh environment without any influence of the user's environment (like environment variable values). As a consequence, information about all enabled collections will be lost during service start-up.
grep -r "LD_LIBRARY_PATH" /opt/rh/httpd24/
/opt/rh/httpd24/enable:export LD_LIBRARY_PATH=/opt/rh/httpd24/root/usr/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
I prepended the standard Informix paths in /opt/rh/httpd24/enable:
export LD_LIBRARY_PATH=/opt/IBM/informix/lib:/opt/IBM/informix/lib/esql:/opt/rh/httpd24/root/usr/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
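After editing the enable file, restart the service so the new environment is picked up; on RHEL 7 the SCL unit is typically named httpd24-httpd (an assumption, so verify the name on your system):
systemctl restart httpd24-httpd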
And everything is back to normal. Woohoo!

Unable to start Apache Zookeeper

I just tried to install standalone Apache ZooKeeper on Ubuntu. I have installed the Java environment and ZooKeeper 3.4.6.
However, when I run jps, the ZooKeeper process does not appear in the output.
The following is the configuration for my .bashrc and zoo.cfg files:
[terminal~] vim .bashrc
[terminal~]vim /usr/local/zookeeper/conf/zoo.cfg
Can anyone please help me? I have wasted two days on ZooKeeper alone, and it is really frustrating.
I have a freshly installed Ubuntu machine, and this is what I did to get ZooKeeper working as a standalone program (I assume you mean that you didn't install it with the package manager).
Download ZooKeeper tar. (I used 3.4.8)
Extract the folder zookeeper-3.4.8 somewhere. (I placed it on my desktop for now)
Copy .../zookeeper-3.4.8/conf/zoo_sample.cfg to .../zookeeper-3.4.8/conf/zoo.cfg
And change the dataDir=... line to whatever you want. (I made a data dir inside the zookeeper-3.4.8 folder)
Now you can run ZooKeeper by executing the script .../zookeeper-3.4.8/bin/zkServer.sh start
foo@bar:~$ /home/foo/Desktop/zookeeper-3.4.8/bin/zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /home/foo/Desktop/zookeeper-3.4.8/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
foo@bar:~$ /home/foo/Desktop/zookeeper-3.4.8/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /home/foo/Desktop/zookeeper-3.4.8/bin/../conf/zoo.cfg
Mode: standalone
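For reference, a minimal standalone zoo.cfg needs only a few lines; these values mirror zoo_sample.cfg, with dataDir pointing at the directory created above:
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/home/foo/Desktop/zookeeper-3.4.8/data
clientPort=2181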

Redis nodes.conf file locked?

I am following this tutorial to create a Redis cluster:
http://redis.io/topics/cluster-tutorial
In this tutorial I need to run several redis-server instances on ports 7000 through 7005. However, after I run the first instance successfully and try to run the second one, the nodes.conf file seems to be locked and I get the following error message:
"Sorry, the cluster configuration file nodes.conf is already used by a different Redis Cluster node. Please make sure that different nodes use different cluster configuration files."
Do I need a separate nodes.conf for every server instance? Or do I need a separate redis-server executable in each instance directory and run it from there?
The tutorial suggests using separate folders for each instance's configuration, so each instance will also generate its nodes.conf in its own folder.
Create a redis.conf file inside each of the directories, from 7000 to 7005.
You need to keep the .conf files in separate folders, one per instance, and the executable must be run from each of those folders.
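For reference, the per-instance redis.conf from the cluster tutorial is minimal; change the port to match each directory:
port 7000
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
appendonly yes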
Assuming you have redis-server in /tmp/redis-cluster/ and a redis.conf in each /tmp/redis-cluster/700x folder:
cd /tmp/redis-cluster/7000
../redis-server ./redis.conf
This way the nodes.conf will be generated on the current folder 7000.
Note that you must first issue cd to change the current directory, and from that folder execute the redis-server binary that sits one folder up (../).
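To start all six nodes, a small shell loop can save typing; this is a sketch assuming the /tmp/redis-cluster layout above, with each cd confined to a subshell:
for port in 7000 7001 7002 7003 7004 7005; do
  (cd /tmp/redis-cluster/$port && ../redis-server ./redis.conf) &
done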

Crashplan on FreeNAS missing /var/lib/crashplan/.ui_info

So I've spent a few weeks on this problem now. I've been trying to get CrashPlan running on a headless FreeNAS server, and I have found lots of tutorials on how to do this. However, the fact is that the .ui_info file is missing on my FreeNAS server after installing CrashPlan.
I have searched the whole file system to try and find the elusive .ui_info file.
I've tried creating it manually with information copied from a desktop PC, but that does not let my CrashPlan Pro app connect to the CrashPlan server service on FreeNAS.
INFO:
FreeNAS 9.3 STABLE
Crashplan 3.6.3_1 Plugin
The CrashPlan remote access behaviour has changed several times during the last few updates; however, with version 3.6.3_1 you should find the .ui_info file at:
/var/lib/crashplan/.ui_info
Although the jail version is 3.6.3, it's possible that CrashPlan has updated itself; please check this with:
tail -f /usr/pbi/crashplan-amd64/share/crashplan/log/service.log.0
In the end you want your CrashPlan to update itself anyway. If the update process produces an error related to bash, please run:
pkg update
pkg install bash
ln -siv /usr/local/bin/bash /bin/bash
Then restart CrashPlan while checking the log output with the tail -f command from above:
service crashplan restart
If you finally reach a recent version (>4.4.1), it's time to connect to CrashPlan remotely.
The only change necessary on the server for the easiest method, without an SSH tunnel, is the serviceHost tag in /usr/pbi/crashplan-amd64/share/crashplan/conf/my.service.xml:
<serviceUIConfig>
<serviceHost>0.0.0.0</serviceHost>
Either do this every time you want to connect, because the token changes after every CrashPlan restart, or use my script from here (for OS X): https://gist.github.com/Phlogi/8654e353786ed1cf0858
Copy /var/lib/crashplan/.ui_info to the correct place on your desktop machine and edit the IP address at the end (to your server's address), for example:
4339,7f1d655f-*****,192.168.1.20
That's it: you can start CrashPlan on your remote machine and it will connect properly; no other changes are necessary. Recent CrashPlan versions (>4.4.1) will actually use the IP address from .ui_info.
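For example, copying the file from the server to an OS X client could look like this sketch (the destination path is an assumption; it varies by platform and CrashPlan version):
scp root@freenas:/var/lib/crashplan/.ui_info "/Library/Application Support/CrashPlan/.ui_info"
Then edit the last field of that file to the server's IP address, as shown above.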
Install the JRE. You will need to add --no-check-certificate to the JRE wget line in the install.sh file.