Minishift Error: Error getting migrated host: unexpected end of JSON input - minishift

I am unable to start, stop or delete minishift on Windows. I have been able to start it and use it successfully before, but I did a minishift delete without stopping the minishift cluster first with minishift stop. Now I can't get minishift to start because I keep getting this error:
PS C:\Users\user01> minishift version
minishift v1.20.0+53c500a
PS C:\Users\user01> minishift start
-- Starting profile 'minishift'
Error getting migrated host: unexpected end of JSON input
Most commands give me this error now:
Error getting migrated host: unexpected end of JSON input
> minishift docker-env
Error getting migrated host: unexpected end of JSON input

I believe that some configuration files got corrupted, even though minishift delete should be safe to use without stopping the cluster first. What does minishift status return?
Anyway, first you can try minishift delete --force, which deletes all VM-specific files in the minishift home directory (~/.minishift). If that does not help, continue with the next steps.
Second, the way to restore minishift and start fresh is to delete the directory with the configuration files. It is usually located at ~/.minishift, unless you have set the MINISHIFT_HOME environment variable, which changes this location. I also usually delete the ~/.kube folder. Then start minishift again and everything should be created from scratch.
Finally, a new version of minishift (1.21) has been released; you may try that as well.
Note that if you have used any persistent configuration, you will lose it by removing the minishift home folder, so back up anything you need first.
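Concretely, on Windows (PowerShell) that reset looks roughly like the following sketch, assuming MINISHIFT_HOME has not been changed from its default:
minishift delete --force                        # remove the VM and its state
Remove-Item -Recurse -Force "$HOME\.minishift"  # wipe the minishift home folder
Remove-Item -Recurse -Force "$HOME\.kube"       # optionally drop stale kube config too
minishift start                                 # everything is recreated from scratch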

Related

Redis, almost no commands work properly, get Error Unknown command for KNOWN COMMANDS

I'm trying to make a simple backup of our Redis database on Heroku for a Ruby on Rails app. I connect using redis-cli just fine, and help save says it is a command, but when I try to run save it gives me an error that says:
ec2-34-231-26-8.compute-1.amazonaws.com:19099> save
(error) ERR unknown command `save`, with args beginning with:
ec2-34-231-26-8.compute-1.amazonaws.com:19099> bgsave
(error) ERR unknown command `bgsave`, with args beginning with:
ec2-34-231-26-8.compute-1.amazonaws.com:19099>
Yet if I ask for help on these commands they do in fact exist:
ec2-34-231-26-8.compute-1.amazonaws.com:19099> help save
SAVE -
summary: Synchronously save the dataset to disk
since: 1.0.0
group: server
ec2-34-231-26-8.compute-1.amazonaws.com:19099> help bgsave
BGSAVE -
summary: Asynchronously save the dataset to disk
since: 1.0.0
group: server
ec2-34-231-26-8.compute-1.amazonaws.com:19099>
Does anyone know why Redis doesn't work properly?
It is the Heroku Hobby Dev plan, and I am connecting from Windows, which gives me no trouble at all other than these Redis commands not working. On another instance I get an even stranger error saying that SAVE is not allowed.
I have searched for hours and there appears to be nothing on this subject, which is very confusing to me. A lot of the commands listed on redis.io return that same error saying the command doesn't exist, even though the help clearly states that it does.
Any help would be appreciated.
I hope this may help someone. In my case the cause was a rename in the configuration, so check your redis.conf file. I had:
rename-command SAVE "SV"
Now type your renamed command and that's it:
127.0.0.1:6380> SV
OK
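For reference, the relevant directive in redis.conf looks like this; the "SV" alias is just this example, and renaming a command to an empty string disables it entirely, which is how managed providers often lock down admin commands:
rename-command SAVE "SV"    # SAVE is now only reachable as SV
rename-command BGSAVE ""    # an empty name disables BGSAVE completely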

Setting up S3 logging in Airflow

This is driving me nuts.
I'm setting up airflow in a cloud environment. I have one server running the scheduler and the webserver and one server as a celery worker, and I'm using airflow 1.8.0.
Running jobs works fine. What refuses to work is logging.
I've set up the correct path in airflow.cfg on both servers:
remote_base_log_folder = s3://my-bucket/airflow_logs/
remote_log_conn_id = s3_logging_conn
I've set up s3_logging_conn in the airflow UI, with the access key and the secret key as described here.
I checked the connection using
s3 = airflow.hooks.S3Hook('s3_logging_conn')
s3.load_string('test','test',bucket_name='my-bucket')
This works on both servers. So the connection is properly set up. Yet all I get whenever I run a task is
*** Log file isn't local.
*** Fetching here: http://*******
*** Failed to fetch log file from worker.
*** Reading remote logs...
Could not read logs from s3://my-bucket/airflow_logs/my-dag/my-task/2018-02-15T21:46:47.577537
I tried manually uploading a log following the expected conventions, and the webserver still can't pick it up, so the problem is on both ends. I'm at a loss as to what to do; everything I've read so far tells me this should be working. I'm close to just installing 1.9.0, which I hear changes logging, to see if I have more luck.
UPDATE: I made a clean install of Airflow 1.9 and followed the specific instructions here.
Webserver won't even start now with the following error:
airflow.exceptions.AirflowConfigException: section/key [core/remote_logging] not found in config
There is an explicit reference to this section in this config template.
So I tried removing it and just loading the S3 handler without checking first and I got the following error message instead:
Unable to load the config, contains a configuration error.
Traceback (most recent call last):
File "/usr/lib64/python3.6/logging/config.py", line 384, in resolve:
self.importer(used)
ModuleNotFoundError: No module named
'airflow.utils.log.logging_mixin.RedirectStdHandler';
'airflow.utils.log.logging_mixin' is not a package
I get the feeling that this shouldn't be this hard.
Any help would be much appreciated, cheers
Solved:
upgraded to 1.9
ran the steps described in this comment
added
[core]
remote_logging = True
to airflow.cfg
ran
pip install --upgrade airflow[log]
Everything's working fine now.
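For reference, the relevant airflow.cfg pieces end up looking roughly like this on 1.9 (the bucket and connection names are the ones used above; the exact keys have moved between versions, so treat this as a sketch of what worked here rather than a canonical config):
[core]
remote_logging = True
remote_base_log_folder = s3://my-bucket/airflow_logs/
remote_log_conn_id = s3_logging_conn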

Crashplan on FreeNAS missing /var/lib/crashplan/.ui_info

I've spent a few weeks on this problem now. I've been trying to get CrashPlan running on a headless FreeNAS server, and I have found lots of tutorials for doing this. However, the fact is that I'm missing the .ui_info file on my FreeNAS server after installing CrashPlan.
I have searched the whole file system to try to find the elusive .ui_info file.
I've tried creating it manually with information copied from a desktop PC, but that does not help my CrashPlan Pro app connect to the CrashPlan service on FreeNAS.
INFO:
FreeNAS 9.3 STABLE
Crashplan 3.6.3_1 Plugin
The CrashPlan remote access behaviour changed several times during the last updates; however, with version 3.6.3_1 you should find the .ui_info file in
/var/lib/crashplan/.ui_info
Although the jail version is 3.6.3, it's possible that CrashPlan has updated itself; please check this with:
tail -f /usr/pbi/crashplan-amd64/share/crashplan/log/service.log.0
In the end you want your Crashplan to update itself anyway. If the update process produces an error related to bash, please run:
pkg update
pkg install bash
ln -siv /usr/local/bin/bash /bin/bash
And restart crashplan while checking the log output with the tail -f command from above:
service crashplan restart
If you finally reach a recent version (>4.4.1), it's time to connect to CrashPlan remotely.
The only change necessary on the server for the easiest method, without an SSH tunnel, is the serviceHost tag in /usr/pbi/crashplan-amd64/share/crashplan/conf/my.service.xml:
<serviceUIConfig>
<serviceHost>0.0.0.0</serviceHost>
Either do this every time you want to connect, because the token will change after every CrashPlan restart, or use my script from here (for OS X): https://gist.github.com/Phlogi/8654e353786ed1cf0858
Copy /var/lib/crashplan/.ui_info to the correct place on your desktop machine and edit the IP address at the end (to your server's address), for example:
4339,7f1d655f-*****,192.168.1.20
That's it. You can start CrashPlan on your remote machine and it will connect properly; there are no other changes necessary. The latest CrashPlan (>4.4.1) will actually use the IP address from .ui_info.
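A rough sketch of that copy step from an OS X or Linux client follows; the client-side path and the server address are assumptions for illustration, since CrashPlan's client path differs per OS:
scp root@192.168.1.20:/var/lib/crashplan/.ui_info "/Library/Application Support/CrashPlan/.ui_info"
# then edit the copied file and replace the trailing IP with the server's address,
# keeping the port and token exactly as the server generated them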
Install the JRE. You will need to add --no-check-certificate to the JRE wget line in the install.sh file.

Vagrant corrupted index file C:\Users\USERNAME\.vagrant.d/data/machine-index/index

My Windows 8.1 just crashed. Now I have some files on my disk that are corrupted. This includes my Vagrant machine index (not sure if the naming is right, but I know it is this file: C:\Users\USERNAME\.vagrant.d/data/machine-index/index).
There is a lot of binary or hexadecimal stuff in there (again, not sure, because I don't usually deal with this stuff, so correct me if I'm wrong!), and Vagrant spits out the following message when I try to start everything after boot.
vagrant up returns this
The machine index which stores all required information about
running Vagrant environments has become corrupt. This is usually
caused by external tampering of the Vagrant data folder.
Vagrant cannot manage any Vagrant environments if the index is
corrupt. Please attempt to manually correct it. If you are unable
to manually correct it, then remove the data file at the path below.
This will leave all existing Vagrant environments "orphaned" and
they'll have to be destroyed manually.
Path: C:/Users/Username/.vagrant.d/data/machine-index/index
Same thing happened to me. So I just deleted the index file and the .lock file from the machine-index folder to get Vagrant working again.
When using Vagrant 2.2.5 on Windows 10, I had to navigate to /Users/{yourname}/.vagrant.d/data/machine-index and remove both index and index.lock, i.e. rm index, then rm index.lock.
Finally, I navigated back to the Homestead folder and ran vagrant up.
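In PowerShell the whole sequence looks roughly like this sketch (the Homestead path is just an example, and VAGRANT_HOME may move the .vagrant.d folder elsewhere):
cd $HOME\.vagrant.d\data\machine-index
rm index         # delete the corrupt machine index
rm index.lock    # and its lock file
cd $HOME\Homestead
vagrant up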
When my laptop crashed, I had the same Vagrant issue (the same corrupted-index error shown above) on my first attempt to run vagrant up.
Unfortunately, my issue was not solved by deleting the index and index.lock files as the top-voted answer suggests. I rebooted my VM using the VirtualBox GUI (VirtualBox is my VM provider) and the following message showed up:
Entering emergency mode. Exit the shell to continue.
Type "journalctl" to view system logs.
You might want to save "/run/initramfs/rdsosreport.txt" to a USB stick or /boot
after mounting them and attach it to a bug report.
I realised that the crash had produced errors on the VM's file system. After some searching and investigation, I overcame the issue by executing the command below.
xfs_repair -v -L /dev/dm-0
Environment info: Windows 10, VirtualBox 6.1, Vagrant 2.2.7, VM OS CentOS 7.
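For anyone in the same spot, a cautious version of that repair from the emergency shell might look like this; the device name is the one from this setup, so confirm yours first, and note that -L discards the XFS metadata log as a last resort:
xfs_repair -n /dev/dm-0     # dry run: report problems without modifying anything
xfs_repair -v -L /dev/dm-0  # -L zeroes the metadata log; recent writes may be lost
reboot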

Redis Server doesn't start or do anything - Redis-64 on Windows

I'm following the steps outlined at this link; however, when I try to start the server nothing happens, nor can I connect to anything from the client. Does anyone know how to run this?
When I try from a command prompt instead of double-clicking redis-server.exe, I get this message:
[11868] 23 Jul 11:58:26.325 # QForkMasterInit: system error caught. error code=0x000005af, message=VirtualAllocEx failed.: unknown error
http://bartwullems.blogspot.ca/2013/07/unofficial-redis-for-windows.html
The easiest way to install Redis is through NuGet:
Open Visual Studio
Create an empty solution so that NuGet knows where to put the packages
Go to the Package Manager Console: Tools –> Library Package Manager –> Package Manager Console
Type Install-Package Redis-64
Go to the Packages folder and browse to the Tools folder. Here you’ll find the Redis-server.exe. Double click on it to start it.
Redis is ready to use and starts listening on a specific port (6379 in my case).
Let’s open up a client and try to put a value into Redis. Start Redis-cli.exe. It already connects to the same port by default.
Add a value by executing a SET command, then read the value back with GET.
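The original post showed these as screenshots; presumably they were the standard SET/GET pair, something like the following (key and value are arbitrary examples):
127.0.0.1:6379> SET mykey "hello"
OK
127.0.0.1:6379> GET mykey
"hello"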
Try running it with redis-server --maxheap 4000000.
Miguel is correct, but it is not that simple. To start redis-server either as a service or from the command prompt, the amount of available RAM and disk space must be sufficient for Redis to run as configured.
Now, if no configuration file is specified when running Redis, it will use the default configuration values. All of this is documented in the redis.windows.conf file as well as in the document "Redis on Windows.docx" (both deployed with the redis installation).
In my experience, errors when starting Redis usually come from a lack of available resources (RAM or disk space) or from incorrect configuration of the maxheap or maxmemory parameters.
To troubleshoot this kind of behavior, check your system's available resources and try running redis-server from the command line, varying the maxmemory, maxheap, and/or heapdir parameters. Setting the loglevel parameter to verbose might also help diagnose the issue.
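As a rough starting point, such a troubleshooting run from the command prompt might look like this; the byte values are placeholders to be sized to your available RAM and disk, and redis.windows.conf (in the same folder) documents each parameter:
redis-server redis.windows.conf --maxheap 512000000 --maxmemory 256000000 --loglevel verbose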
Regards