Run a Linux distro of choice inside an existing distro - Arch Linux

Just wondering if it's possible, and what the best route might be, to run a full-on Linux distro within my existing distro? It would be great to, for instance, run Arch Linux within a chroot, jail, etc. I believe people are doing this on Chromium OS, for example.
I would require that, whatever filesystem is loaded, I can install packages using pacman and that my changes are kept intact between sessions.
I have tried the VirtualBox route, by the way, but there is a pretty nasty bug involving double mouse pointers on rotated host screens that I can't seem to get around.
I should mention that I'll be using this chroot environment for development, maybe running the odd X client to be exported remotely, etc.

I followed the chroot guide at https://wiki.archlinux.org/index.php/Change_Root and basically installed a whole Arch system within a nested chroot, following the Arch Linux installation guide. I'm now able to switch into the environment at will.
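For reference, the mount sequence from that guide looks roughly like this (a sketch only, assuming the Arch filesystem lives under /mnt/arch; the arch-install-scripts package also provides arch-chroot, which sets these up for you):
mount -t proc proc /mnt/arch/proc
mount -t sysfs sys /mnt/arch/sys
mount -o bind /dev /mnt/arch/dev
mount -o bind /dev/pts /mnt/arch/dev/pts
cp /etc/resolv.conf /mnt/arch/etc/resolv.conf   # so DNS works inside the chroot
chroot /mnt/arch /bin/bash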

There is a tool, https://github.com/fsquillace/junest, that does everything automatically for you (it downloads and unpacks an Arch Linux root filesystem into a folder and chroots into it).


How do I avoid installing the same programming languages in both WSL and Windows 10?

I am thinking about using WSL as a dev workspace. However, I realized I will need to install Node.js, Python, create-react-app, and so on in WSL even though my Windows 10 already has them installed.
It would be helpful if you could spare me some advice.
Thanks.
To some degree, it depends on what type of development you are doing. Given your example languages/tools, I'm going to assume that most of your development is platform agnostic, web-development, etc.
My recommendation is to go all-in on WSL and install the Linux versions of the tools you use (with some notable exceptions covered below).
Uninstalling the Windows versions is recommended, but not strictly necessary. I recommend uninstalling because I continue to see a number of questions across the Stack sites where it becomes apparent that the Windows version of Node or Python is getting called from inside WSL. It's likely that some tool, such as nvm or equivalent, attempted to prepend the Windows Node or Python location to the Linux path.
This causes problems, as the Windows versions of Node and Python understand Windows paths and processes. When you call them from the Linux shell in WSL, they are handed Linux-style paths, and Windows Python just won't understand something like /mnt/c/Projects; it needs C:\Projects. You can work around this with utilities such as wslpath (automatically installed in some WSL distributions, installable in all others), or you could just manually adjust the path. But ... why go through the hassle if you don't need to?
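If you do occasionally need to cross over, wslpath translates in both directions; for example (paths purely illustrative):
wslpath -w /mnt/c/Projects      # -> C:\Projects
wslpath -u 'C:\Projects'        # -> /mnt/c/Projects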
Just use the Linux versions, with the corresponding Linux paths and instructions. Most development tools, tutorials, instructions, etc. are going to "default" to the Linux doc. It will typically be more complete, more up-to-date, etc.
And, of course, the Linux command-line experience is (subjectively, sure) far and away better than PowerShell. Don't get me wrong, I like PowerShell, but I like PowerShell even better when I call it through WSL (powershell.exe or pwsh.exe), since I can take advantage of Linux niceties like less (or bat), jq, and many others.
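For example, from a WSL shell you can pipe PowerShell output straight into those tools (commands purely illustrative):
powershell.exe -NoProfile -Command "Get-ChildItem C:\Windows" | less
powershell.exe -NoProfile -Command "Get-Process | ConvertTo-Json" | jq '.[0].ProcessName'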
Not to say there aren't WSL caveats that you have to get used to. Be prepared to run into a few snags here and there (lack of Systemd support, permissions, filesystems, inotify), but most everything has a workaround that you'll typically find here on Stack (Stack Overflow, Ask Ubuntu, Unix & Linux, and/or Super User) if you search.
And for those "notable exceptions" I mentioned, I recommend installing:
Windows Terminal (available in the Microsoft Store), which will provide an upgraded terminal experience for WSL.
The Windows version of Visual Studio Code -- I've seen a question from someone here who tried to install the Linux version. It's just not necessary. Microsoft has done a great job of integrating the Windows version of VSCode with WSL. Just install the "Remote Development" extension pack, which includes the "Remote - WSL" extension.
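With that in place, opening a WSL project in the Windows VSCode is as simple as running, from your WSL shell (project path is hypothetical):
cd ~/my-project
code .    # launches the Windows VSCode attached to WSL via the VS Code Server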

Setting up desktop environment on NetBSD 6.1.5

I have installed NetBSD 6.1.5 with the full installation setting. However, when I run startx it says no screens could be found. So I tried "X -configure" and then "X -config ~/xorg.conf.new", and I was brought to a very generic screen with a black X crosshair, but I was unable to exit it using the suggested Ctrl+Alt+Backspace, so I had to force a power-off. I then checked whether my keyboard was recognized in the generated config file, and it was. I have installed xdm, xterm, Xorg, and other X programs.
I am not familiar with setting up desktop environments from scratch. I am a newb who is used to Ubuntu-esque installers doing that stuff for me.
Would someone be able to walk me through the installation or point me to a link which explains a step-by-step process?
What happens if you rename your xorg.conf.new to /etc/X11/xorg.conf? Does startx or xdm work then?
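That is, something along these lines (assuming the generated file ended up in root's home directory):
cp /root/xorg.conf.new /etc/X11/xorg.conf
startx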
Are you running this inside a VirtualBox or other emulator?
I have NetBSD on a Thinkpad T420 which I occasionally boot into Windows, and I've set up VirtualBox to be able to run the same NetBSD install when I'm in Windows. The key difference in the xorg.conf file is in the Device section:
Section "Device"
    Driver "vesa"
EndSection
Also I've found the free version of http://mobaxterm.mobatek.net/ very handy - I use it to ssh into the virtual NetBSD box and then run X apps and have them display on the Windows desktop.
Final note - you might want to look out for the NetBSD-7 RC1 which should be out 'Real Soon Now', as there are some very handy improvements, including better support for most modern display hardware :)
I found that running startx from any directory with a .xinitrc file gives strange behavior in amd64 6.1.5 and 6.1.4. Delete (or rename) any .xinitrc files and try
xinit /path/to/windowmanager
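For example, with the twm that ships in the base X sets (path as on a stock NetBSD install):
xinit /usr/X11R7/bin/twm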
Please read Chapter 9 of NetBSD Guide:
http://www.netbsd.org/docs/guide/en/chap-x.html
Section 9.9 discusses installing various Desktop Managers/Environments.
It turns out that I could run "X -config xorg.conf.new" as root on the host and then ssh in using PuTTY to manually launch X clients.
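In other words, with the X server left running on the console, clients can be started from the ssh session against that display, e.g. (assuming the server came up on display :0; you may also need to relax access control with xhost):
DISPLAY=:0 xterm &
DISPLAY=:0 twm &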

Cross-platform Debian package testing

I am trying to test Debian packages which are built on an x86 Linux system, but which will be executed on an ARM architecture. My {pre,post}{inst,rm} scripts are failing with an "exec format error" because /bin/bash in the chroot'd environment, which is an image of a flash filesystem, is an ARM binary, not an x86 binary.
What I'm looking for, but cannot find, is an option to dpkg which is like --root, but which doesn't use chroot. I'd presumably need to know the name of some environment variable (?) which contains the name of the parameter to --root.
It's probably easier to make the /bin/bash (and everything else) in the chroot executable.
Install qemu-user-static on the host. That will give you QEMU user space emulators for all architectures in static versions — so no complications with dynamic libraries in the chroot. It also configures binfmt support to execute ARM binaries with /usr/bin/qemu-arm-static.
Copy /usr/bin/qemu-arm-static into the /usr/bin of the chroot. Now you should be able to chroot and run programs normally. That way your Debian packages can be tested chrooted into their (emulated) native environment.
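Put together, it's only a few commands on a Debian-ish host (paths and package name below are illustrative; the ARM rootfs is assumed to be unpacked under /srv/armroot):
sudo apt-get install qemu-user-static
sudo cp /usr/bin/qemu-arm-static /srv/armroot/usr/bin/
sudo chroot /srv/armroot /bin/bash      # the ARM bash now runs under qemu emulation
dpkg -i /tmp/mypackage_1.0_armhf.deb    # inside the chroot; maintainer scripts run emulated too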
Alternatively to the good suggestion to use qemu, starting with dpkg 1.18.5 you could use --instdir in conjunction with --force-script-chrootless. Depending on the maintainer scripts you might need to adapt them to make use of the DPKG_ROOT environment variable. There's more information in the dpkg man page.
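A sketch of that invocation (needs dpkg >= 1.18.5; the target directory and package name are just examples):
dpkg --instdir=/srv/armroot --force-script-chrootless -i mypackage_1.0_armhf.deb
# dpkg exports DPKG_ROOT=/srv/armroot to the maintainer scripts, which must prefix the paths they touch with it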

Vagrant and / or Docker workflow with full OS X filesystem integration for seamless local feel?

Recently I've been dabbling with vagrant and docker. These are quite interesting tools, but I haven't been able to convince myself that it's the way to go quite yet on my OS X machine. Being an old Unix hat, I have to say that I like having a consolidated and sandboxed environment for development purposes.
I've seen a lot of chatter and a number of friends have been using vagrant with just stock vim for editing. I'm not really a fan of that approach and would probably prefer to use the vm provider's sharing mechanism OR, more likely, NFS.
Personally I'd like to be able to edit directly in TextMate, SublimeText, Emacs (on OS X), or even perhaps use RubyMine and its various IDE features, etc.
Is there any way to really get the workflow down so that such an environment will be essentially like working on a local environment without having to pull a lot of additional background strings to make things work out?
I suppose a few well placed scripts could go a long way, but I've not found any solid answers on really making this a seamless environment.
What actually worked for me was to use boot2docker, which makes it easy to install a lightweight virtual machine (with VirtualBox) that will host your docker daemon and images. The only thing you need in order to run docker commands is to run $(boot2docker shellinit) when you open a new Terminal.
If you need to also have your files on an OS X folder and share them with a running docker image, you need some additional setup, but once you do it, you won't have to do it again.
Have a look here for a nice walkthrough on how to do it. The steps in short are:
Get a special boot2docker image that allows you to use shared folders for VirtualBox
Configure VirtualBox to share a folder:
VBoxManage sharedfolder add boot2docker-vm -name home -hostpath /Users
This will share your /Users folder with the boot2docker image that hosts docker.
From your Mac, share the folder you need with a folder in a docker container, like:
docker run -it -v /Users/me/dev/my-project:/root/src:rw ubuntu /bin/bash
One small annoyance that I haven't found a way around is that you no longer access your software through localhost, because it actually runs on the boot2docker instance. You have to run boot2docker ip and access that IP instead.
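For example, if a container publishes port 8080 (image name and port purely illustrative):
docker run -d -p 8080:8080 my-image
open http://$(boot2docker ip):8080/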
Hope that helps!

Automate CentOS installation with VMware for testing

Is it possible to automate the installation of an OS using VMware or any other virtualization product?
One of our products consists of a customized version of CentOS that installs the OS and our application on a server. It's much like any CentOS/RHEL installation where you choose a mode that corresponds to different kickstart options, and then you choose your keyboard type. The rest of the installation is automatic.
What I'd like to have is an automated system that will create a new guest VM, boot it with the ISO image of our product, start the installation (including choosing the keyboard), wait for the reboot, and then launch a set of automated tests.
I know that there are plenty of ways to automate the creation of new VM guests from existing templates/images, and I know you can use the VIX API to interact with virtual machines, but the VIX API seems to require that VMware tools is already running (which won't be the case when you're booting from the CentOS install disk).
This answer (Automating VMWare or VirtualPC) indicates that you can script VMware to boot from an ISO that does an unattended installation, but I would really like to test the same process that our customers will be using.
Another option might be to use Xen's fully-virtualized mode and see if scripting it over the serial port will work.
TIA,
Jason
I have a very similar question; it is on Super User:
https://superuser.com/questions/36047/moving-vmware-os-image-as-primary-os-on-a-system
You can also use VirtualBox instead of VMware. The VirtualBox SDK allows you to directly control the keyboard, the mouse, the serial port, and the parallel port of the guest without the VirtualBox guest tools installed.
Unfortunately it doesn't offer a text console interface but the serial port can be connected to a local pipe file and that can probably be worked with just as well.
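Even without the full SDK, plain VBoxManage can already inject keystrokes and wire the serial port to a local socket, e.g. (VM name is illustrative; scancodes 1c/9c are Enter press/release):
VBoxManage modifyvm "centos-test" --uart1 0x3F8 4 --uartmode1 server /tmp/centos-test-serial   # while the VM is powered off
VBoxManage controlvm "centos-test" keyboardputscancode 1c 9c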
This may not be exactly what you need:
I have done something similar with an Ubuntu-based install. We used preseeding (Debian's form of kickstart) to answer all the questions during the install, providing the preseed file and the installer via tftp.
In addition to the official Ubuntu mirror, we added an apt server with our own packages in the preseed file. We put a .deb version of vmware-tools on the apt server and added it to the packages to be installed.
The .deb of vmware-tools just contained the .tar.gz and a postinstall script that would extract it to /tmp and run the VMware install script (which has a switch to run unattended, so it does not ask any questions).
So after the reboot, vmware-tools was up and running and we could use VIX to script the rest (which was not very reliable).
If you should encounter problems with running vmware-config.pl during boot, you could make a custom package that just extracts the tools and an init script that installs them on first boot, disables itself and reboots.
Maybe you can use this strategy (replacing apt with yum, preseed with kickstart, and tftp with a remastered ISO). If you really need to test that your users choose a keyboard in the installer (which is not very different from kickstart), this would obviously not work for you.
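For the kickstart side of that, a minimal file might look something like this (CentOS 6-era syntax; package selection and %post contents are assumptions, not taken from the product in question):
# ks.cfg - minimal sketch
install
cdrom
lang en_US.UTF-8
keyboard us
rootpw changeme       # plaintext, for the sketch only
timezone UTC
bootloader --location=mbr
clearpart --all --initlabel
autopart
reboot

%packages
@core
%end

%post
# a first-boot hook that unpacks and installs VMware tools could go here
%end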