I was not able to mount the rootfs from the SD card, so I am trying to mount it using TFTP and QSPI.
I have loaded my rootfs.jffs2 file at RAM address 0x21000000 using TFTP. I just need the bootargs that will mount this rootfs on the board.
My U-Boot and kernel are working, so I don't want to change them at this time; the only problem is with the rootfs.
Please just suggest bootargs to mount the rootfs.
There should be something like "root=/dev/mmcblk0p2" in your bootargs. It stands for MMC block device 0, partition 2. If you still fail to mount rootfs on SD card, please provide the full boot log.
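If the rootfs is instead meant to live in QSPI flash, note that jffs2 can only be mounted from an MTD device, so the image loaded into RAM over TFTP first has to be written into the flash; the kernel then mounts it from the corresponding MTD partition. A rough sketch of the U-Boot steps, where the flash offset (0x100000), image size (0x400000), and partition number (mtdblock1) are all placeholders that depend on your board's flash layout:

```shell
sf probe 0                              # initialise the QSPI flash
sf erase 0x100000 0x400000              # erase the rootfs region
sf write 0x21000000 0x100000 0x400000   # copy the jffs2 image from RAM into flash
setenv bootargs 'console=ttyS0,115200 root=/dev/mtdblock1 rootfstype=jffs2 rw'
saveenv
```

The mtdblock number must match the MTD partition that covers the chosen flash offset; `cat /proc/mtd` on a running system shows the numbering.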
I need to be able to pass some parameters to my virtual machine during its bootup so it sets itself up properly. To do that I either have to bake the info into the image or somehow pass it as parameters to my qemu-kvm command. These parameters are just a few; if it were VMware, we would just pass them as OVA params, and when the VM launches we would call the OVA environment to get them. But launching it from qemu-kvm I have no such option. I did some homework and found that I could use the virtio-9p driver for sharing files between host and guest. Unfortunately RHEL/CentOS has decided not to support 9p.
With no option of rebuilding my RHEL kernel with the 9p options enabled, how do I solve the above problem? Either solution would work: pass/share some kind of JSON file to the VM (pre-populated on the host), which it will read and do its setup from, OR set some kind of "environment variables" which I can query from within the VM to get these params and continue with setup. Any pointers would help.
If your version of QEMU supports it, you could use its -fw_cfg option to pass information to the guest. If that guest is running a Linux kernel with CONFIG_FW_CFG_SYSFS enabled, you will be able to read out the information from sysfs. An example:
If you launch your VM like so:
qemu-system-x86_64 <OPTIONS> -fw_cfg name=opt/com.example.test,string=qwerty
From inside the guest, you can then get the value back from sysfs:
cat /sys/firmware/qemu_fw_cfg/by_name/opt/com.example.test/raw
There appears to be some driver for Windows as well, but I've never used it.
When you boot your guest with -kernel and -initrd you should be able to pass environment variables with -append.
The downside is that you have to keep track of your current kernel and initrd outside of your disk image.
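Inside the guest, parameters passed via -append end up in /proc/cmdline, so a setup script can pick them out. A minimal sketch (the MYVAR/OTHER names and the sample command line are made up for illustration; on a real guest you would read /proc/cmdline instead):

```shell
#!/bin/sh
# On a real guest: cmdline=$(cat /proc/cmdline)
cmdline="console=ttyS0 root=/dev/vda1 MYVAR=hello OTHER=42"

get_param() {
    # Print the value of the KEY named in $1, nothing if absent.
    for word in $cmdline; do
        case "$word" in
            "$1"=*) printf '%s\n' "${word#*=}" ;;
        esac
    done
}

get_param MYVAR    # prints: hello
get_param OTHER    # prints: 42
```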
Other possibilities could be a small prepared disk image (as you said) or via network/dhcp or a serial link into your guest or ... this really depends on your environment.
I was just searching to see if this situation had improved and came across this question. Apparently it has not improved.
What I do is write my variable data to a temp file (e.g. /tmp/xxFoo). Usually I write text or a tar straight to that file, then truncate it to a minimum size and a 512-byte multiple, like 64K, otherwise the disk controller won't configure it. Then the VM starts with that file as a raw drive. After the VM has started, the temp file is deleted. From within the guest you can read/cat the raw block device and get the variable data (on BSD, use the c partition as the raw drive).
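The pad-to-a-512-byte-multiple step above can be sketched like this (the file name and payload are placeholders; the 64K floor is the one mentioned above):

```shell
#!/bin/sh
# Stage variable data in a temp file, then pad it so its size is a
# 512-byte multiple (at least 64 KiB) so it can back a raw drive.
f=/tmp/xxFoo
printf 'some variable data for the guest\n' > "$f"

size=$(wc -c < "$f")
min=65536                                  # 64 KiB floor
# Round up to the next 512-byte boundary, but never below the floor.
padded=$(( (size + 511) / 512 * 512 ))
[ "$padded" -lt "$min" ] && padded=$min
truncate -s "$padded" "$f"

# The VM would then be started with something like:
#   qemu-system-x86_64 ... -drive file=/tmp/xxFoo,format=raw
ls -l "$f"
```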
In Windows guests it's tricky to get at the data. In theory you can read \\.\PhysicalDriveN, but I have never been able to get that to work. Cygwin can do it, and it works like Linux. The other option is to make your temp file a partitioned and formatted image, but that's a pain to create and update.
As far as sharing a folder I use Samba which works in just about anything. I usually use several instances of smbd running with different configurations.
One option is to create an ISO file and pass it as a parameter. This works with Windows and Ubuntu as both host and guest. You can read the mounted CD-ROM inside the guest OS.
>>qemu-system-x86_64 -drive file=c:/qemuiso/winlive1.qcow2,format=qcow2 -m 8G -drive file=c:\qemuiso\sample.iso,index=1,media=cdrom
To mount the CD-ROM on an Ubuntu guest:
>>blkid //to check if media is there
>>sudo mkdir /mnt/cdrom
>>sudo mount /dev/sr0 /mnt/cdrom //this step can also be put in crontab
>>cd /mnt/cdrom
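Creating the ISO in the first place can be done with genisoimage (packaged as mkisofs on some systems); a sketch, assuming the files for the guest are staged in a ./params directory (the directory, file names, and volume label are placeholders):

```shell
# Stage the files the guest should see, then build an ISO from them.
mkdir -p params
echo 'setting=value' > params/config.txt

# -o output file, -V volume label, -r Rock Ridge, -J Joliet extensions.
genisoimage -o sample.iso -V PARAMS -r -J params/

# Attach it to the guest as a CD-ROM, as in the command above:
#   -drive file=sample.iso,index=1,media=cdrom
```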
Looks like SSH is disabled by default on all new Jessie images. I only run Pis headless with SSH, so I have no way to configure a new Pi without a keyboard/monitor (which I don't have).
Newsgroups/articles say to just put a file called "SSH" into the /boot/ folder, but how can you do this from Windows?
Is there a way to do this (enable SSH) on the SD card, then insert it into the Pi, then SSH to the new IP address?
To enable SSH without a keyboard/monitor, you first need to create an empty file named ssh on the boot partition of the SD card:
touch ssh
I used the following link:
https://www.installvirtual.com/enable-ssh-in-raspberry-pi-without-monitor/
After copying the image onto the SD card:
On a Linux computer with SD card slot the boot partition of the SD card is
normally automatically mounted as /media/yourusername/boot. In this
case you could simply execute the following command:
touch /media/`id -un`/boot/ssh
to create an empty file called ssh in the boot directory of the SD card.
After unmounting the SD card and putting it back into a Pi the SSH service
should start fine after boot.
Verified using 2017-03-02-raspbian-jessie.zip and a laptop with an SD card slot running Ubuntu 14.04 LTS.
Creating an empty file named ssh in the FAT32 boot partition should also be possible using a laptop running Windows.
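The whole step as a small script, using a throwaway directory to stand in for the mounted boot partition (on a real system the path would be something like /media/$(id -un)/boot):

```shell
#!/bin/sh
# Stand-in for the SD card's boot partition.
boot=/tmp/boot-demo
mkdir -p "$boot"

# The file must be named exactly "ssh" (no extension); its content is ignored.
touch "$boot/ssh"

ls -l "$boot/ssh"
```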
I would like to pull files from a network share to my QNAP device.
In Windows I would type net use \\MyDevice\MyShare /User:... and then copy \\MyDevice\MyShare\FileFilter LocalPath
- How do I mount the network share on the QNAP using SSH?
- Where are my volumes on the QNAP? I did not find them.
In the local filesystem of your QNAP there is a /share directory. It contains symlinks to all shared folders that have been set up. Even external storage options like USB harddrives are symlinked there.
It is also the mountpoint for the QNAP volumes.
You can check this by just using the readlink command.
[/] # readlink -f /share/Music
/share/CACHEDEV1_DATA/Music
[/] #
A network share can be mounted on the QNAP via various protocols (e.g. NFS, CIFS). If you are still on QTS 4.2 and have not updated to QTS 4.3 yet, you could try this third-party app (qpkg) to add sshfs support.
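For example, a CIFS share could be mounted from the QNAP's SSH shell roughly like this (server, share, mount point, and credentials are placeholders):

```shell
# Mount the remote CIFS share under /share so it sits next to the local ones.
mkdir -p /share/RemoteShare
mount -t cifs //MyDevice/MyShare /share/RemoteShare -o username=myuser

# Then pull files with plain cp, e.g.:
#   cp /share/RemoteShare/*.log /share/Public/
```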
I am trying to mount an NFS share on a Solaris 10 machine at boot, without any luck so far.
The nfs share is accessible and mounts without any problems if I do so manually from the command line (mount -F nfs server_hostname:/exported_dir_path/ /mnt/tmpdir). But I don't know how to tell autofs to mount it at bootup.
We have another machine on the same network (also solaris) that has it working, but I can't figure out how is it configured differently from the non-working one.
I googled the problem and found that /etc/vfstab, /etc/auto_master, and /etc/hosts files need to have proper entries to make this work. I compared these files from the non-working machine with the ones on working machine but did not notice any differences.
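For reference, the pieces being compared usually fit together like this: /etc/auto_master points at a direct map, and the direct map ties the mount point to the export. A sketch using the paths from the question (treat them as placeholders):

```shell
# /etc/auto_master -- add a direct map entry:
/-    auto_direct

# /etc/auto_direct -- one line per direct mount:
/mnt/tmpdir    server_hostname:/exported_dir_path
```

After editing the maps, `automount -v` (or `svcadm restart system/filesystem/autofs`) makes autofs pick up the changes.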
Could someone please guide me on how to properly configure autofs to mount NFS shares on a Solaris 10 machine?
Today, I was trying to get on my Kali Linux virtual machine to just do a basic vulnerability check on a VPS I own. I have my Kali Linux Virtual Disk Image (VDI) saved on a USB external drive, so I plugged that in, fired up VirtualBox, but I got an error when I went to start it. It would appear that the drive letter for this drive has changed from F: to E:. Thus, VirtualBox could not retrieve the VDI from F:\Kali Linux VM\.
Trying to troubleshoot this on my own, I decided to open up the VM settings, remove the SATA Controller VDI that was registered on the F: drive, and then add the VDI from the E: drive (same VDI, just a different drive letter). That, however, did not go as smoothly as planned. I was able to remove the incorrect VDI path without any problems, but when I tried to add the VDI at the proper path, I got the following error:
Cannot register the hard disk 'E:\Kali Linux VM\Kali Linux.vdi' {6b214e73-ae38-427b-90f8-995c7dd4211c} because a hard disk 'F:\Kali Linux VM\Kali Linux.vdi' with UUID {6b214e73-ae38-427b-90f8-995c7dd4211c} already exists.
Result Code:
E_INVALIDARG (0x80070057)
Component:
VirtualBoxWrap
Interface:
IVirtualBox {0169423f-46b4-cde9-91af-1e9d5b6cd945}
Callee RC:
VBOX_E_OBJECT_NOT_FOUND (0x80BB0001)
It looks like I cannot add the VDI back to the VM because it is identical to the VDI I removed.
Has anyone else encountered a problem like this? And does anyone have a fix for this so I don't lose all the data on that VM?
Thank you all in advance.
Note: I know this isn't a programming question, so this may be the wrong Stack Exchange. Please let me know if this would be better suited under a different Stack Exchange site.
Open the Oracle VM VirtualBox Manager. Now go to
File > Virtual Media Manager
Under Hard disks, select Kali Linux.vdi. Right-click and remove it.
NOTE: If Remove is disabled, click Release first, then right-click and remove.
Now add the VDI Kali Linux.vdi again.
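The same cleanup can also be done with VBoxManage, if you prefer the command line. The VM name "Kali" and controller name "SATA" below are assumptions; substitute whatever your VM settings show:

```shell
VBoxManage closemedium disk "F:\Kali Linux VM\Kali Linux.vdi"
VBoxManage storageattach "Kali" --storagectl "SATA" --port 0 --device 0 --type hdd --medium "E:\Kali Linux VM\Kali Linux.vdi"
```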