I have created two YAFFS partitions:
a. /boot
b. /data
Generated "NK.bin" is copied to /boot partition and a text file "DATA.TXT" is copied to
/data partition.
Within Windows CE only the /YaffsPart1 partition is shown, which contains both NK.bin and DATA.TXT.
Why is Windows CE not showing the partitions named boot and data?
Why does it merge the two partitions into one /YaffsPart1?
I need a solution so that Windows CE shows both the boot and data partitions.
Regards,
Nahid
Typically, you have to set up your partitions in the .reg file of the YAFFS driver for your BSP. There will be several registry keys for the different partitions and their starting and ending addresses.
I assume you set up the partitions in the boot loader, and after loading Windows it shows up as one partition? You have to set up the partitions there, but then also define the same partitions in the Windows CE image under the YAFFS driver registry keys. If you have that driver and its settings, there will probably be a single default partition, YaffsPart1, which uses the entire space.
I did some searching in the CE documentation and did not find these settings documented, but here is what my driver settings look like (yours may differ):
; This is the way to add another partition
; Align to the block size.
;
;[HKEY_LOCAL_MACHINE\System\StorageManager\Profiles\lpd_yaffsbd\YAFFS_PARTDRV\PART01]
; For NOR Flash Devices
; "EndAddr"=dword:006C0000
; "StartAddr"=dword:03C0000
; For NAND Flash Devices
; "StartBlock"=dword:00000FE0 ;B256 (make sure to change PART00 values to not over-run)
; "EndBlock"=dword: 000001FF ;B511 (default size = 32MB or 128KB*256 Blocks)
; Common for NOR/NAND Flash
; "Name"="YaffsPart2"
; "PartType"="0"
; "ReadOnly"=dword:0
Related
https://forums.virtualbox.org/viewtopic.php?f=7&t=90893
Hello, I'm desperate and need help because I have lost about two months of work in my Windows 10 guest system.
Everything worked smoothly until I needed more free space (although I have a dynamic HD). So I followed some tutorials and made some changes:
1 - I have the original, almost full disk in: /Maquinas VirtualBox/Clientes Windows/Windows 10/Windows10-disk1.vmdk
2 - I made a copy on an external USB device.
3 - Converted it to VDI: VBoxManage clonehd /media/eduardo/Seagate\ Backup\ Plus\ Drive/Windows10-disk1.vmdk /media/eduardo/Seagate\ Backup\ Plus\ Drive/Windows10-disk.vdi --format vdi
4 - Tried to resize the disk (from 80 GB to 100 GB): VBoxManage modifyhd /media/eduardo/Seagate Backup Plus Drive/Windows10-disk1.vmdk --resize 100000 and VBoxManage modifymedium disk /media/eduardo/Seagate Backup Plus Drive/Windows10-disk1.vmdk --resize 100000 (I think this could be an error, as I should have changed the size of the VDI file instead).
5 - Then I had to change the UUID (because a "UUID already in use" error arose): VBoxManage internalcommands sethduuid "/media/eduardo/Seagate Backup Plus Drive/Windows10-disk1.vmdk"
6 - Then went back to: VBoxManage clonehd "/media/eduardo/Seagate Backup Plus Drive/Windows10-disk1.vmdk" " " --format vdi
and resized: VBoxManage modifymedium disk "/media/eduardo/Seagate Backup Plus Drive/Windows10-disk.vdi" --resize 120000 (a corrected clone-and-resize sequence is sketched after these steps).
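For reference, a typical clone-and-resize sequence would look roughly like the sketch below (paths taken from this post; the 102400 MB target size is just an example). VBoxManage modifymedium --resize works on VDI (and VHD) images but not on VMDK, which is why the VMDK is cloned to VDI first:
VBoxManage clonemedium disk "/media/eduardo/Seagate Backup Plus Drive/Windows10-disk1.vmdk" "/media/eduardo/Seagate Backup Plus Drive/Windows10-disk.vdi" --format VDI
VBoxManage modifymedium disk "/media/eduardo/Seagate Backup Plus Drive/Windows10-disk.vdi" --resize 102400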
I changed my virtual machine to use the new VDI file to test if everything was fine (changed my /Maquinas VirtualBox/Clientes Windows/Windows 10/Windows10-disk1.vmdk disk connection to the new /media/eduardo/Seagate Backup Plus Drive/Windows10-disk.vdi). But I found that the system had somehow gone back two months!
I was not worried and decided to go back to my "untouched" VMDK, but the strangest thing is that the original "untouched" file /Maquinas VirtualBox/Clientes Windows/Windows 10/Windows10-disk1.vmdk also boots with the files and state from about two months ago. So I'm quite nervous.
[attachment: Selección_058.png]
Looking at the files, the 6c***** one has to be the "good state", as it was modified yesterday night. Here is my file manager:
[attachment: Selección_059.png]
Here is my VM (I made a snapshot about two months ago, I don't remember exactly when):
https://imagebin.ca/v/4QlKV3Equ1fW
My log:
https://pastebin.com/JSLFRNMs
I hope somebody can help...
I think the key is to somehow return to the 6c**** state of my VMDK file; I don't understand how this VMDK got changed, as it was not touched.
Thanks in advance
The problem was solved. It had nothing to do with resizing disks. I selected the {6cc3c***-*****} hard disk (although it was "only" 47 GB), and to my surprise it loaded its 47 GB "snapshot" part together with the whole Windows10-disk1.vmdk disk...
Sorry for my bad English, but it's difficult to explain: in the settings of the virtual machine, in the Storage section, select the 6cc***** disk as the main disk and start/boot the VM.
Once it had loaded and was working fine, I deleted the snapshot (to merge everything into the present state) and then made another snapshot as a backup.
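The same snapshot cleanup can also be done from the command line; a rough sketch, assuming the VM is registered as "Windows 10" and the snapshot is named "Snapshot 1" (both names are guesses, adjust them to your setup):
VBoxManage snapshot "Windows 10" list                          # find the snapshot name/UUID
VBoxManage snapshot "Windows 10" delete "Snapshot 1"           # merge the snapshot into the current state
VBoxManage snapshot "Windows 10" take "backup-after-recovery"  # new snapshot as a backup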
Thanks
I have a dataset on server1 that I want to back up to a second server, server2.
Server1 (original):
zfs list -o name,used,avail,refer,creation,usedds,usedsnap,origin,compression,compressratio,refcompressratio,mounted,atime,lused storage/iscsi/webhost-old produces:
NAME USED AVAIL REFER CREATION USEDDS USEDSNAP ORIGIN COMPRESS RATIO REFRATIO MOUNTED ATIME LUSED
storage/iscsi/webhost-old 67,8G 1,87T 67,8G Út kvě 31 6:54 2016 67,8G 16K - lz4 1.00x 1.00x - - 67,4G
Sending volume to the 2nd server:
zfs send storage/iscsi/webhost-old | pv | ssh -c arcfour,aes128-gcm@openssh.com root@10.0.0.2 zfs receive -Fduv pool/bkp-storage
received 69,6GB stream in 378 seconds (189MB/sec)
Server2 zfs list produces:
NAME USED AVAIL REFER CREATION USEDDS USEDSNAP ORIGIN COMPRESS RATIO REFRATIO MOUNTED ATIME LUSED
pool/bkp-storage/iscsi/webhost-old 36,1G 3,01T 36,1G Pá pro 29 10:25 2017 36,1G 0 - lz4 1.15x 1.15x - - 28,4G
Why is there such a difference in sizes? Thanks.
From what you posted, I noticed 3 things that seemed odd:
the compressratio is 1.15x on system 2, but 1.00x on system 1
on system 2, used is 1.27x higher than logicalused
the logicalused and the number zfs receive report are ~2.3x higher on system 1 than system 2
These terms are all defined in the man page, but it is still confusing to reverse-engineer explanations for them in practice.
(1) could happen if you enabled compression on the source dataset after you wrote all the data to it, since ZFS doesn't rewrite the data to compress it when you enable that setting. The data sent by zfs send is uncompressed unless you use -c, but system 2 will try to compress it as it runs zfs receive if the setting is enabled on the destination dataset. If both system 1 and system 2 had the same compression settings before the data was written, they would have the same compressratio as well.
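A quick way to check this is to compare the compression settings and space accounting on both sides; a small sketch using the dataset names from the question:
# on server1
zfs get compression,compressratio,used,logicalused storage/iscsi/webhost-old
# on server2
zfs get compression,compressratio,used,logicalused pool/bkp-storage/iscsi/webhost-old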
(2) can happen due to metadata written along with your data, but in this case it's too high for "normal" metadata, which accounts for 1-2% of most pools. It's probably caused by a pool-wide setting, like configuring RAID-Z, or a weird combination of striping and mirroring (like 4 stripes, but with one of them being a mirror).
For (3), I re-read the man page to try to figure it out:
logicalused
The amount of space that is "logically" consumed by this dataset and
all its descendents. See the used property. The logical space
ignores the effect of the compression and copies properties, giving a
quantity closer to the amount of data that applications see.
If you were sending a dataset (instead of a single iSCSI volume) and the send size matched system 2's logicalused value (instead of system 1's), I would guess you forgot to send some child datasets (i.e. by using zfs send -R). However, neither of those are true in this case.
I had to do some additional digging -- this blog post from 2005 might contain the explanation. If system 1 didn't have compression enabled when the data was written (like I guessed above for (1)), the function responsible for not writing zeroed-out blocks (zio_compress_data) would not be run, so you probably have a bunch of empty blocks written to disk, and accounted for in the logicalused size. However, since lz4 is configured on system 2, it would run there, and those blocks would not be counted.
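If you want to see this effect in isolation, a tiny experiment along these lines should reproduce it (the pool name testpool is hypothetical):
# dataset without compression: all-zero blocks are actually written and accounted for
zfs create -o compression=off testpool/nocomp
dd if=/dev/zero of=/testpool/nocomp/zeros bs=1M count=100
# dataset with lz4: all-zero blocks are detected by the compression path and never written
zfs create -o compression=lz4 testpool/lz4
dd if=/dev/zero of=/testpool/lz4/zeros bs=1M count=100
sync
zfs get used,logicalused testpool/nocomp testpool/lz4
# the compression=off dataset should report roughly 100M used/logicalused,
# while the lz4 dataset should report almost nothing for the same all-zero file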
I have a Freescale i.MX ARM board for which I am preparing the bootloader, kernel, and root filesystem on an SD card.
I am a little confused about the order in which I partition the SD card and copy my files onto it. Let us say I have an empty 4 GB SD card. I used gparted to first partition it into:
First partition, 400 MB, as a FAT32 filesystem. This is my boot partition.
Second partition is the rest of the card as ext3. This is my root file system partition.
Let us say my SD card is under /dev/sdb.
Now, I have seen many documents that differ slightly in the way they copy the boot files.
Which is the right way?
Method 1:
(without mounting the sdb partitions):
sudo dd if=u-boot.bin of=/dev/sdb bs=512 seek=2
sudo dd if=uImage of=/dev/sdb bs=512 seek=2
Mount sdb2 for copying rootfs:
mount /dev/sdb2 /mnt/rootfs
copy rootfs:
tar -xf tarfile -C /mnt/rootfs
Method 2:
Mount sdb1 boot partition:
mount /dev/sdb1 /mnt/boot
copy uboot and kernel:
cp u-boot.bin /mnt/boot/
cp uImage /mnt/boot/
Then copy rootfs as above!
Which is the correct one? I tried both but the SD card is not even booting.
When I tried method 1, the card booted up until it said the rootfs was not found in the partition. I removed the card, reinserted it, and found that the first FAT32 partition was somehow 'destroyed', as it now shows up as 'unallocated' in gparted.
Please help.
You need to mark the first partition as bootable.
Check your first partition's details in gparted or Disk Utility.
From Disk Utility you can mark a partition as bootable by selecting the specific partition and going into the 'More actions' option --> 'Edit partition type'.
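The same can be done from the command line; a small sketch, assuming the card is /dev/sdb as in the question:
sudo parted /dev/sdb set 1 boot on    # set the boot flag on the first partition
sudo parted /dev/sdb print            # verify that the boot flag is now listed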
Below is a script to flash binaries onto the SD card for my
Arndale OCTA board. You can see the placement of bootloader binaries:
BL1
dd iflag=dsync oflag=dsync if=arndale_octa.bl1.bin of=/dev/sde bs=512 seek=1
BL2
dd iflag=dsync oflag=dsync if=../arndale_octa.bl2.bin of=/dev/sde bs=512 seek=31
uboot
dd iflag=dsync oflag=dsync if=u-boot.bin of=/dev/sde bs=512 seek=63
kernel and trusted software, ...
Please note:
1) The partition table is at SD card offset 0 (seek 0); you then have to run: fdisk /dev/sde
and create partitions that do not overlap the blocks occupied by the kernel or the trusted software.
2) Add the "dsync" option to the dd command to guarantee that every write is immediately flushed to the SD card.
In most cases, the i.MX processor requires the bootloader at offset 0x400. So what you are doing for u-boot is correct; you need to use the dd command for that.
sudo dd if=u-boot.bin of=/dev/sdb bs=512 seek=2
While partitioning the SD card, make sure that you keep enough room for the u-boot image, so start your first bootable partition at, let's say, a 1 MB offset.
You can simply copy your uImage and environment variables (uEnv.txt or boot.scr) with the cp command.
For the rootfs too, you can follow the same steps as for the kernel.
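Putting that together, a rough end-to-end sketch (device /dev/sdb and mount points as in the question; partition sizes and file names are examples):
# partition the card, leaving ~1 MB before the first partition for u-boot
sudo parted /dev/sdb mklabel msdos
sudo parted /dev/sdb mkpart primary fat32 1MiB 400MiB
sudo parted /dev/sdb mkpart primary ext3 400MiB 100%
sudo mkfs.vfat /dev/sdb1 && sudo mkfs.ext3 /dev/sdb2
# raw-write u-boot where the i.MX boot ROM expects it (offset 0x400 = sector 2)
sudo dd if=u-boot.bin of=/dev/sdb bs=512 seek=2
# the kernel goes into the FAT32 boot partition as a plain file, not via dd
sudo mkdir -p /mnt/boot /mnt/rootfs
sudo mount /dev/sdb1 /mnt/boot
sudo cp uImage uEnv.txt /mnt/boot/
sudo umount /mnt/boot
# the root filesystem is unpacked into the ext3 partition
sudo mount /dev/sdb2 /mnt/rootfs
sudo tar -xf rootfs.tar -C /mnt/rootfs
sudo umount /mnt/rootfs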
I am running a MapReduce job that takes a small input (~3 MB, a list of integers of size z)
with a sparse matrix cache of size n x m, and basically outputs z sparse vectors of dimension (n x 1). The output here is pretty big (~2 TB). I am running 20 m1.small nodes on Amazon EC2 with S3 storage for input and output.
However, I am getting an IOException: No space left on device.
The Hadoop logs show S3 bytes written, but no files are created.
When I use a smaller input (smaller z), the output is correctly there after the job is done.
Thus, I believe that it runs out of temporary storage.
Is there a way to check where this temporary storage is?
Also, the funny thing is that the log says all the bytes are written to S3, but I see no files and don't know where these bytes are being written.
Thank you for your help.
Example code (I have also tried splitting it into separate map and reduce jobs, with the same error):
public void map(LongWritable key, Text value,
        Mapper<LongWritable, Text, LongWritable, VectorWritable>.Context context)
        throws IOException, InterruptedException
{
    // Assume the input is id \t number
    String[] input = value.toString().split("\t");
    int idx = Integer.parseInt(input[0]) - 1;
    // Some operations to do, but basically outputting a vector;
    // matrix is the n x m sparse matrix loaded from the cache (not shown here)
    // Collect the output
    context.write(new LongWritable(idx), new VectorWritable(matrix.getColumn(idx)));
}
Amazon EMR supports a couple of Hadoop versions; these are the default values for 0.20.205:
hadoop.tmp.dir - /tmp/hadoop-${user.name} - A base for other temporary directories.
mapred.local.dir - ${hadoop.tmp.dir}/mapred/local - The local directory where MapReduce stores intermediate data files. May be a comma-separated list of directories on different devices in order to spread disk i/o. Directories that do not exist are ignored.
mapred.temp.dir - ${hadoop.tmp.dir}/mapred/temp - A shared directory for temporary files.
Run du --max-depth=7 /home/xyz | sort -n (replacing /home/xyz with your hadoop.tmp.dir) and check which directory is occupying the most space. Although hadoop.tmp.dir says 'temporary', it stores system and data files as well.
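A minimal sketch of that check, assuming the default hadoop.tmp.dir of /tmp/hadoop-${user.name} quoted above (adjust the path to your configuration) and run on each task node:
df -h /tmp                                               # free space on the device holding the tmp dir
du --max-depth=3 /tmp/hadoop-$(whoami) | sort -n | tail  # which subdirectories are filling it up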
I am working on an embedded ARM9 development board, and I want to rearrange my NAND partitions. Can anybody tell me how to do that?
In my u-boot shell, if I give the command mtdparts, it gives the following information:
Boardcon> mtdparts
device nand0 <nandflash0>, # parts = 7
#: name size offset mask_flags
0: bios 0x00040000 0x00000000 0
1: params 0x00020000 0x00040000 0
2: toc 0x00020000 0x00060000 0
3: eboot 0x00080000 0x00080000 0
4: logo 0x00100000 0x00100000 0
5: kernel 0x00200000 0x00200000 0
6: root 0x03c00000 0x00400000 0
active partition: nand0,0 - (bios) 0x00040000 @ 0x00000000
defaults:
mtdids : nand0=nandflash0
mtdparts: mtdparts=nandflash0:256k@0(bios),128k(params),128k(toc),512k(eboot),1024k(logo),2m(kernel),-(root)
The kernel boot messages show the following:
Creating 3 MTD partitions on "NAND 64MiB 3,3V 8-bit":
0x000000000000-0x000000040000 : "Boardcon_Board_uboot"
0x000000200000-0x000000400000 : "Boardcon_Board_kernel"
0x000000400000-0x000003ff8000 : "Boardcon_Board_yaffs2"
Can anybody please explain the relation between these two messages? And which one, the kernel or u-boot, is responsible for creating partitions on the NAND flash? As far as I know the kernel does not create partitions on each boot, so why the message "Creating 3 MTD partitions"?
For flash devices, either NAND or NOR, there is no partition table on the device itself. That is, you can't read the device in a flash reader and find some table that indicates how many partitions are on the device and where each partition begins and ends. There is only an undifferentiated sequence of blocks. This is a fundamental difference between MTD flash devices and devices such as disks or FTL devices such as MMC.
The partitioning of the flash device is therefore in the eyes of the beholder, that is, either U-Boot or the kernel, and the partitions are "created" when the beholder runs. That's why you see the message Creating 3 MTD partitions. It reflects the fact that the flash partitions really only exist in the MTD system of the running kernel, not on the flash device itself.
This leads to a situation in which U-Boot and the kernel can have different definitions of the flash partitions, which is apparently what has happened in the case of the OP.
In U-Boot, you define the flash partitions in the mtdparts environment variable. In the Linux kernel, the flash partitions are defined in the following places:
In older kernels (e.g. 2.6.35 for i.MX28) the flash partitioning could be hard-coded in gpmi-nfc-mil.c or other driver source code (what a bummer!).
In the newer mainline kernels with device tree support, you can define the MTD partitions in the device tree.
In the newer kernels there is usually support for kernel command line partition definition using a command line like root=/dev/mmcblk0p2 rootwait console=ttyS2,115200 mtdparts=nand:6656k(all),1m(squash),-(jffs2)
The type of partitioning support that you have in the kernel therefore depends on the type of flash you are using, whether its driver supports kernel command line parsing, and whether your kernel has device tree support.
In any event, there is an inherent risk of conflict between the U-Boot and kernel partitioning of the flash. Therefore, my recommendation is to define the flash partitions in the U-Boot mtdparts variable and to pass this to the kernel on the U-Boot kernel command line, assuming that your kernel supports this option.
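As a sketch, using the mtdids/mtdparts values already shown by this board's mtdparts output (the console= and root= values are only placeholders for whatever your existing bootargs contain):
Boardcon> setenv mtdids nand0=nandflash0
Boardcon> setenv mtdparts mtdparts=nandflash0:256k@0(bios),128k(params),128k(toc),512k(eboot),1024k(logo),2m(kernel),-(root)
Boardcon> setenv bootargs console=ttySAC0,115200 root=/dev/mtdblock6 rootfstype=yaffs2 ${mtdparts}
Boardcon> saveenv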
You can set the mtdparts environment variable to do so in u-boot, but the kernel only uses it if you pass it on the kernel boot command line; otherwise it will fall back to the NAND partition structure in the kernel source code for your platform, which in this case is the default of 3 MTD partitions.