Use srec_cat to join three binaries and fill holes

I have three binaries for specific memory addresses that I want to combine with srec_cat filling the holes with 0xFF.
bootloader.bin —> 0x1000
conf.bin —> 0x8000
app.bin —> 0x10000
Memory map
|- pad w/ 0xFF -|- *bootloader* ~~~ pad w/ 0xFF -|- *conf* ~~~ pad w/ 0xFF -|- *app* ~~~|
0               0x1000                           0x8000                     0x10000
~~~ signifies a "fluid" boundary i.e. the binary to the left of it doesn't have a fixed size.
CLI arguments
I am a bit lost between the -fill, -binary and -offset options that I read about on http://srecord.sourceforge.net/man/man1/srec_examples.html#BINARY%20FILES. Is there a way to tell srec_cat to fill anything between 0x1000 and 0x8000 that is not occupied by bootloader.bin (regardless of what size the .bin actually has)?

I tried this myself and I believe this will do what you want.
srec_cat bootloader.bin -Binary -offset 0x00001000 -fill 0xff 0x00000000 0x00008000 \
         conf.bin -Binary -offset 0x00008000 -fill 0xff 0x00008000 0x00010000 \
         app.bin -Binary -offset 0x00010000 \
         -o combined.bin -Binary
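As a sanity check, the join-with-fill behaviour can be emulated in a few lines of Python (using dummy stand-ins for the three .bin files, since the actual contents don't matter for checking the layout):

```python
def combine(parts, fill=0xFF):
    """parts: iterable of (offset, data) pairs; holes are padded with fill."""
    end = max(off + len(data) for off, data in parts)
    image = bytearray([fill] * end)
    for off, data in parts:
        image[off:off + len(data)] = data
    return bytes(image)

# Dummy stand-ins for bootloader.bin / conf.bin / app.bin
combined = combine([(0x1000, b"BOOT"), (0x8000, b"CONF"), (0x10000, b"APP")])
assert combined[:0x1000] == b"\xff" * 0x1000   # pad before bootloader
assert combined[0x1000:0x1004] == b"BOOT"
assert combined[0x8000:0x8004] == b"CONF"
assert combined[0x10000:] == b"APP"
```

Comparing a hex dump of combined.bin against this kind of model is a quick way to confirm the -fill ranges cover exactly the regions you intended.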

Understanding Organization of the CRAM bits in bitstream .bin file

For an iCE40 1k device, the following is a snippet from the output of the command "iceunpack -vv example.bin":
Set bank to 0.
Next command at offset 26: 0x01 0x01
CRAM Data [0]: 332 x 144 bits = 47808 bits = 5976 bytes
Next command at offset 6006: 0x11 0x01
I cannot understand why there are 332 x 144 bits.
My understanding from [1] is that CRAM BLOCK[0] starts at the logic tile (1,1), and it should contain:
48 logic tiles, each 54x16,
14 IO tiles, each 18x16
How is the "332 x 144" calculated?
Where are the IO tile and logic tile bits mapped in the CRAM BLOCK[0] bits?
For example, which bits of CRAM BLOCK[0] hold the bits for logic tile (1,1) and for IO tile (0,1)?
[1]. http://www.clifford.at/icestorm/format.html
Thanks.
Height = 9 x 16 = 144 (1 I/O tile and 8 logic tiles)
Width = 18 + 42 + 5 x 54 = 330 (1 I/O tile, 1 RAM tile and 5 logic tiles), plus the "two zero bytes" = 332
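A quick check of that arithmetic against the iceunpack output (treating the "+2" as the two padding columns mentioned above):

```python
# Check the claimed bank-0 dimensions against what iceunpack reports
height = 9 * 16               # 1 I/O tile row + 8 logic tile rows, 16 bits each
width = 18 + 42 + 5 * 54 + 2  # 1 I/O tile + 1 RAM tile + 5 logic tiles + padding
assert (width, height) == (332, 144)
assert width * height == 47808        # bits, as iceunpack prints
assert width * height // 8 == 5976    # bytes, as iceunpack prints
```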

How is this crc calculated correctly?

I'm looking for help. The chip I'm using via SPI (MAX22190) specifies:
CRC polynomial: x^5 + x^4 + x^2 + x^0
CRC is calculated using the first 19 data bits padded with the 5-bit initial word 00111.
The 5-bit CRC result is then appended to the original data bits to create the 24-bit SPI data frame.
The CRC result I calculated with multiple tools is: 0x18
However, the chip flags a CRC error on this. It expects: 0x0F
Can anybody tell me where my calculations are going wrong?
My input data (19 data bits) is:
19-bit data:
0x04 0x00 0x00
0000 0100 0000 0000 000
24-bit, padded with init value:
0x38 0x20 0x00
0011 1000 0010 0000 0000 0000
=> Data sent by me: 0x38 0x20 0x18
=> Data expected by chip: 0x38 0x20 0x0F
The CRC algorithm is explained here.
I think your error comes from the 00111 padding: it must be appended on the right side of the data instead of the left.
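To check that claim, here is a short mod-2 long-division sketch of my own (polynomial bits 110101, i.e. x^5 + x^4 + x^2 + 1) that reproduces both the chip's 0x0F and your 0x18:

```python
def crc5_remainder(value, nbits, poly=0b110101):
    """5-bit remainder of an nbits-wide message divided by poly (mod 2)."""
    for i in range(nbits - 1, 4, -1):   # cancel every bit above degree 5
        if (value >> i) & 1:
            value ^= poly << (i - 5)
    return value

data = 0b0000010000000000000            # the 19 data bits from the question

# Chip's scheme: 00111 appended on the RIGHT, remainder of the 24-bit word
assert crc5_remainder((data << 5) | 0b00111, 24) == 0x0F

# Your scheme: 00111 on the LEFT (acting as a CRC init value) plus the
# usual 5 appended zero bits -- this is what yields the wrong 0x18
assert crc5_remainder(((0b00111 << 19) | data) << 5, 29) == 0x18
```

So the tools you used treated 00111 as a conventional shift-register init value, while the MAX22190 wants it appended after the data before taking the remainder.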

ImageMagick copy the iOS 7 blur effect

I have an image that I want to blur like on the iOS 7, see image below:
I'm not sure what combination of transformations I need to execute to get the same result. I tried something very basic so far (not sure what I'm doing), but it doesn't produce the effect:
convert {$filename} -filter Gaussian -define filter:sigma=2.5 \
-blur 0x40 {$newFilename}
The above code gets executed by php exec function.
If I take this as background.png,
and a plain grey rgb(200,200,200) image with a couple of black and white bits and pieces on it as foreground.png (since I don't have any iPhone grabs of the slide-up menu thingy):
convert background.png \
\( +clone -gravity south -crop 360x450+0+0 \
-filter Gaussian -define filter:sigma=2.5 -blur 0x40 \) \
-composite \
\( foreground.png -matte -channel a -fx "(u.r<0.1||u.r>0.9)?1:0.3" \) \
-composite result.png
So, I basically clone the background, select the bottom part with the -crop and blur it, then composite it onto the real background. Then I take the foreground and anywhere it is not black or white, I set it to 30% transparent (so as not to fade the black and white aspects). Then I composite that on top of the background, which by now already has the lower part blurred.
It's probably not 100% but you can diddle around with the numbers and techniques till you achieve Apple-y perfection :-)

Is it possible to distinguish grayscale from (scanned) monochrome within a shell script?

I have several thousand images that I want to run various IM commands on depending on which of three categories they fall into:
Color (often with bright colors)
Grayscale (scanned from paper, the "white" often has a yellowish tinge)
Monochrome (scanned, with the yellowish tinge, as above)
Can this be sorted out from a shell script?
Color Example #1
Grayscale Example #1
Monochrome Examples #1 and #2
I would say that the Hue and Saturation would be good discriminants for the colour image especially. A mono or grayscale image is very unsaturated, so its mean saturation will tend to be low whereas it will be higher for a colour image. Also, the hue (basically colour) of a colour image will tend to vary a lot between the different colours whereas the hue will tend to be a fairly constant value for a grey or mono image, so the amount of variation in the Hue should be a good measure - i.e. its standard deviation.
We can calculate the mean saturation using ImageMagick like this:
convert image.png -colorspace HSL -channel S -separate -format "%[mean]" info:
and the standard deviation of the Hue like this:
convert image.png -colorspace HSL -channel H -separate -format "%[standard-deviation]" info:
So, if we put all that together in a bash script and run it over your images we get this:
#!/bin/bash
for i in colour.png grey.png mono.png; do
SatMean=$(convert $i -colorspace HSL -channel S -separate -format "%[mean]" info:)
HueStdDev=$(convert $i -colorspace HSL -channel H -separate -format "%[standard-deviation]" info:)
echo $i: Mean saturation: $SatMean, Hue Std-Dev: $HueStdDev
done
Output
colour.png: Mean saturation: 17,807.9, Hue Std-Dev: 16,308.3
grey.png: Mean saturation: 7,019.67, Hue Std-Dev: 2,649.01
mono.png: Mean saturation: 14,606.1, Hue Std-Dev: 1,097.36
And it seems to differentiate quite well - I have added the thousands separator for clarity. The range of the values is based on your IM Quantisation level - mine is Q16 so the range is 0-65535.
Differentiating the mono from the grey is harder. Essentially, in the mono image you have a more starkly bi-modal histogram, and in the grey image, you have a more continuous histogram. We can plot the histograms like this:
convert colour.png histogram:colorhist.png
convert grey.png histogram:greyhist.png
convert mono.png histogram:monohist.png
Updated
To differentiate between the greyscale and mono, I want to look at the pixels in the middle of the histogram, basically ignoring blacks (and near blacks) and whites (and near whites). So I can do this to set all blacks and near blacks and whites and near whites to fully black:
convert image.png \
-colorspace gray \
-contrast-stretch 1% \
-black-threshold 20% \
-white-threshold 80% -fill black -opaque white \
out.png
If I now clone that image and set all the pixels in the clone to black, I can then calculate the difference between the histogram-chopped image and the black one
convert image.png \
-colorspace gray \
-contrast-stretch 1% \
-black-threshold 20% \
-white-threshold 80% -fill black -opaque white \
\( +clone -evaluate set 0 \) \
-metric ae -compare -format "%[distortion]" info:
Now, if I calculate the total number of pixels in the image, I can derive the percentage of pixels that are in the midtones and use this as a measure of whether the image is very grey or lacking in midtones.
#!/bin/bash
for i in colour.png grey.png mono.png; do
SatMean=$(convert $i -colorspace HSL -channel S -separate -format "%[mean]" info:)
HueStdDev=$(convert $i -colorspace HSL -channel H -separate -format "%[standard-deviation]" info:)
NumMidTones=$(convert $i -colorspace gray -contrast-stretch 1% -black-threshold 20% -white-threshold 80% -fill black -opaque white \( +clone -evaluate set 0 \) -metric ae -compare -format "%[distortion]" info:)
NumPixels=$(convert $i -ping -format "%[fx:w*h]" info:)
PctMidTones=$((NumMidTones*100/NumPixels))
echo $i: Mean saturation: $SatMean, Hue Std-Dev: $HueStdDev, PercentMidTones: $PctMidTones
done
Output
colour.png: Mean saturation: 17807.9, Hue Std-Dev: 16308.3, PercentMidTones: 70
grey.png: Mean saturation: 7019.67, Hue Std-Dev: 2649.01, PercentMidTones: 39
mono.png: Mean saturation: 14606.1, Hue Std-Dev: 1097.36, PercentMidTones: 27
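If it helps, the three measures can be combined into a single decision function. The thresholds below are only guesses derived from the three sample values above (scaled for a Q16 build, range 0-65535), so you would want to tune them against a batch of your own scans:

```python
def classify(sat_mean, hue_stddev, pct_midtones, qrange=65535):
    """Classify an image from the three ImageMagick measures above.
    Thresholds are illustrative guesses based on the sample output."""
    # A colour image has both high saturation and widely varying hue.
    if sat_mean > 0.15 * qrange and hue_stddev > 0.10 * qrange:
        return "colour"
    # Grey and mono scans both have near-constant hue;
    # the share of midtone pixels separates them.
    return "grayscale" if pct_midtones > 33 else "monochrome"

# The three sample images from the output above
assert classify(17808, 16308, 70) == "colour"
assert classify(7020, 2649, 39) == "grayscale"
assert classify(14606, 1097, 27) == "monochrome"
```

You could call this from the bash loop by passing the three shell variables to a one-line python -c invocation, or port the same comparisons directly into bash integer arithmetic.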
First of all: Your question's headline is misleading.
"Is it possible to distinguish grayscale from (scanned) monochrome within a shell script?"
Straightforward identify tells color space and bit depth
It is misleading, because all the example images you provide are in fact in 8-bit sRGB colorspace:
identify http://i.stack.imgur.com/lygAE.png \
http://i.stack.imgur.com/H7vBP.png \
http://i.stack.imgur.com/ZOCTK.png
http://i.stack.imgur.com/lygAE.png=>lygAE.png PNG 236x216 236x216+0+0 8-bit sRGB 127KB 0.000u 0:00.000
http://i.stack.imgur.com/H7vBP.png=>H7vBP.png[1] PNG 259x192 259x192+0+0 8-bit sRGB 86.2KB 0.000u 0:00.000
http://i.stack.imgur.com/ZOCTK.png=>ZOCTK.png[2] PNG 264x179 264x179+0+0 8-bit sRGB 86.7KB 0.000u 0:00.000
As you can see, the identify command (part of the ImageMagick suite of commands) can tell you the depth and color space of an image easily.
identify with -format parameter tells specific image properties
You can include the -format parameter with 'percent escapes' in order to get to specific properties only of the image:
%f : image file name
%d : directory component of the image path
%z : image depth
%r : image class and color space
So try this:
identify -format "%f %d : %z %r\n" \
http://i.stack.imgur.com/lygAE.png \
http://i.stack.imgur.com/H7vBP.png \
http://i.stack.imgur.com/ZOCTK.png
Result:
lygAE.png //i.stack.imgur.com : 8 DirectClass sRGB
H7vBP.png //i.stack.imgur.com : 8 DirectClass sRGB
ZOCTK.png //i.stack.imgur.com : 8 DirectClass sRGB
Convert one image to real monochrome
Now to show you what a real "monochrome" image looks like, let's convert one of your samples accordingly:
convert \
-colorspace gray \
http://i.stack.imgur.com/lygAE.png \
+dither \
-colors 2 \
-depth 1 \
bmp3:monochrome.bmp
and
identify -format "%f : %z %r\n" monochrome.bmp http://i.stack.imgur.com/lygAE.png
monochrome.bmp : 1 PseudoClass Gray
lygAE.png : 8 DirectClass sRGB
Here are the respective images:
Telling the number of unique colors
If you have (as you do) all your images in sRGB color space with 8-bit depth, then in theory, each image can have as many as 16,777,216 (about 16.7 million) colors (also called "TrueColor"). However, most actual images do not use the full scope of this spectrum, and the "gray-ish" appearing images will actually use an even smaller number of them.
So ImageMagick has another 'percent escape' to return information about images:
%k : returns the number of unique colors within an image. This is a calculated value. IM has to process the image and analyse every single pixel of it to arrive at this number.
So here is a command:
identify -format "%f - number of unique colors: %k\n" \
http://i.stack.imgur.com/lygAE.png \
http://i.stack.imgur.com/H7vBP.png \
http://i.stack.imgur.com/ZOCTK.png
Results:
lygAE.png - number of unique colors: 47583
H7vBP.png - number of unique colors: 7987
ZOCTK.png - number of unique colors: 5208
As you can see, your image with obvious coloring uses about six times as many unique colors as the "gray-ish" scans do.
However, this is not necessarily so. See for instance this image:
It is color, isn't it?
I generated it with this command:
convert -size 100x100 \
xc:red \
xc:green \
xc:blue \
xc:white \
xc:black \
xc:cyan \
xc:magenta \
xc:yellow \
+append \
out.png
You can even count the number of unique colors by simply looking at it: 8.
Now what does identify tell us about it?
identify \
-format "%f:\n \
-- number of unique colors: %k\n \
-- depth: %z\n \
-- class/space: %r\n \
-- image type: %[type]\n" \
out.png
Result:
out.png:
-- number of unique colors: 8
-- depth: 8
-- class/space: PseudoClass sRGB
-- image type: Palette
So a low number of unique colors does not necessarily prove that the image is "gray-ish"!
You'll have to play with these parameters a bit and see if you can come up with a combination that helps you correctly classify your real-world "thousands of images".
Consider image statistics too
More values you could look at with the help of identify -format %... filename.suffix:
%[gamma] : value of image gamma
%[entropy] : CALCULATED: entropy of image
%[kurtosis] : CALCULATED: kurtosis value statistic of image
%[max] : CALCULATED: maximum value statistic of image
%[mean] : CALCULATED: mean value statistic of image
%[min] : CALCULATED: minimum value statistic of image
%[profile:icc] : ICC profile info
%[profile:icm] : ICM profile info
Last hint: look at the metadata!
Just in case your images were scanned by a device that leaves its own identifying meta data behind: check for them!
The command line tool exiftool is a good utility to do so.

The xv6-rev7 (JOS) GDT

It's very difficult for me to understand GDT (Global Descriptor Table) in JOS (xv6-rev7)
For example
.word (((lim) >> 12) & 0xffff), ((base) & 0xffff);
Why shift right 12? Why AND 0xffff?
What do these number mean?
What does the formula mean?
Can anyone give me some resources or tutorials or hints?
Here are the two relevant snippets of code for my problem.
1st Part
#define SEG_NULLASM \
        .word 0, 0; \
        .byte 0, 0, 0, 0

// The 0xC0 means the limit is in 4096-byte units
// and (for executable segments) 32-bit mode.
#define SEG_ASM(type,base,lim) \
        .word (((lim) >> 12) & 0xffff), ((base) & 0xffff); \
        .byte (((base) >> 16) & 0xff), (0x90 | (type)), \
              (0xC0 | (((lim) >> 28) & 0xf)), (((base) >> 24) & 0xff)

#define STA_X 0x8 // Executable segment
#define STA_E 0x4 // Expand down (non-executable segments)
#define STA_C 0x4 // Conforming code segment (executable only)
#define STA_W 0x2 // Writeable (non-executable segments)
#define STA_R 0x2 // Readable (executable segments)
2nd Part
# Bootstrap GDT
.p2align 2                                # force 4 byte alignment
gdt:
  SEG_NULLASM                             # null seg
  SEG_ASM(STA_X|STA_R, 0x0, 0xffffffff)   # code seg
  SEG_ASM(STA_W, 0x0, 0xffffffff)         # data seg

gdtdesc:
  .word (gdtdesc - gdt - 1)               # sizeof(gdt) - 1
  .long gdt                               # address gdt
The complete part: http://pdos.csail.mit.edu/6.828/2012/xv6/xv6-rev7.pdf
Well, it isn't really a formula at all. The limit is shifted twelve bits to the right, which is equivalent to dividing by 2^12 = 4096, and that is the granularity of a GDT entry when the G bit is set (in your code the G bit is encoded in the 0xC0 constant used by the macro). Whenever an address is accessed through the corresponding selector, only its upper 20 bits are compared with the limit, and if they are greater a #GP is thrown. Also note that standard pages are 4 KB in size, so any address exceeding the limit by less than 4 kilobytes is still covered by the page-granular selector limit. The ANDing is there partly to suppress assembler warnings about overflow, since 0xFFFF is the maximal value for a single word (16 bits).
The same applies to the other shifts and ANDs, where the numbers are shifted further to extract the other parts of the fields.
The structure of the GDT descriptor is shown above.
(((lim) >> 12) & 0xffff) corresponds to Segment Limit (bits 0-15). The right shift means the minimal unit is 2^12 bytes (the granularity of the GDT entry); & 0xffff means we take the lower 16 bits of (lim) >> 12, which fit into the lowest 16 bits of the GDT descriptor.
The rest of the 'formula' works the same way.
Here is good material for learning about the GDT descriptor.
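To see the macro's bit-slicing concretely, here is a small Python sketch of mine (not xv6 code) that mirrors SEG_ASM and emits the eight descriptor bytes for the bootstrap code segment:

```python
# Python model of the SEG_ASM assembler macro (illustrative only)
STA_X, STA_W, STA_R = 0x8, 0x2, 0x2

def seg_asm(seg_type, base, lim):
    """Return the 8 descriptor bytes exactly as SEG_ASM lays them out."""
    words = [(lim >> 12) & 0xFFFF,             # limit bits 0-15, 4096-byte units
             base & 0xFFFF]                    # base bits 0-15
    tail = bytes([(base >> 16) & 0xFF,         # base bits 16-23
                  0x90 | seg_type,             # P=1, DPL=0, S=1 | type
                  0xC0 | ((lim >> 28) & 0xF),  # G=1, 32-bit | limit bits 16-19
                  (base >> 24) & 0xFF])        # base bits 24-31
    return b"".join(w.to_bytes(2, "little") for w in words) + tail

# The xv6 bootstrap code segment: flat 4 GB, executable + readable
print(seg_asm(STA_X | STA_R, 0x0, 0xFFFFFFFF).hex())  # ffff0000009acf00
```

The 0x9A byte is 0x90 | STA_X | STA_R (present, ring 0, code), and 0xCF packs the G and D bits together with the top nibble of the 20-bit limit.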