I'm having a very hard time understanding the GDT (Global Descriptor Table) in JOS (xv6-rev7).
For example:
.word (((lim) >> 12) & 0xffff), ((base) & 0xffff);
Why shift right 12? Why AND 0xffff?
What do these numbers mean?
What does the formula mean?
Can anyone give me some resources or tutorials or hints?
Here are the two code snippets relevant to my problem.
1st Part
0654 #define SEG_NULLASM \
0655 .word 0, 0; \
0656 .byte 0, 0, 0, 0
0657
0658 // The 0xC0 means the limit is in 4096-byte units
0659 // and (for executable segments) 32-bit mode.
0660 #define SEG_ASM(type,base,lim) \
0661 .word (((lim) >> 12) & 0xffff), ((base) & 0xffff); \
0662 .byte (((base) >> 16) & 0xff), (0x90 | (type)), \
0663 (0xC0 | (((lim) >> 28) & 0xf)), (((base) >> 24) & 0xff)
0664
0665 #define STA_X 0x8 // Executable segment
0666 #define STA_E 0x4 // Expand down (non-executable segments)
0667 #define STA_C 0x4 // Conforming code segment (executable only)
0668 #define STA_W 0x2 // Writeable (non-executable segments)
0669 #define STA_R 0x2 // Readable (executable segments)
0670 #define STA_A 0x1 // Accessed
2nd Part
8480 # Bootstrap GDT
8481 .p2align 2 # force 4 byte alignment
8482 gdt:
8483 SEG_NULLASM # null seg
8484 SEG_ASM(STA_X|STA_R, 0x0, 0xffffffff) # code seg
8485 SEG_ASM(STA_W, 0x0, 0xffffffff) # data seg
8486
8487 gdtdesc:
8488 .word (gdtdesc - gdt - 1) # sizeof(gdt) - 1
8489 .long gdt # address gdt
The complete part: http://pdos.csail.mit.edu/6.828/2012/xv6/xv6-rev7.pdf
Well, it isn't really a formula at all. The limit is shifted twelve bits to the right, which is equivalent to dividing it by 2^12 = 4096, and 4096 bytes is the granularity of the segment limit when the G bit is set (in your code the G bit is part of the 0xC0 constant the macro uses). Whenever an address is accessed through the corresponding selector, only its upper 20 bits are compared with the limit, and if they are greater a #GP is raised. Also note that standard pages are 4 KB in size, so an address that exceeds the limit by less than 4 KB still lands in the last page covered by the selector's limit. The ANDing is there partly to suppress compiler warnings about overflow, since 0xFFFF is the maximal value that fits in a single word (16 bits), which is what .word emits.
The same applies to the other shifts and AND masks: in the other expressions the values are shifted further to extract the remaining parts of the base and limit.
The structure of a GDT descriptor is described above.
(((lim) >> 12) & 0xffff) corresponds to the Segment Limit field (bits 0-15). The right shift means the minimal unit is 2^12 bytes (the granularity of the GDT entry's limit); & 0xffff keeps the lower 16 bits of (lim) >> 12, which fit into the lowest 16 bits of the GDT descriptor.
The rest of the 'formula' is the same.
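As a cross-check, here is a small C sketch of the same bit packing (the helper name seg_asm_bytes is made up for illustration, it is not part of xv6). It builds the eight descriptor bytes exactly as SEG_ASM lays them out and prints them for the code segment SEG_ASM(STA_X|STA_R, 0x0, 0xffffffff):

#include <stdio.h>
#include <stdint.h>

/* Illustrative helper (not from xv6): pack type/base/limit into the
   eight descriptor bytes the same way the SEG_ASM macro does. */
static void seg_asm_bytes(uint8_t d[8], uint8_t type, uint32_t base, uint32_t lim) {
    uint16_t limit_15_0 = (lim >> 12) & 0xffff;          /* first .word: limit bits 15..0, in 4 KB units */
    uint16_t base_15_0  = base & 0xffff;                 /* second .word: base bits 15..0 */
    d[0] = limit_15_0 & 0xff;   d[1] = limit_15_0 >> 8;  /* .word is emitted little-endian */
    d[2] = base_15_0 & 0xff;    d[3] = base_15_0 >> 8;
    d[4] = (base >> 16) & 0xff;                          /* base bits 23..16 */
    d[5] = 0x90 | type;                                  /* P=1, DPL=0, S=1, plus the type bits */
    d[6] = 0xC0 | ((lim >> 28) & 0xf);                   /* G=1, D/B=1, plus limit bits 19..16 */
    d[7] = (base >> 24) & 0xff;                          /* base bits 31..24 */
}

int main(void) {
    uint8_t d[8];
    seg_asm_bytes(d, 0x8 | 0x2, 0x0, 0xffffffff);        /* STA_X|STA_R, base 0, limit 4 GB */
    for (int i = 0; i < 8; i++)
        printf("%02x ", d[i]);
    printf("\n");                                        /* prints: ff ff 00 00 00 9a cf 00 */
    return 0;
}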
Here is a good resource for learning about the GDT descriptor.
For an iCE40 1k device, the following is a snippet from the output of the command "iceunpack -vv example.bin".
I cannot understand why there are 332 x 144 bits.
My understanding from [1] is that CRAM BLOCK[0] starts at logic tile (1,1) and should contain:
48 logic tiles, each 54x16
14 IO tiles, each 18x16
How is the "332 x 144" calculated?
Where are the IO tile and logic tile bits mapped within the CRAM BLOCK[0] bits?
e.g., which bits of CRAM BLOCK[0] indicate the bits for logic tile (1,1) and which indicate the bits for IO tile (0,1)?
Set bank to 0.
Next command at offset 26: 0x01 0x01
CRAM Data [0]: 332 x 144 bits = 47808 bits = 5976 bytes
Next command at offset 6006: 0x11 0x01
[1]. http://www.clifford.at/icestorm/format.html
Thanks.
Height = 9 x 16 = 144 (1 I/O tile and 8 logic tiles).
Width = 18 + 42 + 5 x 54 = 330 (1 I/O tile, 1 RAM tile and 5 logic tiles), plus "two zero bytes" = 332.
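To make the arithmetic explicit, here is a tiny C sketch of the same calculation (the tile widths are the ones quoted above, taken from the icestorm documentation, so treat them as assumptions):

#include <stdio.h>

int main(void) {
    /* Column widths per tile type, as used in the answer above. */
    int io_cols = 18, ram_cols = 42, logic_cols = 54;
    int rows_per_tile = 16;

    int height = 9 * rows_per_tile;                    /* 1 I/O tile + 8 logic tiles = 144 rows */
    int width  = io_cols + ram_cols + 5 * logic_cols   /* 1 I/O + 1 RAM + 5 logic tiles = 330 */
               + 2;                                    /* plus the "two zero bytes" = 332 */

    printf("CRAM bank: %d x %d = %d bits\n", width, height, width * height);  /* 332 x 144 = 47808 */
    return 0;
}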
So, I am working on a program in ARM that takes a bunch of numbers from a file and tells if they are even or odd.
The problem is that I know how to multiply by 0.5, but I don't know how to express something like this high-level statement in ARM:
if (A / 2 == 0)
print even
else
print odd
Here's what I have in terms of code:
#open input file
ldr r0,=FileName # set Name for input file
mov r1,#0 # mode is input
swi SWI_Open # open file for input
bcs InFileError # if error?
ldr r1,=InFileHandle # load input file handle
str r0,[r1] # save the file handle
#read integers from input file
NUMBERS:
ldr r0,=InputFileHandle # load input file handle
ldr r0,[r0]
swi SWI_RdInt # read the integer into R0
bcs EofReached # Check Carry-Bit (C): if= 1 then EOF reached
#multiplication by 0.5 to test for odd or even
MUL R2 R0 0.5
#what is the test in ARM
#for ( R0 / 0.5 ) == a multiple of 1?
B NUMBERS
LOOP:
#end of program
Message1: .asciz"Hello World!"
EOL: .asciz "\n"
NewL: .ascii "\n"
Blank: .ascii " "
FileName: .asciz"input.txt"
.end
So I think the first two parts, opening the input file and reading the integers, work. What I don't know is how to test the condition that a number is divisible by 2. My idea was to multiply by 0.5: if the result has nothing after the decimal point, the number is even; if it does, it is odd. Is that right?
A brief answer: you don't need to multiply by 0.5 or anything like that. You just need to check the least significant bit (LSB) of the value: it will be 0 for even numbers and 1 for odd numbers.
Update: your "C" code is also wrong. You want to test A % 2, not A / 2.
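To make that concrete, here is the same check sketched in C (not the OP's program): testing the least significant bit with a bitwise AND gives the same answer as A % 2 for non-negative values. In ARM assembly this maps to something like TST R0, #1 followed by BEQ to the "even" case.

#include <stdio.h>

int main(void) {
    int a = 7;            /* imagine this was read from the file */
    if (a & 1)            /* LSB set   -> odd  (equivalent to a % 2 != 0 for a >= 0) */
        printf("odd\n");
    else                  /* LSB clear -> even */
        printf("even\n");
    return 0;
}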
What is the developer doing, and why, when he adds the hex value 0x0000000F and then ANDs with its bitwise NOT in this line:
size_t bytesPerRow = ((width * 4) + 0x0000000F) & ~0x0000000F;
He comments that "16 byte aligned is good". What does he mean?
- (CGContextRef)createBitmapContext {
    CGRect boundingBox = CGPathGetBoundingBox(_mShape);
    size_t width = CGRectGetWidth(boundingBox);
    size_t height = CGRectGetHeight(boundingBox);
    size_t bitsPerComponent = 8;
    size_t bytesPerRow = ((width * 4) + 0x0000000F) & ~0x0000000F; // 16 byte aligned is good
ANDing with ~0x0000000F = 0xFFFFFFF0 (aka -16) rounds down to a multiple of 16, simply by resetting those bits that could make it anything other than a multiple of 16 (the 8's, 4's, 2's and 1's).
Adding 15 (0x0000000F) first makes it round up instead of down.
The purpose of size_t bytesPerRow = ((width * 4) + 0x0000000F) & ~0x0000000F; is to round the value up to a multiple of 16 bytes.
The goal is to set bytesPerRow to be the smallest multiple of 16 that is capable of holding a row of data. This is done so that a bitmap can be allocated where every row address is 16 byte aligned, i.e. a multiple of 16. There are many possible benefits to alignment, including optimizations that take advantage of it. Some APIs may also require alignment.
The code sets the 4 least significant bits to zero. If the value is an address it will be on an even 16 byte boundary, "16 byte aligned".
This is a one's complement, so
~0x0000000F
becomes
0xFFFFFFF0
and ANDing it with another value clears the 4 least significant bits.
This is the kind of thing we used to do all the time "back in the day"!
He's adding 0xf, and then masking out the lower 4 bits (& ~0xf), to make sure the value is rounded up. If he didn't add the 0xf, it would round down.
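Put generally, rounding n up to a multiple of a power-of-two alignment a is (n + a - 1) & ~(a - 1), which is exactly what the original line does with a = 16. A minimal sketch (the helper name align_up is made up for illustration):

#include <stdio.h>
#include <stddef.h>

/* Round n up to the next multiple of a, where a is a power of two. */
static size_t align_up(size_t n, size_t a) {
    return (n + a - 1) & ~(a - 1);
}

int main(void) {
    size_t width = 30;                                /* example width in pixels */
    size_t bytesPerRow = align_up(width * 4, 16);
    printf("%zu -> %zu\n", width * 4, bytesPerRow);   /* 120 -> 128 */
    return 0;
}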
I'm new to the iOS and its C underpinnings, but not to programming in general. My dilemma is this. I'm implementing an echo effect in a complex AudioUnits based application. The application needs reverb, echo, and compression, among other things. However, the echo only works right when I use a particular AudioStreamBasicDescription format for the audio samples generated in my app. This format however doesn't work with the other AudioUnits.
While there are other ways to solve this problem, fixing the bit-twiddling in the echo algorithm might be the most straightforward approach.
The AudioStreamBasicDescription that works with echo has an mFormatFlags of kAudioFormatFlagsAudioUnitCanonical. Its specifics are:
AudioUnit Stream Format (ECHO works, NO AUDIO UNITS)
Sample Rate: 44100
Format ID: lpcm
Format Flags: 3116 = kAudioFormatFlagsAudioUnitCanonical
Bytes per Packet: 4
Frames per Packet: 1
Bytes per Frame: 4
Channels per Frame: 2
Bits per Channel: 32
Set ASBD on input
Set ASBD on output
au SampleRate rate: 0.000000, 2 channels, 12 formatflags, 1819304813 mFormatID, 16 bits per channel
The stream format that works with AudioUnits is the same except for the mFormatFlags: kAudioFormatFlagIsFloat | kAudioFormatFlagsNativeEndian | kAudioFormatFlagIsPacked | kAudioFormatFlagIsNonInterleaved. Its specifics are:
AudioUnit Stream Format (NO ECHO, AUDIO UNITS WORK)
Sample Rate: 44100
Format ID: lpcm
Format Flags: 41
Bytes per Packet: 4
Frames per Packet: 1
Bytes per Frame: 4
Channels per Frame: 2
Bits per Channel: 32
Set ASBD on input
Set ASBD on output
au SampleRate rate: 44100.000000, 2 channels, 41 formatflags, 1819304813 mFormatID, 32 bits per channel
In order to create the echo effect I use two functions that bit-shift sample data into SInt16 space and back. As I said, this works for the kAudioFormatFlagsAudioUnitCanonical format, but not the other. When it fails, the sounds are clipped and distorted, but they are there. I think this indicates that the difference between these two formats is how the data is arranged in the Float32.
// convert sample vector from fixed point 8.24 to SInt16
void fixedPointToSInt16( SInt32 * source, SInt16 * target, int length ) {
int i;
for(i = 0;i < length; i++ ) {
target[i] = (SInt16) (source[i] >> 9);
//target[i] *= 0.003;
}
}
As you can see, I tried modifying the amplitude of the samples to get rid of the clipping; clearly that didn't work.
// convert sample vector from SInt16 to fixed point 8.24
void SInt16ToFixedPoint( SInt16 * source, SInt32 * target, int length ) {
int i;
for(i = 0;i < length; i++ ) {
target[i] = (SInt32) (source[i] << 9);
if(source[i] < 0) {
target[i] |= 0xFF000000;
}
else {
target[i] &= 0x00FFFFFF;
}
}
}
If I can determine the difference between kAudioFormatFlagIsFloat | kAudioFormatFlagsNativeEndian | kAudioFormatFlagIsPacked | kAudioFormatFlagIsNonInterleaved and kAudioFormatFlagsAudioUnitCanonical, then I can modify the above methods accordingly. But I'm not sure how to figure that out. The CoreAudio documentation is enigmatic, but from what I've read there, and gleaned from the CoreAudioTypes.h file, both mFormatFlags values refer to the same fixed-point 8.24 format. Clearly something is different, but I can't figure out what.
Thanks for reading through this long question, and thanks in advance for any insight you can provide.
kAudioFormatFlagIsFloat means that the buffer contains floating point values. If mBitsPerChannel is 32 then you are dealing with float data (also called Float32), and if it is 64 you are dealing with double data.
kAudioFormatFlagsNativeEndian refers to the fact that the data in the buffer matches the endianness of the processor, so you don't have to worry about byte swapping.
kAudioFormatFlagIsPacked means that every bit in the data is significant. For example, if you store 24 bit audio data in 32 bits, this flag will not be set.
kAudioFormatFlagIsNonInterleaved means that each individual buffer consists of one channel of data. It is common for audio data to be interleaved, with the samples alternating between L and R channels: LRLRLRLR. For DSP applications it is often easier to deinterleave the data and work on one channel at a time.
I think in your case the error is that you are treating floating point data as fixed point. Float data is generally scaled to the interval [-1, +1). To convert float to SInt16 you need to multiply each sample by 32768 (1 << 15) and then clip the result to the interval [-32768, 32767].
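For the float format, the conversion pair would therefore look something like the sketch below. This assumes the buffers hold Float32 samples in [-1, +1); the function names mirror the ones in the question but are made up, not part of any Apple API, and the typedefs are local so the snippet stands alone (on iOS they come from MacTypes.h).

#include <stdint.h>

typedef float   Float32;   /* local stand-ins for the MacTypes.h definitions */
typedef int16_t SInt16;

/* Sketch: convert Float32 samples in [-1, +1) to SInt16, with clipping. */
void floatToSInt16(const Float32 *source, SInt16 *target, int length) {
    for (int i = 0; i < length; i++) {
        Float32 scaled = source[i] * 32768.0f;
        if (scaled >  32767.0f) scaled =  32767.0f;
        if (scaled < -32768.0f) scaled = -32768.0f;
        target[i] = (SInt16)scaled;
    }
}

/* Sketch: convert SInt16 samples back to Float32 in [-1, +1). */
void SInt16ToFloat(const SInt16 *source, Float32 *target, int length) {
    for (int i = 0; i < length; i++) {
        target[i] = source[i] / 32768.0f;
    }
}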
I am facing a task where one of my hlsl shaders require multiple texture lookups per pixel. My 2d textures are fixed to 256*256, so two bytes should be sufficient to address any given texel given this constraint. My idea is then to put two xy-coordinates in each float, giving me eight xy-coordinates in pixel space when packed in a Vector4 format image. These eight coordinates are then used to sample another texture(s).
The reason for doing this is to save graphics memory and to try to optimize processing time, since then I don't require multiple texture lookups.
By the way: Does anyone know if encoding/decoding 16 bytes from/to 4 floats using 1 sampling is slower than 4 samplings with unencoded data?
Edit: This is for Shader Model 3
If you are targeting SM 4.0-SM 5.0 you can use Binary Casts and Bitwise operations:
uint4 myUInt4 = asuint(myFloat4);
uint x0 = myUInt4.x & 0x000000FF; //extract two xy-coordinates (x0,y0), (x1,y1)
uint y0 = (myUInt4.x & 0x0000FF00) >> 8;
uint x1 = (myUInt4.x & 0x00FF0000) >> 16;
uint y1 = (myUInt4.x & 0xFF000000) >> 24;
//repeat operation for .y .z and .w coordinates
For previous Shader Models, I think it could be more complicated since it depends on FP precision.
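For completeness, here is a CPU-side sketch in C of the packing that the asuint() decode above assumes: two (x, y) byte pairs per 32-bit word, so four such words fill one uint4/Vector4 texel. The function name pack_two_xy is made up for illustration; how the bits reach the GPU (an integer texture format versus reinterpreting them as floats) is a separate concern, and on SM3 the float route is exactly where the precision issues mentioned above come in.

#include <stdint.h>
#include <stdio.h>

/* Pack two texel coordinates (each 0..255) into one 32-bit word,
   mirroring the HLSL decode above: x0 | y0<<8 | x1<<16 | y1<<24. */
static uint32_t pack_two_xy(uint8_t x0, uint8_t y0, uint8_t x1, uint8_t y1) {
    return (uint32_t)x0 | ((uint32_t)y0 << 8) | ((uint32_t)x1 << 16) | ((uint32_t)y1 << 24);
}

int main(void) {
    uint32_t w = pack_two_xy(12, 34, 200, 255);
    printf("x0=%u y0=%u x1=%u y1=%u\n",
           (unsigned)(w & 0xFF), (unsigned)((w >> 8) & 0xFF),
           (unsigned)((w >> 16) & 0xFF), (unsigned)(w >> 24));   /* 12 34 200 255 */
    return 0;
}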