How to create two separate segments in the Global Descriptor Table (GDT)

I have gone through the basics of the Global Descriptor Table (GDT) and I have successfully written a "GDT.inc" in asm, so that it can easily be included in the bootloader. As a first step I configured the code descriptor and data descriptor to read and write from the first byte up to byte 0xFFFFFFFF in memory (i.e. the whole address space):
; null descriptor
dd 0 ; null descriptor--just fill 8 bytes with zero
dd 0
; code descriptor (right after the null descriptor)
dw 0FFFFh ; limit low
dw 0 ; base low
db 0 ; base middle
db 10011010b ; access
db 11001111b ; granularity
db 0 ; base high
; data descriptor
dw 0FFFFh ; limit low (Same as code)
dw 0 ; base low
db 0 ; base middle
db 10010010b ; access
db 11001111b ; granularity
db 0 ; base high
Now my purpose is to create two separate regions using the GDT. For example, the first 512 B as one region and the next 512 B as another region, leaving the remaining space unused.
What can I do for that?

You can just change the base address and limit fields of each descriptor.
So, in the example you gave:
for the code descriptor
.base = 0x0
.limit = 0x200 // 512 bytes
for the data descriptor
.base = 0x200
.limit = 0x200
Then you have the rest of your memory after the first 1 KB unused.
You can check http://wiki.osdev.org/GDT_Tutorial for more explanation; a sketch of the two descriptor entries is shown below.
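For illustration, here is a minimal sketch of those two descriptors in the same style as the GDT.inc above. It assumes byte granularity (G = 0 in the flags byte), since a 512-byte segment cannot be expressed with the 4 KB page granularity used in the original descriptors, and it encodes each limit as 0x1FF because the limit field holds the last valid offset (512 bytes cover offsets 0x000 to 0x1FF):
; code descriptor: base = 0x0, limit = 0x1FF (first 512 bytes)
dw 01FFh ; limit low (512 bytes - 1)
dw 0 ; base low
db 0 ; base middle
db 10011010b ; access: present, ring 0, code, readable
db 01000000b ; flags: G=0 (byte granularity), D=1 (32-bit), limit bits 16-19 = 0
db 0 ; base high
; data descriptor: base = 0x200, limit = 0x1FF (next 512 bytes)
dw 01FFh ; limit low
dw 0200h ; base low
db 0 ; base middle
db 10010010b ; access: present, ring 0, data, writable
db 01000000b ; flags: G=0, D=1, limit bits 16-19 = 0
db 0 ; base high
With these entries loaded (after the usual null descriptor), any access past offset 0x1FF through either selector should raise a general protection fault, which is what leaves the rest of memory effectively unused.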

SAS - Importing variable-length binary records without delimiters

I have a binary data set with no delimiters and no fixed-length records. I know each record contains 22 bytes of data, then an unknown number of 23-byte blocks, up to 50 blocks. The problem is that it only reads 1 line of 32767 bytes, for a total of 728 obs; I'm expecting 2.7MM output obs. How can I make this read the input file to the end? I've already tried adding an "OBS=" option and an "LRECL=" option to the INFILE line. Adding the "END=" option had no effect on the result.
DATA INFILE.MYDATA (drop= i);
INFILE "&Path./UGLYDATA" end=eof;
INPUT
MY_KEY s370fPD9.
...
OCCURS s370fPD2.
@
;
ARRAY MyData{50} MyData1-MyData50;
...
ARRAY Filler{50} $ Filler1-Filler50;
DO I = 1 TO min(50,OCCURS);
INPUT
MyData{I} s370fPD4.
...
Filler{I} $ebcdic10.
@@
;
End;
RUN;
Relevant Log:
NOTE: 1 record was read from the infile "UGLYDATA".
The minimum record length was 32767.
The maximum record length was 32767.
One or more lines were truncated.
NOTE: SAS went to a new line when INPUT statement reached past the end of a line.
NOTE: The data set INFILE.MYDATA has 728 observations and 356 variables.
NOTE: Compressing data set INFILE.MYDATA decreased size by 47.06 percent.
Compressed is 9 pages; un-compressed would require 17 pages.
NOTE: DATA statement used (Total process time):
real time 2.69 seconds
user cpu time 0.02 seconds
system cpu time 0.11 seconds
memory 1890.40k
OS Memory 10408.00k
Timestamp 12/07/2021 05:17:34 PM
Step Count 1 Switch Count 0
Page Faults 3
Page Reclaims 1028
Page Swaps 0
Voluntary Context Switches 272
Involuntary Context Switches 1226
Block Input Operations 309648
Block Output Operations 2312
Sounds like the file does not consist of lines of text. So try using RECFM=N on your INFILE statement, so that SAS will not be looking for a LINEFEED character (or a CARRIAGE RETURN and LINEFEED combination) to mark the end of the lines.
INFILE "&Path./UGLYDATA" recfm=n ;
If you are unsure what the file contains, just run a simple data step to look at the first few hundred bytes and figure it out. If any of the bytes in a "line" are not printable characters, the LIST statement will include the hex codes for the bytes under the lines when it writes to the SAS log.
data _null_;
INFILE "&Path./UGLYDATA" recfm=f lrecl=100 obs=10 ;
input;
list;
run;
Per @Tom, indeed RECFM=N.
Example:
Create and read back a binary file.
filename foo '%temp%/foo.bin' recfm=n;
data _null_;
file foo;
call streaminit(2021);
filler = repeat('*', 10);
do recnum = 1001 to 1010;
put recnum s370fPD9. @;
put filler $char11. @;
occurs = rand('integer',1,26);
put occurs s370fPD2. @;
do z = 0 to occurs-1;
record = repeat(byte(rank('A')+z), 22);
put record $ebcdic23.;
end;
putlog 'NOTE: ' recnum= occurs=;
end;
stop;
run;
data want;
infile foo;
* read master;
input recnum s370fPD9. filler $char11. occurs s370fPD2.;
* read details;
do index = 1 to occurs;
input content $ebcdic23.;
output;
end;
run;
dm 'vt want';

PIC CAN test code not working as expected

I have created a simple test code for CAN using the PIC18F4580. It consists of 2 nodes sending data to each other, using 11-bit standard IDs for communication. Node-1 is given an ID of 10 and Node-2 is given an ID of 20. I tried to display registers like COMSTAT, TXB0CON and RXB0CON on the LCD; here are the register contents.
COMSTAT = 0x00
TXB0CON = 0x00
RXB0CON = 0x01
When a key is pressed, the node will first show the contents of COMSTAT, TXB0CON and RXB0CON on the LCD in sequence.
Then at the end it will put the message frame (ID = XX, Data = NDx) on the CAN bus.
Node-1 puts ND2 in the data field, D0 ('N'), D1 ('D') and D2 ('2'), with DLC = 3 and ID = 20.
Similarly, Node-2 puts ND1 in the data field, D0 ('N'), D1 ('D') and D2 ('1'), with DLC = 3 and ID = 10 (please refer to the test code given).
Both nodes send their data, but only Node-1 is receiving; it shows ID = 00 instead of 10, and the received data is displayed as garbage on the LCD.
Test code link

FASM - Boot sector on USB doesn't work

First of all, sorry for my bad English, I'm French.
At the moment I'm learning asm with FASM to experiment with boot sector programming.
I have made a simple boot program, compiled it, and written boot.bin to the first sector of my USB drive.
But when I boot from it on my PC or in VirtualBox, the drive isn't found...
Boot sector code:
;=======================================================================
; a simpliest 1.44 bootable image by shoorick ;)
;=======================================================================
_bs equ 512
_st equ 18
_hd equ 2
_tr equ 80
;=======================================================================
org 7C00h
jmp start
nop
;=====================================================
db "HE-HE OS"; ; 8
dw _bs ; bytes per sector
db 1 ; sectors per cluster
dw 1 ; reserved sectors
db 2 ; number of FATs
dw 224 ; root directory entries
dw 2880 ; total sectors
db 0F0h ; media descriptor
dw 9 ; sectors per FAT
dw _st ; sectors per track
dw _hd ; number of heads
dd 0 ; hidden sectors
dd 0 ; large total sectors
db 0 ; drive number
db 0 ; reserved
db 29h ; extended boot signature
dd 0 ; volume serial number
db "NO NAME "; ; 11
db "FAT12 "; ; 8
;=====================================================
start:
mov ax,cs
mov ds,ax
mov cx,count
mov si,hello
mov bx,7
mov ah,0Eh
@@:
lodsb
int 10h
loop @b
xor ah,ah
int 16h
int 19h
hello db "Hi! This is disk-invalid!"
count = $ - hello
;=======================================================================
rb 7E00h-2-$
db 055h,0AAh
;=======================================================================
This code comes from the examples on FASM's website.
There are a couple of reasons why a bootloader won't work:
the bootloader is not in the first sector of the USB/floppy/etc.
the bootloader is not EXACTLY 512 bytes long
you are missing the 0xAA55 signature in the last 2 bytes of the bootloader
In your example I assume you have the wrong bootloader size (it is not 512 bytes).
Try replacing
rb 7E00h-2-$
db 055h,0AAh
with
TIMES 510-($-$$) DB 0
DW 0xAA55
This ensures that your file is exactly 512 bytes long and that it has the required boot signature.

Pig Latin - not showing the right record numbers

I have written a Pig script for word count which works fine. I can see the results from the Pig script in my output directory in HDFS. But towards the end of my console output, I see the following:
Success!
Job Stats (time in seconds):
JobId Maps Reduces MaxMapTime MinMapTIme AvgMapTime MedianMapTime MaxReduceTime MinReduceTime AvgReduceTime MedianReducetime Alias Feature Outputs
job_local1695568121_0002 1 1 0 0 0 0 0 0 0 0 words_sorted SAMPLER
job_local2103470491_0003 1 1 0 0 0 0 0 0 0 0 words_sorted ORDER_BY /output/result_pig,
job_local696057848_0001 1 1 0 0 0 0 0 0 0 0 book,words,words_agg,words_grouped GROUP_BY,COMBINER
Input(s):
Successfully read 0 records from: "/data/pg5000.txt"
Output(s):
Successfully stored 0 records in: "/output/result_pig"
Counters:
Total records written : 0
Total bytes written : 0
Spillable Memory Manager spill count : 0
Total bags proactively spilled: 0
Total records proactively spilled: 0
Job DAG:
job_local696057848_0001 -> job_local1695568121_0002,
job_local1695568121_0002 -> job_local2103470491_0003,
job_local2103470491_0003
2014-07-01 14:10:35,241 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Success!
As you can see, the job is a success, but the Input(s) and Output(s) are not: both of them say successfully read/stored 0 records, and the counter values are all 0.
Why are the values zero? They should not be zero.
I am using Hadoop 2.2 and Pig 0.12.
Here is the script:
book = load '/data/pg5000.txt' using PigStorage() as (lines:chararray);
words = foreach book generate FLATTEN(TOKENIZE(lines)) as word;
words_grouped = group words by word;
words_agg = foreach words_grouped generate group as word, COUNT(words);
words_sorted = ORDER words_agg BY $1 DESC;
STORE words_sorted into '/output/result_pig' using PigStorage(':','-schema');
NOTE: my data is present in /data/pg5000.txt and not in the default directory, which is /usr/name/data/pg5000.txt.
EDIT: here is the output of printing my file to the console:
hadoop fs -cat /data/pg5000.txt | head -10
The Project Gutenberg EBook of The Notebooks of Leonardo Da Vinci, Complete
by Leonardo Da Vinci
(#3 in our series by Leonardo Da Vinci)
Copyright laws are changing all over the world. Be sure to check the
copyright laws for your country before downloading or redistributing
this or any other Project Gutenberg eBook.
This header should be the first thing seen when viewing this Project
Gutenberg file. Please do not remove it. Do not change or edit the
cat: Unable to write to output stream.
Please correct the following line
book = load '/data/pg5000.txt' using PigStorage() as (lines:chararray);
to
book = load '/data/pg5000.txt' using PigStorage(',') as (lines:chararray);
I am assuming the delimiter is a comma here; use whatever character actually separates the records in your file. This will solve the issue.
Also note:
If no argument is provided, PigStorage will assume tab-delimited format. If a delimiter argument is provided, it must be a single-byte character; any literal (e.g. 'a', '|') or known escape character (e.g. '\t', '\r') is a valid delimiter.

Structure of a Record which has a text column

I have a record which I viewed using the DBCC PAGE command. Here is how it looks:
Memory Dump @0x00E5C060
00000000: 30000800 01000000 02000001 001f8000 †0...............
00000010: 00d10700 0000009a 00000001 000000††††...............
Slot 0 Column 0 Offset 0x4 Length 4
col1 = 1
col2 = [Textpointer] Slot 0 Column 1 Offset 0xf Length 16
TextTimeStamp = 131137536 RowId = (1:154:0)
Here col1 is of type int and col2 is of type ntext.
I know that ntext column values are stored on text pages.
But I don't know how to interpret the col2 info above, i.e.
col2 = [Textpointer] Slot 0 Column 1 Offset 0xf Length 16
TextTimeStamp = 131137536 RowId = (1:154:0)
Can anybody help me understand this?
Thanks for replying,
"col2 = [Textpointer] Slot 0 Column 1 Offset 0xf Length 16"
00000000: 30000800 01000000 02000001 001f8000 †0...............
00000010: 00d10700 0000009a 00000001 000000††††...............
In this, it says that the length of the info is 16.
Its equivalent hex values are:
00 00d10700 0000009a 00000001 000000†††
I can find the information for
TextTimeStamp = 131137536 RowId = (1:154:0)
in the above hex values. But how can I find the part that says it is a text pointer?
Moreover, in another instance, I came across [BLOB Inline Root] for an nvarchar datatype value.
Here's how it looked:
col6= [BLOB Inline Root] Slot 1 Column 38 Offset 0x16d Length 24
Level = 0 Unused = 0 UpdateSeq = 1
TimeStamp = 1969553408
Link 0
Notice that here the length is 24, in contrast to the previous instance (text pointer).
It also has some additional information, such as the update sequence,
UpdateSeq = 1.
How can I differentiate between the two instances by looking at the sequence of bytes?
col2 is a pointer into the BLOB allocation unit. The ntext column is stored in slot 0 on page (1:154). You can dump page 1:154 with DBCC PAGE to find the content of the ntext column col2 (see the sketch below).
There is a more detailed example at http://blogs.msdn.com/sqlserverstorageengine/archive/2006/12/13/More-undocumented-fun_3A00_-DBCC-IND_2C00_-DBCC-PAGE_2C00_-and-off_2D00_row-columns.aspx
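For reference, here is a minimal sketch of that dump in T-SQL, assuming the database is called mydb (a placeholder name); the RowId (1:154:0) reads as file 1, page 154, slot 0:
-- route DBCC output to the client session instead of the error log
DBCC TRACEON (3604);
-- dump file 1, page 154 of mydb; print option 3 includes per-row, per-column detail
DBCC PAGE ('mydb', 1, 154, 3);
DBCC PAGE is undocumented, so treat the output format as subject to change between versions.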