Patricia/radix trees and IPv4 addresses

Is there a document that will help me understand how IPv4 addresses are inserted into Patricia/radix trees? I am confused about calculating the mask length, and whether the mask length refers to the complete address or to one octet of the address.
Any explanation would be appreciated.

I don't know of a document (Design and Implementation of 4.4 BSD Unix by Sam Leffler might be a good source, but I don't have it handy), but have a look at this C code:
http://cpansearch.perl.org/src/PHILIPP/Net-Patricia-1.15_07/libpatricia/patricia.c
If you're good at C you can probably consider it a document.
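
To address the specific point of confusion: the prefix (mask) length in these tries counts bits of the entire 32-bit address, not of a single octet, and each node branches on a single bit at some position in the whole address. A minimal sketch (my own illustration, not the Net::Patricia code):

#include <stdint.h>
#include <stdio.h>

/* Test bit number `bit` (0 = most significant) of a 32-bit IPv4 address.
   A patricia/radix trie branches on such single-bit tests, with the bit
   index running across the whole address rather than restarting at each
   octet. */
static int test_bit(uint32_t addr, int bit)
{
    return (addr >> (31 - bit)) & 1;
}

/* Build the netmask for a prefix length of 0..32 bits. */
static uint32_t netmask(int prefixlen)
{
    return prefixlen == 0 ? 0 : 0xFFFFFFFFu << (32 - prefixlen);
}

int main(void)
{
    uint32_t addr = (10u << 24) | (1u << 16) | (2u << 8) | 3u; /* 10.1.2.3 */
    printf("mask for /23 = 0x%08X\n", netmask(23));  /* 0xFFFFFE00 */
    printf("bit 22 of 10.1.2.3 = %d\n", test_bit(addr, 22));
    return 0;
}

So 10.1.2.0/23 means "the top 23 bits of the 32-bit value are significant"; when inserting, the trie walks bit positions 0, 1, 2, ... of the whole address down to that depth.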

Accessing a combination of ports by adding both their offsets to a base address. How would this work?

Context: I am following an embedded systems course https://www.edx.org/course/embedded-systems-shape-the-world-microcontroller-i
In the lecture on bit specific addressing they present the following example on a "peanut butter and jelly port".
Say you are given a port PB which has a base address of 0x40005000 and you want to access both bit 4 and bit 6 of PB, which would be PB4 and PB6 respectively. One could add the offset for bit 4 (0x40) and the offset for bit 6 (0x100) to the base address (0x40005000) and define that as their new address, 0x40005140.
Here is where I am confused. If I wanted to define the address for PB6 it would be base (0x40005000) + offset (0x100) = 0x40005100, and the address for PB4 would be base (0x40005000) + offset (0x40) = 0x40005040. So how is it that to access both of them I could use base (0x40005000) + offset (0x40) + offset (0x100) = 0x40005140? Is this not an entirely different location in memory from either of them individually?
Also, why is bit 0 represented as 0x004? In binary that would be 0000 0100. I suppose it would represent bit 0 if you disregard the lowest two binary bits, but why are we disregarding them?
(Screenshot of the lecture notes on bit-specific addressing omitted.)
Your interpretation of how memory-mapped registers are addressed is quite reasonable for any normal peripheral on an ARM-based microcontroller.
However, if you read the GPIODATA register definition on page 662 of the TM4C123GH6PM datasheet then you will see that this "register" behaves very differently.
They map a huge block of the address space (1024 bytes) to a single 32-bit register. This means that bits [9:2] of the address bus are not needed, and are in fact overloaded with data: they contain the mask of the bits to be updated. This is what the "offset" calculation you have copied is trying to describe.
Personally I think this hardware interface could be a very clever way to let you set only some of the outputs within a bank using a single atomic write, but it makes this a very bad choice of device to use for teaching, because this isn't the way things normally work.
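
To make this concrete, here is a sketch (mine; the base address is Port B's, from the TM4C123GH6PM datasheet) of how the mask becomes an address offset. The two "offsets" in the lecture are not addresses of separate registers; they are single-bit masks shifted into address bits [9:2], which is why they can be added:

#include <stdint.h>

#define GPIO_PORTB_BASE 0x40005000u

/* Alias of GPIODATA that reads/writes only the bits set in `mask`:
   the mask is carried in address bits [9:2], i.e. offset = mask << 2. */
static volatile uint32_t *portb_masked(uint8_t mask)
{
    return (volatile uint32_t *)(GPIO_PORTB_BASE + ((uint32_t)mask << 2));
}

void example(void)
{
    uint8_t mask = (1u << 6) | (1u << 4);  /* PB6 and PB4 -> 0x50 */

    /* 0x40005000 + (0x50 << 2) = 0x40005140, the address in the lecture.
       Adding the two single-bit "offsets" works because each offset is
       just that bit's mask shifted left by 2. */
    *portb_masked(mask) = 1u << 6;  /* PB6 = 1 and PB4 = 0 in one atomic
                                       write; other Port B pins untouched */

    /* Bit 0 alone: mask 0x01 << 2 = offset 0x004, hence "bit 0 is 0x004". */
    *portb_masked(1u << 0) = 1u;
}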

What is the math behind the min function in Geogebra?

As of yet, I am not proficient in programming.
I am writing a paper in mathematics and obtained different results from the GeoGebra min function and from algebraic methods.
I am sure that the algebraic methods are correct, but I really want to know why the min function was faulty.
The result from the algebra was an interval between 1010 and 1011.
From GeoGebra I got a single point as a solution (1010.15898).
If you could explain to me why GeoGebra omits all of these other solutions, I would be very thankful.
I would also appreciate it if someone could direct me to the math behind the function, so I could include it in my paper and discuss its relevance.
Thanks in advance!
GeoGebra is using a modification of the (local) optimization algorithm given in Richard Brent, Algorithms For Minimization Without Derivatives, Prentice-Hall, Inc. 1973.
See the source code for more information.
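
For intuition about why a single point comes back: Brent's method is a local, derivative-free minimizer, and such algorithms converge to one abscissa. If your function attains its minimum on all of [1010, 1011], the algorithm still stops at one point inside that interval (here 1010.15898) rather than reporting the interval. A sketch using golden-section search (a simpler relative of Brent's method, written by me for illustration, not GeoGebra's actual code):

#include <math.h>
#include <stdio.h>

/* Golden-section search: derivative-free local minimization, the same
   family of method as Brent's. It always narrows down to one point. */
static double golden_min(double (*f)(double), double a, double b, double tol)
{
    const double g = (sqrt(5.0) - 1.0) / 2.0;  /* golden ratio conjugate */
    while (fabs(b - a) > tol) {
        double c = b - g * (b - a);
        double d = a + g * (b - a);
        if (f(c) < f(d))
            b = d;   /* minimum lies in [a, d] */
        else
            a = c;   /* minimum lies in [c, b] */
    }
    return (a + b) / 2.0;
}

/* A function whose minimum is attained on the whole interval [1010, 1011]. */
static double flat(double x)
{
    if (x < 1010.0) return 1010.0 - x;
    if (x > 1011.0) return x - 1011.0;
    return 0.0;
}

int main(void)
{
    /* Prints a single point inside [1010, 1011], not the interval. */
    printf("%f\n", golden_min(flat, 1000.0, 1020.0, 1e-9));
    return 0;
}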

Checking an Ethernet frame's FCS with online CRC-32 tools

I'm trying to check the FCS of an Ethernet frame using tools on different websites.
I first used this website:
http://depa.usst.edu.cn/chenjq/www2/software/crc/CRC_Javascript/CRCcalculation.htm and got the following FCS: 0xD4C3C62F (for the frame below).
Then I tried this one: http://www.scadacore.com/field-applications/programming-calculators/online-checksum-calculator/ and I found the correct CRC, 0x7AD56BB3, but none of the different kinds of CRC-32 (normal, reversed, ...) correspond to the CRC found on the first website.
Is there any link between these algorithms?
Thank you!
Here is the hexadecimal frame (no start-of-frame delimiter):
000AE6F005A3001234567890080045000030B3FE0000801172BA0A0000030A00000204000400001C894D000102030405060708090A0B0C0D0E0F10111213
Beware of online CRC calculators.
The Ethernet CRC of your string is actually 0xb36bd57a. It is stored in reverse byte order in the stream, which is why you wrote it incorrectly as 0x7AD56BB3.
There are many CRC definitions, including many 32-bit CRC definitions. See the RevEng catalog for examples. The one you want happens to be called "CRC-32", with this definition.
The "CCITT-32" (a name I have not seen before) being calculated in your first link is some other definition. It does not even appear in the RevEng catalog.
A more descriptive and clear addition to Mark Adler's answer (I am new here, so I can't edit or comment):
The CRC you are searching for is called CRC-32/ISO-HDLC.
You can use the following online calculator and look at the row named "CRC-32":
https://crccalc.com/
Each CRC-32 algorithm has its own generation parameters: polynomial, initial value, final XOR, and so on.
The IEEE 802.3 standard defines the CRC-32 parameters for the FCS field as: polynomial 0x04C11DB7, init/xorIn 0xFFFFFFFF, xorOut 0xFFFFFFFF (with input and output reflected).
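
For anyone who wants to reproduce the numbers, here is a sketch (mine, not code from either website) of CRC-32/ISO-HDLC over the frame from the question; it prints 0xb36bd57a and then the byte order in which the FCS appears on the wire:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Bitwise CRC-32/ISO-HDLC: reflected polynomial 0xEDB88320,
   init 0xFFFFFFFF, final XOR 0xFFFFFFFF. This is the Ethernet FCS. */
static uint32_t crc32_fcs(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int k = 0; k < 8; k++)
            crc = (crc & 1u) ? (crc >> 1) ^ 0xEDB88320u : crc >> 1;
    }
    return ~crc;
}

int main(void)
{
    /* The frame from the question, as hex. */
    const char *hex =
        "000AE6F005A3001234567890080045000030B3FE0000801172BA"
        "0A0000030A00000204000400001C894D000102030405060708090A"
        "0B0C0D0E0F10111213";
    uint8_t frame[64];
    size_t n = strlen(hex) / 2;
    for (size_t i = 0; i < n; i++)
        sscanf(hex + 2 * i, "%2hhx", &frame[i]);

    uint32_t fcs = crc32_fcs(frame, n);
    printf("FCS = 0x%08x\n", fcs);  /* 0xb36bd57a */
    /* On the wire the FCS is sent least-significant byte first: */
    printf("wire order: %02x %02x %02x %02x\n",
           fcs & 0xFFu, (fcs >> 8) & 0xFFu, (fcs >> 16) & 0xFFu, fcs >> 24);
    return 0;
}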

Trying to understand the nbits value from the stratum protocol

I'm looking at the stratum protocol and I'm having a problem with the nbits value of the mining.notify method. I have trouble calculating it; I assume it encodes the current difficulty.
I pulled a notify from a Dogecoin pool and it returned 1b3cc366; at the time the difficulty was 1078.52975077.
I'm assuming here that 1b3cc366 should give me 1078.52975077 when converted. But I can't seem to do the conversion right.
I've looked here and here, and also tried the .NET function BitConverter.Int64BitsToDouble.
Can someone help me understand what the nbits value signifies?
You are right, nbits encodes the current network difficulty (as a compact form of the target).
Difficulty encoding is thoroughly described here.
A hexadecimal representation like 0x1b3cc366 consists of two parts:
0x1b -- number of bytes in a target
0x3cc366 -- target prefix
This means that a valid hash must be less than 0x3cc366000000000000000000000000000000000000000000000000 (it is exactly 0x1b = 27 bytes long).
The floating-point representation of difficulty shows how much harder the current target is than the one used in the genesis block.
Satoshi decided to use 0x1d00ffff as the difficulty for the genesis block, so the target was
0x00ffff0000000000000000000000000000000000000000000000000000.
And 1078.52975077 is how many times smaller (and therefore harder) the current target is than the initial one:
$ echo 'ibase=16;FFFF0000000000000000000000000000000000000000000000000000 / 3CC366000000000000000000000000000000000000000000000000' | bc -l
1078.52975077482646448605
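
The same computation as a small C sketch (mine; it uses doubles, which is fine for a sanity check, while real implementations use big integers):

#include <math.h>
#include <stdio.h>
#include <stdint.h>

/* Decode a compact "nbits" value into its target, as a double. */
static double target_as_double(uint32_t nbits)
{
    int exponent = nbits >> 24;                      /* bytes in the target */
    double mantissa = (double)(nbits & 0x00FFFFFFu); /* target prefix */
    return mantissa * pow(256.0, exponent - 3);
}

int main(void)
{
    /* Difficulty = genesis target / current target. */
    double diff = target_as_double(0x1d00ffffu) / target_as_double(0x1b3cc366u);
    printf("difficulty = %.8f\n", diff);             /* 1078.52975077 */
    return 0;
}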

Is this GPGSV sentence valid?

While parsing the NMEA output of a GPS receiver I get the following lines:
$GPGSV,4,1,16,02,17,228,35,03,04,048,37,05,59,285,29,06,02,030,34*73
$GPGSV,4,2,16,07,58,061,46,08,80,159,40,09,11,227,32,10,51,167,47*77
$GPGSV,4,3,16,13,15,089,38,15,00,279,,16,00,018,,26,34,279,42*7A
$GPGSV,4,4,16,28,20,154,39*4C
As I understand it from various sources on the web (e.g. here), this is wrong. According to the 3rd number there should be 16 satellites, which was true for all the GPS receivers I previously encountered, but the sentences from this one only contain data for 13 satellites.
Is this an error? Or do I read the specification wrongly?
NMEA is a weakly specified format. GPS chip manufacturers provide documentation on how they interpret the NMEA specification.
For example, u-blox and SiRF each have a chapter of about 40 pages describing how they interpret the NMEA format.
So if you write "Or do I read the specification wrongly?", the question is which specification you are reading. That of the GPS chip manufacturer? The NMEA 0183 spec does not contain enough information to parse the sentences correctly.
Especially in your case: the NMEA protocol does not describe how to handle empty values versus invalid ones.
In your case the receiver theoretically expects to see 16 satellites, but found only 13.
I would have expected the missing 3 satellites to appear as empty fields (",,,,"), but obviously the manufacturer decided to just stop and append the checksum. (It is simply not specified that it is mandatory to print out empty fields for the missing 3 satellites.)
Unfortunately you have to expect to write an NMEA parser for each GPS chip manufacturer.
Therefore I always recommend using the chip manufacturer's binary protocol (e.g. u-blox binary or SiRF binary), because those are exactly specified.
You can further look at the documentation for GPSBabel: it shows how different manufacturers produce different GSV data sets.
Update:
Since you have now said that it is a u-blox receiver:
The answer is: yes, the NMEA sentences are valid. Look at the u-blox protocol spec; I am using the spec for u-blox 5.
On the page where the GSV sentence is described, look at the "Message Structure":
{,sv,elv,az,cno}*cs
The curly braces enclose the sequence that is repeated.
And below, look at "1..4": this means 1, 2, 3 or 4 blocks. It does not say "4", it says "1..4"; therefore the satellite info blocks are optional and do not have to be padded out as empty fields.
If you further look at the example u-blox gives, you will see that the last GPGSV message contains fewer than 4 satellites, exactly as in your question.
Yes, it's inconsistent; the last message should have described more than one satellite (four, actually) so as to total the 16 advertised. The GPS receiver should have reported at least the satellite IDs (PRN), even if their viewing direction in the sky and SNR were unknown at the time, e.g.: {,01,,,}.
That being said, it's better to write programs that are tolerant of ill-formed messages; in this case, updating the number of satellites in view to 13, as counted.
(I've checked the checksums and they're okay.)
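
A sketch of the tolerant approach suggested above (my own illustration, in C): verify each sentence's checksum, then count the satellite blocks that are actually present instead of trusting the advertised total:

#include <stdio.h>

/* Verify an NMEA checksum: the XOR of the characters between '$' and '*'
   must equal the two hex digits after '*'. */
static int checksum_ok(const char *s)
{
    unsigned int sum = 0, want;
    if (*s != '$') return 0;
    const char *p = s + 1;
    while (*p && *p != '*')
        sum ^= (unsigned char)*p++;
    if (*p != '*' || sscanf(p + 1, "%2x", &want) != 1) return 0;
    return sum == want;
}

/* Count the satellite blocks in one GSV sentence: after the first three
   fields (message count, message index, satellites in view), satellites
   come in groups of four fields each. */
static int gsv_sats_in_sentence(const char *s)
{
    int commas = 0;
    for (const char *p = s; *p && *p != '*'; p++)
        if (*p == ',') commas++;
    return (commas - 3) / 4;
}

int main(void)
{
    const char *gsv[] = {
        "$GPGSV,4,1,16,02,17,228,35,03,04,048,37,05,59,285,29,06,02,030,34*73",
        "$GPGSV,4,2,16,07,58,061,46,08,80,159,40,09,11,227,32,10,51,167,47*77",
        "$GPGSV,4,3,16,13,15,089,38,15,00,279,,16,00,018,,26,34,279,42*7A",
        "$GPGSV,4,4,16,28,20,154,39*4C",
    };
    int total = 0;
    for (int i = 0; i < 4; i++) {
        if (!checksum_ok(gsv[i]))
            continue;                       /* skip corrupted sentences */
        total += gsv_sats_in_sentence(gsv[i]);
    }
    printf("satellites actually listed: %d\n", total); /* 13, not 16 */
    return 0;
}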