Representing objects with properties and methods in memory: does anyone have a picture or drawing to explain how the computer deals with this and stores properties in memory?
Computers do not really store abstract information of that sort at the basic level. There, you essentially have numbers--in binary, but that is not important--and it is generally up to software to interpret these numbers.
In the von Neumann model, which close to every system is based on, you have one big address space. You can index into it, so your CPU can, for example, fetch the number that sits at a given address, or write a new number to an address, and that is mostly all there is to storing data. Usually, but not always, the addresses pick out individual bytes of your memory, but your computer could address larger or smaller word sizes; for example, you might have a computer that addresses 32-bit words instead of 8-bit words. It doesn't matter for the overall model, though. You just have a big block of memory, and you can get the data at individual addresses.
How you interpret this data is up to the program. Well, almost. In this figure, I've tried to illustrate memory and where we have some data. The data is the zero-terminated string "Hello, World\n", but only if we interpret it as an ASCII-encoded string. If we interpreted it as an array of integers instead, then that is what it would be. The hardware doesn't care how you interpret the data.
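Here is a small C sketch of the same idea; the same block of bytes is printed once as text and once as integers, and the hardware does not prefer either reading:

#include <stdio.h>
#include <string.h>
#include <stdint.h>

int main(void) {
    /* 16 bytes of memory holding the zero-terminated string */
    unsigned char mem[16] = "Hello, World\n";

    /* interpretation 1: an ASCII-encoded, zero-terminated string */
    printf("as text: %s", (char *)mem);

    /* interpretation 2: the very same bytes viewed as 32-bit integers */
    uint32_t words[4];
    memcpy(words, mem, sizeof words);
    for (int i = 0; i < 4; i++)
        printf("as int[%d]: %u\n", i, words[i]);

    return 0;
}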
What makes a computer a von Neumann machine is that both data and program are represented in the same memory. Not only can we get to any data via its address; we can get to the code we want to run as well. There isn't any difference between the two. A program, or a function, or a method, is just an address where you have a sequence of numbers, and the CPU can interpret these numbers as executable code. You can, in theory, point at "Hello, World\n" and tell the CPU to run it as a program. (I wouldn't recommend it.)
When it comes to executable code, there is the slight difference that the CPU does the interpretation. In your own program, you can mostly choose how to represent data (although there might be some penalties if you want a different representation than what you get from the raw hardware), but the CPU will interpret the different numbers as specific instructions and execute them as such. At least, that is how it works if you run native code; if you have a virtual machine, then the virtual machine is a program that interprets your code, and its interpretation of the data can be quite different from the CPU's. The virtual machine itself, though, will typically run as native code, so you are still relying on the CPU's interpretation of numbers, although indirectly.
I should also mention that modern hardware and operating systems do not usually stick with the simple von Neumann model. If you treat program and data as interchangeable, you get some massive security holes. In practice, you have some form of permissions set on different memory blocks, and your code has to sit in a block that you are allowed to execute, while your data (typically) does not. You can switch the permissions, though, if you want to generate native executable code at runtime, and virtual machines often do this.
Anyway, for simplicity, let's just say that we have a simple von Neumann model. Then both program and data are just chunks of memory that we either interpret as program (which will then be executed by the CPU when we tell it to run the code at a given address) or as data (where our software is responsible for interpreting the numbers in memory as some higher-level data structure).
There aren't any differences between objects, properties, or other higher-level concepts at this level. Those are dealt with entirely at the level(s) above the hardware. They are simply interpretations of the raw numbers that sit in memory.
Update: a few more details...
Storing objects
The hardware doesn't know anything about objects. It has addresses, and there are numbers (or bit-patterns, if you prefer) at those addresses. Most data types span more than one address. If, for example, we can address bytes, but integers take up four bytes (i.e. they are 32-bit integers), then naturally we need four bytes, at four addresses, to represent an integer. They will be represented as four contiguous bytes, and depending on the architecture you might have the most significant byte first or last (this is known as endianness). So the number 10 (which fits in a single byte, but is still a four-byte integer) might be represented as 0x00 0x00 0x00 0x0a or as 0x0a 0x00 0x00 0x00. The 0x0a byte is the 10, and it might come first or last.
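You can see the byte order on your own machine with a minimal C sketch like this (the output depends on your platform's endianness):

#include <stdio.h>
#include <string.h>
#include <stdint.h>

int main(void) {
    uint32_t ten = 10;       /* the four-byte integer 10 */
    unsigned char bytes[4];
    memcpy(bytes, &ten, 4);  /* look at its raw bytes */

    /* prints 0a 00 00 00 on a little-endian machine,
       00 00 00 0a on a big-endian machine */
    for (int i = 0; i < 4; i++)
        printf("%02x ", bytes[i]);
    printf("\n");
    return 0;
}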
What, then, about structures, which are the closest thing to what we think of as objects? They are larger blocks of attributes/properties/entries/whatever, and they are represented the same way. Blocks of memory are all we have.
If you have an object that contains two integers, say a representation of a rectangle, then the object sits somewhere in memory and will contain the representation of those two integers.
rect:
h, w: int
I’ve intentionally made up the syntax for this, since it isn’t language specific, and different languages and runtime systems have different variations on how they do this, but they all do something similar.
Here, one representation could be a block of 8 bytes, two 4-byte integers, where the first is h and the second is w. There might be padding between elements, so the objects are aligned the way the hardware prefers, but I will ignore that here.
If the object sits at address 0xaf0de4, that means that h also sits there (assuming that there is no extra information stored in the object), and that w sits four bytes later, if integers take up four bytes of space. Again, the details will differ, but this is generally how it is done if you know the layout of objects at compile time. (If you don't know the layout until runtime, you will instead have a table of attributes, and the object contains that table.)
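In C terms, this is roughly the struct below; offsetof shows where each field sits relative to the start of the object, though the exact offsets and padding are up to your compiler:

#include <stdio.h>
#include <stddef.h>

struct rect {
    int h;   /* first field, at offset 0 */
    int w;   /* typically at offset 4 when int is 4 bytes */
};

int main(void) {
    printf("offset of h: %zu\n", offsetof(struct rect, h));
    printf("offset of w: %zu\n", offsetof(struct rect, w));
    printf("size of rect: %zu\n", sizeof(struct rect));
    return 0;
}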
Now, what happens if an object contains other objects? Say, what if the rectangle is represented by two points instead, and the points are objects?
point:
x, y: int
rect:
p1, p2: point
In the simplest version, nothing changes. The rect object contains two points, so the points are embedded in the memory that represents the rect.
This doesn’t always work, though. If you have polymorphic types, you might not know the concrete type of a contained object, so you cannot allocate memory. In that case, instead of containing the other object, you will have a reference to it, a pointer. The rect object would hold the addresses of the two points, and the points would sit elsewhere in memory. This is also what you have to do if you want to build non-trivial data structures, so it isn’t specific to object orientation or objects.
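As a C sketch of the two layouts (the type names are just made up to match the example): the first embeds the points inside the rect, the second only stores their addresses:

struct point { int x, y; };

/* embedded: the two points live inside the rect's own memory */
struct rect_embedded {
    struct point p1, p2;
};

/* referenced: the rect holds two addresses; the points sit elsewhere */
struct rect_by_reference {
    struct point *p1, *p2;
};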
In an OOP context, there might be a bit more work to it, but we will get to that. First, let’s consider functions (and let’s go back to a rectangle that just holds h and w).
Representation of functions
Code is just blocks of memory as well, but where the numbers represent instructions to the CPU. Let’s say we want to multiply two numbers, then we might have an instruction that looks like
mul a, b, c
that says that the CPU should take the numbers in registers a and b, multiply them, and put the result in register c. You usually have instructions that take the input from memory or as constants or such as well, but let’s just consider a single simple instruction: multiply two numbers you have in registers and put the result in a third register.
The mul instruction has a number. Completely arbitrarily we can say that it is the byte 0xef. The three arguments specify registers, and if they are a byte each we can have up to 256 registers. The full instruction would contain four bytes, the mul instruction 0xef and the three arguments. If we want to multiply register r1 with register r2 and put the result in register r0, the instruction would be
mul r1, r2, r0
0xef 0x01 0x02 0x00
so what the computer sees is the program 0xef 0x01 0x02 0x00.
For functions, we need two things more: a way to return, and a way to handle input and output.
The return bit is easy. There will be a ret instruction that returns to where the function was called, handling stack registers and such in the process. We can pretend that ret has code 0xab.
Input and output is specified by a calling convention, and it isn’t tied to the hardware as such. You need an agreed upon way to pass arguments to functions and you need to know where the result is when the function returns, but that is all there is to it. On our imaginary architecture, we could say that input one and two will be in registers r1 and r2 and that the output should be in r0 when we return. That way, we can make a simple multiplication function
fun mult(a, b): return a * b
with the instructions
mul r1, r2, r0 ; 0xef 0x01 0x02 0x00
ret ; 0xab
and the computer will store it as the numbers 0xef 0x01 0x02 0x00 0xab. If you know where this code/data sits in memory, e.g. 0x00beef, you can call the function with call 0x00beef, using some other instruction, call, that also has a number, say 0x10, followed by the address. (An address is typically 8 bytes, or 64 bits, on a desktop, so the three bytes in 0x00beef would have zeros before or after them, depending on endianness; I will pretend that we have three-byte addresses to make things more readable.)
To call the function, you first need to get the arguments into the correct registers, so if you want to get the area of our rect object, you want to get h and w into registers r1 and r2.
What you want to do is call
area = mult(rect.h, rect.w)
so how do you get rect.h and rect.w into registers? You need instructions for that. Let’s say that we have a mov instruction (0x12) that looks like this:
mov adr, reg
where adr is an address (3 bytes on this imaginary architecture) and reg is a register (1 byte). The full instruction is 5 bytes (the 0x12 instruction, the 3 byte address and the 1 byte register). If your rect object sits at 0xaf0de4, then we have rect.h at 0xaf0de4 as well, and we have rect.w four bytes later, at 0xaf0de8. Calling mult(rect.h, rect.w) involves these instructions
mov 0xaf0de4, r1 ; rect.h -> r1
mov 0xaf0de8, r2 ; rect.w -> r2
call 0x00beef ; mult(rect.h, rect.w)
; now rect.h * rect.w is in r0
The actual data stored on the computer is the codes for this:
; mov 0xaf0de4, r1
0x12 0xaf 0x0d 0xe4 0x01
; mov 0xaf0de8, r2
0x12 0xaf 0x0d 0xe8 0x02
; call 0x00beef
0x10 0x00 0xbe 0xef
Everything is still just numbers that we can access through addresses.
Here, of course, the addresses we have used are hardwired into the program, and that doesn't work in real life. You don't know where all the objects will be when you compile your program. Some addresses you do know once you fire up your executable: the locations of functions, for example, will be known, and the linker can insert the correct addresses where you need them. The locations of objects, typically not. But there will be instructions like mov that take the address from a register instead of from the program. We could, for example, have an instruction
mov a[offset], b
that moves data from the address stored in register a plus offset into register b. It might have another number, say 0x13 instead of 0x12, but in assembly you typically write the same mnemonic, so you don't see the difference there.
You would also have an instruction for putting a constant into a register, and I wouldn’t be surprised if that is also called mov and would have the form
mov a, b
where a is now a constant, i.e. some number, and you put that number in register b. The assembly looks the same, but the instruction might have number 0x14.
Anyway, we could use that to call mult(rect.h, rect.w) instead. Then the code would be
mov 0xaf0de4, r3 ; put the address of rect in r3
; 0x14 0xaf 0x0d 0xe4 0x03
mov r3[0], r1 ; put the value at r3+0 into r1
; 0x13 0x03 0x00 0x01
mov r3[4], r2 ; put the value at r3+4 into r2
; 0x13 0x03 0x04 0x02
call 0x00beef
; 0x10 0x00 0xbe 0xef
If we have these instructions, we could also modify our function mult(a,b) to one that takes a rectangle as input and returns the area
fun area(rect): return rect.h * rect.w
The function can get the address of the object as its single argument, where it would go in register r1, and from there it could load rect.h and rect.w to multiply them.
; area(rect) -- address of rect in r1
mov r1[0], r2 ; rect.h -> r2
mov r1[4], r3 ; rect.w -> r3
mul r2, r3, r0 ; rect.h * rect.w -> r0
ret ; return rect.h * rect.w
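For comparison, here is roughly what such a function looks like in C; a compiler for the imaginary architecture above would be free to translate it into exactly that instruction sequence, with the register choices dictated by the made-up calling convention:

struct rect { int h, w; };

/* the caller passes the address of r (in r1, per the convention above);
   the result ends up in the return-value register (r0) */
int area(const struct rect *r) {
    return r->h * r->w;
}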
It gets more complicated than this, but you should have the idea now. Our functions are sequences of such instructions, and the arguments to them, and the result value, are passed back and forth, usually through registers, according to some calling convention. If you want to pass a value to a function, you need to put it in the right register (or on the stack, depending on the calling convention), and then the function will operate on it. What it does with the object is entirely up to software; the hardware doesn't care that much.
Classes and polymorphism
What, then, if we want polymorphic methods? If we have a class hierarchy of geometric objects and rect is just one of them, and all of them should have an area method that, when called, is dispatched based on the object's class?
When you have polymorphic methods, what you really have is a bunch of different functions. If you call x.area() on an object x that happens to be a circle, then you are really calling circle_area(x), while if x is a rect you are calling rect_area(x). The only thing you need to make this work is having a mechanism for dispatching to the right function call.
Here, again, the details differ (a lot), but a simple solution is to put pointers to the correct function in the objects. If you call x.area() maybe you know that the first element in the memory of x is a pointer to its specific area function. So, instead of calling a function directly, you fetch the address of the function from x and then you call it.
x.area() == (x.area_func)(x)
All objects you can call area() on should have this function pointer, and they should have it at the same offset from the address of the object, and then it can be as simple as that.
This can, of course, be wasteful in memory if your classes have lots of methods. You are storing a pointer to each method in each object (and you also have to spend time on initialising this, so there is additional overhead there as well).
Then another solution can be to add a level of indirection. If the methods are the same for all objects of a class (which they often are, but not for all languages) then you can put the table of methods in a class object and have a single pointer to the class in each object. When you need to get the right function you first get the class and then you get the function from it.
x.area() == (x.class.area_func)(x)
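Here is a C sketch of that kind of dispatch, with made-up names; each object starts with a pointer to a per-class table of function pointers, and calling area() means looking the function up in that table first:

#include <stdio.h>

struct shape;                         /* forward declaration */

struct shape_class {                  /* the per-class method table */
    double (*area)(const struct shape *self);
};

struct shape {                        /* every object starts with its class pointer */
    const struct shape_class *class;
};

struct rect {
    struct shape base;                /* embeds the generic header */
    double h, w;
};

static double rect_area(const struct shape *self) {
    const struct rect *r = (const struct rect *)self;
    return r->h * r->w;
}

static const struct shape_class rect_class = { rect_area };

/* x.area() == (x.class.area_func)(x) from above */
static double area(const struct shape *x) {
    return x->class->area(x);
}

int main(void) {
    struct rect r = { { &rect_class }, 3.0, 4.0 };
    printf("area = %f\n", area(&r.base));   /* dispatches to rect_area */
    return 0;
}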
With single inheritance, the tables in the different classes can have different sizes, and it doesn’t get more complicated because of that. With multiple inheritance, it does get more complicated, but that is handled very differently in different languages so it is hard to say anything general about that.
I am a novice in Xilinx HLS. I am following the tutorial ug871-vivado-high-level-synthesis-tutorial.pdf (page 77).
The code is
#define N 32
void array_io (dout_t d_o[N], din_t d_i[N])
{
//..do something
}
After synthesis, I got a report like this:
I am confused about how the width of the address port has been automatically sized to match the number of addresses that must be accessed (5 bits for 32 addresses).
Please help.
From UG871, it seems that the size of the array is from 0 to 16 samples, hence you need 32 addresses to access all values (see Figure 69). I guess that the number N is limited somewhere to be less than 32 (or to be exactly 16). This means that Vivado knows this limitation and generates only as many address bits as are needed. Most synthesis tools check the constraints on size and optimize unnecessary logic away.
When you synthesize a function, you also create some registers to store the variables. This means that the address you put on the address port is the address of the element you are currently writing to d_o or reading from d_i.
In your case, where N=32, you have 32 different variables (in both input and output). To address 32 different variables you need 32 different bit combinations (to point to a specific one without ambiguity). With 5 bits you have 2^5 = 32 different address combinations: the minimum number of bits needed to address all your data.
The number of address bits is INDEPENDENT of the size of the data elements (they can be int, float, char, short, double, arbitrary precision, and so on): for instance, whether your 32 variables are ints or chars, you still need 5 address bits.
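The rule the tool applies is simply: address bits = ceil(log2(N)). A small C sketch of that arithmetic (not HLS code, just the calculation):

#include <stdio.h>

/* minimum number of address bits needed for n distinct locations */
static unsigned addr_bits(unsigned n) {
    unsigned bits = 0;
    while ((1u << bits) < n)
        bits++;
    return bits;
}

int main(void) {
    printf("%u\n", addr_bits(32));   /* prints 5: 2^5 = 32 addresses */
    printf("%u\n", addr_bits(16));   /* prints 4 */
    return 0;
}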
According to the original 1998 specification, Ben Olmstead's Malbolge VM fills empty memory cells by applying the crazy op to the two previous cells: "Cells which are not initialized are set by performing op on the previous two cells repetitively." I.e.
[m] = crz [m-2], [m-1]
For the sake of sanity what should I do if the program contains only 1 instruction?
Or should I assume the last character always to be EOF?
Judging by the implementation and some language-lawyering, there are two options:
If we take the definition of "two previous cells" literally, then a single-char or empty Malbolge program is illegal in the language, because it cannot be executed according to the specs.
If we consider the definition [m] = crz [m-2], [m-1], it gets interesting. The main implementation (along with probably most of the others) uses unsigned short (or int) for the memory pointer. When you try subtracting 2 from 1 (m-2), the result wraps around to 0xffff, decimal 65535 (see this answer for details), which is just a bit over Malbolge's 59049-cell memory limit. That glitch runs (almost) perfectly on a normal machine, using the 0xffff cell for the crazy-op computation (without even harming memory outside the environment!), but it will fail on a limited-memory or virtual machine.
You might end up with 0xffffffff instead of 0xffff, depending on the way you use the pointer.
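The wraparound described above is easy to reproduce in C (this only illustrates the arithmetic, it is not the actual interpreter source):

#include <stdio.h>

int main(void) {
    unsigned short m = 1;          /* the interpreter's memory pointer */
    unsigned short prev2 = m - 2;  /* 1 - 2 is -1 as an int; stored back, it wraps to 0xffff */

    /* 65535 is well beyond Malbolge's 59049-cell memory */
    printf("m - 2 = %u (0x%x)\n", (unsigned)prev2, (unsigned)prev2);
    return 0;
}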
In short,
If you run it by hand, assume it fails.
If you run it on a virtual machine, it fails.
If you run it on a simulator, it will probably work, but it will defeat the point of running it at all, since 0xffff is a random-valued memory cell, leading to random values throughout the environment's memory. On the other hand, what can you expect from a single-byte Malbolge program?
From time to time, I'll have an off-by-one error like the following:
unsigned int* x = calloc(2000, sizeof(unsigned int));
printf("%d", x[2000]);
I've gone beyond the end of the allocated region, so I get an EXC_BAD_ACCESS signal at runtime. My question is: how is this detected? It seems like this would just silently return garbage, since I'm only off by one element and not, say, a full page. What part of the system prevents me from just getting the garbage value at x + 2000?
The memory allocator places sentinel values at the beginning and end of each block it hands out, beyond your allocated bytes. When you free the memory, it checks whether those values are intact. If not, it tells you.
Perhaps you are just lucky because you are using 2000 as the size. Depending on the size of int, the total size is divisible by 32 or 64, so chances are high that the end of your array really coincides with the end of the underlying allocation. Try some odd number of bytes (better to use a char array for that) and see if your system still detects it.
In any case you shouldn't rely on finding these bugs this way. Always use valgrind or similar to check your memory accesses.
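A small test along those lines might look like this (the size 37 is arbitrary, just something that does not end on a nice boundary):

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    char *p = malloc(37);   /* odd size, unlikely to end exactly at a page or allocation boundary */
    printf("%d\n", p[37]);  /* one element past the end: may silently return garbage */
    free(p);
    return 0;
}

Running it under valgrind (valgrind ./a.out) reports the invalid read whether or not the program happens to crash on its own.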
At http://cr.yp.to/primegen.html you can find the sources of a program that uses Atkin's sieve to generate primes. As the author says it may take a few months for him to answer an e-mail (which I understand, he surely is a busy man!), I'm posting this question here.
The page states that 'primegen can generate primes up to 1000000000000000'. I am trying to understand why that is. There is of course a limit around 2^64 ~ 2 * 10^19 (the size of a long unsigned int), because this is how the numbers are represented. I know for sure that if there were a huge prime gap (> 2^31), then the printing of numbers would fail. However, in this range I think there is no such prime gap.
Either the author was too conservative with the bound (and really it is around 10^19), or there is a place in the source code where an arithmetic operation can overflow, or something like that.
The funny thing is that you actually MAY run it for numbers > 10^15:
./primes 10000000000000000 10000000000000100
10000000000000061
10000000000000069
10000000000000079
10000000000000099
and if you believe Wolfram Alpha, it is correct.
Some facts I had "reverse-engineered":
numbers are sifted in batches of 1,920 * PRIMEGEN_WORDS = 3,932,160 numbers (see primegen_fill function in primegen_next.c)
PRIMEGEN_WORDS controls how big a single sifting is - you can adjust it in primegen_impl.h to fit your CPU cache,
the implementation of the sieve itself is in primegen.c file - I assume it is correct; what you get is a bitmask of primes in pg->buf (see primegen_fill function)
The bitmask is analyzed and primes are stored in pg->p array.
I see no point where the overflow may happen.
I wish I were at my computer to look, but I suspect you would have different success if you started at 1 as your lower bound.
Just from the algorithm, I would conclude that the upper bound comes from the use of 32-bit numbers.
The page mentions a Pentium III as the CPU, so my guess is that it is very old and does not use 64-bit arithmetic.
2^32 is approximately 4 * 10^9. The sieve of Atkin (which the algorithm uses) requires about N^(1/2) bits (it uses a big bitfield). This means that with roughly 2^32 of memory you can (conservatively) reach N approx 10^15. As this is a rough, conservative upper bound (you have the system and other programs occupying memory, address ranges reserved for I/O, ...), the real upper bound is/might be higher.