What does EEMEM stand for? - definition

I'm looking for the definition of EEMEM... I'm guessing it's electronically erasable memory, but I'm not sure. Is this different from EEPROM?
The reason I ask: I'm looking at an entry for a digital potentiometer that has the acronym next to it, and I want to fit the part into a matrix of memory types that I maintain, so I need to know whether it is EEPROM or something different.

Answering my own question.
Although I couldn't find an exact expansion of the acronym EEMEM, the term is used interchangeably with non-volatile memory.
For the exact pot I was looking at, the memory type is EEPROM, as found in the specifications here


What's the point of "make move" and "undo move" in chess engines?

I want to experiment with massively parallel chess computing. From what I have seen and understood in wikis and in the source code of some engines, in most (all?) implementations of the min-max (negamax, alpha-beta, ...) algorithm there is one internal position that gets updated for every new branch and then gets undone after the evaluation of that branch is received.
What are the benefits of that technique compared to just generating a new position object and passing that to the next branch?
This is what I have done in my previous engines, and I believe this method is superior for the purpose of parallelism.
The answer to your question depends heavily on a few factors, such as how you are storing the chess board and what your make/unmake move functions look like. I'm sure there are ways of storing the board that would be better suited to your method, and indeed historically some top-tier chess engines (in particular Crafty) used that method, but you are correct in saying that modern engines no longer do it that way. Here is a link to a discussion about this very point:
http://www.talkchess.com/forum/viewtopic.php?t=50805
If you want to understand why this is, you must understand how today's engines represent the board. The standard implementation revolves around bitboards, which requires twelve 64-bit integers per position, in addition to a redundant mailbox (an array, in non-computer-chess jargon) used in conjunction with them. Copying all of that is usually more expensive than a good makeMove/unMakeMove function.
I also want to point out that a hybrid approach is also common. Usually make and unmake are used on the board itself, whereas other information like en passant squares and castling rights is copied and changed as you suggested.
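To make the hybrid concrete, here is a rough C++ sketch (the types and field names are invented, and capture/castling handling is elided): the bitboards and mailbox are updated in place, while the hard-to-recompute state is copied into a small undo record.
#include <cstdint>

struct Undo {
    uint8_t castleRights;   // snapshot of state that is hard to recompute
    int8_t  epSquare;
    uint8_t captured;       // piece code that was on the destination square
};

struct Board {
    uint64_t pieceBB[12];   // one bitboard per piece type and colour
    uint8_t  mailbox[64];   // redundant square-centric array; 0 = empty, 1..12 = pieces
    uint8_t  castleRights;
    int8_t   epSquare;

    void makeMove(int from, int to, Undo& u) {
        u.castleRights = castleRights;                         // copy the small, irreversible state
        u.epSquare     = epSquare;
        u.captured     = mailbox[to];
        uint8_t piece  = mailbox[from];
        pieceBB[piece - 1] ^= (1ULL << from) | (1ULL << to);  // incremental bitboard update
        mailbox[to]   = piece;
        mailbox[from] = 0;
        // captures, castling-rights updates and the new ep square are elided here
    }

    void unmakeMove(int from, int to, const Undo& u) {
        uint8_t piece = mailbox[to];
        pieceBB[piece - 1] ^= (1ULL << from) | (1ULL << to);  // undo the incremental update
        mailbox[from] = piece;
        mailbox[to]   = u.captured;
        castleRights  = u.castleRights;                        // restore the copied state
        epSquare      = u.epSquare;
        // restoring a captured piece's bitboard is likewise elided
    }
};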
Interestingly, for board representations of less than roughly 300 bytes, it IS cheaper on modern x86 to copy the board on each move than to make and unmake the move.
As you suggest, the immutability you get from copying the board on each move is very attractive from a programming perspective, including for parallelising.
My board representation is 208 bytes, C++ compiled with g++ 7.4.0.
Empirically, board copy is 20% faster than move make/unmake. I presume the code is using 32/64-byte-wide AVX instructions for the copy; in theory you can do a 32/64-byte copy per cycle.
Just for you: https://github.com/rolandpj1968/oom-skaak/tree/init-impl-legal-move-gen-move-do-undo-2

How are variables stored in RAM?

I've just made a simple RAM in Minecraft (with redstone), with 4 bits for the address and 4 bits stored in each cell. Our next goal is to store different kinds of variables in it and to process them differently.
We are not engineers, so we don't really know what we're doing, but we have made some quite complex things and we think we can do this. The problem is that we can't figure out how to store variables with more bits than can be stored in a single cell. I'll give an example.
Think of a 16-bit variable. We thought there was no sense in creating big cells, so we decided to store that data 4 bits per cell. But that's not enough: we had to relate those 4 cells. So we thought we had to create 8-bit cells, with 4 bits of content and 4 bits to store the address where the next 4 bits of the variable are stored. However, 4 bits of address is nothing for RAM; we can't address anything useful with it. So we would need at least 8 bits for the address. 4 bits of content also seems quite low, and we would also need at least another 4 bits to store the type of the variable.
Well, finally we decided that technique was absurd and that it couldn't be how it is done in real life. And now we don't know how to do it. I've searched the web for how RAM works, and the little I found was too complex for our needs.
Could someone please explain to us how this is done in real life?
Heh, you're playing the blame game, trying to pin all the responsibility for memory management on the physical RAM implementation.
In fact, RAM is just that: a storage device (your redstone tiles). Actually organising the data in it is your program's responsibility. Put another way, there doesn't need to be a standardized memory-cell "linking" strategy in the RAM, because it's your program that writes to it and then reads it back, so it knows its own conventions.
With that in mind, storing values is easy. Say you want a 16-bit integer stored in your 4-bit/word RAM (so 4 words of data). Simply refer to addresses 0 through 3 as your variable and that's it. No "linking" is necessary, because you know both how to write it and how to read it back, and you won't step on your own toes (in theory).
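In C++ terms (purely illustrative - the point is only that the "linking" is a convention in the program, not in the RAM), that amounts to:
#include <cstdint>

uint8_t ram[16];                 // 4-bit address space; each cell holds one nibble

// Store a 16-bit value across four consecutive 4-bit cells, low nibble first.
void store16(uint8_t addr, uint16_t value) {
    for (int i = 0; i < 4; ++i)
        ram[addr + i] = (value >> (4 * i)) & 0xF;
}

// Read it back by reassembling the nibbles in the same order.
uint16_t load16(uint8_t addr) {
    uint16_t value = 0;
    for (int i = 0; i < 4; ++i)
        value |= (uint16_t)(ram[addr + i] & 0xF) << (4 * i);
    return value;
}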
Additional thoughts for growing your machine: dedicate special locations to special-purpose registers (a stack pointer so you can use a stack for recursive computation, a program counter for a Turing-machine-style control unit, etc.). I had one more but I forgot it while writing that one; if I remember it I'll edit.

Machine code alignment

I am trying to understand the principles of machine code alignment. I have an assembler implementation which can generate machine code at run time. I use 16-byte alignment on every branch destination, but it looks like that is not the optimal choice, since I've noticed that if I remove the alignment, the same code sometimes runs faster. I think it has something to do with the cache line width, so that some instructions are cut by a cache line boundary and the CPU stalls because of that. So if some bytes of alignment are inserted in one place, they will push instructions further along, past a cache line boundary...
I was hoping to implement an automatic alignment procedure which can process the code as a whole and insert alignment according to the specification of the CPU (cache line width, 32/64 bits and so on)...
Can someone give some hints about this procedure? As an example, the target CPU could be an Intel Core i7 on a 64-bit platform.
Thank you.
I'm not qualified to answer your question because this is such a vast and complicated topic. There are probably many more mechanisms in play here than just cache line size.
However, I would like to point you to Agner Fog's site and the optimization manuals for compiler makers that you can find there. They contain a plethora of information on these kinds of subjects - cache lines, branch prediction and data/code alignment.
Paragraph (16-byte) alignment is usually the best. However, it can force some "local" JMP instructions to no longer be local (due to code size bloat). It may also result in less code fitting in the cache. I would only align major segments of code; I would not align every tiny subroutine/JMP target.
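For illustration, a run-time assembler emitting into a byte buffer can pad to a boundary with a small helper like the sketch below (hypothetical; production code would use the multi-byte NOP encodings recommended in the optimization manuals) and call it only before hot loop heads and major function entries.
#include <cstdint>
#include <vector>

// Pad the code buffer with single-byte NOPs (0x90) so that the next emitted
// instruction starts on the given boundary. Real assemblers prefer the
// multi-byte NOP forms, but 0x90 keeps the sketch simple.
void alignTo(std::vector<uint8_t>& code, size_t boundary) {
    size_t misalign = code.size() % boundary;
    if (misalign == 0) return;
    size_t pad = boundary - misalign;
    code.insert(code.end(), pad, 0x90);
}

// Usage: alignTo(code, 16); just before emitting a loop head or function entry.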
Not an expert, however... Branches to places that are not already in the instruction cache should benefit from alignment the most, because you'll read a whole cache line of instructions to fill the pipeline. Given that, forward branches will benefit on the first run of a function. Backward branches ("for" and "while" loops, for example) will probably not benefit, because the branch target and the following instructions have already been read into the cache. Do follow the links in Martin's answer.
As mentioned previously, this is a very complex area. Agner Fog seems like a good place to visit. As to the complexities: I ran across Torbjörn Granlund's article "Improved Division by Invariant Integers", and in the code he uses to illustrate his new algorithm, the first instruction at - I guess - the main label is nop (no operation). According to the commentary it improves performance significantly. Go figure.

Fast rgb565 to YUV (or even rgb565 to Y)

I'm working on something where I want the option of sending output to a video overlay. Some overlays support rgb565; if so, sweet, just copy the data across.
If not, I have to copy the data across with a conversion, a frame buffer at a time. I'm going to try a few things, but I thought this might be one of those problems that optimisers would be keen to have a go at for a bit of a challenge.
There are a variety of YUV formats that are commonly supported; the easiest would be a plane of Y followed by either interleaved or separate planes of U and V.
I'm using Linux / xv, but at the level I'm dealing with it's just bytes and an x86.
I'm going to focus on speed at the cost of quality, but there are potentially hundreds of different paths to try out. There's a balance in there somewhere.
I looked at MMX but I'm not sure there is anything useful there. Nothing strikes me as particularly suited to the task, and it's a lot of shuffling to get things into the right places in registers.
I'm trying a crude version with Y = Green*0.5 + Red*0.25 + Blue*(not much). The U and V are even less of a concern quality-wise; you can get away with murder on those channels.
For a simple loop:
loop:
movzx eax, word [esi]   ; load one rgb565 pixel, zero-extended
add esi,2
shr eax,3               ; ah = 5-bit red, al = green plus the top bits of blue
shr al,1                ; al ~= 2*green (plus the blue msb)
add ah,ah               ; ah = 2*red
add al,ah               ; Y ~= G*0.5 + R*0.25 + a little B
mov [edi],al            ; store one Y byte
add edi,1
dec count               ; loop over all pixels
jnz loop
Of course, every instruction depends on the one before, and word reads aren't the best, so interleaving two pixels might gain a bit:
loop:
mov eax,[esi]            ; load two rgb565 pixels at once
add esi,4
mov ebx,eax
shr eax,3                ; pixel 1: ah = red, al = green/blue bits
shr ebx,19               ; pixel 2: bh = red, bl = green/blue bits
shr al,1                 ; halve green as in the single-pixel loop (without this the add below can overflow)
shr bl,1
add ah,ah                ; double red
add bh,bh
add al,ah                ; Y1 ~= G*0.5 + R*0.25 + a little B
add bl,bh                ; Y2 likewise
mov ah,bl                ; pack Y2 above Y1
mov [edi],ax             ; store two Y bytes
add edi,2
dec count
jnz loop
It would be quite easy to do four at a time, maybe for some further benefit.
Can anyone come up with anything faster, better?
An interesting side point to this is whether or not a decent compiler can produce similar code.
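For comparison, roughly the same approximation in plain C++ (a sketch only; the weights are as rough as in the assembly above) that a compiler could be handed:
#include <cstdint>
#include <cstddef>

// Y ~= G*0.5 + R*0.25 + B*0.125, using the raw 5/6/5-bit channel values
// scaled so everything stays within an 8-bit result.
void rgb565_to_y(const uint16_t* src, uint8_t* dst, size_t count) {
    for (size_t i = 0; i < count; ++i) {
        uint16_t p = src[i];
        unsigned r = (p >> 11) & 0x1F;   // 5 bits
        unsigned g = (p >>  5) & 0x3F;   // 6 bits
        unsigned b =  p        & 0x1F;   // 5 bits
        // G doubled ~= 0.5 of full scale, R doubled ~= 0.25, B as-is ~= 0.125
        dst[i] = (uint8_t)((g << 1) + (r << 1) + b);
    }
}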
A decent compiler, given the appropriate switches to tune for the CPU variants of most interest, almost certainly knows a lot more about good x86 instruction selection and scheduling than any mere mortal!
Take a look at the Intel(R) 64 and IA-32 Architectures Optimization Reference Manual...
If you want to get into hand-optimising code, a good strategy might be to get the compiler to generate assembly source for you as a starting point, and then tweak that; profile before and after every change to ensure that you're actually making things better.
What you really want to look at, I think, is using MMX or the integer SSE instructions for this. That will let you work with a few pixels at a time. I imagine your compiler will be able to generate such code if you specify the correct switches, especially if your code is written nicely enough.
Regarding your existing code, I wouldn't bother interleaving instructions from different iterations to gain performance. The out-of-order engine of all x86 processors (excluding Atom) and the caches should handle that pretty well.
Edit: If you need to do horizontal adds, you might want to use the PHADDD and PHADDW instructions. In fact, if you have the Intel Software Developer's Manual, you should look for the PH* instructions. They might have what you need.
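As a sketch of the SIMD shape (hypothetical and untuned: SSE2 intrinsics, the same rough weights as the scalar loop, and the pixel count assumed to be a multiple of 8):
#include <emmintrin.h>  // SSE2
#include <cstdint>
#include <cstddef>

// Convert 8 rgb565 pixels to 8 luma bytes per iteration.
void rgb565_to_y_sse2(const uint16_t* src, uint8_t* dst, size_t count) {
    const __m128i mask5 = _mm_set1_epi16(0x1F);
    const __m128i mask6 = _mm_set1_epi16(0x3F);
    for (size_t i = 0; i < count; i += 8) {
        __m128i p = _mm_loadu_si128((const __m128i*)(src + i));
        __m128i r = _mm_and_si128(_mm_srli_epi16(p, 11), mask5);
        __m128i g = _mm_and_si128(_mm_srli_epi16(p, 5),  mask6);
        __m128i b = _mm_and_si128(p, mask5);
        // Y ~= 2*G + 2*R + B, same relative weights as the scalar version
        __m128i y = _mm_add_epi16(_mm_add_epi16(_mm_slli_epi16(g, 1),
                                                _mm_slli_epi16(r, 1)),
                                  b);
        __m128i packed = _mm_packus_epi16(y, y);      // 16-bit -> 8-bit with saturation
        _mm_storel_epi64((__m128i*)(dst + i), packed); // store 8 Y bytes
    }
}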

What would be a good (de)compression routine for this scenario

I need a FAST decompression routine, optimized for a resource-constrained environment like an embedded system, for binary (hex) data with the following characteristics:
Data is 8-bit (byte) oriented (the data bus is 8 bits wide).
Byte values do NOT range uniformly from 0 to 0xFF, but have a Poisson-like (bell curve) distribution in each data set.
The data set is fixed in advance (to be burnt into flash), and each set is rarely larger than 1 - 2 MB.
Compression can take as much time as required, but decompression of a byte should take 23 µs in the worst case, with a minimal memory footprint, as it will be done in a resource-constrained environment like an embedded system (3 MHz - 12 MHz core, 2 KB of RAM).
What would be a good decompression routine?
Basic run-length encoding seems too wasteful - I can immediately see that adding a header section to the compressed data that maps unused byte values to often-repeated patterns would give phenomenal performance!
And if I can see that after investing only a few minutes, surely much better algorithms must already exist from people who love this stuff?
I would like to have some "ready to go" examples to try out on a PC so that I can compare the performance vis-a-vis a basic RLE.
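For reference, the sort of baseline RLE decoder meant here is only a few lines (the escape-byte run format below is made up):
#include <cstdint>
#include <cstddef>

enum { RLE_ESC = 0x00 };   // pick a byte value that is rare in the data

// A run is encoded as <escape byte><count><value>; anything else is a literal.
size_t rle_decode(const uint8_t* in, size_t in_len, uint8_t* out) {
    size_t o = 0;
    for (size_t i = 0; i < in_len; ) {
        if (in[i] == RLE_ESC && i + 2 < in_len) {
            uint8_t count = in[i + 1];
            uint8_t value = in[i + 2];
            for (uint8_t k = 0; k < count; ++k) out[o++] = value;
            i += 3;
        } else {
            out[o++] = in[i++];        // literal byte
        }
    }
    return o;                          // number of bytes produced
}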
The two solutions I use when performance is the only concern:
LZO Has a GPL License.
liblzf Has a BSD License.
miniLZO.tar.gz This is LZO, just repacked into a 'minified' version that is better suited to embedded development.
Both are extremely fast when decompressing. I've found that LZO will create slightly smaller compressed data than liblzf in most cases. You'll need to do your own benchmarks for speeds, but I consider them to be "essentially equal". Both are light-years faster than zlib, though neither compresses as well (as you would expect).
LZO, in particular miniLZO, and liblzf are both excellent for embedded targets.
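A minimal miniLZO round trip for benchmarking on the PC might look roughly like this (a sketch - check minilzo.h for the exact names and macros in your version; the buffer-size comment follows the library's documentation):
#include <cstring>
#include "minilzo.h"

// Scratch memory is needed only by the compressor; the decompressor needs none.
static lzo_align_t wrkmem[(LZO1X_1_MEM_COMPRESS + sizeof(lzo_align_t) - 1) / sizeof(lzo_align_t)];

// Compress, then decompress and verify - purely to measure speed and ratio.
// 'comp' should be at least in_len + in_len/16 + 64 + 3 bytes to be safe.
int lzo_roundtrip(unsigned char* in, lzo_uint in_len,
                  unsigned char* comp, unsigned char* out, lzo_uint out_cap) {
    lzo_uint comp_len = 0, out_len = out_cap;
    if (lzo_init() != LZO_E_OK) return -1;
    if (lzo1x_1_compress(in, in_len, comp, &comp_len, wrkmem) != LZO_E_OK) return -1;
    if (lzo1x_decompress_safe(comp, comp_len, out, &out_len, nullptr) != LZO_E_OK) return -1;
    return (out_len == in_len && memcmp(in, out, in_len) == 0) ? 0 : -1;
}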
If you have a preset distribution of values, meaning the probability of each value is fixed over all data sets, you can create a Huffman encoding with fixed codes (the code tree does not have to be embedded in the data).
Depending on the data, I'd try Huffman with fixed codes or LZ77 (see Brian's links).
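A decoder for a fixed code can be tiny; as a sketch (the three-symbol toy tree and the leaf-marker convention are invented purely for illustration):
#include <cstdint>
#include <cstddef>

// The fixed tree is compiled into the firmware as a small table, so nothing
// has to be stored with the data. child[node][bit] gives the next node;
// entries with the high bit set are leaves carrying a 7-bit symbol.
// Toy code here: 0 -> 'A', 10 -> 'B', 11 -> 'C'.
static const uint8_t child[][2] = { {0x80 | 'A', 1}, {0x80 | 'B', 0x80 | 'C'} };

size_t huff_decode(const uint8_t* in, size_t nbits, uint8_t* out) {
    size_t o = 0;
    uint8_t node = 0;                           // start at the root
    for (size_t i = 0; i < nbits; ++i) {
        int bit = (in[i >> 3] >> (7 - (i & 7))) & 1;   // msb-first bit stream
        uint8_t next = child[node][bit];
        if (next & 0x80) {                      // reached a leaf: emit the symbol
            out[o++] = next & 0x7F;
            node = 0;
        } else {
            node = next;
        }
    }
    return o;
}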
Well, the main two algorithms that come to mind are Huffman and LZ.
The first basically just creates a dictionary. If you restrict the dictionary's size sufficiently, it should be pretty fast... but don't expect very good compression.
The latter works by adding back-references to repeating portions of the output file. It probably would take very little memory to run, except that you would need either to use file I/O to read the back-references or to keep a chunk of the recently read data in RAM.
I suspect LZ is your best option if the repeated sections tend to be close to one another. Huffman works by having a dictionary of often-repeated elements, as you mentioned.
Since this seems to be audio, I'd look at either differential PCM or ADPCM, or something similar, which will reduce it to 4 bits/sample without much loss in quality.
With the most basic differential PCM implementation, you just store a 4-bit signed difference between the current sample and an accumulator, then add that difference to the accumulator and move to the next sample. If the difference is outside [-8, 7], you have to clamp the value, and it may take several samples for the accumulator to catch up. Decoding is very fast and uses almost no memory: just add each value to the accumulator and output the accumulator as the next sample.
A small improvement over basic DPCM, to help the accumulator catch up faster when the signal gets louder and higher-pitched, is to use a lookup table to decode the 4-bit values to a larger non-linear range, where they're still 1 apart near zero but increase in larger increments toward the limits. And/or you could reserve one of the values to toggle a multiplier; deciding when to use it is up to the encoder. With these improvements, you can either achieve better quality or get away with 3 bits per sample instead of 4.
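A decoder for the basic scheme is only a few lines; a sketch (the nibble order and the signed 8-bit sample range are assumptions):
#include <cstdint>
#include <cstddef>

// Each 4-bit code is a signed difference in [-8, 7] added to a running
// accumulator; two codes are packed per input byte, high nibble first here.
void dpcm_decode(const uint8_t* in, size_t nbytes, int8_t* out) {
    int acc = 0;                             // accumulator starts at "silence"
    size_t o = 0;
    for (size_t i = 0; i < nbytes; ++i) {
        for (int shift = 4; shift >= 0; shift -= 4) {
            int diff = (in[i] >> shift) & 0xF;
            if (diff > 7) diff -= 16;        // sign-extend the 4-bit code
            acc += diff;
            if (acc > 127)  acc = 127;       // keep within the 8-bit sample range
            if (acc < -128) acc = -128;
            out[o++] = (int8_t)acc;
        }
    }
}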
If your device has a non-linear μ-law or A-law ADC, you can get quality comparable to 11-12 bits with 8-bit samples. Or you can probably do it yourself in your decoder. http://en.wikipedia.org/wiki/M-law_algorithm
There might be inexpensive chips out there that already do all this for you, depending on what you're making. I haven't looked into any.
You should try different compression algorithms with either a compression software tool with command line switches or a compression library where you can try out different algorithms.
Use typical data for your application.
Then you know which algorithm is best-fitting for your needs.
I have used zlib in embedded systems for a bootloader that decompresses the application image to RAM on start-up. The licence is nicely permissive, no GPL nonsense. It does make a single malloc call, but in my case I simply replaced this with a stub that returned a pointer to a static block, plus a corresponding free() stub. I sized the block by monitoring its memory allocation usage. If your system can support dynamic memory allocation, then it is much simpler.
http://www.zlib.net/
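For illustration, routing zlib's allocations to a static pool through the z_stream zalloc/zfree hooks looks roughly like this (a sketch; the pool size is a placeholder you would tune by monitoring actual requests, as described above):
#include <cstddef>
#include <cstring>
#include "zlib.h"

alignas(alignof(std::max_align_t)) static unsigned char pool[32 * 1024];
static size_t pool_used = 0;

// zlib calls these instead of malloc/free; hand out slices of the static pool.
static voidpf static_alloc(voidpf, uInt items, uInt size) {
    size_t need = ((size_t)items * size + 15) & ~(size_t)15;   // keep blocks aligned
    if (pool_used + need > sizeof(pool)) return Z_NULL;
    voidpf p = pool + pool_used;
    pool_used += need;
    return p;
}
static void static_free(voidpf, voidpf) { /* nothing to do for a static pool */ }

// One-shot inflate of a compressed image into a RAM buffer.
int decompress_image(const unsigned char* src, size_t src_len,
                     unsigned char* dst, size_t dst_len) {
    pool_used = 0;                      // reset the pool for this run
    z_stream strm;
    memset(&strm, 0, sizeof(strm));
    strm.zalloc = static_alloc;
    strm.zfree  = static_free;
    if (inflateInit(&strm) != Z_OK) return -1;
    strm.next_in   = (Bytef*)src;  strm.avail_in  = (uInt)src_len;
    strm.next_out  = dst;          strm.avail_out = (uInt)dst_len;
    int ret = inflate(&strm, Z_FINISH);
    inflateEnd(&strm);
    return ret == Z_STREAM_END ? 0 : -1;
}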