I'm starting a school project: we have to code an efficient text editor in C. To command it I can use:
(row1,row2)c
(row1,row2)d
(row1,row2)p
(number)u
(number)r
These commands are used to change the text between row1 and row2 (the c), delete the text between row1 and row2 (the d; the deleted text is replaced with a single dot), print the rows between row1 and row2 on stdout (the p), and undo or redo (number) times (these last two commands don't affect print, just c and d).
To start, I was thinking about what data structure I could use.
For the rows, I thought of using a singly linked list holding the row number, with a second list for the text itself.
This is because the code has to be efficient in both time and space.
But I can't find a good way to implement undo/redo in my case. I was thinking of creating two stacks, one for undo and one for redo: every command I execute is pushed onto the undo stack and, if I undo something, I pop the most recent action from the undo stack and push it onto the redo stack.
But I don't know how to represent these saved commands. I was thinking of saving a complementary command, so I can run it and return to the previous state. Then, when I undo, I push the complementary command onto the redo stack, and I clear the redo stack on every new command to free space.
I hope it's understandable; I just want your opinion on this possible structure.
NB: theoretically I can code only in C11 with stdlib and stdio, but I can copy and modify functions from other libraries if needed.
---UPDATE---
I was wondering whether it would be better to use a red-black tree to keep the row structure, because it would take O(log(n)) to find and edit the X-th row instead of O(n).
The only problem is that when I have to change many rows with a single command (e.g. 1,521c) it takes longer to find every row.
Maybe a sort of hybrid could be a good choice: I use the RBT structure to find the address of the starting row, then I use the list structure to reach the following rows. So every node of this tree has two pointers for the RBT and one pointer for the list.
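For illustration, here is a minimal sketch in C of what such a hybrid node might look like (all field names are assumptions, not from the project spec); a subtree-size field is one common way to reach the X-th row in O(log n) even as rows shift:

#include <stddef.h>

/* Sketch only: one node that lives both in a red-black tree (for fast
 * lookup of the starting row) and in a singly linked list (for cheap
 * sequential access to the rows that follow). */
typedef struct Row {
    struct Row *left;         /* RBT children                              */
    struct Row *right;
    struct Row *next;         /* next row in document order (the list)     */
    int         red;          /* RBT node colour                           */
    size_t      subtree_size; /* rows in this subtree: lets you find the
                                 X-th row in O(log n) even as rows move    */
    char       *text;         /* the line itself                           */
} Row;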
Your design ideas are spot on.
The part that needs working out is how to represent the undo and redo entries.
A redo entry could be a struct that indicates what span of text to replace and the text to replace it with. A "span" here gives the offsets into the text, and since it could be an empty span (just a position), that suggests using a half-open interval [start .. end) or a start and a length. That can express any single text-change operation. The replacement text could be a zero-length string; even if your current commands don't need that yet, anticipate that future assignments may add feature requests.
An undo entry can be the same struct, describing as you noted the complementary text replacement operation.
The other design decision is how to represent the document text. The simplest thing is a sequential buffer of characters, in which case every insertion requires moving all following text downwards after ensuring the memory buffer is large enough.
An alternative is a list of lines of text, each line being a separate memory node. That way, inserting, deleting, and replacing lines doesn't have to move the bulk of the text around in memory, just some of the line node pointers. Furthermore, for line replacement commands the redo/undo entries can just list which range of line pointers to replace with other line pointers.
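As a rough sketch (one possible layout, not the only way to do it), the line list and the span-replacement entry could look something like this in C:

#include <stddef.h>

/* One line of the document; lines are separate heap nodes, so replacing
 * a range of lines only touches pointers, not the bulk of the text. */
typedef struct Line {
    struct Line *next;
    char        *text;            /* NUL-terminated line contents         */
} Line;

/* One undo or redo entry: replace the half-open line range
 * [first, first + removed) with the `inserted` lines.  The complementary
 * entry is the same struct with the two descriptions swapped. */
typedef struct Edit {
    size_t  first;                /* index of the first affected line     */
    size_t  removed;              /* how many lines to take out           */
    Line   *inserted;             /* lines to put in (may be NULL)        */
    size_t  inserted_count;
} Edit;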
Suppose you create just one stack (an array?) and call it 'history'. As commands are made, add their 'antidote' (or pointers to them) to the stack, and adjust a pointer/counter to the last command. As your user steps back ('undo'), replace each command with its 'antidote' (the code that put it there in the first place can be reused), so it's there for a subsequent 'redo', and reposition the counter as needed. You'll have to allocate storage for deleted text and link it to the stack position (a two-dimensional (pointer?) array, or perhaps a struct?). If your stack gets full, delete the oldest entry (it's now 'out of range') and move everything accordingly... Or... just allocate more memory... ;-)
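In outline (a sketch only, with made-up names; apply_and_invert stands for whatever code applies an edit and builds its 'antidote'), the single-stack-with-a-cursor idea could look like this:

#include <stddef.h>

typedef struct Edit Edit;          /* one reversible change, defined elsewhere     */
Edit *apply_and_invert(Edit *e);   /* hypothetical helper: apply e, return inverse */

/* A single history with a cursor: entries before the cursor can be undone,
 * entries at or after it (left over from earlier undos) can be redone. */
typedef struct History {
    Edit  **entries;
    size_t  count;                 /* number of valid entries                      */
    size_t  cursor;                /* next undo is entries[cursor - 1]             */
    size_t  capacity;
} History;

void history_undo(History *h)
{
    if (h->cursor == 0)
        return;                    /* nothing to undo */
    h->cursor--;
    /* apply the 'antidote' and store it back in the same slot for a later redo */
    h->entries[h->cursor] = apply_and_invert(h->entries[h->cursor]);
}

void history_redo(History *h)
{
    if (h->cursor == h->count)
        return;                    /* nothing to redo */
    h->entries[h->cursor] = apply_and_invert(h->entries[h->cursor]);
    h->cursor++;
}

/* On a brand-new command, truncate the redo tail (count = cursor) before pushing. */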
Just an idea...
Remember, if it works properly, it isn't wrong. Perhaps just not the most efficient or effective way of doing it...
Don't forget to clear the stack on 'save', and most importantly, release any allocated memory on 'terminate'.
Mike.
I'm implementing negamax with alpha/beta pruning and a transposition table, based on the pseudocode here, with roughly this algorithm:
NegaMax():
1. Transposition Table lookup
2. Loop through moves
2a. **Bail if I'm out of time**
2b. Make move, call -NegaMax, undo move
2c. Update bestvalue and alpha/beta; cut off if appropriate
3. Transposition table store/update
4. Return bestvalue
I'm also using iterative deepening, calling NegaMax with progressively higher depths.
My question is: when I determine I've run out of time (2a. in the beginning of move loop) what is the right thing to do? Do I bail immediately (not updating the transposition table) or do I just break the loop (saving whatever partial work I've done)?
Currently, I return null at that point, signifying that the search was canceled before "completing" that node (whether by trying every move or the alpha/beta cut). The null gets propagated up and up the stack, and each node on the way up bails by return, so step 3 never runs.
Essentially, I only store values in the TT if the node "completed". The scenario I keep seeing with the iterative deepening:
I get through depths 1-5 really quick, so the TT has a depth = 5, type = Exact entry.
The depth = 6 search is taking a long time, so I bail.
I ultimately return the best move in the transposition table, which is the move I found during the depth = 5 search. The problem is, if I start a new depth = 6 search, it feels like I'm starting it from scratch. However, if I save whatever partial results I found, I worry that I'll have corrupted my TT, potentially by overwriting the completed depth = 5 entry with an incomplete depth = 6 entry.
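For reference, here is a rough C-style sketch of the "abort without storing" pattern described above (all helper names are made up, and SCORE_ABORTED stands in for the null return):

#include <stdbool.h>
#include <limits.h>

#define SCORE_ABORTED INT_MIN    /* sentinel standing in for the "null" return */

/* Hypothetical helpers assumed to exist elsewhere. */
bool out_of_time(void);
int  move_count(void);
void make_move(int i);
void undo_move(int i);
int  evaluate(void);
void tt_store(int depth, int value);

int negamax(int depth, int alpha, int beta)
{
    /* step 1 (transposition table lookup) omitted for brevity */
    if (depth == 0)
        return evaluate();

    int best = -1000000;                       /* any "minus infinity" below real scores */
    for (int i = 0; i < move_count(); i++) {
        if (out_of_time())
            return SCORE_ABORTED;              /* 2a: bail; caller must not store this   */

        make_move(i);                          /* 2b */
        int child = negamax(depth - 1, -beta, -alpha);
        undo_move(i);
        if (child == SCORE_ABORTED)
            return SCORE_ABORTED;              /* propagate the abort up the stack       */

        int score = -child;
        if (score > best)  best = score;       /* 2c */
        if (best > alpha)  alpha = best;
        if (alpha >= beta) break;              /* alpha/beta cutoff                      */
    }

    tt_store(depth, best);                     /* step 3: only runs if the node completed */
    return best;                               /* step 4 */
}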
If the search wasn't completed, the score is inaccurate and should likely not be added to the TT. If you have a best move from the previous ply and it is still best and the score hasn't dropped significantly, you might play that.
On the other hand, if at depth 6 you discover that the opponent has a mate in 3 (oops!) or could win your queen, you might have to spend even more time to try to resolve that.
That would leave you with less time for the remaining moves (if any...), but it might be better to be slightly short on time than to get mated with plenty of time remaining. :-)
I'm looking for a faster, more efficient method of assigning data gathered from a DAQ to its proper location in a large cluster containing arrays of subclusters.
My current method 1 relies heavily on the OpenG cluster manipulation tools, but with a large data-set the performance is far too slow.
The array and cluster location of each element of data from the DAQ is determined during an initialization phase and doesn't change during acquisition.
Because the data element origin and end points are the same throughout acquisition, I would think an array of memory locations could be created and the data directly assigned to its proper place. I'm just not sure how to implement such a thing.
The following code does what you want:
For each of your cluster elements (AMC, ANLG_PM and PA) you should add a case in the string case structure; for the elements AMC and PA you will need to place a second case structure.
This is really more of a comment, but I do not have the reputation to leave those yet, so here it is:
Regarding adding cases for every possible value of Array name, is there any reason why you cannot use an enum here? Since you are placing it into a cluster anyway, I would suggest making a type-defined enum of your possible array names. That way, when you want to add or remove one, you only have to do it in one place.
You will still need to right-click on your case structures that use this enum and select Add item for every value if you are adding a value, or manually delete the obsolete value if you are removing one. I suppose some maintenance is required either way...
Approximate program behavior:
I have a map image with data associated with the map indicated by RGB index. The data has been populated into an MS Access database. I imported the information in the database into my program as an array and sorted them to go in the order I want the program to run.
I want the program to find the nearest pixel that has a different color from the incumbent pixel being compared. (Colors are stored as string attributes of object Pixel)
First question: Should I use integers to represent my colors instead of string? Would this make the comparison function run significantly faster?
In order to find the nearest pixel of a different color, the program begins with all 8 adjacent pixels around the incumbent. If a nonMatch is not found, it then continues on to the next "degree", and in this fashion it spirals out from the incumbent pixel until it hits a nonMatch. When found, the color of the nonMatch is saved as an attribute of the incumbent. After I find the nonMatch for each of the Pixels, the data is re-inserted into the database.
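To make the "degree" idea concrete, here is a small sketch in C (the real program may well be in another language; colors are assumed to already be integer codes, per question one):

#include <stdlib.h>

/* Scan outward in square "rings" of increasing degree around (x, y)
 * until a pixel with a different color code is found.  `colors` is a
 * width*height array of integer color codes; returns the index of the
 * first non-matching pixel, or -1 if none exists. */
int find_nearest_nonmatch(const int *colors, int width, int height, int x, int y)
{
    int own = colors[y * width + x];
    int max_degree = (width > height ? width : height);

    for (int degree = 1; degree <= max_degree; degree++) {
        for (int dy = -degree; dy <= degree; dy++) {
            for (int dx = -degree; dx <= degree; dx++) {
                /* only the ring itself, not the already-checked interior */
                if (abs(dx) != degree && abs(dy) != degree)
                    continue;
                int nx = x + dx, ny = y + dy;
                if (nx < 0 || ny < 0 || nx >= width || ny >= height)
                    continue;
                if (colors[ny * width + nx] != own)
                    return ny * width + nx;    /* first nonMatch in this ring */
            }
        }
    }
    return -1;   /* whole image is one color */
}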
The program accomplishes what I want in the manner I've written it, but it is very very slow. After 24 hours, I am only about 3% through with execution.
Question Two: Does my program behavior sound about right? Is this the algorithm you would use if you had to accomplish this task?
Question Three: Would it be appropriate for me to use threads in order to finish execution of the program faster? How exactly does that work? (I am brand new to threads, but know a little of the syntax)
Question Four: Would it be more "intelligent" for my program to find the nonMatch for each pixel and insert it into the database immediately after finding it? (I'm guessing this would be good with multi-threading, because while one thread is accessing the database (to insert), another can be accessing the array of pixels, a shared global variable in the program.)
Question Five: If threading is a good idea, I'm guessing I would split the records up into more manageable chunks (i.e. quarters), and have each thread run the same functions for their specified number of records? Am I close at all?
Please let me know if I can clarify or provide code samples, I just figured that this is more of a conceptual topic so do not want to overburden the post.
1.) Yes, integers compare much faster than strings. Additionally, they use much less memory.
2.) I would adapt the algorithm in this way:
E.g. #1: Let's say, for pixel (87,23), you found the nearest nonMatch to be (88,24) at degree=1; you can immediately invert the relation and record that the nearest nonMatch to (88,24) is (87,23). At degree=1 you finished 2 pixels with 1 search.
E.g. #2: Let's say, for pixel (17,18), you found the nearest nonMatch to be (17,20) at degree=2. You can immediately record that all pixels that border on (16,19), (17,19) and (18,19) have the nearest nonMatch (17,20) at degree=1, and that one of them is the nearest nonMatch to (17,20). At degree=2 (or higher), you finished 5 pixels with 1 search.
3.) Using threads is a double-edged sword: you can do searches in parallel, but you need locking if you write to your array. So this depends on how many CPU cores you can throw at the problem; if it's 3 or more, threads will surely speed up the search.
4.) The results from 2.) make it necessary to mark a pixel as "done" in your array, as you might have finished up to 5 pixels with 1 search. I recommend you put those into a queue and use a dedicated thread to write the queue back into the database: MS Access can't handle concurrent updates, so a single database writer thread looks like a good idea.
5.) I recommend you NOT chunk up the array: you will run into problems with pixels on the edges of a chunk having their nearest nonMatch in a different chunk. Instead, if you use e.g. 4 threads, let them run: 1) from the NW corner east, then south; 2) from the SE corner west, then north; 3) from the NE corner south, then west; 4) from the SW corner north, then east.
Yes, using an integer would make it much faster.
You can reuse the work you have done for the previous pixel. E.g. if (a,b) is the nearest non-equal pixel of (x,y), it is likely that points around (x,y) also have (a,b) as their nearest non-equal pixel.
You can use different threads to work on different pixels instead of dividing up the search for one pixel.
IMHO, steps 1&2 should make your program much faster and you might not need multi-threading.
Yes, I'd convert colour strings to Integers for speed, or even Color structures if you intend to display them on the screen.
Don't work directly with the database if you can avoid it. Copy the necessary data out of the database into an array before you start, and copy your results back when you're finished.
I need to store items of varying length in a circular queue in a flash chip. Each item will have its encapsulation so I can figure out how big it is and where the next item begins. When there are enough items in the buffer, it will wrap to the beginning.
What is a good way to store a circular queue in a flash chip?
There is a possibility of tens of thousands of items I would like to store. So starting at the beginning and reading to the end of the buffer is not ideal because it will take time to search to the end.
Also, because it is circular, I need to be able to distinguish the first item from the last.
The last problem is that this is stored in flash, so erasing each block is both time consuming and can only be done a set number of times for each block.
First, block management:
Put a small header at the start of each block. The main thing you need in order to keep track of the "oldest" and "newest" blocks is a block number, which simply increments modulo k. k must be greater than your total number of blocks. Ideally, make k less than the maximum value the field can hold (e.g. 0xFFFF), so you can easily tell which blocks are erased.
At start-up, your code reads the headers of each block in turn, and locates the first and last blocks in the sequence in which n(i+1) = (n(i) + 1) modulo k. Take care not to get confused by erased blocks (block number is e.g. 0xFFFF) or data that is somehow corrupted (e.g. an incomplete erase).
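A minimal sketch of such a block header and the start-up scan, in C (field widths, constants and the flash_read_header call are assumptions to be adapted to the real part):

#include <stdint.h>
#include <stdbool.h>

#define NUM_BLOCKS   64u
#define SEQ_ERASED   0xFFFFu      /* what an erased header field reads back as      */
#define SEQ_MODULUS  0xFFF0u      /* k: greater than NUM_BLOCKS, less than 0xFFFF   */

typedef struct {
    uint16_t seq;                 /* block sequence number, incremented modulo k    */
} BlockHeader;

/* Hypothetical driver call, assumed to exist: read block b's header from flash. */
void flash_read_header(unsigned b, BlockHeader *hdr);

/* Blocks are used in physical order, wrapping around, so in a healthy log the
 * sequence numbers of consecutive blocks differ by exactly 1 (mod k).  The
 * newest block is the one whose physical successor does not continue the chain. */
int find_newest_block(void)
{
    for (unsigned b = 0; b < NUM_BLOCKS; b++) {
        BlockHeader cur, next;
        flash_read_header(b, &cur);
        flash_read_header((b + 1) % NUM_BLOCKS, &next);

        if (cur.seq == SEQ_ERASED)
            continue;                                  /* skip erased blocks       */
        if (next.seq != (uint16_t)((cur.seq + 1) % SEQ_MODULUS))
            return (int)b;                             /* chain breaks after b     */
    }
    return -1;                                         /* empty or corrupted log   */
}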
Within each block
Each block initially starts empty (each byte is 0xFF). Each record is simply written one after the other. If you have fixed-size records, then you can access it with a simple index. If you have variable-size records, then to read it you have to scan from the start of the block, linked-list style.
If you want to have variable-size records, but avoid linear scan, then you could have a well defined header on each record. E.g. use 0 as a record delimiter, and COBS-encode (or COBS/R-encode) each record. Or use a byte of your choice as a delimiter, and 'escape' that byte if it occurs in each record (similar to the PPP protocol).
At start-up, once you know your latest block, you can do a linear scan for the latest record. Or if you have fixed-size records or record delimiters, you could do a binary search.
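For fixed-size records, the binary search for "one past the latest record" could be sketched like this (slot_is_erased is a hypothetical helper; it assumes erased bytes read back as 0xFF):

#include <stdbool.h>

/* Hypothetical helper: returns true if the record slot at `index`
 * in the given block is still erased (all bytes 0xFF). */
bool slot_is_erased(unsigned block, unsigned index);

/* Binary search for the first erased slot, i.e. one past the latest record.
 * Valid because records are written in order, so the block is a run of used
 * slots followed by a run of erased slots. */
unsigned find_write_frontier(unsigned block, unsigned slots_per_block)
{
    unsigned lo = 0, hi = slots_per_block;   /* answer lies in [lo, hi] */
    while (lo < hi) {
        unsigned mid = lo + (hi - lo) / 2;
        if (slot_is_erased(block, mid))
            hi = mid;                        /* frontier is at mid or before */
        else
            lo = mid + 1;                    /* frontier is after mid        */
    }
    return lo;
}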
Erase scheduling
For some Flash memory chips, erasing a block can take significant time--e.g. 5 seconds. Consider scheduling an erase as a background task a bit "ahead of time". E.g. when the current block is x% full, then start erasing the next block.
Record numbering
You may want to number records. The way I've done it in the past is to put, in the header of each block, the record number of the first record. Then the software has to keep count of the numbers of each record within the block.
Checksum or CRC
If you want to detect corrupted data (e.g. incomplete writes or erases due to unexpected power failure), then you can add a checksum or CRC to each record, and perhaps to the block header. Note the block header CRC would only cover the header itself, not the records, since it could not be re-written when each new record is written.
Keep a separate block that contains a pointer to the start of the first record and the end of the last record. You can also keep more information like the total number of records, etc.
Until you initially run out of space, adding records is as simple as writing them to the end of the buffer and updating the tail pointer.
As you need to reclaim space, delete enough records so that you can fit your current record. Update the head pointer as you delete records.
You'll need to keep track of how much extra space has been freed. If you keep a pointer to end of the last record, the next time you need to add a record, you can compare that with the pointer to the first record to determine if you need to delete any more records.
Also, if this is NAND, you or the flash controller will need to do deblocking and wear-leveling, but that should all be at a lower layer than allocating space for the circular buffer.
I think I get it now. It seems like your largest issue will be, having filled the available space for recording, what happens next? The new data should overwrite the oldest data, which is I believe what you mean by a circular buffer. But since the data is not fixed length you may overwrite more than one record.
I'm assuming that the amount of variability in length is high enough that padding everything out to a fixed length isn't an option.
Your write segment needs to keep track of the address that represents the start of the next record to write. If you know the size of a block to write ahead of time, you can tell if you are going to end up at the end of the logical buffer and start over at '0'. I wouldn't split a record up with some at the end and some at the beginning.
A separate register can track the beginning; this is the oldest data that hasn't been overwritten yet. If you went to read out the data this is where you would start.
The data writer would then check, given the write-start address and the length of data it's about to commit, whether it should bump the read register. Bumping means examining the first (oldest) record to see its length and advancing to the next record, repeating until there is enough room to write whatever the data is. There will probably be a gap of junk data between the end of the written data and the start of the oldest data. But this way you are just writing an address or two as overhead, and not rearranging blocks.
At least, that's probably what I would do. HTH
The "circular" in a flash can be done on basis of block size, which means that you must declare how much blocks of the flash you allocate for this buffer.
The actual size of the buffer will be at each particular time between n-1 (n is the number of blocks) and n.
Each block should start with an header that contains sequential number or timestamp that could be used to determine which block is older than the other.
Each Item encapsulated with an header and a footer. the default header contains whatever you want but according to this header you must know the size of the item. The default footer is 0xFFFFFFFF. This value indicates a null termination.
In your RAM you must save a pointer to the oldest block and the latest block and pointer to the oldest item and latest item. On power up you go over all blocks find the relevant blocks and load this members.
When you want to store a new item, you check if the latest block contain enough space for this item. If it does you save the item at the end of the previous item and the change the previous footer to point to this item. If it does not contain enough space you need to erase the oldest block. Before you erase this block change the oldest block members (RAM) to point on the next block and the oldest item to point on the first item in this block.
Then you can save the new item in this block and change the footer of the latest item to point this item.
I know that the explanation may sounds complicated but the process is very simple and if you write it correct you can make it even power fail safe (always keep in you mind the order of the writes).
Pay attention that the circularity of the buffer is not saved in the flash but the flash only contains a blocks with items that you can decide according to the blocks headers and items headers what is the order of these items
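A rough sketch of the item layout described above (sizes and names are assumptions; the key point is that the footer starts out as all 1 bits, so on NOR-style flash it can later be overwritten in place without an erase):

#include <stdint.h>

#define FOOTER_NULL 0xFFFFFFFFu   /* erased flash reads as all 1s: "no next item yet" */

/* Written immediately before each item's payload. */
typedef struct {
    uint16_t length;              /* payload size in bytes                            */
    uint16_t flags;               /* whatever you like (type, CRC, ...)               */
} ItemHeader;

/* Written immediately after each item's payload.  It is left as FOOTER_NULL
 * when the item is stored; when the *next* item is written, this word is
 * overwritten with that item's offset.  This works because programming
 * NOR flash can only clear bits (1 -> 0), so 0xFFFFFFFF can always be
 * rewritten to any other value without an erase. */
typedef uint32_t ItemFooter;      /* offset of the next item, or FOOTER_NULL          */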
I see three options:
Option 1 is to pad everything out to the same size. This is simple: store a pointer to the head and tail of the buffer so you know where to write and where to start reading from, and use the size of each object to get the offset to the next. This means you need to traverse the buffer as you would a linked list, i.e. it's slow if you need item 5000.
Option 2 is to store only pointers to the real data in the circular buffer; that way, when you loop around, you don't have to deal with size mismatches. If you store the real data in a circular buffer and don't pad it out, you could run into a situation where you're overwriting multiple items with 1 new data object, which I assume is not OK.
Store the actual data elsewhere in flash. Most flash will have some sort of wear leveling built in; if so, you don't need to worry about overwriting the same location multiple times, the IC will figure out where to actually store it on chip, so just write to the next available free space.
This means you need to pick a maximum size for the circular buffer. How you do this depends on the data's variability. If the size of the data doesn't change much, say by only a few bytes, then you should just pad it out and use option 1. If the size changes wildly and unpredictably, choose the largest size it could be, figure out how many objects of that size would fit in your flash, and use that as the max number of entries in the buffer. This means you waste a bunch of space.
Option 3: if the object can really be any size, you're at the point where you should just use a file system. Name the files in order and loop back when you're full, keeping in mind that if your new entry is large you may have to delete multiple old entries to fit it in. This is really just an extension of option 2, as option 2 is in many ways a simple file system.
I have a collection of objects in a database. Images in a photo gallery, products in a catalog, chapters in a book, etc. Each object is represented as a row. I want to be able to arbitrarily order these images, storing that ordering in the database so when I display the objects, they will be in the right order.
For example, let's say I'm writing a book, and each chapter is an object. I write my book, and put the chapters in the following order:
Introduction, Accessibility, Form vs. Function, Errors, Consistency, Conclusion, Index
It goes to the editor, and comes back with the following suggested order:
Introduction, Form, Function, Accessibility, Consistency, Errors, Conclusion, Index
How can I store this ordering in the database in a robust, efficient way?
I've had the following ideas, but I'm not thrilled with any of them:
Array. Each row has an ordering ID, when order is changed (via a removal followed by an insertion), the order IDs are updated. This makes retrieval easy, since it's just ORDER BY, but it seems easy to break.
// REMOVAL
UPDATE ... SET orderingID=NULL WHERE orderingID=removedID
UPDATE ... SET orderingID=orderingID-1 WHERE orderingID > removedID
// INSERTION
UPDATE ... SET orderingID=orderingID+1 WHERE orderingID > insertionID
UPDATE ... SET orderingID=insertionID WHERE ID=addedID
Linked list. Each row has a column for the id of the next row in the ordering. Traversal seems costly here, though there may be some way to use ORDER BY that I'm not thinking of.
Spaced array. Set the orderingID (as used in #1) to be large, so the first object is 100, the second is 200, etc. Then when an insertion happens, you just place it at (objectBefore + objectAfter)/2. Of course, this would need to be rebalanced occasionally, so you don't have things too close together (even with floats, you'd eventually run into rounding errors).
None of these seem particularly elegant to me. Does anyone have a better way to do it?
Another alternative would be (if your RDBMS supports it) to use columns of type array. While this breaks the normalization rules, it can be useful in situations like this. One database I know of that has arrays is PostgreSQL.
The acts_as_list mixin in Rails handles this basically the way you outlined in #1. It looks for an INTEGER column called position (whose name you can override, of course) and uses that to do an ORDER BY. When you want to re-order things, you update the positions. It has served me just fine every time I've used it.
As a side note, you can remove the need to always re-position on INSERTs/DELETEs by using sparse numbering -- kind of like BASIC back in the day... you can number your positions 10, 20, 30, etc., and if you need to insert something between 10 and 20 you just insert it with a position of 15. Likewise, when deleting you can just delete the row and leave the gap. You only need to re-number when you actually change the order, or when you try to do an insert and there is no appropriate gap to insert into.
Of course depending on your particular situation (e.g. whether you have the other rows already loaded into memory or not) it may or may not make sense to use the gap approach.
If the objects aren't heavily keyed by other tables, and the lists are short, deleting everything in the domain and just re-inserting the correct list is the easiest. But that's not practical if the lists are large and you have lots of constraints to slow down the delete. I think your first method is really the cleanest. If you run it in a transaction you can be sure nothing odd happens while you're in the middle of the update to screw up the order.
Just a thought considering option #1 vs #3: doesn't the spaced array option (#3) only postpone the problem of the normal array (#1)? Whatever algorithm you choose, either it's broken, and you'll run into problems with #3 later, or it works, and then #1 should work just as well.
I did this in my last project, but it was for a table that only occasionally needed to be specifically ordered, and wasn't accessed too often. I think the spaced array would be the best option, because reordering would be cheapest in the average case, just involving a change to one value and a query on two.
Also, I would imagine ORDER BY would be pretty heavily optimized by database vendors, so leveraging that function would be advantageous for performance as opposed to the linked list implementation.
Use a floating point number to represent the position of each item:
Item 1 -> 0.0
Item 2 -> 1.0
Item 3 -> 2.0
Item 4 -> 3.0
You can place any item between any other two items by simple bisection:
Item 1 -> 0.0
Item 4 -> 0.5
Item 2 -> 1.0
Item 3 -> 2.0
(Moved item 4 between items 1 and 2).
The bisection process can continue almost indefinitely due to the way floating point numbers are encoded in a computer system.
Item 4 -> 0.5
Item 1 -> 0.75
Item 2 -> 1.0
Item 3 -> 2.0
(Move item 1 to the position just after Item 4)
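As a rough check of how far the bisection between two adjacent whole-number positions can go before the midpoint stops being distinct, a tiny C program (assuming IEEE doubles) gives around 52 steps:

#include <stdio.h>

int main(void)
{
    double lo = 1.0, hi = 2.0;
    int steps = 0;
    /* Repeatedly insert between lo and hi by bisection until the
     * midpoint is no longer distinct from one of the endpoints. */
    for (;;) {
        double mid = (lo + hi) / 2.0;
        if (mid == lo || mid == hi)
            break;
        hi = mid;           /* keep inserting just after lo */
        steps++;
    }
    printf("distinct bisections: %d\n", steps);   /* roughly 52 for IEEE doubles */
    return 0;
}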
Since I've mostly run into this with Django, I've found this solution to be the most workable. It seems that there isn't any "right way" to do this in a relational database.
I'd do a consecutive number, with a trigger on the table that "makes room" for a priority if it already exists.
I had this problem as well. I was under heavy time pressure (aren't we all) and I went with option #1, and only updated rows that changed.
If you swap item 1 with item 10, just do two updates to update the order numbers of item 1 and item 10. I know it is algorithmically simple, and it is O(n) worst case, but that worst case is when you have a total permutation of the list. How often is that going to happen? That's for you to answer.
I had the same issue and have probably spent at least a week concerning myself about the proper data modeling, but I think I've finally got it. Using the array datatype in PostgreSQL, you can store the primary key of each ordered item and update that array accordingly using insertions or deletions when your order changes. Referencing a single row will allow you to map all your objects based on the ordering in the array column.
It's still a bit choppy of a solution but it will likely work better than option #1, since option 1 requires updating the order number of all the other rows when ordering changes.
Scheme #1 and Scheme #3 have the same complexity in every operation except INSERT writes. Scheme #1 has O(n) writes on INSERT and Scheme #3 has O(1) writes on INSERT.
For every other database operation, the complexity is the same.
Scheme #2 should not even be considered because its DELETE requires O(n) reads and writes. Scheme #1 and Scheme #3 have O(1) DELETE for both read and write.
New method
If your elements have a distinct parent element (i.e. they share a foreign key row), then you can try the following ...
Django offers a database-agnostic solution to storing lists of integers within CharField(). One drawback is that the max length of the stored string can't be greater than max_length, which is DB-dependent.
In terms of complexity, this would give Scheme #1 O(1) writes for INSERT, because the ordering information would be stored as a single field in the parent element's row.
Another drawback is that a JOIN to the parent row is now required to update ordering.
https://docs.djangoproject.com/en/dev/ref/validators/#django.core.validators.validate_comma_separated_integer_list