Is it safe to share an array between threads?

Is it safe to share an array between promises, as I do in the following code?
#!/usr/bin/env perl6
use v6;

sub my_sub ( $string, $len ) {
    my ( $s, $l );
    if $string.chars > $len {
        $s = $string.substr( 0, $len );
        $l = $len;
    }
    else {
        $s = $string;
        $l = $s.chars;
    }
    return $s, $l;
}
my @orig = <length substring character subroutine control elements now promise>;
my $len = 7;
my @copy;
my @length;
my $cores = 4;
my $p = @orig.elems div $cores;
my @vb = ( 0..^$cores ).map: { [ $p * $_, $p * ( $_ + 1 ) ] };
@vb[@vb.end][1] = @orig.elems;
my @promise;
for @vb -> $r {
    @promise.push: start {
        for $r[0]..^$r[1] -> $i {
            ( @copy[$i], @length[$i] ) = my_sub( @orig[$i], $len );
        }
    };
}
await @promise;

It depends how you define "array" and "share". So far as array goes, there are two cases that need to be considered separately:
Fixed size arrays (declared my @a[$size]); this includes multi-dimensional arrays with fixed dimensions (such as my @a[$xs, $ys]). These have the interesting property that the memory backing them never has to be resized.
Dynamic arrays (declared my @a), which grow on demand. These are, under the hood, actually using a number of chunks of memory over time as they grow.
So far as sharing goes, there are also three cases:
The case where multiple threads touch the array over its lifetime, but only one can ever be touching it at a time, due to some concurrency control mechanism or the overall program structure. In this case the arrays are never shared in the sense of "concurrent operations using the arrays", so there's no possibility to have a data race.
The read-only, non-lazy case. This is where multiple concurrent operations access a non-lazy array, but only to read it.
The read/write case (including when reads actually cause a write because the array has been assigned something that demands lazy evaluation; note this can never happen for fixed size arrays, as they are never lazy).
Then we can summarize the safety as follows:
                     | Fixed size | Variable size |
---------------------+------------+---------------+
 Read-only, non-lazy | Safe       | Safe          |
 Read/write or lazy  | Safe *     | Not safe      |
The * indicates the caveat that, while it's safe from Perl 6's point of view, you of course have to make sure you're not doing conflicting things with the same indices.
So in summary, you can safely share fixed size arrays and assign to their elements from different threads "no problem" (but beware false sharing, which might make you pay a heavy performance penalty for doing so). For dynamic arrays, it is only safe if they will only be read from during the period they are being shared, and even then only if they're not lazy (though given array assignment is mostly eager, you're not likely to hit that situation by accident). Writing, even to different elements, risks data loss, crashes, or other bad behavior due to the growing operation.
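For illustration, here is a minimal sketch of the safe pattern (assuming a recent Rakudo; the names and the worker count are arbitrary): a shaped array is pre-sized, and each worker writes only to its own disjoint indices:

my $n = 1_000;
my @out[$n];                  # shaped (fixed size) array: its backing memory is never resized
my @workers = (^4).map: -> $w {
    start {
        loop ( my $i = $w; $i < $n; $i += 4 ) {   # each worker owns indices $w, $w+4, ...
            @out[$i] = $i * $i;
        }
    }
};
await @workers;
say @out[^10];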
So, considering the original example, we see that my @copy; and my @length; are dynamic arrays, so we must not write to them in concurrent operations. However, that is exactly what happens, so the code is not safe.
The other posts already here do a decent job of pointing in better directions, but none nailed the gory details.

Just have the code that is marked with the start statement prefix return the values so that Perl 6 can handle the synchronization for you. Which is the whole point of that feature.
Then you can wait for all of the Promises, and get all of the results using an await statement.
my @promise = do for @vb -> $r {
    start
        do # to have the 「for」 block return its values
            for $r[0]..^$r[1] -> $i {
                $i, my_sub( @orig[$i], $len )
            }
}

my @results = await @promise;

for @results -> ($i, $copy, $len) {
    @copy[$i] = $copy;
    @length[$i] = $len;
}
The start statement prefix is only sort-of tangentially related to parallelism.
When you use it you are saying, “I don't need these results right now, but probably will later”.
That is the reason it returns a Promise (asynchrony), and not a Thread (concurrency).
The runtime is allowed to delay actually running that code until you finally ask for the results, and even then it could just do all of them sequentially in the same thread.
If the implementation actually did that, it could result in something like a deadlock if you instead poll the Promise by continually calling its .status method, waiting for it to change from Planned to Kept or Broken, and only then ask for its result.
This is part of the reason the default scheduler will start working on the code of any Promise if it has any spare threads.
I recommend watching jnthn's talk "Parallelism, Concurrency, and Asynchrony in Perl 6" (slides).

This answer reflects my understanding of the situation on MoarVM; I'm not sure what the state of the art is on the JVM backend (or the JavaScript backend, for what it's worth).
Reading a scalar from several threads can be done safely.
Modifying a scalar from several threads can be done without having to fear a segfault, but you may miss updates:
$ perl6 -e 'my $i = 0; await do for ^10 { start { $i++ for ^10000 } }; say $i'
46785
The same applies to more complex data structures like arrays (e.g. missing values being pushed) and hashes (missing keys being added).
So, if you don't mind missing updates, changing shared data structures from several threads should work. If you do mind missing updates, which I think is what you generally want, you should look at setting up your algorithm in a different way, as suggested by @Zoffix Znet and @raiph.
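For the simple counter above, recent Rakudo releases (2017.09 and later, an assumption worth checking for your version) also provide atomic integers, which make the increments safe without restructuring the algorithm; a minimal sketch:

my atomicint $i = 0;
await do for ^10 { start { $i⚛++ for ^10000 } };
say $i;   # reliably 100000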

No.
Seriously. Other answers seem to make too many assumptions about the implementation, none of which are tested by the spec.


How to avoid usize going negative?

I'm translating a chunk (2000 lines) of proprietary C code into Rust. In C, it is common to run a pointer, array index, etc. down, for as long as it is non-negative. In Rust, simplified to the bone, it would look something like:
while i >= 0 && more_conditions {
    more_work;
    i -= 1;
}
Of course, when i is usize, you get an underflow from the subtraction. I have learned to work around this by using for loops with .rev(), offsetting my indexes by one, or using a different type and casting with as usize, etc.
Usually it works, and usually I can make it legible, but the code I'm modifying is chock-full of indexes running towards each other, eventually compared with i_low > i_high.
Something like (in Rust)
loop {
    while condition1(i_low) { i_low += 1; }
    while condition2(i_high) { i_high -= 1; }
    if i_low > i_high { return something; }
    do_something_else;
}
Every now and then this panics, as i_high runs past 0.
I have been inserting a lot of i_high >= 0 && checks in the code, and it has become a lot less readable.
How do experienced Rust programmers avoid usize variables going to -1?
for loops? for i in (0..size).rev()
casting? i as usize, after checking for i < 0
offsetting your variable by one, and using i-1 when safe?
extra conditionals?
catching exceptions?
Or do you just eventually learn to write programs around these situations?
Clarification: The C code is not broken - it has supposedly been in production for ten years, structuring video segments on multiple servers 24/7. It just doesn't follow Rust conventions - it often returns -1 as an index, it recurses with -1 for the low index of an array to process, and indexes go negative all the time. All of these are handled before problems occur - ugly, but functional. Something like:
incident_segment = detect_incident(array, start, end);
attach(array, incident_segment);
store(array, start, incident_segment - 1);
process(array, incident_segment + 1, end);
In the above code, every one of the three resulting calls may be getting a segment index that's -1 (attach, store) or out of bounds (process). It's handled, but after the call.
My Rust code appears to work as well. As a matter of fact, in order to deal with the negative usize, I added additional logic that pruned a number of recursions, so it runs about as fast as the C code (apparently faster, but that's also because I distributed the output across multiple drives).
The issue is that the client does not want a full rewrite, and wants the 'native' programmers to be able to check the two programs against each other. Based on the answers so far, I'm thinking that using i64 and casting/shadowing as needed may be the best way to produce code that's easy to read for the 'natives'. Which I personally do not have to like...
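For what that might look like, here is a hypothetical sketch (the function and its conditions are invented for illustration): keep the C-style signed logic in i64, and cast to usize only at the point of array access:

// hypothetical example: two indices running towards each other,
// with -1 kept representable by doing the index arithmetic in i64
fn scan(a: &[i32]) -> i64 {
    let mut i_low: i64 = 0;
    let mut i_high: i64 = a.len() as i64 - 1; // may legitimately end up at -1
    loop {
        while i_low < a.len() as i64 && a[i_low as usize] < 0 {
            i_low += 1;
        }
        while i_high >= 0 && a[i_high as usize] >= 0 {
            i_high -= 1;
        }
        if i_low > i_high {
            return i_high; // -1 mirrors the C convention for "nothing found"
        }
        i_low += 1;
        i_high -= 1;
    }
}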
If you want to do it idiomatically:
for j in (0..=i).rev() {
    if conditions {
        break;
    }
    // use j as your new i here
}
Note the use of ..=i in the iterator; this means that it will actually iterate including i: [0, 1, 2, ..., i-1, i]. Otherwise, you end up with [0, 1, 2, ..., i-2, i-1].
Otherwise, here is the code:
while (i as isize - 1) != -2 && more_conditions {
    more_work;
    i -= 1;
}
playground
I'd probably start by using saturating_sub (and _add for parallel structure):
while condition1(i_low) { i_low = i_low.saturating_add(1); }
while condition2(i_high) { i_high = i_high.saturating_sub(1); }
You need to be careful to ensure that your logic handles the value saturating at zero. You could also use more C-like semantics with wrapping_sub.
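A quick, self-contained check of that boundary behaviour (a tiny example, not from the original answer):

fn main() {
    let i: usize = 0;
    assert_eq!(i.saturating_sub(1), 0);        // clamps at zero instead of panicking
    assert_eq!(i.wrapping_sub(1), usize::MAX); // C-like wrap-around semantics
}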
Truthfully, there's no one-size-fits-all solution. Many times, complicated logic becomes simpler if you abstract it a bit, or turn it slightly sideways. You haven't provided any concrete examples, so we cannot give any useful advice. I solve way too many problems with iterators, so that's often my first solution.
catching exceptions
Absolutely not. That's exceedingly inefficient and non-idiomatic.

Kotlin: Why is Sequence more performant in this example?

Currently, I am looking into Kotlin and have a question about Sequences vs. Collections.
I read a blog post about this topic and there you can find these code snippets:
List implementation:
val list = generateSequence(1) { it + 1 }
    .take(50_000_000)
    .toList()

measure {
    list
        .filter { it % 3 == 0 }
        .average()
}
// 8644 ms
Sequence implementation:
val sequence = generateSequence(1) { it + 1 }
    .take(50_000_000)

measure {
    sequence
        .filter { it % 3 == 0 }
        .average()
}
// 822 ms
The point here is that the Sequence implementation is about 10x faster.
However, I do not really understand WHY that is. I know that with a Sequence you get "lazy evaluation", but I cannot find any reason why that helps reduce the processing in this example.
However, here I know why a Sequence is generally faster:
val result = sequenceOf("a", "b", "c")
    .map {
        println("map: $it")
        it.toUpperCase()
    }
    .any {
        println("any: $it")
        it.startsWith("B")
    }
Because with a Sequence you process the data "vertically": when the first element that starts with "B" is found, you don't have to map the rest of the elements. It makes sense here.
So, why is it also faster in the first example?
Let's look at what those two implementations are actually doing:
The List implementation first creates a List in memory with 50 million elements.  This will take a bare minimum of 200MB, since an integer takes 4 bytes.
(In fact, it's probably far more than that.  As Alexey Romanov pointed out, since it's a generic List implementation and not an IntList, it won't be storing the integers directly, but will be ‘boxing’ them — storing references to Int objects.  On the JVM, each reference could be 8 or 16 bytes, and each Int could take 16, giving 1–2GB.  Also, depending how the List gets created, it might start with a small array and keep creating larger and larger ones as the list grows, copying all the values across each time, using more memory still.)
Then it has to read all the values back from the list, filter them, and create another list in memory.
Finally, it has to read all those values back in again, to calculate the average.
The Sequence implementation, on the other hand, doesn't have to store anything!  It simply generates the values in order, and as it does each one it checks whether it's divisible by 3 and if so includes it in the average.
(That's pretty much how you'd do it if you were implementing it ‘by hand’.)
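For illustration, a rough sketch of what that hand-written version might look like (variable names invented; not code from the original post):

// what the Sequence pipeline effectively boils down to:
// no intermediate lists, just one pass over the generated values
var sum = 0.0
var count = 0
for (i in 1..50_000_000) {
    if (i % 3 == 0) {
        sum += i
        count++
    }
}
val average = sum / count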
You can see that in addition to the divisibility checking and average calculation, the List implementation is doing a massive amount of memory access, which will take a lot of time.  That's the main reason it's far slower than the Sequence version, which doesn't!
Seeing this, you might ask why we don't use Sequences everywhere…  But this is a fairly extreme example.  Setting up and then iterating the Sequence has some overhead of its own, and for smallish lists that can outweigh the memory overhead.  So Sequences only have a clear advantage in cases when the lists are very large, are processed strictly in order, there are several intermediate steps, and/or many items are filtered out along the way (especially if the Sequence is infinite!).
In my experience, those conditions don't occur very often.  But this question shows how important it is to recognise them when they do!
Leveraging lazy evaluation avoids creating intermediate objects that are irrelevant to the end goal.
Also, the benchmarking method used in the mentioned article is not super accurate. Try to repeat the experiment with JMH.
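A minimal sketch of what a JMH version might look like (class and method names are made up here, and it assumes the JMH annotations are on the classpath):

import org.openjdk.jmh.annotations.Benchmark
import org.openjdk.jmh.annotations.Scope
import org.openjdk.jmh.annotations.State

@State(Scope.Benchmark)
open class AverageBenchmark {
    // built once per benchmark instance, outside the measured methods
    private val list = generateSequence(1) { it + 1 }.take(50_000_000).toList()

    @Benchmark
    fun listAverage(): Double =
        list.filter { it % 3 == 0 }.average()

    @Benchmark
    fun sequenceAverage(): Double =
        generateSequence(1) { it + 1 }
            .take(50_000_000)
            .filter { it % 3 == 0 }
            .average()
}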
Initial code produces a list containing 50_000_000 objects:
val list = generateSequence(1) { it + 1 }
    .take(50_000_000)
    .toList()
then iterates through it and creates another list containing a subset of its elements:
.filter { it % 3 == 0 }
... and then proceeds with calculating the average:
.average()
Using sequences allows you to avoid doing all those intermediate steps. The code below doesn't produce 50_000_000 elements; it's just a representation of that 1...50_000_000 sequence:

val sequence = generateSequence(1) { it + 1 }
    .take(50_000_000)

Adding a filter to it doesn't trigger the calculation either, but derives a new sequence from the existing one (3, 6, 9, ...):
.filter { it % 3 == 0 }
and eventually, a terminal operation is called that triggers the evaluation of the sequence and the actual calculation:
.average()
Some relevant reading:
Kotlin: Beware of Java Stream API Habits
Kotlin Collections API Performance Antipatterns

Kotlin stdlib operations vs for loops

I wrote the following code:
val src = (0 until 1000000).toList()
val dest = ArrayList<Double>(src.size / 2 + 1)
for (i in src) {
    if (i % 2 == 0) dest.add(Math.sqrt(i.toDouble()))
}
IntelliJ (in my case Android Studio) is asking me if I want to replace the for loop with operations from the stdlib. This results in the following code:
val src = (0 until 1000000).toList()
val dest = ArrayList<Double>(src.size / 2 + 1)
src.filter { it % 2 == 0 }
    .mapTo(dest) { Math.sqrt(it.toDouble()) }
Now I must say, I like the changed code. I find it easier to write than for loops when I come across similar situations. However, upon reading what the filter function does, I realized that this is a lot slower than the for loop. The filter function creates a new list containing only the elements from src that match the predicate. So there is one more list created and one more loop in the stdlib version of the code. Of course, for small lists it might not be important, but in general this does not sound like a good alternative. Especially if one chains more methods like this, you can get a lot of additional loops that could be avoided by writing a for loop.
My question is: what is considered good practice in Kotlin? Should I stick to for loops, or am I missing something and it does not work as I think it works?
If you are concerned about performance, what you need is Sequence. For example, your above code will be
val src = (0 until 1000000).toList()
val dest = ArrayList<Double>(src.size / 2 + 1)
src.asSequence()
    .filter { it % 2 == 0 }
    .mapTo(dest) { Math.sqrt(it.toDouble()) }
In the above code, filter returns another Sequence, which represents an intermediate step. Nothing is really created yet; there is no object or array creation (except a new Sequence wrapper). Only when mapTo, a terminal operator, is called is the resulting collection created.
If you have learned Java 8 streams, you may find the above explanation somewhat familiar. Actually, Sequence is roughly the Kotlin equivalent of a Java 8 Stream. They share a similar purpose and performance characteristics. The only difference is that Sequence isn't designed to work with ForkJoinPool, and is thus a lot easier to implement.
When there are multiple steps involved or the collection may be large, it's suggested to use Sequence instead of plain .filter {...}.mapTo{...}. I also suggest you use the Sequence form instead of your imperative form because it's easier to understand. The imperative form may become complex, and thus hard to understand, when there are 5 or more steps involved in the data processing. If there is just one step, you don't need a Sequence, because it just creates garbage and gives you nothing useful.
You're missing something. :-)
In this particular case, you can use an IntProgression:
val progression = 0 until 1_000_000 step 2
You can then create your desired list of square roots in various ways:
// may make the list larger than necessary
// its internal array is copied each time the list grows beyond its capacity
// code is very straightforward
progression.map { Math.sqrt(it.toDouble()) }
// will make the list the exact size needed
// no copies are made
// code is more complicated
progression.mapTo(ArrayList(progression.last / 2 + 1)) { Math.sqrt(it.toDouble()) }
// will make the list the exact size needed
// a single intermediate list is made
// code is minimal and makes sense
progression.toList().map { Math.sqrt(it.toDouble()) }
My advice would be to choose whichever coding style you prefer. Kotlin is both an object-oriented and a functional language, meaning both of your propositions are correct.
Usually, functional constructs favor readability over performance; however, in some cases, procedural code will be more readable. You should try to stick with one style as much as possible, but don't be afraid to switch some code if you feel like it's better suited to your constraints, whether that's readability, performance, or both.
The converted code does not need the manual creation of the destination list, and can be simplified to:
val src = (0 until 1000000).toList()
val dest = src.filter { it % 2 == 0 }
    .map { Math.sqrt(it.toDouble()) }
And as mentioned in the excellent answer by @glee8e, you can use a sequence to do a lazy evaluation. The simplified code for using a sequence:

val src = (0 until 1000000).toList()
val dest = src.asSequence()          // change to lazy
    .filter { it % 2 == 0 }
    .map { Math.sqrt(it.toDouble()) }
    .toList()                        // create the final list

Note that the toList() at the end changes from a sequence back to a final list, which is the only copy made during the processing. You can omit that step to remain a sequence.
It is important to highlight the comments by @hotkey saying that you should not always assume that another iteration or a copy of a list causes worse performance than lazy evaluation. @hotkey says:
Sometimes several loops, even if they copy the whole collection, show good performance because of good locality of reference. See: Kotlin's Iterable and Sequence look exactly same. Why are two types required?
And excerpted from that link:
... in most cases it has good locality of reference thus taking advantage of CPU cache, prediction, prefetching etc. so that even multiple copying of a collection still works good enough and performs better in simple cases with small collections.
@glee8e says that there are similarities between Kotlin sequences and Java 8 streams; for detailed comparisons see: What Java 8 Stream.collect equivalents are available in the standard Kotlin library?

What is the quickest way to iterate through an Iterator in reverse

Let's say I'd like to iterate through a generic iterator in reverse, without knowing about the internals of the iterator, essentially not cheating via untyped magic, and assuming this could be any type of iterable which serves an iterator; can we optimise the reversal of an iterator at runtime, or even via macros?
Forwards
var a = [1, 2, 3, 4].iterator();
// Actual iteration below
for (i in a) {
    trace(i);
}
Backwards
var a = [1, 2, 3, 4].iterator();
// Actual reverse iteration below
var s = [];
for (i in a) {
    s.push(i);
}
s.reverse();
for (i in s) {
    trace(i);
}
I would assume that there has to be a simpler, or at least faster, way of doing this. We can't know a size because the Iterator class doesn't carry one, so we can't invert the push onto the temp array. But we can remove the reverse because we do know the size of the temp array.
var a = [1, 2, 3, 4].iterator();
// Actual reverse iteration below
var s = [];
for (i in a) {
    s.push(i);
}
var total = s.length;
var totalMinusOne = total - 1;
for (i in 0...total) {
    trace(s[totalMinusOne - i]);
}
Are there any more optimisations that could be used to remove the need for the temporary array?
It bugs me that you have to duplicate the list, though... that's nasty. I mean, the data structure would ALREADY be an array, if that was the right data format for it. A better thing (less memory fragmentation and reallocation) than an Array (the "[]") to copy it into might be a linked List or a Hash.
But if we're using arrays, then Array Comprehensions (http://haxe.org/manual/comprehension) are what we should be using, at least in Haxe 3 or better:
var s = [for (i in a) i];
Ideally, at least for large iterators that are accessed multiple times, s should be cached.
To read the data back out, you could instead do something a little less wordy, but quite nasty, like:
for (i in 1 - s.length ... 1) {
    trace(s[-i]);
}
But that's not very readable and if you're after speed, then creating a whole new iterator just to loop over an array is clunky anyhow. Instead I'd prefer the slightly longer, but cleaner, probably-faster, and probably-less-memory:
var i = s.length;
while (--i >= 0) {
    trace(s[i]);
}
First of all, I agree with Dewi Morgan: duplicating the output generated by an iterator in order to reverse it somewhat defeats its purpose (or at least some of its benefits). Sometimes it's okay, though.
Now, about a technical answer:
By definition a basic iterator in Haxe can only compute the next iteration.
On the why iterators are one-sided by default, here's what we can notice:
if all iterators could run backwards and forwards, the Iterator classes would take more time to write.
not all iterators run on collections or series of numbers.
E.g. 1: an iterator running on the standard input.
E.g. 2: an iterator running on a parabolic or more complicated trajectory for a ball.
E.g. 3: slightly different, but think about the performance problems of running an iterator backwards on a very large singly-linked list (e.g. the class List). Some iterators can be interrupted in the middle of the iteration (Lambda.has() and Lambda.indexOf(), for instance, return as soon as there is a match), so you normally don't want to think of what's iterated as a collection, but more as an interruptible series or process iterated step by step.
While this doesn't mean you shouldn't define two-way iterators if you need them (I've never done it in Haxe, but it doesn't seem impossible), in the absolute, having two-way iterators isn't that natural, and forcing Iterators to be like that would complicate coding them.
An intermediate and more flexible solution is to simply have a ReverseXxIter where you need one, for instance ReverseIntIter, or Array.reverseIter() (with a custom ArrayExt class and using). So it's left to every programmer to write their own answers; I think it's a good balance. While it takes more time and frustration in the beginning (everybody probably had the same kind of questions), you end up knowing the language better, and in the end there are just benefits for you.
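For instance, a hypothetical sketch of such a reverse iterator (class and method names invented; assumes Haxe 4 syntax):

class ReverseArrayIterator<T> {
    final a:Array<T>;
    var i:Int;

    public function new(a:Array<T>) {
        this.a = a;
        this.i = a.length;
    }

    public inline function hasNext():Bool return i > 0;
    public inline function next():T return a[--i];
}

class ArrayExt {
    // enables `for (x in [1, 2, 3].reverseIter()) ...` after `using ArrayExt;`
    public static inline function reverseIter<T>(a:Array<T>):ReverseArrayIterator<T>
        return new ReverseArrayIterator(a);
}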
Complementing the post of Dewi Morgan (in JavaScript syntax): you can use for (let i = a.length; --i >= 0;) i; if you wish to simplify the while() method. If you really need the index values, I think for (let i = a.length, k = keys(a); --i in k;) a[k[i]]; is the best you can do while keeping the performance. There is also for (let i of keys(a).reverse()) a[i];, which reads more cleanly, but costs roughly one extra pass because of the .reverse().

Juggling multiple object instances

This question is coded in pseudo-PHP, but I really don't mind what language I get answers in (except for Ruby :-P), as this is purely hypothetical. In fact, PHP is quite possibly the worst language to be doing this type of logic in. Unfortunately, I have never done this before, so I can't provide a real-world example. Therefore, hypothetical answers are completely acceptable.
Basically, I have lots of objects performing a task. For this example, let's say each object is a class that downloads a file from the Internet. Each object will be downloading a different file, and the downloads are run in parallel. Obviously, some objects may finish downloading before others. The actual grabbing of data may run in threads, but that is not relevant to this question.
So we can define the object as such:
class DownloaderObject {
    var $url = '';
    var $downloading = false;

    function DownloaderObject($v) { // constructor
        $this->url = $v;
        start_downloading_in_the_background(url=$this->url, callback=$this->finished);
        $this->downloading = true;
    }

    function finished() {
        save_the_data_somewhere();
        $this->downloading = false;
        $this->destroy(); // actually destroys the object
    }
}
Okay, so we have lots of these objects running:
$download1 = new DownloaderObject('http://somesite.com/latest_windows.iso');
$download2 = new DownloaderObject('http://somesite.com/kitchen_sink.iso');
$download3 = new DownloaderObject('http://somesite.com/heroes_part_1.rar');
And we can store them in an array:
$downloads = array($download1, $download2, $download3);
So we have an array full of the downloads:
array(
    1 => $download1,
    2 => $download2,
    3 => $download3
)
And we can iterate through them like this:
print('Here are the downloads that are running:');
foreach ($downloads as $d) {
    print($d->url . "\n");
}
Okay, now suppose download 2 finishes, and the object is destroyed. Now we should have two objects in the array:
array(
    1 => $download1,
    3 => $download3
)
But there is a hole in the array! Key #2 is being unused. Also, if I wanted to start a new download, it is unclear where to insert the download into the array. The following could work:
$i = 0;
while ($i < count($downloads) - 1) {
    if (!is_object($downloads[$i])) {
        $downloads[$i] = new DownloaderObject('http://somesite.com/doctorwho.iso');
        break;
    }
    $i++;
}
However, that is terribly inefficient (and while loops with $i++ are nooby). So, another approach is to keep a counter.
function add_download($url) {
    global $downloads;
    static $download_counter;
    $download_counter++;
    $downloads[$download_counter] = new DownloaderObject($url);
}
That would work, but we still get holes in the array:
array(
    1 => DownloaderObject,
    3 => DownloaderObject,
    7 => DownloaderObject,
    13 => DownloaderObject
)
That's ugly. However, is that acceptable? Should the array be "defragmented", i.e. the keys rearranged to eliminate blank spaces?
Or is there another programmatic structure I should be aware of? I want a structure that I can add stuff to, remove stuff from, refer to keys in a variable, iterate through, etc., that is not an array. Does such a thing exist?
I have been coding for years, but this question has bugged me for very many of those years, and I am still not aware of an answer. This may be obvious to some programmers, but is extremely non-trivial to me.
The problem with PHP's "associative arrays" is that they aren't arrays at all; they're hashmaps. Having holes there is perfectly fine. You might look at a linked list as well, but a hashmap seems perfectly suited to what you're doing.
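To see why holes are harmless for iteration, a tiny illustration (values invented):

$downloads = array(1 => 'a', 3 => 'b', 7 => 'c');
foreach ($downloads as $k => $v) {
    echo "$k => $v\n";   // visits 1, 3, 7 in insertion order; holes never show up
}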
What is maintaining your array of downloaders?
If you encapsulate the array in a class that is notified by the downloader when it is finished you won't have to worry about stale references to destroyed objects.
This class can manage the organisation of the array internally and present an interface to its users that looks more like an iterator than an array.
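A hypothetical sketch of such a wrapper (names invented for illustration; it assumes each downloader notifies the pool when it finishes, and real code would need error handling):

class DownloadPool {
    private $downloads = array();

    public function add($url) {
        $d = new DownloaderObject($url);
        $this->downloads[spl_object_hash($d)] = $d;
        return $d;
    }

    // called by a downloader when it finishes
    public function finished($d) {
        unset($this->downloads[spl_object_hash($d)]);
    }

    // always a clean, hole-free list for iteration
    public function running() {
        return array_values($this->downloads);
    }
}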
"$i++ loops" are nooby, but only because the code becomes much clearer if you use a for loop:
$i = 0;
while ($i < count($downloads) - 1) {
    if (!is_object($downloads[$i])) {
        $downloads[$i] = new DownloaderObject('http://somesite.com/doctorwho.iso');
        break;
    }
    $i++;
}
Becomes
for ($i = 0; $i < count($downloads) - 1; ++$i) {
    if (!is_object($downloads[$i])) {
        $downloads[$i] = new DownloaderObject('http://somesite.com/doctorwho.iso');
        break;
    }
}
Coming from a C# perspective, my first thought would be that you need a different data structure to an array - you need to think about the problem using a higher-level data structure. Perhaps a Queue, List or Stack would suit your purposes better?
The short answer to your question is that in PHP arrays are used for almost everything, and you rarely end up using other data structures. Having holes in your array indexes isn't anything to worry about. In other programming languages, such as Java, you have a more diverse set of data structures to choose from: Sets, Hashes, Lists, Vectors and more. It also seems that you would need closer interaction between the array and the DownloaderObject class: even though the object $download2 has "destroyed()" itself, the array will still maintain a reference to it.
Some good answers to this question, which reflect the relative experience of the answerers. Thank you very much — they proved very educational.
I posted this question nearly three years ago. In hindsight, I can see my knowledge in that area was severely lacking. The biggest problem I had was that I was coming from a PHP perspective, which does not have the ability to arbitrarily pop elements. Something the other answers to this question helped me to discover was that a fundamentally superior model is 'linked lists'.
For C, I wrote a blog post about linked lists which contains code samples (too numerous to post here) but would neatly fill the original question's use case.
For PHP, a linked list implementation appears here, which I have never tried, but imagine it would also be the right way to deal with the above.
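(As a side note, PHP's SPL also ships a doubly linked list out of the box; a small sketch, assuming PHP 5.3 or later:)

$list = new SplDoublyLinkedList();
$list->push('download1');
$list->push('download2');
$list->push('download3');
$list->offsetUnset(1);      // remove the middle element; later elements shift down
foreach ($list as $d) {
    echo $d, "\n";          // download1, download3
}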
Interestingly, Python lists contain the pop() method which, unlike PHP's array_pop(), can pop arbitrary elements and keep everything in order. For example:
>>> x = ['baa', 'ram', 'ewe'] # our starting point
>>> x[1] # making sure element 1 is 'ram'
'ram'
>>> x.pop(1) # let's arbitrarily pop an element in the middle
'ram'
>>> x # the one we popped ('ram') is now gone
['baa', 'ewe']
>>> x[1] # and there are no holes: item 2 has become item 1
'ewe'