How to add the same method with 2 different names in GNU Smalltalk?

How can I have a class expose the same method under 2 different names?
E.g., I want the asDescription method to do the same thing as / re-export the asString method without simply copy-pasting the code.
Object subclass: Element [
    | width height |
    Element class >> new [
        ^super new init.
    ]
    init [
        width := 0.
        height := 0.
    ]
    asString [
        ^ 'Element with width ', width, ' and height ', height.
    ]
    asDescription [ "???" ]
]

In Smalltalk you usually implement #printOn: and get #asString from the inherited version of it, which goes along the lines of
Object >> asString
    | stream |
    stream := '' writeStream.
    self printOn: stream.
    ^stream contents
The actual implementation of this method may be slightly different in your environment, but the idea remains the same.
Given this, it is usually a good idea to implement #printOn: rather than #asString. In your case you would implement it as
Element >> printOn: aStream
    aStream
        nextPutAll: 'Element with width ';
        nextPutAll: width asString;
        nextPutAll: ' and height ';
        nextPutAll: height asString
and then, as JayK and lurker indicated,
Element >> asDescription
    ^self asString
In other words, you (usually) don't want to implement #asString but #printOn:. This approach is better because it takes advantage of the inheritance and ensures consistency between #printOn: and #asString, which is usually expected. In addition, it will give you the opportunity to start becoming familiar with Streams, which play a central role in Smalltalk.
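With that in place, both selectors answer the same text. A quick (hypothetical) workspace check, assuming the class above:
| e |
e := Element new.
Transcript showCr: e asString.       "Element with width 0 and height 0"
Transcript showCr: e asDescription.  "same text, produced via the shared #printOn:"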
Note, by the way, that in my implementation I've used width asString and height asString. Your code attempts to concatenate (twice) a String with a Number:
'Element with width ', width, ' and height ', height.
which won't work as you can only concatenate instances of String with #,.
In most dialects, however, you can avoid sending #asString by using #print: instead of #nextPutAll:, something like:
Element >> printOn: aStream
    aStream
        nextPutAll: 'Element with width ';
        print: width;
        nextPutAll: ' and height ';
        print: height
which is a little bit less verbose and therefore preferred.
One last thing. I would recommend replacing the first line above with these two:
    nextPutAll: self class name;
    nextPutAll: ' with width ';
instead of hardcoding the class name. This would prove to be useful if in the future you subclass Element because you will have no need to tweak #printOn: and any of its derivatives (e.g., #asDescription).
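Putting all of the above together, the method would read something like this:
Element >> printOn: aStream
    aStream
        nextPutAll: self class name;
        nextPutAll: ' with width ';
        print: width;
        nextPutAll: ' and height ';
        print: height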
Final thought: I would rename the selector #asDescription to just #description. The as prefix is meant for converting an object into one of a different class (which is why #asString is OK), and that doesn't seem to be the case here.
Addendum: Why?
There is a reason why #asString is implemented in terms of #printOn:, and not the other way around: generality. While the effort (code complexity) is the same, #printOn: is clearly a winner because it will work with any character Stream. In particular, it will work with no modification whatsoever with
Files (instances of FileStream)
Sockets (instances of SocketStream)
The Transcript
In other words, by implementing #printOn: one gets #asString for free (inheritance) and --at the same time-- the ability to dump a representation of the object on files and sockets. The Transcript is particularly interesting because it supports the Stream protocol for writing, and thus can be used for testing purposes before sending any bytes to external devices.
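For instance, the very same #printOn: can write the object straight to the Transcript (a sketch; element stands for any Element instance):
element printOn: Transcript.
Transcript nl.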
Remember!
In Smalltalk, the goal is to have objects whose behavior is simple and general at once, not just simple!

As lurker wrote in the comments, send the asString message in asDescription.
asDescription
    ^ self asString
This is usually done to expose additional interfaces/protocols from a class, for compatibility, or as a built-in adapter. If you create something new that does not have to fit in anywhere else, consider sticking to just one name for each operation.
Edit: if you really are after the re-export semantics and do not want the additional message sends involved in the delegation above, there might be a way to put the CompiledMethod of asString into the method dictionary of the class a second time under the other name. But I am neither sure that this would work, nor do I know the GNU Smalltalk protocol for manipulating the method dictionary. Have a look at the documentation of the Behavior class. Also, I would not consider this programming Smalltalk, but tinkering with the system.
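For the record, such tinkering might look roughly like the following untested sketch; methodDictionary and compiledMethodAt: are the Behavior-level selectors I would look for first, but the exact protocol may differ in your GNU Smalltalk version:
"Untested: register the CompiledMethod of #asString under a second selector."
Element methodDictionary
    at: #asDescription
    put: (Element compiledMethodAt: #asString)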

How to use hyperoperators with Scalars that aren't really scalar?

I want to make a hash of sets. Well, SetHashes, since they need to be mutable.
In fact, I would like to initialize my Hash with multiple identical copies of the same SetHash.
I have an array containing the keys for the new hash: @keys
And I have my SetHash already initialized in a scalar variable: $set
I'm looking for a clean way to initialize the hash.
This works:
my %hash = ({ $_ => $set.clone } for @keys);
(The parens are needed for precedence; without them, the assignment to %hash is part of the body of the for loop. I could change it to a non-postfix for loop or make any of several other minor changes to get the same result in a slightly different way, but that's not what I'm interested in here.)
Instead, I was kind of hoping I could use one of Raku's nifty hyper-operators, maybe like this:
my %hash = @keys »=>» $set;
That expression works a treat when $set is a simple string or number, but a SetHash?
Array »=>» SetHash can never work reliably: order of keys in SetHash is indeterminate
Good to know, but I don't want it to hyper over the RHS, in any order. That's why I used the right-pointing version of the hyperop: so it would instead replicate the RHS as needed to match it up to the LHS. In this sort of expression, is there any way to say "Yo, Raku, treat this as a scalar. No, really."?
I tried an explicit Scalar wrapper (which would make the values harder to get at, but it was an experiment):
my %map = @keys »=>» $($set,)
And that got me this message:
Lists on either side of non-dwimmy hyperop of infix:«=>» are not of the same length while recursing
left: 1 elements, right: 4 elements
So it has apparently recursed into the list on the left and found a single key and is trying to map it to a set on the right which has 4 elements. Which is what I want - the key mapped to the set. But instead it's mapping it to the elements of the set, and the hyperoperator is pointing the wrong way for that combination of sizes.
So why is it recursing on the right at all? I thought a Scalar container would prevent that. The documentation says it prevents flattening; how is this recursion not flattening? What's the distinction being drawn?
The error message says the version of the hyperoperator I'm using is "non-dwimmy", which may explain why it's not in fact doing what I mean, but is there maybe an even-less-dwimmy version that lets me be even more explicit? I still haven't gotten my brain aligned well enough with the way Raku works for it to be able to tell WIM reliably.
I'm looking for a clean way to initialize the hash.
One idiomatic option:
my %hash = @keys X=> $set;
See X metaoperator.
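For example, a quick sketch (note that, unlike the .clone variant from the question, every value below refers to the same SetHash object):
my @keys = <a b c>;
my $set  = SetHash.new(<x y z>);
my %hash = @keys X=> $set;
say %hash;  # {a => SetHash(x y z), b => SetHash(x y z), c => SetHash(x y z)}
            # (element order inside each SetHash may vary)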
The documentation says ... a Scalar container ... prevents flattening; how is this recursion not flattening? What's the distinction being drawn?
A cat is an animal, but an animal is not necessarily a cat. Flattening may act recursively, but some operations that act recursively don't flatten. Recursive flattening stops if it sees a Scalar. But hyperoperation isn't flattening. I get where you're coming from, but this is not the real problem, or at least not a solution.
I had thought that hyperoperation had two tests controlling recursing:
Is it hyperoperating a nodal operation (eg .elems)? If so, just apply it like a parallel shallow map (so don't recurse). (The current doc quite strongly implies that nodal can only be usefully applied to a method, and only a List one (or augmentation thereof) rather than any routine that might get hyperoperated. That is much more restrictive than I was expecting, and I'm sceptical of its truth.)
Otherwise, is a value Iterable? If so, then recurse into that value. In general the value of a Scalar automatically behaves as the value it contains, and that applies here. So Scalars won't help.
A SetHash doesn't do the Iterable role. So I think this refusal to hyperoperate with it is something else.
I just searched the current Rakudo source; that yields two matches, both in the Hyper module, with this one being the specific one we're dealing with:
multi method infix(List:D \left, Associative:D \right) {
    die "{left.^name} $.name {right.^name} can never work reliably..."
}
For some reason hyperoperation explicitly rejects use of Associatives on either the right or left when coupled with the other side being a List value.
Having pursued the "blame" (tracking who made what changes) I arrived at the commit "Die on Associative <<op>> Iterable" which says:
This can never work due to the random order of keys in the Associative.
This used to die before, but with a very LTA error about a Pair.new()
not finding a suitable candidate.
Perhaps this behaviour could be refined so that the determining factor is, first, whether an operand does the Iterable role, and then if it does, and is Associative, it dies, but if it isn't, it's accepted as a single item?
A search for "can never work reliably" in GH/rakudo/rakudo issues yields zero matches.
Maybe file an issue? (Update: I filed "RFC: Allow use of hyperoperators with an Associative that does not do Iterable role instead of dying with "can never work reliably".)
For now we need to find some other technique to stop a non-Iterable Associative being rejected. Here I use a Capture literal:
my %hash = @keys »=>» \($set);
This yields: {a => \(SetHash.new("b","a","c")), b => \(SetHash.new("b","a","c")), ....
Adding a custom op unwraps en passant:
sub infix:« my=> » ($lhs, $rhs) { $lhs => $rhs[0] }
my %hash = @keys »my=>» \($set);
This yields the desired outcome: {a => SetHash(a b c), b => SetHash(a b c), ....
my %hash = ({ $_ => $set.clone } for @keys);
(The parens seem to be needed so it can tell that the curlies are a block instead of a Hash literal...)
No. That particular code in curlies is a Block regardless of whether it's in parens or not.
More generally, Raku code of the form {...} in term position is almost always a Block.
For an explanation of when a {...} sequence is a Hash, and how to force it to be one, see my answer to the Raku SO Is that a Hash or a Block?.
Without the parens you've written this:
my %hash = { block of code } for @keys
which attempts to iterate #keys, running the code my %hash = { block of code } for each iteration. The code fails because you can't assign a block of code to a hash.
Putting parens around the ({ block of code } for #keys) part completely alters the meaning of the code.
Now it runs the block of code for each iteration. And it concatenates the result of each run into a list of results, each of which is a Pair generated by the code $_ => $set.clone. Then, when the for iteration has completed, that resulting list of pairs is assigned, once, to my %hash.
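And a quick sketch of why the .clone in that block matters: each key ends up with its own independent SetHash:
my @keys = <a b>;
my $set  = SetHash.new(<x>);
my %hash = ({ $_ => $set.clone } for @keys);
%hash<a>.set('y');
say %hash<b>;  # SetHash(x), unaffected: each value is its own clone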

Understanding GNU Smalltalk Closure

The following piece of code is giving the error "did not understand #generality":
pqueue := SortedCollection new.
freqtable keysAndValuesDo: [:key :value |
    (value notNil and: [value > 0]) ifTrue: [
        |newvalue|
        newvalue := Leaf new: key count: value.
        pqueue add: newvalue.
    ]
].
[pqueue size > 1] whileTrue: [
    |first second first_count second_count new_internal newcount|
    first := pqueue removeFirst.
    second := pqueue removeFirst.
    first_count := first count.
    second_count := second count.
    newcount := first_count + second_count.
    new_internal := Tree new: nl count: newcount left: first right: second.
    pqueue add: new_internal.
].
The problem is in the line pqueue add: new_internal. When I remove this line, the program runs without the error. I think the problem is related to the iteration block [pqueue size > 1] whileTrue: and pqueue add: new_internal.
Note: This is the algorithm to build the decoding tree based on huffman code.
Error message, expanded:
Object: $<10> error: did not understand #generality
MessageNotUnderstood(Exception)>>signal (ExcHandling.st:254)
Character(Object)>>doesNotUnderstand: #generality (SysExcept.st:1448)
SmallInteger(Number)>>retryDifferenceCoercing: (Number.st:357)
SmallInteger(Number)>>retryRelationalOp:coercing: (Number.st:295)
SmallInteger>><= (SmallInt.st:215)
Leaf>><= (hzip.st:30)
optimized [] in SortedCollection class>>defaultSortBlock (SortCollect.st:7)
SortedCollection>>insertionIndexFor:upTo: (SortCollect.st:702)
[] in SortedCollection>>merge (SortCollect.st:531)
SortedCollection(SequenceableCollection)>>reverseDo: (SeqCollect.st:958)
SortedCollection>>merge (SortCollect.st:528)
SortedCollection>>beConsistent (SortCollect.st:204)
SortedCollection(OrderedCollection)>>removeFirst (OrderColl.st:295)
optimized [] in UndefinedObject>>executeStatements (hzip.st:156)
BlockClosure>>whileTrue: (BlkClosure.st:328)
UndefinedObject>>executeStatements (hzip.st:154)
Object: $<10> error: did not understand #generality
MessageNotUnderstood(Exception)>>signal (ExcHandling.st:254)
Character(Object)>>doesNotUnderstand: #generality (SysExcept.st:1448)
SmallInteger(Number)>>retryDifferenceCoercing: (Number.st:357)
SmallInteger(Number)>>retryRelationalOp:coercing: (Number.st:295)
SmallInteger>><= (SmallInt.st:215)
Leaf>><= (hzip.st:30)
optimized [] in SortedCollection class>>defaultSortBlock (SortCollect.st:7)
SortedCollection>>insertionIndexFor:upTo: (SortCollect.st:702)
[] in SortedCollection>>merge (SortCollect.st:531)
SortedCollection(SequenceableCollection)>>reverseDo: (SeqCollect.st:958)
SortedCollection>>merge (SortCollect.st:528)
SortedCollection>>beConsistent (SortCollect.st:204)
SortedCollection(OrderedCollection)>>do: (OrderColl.st:64)
UndefinedObject>>executeStatements (hzip.st:164)
One lesson we can take from this question is to acquire the habit of reading the stack trace and trying to make sense of it. Let's focus on the last few messages:
1. Object: $<10> error: did not understand #generality
2. MessageNotUnderstood(Exception)>>signal (ExcHandling.st:254)
3. Character(Object)>>doesNotUnderstand: #generality (SysExcept.st:1448)
4. SmallInteger(Number)>>retryDifferenceCoercing: (Number.st:357)
5. SmallInteger(Number)>>retryRelationalOp:coercing: (Number.st:295)
6. SmallInteger>><= (SmallInt.st:215)
7. Leaf>><= (hzip.st:30)
8. optimized [] in SortedCollection class>>defaultSortBlock (SortCollect.st:7)
Each of these lines represents the activation of a method. Every line represents a message, and the sequence of messages grows upwards (as it happens in any stack). The full detail of every activation can be seen in the debugger. Here, however, we are only presented with the class >> #selector pair. There are several interesting facts we can identify from this summarized information:
In line 1 we get the actual error. In this case we got a MessageNotUnderstood exception. The receiver of the message was the Character $<10>, i.e., the linefeed character.
Lines 2 and 3 confirm that the not understood message was #generality.
Lines 4, 5 and 6 show the progression of messages that ended up sending #generality to the wrong object (linefeed). While 4 and 5 might look obscure to the non-experienced Smalltalker, line 6 has the key information: some SmallInteger received the <= message. This message failed because the argument wasn't an appropriate one. From the information we already have, we know that the argument was the linefeed character.
Line 7 shows that the send of SmallInteger >> #<= originated in Leaf's implementation of the same selector #<=. It tells us that a Leaf delegates #<= to some Integer known to it.
Line 8 says why we are dealing with the comparison selector #<=. The reason is that we are sorting some collection.
So, we are trying to sort a collection of Leaf objects which rely on some integers for their comparison and somehow one of those "integers" wasn't a Number but the Character linefeed.
If we take a look at the Smalltalk code with this information in mind we see:
The SortedCollection is pqueue and the Leaf objects are the items being added to it.
The invariant property of a SortedCollection is that it always has its elements ordered by a given criterion. Consequently, every time we add: an element to it, the element will be inserted in the correct position. Hence the comparison message #<=.
Now let's look for #add: in the code. Besides the one above, there is another below:
new_internal := Tree new: nl count: newcount left: first right: second.
pqueue add: new_internal.
This one is interesting because it is where the error happens. Note, however, that we are not adding a Leaf here but a Tree. But wait, it might be that Tree and Leaf belong to the same hierarchy. In fact, both Tree and Leaf represent nodes in an acyclic graph. Moreover, the code confirms this idea when it reads:
Leaf new: key count: value.
...
Tree new: nl count: newcount left: first right: second.
See? Both Leaf and Tree have some key (the argument of new:) and some count. In addition, Trees have left and right branches, which Leafs do not (of course!)
So, in principle, it would be ok to add instances of Tree to our pqueue collection. This cannot be what causes the error.
Now, if we look closer at the way the Tree is created, we can see a suspicious argument nl. This is interesting for two reasons: (i) the variable nl is not defined in the part of the code we were given, and (ii) the variable nl is the key that will be used by the Tree to respond to the #<= message. Therefore, nl must be the linefeed character $<10>. This makes a lot of sense because nl is an abbreviation of newline, and in the Linux world newlines are linefeeds.
Conclusion: The problem seems to be caused by the wrong argument nl used for the Tree's key.
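If fixing nl itself is not an option, one hedged workaround is to make the ordering independent of the keys altogether by giving the queue an explicit sort block over the counts (a sketch; it assumes both Leaf and Tree answer #count, which the code above suggests):
pqueue := SortedCollection sortBlock: [:a :b | a count <= b count].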

Rebol: Dynamic binding of block words

In Rebol, there are words like foreach that allow "block parametrization" over a given word and a series, e.g., foreach w [1 2 3] [print w]. Since I find that syntax very convenient (as opposed to passing func blocks), I'd like to use it for my own words that operate on lazy lists, e.g., map/stream x s [... x ...].
What is that syntax idiom called? How is it properly implemented?
I was searching the docs, but I could not find a straight answer, so I tried to implement foreach on my own. Basically, my implementation comes in two parts. The first part is a function that binds a specific word in a block to a given value and yields a new block with the bound words.
bind-var: funct [block word value] [
    qw: load rejoin ["'" word]
    do compose [
        set (:qw) value
        bind [(block)] (:qw)
        [(block)] ; This shouldn't work? see Question 2
    ]
]
Using that, I implemented foreach as follows:
my-foreach: func ['word s block] [
    if empty? block [return none]
    until [
        do bind-var block word first s
        s: next s
        tail? s
    ]
]
I find that approach quite clumsy (and it probably is), so I was wondering how the problem can be solved more elegantly. Regardless, after coming up with my contraption, I am left with two questions:
In bind-var, I had to do some wrapping in bind [(block)] (:qw) because (block) would "dissolve". Why?
Because (?) of 1, the bind operation is performed on a new block (created by the [(block)] expression), not the original one passed to my-foreach, with separate bindings, so I have to operate on that. By mistake, I added [(block)] and it still works. But why?
Great question. :-) Writing your own custom loop constructs in Rebol2 and R3-Alpha (and now, history repeating with Red) has many unanswered problems. These kinds of problems were known to the Rebol3 developers and considered blocking bugs.
(The reason that Ren-C was started was to address such concerns. Progress has been made in several areas, though at time of writing many outstanding design problems remain. I'll try to just answer your questions under the historical assumptions, however.)
In bind-var, I had to do some wrapping in bind [(block)] (:qw) because (block) would "dissolve". Why?
That's how COMPOSE works by default...and it's often the preferred behavior. If you don't want that, use COMPOSE/ONLY and blocks will not be spliced, but inserted as-is.
qw: load rejoin ["'" word]
You can convert WORD! to LIT-WORD! via to lit-word! word. You can also shift the quoting responsibility into your boilerplate, e.g. set quote (word) value, and avoid qw altogether.
Avoiding LOAD is also usually preferable, because it always brings things into the user context by default--so it loses the binding of the original word. Doing a TO conversion will preserve the binding of the original WORD! in the generated LIT-WORD!.
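So the first line of bind-var could simply read:
qw: to lit-word! word  ; preserves the original word's binding, unlike LOAD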
do compose [
    set (:qw) value
    bind [(block)] (:qw)
    [(block)] ; This shouldn't work? see Question 2
]
Presumably you meant COMPOSE/DEEP here, otherwise this won't work at all... with regular COMPOSE the embedded PAREN!s (cough, GROUP!s) for [(block)] will not be substituted.
By mistake, I added [(block)] and it still works. But why?
If you do a test like my-foreach x [1] [print x probe bind? 'x] the output of the bind? will show you that it is bound into the "global" user context.
Fundamentally, you don't have any MAKE OBJECT! or USE to create a new context to bind the body into. Hence all you could potentially be doing here would be stripping off any existing bindings in the code for x and making sure they are into the user context.
But you originally did have a USE, which you edited out. That was more on the right track:
bind-var: func [block word value /local qw] [
    qw: load rejoin ["'" word]
    do compose/deep [
        use [(qw)] [
            set (:qw) value
            bind [(block)] (:qw)
            [(block)] ; This shouldn't work? see Question 2
        ]
    ]
]
You're right to suspect something is askew with how you're binding. But the reason this works is that your BIND is only redoing the work that USE itself does. USE already deep-walks the block to make sure any of the word bindings are adjusted. So you could omit the bind entirely:
do compose/deep [
    use [(qw)] [
        set (:qw) value
        [(block)]
    ]
]
the bind operation is performed on a new block (created by the [(block)] expression), not the original one passed to my-foreach, with separate bindings
Let's adjust your code by taking out the deep-walking USE to demonstrate the problem you thought you had. We'll use a simple MAKE OBJECT! instead:
bind-var: func [block word value /local obj qw] [
    do compose/deep [
        obj: make object! [(to-set-word word) none]
        qw: bind (to-lit-word word) obj
        set :qw value
        bind [(block)] :qw
        [(block)] ; This shouldn't work? see Question 2
    ]
]
Now if you try my-foreach x [1 2 3] [print x] you'll get what you suspected... "x has no value" (assuming you don't have some latent global definition of x it picks up, which would just print that same latent value 3 times).
But to make you sufficiently sorry you asked :-), I'll mention that my-foreach x [1 2 3] [loop 1 [print x]] actually works. That's because while you were right to say a bind in the past shouldn't affect a new block, this COMPOSE only creates one new BLOCK!. The topmost level is new; any "deeper" embedded blocks referenced in the source material will be aliases of the original material:
>> original: [outer [inner]]
== [outer [inner]]
>> composed: compose [<a> (original) <b>]
== [<a> outer [inner] <b>]
>> append original/2 "mutation"
== [inner "mutation"]
>> composed
== [<a> outer [inner "mutation"] <b>]
Hence if you do a mutating BIND on the composed result, it can deeply affect some of your source.
until [
    do bind-var block word first s
    s: next s
    tail? s
]
On a general note of efficiency, you're running COMPOSE and BIND operations on each iteration of your loop. No matter how creative new solutions to these kinds of problems get (there's a LOT of new tech in Ren-C affecting your kind of problem), you're still probably going to want to run them only once and reuse the result across the iterations.
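As an untested, Rebol2-style sketch of that idea: build one context and one bound copy of the body up front, then merely set the word and DO the body on each pass:
my-foreach: func ['word s body /local ctx bound] [
    if empty? s [return none]
    ctx: make object! compose [(to set-word! word) none]  ; one fresh context, once
    bound: bind/copy body in ctx word                     ; bind a copy of the body, once
    until [
        set in ctx word first s  ; update the loop variable
        do bound
        s: next s
        tail? s
    ]
]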

What's the logic behind some WriteStream messages?

Consider the following sequence of messages:
1. s := '' writeStream.
2. s nextPutAll: '123'.
3. s skip: -3.
4. s position "=> 0".
5. s size "=> 3".
6. s isEmpty "=> false".
7. s contents isEmpty "=> true (!)"
Aren't 6 and 7 contradictory or, at least, confusing? What's the logic behind this behavior? (Dolphin behaves similarly.)
UPDATE
As @MartinW observed (see the comment below), ReadStream and ReadWriteStream behave differently (we could say, as expected).
From a practical viewpoint, the compatibility that worries me most is with FileStream, where the contents message is not limited to the current position. Such a difference invalidates an otherwise nice "law" by which any code that works with a memory stream (string or byte array) also works with a file stream, and conversely. This kind of equivalence is very useful for testing and also for pedagogical reasons. The apparent loss of functionality would be easily recovered by adding a method #truncate to WriteStream, which would explicitly shorten the size to the current position [cf. the discussion below with Derek Williams].
In many Smalltalks (like VAST), the method comments explain it well:
WriteStream>>contents
    "Answer a Collection which is a copy of the collection that the
    receiver is streaming over, truncated to the current position reference."
PositionableStream>>isEmpty
    "Answer a Boolean which is true if the receiver can access any
    objects and false otherwise."
Note "truncated to the current position reference."
In your example, contents is an empty string because you set the position back to 0 with skip: -3.
This behavior is not only correct, it's quite useful. For example, consider a method that builds a comma-separated list from a collection:
commaSeparatedListFor: aCollection
    | ws |
    ws := '' writeStream.
    aCollection do: [ :ea |
        ws print: ea; nextPutAll: ', ' ].
    ws isEmpty
        ifFalse: [ ws position: ws position - 2 ].
    ^ws contents
In this case, the method may have written the final trailing ", ", but we want contents to exclude it.
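For instance (util being a hypothetical object holding the method above):
util commaSeparatedListFor: #(1 2 3).  "=> '1, 2, 3'"
util commaSeparatedListFor: #().       "=> ''"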

How do I access an integer array within a struct/class from in-line assembly (blackfin dialect) using gcc?

I'm not very familiar with in-line assembly to begin with, and much less with that of the Blackfin processor. I am in the process of migrating a legacy C application over to C++, and ran into a problem this morning regarding the following routine:
void clear_buffer ( short * buffer, int len ) {
    __asm__ (
        "/* clear_buffer */\n\t"
        "LSETUP (1f, 1f) LC0=%1;\n"
        "1:\n\t"
        "W [%0++] = %2;"
        :: "a" ( buffer ), "a" ( len ), "d" ( 0 )
        : "memory", "LC0", "LT0", "LB0"
    );
}
I have a class that contains an array of shorts that is used for audio processing:
class AudProc
{
    enum { buffer_size = 512 };
    short M_samples[ buffer_size * 2 ];
    // remaining part of class omitted for brevity
};
Within the AudProc class I have a method that calls clear_buffer, passing it the samples array:
clear_buffer ( M_samples, sizeof ( M_samples ) / 2 );
This generates a "Bus Error" and aborts the application.
I have tried making the array public, and that produces the same result. I have also tried making it static; that allows the call to go through without error, but no longer allows for multiple instances of my class as each needs its own buffer to work with. Now, my first thought is, it has something to do with where the buffer is in memory, or from where it is being accessed. Does something need to be changed in the in-line assembly to make this work, or in the way it is being called?
I thought that this was similar to what I am trying to accomplish, but it uses a different dialect of asm, and I can't figure out whether it is the same problem I am experiencing or not:
GCC extended asm, struct element offset encoding
Anyone know why this is occurring and how to correct it?
Does anyone know where there is helpful documentation regarding the blackfin asm instruction set? I've tried looking on the ADSP site, but to no avail.
I would suspect that you could define your clear_buffer as
inline void clear_buffer ( short * buffer, int len ) {
    memset ( buffer, 0, sizeof(short) * len );  // needs <string.h> (or <cstring> in C++)
}
and GCC is probably able to optimize that cleverly (when invoked with -O2 or -O3), because GCC knows about memset.
To understand assembly code, I suggest running gcc -S -O -fverbose-asm on some small C file, then looking inside the produced .s file.
I will hazard a guess, because I don't know Blackfin assembler:
That LC0 sounds like "loop counter"; LSETUP looks like a macro/insn which, well, sets up a loop between two labels and with a certain loop counter.
The "%0" operand is apparently the address to write to, and we can safely guess it's incremented in the loop; in other words, it's both an input and an output operand and should be described as such.
Thus, I suggest describing it as an input-output operand, using the "+" constraint modifier, as follows:
void clear_buffer ( short * buffer, int len ) {
    __asm__ (
        "/* clear_buffer */\n\t"
        "LSETUP (1f, 1f) LC0=%1;\n"
        "1:\n\t"
        "W [%0++] = %2;"
        : "+a" ( buffer )
        : "a" ( len ), "d" ( 0 )
        : "memory", "LC0", "LT0", "LB0"
    );
}
This is, of course, just a hypothesis, but you could disassemble the code and check if by any chance GCC allocated the same register for "%0" and "%2".
PS. Actually, "+a" alone should be enough; the early-clobber modifier is irrelevant here.
For anyone else who runs into a similar circumstance, the problem here was not with the in-line assembly, nor with the way it was being called: it was with the classes / structs in the program. The class that I believed to be the offender was not the problem - there was another class that held an instance of it, and due to other members of that outer class, the inner one was not aligned on a word boundary. This was causing the "Bus Error" that I was experiencing. I had not come across this before because the classes were not declared with __attribute__((packed)) in other code, but they are in my implementation.
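A minimal sketch of the failure mode (hypothetical names; on the Blackfin, a 16-bit store like the W [%0++] above requires a 2-byte-aligned address):
class AudProc {
    enum { buffer_size = 512 };
    short M_samples[ buffer_size * 2 ];
};

// Hypothetical outer class: packing removes the padding after 'tag',
// so 'proc' (and with it M_samples) starts at an odd offset.
struct __attribute__((packed)) Outer {
    char    tag;   // offset 0
    AudProc proc;  // offset 1: M_samples is no longer 2-byte aligned
};
// A short written through the misaligned M_samples then faults with a Bus Error.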
Giving "Type Attributes - Using the GNU Compiler Collection (GCC)" a read was what actually sparked the answer for me. Two particular attributes that affect memory alignment (and, thus, in-line assembly such as I am using) are packed and aligned.
As taken from the aforementioned link:
aligned (alignment)
This attribute specifies a minimum alignment (in bytes) for variables of the specified type. For example, the declarations:
struct S { short f[3]; } __attribute__ ((aligned (8)));
typedef int more_aligned_int __attribute__ ((aligned (8)));
force the compiler to ensure (as far as it can) that each variable whose type is struct S or more_aligned_int is allocated and aligned at least on a 8-byte boundary. On a SPARC, having all variables of type struct S aligned to 8-byte boundaries allows the compiler to use the ldd and std (doubleword load and store) instructions when copying one variable of type struct S to another, thus improving run-time efficiency.
Note that the alignment of any given struct or union type is required by the ISO C standard to be at least a perfect multiple of the lowest common multiple of the alignments of all of the members of the struct or union in question. This means that you can effectively adjust the alignment of a struct or union type by attaching an aligned attribute to any one of the members of such a type, but the notation illustrated in the example above is a more obvious, intuitive, and readable way to request the compiler to adjust the alignment of an entire struct or union type.
As in the preceding example, you can explicitly specify the alignment (in bytes) that you wish the compiler to use for a given struct or union type. Alternatively, you can leave out the alignment factor and just ask the compiler to align a type to the maximum useful alignment for the target machine you are compiling for. For example, you could write:
struct S { short f[3]; } __attribute__ ((aligned));
Whenever you leave out the alignment factor in an aligned attribute specification, the compiler automatically sets the alignment for the type to the largest alignment that is ever used for any data type on the target machine you are compiling for. Doing this can often make copy operations more efficient, because the compiler can use whatever instructions copy the biggest chunks of memory when performing copies to or from the variables that have types that you have aligned this way.
In the example above, if the size of each short is 2 bytes, then the size of the entire struct S type is 6 bytes. The smallest power of two that is greater than or equal to that is 8, so the compiler sets the alignment for the entire struct S type to 8 bytes.
Note that although you can ask the compiler to select a time-efficient alignment for a given type and then declare only individual stand-alone objects of that type, the compiler's ability to select a time-efficient alignment is primarily useful only when you plan to create arrays of variables having the relevant (efficiently aligned) type. If you declare or use arrays of variables of an efficiently-aligned type, then it is likely that your program also does pointer arithmetic (or subscripting, which amounts to the same thing) on pointers to the relevant type, and the code that the compiler generates for these pointer arithmetic operations is often more efficient for efficiently-aligned types than for other types.
The aligned attribute can only increase the alignment; but you can decrease it by specifying packed as well. See below.
Note that the effectiveness of aligned attributes may be limited by inherent limitations in your linker. On many systems, the linker is only able to arrange for variables to be aligned up to a certain maximum alignment. (For some linkers, the maximum supported alignment may be very very small.) If your linker is only able to align variables up to a maximum of 8-byte alignment, then specifying aligned(16) in an __attribute__ still only provides you with 8-byte alignment. See your linker documentation for further information.
packed
This attribute, attached to struct or union type definition, specifies that each member (other than zero-width bit-fields) of the structure or union is placed to minimize the memory required. When attached to an enum definition, it indicates that the smallest integral type should be used.
Specifying this attribute for struct and union types is equivalent to specifying the packed attribute on each of the structure or union members. Specifying the -fshort-enums flag on the command line is equivalent to specifying the packed attribute on all enum definitions.
In the following example struct my_packed_struct's members are packed closely together, but the internal layout of its s member is not packed—to do that, struct my_unpacked_struct needs to be packed too.
struct my_unpacked_struct
{
    char c;
    int i;
};

struct __attribute__ ((__packed__)) my_packed_struct
{
    char c;
    int i;
    struct my_unpacked_struct s;
};
You may only specify this attribute on the definition of an enum, struct or union, not on a typedef that does not also define the enumerated type, structure or union.
The problem which I was experiencing was specifically due to the use of packed. I attempted to simply add the aligned attribute to the structs and classes, but the error persisted. Only removing the packed attribute resolved the problem. For now, I am leaving the aligned attribute on them and testing to see if I find any improvements in the efficiency of the code as mentioned above, simply due to their being aligned on word boundaries. The application makes use of arrays of these structures, so perhaps there will be better performance, but only profiling the code will say for certain.