I didn't think it fair to post a comment on Fredrik Mörk's answer in this 2 year old post, so I thought I'd just ask it as a new question instead...
NB: This is not a criticism of the answer in any way; I'm simply trying to understand all this before delving into memory management / the Marshal class.
In that answer, the function GetByteArray allocates memory to each object within the given array, within a loop.
Would the GetByteArray function on the aforementioned post have benefited at all from allocating memory for the total size of the provided array:
Dim arrayBufferPtr = Marshal.AllocHGlobal(Marshal.SizeOf(<arrayElement>) * <array>.Count)
I just wonder if allocating the memory as shown in the answer causes any kind of fragmentation? Assuming there may be fragmentation, would there be much of an impact to be concerned with? Would allocating the memory in the way I've shown force you to call IntPtr.ToInt## (ToInt32/ToInt64 depending on x86/x64) to obtain pointer offsets from the overall allocation pointer, and therefore force you to check the underlying architecture to ensure the correct method is used*1, or is there a better way?
*1 I read elsewhere that calling the wrong IntPtr.ToInt## will cause overflow exceptions. What I mean by that statement is, would I use:
Dim anOffsetPtr As New IntPtr(arrayBufferPtr.ToInt## + (loopIndex * <arrayElementSize>))
I've read through a few articles on the VB.Net Marshal class and memory allocation, listed below, but if you know of any other good articles I'm all ears!
http://msdn.microsoft.com/en-us/library/system.runtime.interopservices.marshal.aspx
http://www.dotnetbips.com/articles/44bad06d-3662-41d3-b712-b45546cd8fa8.aspx
My favourite so far:
http://www.codeproject.com/KB/vb/Marshal.aspx
It is possible to allocate unmanaged memory for the whole array, and then copy every array element at an offset of SizeOf(arrayElement) * loopIndex. It is better to use the appropriate ToInt32/ToInt64 method for the current platform, like:
Dim anOffsetPtr As IntPtr
If IntPtr.Size = 4 Then
    ' 32-bit process
    anOffsetPtr = New IntPtr(arrayBufferPtr.ToInt32() + (loopIndex * arrayElementSize))
Else
    ' 64-bit process
    anOffsetPtr = New IntPtr(arrayBufferPtr.ToInt64() + (loopIndex * arrayElementSize))
End If
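For what it's worth, here's a minimal sketch of the whole pattern under discussion (MyStruct and myArray are hypothetical stand-ins; the Try/Finally just guarantees the buffer is freed):
Dim arrayElementSize As Integer = Marshal.SizeOf(GetType(MyStruct))
Dim arrayBufferPtr As IntPtr = Marshal.AllocHGlobal(arrayElementSize * myArray.Length)
Try
    For loopIndex As Integer = 0 To myArray.Length - 1
        ' ToInt64 widens safely on both platforms; on 32-bit you may
        ' prefer the ToInt32 branch above to avoid overflow edge cases
        Dim anOffsetPtr As New IntPtr(arrayBufferPtr.ToInt64() + (loopIndex * arrayElementSize))
        Marshal.StructureToPtr(myArray(loopIndex), anOffsetPtr, False)
    Next
    ' ...hand arrayBufferPtr to the unmanaged call here...
Finally
    Marshal.FreeHGlobal(arrayBufferPtr)
End Try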
I am using Cocoa/Objective-C and I am using NSBitmapImageRep getPixel:atX:y: to test whether R is 0 or 255. That is the only piece of data I need (the bitmap is only black and white).
I am noticing that this one function is the biggest draw on CPU power in my application, accounting for something like 95% of the overhead. Would it be faster for me to preload the bitmap into a two-dimensional integer array
NSUInteger pixels[1280][1024];
and read the values like so:
if(pixels[x][y]!=0){
//....do stuff
}
?
One thing that might be helpful could be converting the data into something more "dense". Since you're only interested in a single bit per pixel location, it doesn't make sense to store more than that. Storing more data than necessary means you get less usage out of your cache, which can really slow things down if the image is big and/or the accesses very random.
For instance, you could use the platform's largest "native" integer and pack in the pixels to use a single bit for each pixel. That will make the access a bit more involved since you need to do single-bit testing, but it might be a win.
You would do something like this:
uint32_t image[HEIGHT * ((WIDTH + 31) / 32)];
Then initialize this array by using the slow getter method, once per pixel. Then you can read out the value of a pixel using something like image[y * ((WIDTH + 31) / 32) + (x / 32)] & (1 << (x & 31)).
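As a concrete (untested) sketch of that layout, using the 1280×1024 size from the question:
#include <stdint.h>

#define WIDTH  1280
#define HEIGHT 1024
#define WORDS_PER_ROW ((WIDTH + 31) / 32)

static uint32_t image[HEIGHT * WORDS_PER_ROW]; /* one bit per pixel: ~160 KB */

/* record a pixel while walking the bitmap once with the slow getter */
static void set_pixel(int x, int y) {
    image[y * WORDS_PER_ROW + (x / 32)] |= (uint32_t)1 << (x & 31);
}

/* non-zero result means the pixel was set */
static uint32_t get_pixel(int x, int y) {
    return image[y * WORDS_PER_ROW + (x / 32)] & ((uint32_t)1 << (x & 31));
}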
I'm being vague ("might", "can" and so on) since it really depends on your access pattern, the size of the image, and other things. You should probably test it.
I'm not familiar with Objective-C or the NSBitmapImageRep object, but a reasonable guess is that the getPixel routine employs clipping to avoid reading outside of memory, which could be one possible slowdown (among other things).
Have a look inside it and see what it does.
(update)
Having learnt that this is Apple code, you probably can't take a look inside it.
However, the documentation for NSBitmapImageRep_Class seems to indicate that getPixel:atX:y: performs at least some type conversion magic. You could test whether the result is clipped by accessing a pixel outside of the image boundary and observing the result.
The bitmapData seems to be something you'd be interested in: get the pointer to the data, then read the array yourself avoiding type conversion or clipping.
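A rough sketch of that route (rep stands for your NSBitmapImageRep; this assumes a simple 8-bits-per-sample layout, so check bitsPerSample and bitmapFormat before relying on it):
unsigned char *data = [rep bitmapData];
NSInteger bytesPerRow = [rep bytesPerRow];
NSInteger samplesPerPixel = [rep samplesPerPixel];

// first (red) sample of the pixel at (x, y); 0 or 255 in a black-and-white image
unsigned char r = data[y * bytesPerRow + x * samplesPerPixel];
if (r != 0) {
    // ...do stuff
}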
I have a program in VB.net that uses a 3D array:
Private gridList(10, 900, 900) As GridElement
Now, I just used a memory profiler on it (because my application is having some major leak issues or something) and apparently this array (which held only 0-30 elements at the time of testing) is using 94% of the memory currently in use by my application. Even when it is empty it takes up huge amounts of memory.
My only assumption is that even empty arrays take up space! This puts a major blow into my plans!
My Question:
Is there any alternative that still gives me the same ability to map elements by three indices?
E.g. I've been using it like this:
Dim cGE as GridElement = gridList(3, 5, 7)
but doesn't hog so much memory for elements that don't exist?
Thanks!
Do Arrays take up space even without values in them in .net?
No. But your array has values in it, and hence takes up space. Declaring Private gridList(10, 900, 900) As GridElement allocates 11 × 901 × 901 ≈ 8.9 million reference slots up front; at 4 bytes per reference on x86 that is roughly 34 MB, even while every slot is still Nothing.
To avoid keeping a lot of elements in memory when you only access a few of all the possible elements, you need to use a so-called sparse array. In .NET, this is easiest implemented via a Dictionary, where the key in your case would be a three-element structure*, and the value would be a GridElement.
* If you’re using an up-to-date version of .NET, then you can model this via a Tuple(Of Integer, Integer, Integer)
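A minimal sketch of such a sparse grid (requires .NET 4 for Tuple; the names are illustrative):
Dim grid As New Dictionary(Of Tuple(Of Integer, Integer, Integer), GridElement)

' store only the cells that actually exist
grid(Tuple.Create(3, 5, 7)) = New GridElement()

' look a cell up without paying for the millions of empty ones
Dim cGE As GridElement = Nothing
If grid.TryGetValue(Tuple.Create(3, 5, 7), cGE) Then
    ' ...use cGE
End If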
I’ve got a massive memory leak in my program. This is the first time I’ve used IronPython in a tight loop, so I’m wondering if this could be the cause.
For Each column In m_Columns
Dim rawValue As String
rawValue = line.Substring(column.FileColumnIndex.Value - 1, column.FileColumnEndIndex.Value - column.FileColumnIndex.Value + 1)
If column.IncludeColumnFunction <> "" Then
Dim scope = m_ScriptEngine.CreateScope
scope.SetVariable("Line", line)
scope.SetVariable("Row", targetRow)
If Not CBool(m_ScriptEngine.Execute(column.IncludeColumnFunction, scope)) Then Continue For 'skip this column
End If
targetRow(column.DatabaseColumnName) = column.ParseValue(rawValue, targetRow)
Next
The string named column.IncludeColumnFunction never changes for a given column. It is usually something simple like “Row['Foo'] == 'Bar'”.
Can I/should I be caching the compiled function?
Should I be destroying the scope variable in some way when I’m done with it?
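By caching I mean something along these lines (an untested sketch against the Microsoft.Scripting.Hosting API: one Compile per column, then only a fresh scope per row):
' once per column:
Dim source = m_ScriptEngine.CreateScriptSourceFromString(column.IncludeColumnFunction, SourceCodeKind.Expression)
Dim compiled = source.Compile() ' CompiledCode, reusable across rows

' then, per row:
Dim scope = m_ScriptEngine.CreateScope()
scope.SetVariable("Line", line)
scope.SetVariable("Row", targetRow)
Dim include = CBool(compiled.Execute(scope))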
Nothing in particular stands out in the code sample that you put up there. I would say it's unlikely that this particular piece of code is causing the issue (although more context is needed to be definitive).
With memory leaks you'll track down the problem much faster by directly investigating the problem vs. digging through code. Direct investigations will often quickly tell you what is leaking and once you know what is leaking then you can start investigating code which manages that type of object.
There are lots of tools and articles out there to help track down memory issues in .Net. Here is a particularly good one.
http://blogs.msdn.com/b/tess/archive/2008/02/15/net-debugging-demos-lab-3-memory.aspx?wa=wsignin1.0
OK, I finally found the problem. It was inside the C function (CarbonTuner2), not the Obj-C method. Inside the function I was creating an array of the same size as the file, so if the file size was big, it created a really big array, and my guess is that when I called another function from there, the local variables were put on the stack, which caused the EXC_BAD_ACCESS. What I did then, instead of using a variable to declare the size of the array, was to put the number in directly. Then the code didn't even compile; it knew. The error was something like: "Array size too big". I guess working 20+ hours in a row isn't good XD But I am definitely going to look into tools other than step-by-step debugging to figure these ones out.
Thanks for your help. Here is the code; if you divide gFileByteCount by 2, you don't get the error anymore:
// ConverterController.h
#import <Cocoa/Cocoa.h>
#import "Converter.h"

@interface ConverterController : NSObject {
    UInt64 gFileByteCount;
}
- (IBAction)ProcessFile:(id)sender;
void CarbonTuner2(long numSampsToProcess, long fftFrameSize, long osamp);
@end

// ConverterController.m
#include "ConverterController.h"

@implementation ConverterController
- (IBAction)ProcessFile:(id)sender {
    UInt32 packets = gTotalPacketCount; // alloc a buffer of memory to hold the data read from disk
    gFileByteCount = 250000;
    long LENGTH = (long)gFileByteCount;
    CarbonTuner2(LENGTH, (long)8192 / 2, (long)4 * 2);
}
@end
void CarbonTuner2(long numSampsToProcess, long fftFrameSize, long osamp)
{
    long numFrames = numSampsToProcess / fftFrameSize * osamp;
    // variable-length array on the stack: with a large file this
    // exceeds the stack size, which is what triggered EXC_BAD_ACCESS
    float g2DFFTworksp[numFrames + 2][2 * fftFrameSize];
    double hello = sin(2.345);
}
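For the record, a heap allocation sidesteps the stack limit entirely. A minimal sketch (the 2-D workspace flattened into a single block and indexed by hand):
#include <stdlib.h>

void CarbonTuner2(long numSampsToProcess, long fftFrameSize, long osamp)
{
    long numFrames = numSampsToProcess / fftFrameSize * osamp;
    long rowLen = 2 * fftFrameSize;
    /* heap, not stack: size is limited by available memory */
    float *worksp = malloc((size_t)(numFrames + 2) * rowLen * sizeof(*worksp));
    if (worksp == NULL)
        return; /* allocation failed */
    /* what was g2DFFTworksp[row][col] becomes worksp[row * rowLen + col] */
    free(worksp);
}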
Your crash has nothing to do with incompatibilities between C and ObjC.
And as previous posters said, you don't need to include math.h.
Run your code under gdb, and see where the crash happens by using backtrace.
Are you sure you're not sending bad arguments to the math functions?
E.g. this causes BAD_ACCESS:
double t = cos(*(double *)NULL);
Objective C is built directly on C, and the C underpinnings can and do work.
For an example of using math.h and parts of standard library from within an Objective C module, see:
http://en.wikibooks.org/wiki/Objective-C_Programming/syntax
There are other examples around.
Some care is needed when passing variables around: use the C variables for the C and standard library calls, and don't mix the C data types and Objective C data types incautiously. You'll usually want a conversion there.
If that is not the case, then please consider posting the code involved, and the error(s) you are receiving.
And with all respect due to Mr Hellman's response, I've hit errors when I don't have the header files included; I prefer to include the headers. But then, I tend to dial the compiler diagnostics up a couple of notches, too.
For what it's worth, I don't include math.h in my Cocoa app but have no problem using math functions (in C).
For example, I use atan() and don't get compiler errors, or run time errors.
Can you try this without including math.h at all?
First, you should add your code to your question, rather than posting it as an answer, so people can see what you're asking about. Second, you've got all sorts of weird problems with your memory management here - gFileByteCount is used to size a bunch of buffers, but it's set to zero, and doesn't appear to get re-set anywhere.
err = AudioFileReadPackets(fileID, false, &bytesReturned, NULL, 0,
                           &packets, (Byte *)rawAudio);
So, at this point, you pass a zero-sized buffer to AudioFileReadPackets, which promptly overruns the heap, corrupting the value of who knows what other variables...
fRawAudio = malloc(gFileByteCount / (BITS / 8) * sizeof(fRawAudio));
Here's another, minor error - you want sizeof(*fRawAudio) here, since you're trying to allocate an array of floats, not an array of float pointers. Fortunately, on a 32-bit build those two types happen to be the same size, so it doesn't matter.
You should probably start with some example code that you know works (SpeakHere?), and modify it. I suspect there are other similar problems in the code you posted, but I don't have time to find them right now. At least get the rawAudio buffer appropriately sized and use the values returned from AudioFileReadPackets appropriately.
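In other words, something like this (with gFileByteCount set to the actual file size before it is used to size anything):
fRawAudio = malloc(gFileByteCount / (BITS / 8) * sizeof(*fRawAudio));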
I've got a Lua program that seems to be slower than it ought to be. I suspect the issue is that I'm adding values to an associative array one at a time and the table has to allocate new memory each time.
There did seem to be a table.setn function, but it fails under Lua 5.1.3:
stdin:1: 'setn' is obsolete
stack traceback:
[C]: in function 'setn'
stdin:1: in main chunk
[C]: ?
I gather from the Google searching I've done that this function was deprecated in Lua 5.1, but I can't find what (if anything) replaced the functionality.
Do you know how to pre-size a table in Lua?
Alternatively, is there some other way to avoid memory allocation when you add an object to a table?
Let me focus more on your question: "adding values to an associative array one at a time".
Tables in Lua are associative, but using them in array form (1..N) is optimized. They have two faces internally: an array part and a hash part.
So.. If you indeed are adding values associatively, follow the rules above.
If you are using indices 1..N, you can force a one-time size readjust by setting t[100000]= something. This should work until the limit of optimized array size, specified within Lua sources (2^26 = 67108864). After that, everything is associative.
p.s. The old 'setn' method handled the array part only, so it's no use for associative usage (ignore those answers).
p.p.s. Have you studied the general tips for keeping Lua performance high? I.e. be aware of table creation costs and reuse a table rather than create a new one, and use 'local print = print' and the like to avoid global accesses, as in the sketch below.
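A minimal sketch of the locals trick (table.insert cached once; the loop body never touches a global):
-- a local read is near-free, while each global read is a table lookup
local insert = table.insert

local t = {}
for i = 1, 1000000 do
    insert(t, i)
end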
static int new_sized_table( lua_State *L )
{
    int asize = lua_tointeger( L, 1 );  /* pre-sized array slots */
    int hsize = lua_tointeger( L, 2 );  /* pre-sized hash slots */
    lua_createtable( L, asize, hsize ); /* pushes the new table */
    return( 1 );                        /* one return value */
}
...
lua_pushcfunction( L, new_sized_table );
lua_setglobal( L, "sized_table" );      /* callable from Lua as sized_table(asize, hsize) */
Then, in Lua,
array = function(size) return sized_table(size,0) end
a = array(10)
As a quick hack to get this running you can add the C to lua.c.
I don't think you can - it's not an array, it's an associative array, like a perl hash or an awk array.
http://www.lua.org/manual/5.1/manual.html#2.5.5
I don't think you can preset its size meaningfully from the Lua side.
If you're allocating the array on the C side, though, the
void lua_createtable (lua_State *L, int narr, int nrec);
may be what you need.
Creates a new empty table and pushes it onto the stack. The new table has space pre-allocated for narr array elements and nrec non-array elements. This pre-allocation is useful when you know exactly how many elements the table will have. Otherwise you can use the function lua_newtable.
There is still an internal luaL_setn, and you can compile Lua so that it is exposed as table.setn. But it looks like it won't help, because the code doesn't seem to do any pre-extending.
(Also, as noted above, setn is related to the array part of a Lua table, and you said that you are using the table as an associative array.)
The good part is that even if you add the elements one by one, Lua does not grow the array one slot at a time. Instead it uses a more reasonable strategy: the array part grows geometrically, so you still get multiple allocations for a larger array, but the performance is much better than getting a new allocation on each insert.
Although this doesn't answer your main question, it answers your second question:
Alternatively, is there some other way to avoid memory allocation when you add an object to a table?
If you're running Lua in a custom application, as I guess you are since you're doing C coding, I suggest you replace the allocator with Loki's small-object allocator. It reduced my memory allocations 100+ fold, improved performance by avoiding round trips to the kernel, and made me a much happier programmer :)
Anyway, I tried other allocators, but they were more general and provide guarantees that don't benefit Lua applications (such as thread safety, large-object allocation, etc.). Also, writing your own small-object allocator can be a good week of programming and debugging to get just right; after searching for an available solution, Loki's allocator was the easiest and fastest I found for this problem.
If you declare your table in code with a specific amount of items, like so:
local tab = { 0, 1, 2, 3, 4, 5, ... , n }
then Lua will create the table with memory already allocated for at least n items.
However, Lua uses the 2x incremental memory allocation technique, so adding an item to a table should rarely force a reallocation.