Definition:
As defined here, CGGetDisplaysWithPoint takes 4 parameters:
A CGPoint object
An int32 representing the maximum number of displays returned
A mutable array passed by reference, which will be filled with the displayIDs found.
An int32 representing the matching display count
Syntax:
CGError CGGetDisplaysWithPoint(CGPoint point, uint32_t maxDisplays, CGDirectDisplayID *displays, uint32_t *matchingDisplayCount);
This is fine, and I can get this function working; however, I am quite confused as to how I should deal with the maxDisplays parameter.
As I understand it, if I set maxDisplays to 5 then if someone has 6 displays, there is a 1/6 chance that a randomly selected pixel will find no displays?
So do we just set maxDisplays to something unrealistic, like 99, and release the array afterwards? What's the point in this argument?
The point of the argument is to prevent the function from writing past the end of your array. You have to tell it the capacity of the array. Note that the displays parameter is neither a Cocoa nor Core Foundation mutable array object. It's a C-style array. It's "mutable" in the sense that it's not "const", but it's not an object that manages its own storage. You are responsible for managing that storage and must communicate its capacity to any function that is intended to store data in it (or otherwise guarantee that such function won't overrun it).
So, your question should really be how to decide on the capacity of the array. There are two basic approaches (both sketched in C after this list):
1) Call the function passing NULL for the displays parameter and any arbitrary value (best to use 0) for maxDisplays. As documented, when displays is NULL, maxDisplays is ignored and the function outputs via matchingDisplayCount the number of displays whose bounds contain the given point. Then, allocate an array with (at least) that many elements to use to receive the display IDs and call the function again, passing that array for displays and its capacity for maxDisplays.
2) Use an array with capacity of 32. It's not explicitly documented but it's implicit in the API that that's the maximum number of supported displays. A display ID can be converted to an OpenGL display mask using CGDisplayIDToOpenGLDisplayMask(). The type CGOpenGLDisplayMask is used to hold OpenGL display masks. It is defined as uint32_t, a 32-bit value. Therefore, there can be at most 32 active displays.
This technique is used in some Apple docs, like here, here, here, and here. That last one even makes a direct connection between the number of bits in CGOpenGLDisplayMask and the maximum number of displays.
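For illustration, here is a minimal C sketch of both approaches (the function name find_displays is made up, and error handling is trimmed to the essentials):

#include <CoreGraphics/CoreGraphics.h>
#include <stdlib.h>

void find_displays(CGPoint point)
{
    /* Approach 1: ask for the count first, then allocate exactly enough. */
    uint32_t count = 0;
    if (CGGetDisplaysWithPoint(point, 0, NULL, &count) == kCGErrorSuccess &&
        count > 0) {
        CGDirectDisplayID *displays = malloc(count * sizeof *displays);
        CGGetDisplaysWithPoint(point, count, displays, &count);
        /* ... use displays[0 .. count-1] ... */
        free(displays);
    }

    /* Approach 2: a fixed 32-element array covers the implicit maximum. */
    CGDirectDisplayID fixed[32];
    uint32_t matching = 0;
    CGGetDisplaysWithPoint(point, 32, fixed, &matching);
    /* ... use fixed[0 .. matching-1] ... */
}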
So I have an external C function that returns a pointer to an array. I'm trying to figure out how to convert the pointer to something that can be displayed on the screen using the latest version of LabVIEW (2019; assume I have all the toolkits).
The C function signature imports fine and is designed to return 16-bit images.
STATUS DemoImage(unsigned short** ptr, int64* rows, int64* columns, int64 image_idx)
with ptr eventually receiving a pointer to the memory location of the 16-bit image. rows and columns work as expected.
What's the name of the controls that convert the data type to something that can be displayed? I'd also appreciate answers that only address how to display 8-bit images, as I can convert them in my own library if worst comes to worst.
There is a vi.lib VI not on the palette that you can use: GetValueByPointer.
Detailed walkthrough
For the step-by-step explanation, see this NI document.
2D arrays are represented as an array of arrays. Since an array is really a pointer, a 2D array is a pointer to an array of pointers, where each pointer points to an individual row of the array. So in order to dereference a 2D array, you must first dereference the individual pointer to each row, and then dereference the individual elements in that row.
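In C terms, the access pattern looks like this (a hedged sketch; sum_pixels is a made-up name, and the image is assumed to come from DemoImage above):

#include <stdint.h>

/* Walk a 2D image supplied as a pointer to an array of row pointers. */
uint64_t sum_pixels(unsigned short **img, int64_t rows, int64_t cols)
{
    uint64_t sum = 0;
    for (int64_t r = 0; r < rows; r++) {
        unsigned short *row = img[r];   /* first dereference: row pointer */
        for (int64_t c = 0; c < cols; c++)
            sum += row[c];              /* second dereference: one pixel  */
    }
    return sum;
}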
Download examples
For a download with examples, see this one instead, section 4.d.
Returning values by reference (pass by ref)
Function: void ReturningValuesByReference_2DArrayOfIntegers (int rows, int cols, int ***newArray);
VI: Returning Values By Reference 2D Array Of Integers Complete.vi
I'm trying to understand the names of the items in the VkFormat enum, and so far I think I understand the structure of the names of all the (non-block) formats, but I can't figure out what it means when they have a suffix of PACK8, PACK16, or PACK32. If I add up the channel sizes, they always add up to 8, 16, or 32, nothing irregular, so I don't understand what it would mean to bit-pack these values, since they seem to be 100% efficient, using all their bits.
As usual, the documentation is not very helpful, just saying the format is packed without saying what that means.
The PACK suffixes mean exactly what the specification says they mean:
whole texels or attributes are stored in a single data element, rather than individual components occupying a single data element
Though if you find that too confusing, you could just look at the actual format descriptions. Vulkan goes into excruciating detail about them, to the point of needless repetition.
The difference between VK_FORMAT_B8G8R8A8_UNORM and a packed format like VK_FORMAT_A8B8G8R8_UNORM_PACK32 is the same as the difference between a uint8_t[4] and a uint32_t. One is an array ("individual components"), while the other is a single value ("single data element") made up of smaller values.
If you have a uint8_t color[4] array, which stores B8G8R8A8, then color[0] stores the blue component. The order of the components in the array is defined by the order of the components in the format's name.
If you have a uint32_t color value, which stores B8G8R8A8, then (color & 0xFF000000) >> 24 will retrieve the blue component. The highest byte is the first, followed by the next highest and so forth.
The reason the packed-vs-not-packed distinction matters is endianness. Arrays of bytes don't have endian issues, but values packed into 16 or 32 bits do. The byte order of the packed formats is always assumed to be the native endianness of the host.
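As a plain-C illustration (the function names are made up), extracting the blue component from each representation of a B8G8R8A8 texel:

#include <stdint.h>

/* Array-style B8G8R8A8: one byte per component, in name order. */
uint8_t blue_from_array(const uint8_t texel[4])
{
    return texel[0];                    /* B is the first-named component */
}

/* Packed B8G8R8A8 in a single uint32_t: the first-named component
   occupies the most significant bits, so B sits in bits 24..31. */
uint8_t blue_from_packed(uint32_t texel)
{
    return (uint8_t)((texel & 0xFF000000u) >> 24);
}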
I clearly see the need to deepen my knowledge of RenderScript memory allocation and data types. I'm still confused by the sheer number of data types, by finding the correct corresponding types on either side (allocations and elements), and by when to pass the forEach the input, the output, or both. Therefore I will read and re-read the documentation, which is really not bad, but it takes some time to build the necessary "intuition" for using it correctly. For now, please help me with this basic one (and I will return later with hopefully less stupid questions...). I need a very simple kernel that takes an ARGB color Bitmap and returns an integer array of gray values. My attempt was the following:
#pragma version(1)
#pragma rs java_package_name(com.example.xxxx)
#pragma rs_fp_relaxed
uint __attribute__((kernel)) grauInt(uchar4 in) {
    uint gr = (uint)(0.2125*in.r + 0.7154*in.g + 0.0721*in.b);
    return gr;
}
and Java side:
int[] data1 = new int[width*height];
ScriptC_gray graysc;
graysc=new ScriptC_gray(rs);
Type.Builder TypeOut = new Type.Builder(rs, Element.U8(rs));
TypeOut.setX(width).setY(height);
Allocation outAlloc = Allocation.createTyped(rs, TypeOut.create());
Allocation inAlloc = Allocation.createFromBitmap(rs, bmpfoto1,
Allocation.MipmapControl.MIPMAP_NONE, Allocation.USAGE_SCRIPT);
graysc.forEach_grauInt(inAlloc, outAlloc);
outAlloc.copyTo(data1);
This crashed with the message cannot locate symbol "convert_uint". What's wrong with this conversion? Is the code otherwise correct?
UPDATE: Isn't that ridiculous? I can't get this "easy one" to run, even after 2 hours of trying. I still struggle with the different Element and variable types. Let's recap: input is a Bitmap, output is an int[] array. So why doesn't it work when I use U8 in the Java-side out-allocation, createFromBitmap in the Java-side in-allocation, uchar4 as the kernel input and uint as the kernel output (RSRuntimeException: Type mismatch with U32)?
There is no convert_uint() function. How about simple casting? Other than that, the code looks alright (assuming width and height have correct values).
UPDATE: I have just noticed that you allocate Element.I32 (i.e. signed integer type), but return uint from the kernel. These should match. And in any case, unless you need more than 8-bit precision, you should be able to fit your result in U8.
UPDATE: If you are changing the output type, make sure you change it in all places, e.g. if the kernel returns a uint, the allocation should use U32. If the kernel returns a char, the allocation should use I8. And so on...
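For instance, if 8-bit precision is enough, a consistent pairing could look like the sketch below (the kernel name grauU8 is made up; on the Java side you would then build the output Type with Element.U8 and copy the result into a byte[] rather than an int[]):

#pragma version(1)
#pragma rs java_package_name(com.example.xxxx)
#pragma rs_fp_relaxed

// Returns one gray byte per pixel; pair this with an Element.U8
// output Allocation of the same width/height on the Java side.
uchar __attribute__((kernel)) grauU8(uchar4 in) {
    float gr = 0.2125f*in.r + 0.7154f*in.g + 0.0721f*in.b;
    return (uchar)gr;
}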
You can't use a Uint[] directly because the input Bitmap is actually 2-dimensional. Can you create the output Allocation with a proper width/height and try that? You should still be able to extract the values into a Java array when you are finished.
This is a follow-up to my previous question:
What are the digits in an ObjC method type encoding string?
Say there is an encoding:
v24#0:4:8#12B16#20
How are those numbers calculated? B encodes a bool, so it should occupy just 1 byte (not 4 bytes). Does it have something to do with "alignment"? What is the size of void?
Is it correct to calculate the numbers as follows? Ask sizeof on every item and round up the result to multiple of 4? And the first number becomes the sum of all the other ones?
The numbers were used in the m68K days to denote stack layout. That is, you could literally decode the method signature and, for just about all types, know exactly which bytes at what offset within the stack frame you could diddle to get/set arguments.
This worked because the m68K's ABI was [IIRC -- been a long, long time] entirely stack-based argument/return passing. There wasn't anything shoved into registers across call boundaries.
However, as Objective-C was ported to other platforms, always-on-the-stack was no longer the calling convention. Arguments and return values are often passed in registers.
Thus, those offsets are now useless. As well, the type encoding used by the compiler is no longer complete (because it never was terribly useful) and there will be types that won't be encoded. Not to mention that encoding some C++ templatized types yields method type encoding strings that can be many kilobytes in size (I think the record I ran into was around 30K of type information).
So, no, it isn't correct to use sizeof() to generate the numbers because they are effectively meaningless to everything. The only reason why they still exist is for binary compatibility; there are bits of esoteric code here and there that still parse the type encoding string with the expectation that there will be random numbers sprinkled here and there.
Note that there are vestiges of API in the ObjC runtime that still lead one to believe that it might be possible to encode/decode stack frames on the fly. It really isn't as the C ABI doesn't guarantee that argument registers will be preserved across call boundaries in the face of optimization. You'd have to drop to assembly and things get ugly really really fast (>shudder<).
The full encoding string is constructed (in clang) by the method ASTContext::getObjCEncodingForMethodDecl, which you can find in lib/AST/ASTContext.cpp.
The method that does the size rounding is ASTContext::getObjCEncodingTypeSize, in the same file. It forces each size to be at least the size of an int. On all of Apple's current platforms, an int is 4 bytes.
The stack frame size and argument offsets are calculated by the compiler. I'm actually trying to track this down in the Clang source myself this week; it possibly has something to do with CodeGenTypes::arrangeObjCMessageSendSignature. (Looks like Rob just made my life a lot easier!)
The first number is the sum of the others, yes -- it's the total space occupied by the arguments. To get the size of the type represented by an ObjC type encoding in your code, you should use NSGetSizeAndAlignment().
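As a minimal sketch (the encoding strings here are arbitrary examples), NSGetSizeAndAlignment() parses one type encoding at a time and reports its real size and alignment:

#import <Foundation/Foundation.h>

int main(void) {
    NSUInteger size = 0, align = 0;
    // Parses the first type in the string and returns a pointer just
    // past it, so a multi-type encoding can be walked in a loop.
    const char *rest = NSGetSizeAndAlignment("B", &size, &align);
    NSLog(@"size=%lu align=%lu rest=%s",
          (unsigned long)size, (unsigned long)align, rest);
    return 0;
}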
I have a COM method where I want to pass information about the 7 days of the week, encoded as one unsigned long per day (to represent the "selected" hours of that day of the week)
[id(5)] HRESULT GetSchedule([out, retval] SAFEARRAY(unsigned long)* days);
[id(6)] HRESULT SetSchedule([in] SAFEARRAY(unsigned long) days);
This is one way to do it, but it would not be obvious to the COM client that it has to pass an array of 7 elements, where each element is a day (let alone whether the ordering starts on Monday or Sunday, which is not explicit in the interface).
Is there a way to make the size of the input array explicit?
I know it could also be better to split this into 7 different methods, one for every day.
For dispatch-compatible automation clients, where you have to use SAFEARRAYs, this is not possible. A SAFEARRAY knows its own size and can be marshalled safely.
The best you can do in case of error is to return E_INVALIDARG and set an IErrorInfo describing the issue. Also, mention the expected layout in your documentation.
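A minimal C sketch of that validation (the helper name ValidateScheduleArray and the hard-coded 7 are assumptions for illustration, to be called at the top of SetSchedule):

#include <windows.h>
#include <oleauto.h>

/* Hypothetical helper: accept only a 1-dimensional SAFEARRAY of
   exactly 7 elements, one per weekday. */
static HRESULT ValidateScheduleArray(SAFEARRAY *days)
{
    LONG lo = 0, hi = 0;
    if (days == NULL || SafeArrayGetDim(days) != 1)
        return E_INVALIDARG;
    if (FAILED(SafeArrayGetLBound(days, 1, &lo)) ||
        FAILED(SafeArrayGetUBound(days, 1, &hi)))
        return E_INVALIDARG;
    return (hi - lo + 1 == 7) ? S_OK : E_INVALIDARG;
}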
For IUnknown binding, you could use raw pointers with a size_is declaration in the interface, but I doubt that would improve things.
edit
I know it could also be better to split this into 7 different methods, one for every day.
Not necessarily, that would be painful to call in some circumstances.
There is a race condition with other clients trying to change a single value; you may need to make provisions for that.
You could also set it up as an indexed property (i.e. a propget/propput method pair that takes an additional "weekday" parameter). Again, you would have to validate that the weekday parameter is in the valid range.
This would be a bit more obvious as an interface, IMO, but if the object could be remote, a method that sets all weekdays at once with a single server round trip is always welcome.
edit The MSDN page even recommends using a fixed size for arrays, like this:
// MIDL:
HRESULT SetWeekdayNames([in] BSTR valNames[8]);
instead of the size_is variant:
HRESULT SetWeekdayNames([in, size_is(8)] BSTR * valNames);
The respective C++ declaration would probably be
HRESULT SetWeekdayNames(BSTR * valNames);
in both cases.
You should add a size parameter and return an error if you don't get the number you're expecting. E_INVALIDARG should send the caller looking for the documentation.
If you were marshalling, I suspect you can use a constant for size_is or length_is, but that's sort of not the situation here.