Finding that code examples for the NVML API for NVIDIA cards are really sparse.
Before any NVML calls could be made, CMake required:
target_link_libraries(04_nvml_testing "/usr/lib/x86_64-linux-gnu/libnvidia-ml.so")
Code snippet:
nvmlReturn_t result;
unsigned int temp;
nvmlDevice_t device;

// NVML must be initialized before any other call
result = nvmlInit();

nvmlUnit_t unit;
unsigned int myint;

// This is the call that fails (see NVML_ERROR_INVALID_ARGUMENT below)
result = nvmlUnitGetHandleByIndex(0, &unit);
I can read the GPU temperature fine, but getting the nvmlUnit_t handle of the card is required before a lot of API calls can be made.
This code block, run inside CLion, is returning NVML_ERROR_INVALID_ARGUMENT.
Also, there are references to 'available to s-Series devices', whatever that is.
This is not a full answer, but here is the API reference for this function.
nvmlReturn_t nvmlUnitGetHandleByIndex (unsigned int index, nvmlUnit_t *unit)
Parameters
index: The index of the target unit, >= 0 and < unitCount
unit: Reference in which to return the unit handle
Returns
‣ NVML_SUCCESS if unit has been set
‣ NVML_ERROR_UNINITIALIZED if the library has not been successfully initialized
‣ NVML_ERROR_INVALID_ARGUMENT if index is invalid or unit is NULL
‣ NVML_ERROR_UNKNOWN on any unexpected error
Description
Acquire the handle for a particular unit, based on its index.
For S-class products. << what is an S-class product?
Valid indices are derived from the unitCount returned by
nvmlUnitGetCount(). For example, if unitCount is 2 the valid indices
are 0 and 1, corresponding to UNIT 0 and UNIT 1.
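This is not a full answer either, but for an ordinary GPU the device-handle path is the one that applies; the unit-handle calls above are, per the quoted reference, only for S-class hardware. A minimal sketch (error handling abbreviated), using only standard NVML calls (nvmlInit, nvmlDeviceGetHandleByIndex, nvmlDeviceGetTemperature, nvmlShutdown):

#include <stdio.h>
#include <nvml.h>

int main(void) {
    // NVML must be initialized before any other call
    nvmlReturn_t result = nvmlInit();
    if (result != NVML_SUCCESS) {
        printf("nvmlInit failed: %s\n", nvmlErrorString(result));
        return 1;
    }

    // For a normal GPU, use a device handle (index 0 = first GPU), not a unit handle
    nvmlDevice_t device;
    result = nvmlDeviceGetHandleByIndex(0, &device);
    if (result == NVML_SUCCESS) {
        unsigned int temp;
        result = nvmlDeviceGetTemperature(device, NVML_TEMPERATURE_GPU, &temp);
        if (result == NVML_SUCCESS)
            printf("GPU temperature: %u C\n", temp);
    }

    nvmlShutdown();
    return 0;
}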
TL;DR: Using native_log2() produces non-deterministic behavior in an OpenCL kernel, while using log2() produces deterministic behavior. Why is this happening?
So I have the function below acting as a helper function for an OpenCL kernel, and I was using the native_ version of log2 (native_log2) to improve performance.
When I was comparing the results produced by the kernel and by the original program, I realized that in most cases the kernel produces the right values; however, sometimes it produces an incorrect value (roughly 30 incorrect values in 500k function calls). VERY IMPORTANT: The errors are not always on the same computations. I am processing multiple input files, and the errors seem to occur randomly in different sets of files with different runs. That is, the results are non-deterministic.
After some tests I narrowed the problem down to the function below and found that swapping native_log2 for log2 produces the correct value 100% of the time. All those typecasts look ugly, but the log2() and floor() functions are only compatible with double/float, while my input/output must be integers.
My device is an NVIDIA 940MX GPU and it only supports OpenCL 1.2. The OpenCL 1.2 documentation states:
A subset of functions from table 6.8 that are defined with the native_ prefix. These functions may map to one or more native device instructions and will typically have better performance compared to the corresponding functions (without the native_ prefix) described in table 6.8. The accuracy (and in some cases the input range(s)) of these functions is implementation-defined.
Clearly I am supposed to expect some errors when using native_ functions, but the documentation is not clear about the determinism of the errors I may be encountering.
Can someone give me directions on why I am facing this strange behavior?
int xGetExpGolombNumberOfBits(int value){
    unsigned int uiLength2 = 1;
    unsigned int uiTemp2 = select((unsigned int)( value << 1 ), ( (unsigned int)( -value ) << 1 ) + 1, value <= 0);

    // These magic numbers (7 and 128) are substituting two constants for the sake of clarity
    while( uiTemp2 > 128 )
    {
        uiLength2 += ( 7 << 1 );
        uiTemp2 >>= 7;
    }

    return uiLength2 + (((int)floor(native_log2((float)uiTemp2))) << 1);
}
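Worth noting: since uiTemp2 is guaranteed to be a non-zero unsigned integer at that point (the select always produces at least 1), one way to sidestep both the float typecasts and the native_log2 accuracy question entirely is the integer clz built-in, for which 31 - clz(x) is exactly floor(log2(x)). A sketch of that variant (same function, only the last line changed):

int xGetExpGolombNumberOfBits(int value){
    unsigned int uiLength2 = 1;
    unsigned int uiTemp2 = select((unsigned int)( value << 1 ), ( (unsigned int)( -value ) << 1 ) + 1, value <= 0);

    while( uiTemp2 > 128 )
    {
        uiLength2 += ( 7 << 1 );
        uiTemp2 >>= 7;
    }

    // uiTemp2 >= 1 here, so 31 - clz(uiTemp2) equals floor(log2(uiTemp2)) exactly
    return uiLength2 + ((int)(31 - clz(uiTemp2)) << 1);
}

Being integer-only, this version cannot suffer from the implementation-defined accuracy of native_ functions.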
So I tried using code from another post around here to see if I could use it. It was code meant to use a potentiometer to move a servo motor, but when I attempted to compile it, it gave the error above: No operator "=" matches these operands in "Servo_Project.cpp". How do I go about fixing this error?
Just in case I'll say this: the board I was trying to compile the code for was a NUCLEO-L476RG; the post I mentioned used a Nucleo L496ZG board and a Tower Pro Micro Servo 9G.
#include "mbed.h"
#include "Servo.h"
Servo myservo(D6);
AnalogOut MyPot(A0);
int main() {
    float PotReading;
    PotReading = MyPot.read();

    while(1) {
        for(int i=0; i<100; i++) {
            myservo = (i/100);
            wait(0.01);
        }
    }
}
This line:
myservo = (i/100);
Is wrong in a couple of ways. First, i/100 will always be zero - integer division truncates in C++. Second, there's no = operator that allows an integer value to be assigned to a Servo object. You need to invoke some kind of Servo method instead, likely write().
myservo.write(SOMETHING);
The SOMETHING should be the position or speed of the servo you're trying to get working. See the Servo class reference for an explanation. Your code tries to use fractions from 0-1 and that isn't going to work - the Servo wants a position/speed between 0 and 180.
You should look in the Servo.h header to see what member functions and operators are implemented.
Assuming what you are using is this, it does have:
Servo& operator= (float percent);
Although note that the parameter is float and you are passing an int (the parameter is also in the range 0.0 to 1.0 - so not "percent" as its name suggests - so be wary, both the documentation and the naming are poor). You should have:
myservo = i/100.0f;
However, even though i / 100 would produce zero for all i in the loop, that does not explain the error, since an implicit cast should be possible - even if clearly undesirable. You should look in the actual header you are using to see if the operator= is declared - possibly you have the wrong file, or a different version, or just an entirely different implementation that happens to use the same name.
I also notice that if you look in the header, there is no documentation mark-up for this function and the Servo& operator= (Servo& rhs); member is not documented at all - hence the confusing automatically generated "Shorthand for the write and read functions." on the Servo doc page when the function shown is only one of those things. It is possible it has been removed from your version.
Given that the documentation is incomplete and that the operator= looks like an afterthought, the simplest solution is to use the read() / write() members directly in any case. Or implement your own Servo class - it appears to be only a thin wrapper/facade over the PwmOut class in any case. Since that is actually part of mbed rather than user-contributed code of unknown quality, you may be on firmer ground.
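For illustration, a minimal sketch of the loop using write() directly, assuming the community Servo class discussed above (whose parameter is a normalized 0.0-1.0 value) and assuming the potentiometer is meant to be an input, so AnalogIn rather than AnalogOut:

#include "mbed.h"
#include "Servo.h"

Servo myservo(D6);
AnalogIn pot(A0);    // potentiometer as an input (the original used AnalogOut)

int main() {
    while (1) {
        // AnalogIn::read() returns 0.0-1.0, which is the range write() expects here
        myservo.write(pot.read());
        wait(0.01);
    }
}

If you would rather keep the original for loop, myservo.write(i / 100.0f) avoids both the integer division and the missing operator.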
I suspect this may be a bug in Rakudo, but I just started playing with Perl 6 today, so there's a good chance I'm just making a mistake. In this simple program, declaring a typed array inside a sub appears to make the Perl 6 compiler angry. Removing the type annotation on the array gets rid of the compiler error.
Here's a simple prime number finding program:
#!/usr/bin/env perl6
use v6;
sub primes(int $max) {
    my int @vals = ^$max;   # forcing a type on vals causes compiler error (bug?)
    for 2..floor(sqrt($max)) -> $i {
        next if not @vals[$i];
        @vals[2*$i, 3*$i ... $max-1] = 0;
    }
    return ($_ if .Bool for @vals)[1..*];
}
say primes(1000);
On Rakudo Star 2016.07.1 (from the Fedora 24 repos), this program gives the following error:
[sultan@localhost p6test]$ perl6 primes.p6
Cannot unbox a type object
in sub primes at primes.p6 line 8
in block <unit> at primes.p6 line 13
If I remove the type annotation on the vals array, the program works correctly:
...
my @vals = ^$max; # I removed the int type
...
Am I making a mistake in my usage of Perl 6, or is this a bug in Rakudo?
There's a potential error in your code that's caught by type checking
The error message you got draws attention to line 8:
@vals[2*$i, 3*$i ... $max-1] = 0;
This line assigns the list of values on the right of the = to the list of elements on the left.
The first element in the list on the left, @vals[2*$i], gets a zero.
You didn't define any more values on the right so the rest of the elements on the left are assigned a Mu. Mus work nicely as placeholders for elements that do not have a specific type and do not have a specific value. Think of a Mu as being, among other things, like a Null, except that it's type safe.
You get the same scenario with this golfed version:
my @vals;
@vals[0,1] = 0; # assigns 0 to @vals[0], Mu to @vals[1]
As you've seen, everything works fine when you do not specify an explicit type constraint for the elements of the @vals array.
This is because the default type constraint for array elements is Mu. So assigning a Mu to an element is fine.
If you felt it tightened up your code you could explicitly assign zeroes:
@vals[2*$i, 3*$i ... $max-1] = 0 xx Inf;
This generates a (lazy) infinite list of zeroes on the RHS so that zero is assigned to each of the list of elements on the LHS.
With just this change your code will work even if you specify a type constraint for @vals.
If you don't introduce the xx Inf but do specify an element type constraint for @vals that isn't Mu, then your code will fail a type check if you attempt to assign a Mu to an element of @vals.
The type check failure will come in one of two flavors depending on whether you're using object types or native types.
If you specify an object type constraint (eg Int):
my Int @vals;
@vals[0,1] = 0;
then you get an error something like this:
Type check failed in assignment to @vals; expected Int but got Mu (Mu)
If you specify a native type constraint (eg int rather than Int):
my int @vals;
@vals[0,1] = 0;
then the compiler first tries to produce a suitable native value from the object value (this is called "unboxing") before attempting a type check. But there is no suitable native value corresponding to the object value (Mu). So the compiler complains that it can not even unbox the value. Finally, as hinted at at the start, while Mu works great as a type safe Null, that's just one facet of Mu. Another is that it's a "type object". So the error message is Cannot unbox a type object.
I clearly see the need to deepen my knowledge of RenderScript memory allocation and data types. I'm still confused by the sheer number of data types and by finding the correct corresponding types on either side (allocations and elements), or when to pass the forEach the input, the output, or both, etc. Therefore I will read and re-read the documentation, which is really not bad - but it needs some time to build the necessary "intuition" for using it correctly. For now, please help me with this basic one (and I will return later with hopefully less stupid questions...). I need a very simple kernel that takes an ARGB color Bitmap and returns an integer array of gray values. My attempt was the following:
#pragma version(1)
#pragma rs java_package_name(com.example.xxxx)
#pragma rs_fp_relaxed
uint __attribute__((kernel)) grauInt(uchar4 in) {
    uint gr = (uint)(0.2125*in.r + 0.7154*in.g + 0.0721*in.b);
    return gr;
}
and Java side:
int[] data1 = new int[width*height];
ScriptC_gray graysc;
graysc=new ScriptC_gray(rs);
Type.Builder TypeOut = new Type.Builder(rs, Element.U8(rs));
TypeOut.setX(width).setY(height);
Allocation outAlloc = Allocation.createTyped(rs, TypeOut.create());
Allocation inAlloc = Allocation.createFromBitmap(rs, bmpfoto1,
Allocation.MipmapControl.MIPMAP_NONE, Allocation.USAGE_SCRIPT);
graysc.forEach_grauInt(inAlloc, outAlloc);
outAlloc.copyTo(data1);
This crashed with the message cannot locate symbol "convert_uint". What's wrong with this conversion? Is the code otherwise correct?
UPDATE: isn't that ridiculous? I can't get this "easy one" to run, even after 2 hours of trying. I still struggle with the different Element and variable types. Let's recap: input is a Bitmap, output is an int[] array. So why doesn't it work when I use U8 in the Java-side out-allocation, createFromBitmap for the Java-side in-allocation, uchar4 as the kernel input and uint as the kernel output (RSRuntimeException: Type mismatch with U32)?
There is no convert_uint() function. How about simple casting? Other than that, the code looks alright (assuming width and height have correct values).
UPDATE: I have just noticed that you allocate Element.I32 (i.e. signed integer type), but return uint from the kernel. These should match. And in any case, unless you need more than 8-bit precision, you should be able to fit your result in U8.
UPDATE: If you are changing the output type, make sure you change it in all places, e.g. if the kernel returns an uint, the allocation should use U32. If the kernel returns a char, the allocation should use I8. And so on...
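For illustration, a minimal sketch of one consistent pairing, assuming 8-bit precision is enough: the kernel returns uchar, the Java-side output Type is built with Element.U8 and the bitmap's width/height (as already done above), and the result is copied into a byte[] instead of an int[]. The kernel name grauU8 is just an example:

#pragma version(1)
#pragma rs java_package_name(com.example.xxxx)
#pragma rs_fp_relaxed

// Returns uchar so the output matches an Element.U8 allocation
uchar __attribute__((kernel)) grauU8(uchar4 in) {
    // weighted gray value; the result stays within 0..255, so it fits in a uchar
    return (uchar)(0.2125f*in.r + 0.7154f*in.g + 0.0721f*in.b);
}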
You can't use a Uint[] directly because the input Bitmap is actually 2-dimensional. Can you create the output Allocation with a proper width/height and try that? You should still be able to extract the values into a Java array when you are finished.
Not very familiar with in-line assembly to begin with, and much less with that of the Blackfin processor. I am in the process of migrating a legacy C application over to C++, and ran into a problem this morning regarding the following routine:
//
void clear_buffer ( short * buffer, int len ) {
    __asm__ (
        "/* clear_buffer */\n\t"
        "LSETUP (1f, 1f) LC0=%1;\n"
        "1:\n\t"
        "W [%0++] = %2;"
        :: "a" ( buffer ), "a" ( len ), "d" ( 0 )
        : "memory", "LC0", "LT0", "LB0"
    );
}
I have a class that contains an array of shorts that is used for audio processing:
class AudProc
{
    enum { buffer_size = 512 };
    short M_samples[ buffer_size * 2 ];
    // remaining part of class omitted for brevity
};
Within the AudProc class I have a method that calls clear_buffer, passing it the samples array:
clear_buffer ( M_samples, sizeof ( M_samples ) / 2 );
This generates a "Bus Error" and aborts the application.
I have tried making the array public, and that produces the same result. I have also tried making it static; that allows the call to go through without error, but no longer allows for multiple instances of my class as each needs its own buffer to work with. Now, my first thought is, it has something to do with where the buffer is in memory, or from where it is being accessed. Does something need to be changed in the in-line assembly to make this work, or in the way it is being called?
Thought that this was similar to what I was trying to accomplish, but it is using a different dialect of asm, and I can't figure out if it is the same problem I am experiencing or not:
GCC extended asm, struct element offset encoding
Anyone know why this is occurring and how to correct it?
Does anyone know where there is helpful documentation regarding the Blackfin asm instruction set? I've tried looking on the ADSP site, but to no avail.
I would suspect that you could define your clear_buffer as
#include <cstring>   // for memset

inline void clear_buffer (short * buffer, int len) {
    memset (buffer, 0, sizeof(short)*len);
}
and probably GCC is able to optimize (when invoked with -O2 or -O3) that cleverly (because GCC knows about memset).
To understand assembly code, I suggest running gcc -S -O -fverbose-asm on some small C file, then to look inside the produced .s file.
I'll hazard a guess, because I don't know Blackfin assembler:
That LC0 sounds like "loop counter", and LSETUP looks like a macro/insn which, well, sets up a loop between two labels with a certain loop counter.
The "%0" operand is apparently the address to write to, and we can safely guess it's incremented in the loop; in other words, it's both an input and an output operand and should be described as such.
Thus, I suggest describing it as an input-output operand, using the "+" constraint modifier, as follows:
void clear_buffer ( short * buffer, int len ) {
    __asm__ (
        "/* clear_buffer */\n\t"
        "LSETUP (1f, 1f) LC0=%1;\n"
        "1:\n\t"
        "W [%0++] = %2;"
        : "+a" ( buffer )
        : "a" ( len ), "d" ( 0 )
        : "memory", "LC0", "LT0", "LB0"
    );
}
This is, of course, just a hypothesis, but you could disassemble the code and check if by any chance GCC allocated the same register for "%0" and "%2".
PS. Actually, only "+a" should be enough, early-clobber is irrelevant.
For anyone else who runs into a similar circumstance, the problem here was not with the in-line assembly, nor with the way it was being called: it was with the classes / structs in the program. The class that I believed to be the offender was not the problem - there was another class that held an instance of it, and due to other members of that outer class, the inner one was not aligned on a word boundary. This was causing the "Bus Error" that I was experiencing. I had not come across this before because the classes were not declared with __attribute__((packed)) in other code, but they are in my implementation.
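A minimal sketch (struct names hypothetical) of how a packed outer struct can leave an inner short buffer misaligned, which is what the 16-bit W [ptr++] store in clear_buffer then trips over:

#include <cstddef>

struct Inner {
    short samples[8];              // 16-bit stores need 2-byte alignment
};

struct __attribute__((packed)) Outer {
    char tag;                      // 1 byte, and packed suppresses the padding after it
    Inner inner;                   // so this lands at offset 1 -> samples are misaligned
};

static_assert(offsetof(Outer, inner) == 1,
              "packed removes the padding that would keep Inner word-aligned");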
Giving Type Attributes - Using the GNU Compiler Collection (GCC) a read was what actually sparked the answer for me. Two particular attributes that affect memory alignment (and, thus, in-line assembly such as I am using) are packed and aligned.
As taken from the aforementioned link:
aligned (alignment)
This attribute specifies a minimum alignment (in bytes) for variables of the specified type. For example, the declarations:
struct S { short f[3]; } __attribute__ ((aligned (8)));
typedef int more_aligned_int __attribute__ ((aligned (8)));
force the compiler to ensure (as far as it can) that each variable whose type is struct S or more_aligned_int is allocated and aligned at least on a 8-byte boundary. On a SPARC, having all variables of type struct S aligned to 8-byte boundaries allows the compiler to use the ldd and std (doubleword load and store) instructions when copying one variable of type struct S to another, thus improving run-time efficiency.
Note that the alignment of any given struct or union type is required by the ISO C standard to be at least a perfect multiple of the lowest common multiple of the alignments of all of the members of the struct or union in question. This means that you can effectively adjust the alignment of a struct or union type by attaching an aligned attribute to any one of the members of such a type, but the notation illustrated in the example above is a more obvious, intuitive, and readable way to request the compiler to adjust the alignment of an entire struct or union type.
As in the preceding example, you can explicitly specify the alignment (in bytes) that you wish the compiler to use for a given struct or union type. Alternatively, you can leave out the alignment factor and just ask the compiler to align a type to the maximum useful alignment for the target machine you are compiling for. For example, you could write:
struct S { short f[3]; } __attribute__ ((aligned));
Whenever you leave out the alignment factor in an aligned attribute specification, the compiler automatically sets the alignment for the type to the largest alignment that is ever used for any data type on the target machine you are compiling for. Doing this can often make copy operations more efficient, because the compiler can use whatever instructions copy the biggest chunks of memory when performing copies to or from the variables that have types that you have aligned this way.
In the example above, if the size of each short is 2 bytes, then the size of the entire struct S type is 6 bytes. The smallest power of two that is greater than or equal to that is 8, so the compiler sets the alignment for the entire struct S type to 8 bytes.
Note that although you can ask the compiler to select a time-efficient alignment for a given type and then declare only individual stand-alone objects of that type, the compiler's ability to select a time-efficient alignment is primarily useful only when you plan to create arrays of variables having the relevant (efficiently aligned) type. If you declare or use arrays of variables of an efficiently-aligned type, then it is likely that your program also does pointer arithmetic (or subscripting, which amounts to the same thing) on pointers to the relevant type, and the code that the compiler generates for these pointer arithmetic operations is often more efficient for efficiently-aligned types than for other types.
The aligned attribute can only increase the alignment; but you can decrease it by specifying packed as well. See below.
Note that the effectiveness of aligned attributes may be limited by inherent limitations in your linker. On many systems, the linker is only able to arrange for variables to be aligned up to a certain maximum alignment. (For some linkers, the maximum supported alignment may be very very small.) If your linker is only able to align variables up to a maximum of 8-byte alignment, then specifying aligned(16) in an __attribute__ still only provides you with 8-byte alignment. See your linker documentation for further information.
packed
This attribute, attached to struct or union type definition, specifies that each member (other than zero-width bit-fields) of the structure or union is placed to minimize the memory required. When attached to an enum definition, it indicates that the smallest integral type should be used.
Specifying this attribute for struct and union types is equivalent to specifying the packed attribute on each of the structure or union members. Specifying the -fshort-enums flag on the command line is equivalent to specifying the packed attribute on all enum definitions.
In the following example struct my_packed_struct's members are packed closely together, but the internal layout of its s member is not packed—to do that, struct my_unpacked_struct needs to be packed too.
struct my_unpacked_struct
{
    char c;
    int i;
};

struct __attribute__ ((__packed__)) my_packed_struct
{
    char c;
    int i;
    struct my_unpacked_struct s;
};
You may only specify this attribute on the definition of an enum, struct or union, not on a typedef that does not also define the enumerated type, structure or union.
The problem which I was experiencing was specifically due to the use of packed. I attempted to simply add the aligned attribute to the structs and classes, but the error persisted. Only removing the packed attribute resolved the problem. For now, I am leaving the aligned attribute on them and testing to see if I find any improvements in the efficiency of the code as mentioned above, simply due to their being aligned on word boundaries. The application makes use of arrays of these structures, so perhaps there will be better performance, but only profiling the code will say for certain.