I'm developing an app for Mac OS, which includes a cross-platform lib in C++. There's a macro defined as follows:
#define MY_GET(DataType,DataName,PtrFunName,DefaultVaule) \
DataType Get##DataName() \
{ \
DataType dataTem = (DefaultVaule);\
if (NULL == p) \
{ \
return dataTem; \
} \
p->Get##PtrFunName(CComBSTR(L#DataName),&dataTem); \
return dataTem; \
}
When compiling, the compiler generates the following error:
Use of undeclared identifier 'L'
which is expanded from the macro 'MY_GET'. Searching for CComBSTR(L, I can find other usages of L"String" that compile fine. So why is the L expanded from my macro undeclared, while the other uses of L compile successfully?
Is L"String" legal in Objective-C?
It seems that you need the preprocessor "token concatenation" operator ## here:
CComBSTR(L ## #DataName)
instead of
CComBSTR(L#DataName)
The following code in an Objective-C file compiles and produces the wchar_t string L"abc":
#define LL(x) L ## #x
wchar_t *s = LL(abc); // expands to: L"abc"
I don't know if other compilers behave differently, but the Apple LLVM 4.1 compiler does not allow a space between L and the string:
#define LL(x) L#x
wchar_t *s = LL(abc); // expands to: L "abc"
// error: use of undeclared identifier 'L'
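Applied to the macro from the question, the only change needed is inside the CComBSTR call; a sketch of the fixed version (everything else kept exactly as posted) would be:
#define MY_GET(DataType,DataName,PtrFunName,DefaultVaule) \
DataType Get##DataName() \
{ \
    DataType dataTem = (DefaultVaule); \
    if (NULL == p) \
    { \
        return dataTem; \
    } \
    p->Get##PtrFunName(CComBSTR(L ## #DataName), &dataTem); \
    return dataTem; \
}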
Please note that my question is about the JVM interpreter, not the JIT compiler. The JIT compiler converts Java bytecode to native machine code, which must mean that the interpreter within the JVM does not convert bytecode to machine code. Hence the question: what, in essence, does the interpreter do? If someone could help me answer this with a simple example, say the bytecode equivalent of 1+1 = 2: what does the interpreter do to execute this add operation? (My implicit question is: if the interpreter does not translate to machine code that the CPU then executes for the ADD operation, how is the operation performed? What machine code is ACTUALLY executed to carry out this ADD?)
The expression 1+1 will compile to the following bytecode:
iconst_1
iconst_1
iadd
(Actually, it will just compile to iconst_2 because the Java compiler performs constant-folding, but let's ignore that for the purposes of this answer.)
So to find out exactly what the interpreter does for those instructions, we should look at its source code. The relevant sections for iconst_1 and iadd start at line 983 and line 1221 respectively, so let's take a look:
#define OPC_CONST_n(opcode, const_type, value) \
CASE(opcode): \
SET_STACK_ ## const_type(value, 0); \
UPDATE_PC_AND_TOS_AND_CONTINUE(1, 1);
OPC_CONST_n(_iconst_m1, INT, -1);
OPC_CONST_n(_iconst_0, INT, 0);
OPC_CONST_n(_iconst_1, INT, 1);
// goes on for several other constants
//...
#define OPC_INT_BINARY(opcname, opname, test) \
CASE(_i##opcname): \
if (test && (STACK_INT(-1) == 0)) { \
VM_JAVA_ERROR(vmSymbols::java_lang_ArithmeticException(), \
"/ by zero", note_div0Check_trap); \
} \
SET_STACK_INT(VMint##opname(STACK_INT(-2), \
STACK_INT(-1)), \
-2); \
UPDATE_PC_AND_TOS_AND_CONTINUE(1, -1); \
// and then the same thing for longs instead of ints
OPC_INT_BINARY(add, Add, 0);
// other operators
The whole thing is inside a switch-statement that examines the opcode of the current instruction.
If we expand the macro-magic, replace the surrounding code with an extremely simplified template and make some simplifying assumptions (such as the stack only consisting of ints), we end up with something like this:
enum OpCode {
_iconst_1, _iadd
};
// ...
int* stack = new int[calculate_maximum_stack_size()];
size_t top_of_stack = 0;
size_t program_counter = 0;
while (program_counter < program_size) {
    switch (opcodes[program_counter]) {
        case _iconst_1:
            // SET_STACK_INT(1, 0);
            stack[top_of_stack] = 1;
            // UPDATE_PC_AND_TOS_AND_CONTINUE(1, 1);
            program_counter += 1;
            top_of_stack += 1;
            break;
        case _iadd:
            // SET_STACK_INT(VMintAdd(STACK_INT(-2), STACK_INT(-1)), -2);
            stack[top_of_stack - 2] = stack[top_of_stack - 1] + stack[top_of_stack - 2];
            // UPDATE_PC_AND_TOS_AND_CONTINUE(1, -1);
            program_counter += 1;
            top_of_stack += -1;
            break;
    }
}
So for 1+1 the sequence of operations would be:
stack[0] = 1;
stack[1] = 1;
stack[0] = stack[1] + stack[0];
And top_of_stack would be 1, so we'd end with a stack that contains the value 2 as its only element.
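To make the simplified model concrete, here is the same loop turned into a complete, runnable toy program (plain C, not the real HotSpot code; the fixed-size stack and the hard-coded opcode array are simplifications for this example):
#include <stdio.h>

enum OpCode { _iconst_1, _iadd };

int main(void) {
    enum OpCode opcodes[] = { _iconst_1, _iconst_1, _iadd }; /* bytecode for 1+1 */
    size_t program_size = sizeof(opcodes) / sizeof(opcodes[0]);

    int stack[16];              /* operand stack, more than big enough here */
    size_t top_of_stack = 0;
    size_t program_counter = 0;

    while (program_counter < program_size) {
        switch (opcodes[program_counter]) {
            case _iconst_1:
                stack[top_of_stack] = 1;     /* push the constant 1 */
                program_counter += 1;
                top_of_stack += 1;
                break;
            case _iadd:
                /* replace the two topmost values with their sum */
                stack[top_of_stack - 2] = stack[top_of_stack - 1] + stack[top_of_stack - 2];
                program_counter += 1;
                top_of_stack -= 1;
                break;
        }
    }

    printf("result = %d\n", stack[top_of_stack - 1]); /* prints: result = 2 */
    return 0;
}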
I am working with sysfs and need to create a file under sysfs that is readable and writable by all users, so I set the permissions in '__ATTR' to 0666. But the module does not compile; the moment I change the permissions to 0660, it compiles correctly.
The error message that I get with 0666 permissions is as follows:
/home/rishabh/kernel_modules/Task09/task9.c: At top level:
include/linux/bug.h:33:45: error: negative width in bit-field ‘<anonymous>’
#define BUILD_BUG_ON_ZERO(e) (sizeof(struct { int:-!!(e); }))
^
include/linux/kernel.h:859:3: note: in expansion of macro ‘BUILD_BUG_ON_ZERO’
BUILD_BUG_ON_ZERO((perms) & 2) + \
^
include/linux/sysfs.h:102:12: note: in expansion of macro ‘VERIFY_OCTAL_PERMISSIONS’
.mode = VERIFY_OCTAL_PERMISSIONS(_mode) }, \
^
/home/rishabh/kernel_modules/Task09/task9.c:65:2: note: in expansion of macro ‘__ATTR’
__ATTR(id, 0666, id_show, id_store);
^
include/linux/bug.h:33:45: warning: initialization from incompatible pointer type [enabled by default]
#define BUILD_BUG_ON_ZERO(e) (sizeof(struct { int:-!!(e); }))
^
include/linux/kernel.h:859:3: note: in expansion of macro ‘BUILD_BUG_ON_ZERO’
BUILD_BUG_ON_ZERO((perms) & 2) + \
^
include/linux/sysfs.h:102:12: note: in expansion of macro ‘VERIFY_OCTAL_PERMISSIONS’
.mode = VERIFY_OCTAL_PERMISSIONS(_mode) }, \
^
/home/rishabh/kernel_modules/Task09/task9.c:65:2: note: in expansion of macro ‘__ATTR’
__ATTR(id, 0666, id_show, id_store);
^
include/linux/bug.h:33:45: warning: (near initialization for ‘id_attribute.show’) [enabled by default]
#define BUILD_BUG_ON_ZERO(e) (sizeof(struct { int:-!!(e); }))
^
include/linux/kernel.h:859:3: note: in expansion of macro ‘BUILD_BUG_ON_ZERO’
BUILD_BUG_ON_ZERO((perms) & 2) + \
^
include/linux/sysfs.h:102:12: note: in expansion of macro ‘VERIFY_OCTAL_PERMISSIONS’
.mode = VERIFY_OCTAL_PERMISSIONS(_mode) }, \
^
/home/rishabh/kernel_modules/Task09/task9.c:65:2: note: in expansion of macro ‘__ATTR’
__ATTR(id, 0666, id_show, id_store);
^
I also tried using the __ATTR_RW(_name) macro, but it gives read-write permission only to root, and all other users are left with read-only permission.
If you follow the error messages, the 2nd one is
kernel.h:859:3: note: in expansion of macro ‘BUILD_BUG_ON_ZERO’
BUILD_BUG_ON_ZERO((perms) & 2)
and if you look in kernel.h you will see the comment
#define VERIFY_OCTAL_PERMISSIONS(perms)
...
/* OTHER_WRITABLE? Generally considered a bad idea. */ \
BUILD_BUG_ON_ZERO((perms) & 2) + \
...
So you can see that you are being told it is considered a bad idea to make a sysfs file writable by everyone. If you really want to do this, you must bypass this macro check, for example by adding a redefinition of the macro just before your call of __ATTR():
/* warning! need write-all permission so overriding check */
#undef VERIFY_OCTAL_PERMISSIONS
#define VERIFY_OCTAL_PERMISSIONS(perms) (perms)
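In context, assuming the attribute is declared as a struct kobj_attribute named id_attribute (which is what the error output suggests), the redefinition just needs to appear before that declaration, roughly like this sketch:
#undef VERIFY_OCTAL_PERMISSIONS
#define VERIFY_OCTAL_PERMISSIONS(perms) (perms)   /* bypasses the world-writable check */

static struct kobj_attribute id_attribute = __ATTR(id, 0666, id_show, id_store);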
__ATTR_RW(id) should be the correct way (and eudyptula accepted that ;)). The definition in sysfs.h says that it sets the permissions to 0644, which is what you want - no one except the root user can write to files under /sys/kernel (and that is what the task specifies too).
sysfs.h part:
#define __ATTR_RW(_name) __ATTR(_name, (S_IWUSR | S_IRUGO), \
_name##_show, _name##_store)
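So __ATTR_RW(id) is equivalent to the following, where S_IWUSR | S_IRUGO is octal 0644 (write for root, read for everyone); note that it assumes your callbacks are named id_show and id_store:
/* what __ATTR_RW(id) expands to */
static struct kobj_attribute id_attribute =
        __ATTR(id, (S_IWUSR | S_IRUGO), id_show, id_store);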
I get errors when compiling a GTK+ application saying that I have undeclared functions/definitions (I believe GTK_OBJECT might be a macro defined in a header file). This is my code (main.c):
#include <gtk/gtk.h>
static gint delete_event_cb(GtkWidget* w, GdkEventAny* e, gpointer data);
int main(int argc, char *argv[]) {
//Create widgets
GtkWidget *window;
gtk_init(&argc, &argv);
//Initialize widgets
window = gtk_window_new(GTK_WINDOW_TOPLEVEL);
//Configure widgets
gtk_window_set_title(GTK_WINDOW(window), "Hello World");
//Display widgets
gtk_widget_show(window);
//Set up signals
gtk_signal_connect(GTK_OBJECT(window), "delete_event", GTK_SIGNAL_FUNC(delete_event_cb), NULL);
gtk_main();
return 0;
}
static gint delete_event_cb(GtkWidget* w, GdkEventAny* e, gpointer data) {
gtk_main_quit();
turn FALSE;
}
I am using the following command in bash:
g++ `pkg-config --libs --cflags gtk+-3.0` main.c -o binary
I do have the developer version of gtk+ 3.0 installed. Any help is greatly appreciated.
Edit: This is the error message I get:
main.c: In function ‘int main(int, char**)’:
main.c:21:41: error: ‘GTK_OBJECT’ was not declared in this scope
gtk_signal_connect(GTK_OBJECT(window), "delete_event", GTK_SIGNAL_FUNC(delete_event_cb), NULL);
^
main.c:21:91: error: ‘GTK_SIGNAL_FUNC’ was not declared in this scope
gtk_signal_connect(GTK_OBJECT(window), "delete_event", GTK_SIGNAL_FUNC(delete_event_cb), NULL);
^
main.c:21:98: error: ‘gtk_signal_connect’ was not declared in this scope
gtk_signal_connect(GTK_OBJECT(window), "delete_event", GTK_SIGNAL_FUNC(delete_event_cb), NULL);
^
In file included from /usr/lib/x86_64-linux-gnu/glib-2.0/include/glibconfig.h:9:0,
from /usr/include/glib-2.0/glib/gtypes.h:32,
from /usr/include/glib-2.0/glib/galloca.h:32,
from /usr/include/glib-2.0/glib.h:30,
from /usr/include/gtk-3.0/gdk/gdkconfig.h:13,
from /usr/include/gtk-3.0/gdk/gdk.h:30,
from /usr/include/gtk-3.0/gtk/gtk.h:30,
from main.c:1:
main.c: In function ‘gint delete_event_cb(GtkWidget*, GdkEventAny*, gpointer)’:
/usr/include/glib-2.0/glib/gmacros.h:229:17: error: ‘turn’ was not declared in this scope
#define FALSE (0)
^
main.c:29:10: note: in expansion of macro ‘FALSE’
turn FALSE;
^
I solved it by myself, but it was difficult to find out how. I had given the arguments to g++ in the wrong order and had missed an argument as well. This is the bash command that worked for me:
g++ `pkg-config --libs --cflags gtk+-3.0` main.c -o binary `pkg-config --libs gtk+-3.0`
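The part that seems to matter is that the library flags from pkg-config --libs end up after main.c on the command line; presumably a single, reordered invocation like this would also work (an untested assumption on my part):
g++ main.c -o binary `pkg-config --cflags --libs gtk+-3.0`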
Unless my understanding is incorrect, the following macro
int i; // for loop
const char* ctype; // proprietary type string
void** pool = malloc(sizeof(void*) * (nexpected - 1));
size_t poolc = 0;
#define SET(type, fn) type* v = (pool[poolc++] = malloc(sizeof(type))); \
*v = (type) fn(L, i)
#define CHECK(chr, type, fn) case chr: \
SET(type, fn); \
break
switch (ctype[0]) {
CHECK('c', char, lua_tonumber);
}
should expand to
int i; // for loop
const char* ctype; // proprietary type string
void** pool = malloc(sizeof(void*) * (nexpected - 1));
size_t poolc = 0;
switch (ctype[0]) {
case 'c':
char* v = (pool[poolc++] = malloc(sizeof(char)));
*v = (char) lua_tonumber(L, i);
break;
}
but upon compilation, I get:
src/lua/snip.m:185:16: error: expected expression
CHECK('c', char, lua_tonumber);
^
src/lua/snip.m:181:9: note: expanded from macro 'CHECK'
SET(type, fn); \
^
src/lua/snip.m:178:23: note: expanded from macro 'SET'
#define SET(type, fn) type* v = (pool[poolc++] = malloc(sizeof(type))); \
^
src/lua/snip.m:185:5: error: use of undeclared identifier 'v'
CHECK('c', char, lua_tonumber);
^
src/lua/snip.m:181:5: note: expanded from macro 'CHECK'
SET(type, fn); \
^
src/lua/snip.m:179:6: note: expanded from macro 'SET'
*v = (type) fn(L, i)
^
2 errors generated.
What is going on here? Isn't the preprocessor a literal text replacement engine? Why is it trying to evaluate expressions?
Keep in mind that while this looks like straight C, it is actually clang Objective-C (note the .m) under the C11 standard. Not sure if that makes any difference.
I'm at a loss as to how to continue without expanding the code by hand for each entry.
Your understanding is correct! But you're running into a quirk of the C language: a label, including a case label, must be followed by a statement, and a variable declaration is not a statement.
You can work around this by inserting a null statement (a bare ;) after the case label, or by enclosing the case body in a set of braces. A practical way of doing the latter is to redefine CHECK as:
#define CHECK(chr, type, fn) \
case chr: { SET(type,fn); } break;
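With the braces in place, the switch from the question would expand to roughly the following (a sketch of the preprocessor output); the declaration is now the first item inside a compound statement rather than sitting directly after the label:
switch (ctype[0]) {
    case 'c': {
        char* v = (pool[poolc++] = malloc(sizeof(char)));
        *v = (char) lua_tonumber(L, i);
    } break;
}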
Is it possible to get the total number of items defined by an enum at runtime?
While it's pretty much the same question as this one, that question relates to C#, and as far as I can tell, the method provided there won't work in Objective-C.
An enum is a plain-old-C type, therefore it provides no dynamic runtime information.
One alternative is to use the last element of an enum to indicate the count:
typedef enum {
Red,
Green,
Blue,
numColors
} Color;
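Since enumerators start at 0 and increase by one, numColors then evaluates to 3 and can be used directly as the item count; for example:
/* numColors == 3 here, so this loop visits Red, Green and Blue */
for (int c = 0; c < numColors; c++) {
    /* ... */
}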
Using the preprocessor, you can achieve this without the annoying 'hack' of adding an additional value to your enum:
#define __YourEnums \
YourEnum_one, \
YourEnum_two, \
YourEnum_three, \
YourEnum_four, \
YourEnum_five, \
YourEnum_six,
typedef enum : NSInteger {
__YourEnums
}YourEnum;
#define YourEnum_count ({ \
NSInteger __YourEnumsArray[] = {__YourEnums}; \
sizeof(__YourEnumsArray)/sizeof(__YourEnumsArray[0]); \
})
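Note that YourEnum_count relies on the GCC/clang statement-expression extension ({ ... }), so it can only be used inside a function body. A usage example (the variable name is just illustrative):
NSInteger numberOfValues = YourEnum_count;   // evaluates to 6 for the list above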