Which of the following declarations of function main are standard or standard-conforming extensions? [duplicate] - program-entry-point

This question already has answers here:
What are the valid signatures for C's main() function?
(5 answers)
What should main() return in C and C++?
(19 answers)
Closed 8 years ago.
Which of the following declarations of function main are standard or standard-conforming extensions?
(Please note that some compilers accept ill-formed main declarations; these should be considered incorrect.)
A. void main(char* argv[], int argc)
B. int main() (correct)
C. void main()
D. int main(int argc, char* argv[]) (correct)
E. int main(int argc, char* argv[], char* arge[])
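For reference, a minimal sketch of the portable form (D above; B is the other standard choice, spelled int main(void) in C). The three-parameter form in E exists on many implementations as an extension, commonly carrying the environment, but the standard does not require it:

#include <stdio.h>

/* D: the standard two-parameter form. B is the other portable form:
   int main(void) in C, int main() in C++. void return types are not
   standard-conforming. */
int main(int argc, char* argv[])
{
    for (int i = 0; i < argc; i++)
        printf("argv[%d] = %s\n", i, argv[i]);
    return 0;  /* falling off the end of main also returns 0 */
}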

Related

C++ Builder Function error [bcc32 - Ambiguity error] inside dll file

I am creating a currency converter Win32 program in Embarcadero C++Builder. I wrote a function for transforming a date from the format specified on the user's PC to YYYY-MM-DD format. I need that part because of the API's settings.
When I have this function inside my project it works fine, but I need to have that function inside a DLL.
This is what my code looks like:
#pragma hdrstop
#pragma argsused

#include <SysUtils.hpp>

extern DELPHI_PACKAGE void __fastcall DecodeDate(const System::TDateTime DateTime, System::Word &Year, System::Word &Month, System::Word &Day);

extern "C" UnicodeString __declspec(dllexport) __stdcall datum(TDateTime dat) {
    Word dan, mjesec, godina;
    UnicodeString datum, datum_dan, datum_mjesec, datum_godina;
    DecodeDate(dat, godina, mjesec, dan);
    if (dan <= 9 && mjesec <= 9) {
        datum_dan = "0" + IntToStr(dan);
        datum_mjesec = "0" + IntToStr(mjesec);
    }
    if (dan <= 9 && mjesec > 9) {
        datum_dan = "0" + IntToStr(dan);
        datum_mjesec = IntToStr(mjesec);
    }
    if (dan > 9 && mjesec <= 9) {
        datum_dan = IntToStr(dan);
        datum_mjesec = "0" + IntToStr(mjesec);
    }
    if (dan > 9 && mjesec > 9) {
        datum_dan = IntToStr(dan);
        datum_mjesec = IntToStr(mjesec);
    }
    datum_godina = IntToStr(godina);
    return datum_godina + "-" + datum_mjesec + "-" + datum_dan;
}

extern "C" int _libmain(unsigned long reason)
{
    return 1;
}
I've included SysUtils.hpp and declared the DecodeDate() function; without those lines I get a million errors. But with the code looking like this, I get the following error, which I can't get rid of:
[bcc32 Error] File1.cpp(30): E2015 Ambiguity between '_fastcall System::Sysutils::DecodeDate(const System::TDateTime,unsigned short &,unsigned short &,unsigned short &) at c:\program files (x86)\embarcadero\studio\19.0\include\windows\rtl\System.SysUtils.hpp:3466' and '_fastcall DecodeDate(const System::TDateTime,unsigned short &,unsigned short &,unsigned short &) at File1.cpp:25'
Full parser context
File1.cpp(27): parsing: System::UnicodeString __stdcall datum(System::TDateTime)
Can you help me to get rid of that error?
The error message is self-explanatory. You have two functions with the same name in scope, and the compiler doesn't know which one you want to use on line 30 because the parameters you are passing in satisfy both function declarations.
To fix the error, you can change this line:
DecodeDate(dat, godina, mjesec, dan);
To either this:
System::Sysutils::DecodeDate(dat, godina, mjesec, dan);
Or this:
dat.DecodeDate(&godina, &mjesec, &dan);
However, either way, you should get rid of your extern declaration for DecodeDate(), as it doesn't belong in this code at all. You are not implementing DecodeDate() yourself, you are just using the one provided by the RTL. There is already a declaration for DecodeDate() in SysUtils.hpp, which you are #include'ing in your code. That is all the compiler needs.
Just make sure you are linking to the RTL/VCL libraries to resolve the function during the linker stage after compiling. You should have enabled VCL support when you created the DLL project. If you didn't, recreate your project and enable it.
BTW, there is a MUCH easier way to implement your function logic. Instead of manually pulling apart the TDateTime and reconstituting its components, just use the SysUtils::FormatDateTime() function or the TDateTime::FormatString() method, e.g.:
UnicodeString __stdcall datum(TDateTime dat)
{
    return FormatDateTime(_D("yyyy'-'mm'-'dd"), dat);
}

UnicodeString __stdcall datum(TDateTime dat)
{
    return dat.FormatString(_D("yyyy'-'mm'-'dd"));
}
That being said, this code is still wrong, because it is not safe to pass non-POD types, like UnicodeString, over the DLL boundary like you are doing. You need to re-think your DLL function design to use only interop-safe POD types. In this case, change your function to either:
1. Take a wchar_t* as input from the caller, and just fill in the memory block with the desired characters. Let the caller allocate the actual buffer and pass it in to your DLL for populating:
#pragma hdrstop
#pragma argsused

#include <SysUtils.hpp>

extern "C" __declspec(dllexport) int __stdcall datum(double dat, wchar_t *buffer, int buflen)
{
    UnicodeString s = FormatDateTime(_D("yyyy'-'mm'-'dd"), dat);
    if (!buffer) return s.Length() + 1;
    StrLCopy(buffer, s.c_str(), buflen - 1);
    return StrLen(buffer);
}

extern "C" int _libmain(unsigned long reason)
{
    return 1;
}
// Caller using a fixed-size buffer:
wchar_t buffer[12] = {};
datum(SomeDateValueHere, buffer, 12);
// use buffer as needed...

// Caller querying the required length first (note: len is assigned, not redeclared):
int len = datum(SomeDateValueHere, NULL, 0);
wchar_t *buffer = new wchar_t[len];
len = datum(SomeDateValueHere, buffer, len);
// use buffer as needed...
delete[] buffer;
2. Allocate a wchar_t[] buffer to hold the desired characters, and then return a wchar_t* pointer to that buffer to the caller. Then export a second function that the caller can pass the returned wchar_t* back to, so you can free it correctly:
#pragma hdrstop
#pragma argsused

#include <SysUtils.hpp>

extern "C" __declspec(dllexport) wchar_t* __stdcall datum(double dat)
{
    UnicodeString s = FormatDateTime(_D("yyyy'-'mm'-'dd"), dat);
    wchar_t* buffer = new wchar_t[s.Length() + 1];
    StrLCopy(buffer, s.c_str(), s.Length());
    return buffer;
}

extern "C" __declspec(dllexport) void __stdcall free_datum(wchar_t *dat)
{
    delete[] dat;
}

extern "C" int _libmain(unsigned long reason)
{
    return 1;
}
wchar_t *buffer = datum(SomeDateValueHere);
// use buffer as needed...
free_datum(buffer);

Remove an Array element [closed]

Closed. This question needs debugging details. It is not currently accepting answers.
Edit the question to include desired behavior, a specific problem or error, and the shortest code necessary to reproduce the problem. This will help others answer the question.
Closed 7 years ago.
So I'm trying to code a function to remove an element from an array. For some reason I'm getting no errors, but it still does not print the result I need. I think the problem is in the function or the data type declaration.
#import <Foundation/Foundation.h>

void deleteArray(char stra[], char ElementToRemove);

int main(int argc, const char * argv[]) {
    @autoreleasepool {
        char str[100];
        printf("Please Enter Array Elements\n");
        scanf("%s", &str);
        deleteArray(str, "a");
        printf("%s", &str);
    }
    return 0;
}

void deleteArray(char stra[], char ElementToRemove)
{
    int NumberOfElements = sizeof(stra);
    int ElementPos;
    for (int i = 0; i >= NumberOfElements; i++)
    {
        if (ElementToRemove == stra[i])
        {
            ElementPos = i;
        }
    }
    for (int SecondCounter = ElementPos; SecondCounter >= NumberOfElements; SecondCounter++)
    {
        stra[SecondCounter] = stra[SecondCounter - 1];
    }
}
There are many issues with your code; let's look at them one by one.
When you pass an array to a function, it decays to a pointer to the first element of the array. So, sizeof in the deleteArray() function is not doing what you think it's doing there.
You can use strlen() instead to get the length of a char array. However, note that this does not count the terminating null, which you also need to move in order to mark the end of the modified array.
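To make the decay concrete, here is a small sketch; the printed pointer size of 8 assumes a typical 64-bit target:

#include <stdio.h>
#include <string.h>

void f(char a[100])  /* the "100" is ignored: the parameter is really a char* */
{
    printf("%zu %zu\n", sizeof(a), strlen(a));  /* pointer size, string length */
}

int main(void)
{
    char s[100] = "hello";
    printf("%zu\n", sizeof(s));  /* 100: s has not decayed here */
    f(s);                        /* prints "8 5" on a typical 64-bit system */
    return 0;
}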
Then, in the for loop,
for (int i = 0; i >= NumberOfElements; i++) // always false...
is wrong. I believe what you want is
for (int i = 0; i < NumberOfElements; i++)
After that, the call to the function should be
deleteArray(str, 'a'); // 'a' is a char
instead of
deleteArray(str, "a"); // "a" denotes a string
Next, in the main() function, remove the & from the argument to printf(). It should look like
printf("%s",str);
Also, to guard against buffer overflow, you should make your scanf() call look like
scanf("%99s",str);
If you need dynamically sized arrays, I recommend using a flexible array member (the Wikipedia page on flexible array members has an example) as the last member of some (growable) struct, and keeping the size of that array in its containing struct.
If you want array features such as add, delete, move and so on, use linked lists. Linked lists use pointers, so you can represent a sequence with them, and this way it is possible to delete an element, move it or add a new one.
When you declare an array in C, its size is fixed. In your case, if you don't want to use pointers and lists, you have to copy the array's elements to a new array, excluding the unneeded element. A consolidated sketch follows below.
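Putting the fixes above together, here is a minimal corrected sketch in plain C (the names are illustrative; it also fixes the second loop, which shifted elements in the wrong direction in the original):

#include <stdio.h>
#include <string.h>

/* Removes the first occurrence of c from str by shifting everything
   after it (including the terminating null) one place to the left. */
void deleteChar(char *str, char c)
{
    size_t len = strlen(str);
    for (size_t i = 0; i < len; i++) {
        if (str[i] == c) {
            memmove(&str[i], &str[i + 1], len - i);  /* len - i bytes covers the '\0' */
            return;
        }
    }
}

int main(void)
{
    char str[100];
    printf("Please enter a string\n");
    if (scanf("%99s", str) == 1) {  /* width limit guards against overflow */
        deleteChar(str, 'a');       /* 'a' is a char, not the string "a" */
        printf("%s\n", str);
    }
    return 0;
}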

pthread_mutex_t struct: What does lock stand for?

I am looking at the pthread_mutex_t structure in the pthreadtypes.h file. What does the "__lock" member stand for? Is it like a lock number assigned to the mutex?
typedef union
{
    struct __pthread_mutex_s
    {
        int __lock;
        unsigned int __count;
        int __owner;
#if __WORDSIZE == 64
        unsigned int __nusers;
#endif
        /* KIND must stay at this position in the structure to maintain
           binary compatibility. */
        int __kind;
#if __WORDSIZE == 64
        int __spins;
        __pthread_list_t __list;
# define __PTHREAD_MUTEX_HAVE_PREV 1
#else
        unsigned int __nusers;
        __extension__ union
        {
            int __spins;
            __pthread_slist_t __list;
        };
#endif
    } __data;
    char __size[__SIZEOF_PTHREAD_MUTEX_T];
    long int __align;
} pthread_mutex_t;
The __lock member of struct __pthread_mutex_s __data is used as a futex object on Linux. Many of the following details may differ depending on the architecture you're looking at:
See pthread_mutex_lock.c for the high-level locking function for pthread mutexes, __pthread_mutex_lock(), which generally ends up calling LLL_MUTEX_LOCK(); the definitions of LLL_MUTEX_LOCK() and friends in turn call lll_lock(), etc., in lowlevellock.h.
The lll_lock() macro in turn calls __lll_lock_wait_private(), which calls lll_futex_wait(), which makes the sys_futex system call.
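To give a feel for what a futex word like __lock does, here is a hedged sketch of the classic three-state futex lock described in Ulrich Drepper's paper "Futexes Are Tricky" (0 = unlocked, 1 = locked with no waiters, 2 = locked with waiters). This illustrates the technique only; glibc's actual code additionally handles mutex kinds, spinning, robustness, and so on:

#include <stdatomic.h>
#include <linux/futex.h>
#include <sys/syscall.h>
#include <unistd.h>

static void futex_wait(atomic_int *addr, int expected)
{
    /* sleep in the kernel while *addr still equals 'expected' */
    syscall(SYS_futex, addr, FUTEX_WAIT_PRIVATE, expected, NULL, NULL, 0);
}

static void futex_wake(atomic_int *addr)
{
    /* wake at most one thread sleeping on addr */
    syscall(SYS_futex, addr, FUTEX_WAKE_PRIVATE, 1, NULL, NULL, 0);
}

void mutex_lock(atomic_int *lk)
{
    int c = 0;
    /* fast path: 0 -> 1 with a single atomic op, no system call at all */
    if (atomic_compare_exchange_strong(lk, &c, 1))
        return;
    /* slow path: advertise contention (state 2), then sleep until released */
    if (c != 2)
        c = atomic_exchange(lk, 2);
    while (c != 0) {
        futex_wait(lk, 2);
        c = atomic_exchange(lk, 2);
    }
}

void mutex_unlock(atomic_int *lk)
{
    /* if the old state was 2, someone may be sleeping: wake one waiter */
    if (atomic_exchange(lk, 0) == 2)
        futex_wake(lk);
}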

How to interpret objective-c type specifier (e.g. returned by method_copyReturnType())?

Suppose I have a type specifier as returned by method_copyReturnType(). In the GNU runtime delivered with GCC there are various functions for working with such a type specifier, like objc_sizeof_type(), objc_alignof_type() and others.
When using the Apple runtime there are no such methods.
How can I interpret a type specifier string (e.g. get the size of a type) using the Apple runtime without implementing an if/else or case switch for myself?
[update]
I am not able to use the Apple Foundation.
I believe that you're looking for NSGetSizeAndAlignment:
Obtains the actual size and the aligned size of an encoded type.
const char * NSGetSizeAndAlignment (
    const char *typePtr,
    NSUInteger *sizep,
    NSUInteger *alignp
);
Discussion
Obtains the actual size and the aligned size of the first data type represented by typePtr and returns a pointer to the position of the next data type in typePtr.
This is a Foundation function, not part of the base runtime, which is probably why you didn't find it.
UPDATE: Although you didn't initially mention that you're using Cocotron, it is also available there. You can find it in Cocotron's Foundation, in NSObjCRuntime.m.
Obviously, this is much better than rolling your own, since you can trust it to always correctly handle strings generated by its own runtime in the unlikely event that the encoding characters should change.
For some reason, however, it's unable to handle the digit elements of a method signature string (which presumably have something to do with offsets in memory). This improved version, by Mike Ash, will do so:
static const char *SizeAndAlignment(const char *str, NSUInteger *sizep, NSUInteger *alignp, int *len)
{
    const char *out = NSGetSizeAndAlignment(str, sizep, alignp);
    if (len)
        *len = out - str;
    while (isdigit(*out))
        out++;
    return out;
}
AFAIK, you'll need to bake that info into your binary. Just create a function which returns the sizeof and alignof in a struct, supports the types you must support, and then call that function (or class method) for the info.
The program below shows that many of the primitives are just one character, so the bulk of the function's implementation could be a switch.
#import <Foundation/Foundation.h>
#import <objc/runtime.h>

static void test(SEL sel) {
    Method method = class_getInstanceMethod([NSString class], sel);
    const char* const type = method_copyReturnType(method);
    printf("%s : %s\n", NSStringFromSelector(sel).UTF8String, type);
    free((void*)type);
}

int main(int argc, char *argv[]) {
    @autoreleasepool {
        test(@selector(init));
        test(@selector(superclass));
        test(@selector(isEqual:));
        test(@selector(length));
        return 0;
    }
}
and you could then use this as a starting point:
typedef struct t_pair_alignof_sizeof {
    size_t align;
    size_t size;
} t_pair_alignof_sizeof;

static t_pair_alignof_sizeof MakeAlignOfSizeOf(size_t align, size_t size) {
    t_pair_alignof_sizeof ret = {align, size};
    return ret;
}

static t_pair_alignof_sizeof test2(SEL sel) {
    Method method = class_getInstanceMethod([NSString class], sel);
    const char* const type = method_copyReturnType(method);
    const size_t length = strlen(type);
    if (1U == length) {
        switch (type[0]) {
            case '@' :  /* object */
                return MakeAlignOfSizeOf(__alignof__(id), sizeof(id));
            case '#' :  /* class */
                return MakeAlignOfSizeOf(__alignof__(Class), sizeof(Class));
            case 'c' :
                return MakeAlignOfSizeOf(__alignof__(signed char), sizeof(signed char));
            ...

bool versus BOOL [duplicate]

This question already has answers here:
Closed 12 years ago.
Possible Duplicates:
Objective-C : BOOL vs bool
Is there any difference between BOOL and Boolean in Objective-C?
I noticed from the autocomplete in Xcode that there is a bool and a BOOL in Objective-C. Are these different? Why are there two different kinds of bool?
Are they interchangeable?
Yes, they are different.
C++ has bool, and it is a true Boolean type. It is guaranteed to be 0 or 1 in integer contexts.
C99 has _Bool as a true Boolean type, and if <stdbool.h> is included, then bool becomes a preprocessor macro for _Bool (this header also defines true and false as preprocessor macros for 1 and 0 respectively).
Cocoa has BOOL as a type, but it is just a typedef for signed char. It can represent more values than just 0 or 1.
Carbon has Boolean as a type, but it is just a typedef for unsigned char. Like Cocoa's BOOL, it can represent more values than just 0 or 1.
Cocoa's and Carbon's “Boolean” types should be thought of as zero meaning false and any non-zero value meaning true.
You should use bool unless you need to interoperate with code that uses BOOL, because bool is a real Boolean type and BOOL isn't. What do I mean by "real Boolean type"? I mean that code like this does what you expect it to:
#define FLAG_A 0x00000001
#define FLAG_B 0x00000002
...
#define FLAG_F 0x00000020

struct S
{
    // ...
    S* next;  // assumed: one of the members elided above, since the loop below uses s->next
    unsigned int flags;
};

void doSomething(S* sList, bool withF)
{
    for (S* s = sList; s; s = s->next)
    {
        if ((bool)(s->flags & FLAG_F) != withF)
            continue;
        // actually do something
    }
}
because (bool)(s->flags & FLAG_F) can be relied upon to evaluate to either 0 or 1. If that were a BOOL instead of a bool in the cast, it wouldn't work, because withF evaluates to 0 or 1, and (BOOL)(s->flags & FLAG_F) evaluates to 0 or the numeric value of FLAG_F, which in this case is not 1.
This example is contrived, yeah, but real bugs of this type can and do happen all too often in old code that doesn't use the C99/C++ genuine boolean types.
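The flags example above shows the wrong-value failure: (BOOL)(s->flags & FLAG_F) yields 32, not 1. A related failure mode is truncation: since BOOL is a signed char, a set bit above the low byte is silently cut off entirely. A minimal sketch in plain C, with the typedef mirroring Cocoa's definition purely for illustration:

#include <stdio.h>
#include <stdbool.h>

typedef signed char BOOL;  /* mirrors Cocoa's definition, for illustration only */

int main(void)
{
    unsigned int flags = 0x0100;           /* a flag above the low byte */
    BOOL b = (BOOL)(flags & 0x0100);       /* truncates 0x0100 to 0: reads as NO */
    bool c = (bool)(flags & 0x0100);       /* normalizes to 1: stays true */
    printf("BOOL: %d, bool: %d\n", b, c);  /* prints "BOOL: 0, bool: 1" */
    return 0;
}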
BOOL is defined in Objective-C as typedef signed char BOOL, while bool is the datatype defined in C99.
BOOL is actually a signed char (thanks Yuji), while bool is a true boolean from the ISO C99 standard.
See here: http://iosdevelopertips.com/objective-c/of-bool-and-yes.html