I'm using the CocoaLumberjack logging framework 2.0.0 for logging with different levels. In my Prefix.pch (I know this file is deprecated, but it should work nevertheless) I include CocoaLumberjack and set the global log level as suggested here:
#ifdef DEBUG
static const DDLogLevel ddLogLevel = DDLogLevelDebug;
#else
static const DDLogLevel ddLogLevel = DDLogLevelWarning;
#endif
I have DDLogVerbose statements in a few methods that should not be logged by default. The problem: they are getting logged anyway.
Inspecting ddLogLevel in an init method shows 00001111, which equals DDLogLevelDebug. Nevertheless, a verbose logging statement directly after this is executed. (1)
Preprocessing the line DDLogVerbose(@"I AM VERBOSE") shows this code:
do {
    if (DDLogLevelVerbose & DDLogFlagVerbose)
        [DDLog log : __objc_yes
             level : DDLogLevelVerbose
              flag : DDLogFlagVerbose
           context : 0
              file : "....m"
          function : __PRETTY_FUNCTION__
              line : 59
               tag : ((void *)0)
            format : (@"I AM VERBOSE")];
} while (0);
which means that the log level after preprocessing is Verbose. (2) I found out that this level is the default in CocoaLumberjack in case no log level is defined:
#ifndef LOG_LEVEL_DEF
#ifdef ddLogLevel
#define LOG_LEVEL_DEF ddLogLevel
#else
#define LOG_LEVEL_DEF DDLogLevelVerbose
#endif
#endif
But: debugging this shows that the first path is executed, i.e. LOG_LEVEL_DEF (which is checked against the level of each statement to determine whether it should be logged) is assigned the correct level (Debug).
Question: I couldn't find out why (1) shows the log level Debug while, after preprocessing, it switches to Verbose (2). Could this be a matter of the order in which headers are included? Or am I missing some important point?
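One thing I considered while debugging: #ifdef only tests preprocessor macros, and my ddLogLevel is a static const variable, which the preprocessor cannot see. A minimal plain-C sketch of that effect (the names mirror the snippet above; 63 stands in for DDLogLevelVerbose, 15 for DDLogLevelDebug):

#include <stdio.h>

static const int ddLogLevel = 15;        /* a C variable, invisible to #ifdef */

#ifndef LOG_LEVEL_DEF
    #ifdef ddLogLevel                    /* false: there is no macro with this name */
        #define LOG_LEVEL_DEF ddLogLevel
    #else
        #define LOG_LEVEL_DEF 63         /* stand-in for DDLogLevelVerbose */
    #endif
#endif

int main(void)
{
    /* prints "variable: 15, LOG_LEVEL_DEF: 63" */
    printf("variable: %d, LOG_LEVEL_DEF: %d\n", ddLogLevel, LOG_LEVEL_DEF);
    return 0;
}

This would match my observations: the variable inspected at runtime (1) holds Debug, while the macro used by the preprocessed statement (2) silently fell back to Verbose.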
I didn't solve this issue, so I wrote my own header file for logging:
// Create Logging Messages by calling the functions:
// * DDLogFatal(...)
// * DDLogError(...)
// * DDLogWarn(...)
// * DDLogInfo(...)
// * DDLogDebug(...)
// * DDLogTrace(...)
// * DDLogEntry()
// Only the calls whose flag is contained in the log level defined below will produce output.
//
// NOTE: For this file to work, the option "Treat warnings as errors" must be turned off!
/*********************************
 *       CURRENT LOG LEVEL       *
 *********************************/
#define LOG_LEVEL LOG_LEVEL_DEBUG // comment this line out to fall back to the default below
/* Default Log Level */
#ifndef LOG_LEVEL
#ifdef DEBUG
#define LOG_LEVEL LOG_LEVEL_DEBUG
#else
#define LOG_LEVEL LOG_LEVEL_WARN
#endif
#endif
/* List of Log Levels */
#define LOG_LEVEL_OFF 0 // 0000 0000
#define LOG_LEVEL_FATAL 1 // 0000 0001
#define LOG_LEVEL_ERROR 3 // 0000 0011
#define LOG_LEVEL_WARN 7 // 0000 0111
#define LOG_LEVEL_INFO 15 // 0000 1111
#define LOG_LEVEL_DEBUG 31 // 0001 1111
#define LOG_LEVEL_TRACE 63 // 0011 1111
#define LOG_FLAG_FATAL 1 // 0000 0001
#define LOG_FLAG_ERROR 2 // 0000 0010
#define LOG_FLAG_WARN 4 // 0000 0100
#define LOG_FLAG_INFO 8 // 0000 1000
#define LOG_FLAG_DEBUG 16 // 0001 0000
#define LOG_FLAG_TRACE 32 // 0010 0000
#if (LOG_LEVEL & LOG_FLAG_FATAL) > 0
#define DDLogFatal(...) ALog(@"FATAL", __VA_ARGS__)
#else
#define DDLogFatal(...)
#endif

#if (LOG_LEVEL & LOG_FLAG_ERROR) > 0
#define DDLogError(...) ALog(@"ERROR", __VA_ARGS__)
#else
#define DDLogError(...)
#endif

#if (LOG_LEVEL & LOG_FLAG_WARN) > 0
#define DDLogWarn(...) ALog(@"WARNING", __VA_ARGS__)
#else
#define DDLogWarn(...)
#endif

#if (LOG_LEVEL & LOG_FLAG_INFO) > 0
#define DDLogInfo(...) ALog(@"INFO", __VA_ARGS__)
#else
#define DDLogInfo(...)
#endif

#if (LOG_LEVEL & LOG_FLAG_DEBUG) > 0
#define DDLogDebug(...) ALog(@"DEBUG", __VA_ARGS__)
#else
#define DDLogDebug(...)
#endif

#if (LOG_LEVEL & LOG_FLAG_TRACE) > 0
#define DDLogTrace(...) ALog(@"TRACE", __VA_ARGS__)
#define DDLogEntry() ALog(@"TRACE", @"->")
#else
#define DDLogTrace(...)
#define DDLogEntry()
#endif

#define ALog(logLevel, fmt, ...) NSLog((@"%s [Line %d] %@: " fmt), __PRETTY_FUNCTION__, __LINE__, logLevel, ##__VA_ARGS__)
Include this file wherever logging is needed. Hope this helps someone!
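A quick usage sketch (the file name MyLogging.h is made up; with LOG_LEVEL set to LOG_LEVEL_DEBUG as above):

#import "MyLogging.h"

- (void)connect
{
    DDLogError(@"failed: %@", @"timeout"); // ERROR flag set -> printed
    DDLogDebug(@"retry no. %d", 3);        // DEBUG flag set -> printed
    DDLogTrace(@"fine detail");            // TRACE flag not set -> expands to nothing
}

Calls below the current level vanish entirely at compile time, so they cost nothing at runtime.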
So I'm not sure if this is the same issue you were running into, but I had a similar symptom, i.e. my log levels being ignored. What was happening for me is that the CocoaLumberjack folks made it easier in v2 for new users to get started, by not requiring a log level to be specified at all for the framework to work.
As per the Lumberjack docs, to actually use ddLogLevel I needed to #define LOG_LEVEL_DEF before importing the CocoaLumberjack.h file:

"Using ddLogLevel to start using the library is now optional. If you define it, add #define LOG_LEVEL_DEF ddLogLevel before the CocoaLumberjack #import and change its type to DDLogLevel."
In my case, I'm doing that in the .pch file, so it looks like:
// ProjectX.pch
#define LOG_LEVEL_DEF ddLogLevel // this is the crucial bit!
#import "CocoaLumberjack/CocoaLumberjack.h"
// Then the normal definitions...
#ifdef DEBUG
#pragma clang diagnostic push
#pragma clang diagnostic ignored "-Wunused-variable"
static DDLogLevel ddLogLevel = DDLogLevelWarning;
#pragma clang diagnostic pop
#else
static const DDLogLevel ddLogLevel = DDLogLevelWarning;
#endif
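Roughly speaking, with LOG_LEVEL_DEF mapped to ddLogLevel, each logging macro now expands to a check against your variable instead of against the verbose default (a sketch of the idea, not the framework's exact expansion):

if (ddLogLevel & DDLogFlagVerbose) {
    // ... the [DDLog log:...] call shown in the question's preprocessed output
}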
CocoaLumberjack has 4 log levels:
Error
Warning
Info
Verbose
The ddLogLevel value determines which log statements are executed and which are ignored.
If you do not want DDLogVerbose statements to execute, change to a lower level such as Info.
Change your DEBUG macro as follows:
#ifdef DEBUG
static const int ddLogLevel = LOG_LEVEL_INFO;
#else
static const int ddLogLevel = LOG_LEVEL_ERROR;
#endif
Hope this solves your issue.
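Side note: LOG_LEVEL_INFO and int ddLogLevel are the CocoaLumberjack 1.x spellings. On 2.0.0, which the question uses, the same idea would read:

#ifdef DEBUG
static const DDLogLevel ddLogLevel = DDLogLevelInfo;
#else
static const DDLogLevel ddLogLevel = DDLogLevelError;
#endif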
A client sends me a message signed with a private key, type ECDSA secp256r1. I'm in possession of a leaf certificate, in DER format, provided by the client. In addition, I also have the raw message and a SHA-256 digest of the message.
I have created a struct in which to store all the required info for the verification, with the idea of providing a public API in my application:
struct SignatureVerifyData {
unsigned char *msg;
unsigned char *hash; // digest sha256 of msg
unsigned char *cert; // leaf cert in DER
unsigned char *signature;
size_t msg_len;
size_t hash_len;
size_t cert_len;
size_t signature_len;
};
I'm reading the ecdsa.c example from mbedTLS, but there the cert is generated within the example itself. I can use mbedtls_x509_crt_parse_der() to load my leaf cert, but then, should I move it into a mbedtls_ecdsa_context object to use with mbedtls_ecdsa_read_signature()?
Or should I load the leaf cert some other way?
I'm also confused about how to use the group and point objects, or whether I need them at all. This is my mbedTLS configuration:
#define MBEDTLS_HAVE_ASM
#define MBEDTLS_HAVE_TIME
#define MBEDTLS_ALLOW_PRIVATE_ACCESS
#define MBEDTLS_PLATFORM_C
#define MBEDTLS_ECP_DP_SECP256R1_ENABLED
#define MBEDTLS_KEY_EXCHANGE_ECDHE_ECDSA_ENABLED
#define MBEDTLS_SSL_PROTO_TLS1_2
#define MBEDTLS_AES_C
#define MBEDTLS_ASN1_PARSE_C
#define MBEDTLS_ASN1_WRITE_C
#define MBEDTLS_BIGNUM_C
#define MBEDTLS_CIPHER_C
#define MBEDTLS_CTR_DRBG_C
#define MBEDTLS_ECDH_C
#define MBEDTLS_ECDSA_C
#define MBEDTLS_ECP_C
#define MBEDTLS_ENTROPY_C
#define MBEDTLS_GCM_C
#define MBEDTLS_MD_C
#define MBEDTLS_NET_C
#define MBEDTLS_OID_C
#define MBEDTLS_PK_C
#define MBEDTLS_PK_PARSE_C
#define MBEDTLS_SHA224_C
#define MBEDTLS_SHA256_C
#define MBEDTLS_SHA384_C
#define MBEDTLS_SHA512_C
#define MBEDTLS_SSL_CLI_C
#define MBEDTLS_SSL_SRV_C
#define MBEDTLS_SSL_TLS_C
#define MBEDTLS_X509_CRT_PARSE_C
#define MBEDTLS_X509_USE_C
#define MBEDTLS_BASE64_C
#define MBEDTLS_PEM_PARSE_C
#define MBEDTLS_AES_ROM_TABLES
#define MBEDTLS_MPI_MAX_SIZE 48 // 384-bit EC curve = 48 bytes
#define MBEDTLS_ECP_WINDOW_SIZE 2
#define MBEDTLS_ECP_FIXED_POINT_OPTIM 0
#define MBEDTLS_ECP_NIST_OPTIM
#define MBEDTLS_ENTROPY_MAX_SOURCES 2
#define MBEDTLS_SSL_CIPHERSUITES \
MBEDTLS_TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, \
MBEDTLS_TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
#define MBEDTLS_SSL_IN_CONTENT_LEN 1024
#define MBEDTLS_SSL_OUT_CONTENT_LEN 1024
#include "mbedtls/check_config.h"
mbedtls_x509_crt_parse_der constructs an object of type mbedtls_x509_crt. This structure has a field called pk which contains the public key. Call mbedtls_pk_verify to verify the signature.
Here's the general idea of the code to parse the certificate, calculate the hash and verify the signature. Untested code, typed directly into my browser. Error checking omitted, be sure to check that all the function calls succeed.
#include <mbedtls/md.h>
#include <mbedtls/pk.h>
#include <mbedtls/x509_crt.h>
mbedtls_x509_crt crt;
mbedtls_x509_crt_init(&crt);
mbedtls_x509_crt_parse_der(&crt, cert, cert_len);

/* Hash the message with SHA-256. */
const mbedtls_md_info_t *md_info = mbedtls_md_info_from_type(MBEDTLS_MD_SHA256);
hash_len = mbedtls_md_get_size(md_info);
hash = malloc(hash_len);
mbedtls_md(md_info, msg, msg_len, hash);

/* Verify the signature against the certificate's public key. */
mbedtls_pk_verify(&crt.pk, MBEDTLS_MD_SHA256, hash, hash_len, signature, signature_len);

mbedtls_x509_crt_free(&crt);
free(hash);
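Putting it together with the SignatureVerifyData struct from the question, a minimal sketch might look like the following (same caveat: untested; it recomputes the SHA-256 locally instead of trusting the hash field, and in real code you should inspect each return value individually):

#include <mbedtls/md.h>
#include <mbedtls/pk.h>
#include <mbedtls/x509_crt.h>

/* Returns 0 if data->signature verifies over data->msg with the
   public key from data->cert; otherwise a negative mbedTLS error code. */
static int verify_signature(const struct SignatureVerifyData *data)
{
    mbedtls_x509_crt crt;
    unsigned char hash[32];   /* SHA-256 output size */
    int ret;

    mbedtls_x509_crt_init(&crt);

    ret = mbedtls_x509_crt_parse_der(&crt, data->cert, data->cert_len);
    if (ret == 0)
        ret = mbedtls_md(mbedtls_md_info_from_type(MBEDTLS_MD_SHA256),
                         data->msg, data->msg_len, hash);
    if (ret == 0)
        ret = mbedtls_pk_verify(&crt.pk, MBEDTLS_MD_SHA256,
                                hash, sizeof(hash),
                                data->signature, data->signature_len);

    mbedtls_x509_crt_free(&crt);
    return ret;
}

No mbedtls_ecdsa_context, group, or point objects are needed for this path; the pk layer dispatches to ECDSA internally based on the key type in the certificate.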
After upgrading to Xcode 13.4, I get redefinitions of struct z_stream_s, gz_header_s, and gzFile_s in zlib.h.
I added an include guard at the beginning and end of the file to avoid the redefinition:
#ifndef ZLIB_H
#define ZLIB_H
------ coding
#endif
Here is the code where the error occurs:
typedef struct z_stream_s {
z_const Bytef *next_in; /* next input byte */
uInt avail_in; /* number of bytes available at next_in */
uLong total_in; /* total number of input bytes read so far */
Bytef *next_out; /* next output byte will go here */
uInt avail_out; /* remaining free space at next_out */
uLong total_out; /* total number of bytes output so far */
z_const char *msg; /* last error message, NULL if no error */
struct internal_state FAR *state; /* not visible by applications */
alloc_func zalloc; /* used to allocate the internal state */
free_func zfree; /* used to free the internal state */
voidpf opaque; /* private data object passed to zalloc and zfree */
int data_type; /* best guess about the data type: binary or text
for deflate, or the decoding state for inflate */
uLong adler; /* Adler-32 or CRC-32 value of the uncompressed data */
uLong reserved; /* reserved for future use */
} z_stream;
typedef struct z_stream_s FAR *z_streamp;
typedef struct gz_header_s {
int text; /* true if compressed data believed to be text */
uLong time; /* modification time */
int xflags; /* extra flags (not used when writing a gzip file) */
int os; /* operating system */
Bytef *extra; /* pointer to extra field or Z_NULL if none */
uInt extra_len; /* extra field length (valid if extra != Z_NULL) */
uInt extra_max; /* space at extra (only when reading header) */
Bytef *name; /* pointer to zero-terminated file name or Z_NULL */
uInt name_max; /* space at name (only when reading header) */
Bytef *comment; /* pointer to zero-terminated comment or Z_NULL */
uInt comm_max; /* space at comment (only when reading header) */
int hcrc; /* true if there was or will be a header crc */
int done; /* true when done reading gzip header (not used
when writing a gzip file) */
} gz_header;
struct gzFile_s {
unsigned have;
unsigned char *next;
z_off64_t pos;
};
I also replaced zlib.h with the latest version, but the same redefinition error occurs again.
These files are used to generate the QR code, and the redefinition issue occurs in all of them.
When I click the error, Xcode jumps to the line where struct z_stream_s is defined:
typedef struct z_stream_s {
The error says that it has already been defined in the file unzip.c. When I look into that, zlib.h is included in both unzip.h and unzip.c.
In unzip.h:
#ifndef _unz_H
#define _unz_H
#ifdef __cplusplus
extern "C" {
#endif
#ifndef _ZLIB_H
#include "zlib.h"
#endif
#ifndef _ZLIBIOAPI_H
#include "ioapi.h"
#endif
In unzip.c:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "zlib.h"
#include "unzip.h"
#ifdef STDC
# include <stddef.h>
# include <string.h>
# include <stdlib.h>
#endif
#ifdef NO_ERRNO_H
extern int errno;
#else
# include <errno.h>
#endif
The redefinition errors continue file by file: zip.c, zip.h, pngpriv.h, pngstruct.h, and so on.
Note: the same project works fine in Xcode 12.4, but the latest Xcode 13 shows the redefinition error.
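To illustrate the kind of clash I mean, here is a minimal, hypothetical reconstruction of how one translation unit can end up with two definitions of z_stream_s when two copies of zlib.h use different guard macros (ZLIB_H vs _ZLIB_H):

/* minimal_repro.c -- hypothetical, for illustration only */

/* ...contents of one zlib.h copy, guarded by ZLIB_H... */
#ifndef ZLIB_H
#define ZLIB_H
typedef struct z_stream_s { int dummy; } z_stream;
#endif

/* ...contents of a second zlib.h copy, guarded by _ZLIB_H... */
#ifndef _ZLIB_H
#define _ZLIB_H
typedef struct z_stream_s { int dummy; } z_stream;  /* error: redefinition */
#endif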
Please help me to resolve this issue.
Thanks in advance.
Is it possible to have a header file compile differently in two different source files by using defines in the source files?
For example, if I have a single header included in two source files as in:
header.h:
#if FOO
#define BAR(x) f(x)
#else
#define BAR(x) g(x)
#endif
source1.cpp:
#define FOO 1
#include "header.h"
void a(int x) {
BAR(x); // f(x)?
}
source2.cpp
#include "header.h"
void b(int x) {
BAR(x); // g(x)?
}
Shouldn't this compile so that function a performs f and function b performs g?
I'm trying to do this in Xcode with Objective-C++. Both a and b perform g, as if source1.cpp didn't define FOO.
Your macro is defined correctly (I originally wrote "incorrectly" here; see the EDIT below). However, I prefer to use #ifdef rather than #if:
#ifdef FOO
#define BAR(x) f(x)
#else
#define BAR(x) g(x)
#endif
In addition, you do not have to give FOO a value; all you need to do is #define it in source1.cpp:
#define FOO
#include "header.h"
In source2.cpp I would also ensure that FOO is not defined (as a carry-over from any other includes) by doing:
#ifdef FOO
#undef FOO
#endif
#include "header.h"
EDIT
I was a bit quick to say that the macro was wrong. As per the SO question "What is the value of an undefined constant used in #if? (C++)", the #if should work as given by the OP, since the value of FOO decays to 0 when it is not defined.
However I think that using #ifdef provides more context as to what is actually desired.
Thus I suspect that the definition of FOO is sneaking in unexpectedly somewhere.
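One way to find out where it sneaks in (a debugging sketch, not required for the fix): make the header announce what it sees, then check the build log separately for each compilation unit. #pragma message is supported by both clang and GCC.

/* temporary diagnostic -- paste at the top of header.h */
#ifdef FOO
#pragma message "header.h sees FOO as defined"
#else
#pragma message "header.h sees FOO as NOT defined"
#endif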
For your case, the best way to differentiate based on macros is to use the toggle method:
#ifdef FOO
#define BAR(x) f(x)
#undef FOO
#else
#define BAR(x) g(x)
#endif
source1.cpp:
#define FOO
#include "header.h"
void a(int x) {
BAR(x); // f(x)?
}
source2.cpp
#undef FOO
#include "header.h"
void b(int x) {
BAR(x); // g(x)?
}
For more control, try this:
#ifdef FOO
#if FOO == 1
#define BAR(x) f(x)
#undef FOO
#elif FOO == 2
#define BAR(x) g(x)
#undef FOO
#endif
#endif
And write like this:
source1.cpp:
#undef FOO
#define FOO 1
#include "header.h"
void a(int x) {
BAR(x); // f(x)?
}
source2.cpp
#undef FOO
#define FOO 2
#include "header.h"
void b(int x) {
BAR(x); // g(x)?
}
There are several ways you can achieve what you asked.
Hope this helps.
The problem was that the header was, after all, indirectly included in the precompiled header. Xcode seems to include the precompiled header automatically in every compilation unit, therefore only one version of the macro was available. The precompiled version was the one without the definition, i.e. the #else branch, because no source file had been read at the time of precompilation.
I will accept Peter M's answer as he came to the right conclusion about this.
The toggle method by askmish didn't help in my case, but that's the way I'll do this in the future, since it would have led to the solution immediately.
Pretty straightforward question: no matter what I do, the app crashes when attempting to call open(). Below is the relevant part of the code. filename is not a garbage value and contains an absolute path to the file. This fails on both the device and the simulator.
printf of filename prints:
/Users/programmingstation7/Library/Application Support/iPhone
Simulator/4.3/Applications/E2BD16DB-FFBA-45D2-B425-96C981380B85/Documents/issue2.zip
relevant backtrace:
#0 0x002132dc in open ()
#1 0x000ddcec in -[ExternalZipInstaller
unzipTheFile] (self=0x68a8d60, _cmd=0x1483f3) at
ExternalZipInstaller.mm:261
code:
#include <stdio.h> /* Standard input/output definitions */
#include <string.h> /* String function definitions */
#include <unistd.h> /* UNIX standard function definitions */
#include <fcntl.h> /* File control definitions */
#include <errno.h> /* Error number definitions */
#include <termios.h> /* POSIX terminal control definitions */
#ifndef O_BINARY
#define O_BINARY 0
#endif
- (void) unzipTheFile
{
BOOL success = YES;
const char* filename = [self.zipName UTF8String];
open(filename, O_RDONLY | O_BINARY);
The documentation for the UTF8String method of NSString has the following note:
The returned C string is automatically freed just as a returned object
would be released; you should copy the C string if you need to store
it outside of the autorelease context in which the C string is
created.
I think you need to copy the resulting string into your own buffer instead of just pointing at it. The autorelease pool could be freeing your string out from under you. Try this instead:
char filename[PATH_MAX];  /* writable buffer; PATH_MAX comes from <limits.h> */
strlcpy(filename, [self.zipName UTF8String], sizeof(filename));
open(filename, O_RDONLY | O_BINARY);
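As a side note, for paths handed to POSIX calls you can also use NSString's fileSystemRepresentation, which returns a C string in the proper encoding for the file system (a sketch; the same lifetime caveat applies, so use it immediately or copy it):

int fd = open([self.zipName fileSystemRepresentation], O_RDONLY | O_BINARY);
if (fd < 0) {
    NSLog(@"open failed for %@: %s", self.zipName, strerror(errno));
}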
I am installing mod_mono with Apache 2 on FreeBSD and I get the following error when Apache tries to load the mod_mono.so module.
Cannot load
/usr/local/apache/modules/mod_mono.so
into server:
/usr/local/apache/modules/mod_mono.so:
Undefined symbol "strndup"
The prefix I set for Apache is /usr/local/apache and I have PHP and other modules working already. I found that strndup is referenced in roken.h in /usr/include, and I tried the following additions to the configure command, but it did not work:
--libdir=/usr/lib --includedir=/usr/include
I also tried...
--with-mono-prefix=/usr
I do not know what to try next. It does not appear that mod_mono has many build options. Since Mono and XSP both built successfully, I just need mod_mono to work.
I appreciate any tips to get this working.
Add strndup by implementing it yourself:
#ifdef HAVE_CONFIG_H
# include <config.h>
#endif
#if !_LIBC
# include "strndup.h"
#endif
#include <stdlib.h>
#include <string.h>
#if !_LIBC
# include "strnlen.h"
# ifndef __strnlen
# define __strnlen strnlen
# endif
#endif
#undef __strndup
#if _LIBC
# undef strndup
#endif
#ifndef weak_alias
# define __strndup strndup
#endif
char *
__strndup (const char *s, size_t n)
{
size_t len = __strnlen (s, n);
char *new = malloc (len + 1);
if (new == NULL)
return NULL;
new[len] = '\0';
return memcpy (new, s, len);
}
#ifdef libc_hidden_def
libc_hidden_def (__strndup)
#endif
#ifdef weak_alias
weak_alias (__strndup, strndup)
#endif
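With that file compiled and linked into the module, strndup resolves like the libc version. A quick sanity check (a sketch):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    char *prefix = strndup("mod_mono.so", 8);  /* copies at most 8 chars + '\0' */
    if (prefix != NULL) {
        printf("%s\n", prefix);                /* prints "mod_mono" */
        free(prefix);
    }
    return 0;
}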