During a debug session with Xcode 5, how would I display the actual value of an NSDecimal var? I found this question, but that doesn't work for me. Entering a summary description like {(int)[$VAR intValue]} just results in the message "Summary Unavailable". I should add that my NSDecimals are in an array (NSDecimal dataPoint[2];).
Using the debug console to either print the var description via the context menu or by using p dataPoint[0] just gives me the raw NSDecimal view:
Printing description of dataPoint[0]:
(NSDecimal) [0] = {
_exponent = -1
_length = 1
_isNegative = 0
_isCompact = 1
_reserved = 0
_mantissa = {
[0] = 85
[1] = 0
[2] = 42703
[3] = 65236
[4] = 65534
[5] = 65535
[6] = 23752
[7] = 29855
}
}
UPDATE
In Xcode 10.0, there was an lldb bug that made this answer print the wrong value if the Decimal is a value in a Dictionary<String, Decimal> (and probably in other cases). See this question and answer and Swift bug report SR-8989. The bug was fixed by Xcode 11 (possibly earlier).
ORIGINAL
You can add lldb support for formatting NSDecimal (and, in Swift, Foundation.Decimal) by installing Python code that converts the raw bits of the NSDecimal to a human-readable string. This is called a type summary script and is documented under “PYTHON SCRIPTING” on this page of the lldb documentation.
One advantage of using a type summary script is that it doesn't involve running code in the target process, which can be important for certain targets.
Another advantage is that the Xcode debugger's variable view seems to work more reliably with a type summary script than with a summary format as seen in hypercrypt's answer. I had trouble with the summary format, but the type summary script works reliably.
Without a type summary script (or other customization), Xcode's variable view shows an NSDecimal (or Swift Decimal) only as its raw fields, much like the dump in the question. With the type summary script installed, the variable view shows the decoded value instead.
Setting up the type summary script involves two steps:
Save the script (shown below) in a file somewhere. I saved it in ~/.../lldb/Decimal.py.
Add a command to ~/.lldbinit to load the script. The command should look like this:
command script import ~/.../lldb/Decimal.py
Change the path to wherever you stored the script.
Here's the script. I have also saved it in this gist.
# Decimal / NSDecimal support for lldb
#
# Put this file somewhere, e.g. ~/.../lldb/Decimal.py
# Then add this line to ~/.lldbinit:
# command script import ~/.../lldb/Decimal.py
import lldb
def stringForDecimal(sbValue, internal_dict):
from decimal import Decimal, getcontext
sbData = sbValue.GetData()
if not sbData.IsValid():
        raise Exception('unable to get data from SBValue')
if sbData.GetByteSize() != 20:
raise Exception('expected data to be 20 bytes but found ' + repr(sbData.GetByteSize()))
sbError = lldb.SBError()
exponent = sbData.GetSignedInt8(sbError, 0)
if sbError.Fail():
raise Exception('unable to read exponent byte: ' + sbError.GetCString())
flags = sbData.GetUnsignedInt8(sbError, 1)
if sbError.Fail():
raise Exception('unable to read flags byte: ' + sbError.GetCString())
length = flags & 0xf
isNegative = (flags & 0x10) != 0
if length == 0 and isNegative:
return 'NaN'
if length == 0:
return '0'
getcontext().prec = 200
value = Decimal(0)
scale = Decimal(1)
for i in range(length):
digit = sbData.GetUnsignedInt16(sbError, 4 + 2 * i)
if sbError.Fail():
raise Exception('unable to read memory: ' + sbError.GetCString())
value += scale * Decimal(digit)
scale *= 65536
value = value.scaleb(exponent)
if isNegative:
value = -value
return str(value)
def __lldb_init_module(debugger, internal_dict):
print('registering Decimal type summaries')
debugger.HandleCommand('type summary add Foundation.Decimal -F "' + __name__ + '.stringForDecimal"')
debugger.HandleCommand('type summary add NSDecimal -F "' + __name__ + '.stringForDecimal"')
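Once the script is loaded (restart your debug session after editing ~/.lldbinit), both the Xcode variable view and the lldb console pick it up. For the NSDecimal in the question, the console output should look something like this:
(lldb) frame variable dataPoint[0]
(NSDecimal) [0] = 8.5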
The easiest way is to turn it into an NSDecimalNumber in the debugger, i.e.
po [NSDecimalNumber decimalNumberWithDecimal:dataPoint[0]]
This will create a new NSDecimalNumber, which prints a nice description. The NSDecimal in your question is 8.5.
(lldb) po [NSDecimalNumber decimalNumberWithDecimal:dataPoint[0]]
8.5
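You can check this against the raw dump in the question: _length = 1 means only _mantissa[0] = 85 is significant (the higher mantissa words are leftover memory), and _exponent = -1 scales the mantissa by a power of ten, giving 85 × 10^-1 = 8.5.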
If you want to have the number displayed in the Variable View, the Summary Format for it would be:
{[NSDecimalNumber decimalNumberWithDecimal:$VAR]}:s
Related
I have enabled all validation layers and am error- and warning-free, so I should be able to render a colored triangle. However, currently all I am seeing is a cleared screen (black) or an empty screen (grey), depending on the machine I am running on.
Upon further inspection, it seems that in my call to vkCreateGraphicsPipelines the pViewports and pScissors are always set to UNUSED even though I passed in values. I do not have dynamic state, and both counts are one.
Am I missing a flag or is my binding flawed?
Code snippet:
Thread 0, Frame 0:
vkCreateGraphicsPipelines(device, pipelineCache, createInfoCount, pCreateInfos, pAllocator, pPipelines) returns VkResult VK_SUCCESS (0):
device: VkDevice = 00000000056FD350
pipelineCache: VkPipelineCache = 0000000000000000
createInfoCount: uint32_t = 1
pCreateInfos: const VkGraphicsPipelineCreateInfo* = 000000000566BB38
pCreateInfos[0]: const VkGraphicsPipelineCreateInfo = 000000000566BB38:
sType: VkStructureType = VK_STRUCTURE_TYPE_GRAPHICS_PIPELINE_CREATE_INFO (28)
pNext: const void* = NULL
...
pViewportState: const VkPipelineViewportStateCreateInfo* = 000000000725A920:
sType: VkStructureType = VK_STRUCTURE_TYPE_PIPELINE_VIEWPORT_STATE_CREATE_INFO (22)
pNext: const void* = NULL
flags: VkPipelineViewportStateCreateFlags = 0
viewportCount: uint32_t = 1
pViewports: const VkViewport* = UNUSED
scissorCount: uint32_t = 1
pScissors: const VkRect2D* = UNUSED
Debugging prints:
Pipeline_Info.pViewportState.pScissors before assign = 0
Pipeline_Info.pViewportState.pViewports before assign = 0
Pipeline_Info.pViewportState.pScissors after assign = 2350856
Pipeline_Info.pViewportState.pViewports after assign = 2348296
vkCreateGraphicsPipelines call result = 0
Full API Dump (see line 2107):
https://pastebin.com/MmXUBnk0
Full code (for those who are curious - see line 791):
https://github.com/AdaDoom3/AdaDoom3/blob/master/Engine/neo-engine-renderer.adb
Just wanted to confirm: The UNUSED is definitely a VK_LAYER_LUNARG_api_dump layer bug.
There is basically if(false) ... else print UNUSED in the code.
UPDATE: Issue fix PR.
So that is a dead end. But you found a bug in the Vulkan ecosystem which is a good thing too...
As for your rendering problem: in several places you have a 1x1 render target (e.g. in vkCmdBeginRenderPass), so there would not be much to see. I think Windows usually does this (initially creating a 0x0 window), and then you have to react to a resize event.
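If it helps, here is a minimal C sketch of building a viewport state whose dimensions follow the current surface size rather than staying at 1x1. The extent parameter is assumed to come from vkGetPhysicalDeviceSurfaceCapabilitiesKHR and should be refreshed (with swapchain recreation) when the window resizes:

#include <vulkan/vulkan.h>

/* Build a viewport state sized to the current surface. The caller owns
   viewport and scissor; those pointers must remain valid until
   vkCreateGraphicsPipelines returns. */
static VkPipelineViewportStateCreateInfo
make_viewport_state(VkExtent2D extent, VkViewport *viewport, VkRect2D *scissor)
{
    *viewport = (VkViewport){
        .x = 0.0f, .y = 0.0f,
        .width  = (float)extent.width,
        .height = (float)extent.height,
        .minDepth = 0.0f, .maxDepth = 1.0f,
    };
    *scissor = (VkRect2D){ .offset = { 0, 0 }, .extent = extent };

    return (VkPipelineViewportStateCreateInfo){
        .sType = VK_STRUCTURE_TYPE_PIPELINE_VIEWPORT_STATE_CREATE_INFO,
        .viewportCount = 1,
        .pViewports = viewport,
        .scissorCount = 1,
        .pScissors = scissor,
    };
}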
In Perl 5, I can use Getopt::Long to parse commandline arguments with some validation (see below from http://perldoc.perl.org/Getopt/Long.html).
use Getopt::Long;
my $data = "file.dat";
my $length = 24;
my $verbose;
GetOptions ("length=i" => \$length, # numeric
"file=s" => \$data, # string
"verbose" => \$verbose) # flag
or die("Error in command line arguments\n");
say $length;
say $data;
say $verbose;
Here =i in "length=i" creates a numeric type constraint on the value associated with --length and =s in "file=s" creates a similar string type constraint.
How do I do something similar in Raku (née Perl 6)?
Basics
That feature is built into Raku (formerly known as Perl 6). Here is the equivalent of your Getopt::Long code in Raku:
sub MAIN ( Str :$file = "file.dat"
, Num :$length = Num(24)
, Bool :$verbose = False
)
{
$file.say;
$length.say;
$verbose.say;
}
MAIN is a special subroutine that automatically parses command line arguments based on its signature.
Str and Num provide string and numeric type constraints.
Bool makes $verbose a binary flag which is False if absent or if called as --/verbose. (The / in --/foo is a common Unix command line syntax for setting an argument to False).
: prepended to the variables in the subroutine signature makes them named (instead of positional) parameters.
Defaults are provided using $variable = followed by the default value.
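For example, assuming the program above is saved as main.p6, a run might look like this:
$ perl6 main.p6 --file=test.dat --verbose
test.dat
24
True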
Aliases
If you want single character or other aliases, you can use the :f(:$foo) syntax.
sub MAIN ( Str :f(:$file) = "file.dat"
, Num :l(:$length) = Num(24)
, Bool :v(:$verbose) = False
)
{
$file.say;
$length.say;
$verbose.say;
}
:x(:$smth) adds an alias for --smth, such as the short alias -x in this example. Multiple aliases, including extra full names, are available too. For example, :foo(:x(:bar(:y(:$baz)))) gets you --foo, -x, --bar, -y and --baz, and a value passed with any of them is bound to $baz. See the invocations after this paragraph.
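With those aliases in place, the following hypothetical invocations are equivalent (note that by default MAIN expects = between a named option and its value):
$ perl6 main.p6 --file=test.dat
$ perl6 main.p6 -f=test.dat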
Positional arguments (and example)
MAIN can also be used with positional arguments. For example, here is Guess the number (from Rosetta Code). It defaults to a min of 0 and max of 100, but any min and max number could be entered. Using is copy allows the parameter to be changed within the subroutine:
#!/usr/bin/env perl6
multi MAIN
#= Guessing game (defaults: min=0 and max=100)
{
MAIN(0, 100)
}
multi MAIN ( $max )
#= Guessing game (min defaults to 0)
{
MAIN(0, $max)
}
multi MAIN
#= Guessing game
( $min is copy #= minimum of range of numbers to guess
, $max is copy #= maximum of range of numbers to guess
)
{
#swap min and max if min is lower
if $min > $max { ($min, $max) = ($max, $min) }
say "Think of a number between $min and $max and I'll guess it!";
while $min <= $max {
my $guess = (($max + $min)/2).floor;
given lc prompt "My guess is $guess. Is your number higher, lower or equal (or quit)? (h/l/e/q)" {
when /^e/ { say "I knew it!"; exit }
when /^h/ { $min = $guess + 1 }
when /^l/ { $max = $guess }
        when /^q/ { say "quitting"; exit }
default { say "WHAT!?!?!" }
}
}
say "How can your number be both higher and lower than $max?!?!?";
}
Usage message
Also, if your command line arguments don't match a MAIN signature, you get a useful usage message, by default. Notice how subroutine and parameter comments starting with #= are smartly incorporated into this usage message:
./guess --help
Usage:
./guess -- Guessing game (defaults: min=0 and max=100)
./guess <max> -- Guessing game (min defaults to 0)
./guess <min> <max> -- Guessing game
<min> minimum of range of numbers to guess
<max> maximum of range of numbers to guess
Here --help isn't a defined command line parameter, thus triggering this usage message.
See also
See also the 2010, 2014, and 2018 Perl 6 advent calendar posts on MAIN, the post Parsing command line arguments in Perl 6, and the section of Synopsis 6 about MAIN.
Alternatively, there is a Getopt::Long for perl6 too. Your program works in it with almost no modifications:
use Getopt::Long;
my $data = "file.dat";
my $length = 24;
my $verbose;
get-options("length=i" => $length, # numeric
"file=s" => $data, # string
"verbose" => $verbose); # flag
say $length;
say $data;
say $verbose;
I am writing some unit tests for a map coordinate function that I am writing. Unfortunately, there's something going on with XCTest that I am unable to nail down that is causing my test to fail:
NSString *testValue = @"121°31'40\"E";
double returnValue = coordinateStringToDecimal(testValue);
static double expectedValue = 121.5277777777778;
XCTAssertEqual(returnValue, expectedValue, @"Expected %f, got %f", expectedValue, returnValue);
I did read this similar question to troubleshoot. However, I am able to validate that the numbers and types are the same. Here is the console output of checking the type of each value:
(lldb) print @encode(__typeof__(returnValue))
(const char [2]) $5 = "d"
(lldb) print @encode(__typeof__(expectedValue))
(const char [2]) $6 = "d"
The Variables View in the debugger shows them to be the same as well.
The interesting thing is the console output of comparing them in lldb:
(lldb) print (returnValue == expectedValue)
(bool) $7 = false
The types are the same and the actual numbers are the same. Why else would my assert be failing???
Because you are dealing with floating-point numbers, there will almost always be small rounding differences, even between double values. In these cases, you need to use a different assertion: XCTAssertEqualWithAccuracy. From the docs:
Generates a failure when a1 is not equal to a2 within + or - accuracy. This test is for scalars such as floats and doubles, where small differences could make these items not exactly equal, but works for all scalars.
Change your assert to something like this:
XCTAssertEqualWithAccuracy(returnValue, expectedValue, 0.000000001);
Or in Swift 4, where the accuracy moved from the function name into a parameter of an XCTAssertEqual overload:
XCTAssertEqual(returnValue, expectedValue, accuracy: 0.000000001, "expected better from you")
In Nimble:
expect(expectedValue).to(beCloseTo(returnValue, within: 0.000000001))
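If the exactness requirement still seems surprising: most decimal fractions have no exact binary representation, so two routes to the "same" value routinely differ in the last bit or two. A minimal C demonstration (the same double arithmetic applies in Objective-C and Swift):

#include <math.h>
#include <stdio.h>

int main(void)
{
    double sum = 0.1 + 0.2;
    printf("%.17g vs %.17g\n", sum, 0.3);            /* 0.30000000000000004 vs 0.29999999999999999 */
    printf("equal: %d\n", sum == 0.3);               /* 0 -- not bitwise equal */
    printf("close: %d\n", fabs(sum - 0.3) < 1e-9);   /* 1 -- equal within a tolerance */
    return 0;
}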
LLDB has various problems printing struct fields of double type, as documented here:
strange-behavior-in-lldb-when-printing-a-double-type-struct-member.
In my own case I tried to print a struct of type CLLocationCoordinate2D. As of Xcode 4.5.2 the error in printing CLLocationCoordinate2D persists.
Looking for ways around this bug I came across a nice macro, LOG_EXPR, in this blog:
http://vgable.com/blog/2010/08/19/the-most-useful-objective-c-code-ive-ever-written/
It does a great job of logging types into the debugger, but can't be called from the debugger.
Has anyone figured out a way to do something like LOG_EXPR while debugging in the LLDB command line interface, or any other improved printing that will work on arbitrary structs from the command line, other than switching back to GDB?
Here's what happens when I type in LLDB:
(lldb) p (CLLocationCoordinate2D)[self mapSetPointLatLon]
(CLLocationCoordinate2D) $4 = {
(CLLocationDegrees) latitude = 42.4604
(CLLocationDegrees) longitude = 42.4604
(double) easting = 42.4604
(double) northing = -71.5179
}
Notice the redundant and wrong lines added by lldb.
Here's what happens (at the same breakpoint) when I compile LOG_EXPR into my code:
Line of Code (not debugger):
LOG_EXPR(self.mapSetPointLatLon);
produces the correct output in the debugger output:
2013-01-26 14:02:17.555 S6E11[79116:c07] self.mapSetPointLatLon = {latitude=42.4604,longitude=-71.5179}
At the same breakpoint if I try to invoke LOG_EXPR from the command line, this is what happens:
(lldb) expr LOG_EXPR(self.mapSetPointLatLon);
error: use of undeclared identifier 'LOG_EXPR'
error: 1 errors parsing expression
(lldb)
Here's another example, since it turns out there is a case when you can get the debugger to do the right thing.
Here's my code fragment where I assign the struct 2 ways:
CLLocationCoordinate2D defloc = CLLocationCoordinate2DMake(kStartingLat, kStartingLon);
[[self.verticalPagingViewController mapPagingViewController] setMapSetPointLatLon: defloc];
If I just print the variable it works.
(lldb) p defloc
(CLLocationCoordinate2D) $1 = {
(CLLocationDegrees) latitude = 42.4604
(CLLocationDegrees) longitude = -71.5179
}
If I'm casting the type of the return value of an accessor method, it fails.
(lldb) p (CLLocationCoordinate2D)[[self.verticalPagingViewController mapPagingViewController] mapSetPointLatLon]
(CLLocationCoordinate2D) $2 = {
(CLLocationDegrees) latitude = 0
(CLLocationDegrees) longitude = 0
(double) easting = 0
(double) northing = 0
}
I'm doing a small app for evaluating and analyzing transfer functions. As boring as the subject might seem to some, I want it to at least look extra cool and pro and awesome etc... So:
Step 1: Gimme teh coefficients! [A bunch of numbers]
Step 2: I'll write the polynomial with its superscripts. [The bunch of numbers in a string]
So, I write a little C parser to print the polynomial with a decent format; for that I require a wchar_t string that I concatenate on the fly. After the string is complete, I quickly try printing it on the console to check everything is OK and keep going. Easy, right? Welp, I ain't that lucky...
wchar_t *polynomial_description( double *polyArray, char size, char var ){
wchar_t *descriptionString, temp[100];
int len, counter = 0;
SUPERSCRIPT superscript;
descriptionString = (wchar_t *) malloc(sizeof(wchar_t) * 2);
descriptionString[0] = '\0';
while( counter < size ){
superscript = polynomial_utilities_superscript( size - counter );
len = swprintf(temp, 100, L"%2.2f%c%c +", polyArray[counter], var, superscript);
printf("temp size: %d\n", len);
descriptionString = (wchar_t *) realloc(descriptionString, sizeof(wchar_t) * (wcslen(descriptionString) + len + 1) );
wcscat(descriptionString, temp);
counter++;
}
//fflush(stdout); //Already tried this
len = wprintf(L"%ls\n", descriptionString);
len = printf("%ls**\n", descriptionString);
len = fprintf(stdout, "%ls*\n", descriptionString);
len = printf("FFS!! Print something!");
return descriptionString;
}
During the run we can see temp size: 8 printed the expected number of times ONLY WHILE DEBUGGING; if I just run the program I get an arbitrary number of prints each run. But after that, as the title states, wprintf, printf and fprintf don't print anything, yet len still changes after each call.
In the caller function (application:didFinishLaunchingWithOptions:, while testing), I put an NSLog to print the returned string, and I don't get ANYTHING, not even the Log part.
What's happening? I'm at a complete loss.
I'm on Xcode 4.2, by the way.
What's the return value from printf/wprintf in the case where you think it's not printing anything? It should be returning either -1 in the case of a failure or 1 or more, since if successful, it should always print at least the newline character after the description string.
If it's returning 1 or more, is the newline getting printed? Have you tried piping the output of your program to a hex dumper such as hexdump -C or xxd(1)?
If it's returning -1, what is the value of errno?
If it turns out that printf is failing with the error EILSEQ, then what's quite likely happening is that your string contains some non-ASCII characters in it, since those cause wcstombs(3) to fail in the default C locale. In that case, the solution is to use setlocale(3) to switch into a UTF-8 locale when your program starts up:
#include <locale.h>   /* for setlocale(3) */

int main(int argc, char **argv)
{
    // Run "locale -a" in the Terminal to get a list of all valid locales
    setlocale(LC_ALL, "en_US.UTF-8");
    ...
}
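If you want to see the failure mode in isolation, here is a self-contained sketch (exact behavior can vary by platform; the superscript two below stands in for the characters your polynomial_utilities_superscript produces):

#include <errno.h>
#include <locale.h>
#include <stdio.h>
#include <string.h>
#include <wchar.h>

int main(void)
{
    /* In the default "C" locale, U+00B2 (superscript two) cannot be
       converted to the narrow output encoding, so wprintf fails. */
    if (wprintf(L"x\u00B2 + 1\n") < 0)
        fprintf(stderr, "wprintf failed: %s\n", strerror(errno));

    /* Switch to a UTF-8 locale, clear the error flag, and retry. */
    setlocale(LC_ALL, "en_US.UTF-8");
    clearerr(stdout);
    if (wprintf(L"x\u00B2 + 1\n") < 0)
        fprintf(stderr, "still failing: %s\n", strerror(errno));
    return 0;
}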