FluentAssertions without exceptions? [duplicate]

This question already has an answer here:
Customize failure handling in FluentAssertions
(1 answer)
Closed 2 years ago.
This seems like a long shot...
I am building a test harness for manual testing (for my QA Team). It runs in a console application and can output some level of smart data, but nothing so automatic as a fully automated test (not my rules).
I would love to use FluentAssertions to generate the text to show, but I don't want to throw an exception.
Is there a way to have FluentAssertions just output a string with its fluent message? (Without throwing an exception.)
NOTE: I am aware of a possible workaround: wrapping my fluent assertion checks in an AssertionScope and surrounding that with a try/catch. But I am hoping to keep the extra code to a minimum so as not to confuse the non-programmer QA person who has to use the test harness.

You could replace the Services.ThrowException property with custom behavior, or you could use AssertionScope's Discard method to collect the failure messages as strings instead of throwing.
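For example, a minimal sketch of the AssertionScope route, assuming a FluentAssertions version in which AssertionScope exposes Discard() (the SoftCheck helper is a made-up name for illustration):

    using System;
    using FluentAssertions;
    using FluentAssertions.Execution;

    public static class SoftCheck
    {
        // Runs the given assertions and returns their failure messages
        // instead of letting the AssertionScope throw when it is disposed.
        public static string[] Check(Action assertions)
        {
            using (var scope = new AssertionScope())
            {
                assertions();              // e.g. () => actual.Should().Be(expected)
                return scope.Discard();    // collects the messages and suppresses the exception
            }
        }
    }

    // Usage in the console harness:
    // var failures = SoftCheck.Check(() => actualValue.Should().Be(expectedValue));
    // foreach (var message in failures) Console.WriteLine(message);

The QA person only ever calls SoftCheck.Check and reads the returned messages, so no try/catch appears in the harness scripts themselves.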


Can I customize error reports in Gatling? [duplicate]

This question already has an answer here:
How can I print/get the errors on gatling report? [duplicate]
(1 answer)
Closed 1 year ago.
Using Karate-Gatling, the default reports I get only log errors as
path/to/feature/myFeature.feature:19 method get
If I read the feature file, I can find the offending line and determine the cause of the error. However I would like to have this information in the report itself (so my colleagues and I don't have to go fishing through the features).
Is there a way to add a custom error log into the Gatling report? Possibly replace the location (path/to/feature/myFeature.feature:19 method get) with a custom message?
No. You can consider contributing code. There may be an opportunity to emit the Karate HTML reports only on errors. But right now, for performance reasons, all the reporting is disabled when in "gatling mode".

ExecutionHook with parallel runner [duplicate]

This question already has answers here:
Dynamic scenario freezes when called using afterFeature hook
(2 answers)
Closed 1 year ago.
I am using the parallel runner to run one of my feature files. It has 8 scenarios as of now. I wanted to integrate a third-party reporting plugin (Extent Reports) to build out the reports. I planned to use the ExecutionHook interface to try and achieve this. Below are the issues I faced and haven't found an answer to, even after looking at the documentation.
My issues
I am creating a new test in the afterFeature method. This gives me two handles, Feature and ExecutionContext. However, since the tests are running in parallel, the reporting steps are getting mixed up with each other. How do I handle this? Is there any out-of-the-box method I can use?
To counter the above, I decided to build the whole report at the end in the overridden afterAll method, but there I am missing the execution context data, so I can't use context.getRequestBuilder() to get the URLs and paths.
Any help would be great.
Please focus on the 1.0 release: https://github.com/intuit/karate/wiki/1.0-upgrade-guide
Reasons:
It gives you a way to build the whole report at the end, and the Results object can iterate over all ScenarioResult instances.
ExecutionHook has been changed to RuntimeHook, see example.
Yes, since tests can run in parallel, it is up to you to synchronize, as the framework has to be high performance; but building reports at the end using the Results object is recommended instead of using the RuntimeHook (see the sketch below).
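A rough sketch of that recommended approach, assuming the Karate 1.0 Runner/Results API (the exact accessor names, e.g. getScenarioResults(), are written from memory and may differ slightly; check the upgrade guide):

    import com.intuit.karate.Results;
    import com.intuit.karate.Runner;

    public class CustomReportRunner {
        public static void main(String[] args) {
            // Run all features with 8 parallel threads; Runner returns a Results
            // object only after every scenario has finished.
            Results results = Runner.path("classpath:features").parallel(8);

            // Build the custom (e.g. Extent) report in one place, after the run,
            // so no synchronization across parallel threads is needed.
            results.getScenarioResults().forEach(sr ->
                    System.out.println(sr.getScenario().getName()
                            + " -> failed: " + sr.isFailed()));

            System.out.println("total failures: " + results.getFailCount());
        }
    }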

Running a test in golang that only works for some versions [duplicate]

This question already has answers here:
How do I skip a tests file if it is run on systems with go 1.4 and below?
(2 answers)
Closed 6 years ago.
I have a library here (https://github.com/turtlemonvh/altscanner) that includes a test comparing functionality of a custom scanner to bufio.Scanner. In particular, I am comparing my approach to the Buffer method which wasn't added until go1.6.
My actual code works with versions of go back to 1.4, but I wanted to include this test (and I'd like to add a benchmark as well) that uses the Buffer function of the bufio.Scanner object.
How can I include these tests that use features of go1.6+ while still allowing code to run for go1.4 and 1.5?
I imagine the answer is using a build flag to trigger the execution of these tests only if explicitly requested (and I do have access to the go version in my CI pipeline via a travis environment variable). I could also abuse the short flag here.
Is there a cleaner approach?
A few minutes after posting this I remembered build constraints. Go has a built-in constraint that handles this exact case, i.e. "version of go must be >= X".
Moving that test into a separate file and adding //+build go1.6 at the top (before the package clause, followed by a blank line) fixed it.
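A minimal sketch of such a file, using the canonical spelling of the constraint (the file name, package name, and test body are made up for illustration; newer toolchains also accept the //go:build go1.6 form):

    // +build go1.6

    package altscanner

    import (
        "bufio"
        "strings"
        "testing"
    )

    // This file only compiles on go1.6 and later because of the build
    // constraint above, so go1.4 and go1.5 builds skip it entirely.
    func TestScannerBuffer(t *testing.T) {
        s := bufio.NewScanner(strings.NewReader("a reasonably long line of input"))
        s.Buffer(make([]byte, 16), 1024*1024) // Buffer was added in go1.6
        for s.Scan() {
            _ = s.Text()
        }
        if err := s.Err(); err != nil {
            t.Fatal(err)
        }
    }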

Exception reporting frameworks for Cocoa [duplicate]

This question already has answers here:
Crash Reporter for Cocoa app [closed]
(7 answers)
Closed 3 years ago.
Are there any frameworks that capture and report exceptions? I need something that catches any exceptions or errors caused by my program while it's being used by a non-technical user and then emails them to me. Do you handle this on your own projects? Did you roll your own solution, or is there an off-the-shelf solution?
Any help would be greatly appreciated.
Exceptions are typically used in Cocoa for denoting programmer error as opposed to something going "oops" at runtime.
The classic example of the former: an array out-of-bounds exception occurs if you try to access the 50th element of a 10-element NSArray. This is programmer error; you should not let it happen.
The classic example of the latter: you try to read a file from disk but the file is missing. This is not an exceptional case; it's fairly common for file read operations to fail, so an exception should not be thrown (it's your job as a Cocoa developer to recover from this gracefully, and it's not too difficult to do so).
Keep this in mind when using Exceptions in Cocoa, especially if they are going to be user-facing.
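To make the distinction concrete, here is a minimal Objective-C sketch (file path and function names are placeholders): the NSError pattern handles the recoverable file-read case, while Foundation's NSSetUncaughtExceptionHandler can still capture genuine programmer errors for reporting.

    #import <Foundation/Foundation.h>

    // Recoverable error: read a file and handle failure via NSError,
    // instead of relying on exceptions.
    static NSString *ReadSettingsFile(void) {
        NSError *error = nil;
        NSString *contents = [NSString stringWithContentsOfFile:@"/path/to/settings.txt"
                                                       encoding:NSUTF8StringEncoding
                                                          error:&error];
        if (contents == nil) {
            // Not exceptional: recover gracefully, e.g. fall back to defaults
            // or show the user a meaningful message.
            NSLog(@"Could not read file: %@", [error localizedDescription]);
        }
        return contents;
    }

    // Truly unexpected exceptions (programmer errors) can still be captured
    // for reporting via an uncaught-exception handler.
    static void MyUncaughtExceptionHandler(NSException *exception) {
        NSLog(@"Uncaught exception: %@\n%@", exception, [exception callStackSymbols]);
        // hand the details off to your email/reporting mechanism here
    }

    // Somewhere early in app startup:
    // NSSetUncaughtExceptionHandler(&MyUncaughtExceptionHandler);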
This may not be exactly what you are looking for, but if you use FogBugz, there is a tool called BugzScout that will create a ticket from the application. You could tie it in to your exception handling and give the user an opportunity to create a ticket when an exception occurs:
http://www.fogcreek.com/FogBugz/docs/70/topics/customers/BugzScout.html

The PSP0 Process and modern tools and programming languages [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 5 years ago.
The Personal Software Process (PSP) is designed to allow software engineers to understand and improve their performance. The PSP uses scripts to guide a practitioner through the process. Each script defines the purpose, the entry criteria, the steps to perform, and the exit criteria. PSP0 is designed to be a framework that allows for starting a personal process.
One of the scripts used in PSP0 is the Development Script, which is to guide development. This script is used once there is a requirements statement, a project plan summary, time and defect recording logs are made, and a defect type standard is established. The activities of this script are design, code, compile, and test. The script is exited when you have a thoroughly tested application and complete time and defect logs.
In the Design and Code phases, you review the requirements, produce a design, and implement it, recording any requirements defects in the log and tracking your time. In the Compile phase, you compile, fix any compile-time errors, and repeat until the program compiles, recording any defects and time. Finally, in the Test phase, you test until all tests run without error and all defects are fixed, while recording time and defects.
My concerns are with how to manage the code, compile, and test phases when using modern programming languages (especially interpreted languages like Python, Perl, and Ruby) and IDEs.
My questions:
In interpreted languages, there is no compile time. However, there might be problems in execution. Is executing the script, outside of the unit (and other) tests, considered "compile" or "test" time? Should errors with execution be considered "compile" or "test" errors when tracking defects?
If a test case encounters a syntax error, is that considered a code defect, a compile defect, or a test defect? The test actually found the error, but it is a code problem.
If an IDE identifies an error that would prevent compilation before actually compiling, should that be identified? If so, should it be identified and tracked as a compile error or a code error?
It seems like the PSP, at least the PSP0 baseline process, is designed to be used with a compiled language and small applications written using a text editor (and not an IDE). In addition to my questions, I would appreciate the advice and commentary of anyone who is using or has used the PSP.
In general, as the PSP is a personal improvement process, the answers to your actual questions do not matter as long as you pick one answer and apply it consistently. That way you will be able to measure the times you take in each defined phase, which is what PSP is after. If your team is collectively using the PSP then you should all agree on which scripts to use and how to answer your questions.
My takes on the actual questions are (not that they are relevant):
In interpreted languages, there is no compile time. However, there might be problems in execution. Is executing the script, outside of the unit (and other) tests, considered "compile" or "test" time? Should errors with execution be considered "compile" or "test" errors when tracking defects?
To me, test time is the time when the actual tests run and not anything else. In this case, I would count both the errors and the execution time as 'compile' time, the time used in generating and running the code.
If a test case encounters a syntax error, is that considered a code defect, a compile defect, or a test defect? The test actually found the error, but it is a code problem.
Syntax errors are code defects.
If an IDE identifies an error that would prevent compilation before actually compiling, should that be identified? If so, should it be identified and tracked as a compile error or a code error?
If the IDE is part of your toolchain, then its catching errors is just like you spotting them yourself, so those are code errors. If you don't use the IDE regularly, then I'd count them as compile errors.
I've used PSP for years. As others have said, it is a personal process, and you will need to evolve PSP0 to improve your development process. Nonetheless, our team (all PSP-trained) grappled with these issues on several fronts. Let me give you an idea of the components involved, and then I'll say how we managed.
We had a PowerBuilder "tier"; the PowerBuilder IDE prevents you from even saving your code until it compiles correctly and links. Part of the system used JSP, though the quantity of Java was minor and boilerplate, so in practice we didn't count it at all. A large portion of the system was in JS/JavaScript; this was done before the wonderful Ajax libraries came along, and it represented a large portion of the work. The other large portion was Oracle PL/SQL, which has a somewhat more traditional compile phase.
When working in PowerBuilder, the compile (and link) phase started when the developer saved the object. If the save succeeded, we recorded a compile time of 0. Otherwise, we recorded the time it took for us to fix the error(s) that caused the compile-time defect. Most often, these were defects injected in coding, removed in compile phase.
That forced compile/link aspect of the PowerBuilder IDE meant we had to move the code review phase to after compiling. Initially, this caused us some distress, because we weren't sure how/if such a change would affect the meaning of the data. In practice, it became a non-issue. In fact, many of us also moved our Oracle PL/SQL code reviews to after the compile phase, because we found that when reviewing the code, we would often gloss over some syntax errors that the compiler would report.
There is nothing wrong with a compile time of 0, any more than there is anything wrong with a test time of 0 (meaning your unit test passed without detecting errors, and ran significantly quicker than your unit of measure). If those times are zero, then you don't remove any defects in those phases, and you won't encounter a div/0 problem. You could also record a nominal minimum of 1 minute, if that makes you more comfortable, or if your measures require a non-zero value.
Your second question is independent of the development environment. When you encounter a defect, you record which phase you injected it in (typically design or code) and the phase you removed it (typically design/code review, compile or test). That gives you the measure called "leverage", which indicates the relative effectiveness of removing a defect in a particular phase (and supports the "common knowledge" that removing defects sooner is more effective than removing them later in the process). The phase the defect was injected in is its type, i.e., a design or coding defect. The phase the defect is removed in doesn't affect its type.
Similarly, with JS/JavaScript, the compile time is effectively immeasurable. We didn't record any times for compile phase, but then again, we didn't remove any defects in that phase. The bulk of the JS/JavaScript defects were injected in design/coding and removed in design review, code review, or test.
It sounds, basically, like your formal process doesn't match your practice process. Step back, re-evaluate what you're doing and whether you should choose a different formal approach (if in fact you need a formal approach to begin with).
In interpreted languages, there is no compile time. However, there might be problems in execution. Is executing the script, outside of the unit (and other) tests, considered "compile" or "test" time? Should errors with execution be considered "compile" or "test" errors when tracking defects?
The errors should be categorized according to when they were created, not when you found them.
If a test case encounters a syntax error, is that considered a code defect, a compile defect, or a test defect? The test actually found the error, but it is a code problem.
Same as above. Always go back to the earliest point in time. If the syntax error was introduced while coding, then it corresponds to the coding phase; if it was introduced while fixing a bug, then it's in the defect phase.
If an IDE identifies an error that would prevent compilation before actually compiling, should that be identified? If so, should it be identified and tracked as a compile error or a code error?
I believe that should not be identified. It's just time spent on writing the code.
As a side note, I've used the Process Dashboard tool to track PSP data and found it quite nice. It's free and Java-based, so it should run anywhere. You can get it here:
http://processdash.sourceforge.net/
After reading the replies by Mike Burton, Vinko Vrsalovic, and JRL and re-reading the appropriate chapters in PSP: A Self-Improvement Process for Software Engineers, I've come up with my own takes on these problems. What is good, however, is that I found a section in the book that I originally missed when two pages stuck together.
In interpreted languages, there is no compile time. However, there might be problems in execution. Is executing the script, outside of the unit (and other) tests, considered "compile" or "test" time? Should errors with execution be considered "compile" or "test" errors when tracking defects?
The book says that "if you are using a development environment that does not compile, then you should merely skip the compile step." However, it also says that if you have a build step, "you can record the build time and any build errors under the compile phase".
This means that for interpreted languages, you will either remove the compile phase from the tracking or replace compilation with your build scripts. Because the PSP0 is generally used with small applications (similar to what you would expect in a university lab), I would expect that you would not have a build process and would simply omit the step.
If a test case encounters a syntax error, is that considered a code defect, a compile defect, or a test defect? The test actually found the error, but it is a code problem.
I would record errors where they are located.
For example, if a test case has a defect, that would be a test defect. If the test ran, and an error was found in the application being tested, that would be a code or design defect, depending on where the problem actually originated.
If an IDE identifies an error that would prevent compilation before actually compiling, should that be identified? If so, should it be identified and tracked as a compile error or a code error?
If the IDE identifies a syntax error, it is the same as you spotting the error yourself before execution. If you use an IDE properly, there is little excuse for letting through defects that would affect execution (that is, errors in running the application other than logic/implementation errors).