NCover Exclude Anonymous Types - msbuild

I am using TeamCity with NCover integration, and we want to exclude anonymous types from our code coverage reports. Is this possible? I've searched through the documentation and can't find any mention of how or whether this can be done.

You could use the fact that they are attributed with CompilerGeneratedAttribute and exclude them; however, this has the bad side effect of also excluding the expressions in lambdas and possibly several other things.
//ea "System.Runtime.CompilerServices.CompilerGeneratedAttribute"
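For example, with the NCover console runner the switch is passed on the command line, roughly like this (a sketch only; the exact runner name and switch syntax depend on your NCover version):
NCover.Console.exe YourTestRunner.exe //ea "System.Runtime.CompilerServices.CompilerGeneratedAttribute"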
In our NCover setup we exclude code attributed with GeneratedCodeAttribute, but nothing else, as we couldn't find a reliable way of identifying anonymous types. At the end of the day, anonymous types are very easy to cover if you have at least a single unit test on that code.


Ultimate complete list of native-internal VBA commands

After discovering (see here and here) that:
"VBA.Len" is not equivalent to "Len"
"VBA.LenB" is not equivalent to "LenB"
"VBA.Mid" is not equivalent to "Mid"
"VBA.Left$" is equivalent to "Left$"
and other confusing things, like the following:
"Left" and "InStr" are in this official list of keywords, while they're not in this other official list of keywords
"InStrRev" and "LenB" don't appear in any official keywords list, while "InStr" and "Len" do appear in one or both lists
I'm left very confused about when to use "VBA." and when not.
Is there a way to obtain a real complete list of the native-internal VBA commands, more reliable than the official documentation?
I mean something like an "object browser" (where I can see the real complete list of the commands in the "VBA." library) for native-internal VBA commands?
The best resource is probably the VBE's help and its online equivalent, though it's not complete and doesn't go into the details you raise. It also doesn't document 'hidden' methods which were once intended to be deprecated but never were, some of which are very useful! Some of your questions were well answered in those other threads, but more generally, about your points:
Qualifying with VBA is more about 'where' the method is sourced from, and then whether or not there is any difference in the actual method; in most cases there isn't, apart from the sourcing. Most of those string functions are both in the Strings module and in the _HiddenInterface interface (which is faster).
Left$, though, is only in the Strings module, as are the similar $ functions, which is why you don't notice any difference in performance with them. Str is in both the _HiddenInterface and the Conversion module.
If you're sure the unqualified versions do what you want, you might think it best not to qualify, and generally that's fine. But if you ever end up with a project with a MISSING reference, unqualified Strings and DateTime functions will blow up before your code has a chance to be alerted. Depending on how the project is deployed, or if I'm not sure, I tend to fully qualify, e.g. VBA.Strings.Left$, VBA.DateTime.Date. FWIW, as a small bonus you'll get the IntelliSense.
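A minimal sketch of the difference (ordinary VBA; the point is only where the names resolve from):
Sub QualificationDemo()
    Dim s As String
    ' Unqualified: resolved through the project's reference list
    s = Left$("Hello world", 5)
    ' Fully qualified: unaffected if a MISSING reference shadows the name
    s = VBA.Strings.Left$("Hello world", 5)
    Debug.Print s, VBA.DateTime.Date
End Sub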

What is the difference between the DEFAULT, STRONGER and ALL operators in Pitclipse?

I was using Pitclipse on Eclipse to run PIT mutation tests on code. There are three types of operator groups in Pitclipse: DEFAULT, STRONGER and ALL.
What is the difference between them, and which mutators does each group include?
You can see the existing groups here: https://pitest.org/quickstart/mutators/
Each group has a different set of mutators/operators. I know that DEFAULT only includes mutators that are very well tested. If you choose ALL, you may include some mutators that produce false positives. In my experience that's not common, but it has happened to me once.
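To make "mutator" concrete: each mutator makes one small change to the compiled code, and PIT then checks whether any test fails. For example, the conditionals-boundary mutator (part of the default group) effectively rewrites < as <=, as in this sketch (class and method names made up):
class LimitCheck {
    // The conditionals-boundary mutant behaves as if "<" were "<=".
    static boolean isUnderLimit(int value, int limit) {
        return value < limit;
    }
}
// A test suite that only checks isUnderLimit(1, 10) and isUnderLimit(20, 10)
// lets that mutant survive; asserting on isUnderLimit(10, 10) kills it.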

Proprietary handling/collecting of user defined errors

I do not know how to implement a proprietary procedure for handling user-defined errors (which stop the routine/algorithm) and warning messages (the routine/algorithm can proceed) without using exceptions (i.e. failwith or a standard System.Exception).
Example: I have a module with a series of functions that take a lot of input data, which must be checked and then used to calculate the thickness of a pressure vessel component.
The calculation procedure is complex and iterative, and there are a lot of checks to be performed before getting a result: checks that can generate "user-defined errors", which stop the procedure/routine/algorithm, or "warning messages", after which it proceeds.
I need to collect these errors and messages so they can be shown to the user in a dedicated form (WPF or Windows Forms) at the end of the run.
Note: every time I read a book on F#, C# or Visual Basic, or an article on the internet, I find the same philosophy/warning: raising system or user-defined exceptions should be limited as much as possible; exceptions are for unmanageable, unpredictable exceptional events, and they put extra load on the system.
I do not know which handling philosophy to implement, and I'm confused; there are limited sources available on the internet on this particular topic.
At the moment I'm planning to adopt the approach described at https://fsharpforfunandprofit.com/posts/recipe-part2/. It sounds good to me: complex, but good. I was not able to find other references on this topic.
Question: are there other philosophies I can consider for this proprietary handling/collecting of user-defined errors? Any books or articles to read?
My decision will have a big impact on how I design and write my code: how to split the problem into several functions; whether to build an "engine" that runs functions in sequence or composes them in different ways depending on the results; where to check for errors/warnings; and how to store the error and warning messages so I can understand what is going on, where each error/warning was generated, and by which function.
Many thanks in advance.
The F# way is to encode the errors in the types as much as possible. The easiest example is an option type, where you would return None if the operation failed and Some value when it succeeded. Surprisingly, very often this is enough! If not, then you can encode the different types of errors AND a success "state" in a discriminated union, e.g.
[<Measure>]
type psi // unit of measure: pounds per square inch

type VesselPressureResult =
    | PressureOk
    | WarningApproachingLimit
    | ErrorOverLimitBy of int<psi>
and then you will use pattern matching to "decide" what to do in each case. If you need to add more variants, e.g. ErrorTooLow, you would add them to the DU, and the compiler will then "tell" you (via incomplete-match warnings) about all the places where you need to fix the logic.
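A minimal sketch of that pattern matching (the report function and its messages are made up for illustration):
let report (result: VesselPressureResult) =
    match result with
    | PressureOk -> printfn "Pressure within limits"
    | WarningApproachingLimit -> printfn "Warning: approaching the pressure limit"
    | ErrorOverLimitBy amount -> printfn "Error: over the limit by %d psi" (int amount)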
Here is the perfect source with detailed information: https://fsharpforfunandprofit.com/series/designing-with-types.html

How to quickly analyse the impact of a program change?

Lately I have needed to do an impact analysis of changing the definition of a DB column in a widely used table (like PRODUCT, USER, etc.). I find it a very time-consuming, boring and difficult task. I would like to ask if there is any known methodology for doing this.
The question also applies to changes to an application, file system, search engine, etc. At first I thought this kind of functional relationship should be pre-documented or somehow tracked, but then I realized that everything can change, so it would be impossible to do so.
I don't even know how this question should be tagged; please help.
Sorry for my poor English.
Sure. One can, technically at least, know what code touches the DB column (reads or writes it) by computing program slices.
Methodology: Find all SQL code elements in your sources. Determine which ones touch the column in question. (Careful: a SELECT * may touch your column, so you need to know the schema.) Determine which variables read or write that column. Follow those variables wherever they go, and determine the code and variables they affect; follow all those variables too. (This amounts to computing a forward slice.) Likewise, find the sources of the variables used to fill the column; follow them back to their code and sources, and follow those variables too. (This amounts to computing a backward slice.)
All the elements of the slice are potentially affecting/affected by a change. There may be conditions in the slice-selected code that are clearly outside the conditions expected by your new use case, and you can eliminate that code from consideration. Everything else in the slices you may have to inspect/modify to make your change.
Now, your change may affect some other code (e.g., a new place that uses the DB column, or one that combines the value from the DB column with some other value). You'll want to inspect the upstream and downstream slices of the code you change, too.
You can apply this process for any change you might make to the code base, not just DB columns.
Doing this manually in a big code base is not easy, and it certainly isn't quick. There is some automation for doing this for C and C++ code, but not much for other languages.
You can get a rough approximation by running test cases that involve your desired variable or action and inspecting the test coverage. (Your approximation gets better if you also run test cases you are sure do NOT cover your desired variable or action, and eliminate all the code they cover.)
Ultimately this task cannot be fully automated or reduced to an algorithm; otherwise there would be a tool to preview refactored changes. The better you wrote the code in the beginning, the easier the task.
Let me explain how to reach the answer: isolation is the key. Mapping everything to object properties can help you automate your review.
I can give you an example. If you can manage to map your specific case to the below, it will save your life.
The OR/M change pattern
Like Hibernate or Entity Framework...
A change to a database column can be simply previewed by analysing which code uses the corresponding object property. Since all DB columns are mapped to object properties, and assuming no code uses raw SQL, you are good to go for your estimates.
This is a very simple pattern for change management.
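A minimal sketch of the idea, using hypothetical JPA-style names (Product, unit_price):
import java.math.BigDecimal;
import javax.persistence.Column;
import javax.persistence.Entity;

@Entity
public class Product {
    // The physical column name appears in exactly one place; all other code
    // goes through the property, so "find usages" on getUnitPrice() previews
    // the impact of a change to the column.
    @Column(name = "unit_price")
    private BigDecimal unitPrice;

    public BigDecimal getUnitPrice() { return unitPrice; }

    public void setUnitPrice(BigDecimal unitPrice) { this.unitPrice = unitPrice; }
}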
In order to reduce a file system/network or data file issue to the above pattern, you need other software patterns in place. I mean: if you can reduce a complex scenario to a change in your objects' properties, you can leverage your IDE to detect the changes for you, including code that needs a slight modification to compile or that needs to be rewritten entirely.
If you want to manage a change in a remote service when you initially write your software, wrap that service in an interface, so you will only have to modify its implementation (see the sketch after this list)
If you want to manage a possible change in a data file format (e.g. a field length change in a positional format, or column reordering), write a service that maps that file to an object (e.g. using the BeanIO parser)
If you want to manage a possible change in file system paths, design your application to use more runtime variables
If you want to manage a possible change in cryptography algorithms, wrap them in services (e.g. HashService, CryptoService, SignService)
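A minimal sketch of the interface-wrapping idea mentioned above (all names are made up for illustration):
public interface ExchangeRateService {
    double rateFor(String currencyCode);
}

// If the remote side changes, only this implementation changes;
// callers keep depending on the interface alone.
class RemoteExchangeRateService implements ExchangeRateService {
    @Override
    public double rateFor(String currencyCode) {
        // ... call the remote service here ...
        return 1.0; // placeholder value
    }
}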
If you do the above, your manual requirements review will be easier, because although the overall task is manual, it can be aided by automated tools. For example, you can change the name of a class's property and see the side effects in the compiler.
Worst case
Obviously, if you need to change the name, type and length of a specific column in a database, in software with plain SQL hardcoded and scattered in multiple places around the code, where (worse still) many tables have similar column names, and with no project documentation (I did write worst case, right?), in a project of 10,000+ classes, then you have no other way than to explore the project manually, using find tools but not relying on them.
And if you don't have a test plan, which is the document from which you can hope to originate a software test suite, it will be time to make one.
Just adding my 2 cents: I'm assuming you're working in a production environment, so there must be some form of unit tests, integration tests and system tests already written.
If yes, then a good way to validate your changes is to run all these tests again and create any new tests which might be necessary.
And to state the obvious, do not integrate your code changes into the main production code base without running these tests.
Then again, changes which worked fine in a test environment may not work in a production environment.
Have some form of source code configuration management system like Subversion, GitHub, CVS etc.
This enables you to roll back your changes.

Oracle database dependencies in PL/SQL

I need to find the dependencies between functions/procedures (defined inside package bodies) and the tables they use.
I've tried all_dependencies, but it works only at the package level, not at the level of the inner functions/procedures.
Is there any possibility of finding these dependencies using e.g. all_source?
Thanks in advance for your help.
It is not possible to find the dependencies between procedures (in a package) and tables.
There are several tools for examining dependencies. As you've already discovered, *_DEPENDENCIES only tracks object dependencies at the package level. There is a neat tool, PL/Scope, that tracks dependencies between parts of a package. But it does not track all table references.
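For example, a rough sketch of querying PL/Scope (the package name is a placeholder; note that PL/Scope catalogues identifiers, and it does not record the tables referenced inside the embedded SQL statements):
ALTER SESSION SET plscope_settings = 'IDENTIFIERS:ALL';
ALTER PACKAGE my_pkg COMPILE BODY;

SELECT name, type, usage, line
FROM   user_identifiers
WHERE  object_name = 'MY_PKG'
ORDER  BY line;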
Theoretically you could use *_SOURCE. In practice, this is impossible unless your code uses a limited set of features. For any moderately complicated code, forget about using string functions or regular expressions to parse code. Unfortunately there does not seem to be any PL/SQL parser that is both programmable and capable of accurately parsing complex code.
Saying "it's not possible" isn't a great answer. But in this case it might save you a lot of time. This is one of those tasks where it's very easy to hit a dead end and waste a lot of effort.