How to write a custom rule in SQLFluff?

I was learning about SQLFluff and how to lint using it. While linting, it used only the default rules for checking, but our product needs different customised rules for linting. So I planned to write a custom rule in SQLFluff.
I was trying to find out how a custom rule is written, but was not able to find it. I need a step-by-step procedure for writing a custom rule in SQLFluff.
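For reference, SQLFluff's documented route for this is a plugin package: you implement rule classes and register them through SQLFluff's pluggy-based plugin system. Below is a minimal sketch, modelled on the example plugin in the SQLFluff repository (assuming SQLFluff 2.x; the rule name, code and check are illustrative, not an official rule):

    # my_plugin/rules.py -- hypothetical module in an installable package
    from sqlfluff.core.plugin import hookimpl
    from sqlfluff.core.rules import BaseRule, LintResult, RuleContext
    from sqlfluff.core.rules.crawlers import SegmentSeekerCrawler


    @hookimpl
    def get_rules():
        """Hand the custom rule classes to SQLFluff's plugin manager."""
        return [Rule_MyCompany_L001]


    class Rule_MyCompany_L001(BaseRule):
        """Illustrative rule: forbid ORDER BY clauses outright."""

        groups = ("all",)
        # Only crawl the segment types this rule cares about.
        crawl_behaviour = SegmentSeekerCrawler({"orderby_clause"})

        def _eval(self, context: RuleContext):
            # Returning a LintResult anchored on the segment flags a violation.
            return LintResult(
                anchor=context.segment,
                description="ORDER BY is not allowed by our style guide.",
            )

For SQLFluff to discover the plugin, the package also needs an entry point in the sqlfluff group (declared in setup.cfg or pyproject.toml). The "Developing plugins" page in the SQLFluff documentation and the sqlfluff-plugin-example package in the main repository walk through the full step-by-step setup.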

Related

Configure rules in Detekt

I am adding Detekt to a new project.
But I find that some rules are too strict.
How can I implement my own thresholds for a few rules?
I don't want to use baseline files, because this is new code and we don't want some things to be considered code smells.
It's possible to configure Detekt via configuration files.
As per Detekt's documentation, with my emphasis:
detekt allows easily to just pick the rules you want and configure them the way you like. For example if you want to allow up to 20 functions inside a Kotlin file instead of the default threshold, write:
    complexity:
      TooManyFunctions:
        thresholdInFiles: 20
You'll want to create a config folder in the root of your project and add detekt.yaml to it. This config can contain just the rules whose defaults you want to override, or all the rules (establishing the defaults as the correct values).
For more information, check the official documentation on configuration.

Snakemake: parameterized runs that re-use intermediate output files

Here is a complex problem that I am struggling to find a clean solution for:
Imagine having a Snakemake workflow with several rules that can be parameterized in some way. Now, we might want to test different parameter settings for some rules, to see how the results differ. However, ideally, if these rules depend on the output of other rules that are not parameterized, we want to re-use these non-changing files, instead of re-computing them for each of our parameter settings. Furthermore, if at all possible, all this should be optional, so that in the default case, a user does not see any of this.
There is inherent complexity in there (to specify which files are re-used, etc). I am also aware that this is not exactly the intended use case of Snakemake ("reproducible workflows"), but is more of a meta-feature for experimentation.
Here are some approaches:
Naive solution: Add wildcards for each possible parameter to the file paths. This gets ugly, hard to maintain, and hard to extend really quickly. Not a solution.
A nicer approach might be to name each run and have an individual config file for that name which contains all the settings we need. Then we would only need a wildcard for such a named set of parameter settings. That would probably require reading some table or meta-config file and processing it. It doesn't solve the re-use issue, though. Also, it means we need multiple config files for one snakemake call, and it seems that this is not possible (they would update each other instead of being considered individual configs to be run separately).
Somehow use sub-workflows, by specifying individual config files each time, e.g., via a wildcard. Not sure this can be done (e.g., configfile: path/to/{config_name}.yaml). Still not a solution for file re-use.
Quick-and-dirty: Run all the rules up to the last output file that is shared between different configurations. Then, manually (or with some extra script) create directories with symlinks to this "base" run, with individual config files that specify the parameters for the per-config runs. This still necessitates calling snakemake individually for each of these directories, making cluster usage harder.
None of these solve all issues though. Any ideas appreciated!
Thanks in advance, all the best
Lucas
Snakemake now offers the Paramspace helper to solve this! https://snakemake.readthedocs.io/en/stable/snakefiles/rules.html?highlight=parameter#parameter-space-exploration
I have not tried it yet, but it seems like the solution to the issue!
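In case it helps later readers, here is a minimal sketch of how Paramspace is meant to be used (the params.tsv file and its alpha/beta columns are hypothetical; see the linked documentation for the full rule syntax):

    # Hypothetical parameter table: one row per parameter combination,
    # e.g. with columns "alpha" and "beta".
    import pandas as pd
    from snakemake.utils import Paramspace

    paramspace = Paramspace(pd.read_csv("params.tsv", sep="\t"))

    # wildcard_pattern expands to a path fragment like
    # "alpha~{alpha}/beta~{beta}", so inside a Snakefile the parameterized
    # rules can write to per-combination paths, e.g.
    #     output: f"results/{paramspace.wildcard_pattern}.txt"
    #     params: sim=paramspace.instance
    # while unparameterized upstream rules keep plain output paths and are
    # therefore computed once and re-used across all combinations.
    print(paramspace.wildcard_pattern)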

ArchUnit to test actual layered architecture

Currently in our project we have a layered architecture implemented in the following way, where Controller, Service and Repository are placed in the same package for each feature, for instance:
    feature1:
        Feature1Controller
        Feature1Service
        Feature1Repository
    feature2:
        Feature2Controller
        Feature2Service
        Feature2Repository
I've found the following example of an ArchUnit test where such classes are placed in dedicated packages: https://github.com/TNG/ArchUnit-Examples/blob/master/example-junit5/src/test/java/com/tngtech/archunit/exampletest/junit5/LayeredArchitectureTest.java
Please suggest whether there is a possibility to test layered architecture when all layers are in a single package.
If the file naming conventions are followed properly across your project, how about writing custom test cases instead of using layeredArchitecture()?
For example:
    @ArchTest
    static final ArchRule services_should_only_be_accessed_by_controllers =
        classes().that().haveSimpleNameEndingWith("Service")
            .should().onlyBeAccessed().byClassesThat().haveSimpleNameEndingWith("Controller");

    @ArchTest
    static final ArchRule services_should_not_access_controllers =
        noClasses().that().haveSimpleNameEndingWith("Service")
            .should().accessClassesThat().haveSimpleNameEndingWith("Controller");
I know this question is rather old. But for the record, this has been possible for a while using predicates for the layers, e.g.
    layeredArchitecture().consideringAllDependencies()
        .layer("Controllers").definedBy(HasName.Predicates.nameEndingWith("Controller"))
        .layer("Services").definedBy(HasName.Predicates.nameEndingWith("Service"))
        .layer("Repository").definedBy(HasName.Predicates.nameEndingWith("Repository"))
        .whereLayer("Controllers").mayNotBeAccessedByAnyLayer()
        .whereLayer("Services").mayOnlyBeAccessedByLayers("Controllers")
        .whereLayer("Repository").mayOnlyBeAccessedByLayers("Services");
However, I'm not sure how well this works in practice, because usually you don't just have classes following this naming pattern and that's it. A service might also have some POJO as a method parameter type (e.g. MyInput), and maybe that should not be used by repositories either. Also, using forward dependency rules (mayOnlyAccessLayers(..)), this might then cause unwanted violations.

Compiling Sass with custom variables per request under Rails 3.1

In a Rails 3.1 app, one controller needs to have all its views compile whatever Sass stylesheets they might need per request using a set of custom variables. Ideally, the compilation must happen via the asset pipeline so that content-based asset names (ones that include an MD5 hash of the content) are generated. It is important for the solution to use pure Sass capabilities as opposed to resorting to, for example, ERB processing of Sass stylesheets.
From the research I've done here and elsewhere, the following seems like a possible approach:
Set up variable access
Create some type of variable accessor bridge using custom Sass functions, e.g., as described by Konstantin Haase here (gist). This seems pretty easy to do.
Configure all variable access via a Sass partial, e.g., in _base.sass which is the Compass way. The partial can use the custom functions defined above. Also easy.
Capture all asset references
Decorate the asset_path method of the view object. I have this working well.
Resolve references using a custom subclass of Sprockets::Environment. This is also working well.
Force asset recompilation, regardless of file modification times
I have not found a good solution for this yet.
I've seen examples of kicking off Sass processing manually by instantiating a new Sass::Engine and passing custom data that will be available in Sass::Script::Functions::EvaluationContext. The problem with this approach is that I'd have to manage file naming and paths myself and I'd always run the risk of possible deviation from what Sprockets does.
I wasn't able to find any examples of forcing Sprockets processing on a per-request basis, regardless of file mod times, that also allows for custom variable passing.
I'd appreciate comments on the general approach as well as any specific pointers/suggestions on how to best handle (3).
Sim.
It is possible. Look here: SASS: Set variable at compile time
I wrote a solution to address it; I'll post it soon and push it here, in case you or someone else still needs it.
SASS is designed to be pre-compiled to CSS. Having Sprockets do this on a per-request basis for every view is not going to perform very well. Every request will have to wait for the compilation to finish, and it is not fast (from a page-serving point of view).
The MD5 generation is within Sprockets, so if you are changing custom variables you are going to have to force a compilation on every single request to make sure the changes are seen, because the view is (probably) not going to know about them.
It sounds as though this is not really in the sweet-spot of the asset-pipeline, and you should look at doing something more optimised for truly dynamic CSS.
Sorry. :-)

CDash Custom Dynamic Analysis

I'm trying to integrate custom dynamic analysis tools, such as KWStyle, CppCheck and Visual Leak Detector, into CDash.
I've figured out that I need to generate a DynamicAnalysis.xml file and submit it to CDash from CTest scripts.
I think I know how to run the external tool as part of the CTest script, either by using these variables to change how ctest_memcheck() works:
    CTEST_MEMORYCHECK_COMMAND
    CTEST_MEMORYCHECK_SUPPRESSIONS_FILE
    CTEST_MEMORYCHECK_COMMAND_OPTIONS
or by running the tool from the execute_process() command.
But I'm a bit uncertain which one to use.
The main problem I think I have is: how can I extract errors from the output of the custom tool and include that information in the DynamicAnalysis.xml to submit?
The extreme solution I see is that I'd need to make a program that generates a valid DynamicAnalysis.xml file.
But the problem is that I don't know the syntax of the DefectList element in the XML file. I have found no answer from Google, and even the XML Schema for that file is unhelpful.
EDIT:
Looking at this:
http://www.cdash.org/CDash/viewDynamicAnalysis.php?buildid=987149
What draws my attention are the labels, especially the empty ones. I don't see how these would come from the DynamicAnalysis.xml file. Maybe it tracks any labels that have ever appeared? Can I create my own custom labels somehow?
Does CDash create the labels automatically, depending on the tool type? Does this block custom defect types?
I'm just guessing here, so the question is: can I create custom labels for my custom tool just by generating a DynamicAnalysis.xml file?
It occurred to me that the number of different errors from CppCheck (static code analysis) is huge compared to Valgrind, for instance. I'm not that certain that I should use dynamic analysis. Maybe a custom build type (Continuous / Experimental / Nightly) would work better. Like this:
http://www.cdash.org/CDash/buildSummary.php?buildid=930174
I have no idea how to do this; I guess it requires meddling around with the CDash code?
Which one would work better?
If you are using valgrind, you can simply set CTEST_MEMORYCHECK_COMMAND to the full path to valgrind, and ctest will generate the DynamicAnalysis.xml file for you from the valgrind output when you call ctest_memcheck.
The best way to understand the possible values that can appear in the DynamicAnalysis.xml file is to analyze the source code of CTest.
The file CMake/Source/CTest/cmCTestMemCheckHandler.cxx has the list of defect types in a variable named "cmCTestMemCheckResultLongStrings". Search through that file for references to that variable to see what the possible values are and how they are used to generate "<Defect/>" XML elements.
EDIT (for additional information):
You can also easily see what XML elements CDash is expecting by inspecting its source code. Specifically, the file "CDash/xml_handlers/dynamic_analysis_handler.php".
What I've learned so far is that for a tool that runs on the tests defined in the CMake script, Dynamic Analysis is the thing.
For tools that run on the entire program, a custom Build.xml is what you need.
I found out that I can submit those files with the ctest_submit command by using the FILES parameter.
I also found out that you can add custom "build names" alongside Continuous, Nightly, and the others,
and that you can set the builds from certain machines to be automatically transferred under these.
The custom labels under Dynamic Analysis did come from somewhere in CDash; I can't remember where anymore.