I am trying to integrate Security Code Scan with GitLab CI. I read the documentation, but I still can't understand how exactly I must write the commands for SCS in the .yml file (the output looks like: source file: warning SCS[rule id]: [warning description] [project_file]). My GitLab is hosted on a Windows 10 machine without a container. The project is .NET Framework 4.6.2 and I use Visual Studio 2019. I have already added the SCS package from NuGet. I have also read about Fortify, but I am stuck on the same problem.
Per the GitLab docs, you really just add this include to your main .gitlab-ci.yml file.
include:
- template: Security/SAST.gitlab-ci.yml
The template defines a job that uses a custom Docker image and a Go wrapper around the Security Code Scan package. It dynamically adds the SCS package to the discovered projects, runs a build, and captures and parses the output to produce the security report.
It does things this way because Security Code Scan runs as an analyzer at build time... it's not a normal CLI application, although there are (mostly ignored) issues asking for that option.
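If you keep the template approach, the included job is tuned with documented SAST CI variables rather than extra commands. A minimal sketch (the variable names are the ones I recall from the GitLab SAST docs; the values here are only illustrative):
include:
  - template: Security/SAST.gitlab-ci.yml

variables:
  SEARCH_MAX_DEPTH: 4                  # how deep the scanner looks for solution/project files
  SAST_EXCLUDED_PATHS: "tests, docs"   # comma-separated paths to exclude from scanning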
Update: You could just add the Security Code Scan package to your project(s)
$ dotnet add package SecurityCodeScan --version 3.5.3
And run a normal build in your GitLab pipeline, reading the warnings that are produced in the pipeline logs.
build:
  stage: build
  image: mcr.microsoft.com/dotnet/core/sdk:3.1
  script:
    - dotnet build
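Since the question mentions a Windows 10 host without containers and a .NET Framework 4.6.2 project, a rough shell-runner equivalent might look like the following sketch (the runner tag and solution name are placeholders, and it assumes nuget and msbuild are available on the runner's PATH):
build:
  stage: build
  tags:
    - windows                          # placeholder tag for your Windows shell runner
  script:
    - nuget restore MySolution.sln     # restores packages, including SecurityCodeScan
    - msbuild MySolution.sln /p:Configuration=Release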
You could enable TreatWarningsAsErrors to break the build, too.
dotnet build /p:TreatWarningsAsErrors=true
<PropertyGroup>
  <TreatWarningsAsErrors>true</TreatWarningsAsErrors>
</PropertyGroup>
You won't get a nice report attached to the merge request this way, just pipeline logs. The interactive pipeline report doesn't appear unless you have a Gold plan anyway.
With GitLab 13.9 (February 2021), this will work for multiple projects too:
Multi-project support for .NET SAST scanning
GitLab security scans automatically detect code language and run appropriate analyzers.
With monorepos, microservices, and multi-project repositories, more than one project can exist within a single GitLab repository.
Previously our .NET SAST tool could only detect single projects in repositories.
With this release, our .NET SAST analyzer can now intelligently detect multiple solution files (.sln) in .NET projects and report vulnerabilities across them.
This will make it easier and more streamlined for users with multi-project .NET repos to leverage our SAST scanning.
See Documentation and Issue.
And it becomes more visible (still with GitLab 13.9, February 2021):
Security Configuration page for all users
With SAST and Secret Detection available to all GitLab customers, we wanted to improve the experience for developers enabling available security scans. We have had a guided configuration experience for Ultimate users and have now made a simplified version of this experience available to all users. This new configuration experience makes it easier for developers to understand which security scans are available to them, find relevant documentation, and provide simple enablement tools. With this initial release, we support a button to create a merge request for SAST. In a future release, we will add additional buttons to easily enable other scan types.
-- Security Configuration page for all users
See Documentation and Epic.
This evolves further with GitLab 13.11 (April 2021):
GitLab + Semgrep: upgrading SAST for the future
GitLab SAST historically has been powered by over a dozen open-source static analysis security analyzers. These analyzers have proactively identified millions of vulnerabilities for developers using GitLab every month.
Each of these analyzers is language-specific and has different technology approaches to scanning. These differences produce overhead for updating, managing, and maintaining additional features we build on top of these tools, and they create confusion for anyone attempting to debug.
The GitLab Static Analysis team is continuously evaluating new security analyzers. We have been impressed by a relatively new tool from the development team at r2c called Semgrep.
It’s a fast, open-source, static analysis tool for finding bugs and enforcing code standards.
Semgrep’s rules look like the code you are searching for; this means you can write your own rules without having to understand abstract syntax trees (ASTs) or wrestle with regexes.
Semgrep’s flexible rule syntax is ideal for streamlining GitLab’s Custom Rulesets feature for extending and modifying detection rules, a popular request from GitLab SAST customers. Semgrep also has a growing open-source registry of 1,000+ community rules.
We are in the process of transitioning many of our lint-based SAST analyzers to Semgrep.
This transition will help increase stability, performance, rule coverage, and allow GitLab customers access to Semgrep’s community rules and additional custom ruleset capabilities that we will be adding in the future. We have enjoyed working with the r2c team and we cannot wait to transition more of our analyzers to Semgrep. You can read more in our transition epic, or try out our first experimental Semgrep analyzers for JavaScript, TypeScript, and Python.
(So not yet for .NET, but soon.)
We are excited about what this transition means for the future of GitLab SAST and the larger Semgrep community.
GitLab will be contributing to the Semgrep open-source project including additional rules to ensure coverage matches or exceeds our existing analyzers.
See Documentation and Epic.
The same GitLab 13.11 release announces:
OpenShift Support for SAST and Secret Detection
Experimental Semgrep Analyzer for Python, JavaScript, and TypeScript
Support for Generic Kotlin SAST Scanning
See GitLab 13.12 (May 2021)
Semgrep SAST Analyzer for JavaScript, TypeScript, and Python
In GitLab 13.11 we announced an experimental release of Semgrep, a new SAST analyzer for JavaScript, TypeScript, and Python. This transition has been developed in partnership with r2c, the team behind Semgrep who share our mission to help developers write more secure code. After an extensive beta with hundreds of customers trying out our experimental analyzer, we’re ready to start the transition to Semgrep.
With 13.12, we’re updating our managed SAST.gitlab-ci.yml CI template to automatically run this new analyzer alongside our existing JavaScript and TypeScript analyzer, ESLint. In a future release we will fully disable ESLint, but for now it will work in unison with Semgrep. We’ve done work to deduplicate findings, so you should not notice any difference in findings. If you include our SAST.gitlab-ci.yml, you don’t need to do anything to start benefiting from the Semgrep analyzer; however, if you override or manage your own SAST CI configuration, you should update your CI configuration.
Both GitLab and r2c are excited about the future of this transition to bring you fast and wide coverage Static Application Security Testing (SAST). We’ll continue to expand the Semgrep analyzer through new security detection rules as well as expanding coverage to other languages. We’ve created a feedback issue where you can share your experience with this transition or ask questions.
See Documentation and Epic.
See GitLab 15.0 (May 2022)
Semgrep-based SAST scanning available for early adoption
You can now switch to Semgrep-based scanning for many languages in GitLab SAST.
Semgrep-based scanning brings significantly faster analysis, reduced usage of CI minutes, and more customizable scanning rules compared to existing language-specific analyzers.
As of GitLab 15.0, it supports C, Go, Java, JavaScript, Python, and TypeScript.
In a future release, we’ll change GitLab SAST to use only Semgrep-based scanning by default for supported languages, and we’ll remove the language-specific analyzers that also scan them. (This change was previously scheduled for GitLab 15.0; work to complete it is tracked in this deprecation issue.)
You can now choose to disable deprecated language-specific analyzers early and use Semgrep-based scanning instead before we change the default behavior. We’ve updated documentation to explain the transition, including guidance on when to make the change in your pipelines.
See Documentation and Issue.
Related
I haven't done any cshtml front-end development for a few years.
What's the current, generally accepted way for ASP.NET Core front-end developers to work across a range of tools on Windows?
By that, I mean a way to have the front-end JS build and the .NET project(s) build as well, and to work rapidly in the browser and in code.
My thinking is:
We have a much better command-line story around dotnet today.
Some folk like VS Code.
Some folk prefer VS 2019, and some like either, depending.
We need to work on UI aspects sometimes.
But we also need to attach a debugger and debug the server logic sometimes.
The build server should have no problems; the setup should be simple and rely mostly on build logic held in the repo.
Tooling, and kicking off the whole build and serve process should be understandable and familiar.
It should be pretty simple to get going after a team noob clones the repo.
My initial thought would be to set up NPM and then use something like Gulp to kick off everything, including running dotnet run.
Then when running under the Visual Studio 2019 debugger, use the Task Runner Explorer to kick off the Gulp stuff but skip the dotnet run part.
(A shame there doesn't seem to be a command-line way to start VS (Code or 2019) and attach the debugger.)
Now I'm expecting to get a "primarily opinion based" SO beating, but there are general trends and ideas that went into designing all these tools, how they can all play ball together, and what the dev story looks like.
You've pretty much already described the process. However, I'll add a few things:
You don't need the dotnet run bit. Visual Studio and VS Code are both capable of debugging directly.
You can assign the gulp tasks to build tasks in Task Runner Explorer, so you really don't even need to think about running those directly. I'm not as sure on this aspect of VS Code, but I'm sure there's probably some extension to handle it, if it's not already built-in.
If you want true ease of development, the best thing you can do is use Docker. Just add a Dockerfile to each project that actually runs (i.e. not a class library) and set up the steps to build and run it there. In Visual Studio, you can right-click the project and choose Add > Docker Support, and it will actually generate a ready-made Dockerfile, though you may need to add a step or two to handle the client-side build steps. In any case, this then becomes truly click and run, with nothing to worry about. The story is even better when you use docker-compose, as then Visual Studio and VS Code can spin up your entire application stack all at once, including external dependencies such as a database, Redis instance, etc. If you haven't used Docker before, start now. It's absolutely revolutionary for development.
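For illustration, a minimal docker-compose.yml for an ASP.NET Core app plus a SQL Server database might look like this sketch (service names, ports, and the password are made up):
version: "3.8"
services:
  web:
    build: .                           # uses the Dockerfile generated by Visual Studio
    ports:
      - "5000:80"
    depends_on:
      - db
  db:
    image: mcr.microsoft.com/mssql/server:2019-latest
    environment:
      ACCEPT_EULA: "Y"
      SA_PASSWORD: "Your_password123"  # example only; use a real secret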
One note for CI/CD: as much as possible, you should add a YAML file to describe your CI/CD pipeline. Depending on the actual provider you're using for build/release, there might be some differences, so consult the relevant documentation. (Azure DevOps, for example, doesn't currently support describing release pipelines in YAML, though you can still do your build that way.) In any case, this allows you to configure all this in code and have it committed to source control.
You may consider the same for your infrastructure. Azure has ARM templates, AWS has CloudFormation, GCP has Deployment Manager. There are also third-party tools like Terraform or Ansible. All of these, in some form or fashion (usually JSON or YAML), allow you to define all the characteristics of the infrastructure you're going to deploy to and commit that to source control. This makes deployment, and things like creating new environments, a breeze.
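As an example, the build side of an azure-pipelines.yml might look like this sketch (the trigger branch and the exact build/test commands are assumptions about your project):
trigger:
  - master

pool:
  vmImage: 'windows-latest'

steps:
  - script: npm ci && npm run build
    displayName: Build front-end assets
  - script: dotnet build --configuration Release
    displayName: Build .NET projects
  - script: dotnet test --configuration Release
    displayName: Run tests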
Why does Akka.Persistence still have only beta releases on NuGet? Does it imply it is still not stable and not good for use in production applications?
In Akka.NET, in order to get out of prerelease, a package must meet multiple criteria, like:
Having a full test suite up and running. In the case of clustered plugins, this also includes multi-node tests.
Having a fixed API. There are dedicated API approval tests ensuring that no public API has been accidentally changed.
Having a battery of performance tests. While many of the plugins are ready and usually fast without them, stress tests are needed to check that none of the merged pull requests introduced performance penalties.
Having all documentation written and published.
While this is a lot, not all of these are necessary to make a plugin functional. In the case of Akka.Persistence there are minor changes pending (like the deprecation of PersistentView in favor of persistence queries), but the plugin itself is production ready and already used as such. However, the maturity of the persistence backend plugins used underneath may vary.
Akka.Persistence is stable now. You can install it by running the following command in the Package Manager Console:
Install-Package Akka.Persistence
We are looking for software to run our test cases automatically.
We want software that runs on our server (or a commercial/hosted service), automatically fetches the newest commit from GitHub, then compiles that commit of the project with CMake and runs CTest on our test cases. The results should then be visualized on a nice website.
I had a look at CDash, but the documentation is so poor that I could not even get it to fetch the latest commit from GitHub.
So my questions are:
Is there a good tutorial for CDash, other than the poor wiki page?
What software is available for running tests on new commits to GitHub, and what are its advantages and drawbacks?
In answer to your second question, Jenkins is a robust and extensible continuous integration tool that can be integrated tightly with GitHub using a plug-in (or loosely using standard Git support). It also supports CMake via a plug-in. Whether it has disadvantages that make it less useful for you depends on your organization and build process, but I've found it to be highly customizable to a wide variety of processes. I recommend taking a look at it.
There's also a third-party Ctest plugin available for Jenkins.
CDash works in tandem with CTest. If you are already using CMake, it should be fairly easy to submit your testing results to CDash. I'd recommend reading the CTest documentation:
http://www.vtk.org/Wiki/CMake_Testing_With_CTest
You can either install your own CDash server or use Kitware's hosted server at my.cdash.org. You can test your server with a sample project available at:
http://www.cdash.org/cdash/resources/software.html
We have our source code stored in Kiln/Mercurial repositories; we use MSBuild to build our product and we have Unit Tests that utilize MSTest (Visual Studio Unit Tests).
What solutions exist to implement a continuous integration machine (i.e. a build machine)?
The requirements for this are:
A build should be kicked off when necessary (i.e. code has changed in the repositories we care about)
Before the actual build, the latest version of the source code must be acquired from the repository we are building from
The build must build the entire product
The build must build all Unit Tests
The build must execute all unit tests
A summary of success/failure must be sent out after the build has finished; this must include information about the build itself but also about which Unit Tests failed and which ones succeeded.
The summary must contain which changesets were in this build that were not yet in the previous successful (!) build
The system must be configurable so that it can build from multiple branches(/Repositories).
Ideally, this system would run on a single box (our product isn't that big) without any server components.
What solutions are currently available? What are their pros/cons? From the list above, what can be done and what cannot be done?
Thanks
TeamCity, from JetBrains, the makers of ReSharper, will do all of that. You will have to configure what it specifically means to "build your product", but you can configure everything you specified with it.
The software can alert you to failed builds, even down to alerting only the person responsible for checking in code that broke the build. It even comes with handy web pages you can view to see only your own changes, which builds they've been through successfully, which ones are pending, and which ones are currently being executed.
Since it is a distributed product, you can make it grow with your organization and product. If at some point you discover that you're waiting too long for builds to complete because a lot of builds are being queued up, you can add more build agents. The build agents are basically separate client programs you install on additional machines that execute the actual build configurations.
It comes in two flavors, the professional version and the enterprise version. The professional version is free, can contain up to 20 build configurations, 20 users, and 3 build agents. The enterprise version has unlimited users and build configurations, and you can also use LDAP based security (think domain verified users.) There's also some other bonuses from the enterprise version. You can also buy licenses for more build agents if you need more than the initial 3.
Now, if "no server components" means you don't want it to act like a web server, you're going to be hard pressed to find something that will react to your commits.
However, if you mean that you don't want to have to install a server OS, then TeamCity can work on workstation versions of Windows as well. That isn't to say that you shouldn't consider setting up a proper server for it, but it will run on a workstation if that is what you require.
Our product BuildMaster does all of the things you listed by design and there is a free, somewhat limited edition (e.g. you can only have a limited number of issue tracking providers integrate with it, the database change script packaging tool isn't included in the free version, etc.) for 5 users or fewer.
What you've described is the basics of a CI Tool, so every CI Tool should be OK.
I use CruiseControl.NET, but it has bugs with Mercurial and is not very straightforward at first glance. I am nevertheless happy with it. Other tools that come to mind are Hudson, Team Build (from TFS), and TeamCity.
I have not tried the other tools, but you can see pros/cons here:
TeamCity vs CC.net
Hudson vs CC.net, Link 1 and Link 2
CC.net vs TFS
EDIT: I forgot to mention that Hudson and CruiseControl.NET are open-source projects; you can easily write plugins and patches for your install.
EDIT²: The Mercurial bugs seem to be fixed in the upcoming 1.6 version of CCNet (changes committed to the trunk this week).
There's always BuildBot, which I like (and have contributed some code to). It's fairly easy to set up and run on any OS, it handles simple tasks like the ones you describe, and it's remarkably flexible if you need it to be.
What you might find missing are the batteries-included log scrapers and/or report generators that other, more commercial CI servers come with, especially for enterprise-y frameworks.
It scales pretty well too; Mozilla and Chromium use it, amongst others.
I ask this question because I find that the community contributions to the various build engines (like MSBuild and NAnt) already include all the tasks that CI servers promote, like getting versions from source control, cleaning folders, changing build numbers, sending emails, etc.
Is it only because it "listens" to the changes that happen in the source control repository? What else am I missing?
Grzegorz Oledzki linked a good resource for finding the differences between multiple CI solutions, but it should be noted that the intent of MSBuild is to specifically turn code into binary and is used by CI software to build the source. It's true that it can do other things but most of its tasks lie closely within that realm.
In addition to what you mentioned about listening to the repo, some CI servers can do all kinds of things like¹:
multi-agent building (not just multi-core, msbuild can do that, but multi-machine)
monitoring build status
notifications (e-mail/sms/rss/whatnot)
assigning blame for broken builds
administrative features
supporting XFDs (extreme feedback devices)
automated deployment
And generally all from a handy UI.
¹ Not all CI software will have all of these features; this list is by no means exhaustive, and there is some overlap.
I believe CI (Continuous Integration) feature matrix will answer all your questions about particular CI providers and their capabilities.
Wow, there are just so many answers to this. As for what a CI system can do that a build script can't, other than listen to your version control system... Well, for starters, systems like TeamCity can let you first test your code on the build server and then check it in only if it passes all the tests.
I highly recommend using a CI server, but I prefer to keep all of the build logic in an MSBuild file and all of the "who to notify when it fails" details in the CI server. Keeping the logic in the build file helps you reproduce the build on your own machine and makes it simple to set up new projects in the CI server or to change how the CI server builds the project.