Why is akka.persistence still in beta? Is it stable? - akka.net

Why does akka.persistence still have only beta releases on NuGet? Does this imply it is still not stable and not suitable for use in production applications?

In Akka.NET, in order for a package to get out of prerelease, it must meet several criteria, such as:
Having a full test suite up and running. In the case of clustered plugins, this also includes multi-node tests.
Having a fixed API. There are dedicated API approval tests ensuring that no public API has been accidentally changed.
Having a battery of performance tests. While many plugins are ready and usually fast without them, stress tests are needed to verify that none of the merged pull requests introduced a performance penalty.
Having all documentation written and published.
While this is a lot, not all of it is necessary for a plugin to be functional. In the case of Akka.Persistence there are some minor changes pending (like the deprecation of PersistentView in favor of persistence queries), but the plugin itself is production ready and is already used as such. However, the maturity of the persistence backend plugins used underneath may vary.

Akka.Persistence is stable now. You can install it by running the following command in the Package Manager Console:
Install-Package Akka.Persistence
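For a sense of what the API looks like, here is a minimal event-sourced actor. This is just a sketch; the actor, persistence ID, and message types are made up:

using Akka.Persistence;

// A minimal persistent counter: events are journaled, then applied to state,
// so the count survives actor restarts.
public class CounterActor : ReceivePersistentActor
{
    private int _count;

    // Uniquely identifies this actor's event stream in the journal.
    public override string PersistenceId => "counter-1";

    public CounterActor()
    {
        // On recovery, replay journaled events to rebuild in-memory state.
        Recover<int>(delta => _count += delta);

        // On a command, persist the event first, then update state in the callback.
        Command<string>(msg => msg == "increment",
            _ => Persist(1, delta => _count += delta));
    }
}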

Related

Versioning APIs during internal development

In our team we have a number of APIs specified using the OpenAPI Specification (formerly Swagger). We use Maven and OpenAPI Generator to generate code, build, and publish the artifact to our local Nexus. We build our code on TeamCity. The artifact is given the version specified in Maven's pom.xml file.
During development we only use snapshot versions, that is, versions which can be overwritten and will be cleaned up. This is the opposite of release versions, which cannot be overwritten and need administrative privileges to clean up. The reason for this is that a developer usually changes a little bit at a time, which is much more convenient with snapshot versions. This also makes cleaning up outdated, unreleased artifacts much easier.
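(For illustration, the difference is just the version suffix in pom.xml; the version number here is made up:)

<!-- Snapshot: mutable, may be overwritten by each CI build -->
<version>1.4.0-SNAPSHOT</version>

<!-- Release: immutable once published -->
<version>1.4.0</version>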
Our problem is that from time to time a developer makes API changes but forgets to set a new version. This works fine locally, but when the code is built on TeamCity the changed API overwrites the artifact of an older version. A developer not working on this branch will then experience a compile error, because the code does not match the API artifact being used.
What do others do? Is there a best practice? Preferably with standard tools. We have tried many things and nothing works well. At the same time, this issue is so basic that someone must have a good solution - or at least enough experience to point to the least bad solution.

How do I run Security Code Scan in a GitLab pipeline?

I am trying to integrate Security Code Scan with GitLab CI. I read the documentation but still can't understand exactly how to write the commands for SCS in the .yml file (source file: warning SCS[rule id]: [warning description] [project_file]). My GitLab is hosted on a Windows 10 machine without a container. The project is .NET Framework 4.6.2 and I use Visual Studio 2019. I have already added the SCS package from NuGet. Also, I have read about Fortify but I am stuck on the same problem.
Per the GitLab docs, you really just add this include to your main .gitlab-ci.yml file.
include:
- template: Security/SAST.gitlab-ci.yml
The template defines a job that uses a custom Docker image and a Go wrapper around the Security Code Scan package. It dynamically adds the SCS package to discovered projects, runs a build, and captures and parses the output in order to produce the security report.
It does things this way because the Security Code Scan project runs as an analyzer at build time... it's not a normal CLI application, although there are mostly ignored issues asking for this option.
Update: You could just add the Security Code Scan package to your project(s)
$ dotnet add package SecurityCodeScan --version 3.5.3
And run a normal build in your GitLab pipeline, reading the warnings that are produced in the pipeline logs.
build:
  stage: build
  image: mcr.microsoft.com/dotnet/core/sdk:3.1
  script:
    - dotnet build
You could enable TreatWarningsAsErrors to break the build, too.
dotnet build /p:TreatWarningsAsErrors=true
<PropertyGroup>
  <TreatWarningsAsErrors>true</TreatWarningsAsErrors>
</PropertyGroup>
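If you only want the Security Code Scan diagnostics, rather than every compiler warning, to fail the build, MSBuild can promote specific warning IDs instead. A sketch, with illustrative rule IDs:

<PropertyGroup>
  <!-- Promote only selected SCS rules to errors; the IDs here are examples. -->
  <WarningsAsErrors>SCS0005;SCS0018</WarningsAsErrors>
</PropertyGroup>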
You won't get a nice MR-attached report this way, just pipeline logs. The interactive pipeline report doesn't appear unless you have a Gold plan, anyway.
With GitLab 13.9 (February 2021), this will work for multiple projects too:
Multi-project support for .NET SAST scanning
GitLab security scans automatically detect code language and run appropriate analyzers.
With monorepos, microservices, and multi-project repositories, more than one project can exist within a single GitLab repository.
Previously our .NET SAST tool could only detect single projects in repositories.
With this release, our .NET SAST analyzer can now intelligently detect multiple solution files (.sln) in .NET projects and report vulnerabilities across them.
This will make it easier and more streamlined for users with multi-project .NET repos to leverage our SAST scanning.
See Documentation and Issue.
This also becomes more visible (still with GitLab 13.9, February 2021):
Security Configuration page for all users
With SAST and Secret Detection available to all GitLab customers, we wanted to improve the experience for developers enabling available security scans. We have had a guided configuration experience for Ultimate users and have now made a simplified version of this experience available to all users. This new configuration experience makes it easier for developers to understand which security scans are available to them, find relevant documentation, and provide simple enablement tools. With this initial release, we support a button to create a merge request for SAST. In a future release, we will add additional buttons to easily enable other scan types.
-- Security Configuration page for all users
See Documentation and Epic.
It does evolve with GitLab 13.11 (April 2021)
GitLab + Semgrep: upgrading SAST for the future
GitLab SAST historically has been powered by over a dozen open-source static analysis security analyzers. These analyzers have proactively identified millions of vulnerabilities for developers using GitLab every month.
Each of these analyzers is language-specific and has different technology approaches to scanning. These differences produce overhead for updating, managing, and maintaining additional features we build on top of these tools, and they create confusion for anyone attempting to debug.
The GitLab Static Analysis team is continuously evaluating new security analyzers. We have been impressed by a relatively new tool from the development team at r2c called Semgrep.
It’s a fast, open-source, static analysis tool for finding bugs and enforcing code standards.
Semgrep’s rules look like the code you are searching for; this means you can write your own rules without having to understand abstract syntax trees (ASTs) or wrestle with regexes.
Semgrep’s flexible rule syntax is ideal for streamlining GitLab’s Custom Rulesets feature for extending and modifying detection rules, a popular request from GitLab SAST customers. Semgrep also has a growing open-source registry of 1,000+ community rules.
We are in the process of transitioning many of our lint-based SAST analyzers to Semgrep.
This transition will help increase stability, performance, rule coverage, and allow GitLab customers access to Semgrep’s community rules and additional custom ruleset capabilities that we will be adding in the future. We have enjoyed working with the r2c team and we cannot wait to transition more of our analyzers to Semgrep. You can read more in our transition epic, or try out our first experimental Semgrep analyzers for JavaScript, TypeScript, and Python.
(So not yet for .NET, but soon.)
We are excited about what this transition means for the future of GitLab SAST and the larger Semgrep community.
GitLab will be contributing to the Semgrep open-source project including additional rules to ensure coverage matches or exceeds our existing analyzers.
See Documentation and Epic.
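To give a feel for the "rules look like the code" claim quoted above, a Semgrep rule is just YAML with a code-shaped pattern. This one is a made-up example flagging eval() in Python:

rules:
  - id: avoid-eval
    pattern: eval(...)
    message: Avoid eval() on untrusted input
    languages: [python]
    severity: ERROR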
The same GitLab 13.11 release announces:
OpenShift Support for SAST and Secret Detection
Experimental Semgrep Analyzer for Python, JavaScript, and TypeScript
Support for Generic Kotlin SAST Scanning
See GitLab 13.12 (May 2021)
Semgrep SAST Analyzer for JavaScript, TypeScript, and Python
In GitLab 13.11 we announced an experimental release of Semgrep, a new SAST analyzer for JavaScript, TypeScript, and Python. This transition has been developed in partnership with r2c, the team behind Semgrep who share our mission to help developers write more secure code. After an extensive beta with hundreds of customers trying out our experimental analyzer, we’re ready to start the transition to Semgrep.
With 13.12, we’re updating our managed SAST.gitlab-ci.yml CI template to automatically run this new analyzer alongside our existing JavaScript and TypeScript analyzer, ESLint. In a future release we will fully disable ESLint, but for now it will work in unison with Semgrep. We’ve done work to deduplicate findings, so you should not notice any difference in findings. If you include our SAST.gitlab-ci.yml, you don’t need to do anything to start benefiting from the Semgrep analyzer; however, if you override or manage your own SAST CI configuration, you should update your CI configuration.
Both GitLab and r2c are excited about the future of this transition to bring you fast and wide coverage Static Application Security Testing (SAST). We’ll continue to expand the Semgrep analyzer through new security detection rules as well as expanding coverage to other languages. We’ve created a feedback issue where you can share your experience with this transition or ask questions.
See Documentation and Epic.
See GitLab 15.0 (May 2022)
Semgrep-based SAST scanning available for early adoption
You can now switch to Semgrep-based scanning for many languages in GitLab SAST.
Semgrep-based scanning brings significantly faster analysis, reduced usage of CI minutes, and more customizable scanning rules compared to existing language-specific analyzers.
As of GitLab 15.0, it supports C, Go, Java, JavaScript, Python, and TypeScript.
In a future release, we’ll change GitLab SAST to use only Semgrep-based scanning by default for supported languages, and we’ll remove the language-specific analyzers that also scan them. (This change was previously scheduled for GitLab 15.0; work to complete it is tracked in this deprecation issue.)
You can now choose to disable deprecated language-specific analyzers early and use Semgrep-based scanning instead before we change the default behavior. We’ve updated documentation to explain the transition, including guidance on when to make the change in your pipelines.
See Documentation and Issue.
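If you want to opt in early, the usual way to disable a deprecated analyzer in favor of Semgrep is the SAST_EXCLUDED_ANALYZERS variable. A sketch; check the current analyzer names against the docs:

include:
  - template: Security/SAST.gitlab-ci.yml

variables:
  # Skip the deprecated ESLint-based analyzer; Semgrep covers JS/TS.
  SAST_EXCLUDED_ANALYZERS: "eslint"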

Speeding up builds with Jenkins workflow plugin on CloudBees

Documentation on this is quite rare, but are there any tips on how to speed up builds on CloudBees, especially when using the Workflow plugin?
Usually, when using the very same machine for subsequent builds, you can make use of caches or reuse previous computations.
Some steps, like downloading dependencies with SBT, Maven or Gradle, the initial npm install, or the Gemfile cache, are quite expensive in time and computation but are great candidates for caching.
On CloudBees you will most probably get a random (new) node for your builds, so there's no cache.
We are also using Snap-CI - there we have a persistent CACHE_DIR that allows that. Is there anything similar on CloudBees?
If you are referring to DEV@cloud, CloudBees’ hosted Jenkins, there is a cached workspace system, though it is not used for every build. (It depends on details of hardware allocation in the cloud.) If you run a number of builds, over time you should see most of them picking up an existing workspace, and thus being able to use Maven local repository caches, etc.
Use of the Workflow plugin as opposed to freestyle or other project types should not matter in this regard.

Recommendations for Continuous integration for Mercurial/Kiln + MSBuild + MSTest

We have our source code stored in Kiln/Mercurial repositories; we use MSBuild to build our product and we have Unit Tests that utilize MSTest (Visual Studio Unit Tests).
What solutions exist to implement a continuous integration machine (i.e. a build machine)?
The requirements for this are:
A build should be kicked off when necessary (i.e. code has changed in the repositories we care about)
Before the actual build, the latest version of the source code must be acquired from the repository we are building from
The build must build the entire product
The build must build all Unit Tests
The build must execute all unit tests
A summary of success/failure must be sent out after the build has finished; this must include information about the build itself but also about which Unit Tests failed and which ones succeeded.
The summary must contain which changesets were in this build that were not yet in the previous successful (!) build
The system must be configurable so that it can build from multiple branches(/Repositories).
Ideally, this system would run on a single box (our product isn't that big) without any server components.
What solutions are currently available? What are their pros/cons? From the list above, what can be done and what cannot be done?
Thanks
TeamCity, from JetBrains, the makers of ReSharper, will do all of that. You will have to configure what it specifically means to "build your product", but you can set up everything you specified with it.
The software can alert you to failed builds, even down to alerting only the person responsible for checking in code that broke the build. It even comes with handy web pages you can view to see only your own changes, which builds they've been through successfully, which ones are pending, and which ones are currently being executed.
Since it is a distributed product, you can make it grow with your organization and product. If at some point you discover that you're waiting for the build to complete too much, because a lot of builds are being queued up, you can add more build agents. The build agents are basically separate client programs you install on additional machines, that execute the actual build configurations.
It comes in two flavors, the professional version and the enterprise version. The professional version is free, can contain up to 20 build configurations, 20 users, and 3 build agents. The enterprise version has unlimited users and build configurations, and you can also use LDAP based security (think domain verified users.) There's also some other bonuses from the enterprise version. You can also buy licenses for more build agents if you need more than the initial 3.
Now, if "no server components" means you don't want it to act like a web server, you're going to be hard pressed to find something that will react to your commits.
However, if you mean that you don't want to have to install a server OS, then TeamCity can work on workstation versions of Windows as well. That isn't to say that you shouldn't consider setting up a proper server for it, but it will run on a workstation if that is what you require.
Our product BuildMaster does all of the things you listed by design and there is a free, somewhat limited edition (e.g. you can only have a limited number of issue tracking providers integrate with it, the database change script packaging tool isn't included in the free version, etc.) for 5 users or fewer.
What you've described are the basics of a CI tool, so any CI tool should be OK.
I use CruiseControl.NET but it is buggy with Mercurial and is not very straightforward at first glance. I am nevertheless happy with it. Other tools that come to mind are Hudson, Team Build (from TFS) and TeamCity.
I have not tried other tools, but you can see pros/cons here:
TeamCity vs CC.net
Hudson vs CC.net, Link 1 and Link 2
CC.net vs TFS
EDIT: I forgot to mention that Hudson and CruiseControl.NET are open source projects; you can easily write plugins and patches for your install.
EDIT²: The Mercurial bugs seem to be fixed in the upcoming 1.6 version of CCNet (changes committed to the trunk this week).
There's always BuildBot, which I like (and have contributed some code to). It's fairly easy to set up and run on any OS, it handles simple tasks like the ones you describe, and it's remarkably flexible if you need it.
What you might find missing are the batteries-included log scrapers and/or report generators that other, more commercial CI servers come with, especially for enterprise-y frameworks.
It scales pretty well too; Mozilla and Chromium use it, amongst others.

Differences between CruiseControl (original) and CruiseControl.NET

Are there any differences between the original CruiseControl and the .NET port? I've compared the two, but can't find any big differences apart from the language they are developed in. I want to use one of them for (automated) testing of web applications using Selenium and Subversion, and perhaps Groovy, but I don't know which to choose.
[edit]
After looking at CC and Hudson, I've chosen Hudson for its simplicity; it already has plugins to run Groovy scripts and Selenium as well.
Choose me, choose me! (I work on the original CruiseControl.)
I've never used CC.NET but from what I know I agree that they are pretty comparable. Probably the most important difference is cross-platform vs. Windows only.
Now I wonder how long until someone comes by and says they're both crap and you should try Hudson? ;)
(And of course there are lots of other choices...)
CruiseControl.NET (cc.net henceforth) has build queues (http://confluence.public.thoughtworks.org/display/CCNET/Project+Configuration+Block), which allow you to serialize builds that depend on a certain build order; see the sketch below. I'm in the process of emulating this behavior in the Java version of CruiseControl, but the functionality doesn't map one to one. The reason I'm moving from the .NET to the Java version at all, however, is that the .NET version core-dumps under Mono (cc.net nightly build and Mono nightly build as of two months ago). The fault lies with Mono's thread handling, but it defeats any attempt to get cc.net up and running.
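For reference, the queueing is configured on the project blocks in ccnet.config. A sketch, with made-up project names:

<cruisecontrol>
  <!-- Both projects share queue Q1, so their builds never run concurrently. -->
  <project name="Core" queue="Q1" queuePriority="1">
    <!-- source control, tasks, publishers... -->
  </project>
  <project name="Installer" queue="Q1" queuePriority="2">
    <!-- runs after Core when both are queued -->
  </project>
</cruisecontrol>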
The documentation on this can be tricky to find if you don't notice the version numbers that the configuration examples/documentation adhere to (confluence.public.thoughtworks.org has the updated configuration documentation whereas ccnet.sourceforge.net does not; I know ccnet.sourceforge.net is most likely a dead site, but if you're not carefully reading the date stamps on every page you visit, this may bite you).
Furthermore, the source control blocks for CVS and SVN in cc.net are more granular and feature-rich than their counterparts in the Java version, but this has not been a problem in my work. The Java version is also easy to extend/modify with regard to plugin behavior, but you would really rather see this kind of work going upstream instead of forking.
I'm fairly impressed with both the Java version and the fork in .NET (modulo the Mono runtime behavior), but you really do not want to try any of the other forks of CruiseControl. I've had peripheral experience with Hudson, and the features were just not compelling enough to steer me away from CruiseControl. Hudson has a (somewhat coloured) comparison of Hudson and CruiseControl (Java) at http://hudson.gotdns.com/wiki/display/HUDSON/Home
A viable alternative is the Python-implemented BuildBot (http://buildbot.net/trac). It does not have fancy GUI dashboards and the setup is somewhat more command-line-bound, but if you're doing distributed builds, it's very easy to set up and get running.
I think for you it will come down to operating system: the original can run on *nix, and the .NET version runs on Windows.
There are other automated build utilities that can do this as well, such as TeamCity in the Windows space, and cruisecontrol.rb in the Ruby world.
Also, there is a PowerShell-based build utility called psake that can poll Subversion and perform tasks.
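To give an idea of what that looks like, here is a minimal psake build script for an MSBuild + MSTest product. The task names, solution path, and test assembly path are made up:

# default.ps1 -- run with: Invoke-psake
Task Default -Depends Build, Test

Task Build {
    # Build the whole product in Release configuration.
    exec { msbuild .\MyProduct.sln /p:Configuration=Release }
}

Task Test -Depends Build {
    # Execute the Visual Studio unit tests.
    exec { mstest /testcontainer:.\Tests\bin\Release\Tests.dll }
}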