Historical test coverage data for a given class/method - API

We are trying to pull historical test coverage data (more of a weekly/monthly trend) for a given resource. At the project level we are able to fetch the required data using the API https://sonar-service/api/resources?resource=com.demo.project:demo-project&metrics=new_coverage&includetrends=true.
But the same approach does not work when we try to fetch the information at the class level (https://sonar-service/api/resources?resource=com.demo.project:demo-project:src/main/java/com/demoClass.java&metrics=new_coverage&includetrends=true). It simply returns the current value.
Is there any way to fetch historical coverage data for a given class/method (either by using the APIs or by querying the database)?
If this is not possible in SonarQube 5.x and below, is there a simpler way in SonarQube 6.x?
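For reference, this is roughly how we call the project-level endpoint today (Node 18+ sketch, run as an ES module). The host, project key, and credentials are placeholders taken from the example above, and since the exact shape of the trend fields depends on the 5.x response, the raw JSON is simply printed for inspection:

```js
// Rough sketch: the project-level api/resources call from the question, which
// does return trend data. Host, project key, and credentials are placeholders.
const baseUrl = 'https://sonar-service';                      // placeholder host
const resource = 'com.demo.project:demo-project';             // project key from the example

const url = `${baseUrl}/api/resources?resource=${encodeURIComponent(resource)}` +
            '&metrics=new_coverage&includetrends=true';

const auth = Buffer.from('user:password').toString('base64'); // replace with real credentials
const response = await fetch(url, { headers: { Authorization: `Basic ${auth}` } });
if (!response.ok) {
  throw new Error(`SonarQube returned HTTP ${response.status}`);
}
console.log(JSON.stringify(await response.json(), null, 2));  // inspect the trend fields here
```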

Historical data is not retained at the file level. I don't believe it ever has been.

Related

Is there a way to access raw data stored in YouTrack?

In YouTrack reports, you can view issues by two fields, using the creation date as the y-axis and any other field as the x-axis. But when you do that, as in this graph, you see the number of issues that are currently in the state shown on the x-axis. For example, if the x-axis is the state, you will see the current states of the issues created in the date intervals of the y-axis. But I also want to see the number of issues in each state in a chronological way: the states (or some other field) of the issues on May 21, 2021 (not their current states, but their states on May 21).
I know that YouTrack keeps the state changes, their dates, and a lot of other data like that, because in different reports I can see that YouTrack uses past data, but usually there is no way to download the data behind those reports.
I want to access all that raw data. My plan is to create some reports that are not available in YouTrack Reports, using R or Python. Is there a way to access that raw data, or a guide on how to access it?
The way to access raw data in YouTrack is through the REST API. For example, you can get the issue's activity data to retrieve the history of changes applied to the issue. This way you can identify how things have changed chronologically.
I can see that the Youtrack uses past data but usually there is no way to download the data of those reports.
Report data can be accessed via the API as well. The reports API endpoint is api/reports; however, it is not documented, as it may be subject to change, and in that case we can't guarantee backward compatibility. If you are fine with that, you can still use it. To see the exact request, check the network requests in the browser when loading a report.
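If it helps, here is a rough Node 18+ sketch (ES module) of the activities call mentioned above. The base URL, issue ID, and token are placeholders, and the category and field names should be double-checked against the current YouTrack REST API documentation before relying on them:

```js
// Rough sketch: pull an issue's change history from the activities endpoint.
// Base URL, issue ID, and token are placeholders; verify category/field names
// against the current YouTrack REST API docs.
const baseUrl = 'https://youtrack.example.com';     // your YouTrack instance
const issueId = 'PRJ-123';                          // hypothetical issue ID
const token = process.env.YOUTRACK_TOKEN;           // permanent token

const fields = 'id,timestamp,author(login),field(name),added(name),removed(name)';
const url = `${baseUrl}/api/issues/${issueId}/activities` +
            `?categories=CustomFieldCategory&fields=${encodeURIComponent(fields)}`;

const response = await fetch(url, {
  headers: { Authorization: `Bearer ${token}`, Accept: 'application/json' },
});
if (!response.ok) {
  throw new Error(`YouTrack returned HTTP ${response.status}`);
}

// Each activity item describes one change: which field changed, when, and which
// values were added/removed - enough to reconstruct an issue's state at a past date.
for (const item of await response.json()) {
  console.log(new Date(item.timestamp), item.field?.name, item.removed, '->', item.added);
}
```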

How to transfer data from SQL Server to MongoDB (using a mongoose schema for validation)

Goal
We have a MEAN stack application that implements a strict mongoose schema. The MEAN stack app needs to be seeded with data that originates from a SQL Server database. The app should function as expected as long as the seeded data complies with the mongoose schema.
Problem
Currently, the data transfer job is being done through the mongo CLI, which does not perform validation. Issues that have come up include Date objects being saved as strings, keys that are required by our schema being missing, entire documents missing, and so on. The dev team has lost hours of development time debugging the app and discovering these data issues.
Solution we are looking for
How can we validate the data so that it:
- throws errors,
- fails and halts the transfer,
- or gives some other indication that the data is not clean?
Disclaimer
I was not part of the data transfer process so I don't have more detail on the specifics of that process.
This is a general problem of what you might call "batch import", "extract-transform-load (ETL)", or "data store migration", disconnected from any particular tech. I'd approach it by:
- exporting the data into some portable format (e.g. CSV or JSON), and
- pushing the data into the new system through the same validation logic that will handle new data on an ongoing basis (see the sketch below).
It's often necessary to modify that logic a bit. For example, maybe your API will autogenerate timestamps for normal operation, but for data import you want to set them explicitly from the old data source. A more complicated situation is when there are constraints that cross your models/entities and need to be suspended until all the data is present.
Typically, you write your import script or system to generate a summary of how many records were processed, which ones failed, and why. Then you fix the issues and run it again on the remaining records. Repeat until you're happy.
P.S. It's a good idea to version control your import script.
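For illustration, here is a minimal sketch of such an import script, assuming the export is a JSON array and a hypothetical Item model that wraps your strict mongoose schema (adjust the names and connection string to your project):

```js
// Sketch of an import script that pushes exported records through the same
// mongoose schema the app uses and reports what failed and why.
// Assumptions: a JSON export (records.json) and a hypothetical Item model.
import fs from 'node:fs/promises';
import mongoose from 'mongoose';
import { Item } from './models/item.js';            // hypothetical model wrapping your schema

const records = JSON.parse(await fs.readFile('records.json', 'utf8'));
await mongoose.connect('mongodb://localhost:27017/demo'); // placeholder connection string

const failures = [];
let imported = 0;

for (const [index, record] of records.entries()) {
  try {
    await new Item(record).save();                  // save() runs casting + schema validation first
    imported += 1;
  } catch (err) {
    failures.push({ index, reason: err.message });  // which record failed, and why
  }
}

console.log(`Imported ${imported}/${records.length} records`);
if (failures.length > 0) {
  console.table(failures);
  process.exitCode = 1;                             // non-zero exit halts a surrounding pipeline
}

await mongoose.disconnect();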
Export to CSV and write a small script using Node; that will solve your problem. You can use the fast-csv npm package.
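For example (a sketch only; the file name and header handling are placeholders), fast-csv can stream the exported rows, and the resulting objects can then be fed into the same mongoose validation loop shown above:

```js
// Sketch: stream a CSV export with fast-csv and collect the rows for validation.
import fs from 'node:fs';
import { parse } from 'fast-csv';

const rows = [];
fs.createReadStream('export.csv')                // placeholder file name
  .pipe(parse({ headers: true }))                // first CSV line becomes the object keys
  .on('error', (err) => { throw err; })
  .on('data', (row) => rows.push(row))
  .on('end', (rowCount) => {
    console.log(`Parsed ${rowCount} rows`);      // hand `rows` to the validation/import step
  });
```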

How to pass random parameters to SilkTest Workbench or Classic Record&Play Scenario

I am new to SilkTest and I don't have any scripting background. What I need to do is record some test cases and then play them back to check my system. After getting used to it, I plan to learn scripting and dive into it, but first things first.
What I need is to pass randomly generated parameters (or parameters read randomly from a text file, or pre-defined ones) into the recordings so that every time I run the tests, different parameters are used. For example, there is a component in which I type some letters, and the component filters the results based on that text. Then I select one of the results. Now, instead of recording the same letters every time, how can I use random parameters?
Thanks
What you are looking for is called Active Data in Silk Test.
It allows enhancing your visual tests with external data, for example from an Excel file.
ActiveData testing enables you to leverage existing data in external files as input for powerful, comprehensive application testing solutions. ActiveData testing enables you to perform multiple transactions against test applications using a different set of data for each transaction without writing complicated code or compromising existing data.
You can find an introduction to Active Data in the online documentation or in the tutorial video.
I have a question: what version of Silk Test are you using, and which client are you using (Silk Test Workbench, Silk4Net, or Silk4J)? Each of these clients can receive parameters from an external source, whether from the command line or from an external data file.
You indicate that you want random data; do you really mean random data, or external data? If it is random data that you want, you will probably need to use a random number/string generator for the client you are working with (.NET code for Workbench and Silk4Net, Java code for Silk4J).

How to access results of Sonar metrics for use with applications like PowerPivot

I'm trying to run a number of applications with known failure rates through Sonar, in the hope of deciding which metrics are most valuable in determining whether a particular application will fail. Ultimately I'll be making some sort of algorithm that looks at the outputs of whatever metrics I'm using and generates a score from 1 to 100. I've put about 21 applications through Sonar, and the results have been stored in a MySQL database. I originally planned to use PowerPivot to find relationships in the data, but it seems like the formatting of the tables doesn't lend itself well to that. Other questions on Stack Overflow have told me that Sonar's tables are unformatted and that I should instead use the Web Service API to get the information. I'm unfamiliar with the API and was unsuccessful in trying to do what I wanted by looking at Sonar's API documentation.
From an answer to another question:
http://nemo.sonarsource.org/api/timemachine?resource=org.apache.cxf:cxf&format=csv&metrics=ncloc,violations_density,comment_lines_density,public_documented_api_density,duplicated_lines_density,blocker_violations,critical_violations,major_violations,minor_violations
This looks very similar to what I'd like to have, except that I'm only looking at each application once (I'm analyzing a sample of all the live applications on a grid), which means the timemachine isn't really what I'm looking for. Would it be possible to generate a similar table, except that instead of showing the stats for a particular application per date, it showed the statistics for an application and all of its classes, etc.?
If you're not familiar with the WS API, you can also create your own Sonar plugin to achieve whatever you want: it is written in Java and it will execute on every analysis you run. That way, in the code of this custom plugin, you can do whatever you want: dump the metrics you need to an output file, push them into a third-party system, etc.
Just take a look at how to write a plugin (most probably you will create a Decorator). There are also concrete examples to help you get started faster.
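If you just need the numbers in a file for PowerPivot, you can also script against the WS API directly. The sketch below (Node 18+ assumed) simply downloads the CSV produced by the timemachine URL quoted in the question and saves it to disk; the same fetch-and-save pattern can be pointed at whichever WS endpoint turns out to expose the component-level metrics you need, and which parameters are available depends on your Sonar version:

```js
// Sketch: download the CSV from the WS API call quoted in the question and save
// it so it can be imported into Excel/PowerPivot. URL and metric list are taken
// verbatim from the question; verify them against your Sonar version.
import fs from 'node:fs/promises';

const url = 'http://nemo.sonarsource.org/api/timemachine' +
            '?resource=org.apache.cxf:cxf&format=csv' +
            '&metrics=ncloc,violations_density,comment_lines_density,' +
            'public_documented_api_density,duplicated_lines_density,' +
            'blocker_violations,critical_violations,major_violations,minor_violations';

const response = await fetch(url);
if (!response.ok) {
  throw new Error(`Sonar returned HTTP ${response.status}`);
}
await fs.writeFile('sonar-metrics.csv', await response.text());
console.log('Saved sonar-metrics.csv');   // import this file into PowerPivot as usual
```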

SQL Server global data version

I wonder what the best way is to implement a global data version for a database. I want any modification made to the database to increase the version in a "global version table" by one. I need this so that when I talk to application users I know what version of the data we are talking about.
Should I store this information in a table?
Should I use triggers for this?
This version number can be stored in a configuration table or in a dedicated table (with one field).
This parameter should not be automatically updated because you are the owner of the schema and you are responsible for knowing when you need to update it. Basically, you need to update this number every time you deploy a new application package (regardless of the reason for the package: code or database change).
Each and every deployment package should take care of updating the schema version number and the database schema (if necessary).
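For illustration, a deployment package could finish with a small step like the following sketch. It assumes the mssql npm package and a hypothetical one-row dbo.SchemaVersion table, so the table, column, and connection details are placeholders:

```js
// Sketch of a deployment step that bumps a one-row version table and prints the
// new value. Table/column names and connection settings are placeholders.
import sql from 'mssql';

const pool = await sql.connect({
  server: 'localhost',
  database: 'AppDb',                       // placeholder database
  user: 'deploy',
  password: process.env.DB_PASSWORD,
  options: { trustServerCertificate: true },
});

const result = await pool.request().query(
  'UPDATE dbo.SchemaVersion SET VersionNumber = VersionNumber + 1; ' +
  'SELECT VersionNumber FROM dbo.SchemaVersion;'
);

console.log('Database is now at version', result.recordset[0].VersionNumber);
await pool.close();
```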
I tend to have a globals or settings table with various pseudo-static values stored.
- Just one row
- Many fields
This can include version numbers.
In terms of maintaining the version number you refer to, would this change when the data content changes? If so, then a trigger would be useful. If you mean for the version number to relate to table structures, etc., I'd be more inclined to manage this by hand. (Some changes may be irrelevant as far as the applications are concerned, or there may be several changes wrapped up into a single version upgrade.)
The best way to implement a "global data version for database" is via your source control system and build process. Once all the changes have been submitted and have passed testing, your build process increments the version number according to your versioning scheme.
The version number could be exposed through a stored procedure. The result of the call to the stored proc could be shown on a screen in your app, so you can avoid users accessing a table directly.
To complete the previous answers: I came across the concept of "migrations" (from the Ruby on Rails world, apparently) today, and there was already a question on SO that covered existing frameworks in .NET.
The concept is still to store DB versioning information as data in a table somewhere, but for that versioning information to be managed automatically by a framework, rather than manually by your custom deployment processes:
previous SO question with overview of options: https://stackoverflow.com/questions/313/net-migrations-engine