While raising a defect in software testing, I came across the word "Artifact". What does it actually mean? Can anybody explain? I'm really frustrated trying to find its actual meaning on Google.
It usually means something like "a file created during testing". For example, the log file is an artifact. If your tests create temporary files, those are artifacts. If your tests download images, those are artifacts.
Artifacts can mean other things besides files (which is why we say "artifact" rather than "file"). For example, an artifact could be a row added to a database.
Depending on the context, it could also mean files you need in order to perform the test (e.g. "for this test you need the following artifacts...").
In short, an artifact is something created or used by the test suite.
The artifacts produced for, let's say, a given release are all the different "units of product" that are available. An example makes this easier to understand: imagine you have to test a single product, but this product comes to you in two versions (an .msi file for Windows and a .dmg for Mac) plus three upgrade scripts for the different databases you could use as a backend. Then you have five artifacts in your hands that you should test.
Artifacts are also the documents used to carry out different activities, for example the SRS, FS, test plan, test cycle plan, and test cases of different types.
They are used for different purposes. Besides the above-mentioned artifacts there are design documents, ERDs, DFDs, etc.
These documents are needed to carry out different tasks during different stages of the SDLC.
I'm using the Karate testing framework to validate some APIs, and I would like to know if there is any way to generate a test coverage report by taking a predefined list of expected scenarios and validating it against the scenarios that actually exist within the Karate feature files.
Imagine that you agree to run 50 scenarios with your client, but in reality you have only developed 20 scenarios within your feature files (spread across more than one folder).
I wonder if there is any (easy) way to:
list ALL the scenarios developed in ALL the feature files available
match them against an external (CSV, Excel, JSON...) list of scenarios (the ones agreed with the client) so that a coverage percentage can be calculated
Here's a bare bones implementation of a coverage report based on comparing karate.log to an openapi/swagger json spec.
https://github.com/ericdriggs/karate-test-utils#karate-coverage-report
Endpoint coverage is a useful metric which can be auto-generated based on an auto-generated spec. It also lets you exclude paths which aren't in scope for coverage, e.g. actuator or ping.
Will publish jar soonish.
Open an issue if you'd like any enhancements.
MIT licensed, so feel free to repurpose it.
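If all you need is the scenario-list comparison described in the question, a minimal in-house sketch could also work. The code below assumes the feature files live under src/test/java and that the agreed scenario titles sit in a plain-text file called agreed-scenarios.txt (one title per line); both paths are just placeholders:

    import java.io.IOException;
    import java.io.UncheckedIOException;
    import java.nio.file.*;
    import java.util.*;
    import java.util.stream.*;

    public class ScenarioCoverage {
        public static void main(String[] args) throws IOException {
            // Scenario titles agreed with the client, one per line (placeholder file name).
            Set<String> agreed = new HashSet<>(Files.readAllLines(Paths.get("agreed-scenarios.txt")));

            // Collect every "Scenario:" / "Scenario Outline:" title from all feature files.
            Set<String> implemented;
            try (Stream<Path> paths = Files.walk(Paths.get("src/test/java"))) {
                implemented = paths
                        .filter(p -> p.toString().endsWith(".feature"))
                        .flatMap(p -> {
                            try { return Files.readAllLines(p).stream(); }
                            catch (IOException e) { throw new UncheckedIOException(e); }
                        })
                        .map(String::trim)
                        .filter(l -> l.startsWith("Scenario:") || l.startsWith("Scenario Outline:"))
                        .map(l -> l.substring(l.indexOf(':') + 1).trim())
                        .collect(Collectors.toSet());
            }

            long covered = agreed.stream().filter(implemented::contains).count();
            System.out.printf("Covered %d of %d agreed scenarios (%.1f%%)%n",
                    covered, agreed.size(), 100.0 * covered / agreed.size());
        }
    }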
I read in the documentation that we can run multiple feature files by adding new lines with different classpaths in the simulation class. Is there a way we can run multiple feature files belonging to the same package, just like we run them in FeatureRunner files?
No; I personally think that would introduce maintainability issues. We will consider PRs, though, if anyone wants to contribute this.
If you really want this behavior, you should be able to write a small bit of Java code that scans a folder, loops over the feature files, and builds the Gatling "scenarios".
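For example, something along these lines (a rough sketch; the folder layout is illustrative, and each returned path would then be handed to the Gatling scenario builder of your simulation, e.g. karateFeature(...) in a karate-gatling simulation):

    import java.io.IOException;
    import java.nio.file.*;
    import java.util.List;
    import java.util.stream.*;

    public class FeatureScanner {
        // Scans a folder recursively and returns "classpath:" style paths for every
        // *.feature file found, so the caller can loop over them and build scenarios.
        public static List<String> featurePaths(String folder) throws IOException {
            Path classpathRoot = Paths.get("src/test/resources").toAbsolutePath();
            try (Stream<Path> paths = Files.walk(Paths.get(folder))) {
                return paths
                        .filter(p -> p.toString().endsWith(".feature"))
                        .map(p -> "classpath:" + classpathRoot.relativize(p.toAbsolutePath())
                                .toString().replace('\\', '/'))
                        .collect(Collectors.toList());
            }
        }
    }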
I have tried many options, but none of them work the way I need. I created feature files as I needed them over time, so my feature files are not in any particular order.
I want to run the feature files in the order I choose, like:
file1.feature
file2.feature
file3.feature
file4.feature
So I want to run them in this sequence: file3.feature -> file4.feature -> file1.feature.
I have tried the #tag and #feature options in the JUnit test runner, but that only runs 3 and 4 in sequence and can't run 1.
So can you tell me how to run feature files in the order I want?
Cucumber picks up the feature files in alphabetical order from the folder given in the features parameter of the CucumberOptions. So one option would be to rename your feature files so that their alphabetical order matches the order you want.
After the feature files in the initial folder are read, the sub-folders in that location are picked up alphabetically and the feature files inside them are read. So you can place a file you want to run later into a sub-folder.
Having said all this, it is not a very good idea to have any dependency between tests which requires a particular sequence to be maintained.
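With cucumber-jvm you can also try listing the feature files explicitly in the runner. A minimal sketch (the paths are illustrative, and it assumes your cucumber-jvm version honours the order in which the paths are listed, which is worth verifying before relying on it):

    import io.cucumber.junit.Cucumber;
    import io.cucumber.junit.CucumberOptions;
    import org.junit.runner.RunWith;

    @RunWith(Cucumber.class)
    @CucumberOptions(features = {
            // listed in the order we would like them to run (may not be guaranteed)
            "src/test/resources/features/file3.feature",
            "src/test/resources/features/file4.feature",
            "src/test/resources/features/file1.feature"
    })
    public class OrderedRunner {
    }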
I hit the same roadblock. Have you solved this problem?
I did some testing; using --tags @XXX to control the sequence is useless, it only selects scenarios within a feature file.
So far I have to do it like this: cucumber features/c.feature features/a.feature features/b.feature. I think this is a little better than renaming the feature files.
But you may say, "What if there are a lot of feature files...?" In my project (RoR), I moved that statement into ./config/cucumber.yml.
In cucumber.yml I define a profile, test_dev: features/c.feature features/a.feature features/b.feature. After that, I just use cucumber -p test_dev and the feature files execute in order.
If you have a better way, please share it with us.
I have some Cucumber scenarios, for which I created the following files:
create_extended_search.feature
activate_extended_search.feature
edit_extended_search.feature
delete_extended_search.feature
Within these files, I have several scenarios.
Three of the files use the same background, and it would be nice to be able to place it into one file (e.g. support/backgrounds.rb) and then reference it from the feature files.
Is this possible somehow? Thanks.
I believe you would have to create a step that is made up of the steps in your current background. Then call that step in the background for each feature.
There's no notion of "including" feature files in Cucumber. As Justin points out, you can create a single step representing what you want as a background, and call that where appropriate. An alternative is to use a Before hook to perform certain tasks in advance of scenarios that you mark with a specific tag.
Personally, I'd treat this problem as something of a red flag, and start asking if my feature files were split up in the best way possible. Frequently if I find myself bemoaning the inability to include other feature files, or conversely, wishing I could exclude certain scenarios from running my background, it's a very strong sign that my feature files are too finely sliced up, or I'm trying to cram unrelated functionality together and need to split it up further.
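As a rough sketch of the tagged-hook approach, here it is in cucumber-jvm flavour (the Ruby hook API is analogous; the tag name and setup steps are made up):

    import io.cucumber.java.Before;

    public class ExtendedSearchHooks {
        // Runs before every scenario tagged @extended_search, doing the setup
        // that the duplicated Background used to do.
        @Before("@extended_search")
        public void prepareExtendedSearch() {
            // create the common preconditions here (users, saved searches, etc.)
        }
    }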
The organization I currently work for is moving into the whole CMMI world of documenting everything. I was assigned (along with one other individual) the title of Configuration Manager. Congratulations to me, right?
Part of the duties is to perform, on a regular basis (they are still defining "regular basis"; it will be either quarterly or monthly), a physical configuration audit. This is basically a check of the source code versions deployed in production against what we believe to be the source code versions in production.
Our project is a relatively small web application written in Java. The file types we work with are Java, JSP, XML, property files, and SQL packages.
The problem I have (and have expressed, but it seems to be ignored) is: how am I supposed to physically log on to the production server and verify file versions? And even if I could, it would take a ridiculous amount of time.
The file versions are not even currently in the files (i.e. in a comment or something). It was also suggested that we place visible version numbers on each screen that is visible to the users. I thought this was ridiculous too, since the screens themselves represent only a small fraction of the code we maintain.
The tools we currently use are NetBeans for our IDE and Serena Dimensions as our versioning tool.
I am specifically looking for ideas on how to perform this audit in a more automated way, one that is both accurate and not time-consuming.
My current idea is to add a comment to the top of each file containing that file's version number, plus a script that runs when a production build is created and generates an XML file (or something similar) containing the file name and version of each file in the build. Then when I need to do an audit, I go to the production server, grab the XML file with the info, compare it programmatically to what we believe to be in production, and output a report.
Any better ideas? I know this has to have been done before, and it seems crazy to me that I have not found any other resources.
You could compute a SHA1 hash of the source files on the production server, and compare that hash value to the versions stored in source control. If you can find the same hash in source control, then you know what version is in production. If you can't find the same hash in source control, then there are untracked modifications in production and your new job title is justified. :)
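A rough sketch of that idea, assuming the deployed source tree is readable on the production box (the paths and file-type filter are illustrative):

    import java.io.IOException;
    import java.nio.file.*;
    import java.security.MessageDigest;
    import java.util.stream.Stream;

    public class ProductionManifest {
        public static void main(String[] args) throws IOException {
            // Walk the deployed tree and print "<sha1>  <relative path>" for each file.
            // The same listing generated from the tagged baseline in version control
            // can then be diffed against this output to spot untracked changes.
            Path root = Paths.get(args.length > 0 ? args[0] : ".");
            try (Stream<Path> files = Files.walk(root)) {
                files.filter(Files::isRegularFile)
                     .filter(p -> p.toString().matches(".*\\.(java|jsp|xml|properties|sql)$"))
                     .forEach(p -> System.out.println(sha1(p) + "  " + root.relativize(p)));
            }
        }

        private static String sha1(Path file) {
            try {
                MessageDigest md = MessageDigest.getInstance("SHA-1");
                StringBuilder hex = new StringBuilder();
                for (byte b : md.digest(Files.readAllBytes(file))) {
                    hex.append(String.format("%02x", b));
                }
                return hex.toString();
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        }
    }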
The typical trap organizations fall into with the CMMI is trying to overdo everything. If I could suggest anything, it'd be to start small and only do what you need. So consider any problems that you may have had in the CM area previously.
The CMMI describes WHAT an organisation should do, but leaves the HOW up to you. The CMMI specification, chapter 2, is well worth a read - it describes the required, expected, and informative components of the specification: basically the goals are required, the practices are expected, and everything else is informative. This means there is only a small part of the specification which a CMMI appraiser can directly demand - the goals. At the practice level, it is permissible to have either the practices as described or acceptable alternatives to them.
In the case of configuration audits, goal SG3 is "Integrity of baselines is established and maintained". SP3.2 says "Perform configuration audits to maintain integrity of the configuration baselines." There is nothing stated here about how often these are done, or how long they may take.
In my previous organisation, FCA/PCA was usually only done as part of the product release process, and we used ClearCase as the versioning tool, with labels applied across the codebase to define baselines. We didn't have version numbers in all the source files, nor did we have version numbers on all the product's screens - the CM activity was doing the right thing and was backed up by audits, and this was never an issue in any CMMI appraisal.
We could use the deltas between labels to look at which files had changed, and perform diffs to see the actual code changes. An important part of the process is being able to link those changes back to a requirement, bug report, or whatever the reason was that initiated the change.
Our auditing did use scripts to automate the process, but these were in-house developed scripts specific to ClearCase - basically they would list all the files, their versions in the CM system, and the baseline/config item to which they belonged.
Can't you use your source control for this? If you deploy a version and tag your source control with that deployment, you can then verify against the source control system.