Let's say this is my .feature file; behind every step a method is bound:
Given The system administrator sends a list of Tracks
And The system is at CreateCWRFile method
And The system sends "name", "caeID" & "ver" to generate HDR Line
Then The system generates GRH Line
Then The system generates track Revision Line
Then The system generates track SPU Line
Then The system generates track SPT Line
Then The system generates and verifies SWT, PWR & SWR Lines for each writer of the track
Let's say my test is at line number 5, i.e. Step #5, and on some condition I want to go back to Step #2. How do I do that?
At the risk of repeating Specflow step definition mapping with wildcard attribute, I think you are struggling because of what you are trying to achieve.
SpecFlow is good at describing:
the state your system should be in, i.e. Given
the operation you want to perform, i.e. When
and what the state should look like afterwards, i.e. Then
So it may be that your example above has mixed up some of the Thens and Whens.
As nemesv points out in the comment, you probably should have more than one scenario to handle the branching. Have a look at How to run gherkin scenario multiple times for an example.
Your only other option would be to build your scenario from multiple steps and test that you are in the right state each time, e.g.:
Given the traffic light is red
When the light changes
Then the light should be amber
When the light changes
Then the light should be green
When the light changes
Then the light should be amber
When the light changes
Then the light should be red
Good luck :-)
Consider for example this modified Simple TCP sample program:
How can I display the current state of the program like
Wait for Connection
Connected
Connection terminated
on the front panel, depending on where the "data flow" currently is.
The easiest way to do this is to place a string indicator on your front panel and write messages to a local variable of this indicator at each point where you want to see a status update.
You need to keep in mind how LabVIEW dataflow works: code will execute as soon as the data it depends on becomes available. Sometimes you can use existing structures to enforce this - for example, if you put a string constant inside your loop and wire it to a local variable terminal outside the loop, the write will only happen after the loop exits. Sometimes you may need to enforce that dataflow artificially, for example by placing your operation inside a sequence frame and connecting a wire to the border of the sequence: then what's inside the sequence will only happen after data arrives on that wire. (This is about the only thing you should use a sequence for!)
This method is not guaranteed to be deterministic, but it's usually good enough for giving a simple status indication to the user.
A better version of the above would be to send the status messages on a queue or notifier which you read, and update the status indicator, in a separate loop. The queue and notifier write functions have error terminals which can help you to enforce sequence. A notifier is like the local variable in that you will only see the most recent update; a queue keeps all the data you write to it in the right order so would be more suitable if you want to log all the updates to a scrolling list or log file. With this solution you could add more features: for example the read loop could add a timestamp in front of each message so you could see how recent it was.
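To make the pattern concrete in text form, here is a rough analogue of that queue-based producer/consumer status idea, sketched in C# purely for illustration (in LabVIEW this would be two parallel loops wired to a queue or notifier):

// Illustration only: a work loop enqueues status messages, and a separate display loop
// dequeues them in order and timestamps them, analogous to the LabVIEW queue pattern.
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class StatusDemo
{
    static void Main()
    {
        var statusQueue = new BlockingCollection<string>();

        // Display loop: consumes messages in order and adds a timestamp.
        var display = Task.Run(() =>
        {
            foreach (var message in statusQueue.GetConsumingEnumerable())
                Console.WriteLine("{0:HH:mm:ss}  {1}", DateTime.Now, message);
        });

        // Work loop: reports its state as it goes (like the TCP example's states).
        statusQueue.Add("Wait for Connection");
        statusQueue.Add("Connected");
        statusQueue.Add("Connection terminated");

        statusQueue.CompleteAdding();   // lets the display loop finish
        display.Wait();
    }
}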
A really good solution to this general problem is to use a design pattern based on a state machine. Now your program flow is clearly organised into different states and it's very easy to add in functionality like sending a different message from each state. There are good examples and project templates for these design patterns included with recent versions of LabVIEW.
You should be able to find more information on any of the terms in bold in the LabVIEW help or on the NI website.
I have an exam tomorrow and, to be honest, I still don't know what steps I should go through to design a given scenario.
For example, when you see a scenario like this:
Every weekday morning, the database is backed up and then it is checked to see whether the “Account Defaulter” table has new records. If no new records are found, then the process should check the CRM system to see whether new returns have been filed. If new returns exist, then register all defaulting accounts and customers. If the defaulting client codes have not been previously advised, produce another table of defaulting accounts and send to account management. All of this must be completed by 2:30 pm, if it is not, then an alert should be sent to the supervisor. Once the new defaulting account report has been completed, check the CRM system to see whether new returns have been filed. If new returns have been filed, reconcile with the existing account defaulters table. This must be completed by 4:00 pm otherwise a supervisor should be sent a message.
What is your approach to modelling this? I am not asking for the answer to this particular scenario, I am asking for the method. Do you design sentence by sentence, or do you try to figure out the big picture first and then find the sub-processes?
There are no exact steps. Use imagination, Luke! :)
You can take these funny instructions as a starting point, but they were made by dummies for dummies.
Commonly you should sketch the process steps and process participants schematically on a sheet of paper and try to build your model from that. There is no other way: only brainstorming.
When BPMN comes to mind, one thinks of people together in a conference room discussing how the business does things (creating what you call scenarios and translating to business processes) and drawing boxes and lines on a whiteboard.
Since 2012, when BPMN 2.0 appeared as an Object Management Group (OMG) specification, we have had a very comprehensive 532-page PDF with pretty much all the information one needs to create process diagrams.
Still, in addition to reading that file, one can also find many BPMN examples of common modelling problems, patterns, books and research papers which help to understand how certain scenarios come to life.
Generally speaking, we first identify who takes part in the process, to understand who the actors are. Afterwards, we work out where they get their input (if they get any), what they do with it (if they do anything) and where they forward it once they have completed their work (if they forward it at all). This lets us see that each actor has specific tasks following a specific flow of work, and we can then draw the diagram more accurately.
Then, once a clean and simple diagram is built, one can validate it by visualizing (in real life or not) the users/actors executing the activities.
I am trying to make a VB 2012 project where a console application combines with a form application. My main goal is to be able to type commands on a console and change properties or displays on the forms.
The idea came when I wanted to teach my students how to use what they know, together with what they see, to manipulate programs.
I wanted to make a program with a plot that puts the students in a frame of mind that gives them a purpose for the task.
I wanted to make the project with three form windows:
1: A door access screen saying "Door 342" with an image of a door and a colour dot. When the right command is entered in the console, the colour dot changes from red (closed/locked) to green (open/unlocked).
2: A security alarm system, running basically the same way but with a fail-safe: if something is entered wrong, an error message pops up or a timer starts counting down before the alarm triggers.
3: A pressure pipe system, where they see tubes and, if the alarm triggers, the pressure starts sealing doors, so a colour strip follows the pipe to each door, which would have to be monitored and shut down if triggered.
I can write form applications and console applications, but I can't seem to get a combined one.
The best I have come up with is a button on a form that writes to a console, with no feedback coming back from the console.
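Roughly the kind of thing I am aiming for is something like this (a sketch in C#, untested, the VB.NET version being analogous; DoorForm and SetDoorState are made-up names, and AllocConsole is the Win32 call that I believe gives a forms app its own console):

// Sketch: a WinForms app that also owns a console window. Console commands are read
// on a background thread and marshalled onto the UI thread to update the form.
// DoorForm and its SetDoorState(bool) method are hypothetical.
using System;
using System.Runtime.InteropServices;
using System.Threading;
using System.Windows.Forms;

static class Program
{
    // Win32 call that gives a WinForms process its own console window.
    [DllImport("kernel32.dll")]
    private static extern bool AllocConsole();

    [STAThread]
    static void Main()
    {
        AllocConsole();
        var door = new DoorForm();

        var reader = new Thread(() =>
        {
            while (true)
            {
                Console.Write("> ");
                var command = Console.ReadLine();
                if (command == "unlock 342")
                    door.Invoke((Action)(() => door.SetDoorState(true)));   // e.g. turn the dot green
                else if (command == "exit")
                    door.Invoke((Action)(() => door.Close()));
            }
        }) { IsBackground = true };
        reader.Start();   // assumes commands are only typed once the form is up

        Application.Run(door);   // shows the form and pumps its message loop
    }
}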
Please help, as this is something I would like to make so that I can teach and have fun at the same time with the higher-level students.
Many thanks for your help in the matter.
I am new to software testing. Currently I need to test a medium-sized web application. We have just refactored our codebase and added a lot of event logging logic to the existing code. The event logging code writes to both the Windows event log and a SQL database table.
There are about 200 events. What approach should I take to test/verify this refactoring effectively and efficiently?
Thanks.
I would be tempted to implement unit tests for each of the events to make sure when an event occurs the correct information is passed into your event logging logic.
This would mean that you can trigger one event on the deployed site and verify the data is written to the database and event log, and have an acceptable level of confidence that the remaining events will be recorded correctly.
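If the logging sits behind an interface, one such unit test could look roughly like this (a sketch using Moq; IEventLogger, OrderService and the event names are assumptions for illustration, not taken from your code):

// Sketch: verify that the code under test passes the correct information to the
// logging logic when an event occurs. All names here are hypothetical.
using Moq;
using NUnit.Framework;

public interface IEventLogger
{
    void LogEvent(string eventName, string details);
}

public class OrderService
{
    private readonly IEventLogger _logger;
    public OrderService(IEventLogger logger) { _logger = logger; }

    public void CancelOrder(int orderId)
    {
        // ... cancellation logic would go here ...
        _logger.LogEvent("OrderCancelled", "Order " + orderId + " was cancelled");
    }
}

[TestFixture]
public class OrderServiceLoggingTests
{
    [Test]
    public void CancellingAnOrder_LogsOrderCancelledEvent()
    {
        var logger = new Mock<IEventLogger>();
        var service = new OrderService(logger.Object);

        service.CancelOrder(42);

        logger.Verify(l => l.LogEvent("OrderCancelled", It.Is<string>(m => m.Contains("42"))),
                      Times.Once());
    }
}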
If unit testing isn't an option then you will need to verify each event manually. I would alternate between checking the database and the event log, as there should be little risk of this area failing; that would mean 200 tests rather than 400.
You could also partition the application into sensible sections and trigger a few events for each section to give you a reasonable level of confidence in the application.
The approach you take will really be determined by how long you have to test, what the cost would be if an event didn't get logged, and how well developed the logging logic is.
Hope this helps
I would have added tests before you did the refactoring; you don't know where you may have broken it already :).
You say that it logs to the Event Viewer and the DB. I hope you have exposed the logging feature as an interface (see the sketch after this list) so that you can:
Extend it to log to some other device if needed
Also make mocking a lot easier
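For illustration, something along these lines (a sketch; the class names and the EventLog table are my assumptions, not from the question):

// Sketch: logging behind an interface, with one implementation per target and a
// composite that fans a single call out to both the Windows event log and the database.
public interface IEventLogger
{
    void LogEvent(string eventName, string details);
}

public class WindowsEventLogLogger : IEventLogger
{
    public void LogEvent(string eventName, string details)
    {
        // Assumes the "MyWebApp" event source has already been registered.
        System.Diagnostics.EventLog.WriteEntry("MyWebApp", eventName + ": " + details);
    }
}

public class SqlEventLogger : IEventLogger
{
    private readonly string _connectionString;
    public SqlEventLogger(string connectionString) { _connectionString = connectionString; }

    public void LogEvent(string eventName, string details)
    {
        using (var conn = new System.Data.SqlClient.SqlConnection(_connectionString))
        using (var cmd = new System.Data.SqlClient.SqlCommand(
            "INSERT INTO EventLog (EventName, Details, LoggedAt) VALUES (@name, @details, GETDATE())", conn))
        {
            cmd.Parameters.AddWithValue("@name", eventName);
            cmd.Parameters.AddWithValue("@details", details);
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}

public class CompositeEventLogger : IEventLogger
{
    private readonly IEventLogger[] _targets;
    public CompositeEventLogger(params IEventLogger[] targets) { _targets = targets; }

    public void LogEvent(string eventName, string details)
    {
        foreach (var target in _targets)
            target.LogEvent(eventName, details);
    }
}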
If you have 200 events to test, that's not going to be easy, to be honest. I don't think you can escape creating an equal number of tests for your 200 events.
I would do it this way:
I would search for all the places where my logging interface is used, note all the classes, and
start with the critical paths/classes first (that way you at least cover the critical ones);
or you could start from the end, i.e. note down all possible combinations of logs you are getting, maybe pointing at stale data so that you know that if the input is the same, the output should be the same too. Then every time, regression test your new binaries against this data and you should get a similar number/level of logs.
This shouldn't be too difficult.
Pick a free automated web test tool like Watir (Ruby) or WatiN (.NET), or VS UI Test if you have it.
Create tests that cover the areas of the web application you expect/need to fire events. Examine the SQL DB after each test to see which events did fire.
If those event streams are correct for the test, add a step to the test to verify that exactly that event stream was created in the DB.
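As a rough illustration, one such test could look like this (a sketch only; the URL, page element names and the EventLog table/columns are assumptions, not from the original app):

// Sketch: drive one page with WatiN, then assert the expected event row exists in the DB.
using System.Data.SqlClient;
using NUnit.Framework;
using WatiN.Core;

[TestFixture, RequiresSTA]   // WatiN's IE automation needs an STA thread
public class OrderPageEventTests
{
    [Test]
    public void SubmittingAnOrder_WritesOrderSubmittedEvent()
    {
        using (var browser = new IE("http://test-server/orders/new"))
        {
            browser.TextField(Find.ByName("customer")).TypeText("Test Customer");
            browser.Button(Find.ById("submit")).Click();
        }

        using (var conn = new SqlConnection("Server=test-db;Database=AppLogs;Integrated Security=true"))
        {
            conn.Open();
            var cmd = new SqlCommand(
                "SELECT COUNT(*) FROM EventLog " +
                "WHERE EventName = 'OrderSubmitted' AND LoggedAt > DATEADD(minute, -1, GETDATE())",
                conn);
            Assert.That((int)cmd.ExecuteScalar(), Is.EqualTo(1));
        }
    }
}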
This will give you a set of tests that will validate the eventing from any portion of your web site in a repeatable fashion.
The efficient & effective part of this approach is that it allows you to create only as many tests as you need to verify the app. Also, you do not need to recreate a unit test approach with one test per event.
Automating the tests will allow you to re-execute them without additional effort, and this will really add up over the long haul.
This approach can also be taken with manual testing, but it will be tricky to get consistent & repeatable results, and re-testing will take nearly as long each time, as the testing uncovers defects that need to be fixed.
Note: while this will be the most effective & efficient way, it will not be exhaustive. There will likely be edge cases that get missed, but that can be said of nearly any test approach. Just add test cases until you get the coverage you need.
Hope this helps,
Chris
I'm trying to configure a set of build configurations in TeamCity 6 and am trying to model a specific requirement in the cleanest possible manner enabled by TeamCity.
I have a set of acceptance tests (around 4-8 suites of tests grouped by the functional area of the system they pertain to) that I wish to run in parallel (I'll model them as build configurations so they can be distributed across a set of agents).
From my initial research, it seems that having an AcceptanceTests meta-build config that pulls in the set of individual acceptance test configs via snapshot dependencies should do the trick. Then all I have to do is say that my Commit build config should trigger AcceptanceTests and they'll all get pulled in. So, let's say I also have AcceptanceSuiteA, AcceptanceSuiteB and AcceptanceSuiteC.
So far, so good (I know I could also turn it around the other way and have the Commit config trigger AcceptanceSuiteA, AcceptanceSuiteB and AcceptanceSuiteC - the problem there is that I need to manually aggregate the results to determine the overall success of the acceptance tests as a whole).
The complicating bit is that while AcceptanceSuiteC just needs some Commit artifacts and can then live on its own, AcceptanceSuiteA and AcceptanceSuiteB need to:
DeploySite (let's say it takes 2 minutes and I can't afford to spin up a completely isolated one just for this run)
Run tests against the deployed site
The problem is that I need to be able to ensure that:
The website only gets configured once
The website does not get clobbered while the two suites are running
If I set up DeploySite as a build config and have AcceptanceSuiteA and AcceptanceSuiteB pull it in as a snapshot dependency, AFAICT:
a subsequent or parallel run of AcceptanceSuiteB could trigger another DeploySite, which would clobber the deployment that AcceptanceSuiteA and/or AcceptanceSuiteB are in the middle of using.
While I can say Limit the number of simultaneously running builds to force only one DeploySite to happen at a time, I need it to run one at a time and not while the dependent pieces are still running.
Is there a way in TeamCity to model such a hierarchy?
EDIT: Ideas:-
A crap solution would be for DeploySite to set an 'in use' flag marker and then have the AcceptanceTests config clear that flag [after AcceptanceSuiteA and AcceptanceSuiteB have completed]. The problem then becomes one of having the next DeploySite down the pipeline wait until said gate has been opened again (doing a blocking wait within the build doesn't feel right - I want it to be flagged as 'not yet started' rather than looking like it's taking a long time to do something). However, this sort of 'stuff a flag over here and have this bit check it' is exactly the kind of mutable state / flakiness smell I'm trying to get away from.
EDIT 2: If I could programmatically alter the agent configuration, I could set Agent Requirements to require InUse=false, then set the flag when a deploy starts and clear it after the tests have run.
It seems you should go and look on the JetBrains DevNet and the YouTrack tracker first, and remember to use the magic word clobber in your search.
Then you install the groovy-plug plugin and use the StartBuildPrecondition facility:
To use the feature, add system.locks.readLock. or system.locks.writeLock. property to the build configuration.
The build with writeLock will only start when there are no builds running with read or write locks of the same name.
The build with readLock will only start when there are no builds running with write lock of the same name.
Use the locks described there to manage the fact that the dependent configs 'read' and the DeploySite config 'writes' the shared item.
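For this scenario, that would presumably mean something like the following (my assumption as to the exact property names, given the open question in the EDIT below):
On DeploySite: a system property named system.locks.writeLock.DeployedSite
On AcceptanceSuiteA and AcceptanceSuiteB: a system property named system.locks.readLock.DeployedSite
(DeployedSite being the shared lock name, so the deploy cannot start while either suite still holds its read lock.)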
(This is not a fully productised solution, hence the tracker item remains open.)
EDIT: And I still don't know whether the lock should go under Build Parameters | System Properties, or what the exact name format should be: is it locks.writeLock.MYLOCKNAME (i.e., showing up in the config with the reference syntax %system.locks.writeLock.MYLOCKNAME%)?
Other puzzlers are: how does one give builds triggered by the completion of a writeLock build read access? Does the lock get dropped until the next one picks it up (which would allow another writer in), or is it necessary to have something queue up the parent and child dependencies at the same time?