I have a framework for native iOS testing which uses Appium, TestNG, Cucumber and PicoContainer, and I'm looking for the best way to store data fetched in one step/scenario so that it can later be used in assertions in another scenario.
Example:
Scenario: user can answer a survey question
Given User answers the Survey Question with {var1}
Then the success screen is displayed

Scenario: previously answered question holds the value the user sent initially
Given user is on the reviewMyAnswers screen
Then the answer holds the value of {var1}
This is just a generic example. In reality I have a lot of data like this that needs to be validated, and I want to store the answer from the first scenario in a separate class and then retrieve it when needed by key/value pairs, somehow like this:
public class ClassWhereIStoreTestData {
    // keys: ANSWER1, ANSWER2, PRODUCT1, etc.
    private final Map<String, String> data = new HashMap<>();
    public void setValue(String key, String value) { data.put(key, value); }
    public String getValue(String key) { return data.get(key); }
}

@Given("User answers the Survey Question with {var1}")
public void userAnswersSurveyQuestion(String var1) {
    poSurvey.answerOnQuestion(var1);
    testData.setValue("ANSWER1", poSurvey.getAnswerValue());
}

@Then("the answer holds the value of {var1}")
public void answerHoldsValue(String var1) {
    assertThat(testData.getValue("ANSWER1"), equalTo(poSurvey.getAcceptedAnswerValue()));
}
I've seen tutorials (there are just a couple on Google), but I could not follow them; they all seem much more complicated than they are supposed to be.
My app is not too big and I guess I am going to use just one step-definitions file. But I still don't want to use static variables for this purpose, because I'm planning to use parallelization in the future.
Much like unit tests, scenarios should be independent from each other, and sharing data makes them depend on each other. This is a problem, especially if you want to use parallel execution later on: you can't guarantee that the scenario that consumes the data won't run at the same time as the one that produces it.
In short: you can't share data between scenarios in any way other than using static variables.
And you shouldn't have to. Rather than writing out the answers to the questionnaire step by step in a feature file and then trying to reuse that data, you can store the answers in a Map<String, String> in your step definition file and use it to fill out all questions of the questionnaire at once, in a single step. Or, if you need to complete an entire flow to get to the thing you want to test, do all of that plus the questionnaire in a single step.
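A minimal sketch of that idea in a Java step-definition file (the page objects poSurvey and poReview, their methods, and Hamcrest's assertThat/equalTo are assumptions, not from the original question):

private final Map<String, String> answers = Map.of(
        "How often do you exercise?", "Daily",
        "How many hours do you sleep?", "8");

@Given("the user has filled out the habits questionnaire")
public void fillOutQuestionnaire() {
    // One step walks through every question, so no data needs to
    // survive beyond this scenario.
    answers.forEach((question, answer) -> poSurvey.answerQuestion(question, answer));
}

@Then("the review screen shows the submitted answers")
public void reviewShowsSubmittedAnswers() {
    answers.forEach((question, answer) ->
            assertThat(poReview.answerFor(question), equalTo(answer)));
}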
Now you'll probably have a few different scenarios and different ways to progress through the application. If you specify these paths technically, you'll get a rather dry feature file. However, if you use personas to name these variations, they become more understandable.
Given Jack (the fitness enthusiast) completes the daily exercise task
When Jack fills out a questionnaire prompt about his habits
Then Jack will receive the fitness enthusiast's workout advice

Given Jill (the workaholic) completes the daily exercise task
When Jill fills out a questionnaire prompt about her habits
Then Jill will receive the workaholic's workout advice
And an extra set of reminders is scheduled to remind Jill to take an early break
Related
What's the best practice for fetching detail data in a React app when you are dealing with multiple master-detail views?
For example, suppose you have:
- a /rest/departments API which returns the list of departments
- a /rest/departments/:departmentId/employees API which returns all employees within a department.
To fetch all departments I use:
componentDidMount() {
this.props.dispatch(fetchDepartments());
}
but then I'll need logic to fetch all employees per department. Would it be a good idea to call the employee action creator for each department from the department reducer logic?
Dispatching employee actions in the render method does not look like a good idea to me.
It is surely a bad idea to call an employee action creator inside the department reducer, as reducers should be pure functions; you should do it in your fetchDepartments action creator.
Anyway, if you need to get all the employees for every department (not just the selected one), it is not ideal to make that many API calls: if possible, I would ask the backend developers for an endpoint that returns the array of departments with an embedded array of employees for each department, if the numbers aren't too big of course...
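If fetchDepartments is (or becomes) a redux-thunk action creator, the fan-out could look like this sketch (the endpoint paths come from the question; the action types and the id field are assumptions):

export const fetchDepartments = () => async (dispatch) => {
  const res = await fetch('/rest/departments');
  const departments = await res.json();
  dispatch({ type: 'DEPARTMENTS_RECEIVED', departments });

  // Fan out one employees request per department, in parallel,
  // from the action creator rather than the reducer.
  await Promise.all(departments.map(async (dep) => {
    const r = await fetch(`/rest/departments/${dep.id}/employees`);
    const employees = await r.json();
    dispatch({ type: 'EMPLOYEES_RECEIVED', departmentId: dep.id, employees });
  }));
};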
Big old "It depends"
This is something where, in the end, you will need to pick an approach and see how it works out with your specific data and user needs. It also touches on network issues, such as latency. In a very well-networked environment, such as the top-3 insurance company I was a net admin for, you can achieve super-low-latency network calls; in such a case multiple network requests behave very differently from how they would over a typical home internet connection. Even then, you have to consider a wide range of possibilities. And you ALWAYS need to consider your end goals.
(Not to get too deep into the technical aspects, but latency can fairly accurately be defined as "the time you are waiting for a network request to actually start sending data". A classic example of where this matters is online first-person-shooter gaming: you click shoot, the data is not transmitted as fast as you would like because the network is waiting to send it, and then you die. A classic example where bandwidth matters more than latency is downloading or uploading large files: if I have to wait a second or two for the data to start moving, but once it moves I can download a GB in seconds, then oh well, I'll take it.)
Currently, I have our website making multiple calls to load dynamic menus and dynamic content. It is very small data. It is done in three separate calls. On the internet. It's "ok", but I would not say that it is "good". Since users are waiting for all of it to even start, I might as well throw it all in a single network call. Also, in case two calls go ok, then the third chokes a bit, the user may start to navigate, then more menus pop in and it is not ideal. This is why regardless, you have to think about your specific needs, and what range of possible use cases may likely apply. (I am currently re-writing the entire site anyways)
As a previous (in my opinion "good") answer stated, it probably makes sense to have the whole data set shot to you in one gulp. It appears to me this is an internal, or at least commercial app, with decent network and much more importantly, no risk of losing customers because your stuff did not load super fast.
That said, if things do not work out well with that, especially if you are talking large data sets, then consider a lazy loading architecture. For example, your user cannot get to an employee until they see the departments. So it may be ok, depending on your network and large data size, to load departments, and then after it returns initiate an asynchronous load of the employee data. The employee data is now being loaded while your user browses the department names.
A big question you may want to clarify is whether any employee list data is rendered WITH the departments. In one of my cases, I have a work order system that I load after login, lazily, and when it is loaded it puts a badge on the Work Order menu showing how many orders are outstanding. Since I do not have a lot of orders, it is basically a one-second wait. No biggie, and the user does not have to wait for it to load before beginning work. If you wanted a badge per department, it may get weird: if you load by department, you could have multiple badges popping in at random. In this case it may cause user confusion, and it is probably a good choice to load everything in one large chunk. If the user has to wait anyway, it may mean one less call from a user asking "is it ok that it is doing this?". Especially with workplace software, it is more acceptable to have to wait for an initial load at the beginning of the work day.
To be clear, with all of these complications to consider, it is extremely important that you develop with coding practices as good as you can manage. That way, you can code one solution, and if it does not meet your performance or user needs, it is not a nightmare to make a change. In the general case with small data, I would just load it in one big gulp to start, and if there are problems with load times, complicate it from there. Complicating code from the beginning for no clearly needed reason is a good way to clutter your code up to the point of making it completely unwieldy to maintain.
On a third note, if you are dealing with enterprise size data sets, that is a whole different thing. Then you have to deal with pagination, and yes it gets a bit more complicated.
I'm not sure what fetchDepartments does exactly but I'd ensure the actual fetch request is executed from a Redux middleware. By doing it from middleware, you can fingerprint / cache / debounce all your requests and make a single one across the app no matter how many components request the thing.
In general, middleware is the best place to handle asynchronous side effects.
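As an illustration of that idea, a deduplicating fetch middleware might look like this sketch (the API_FETCH action shape and successType field are assumptions, not an established Redux convention):

const inflight = new Map();

// Collapses identical in-flight requests into a single network call,
// no matter how many components dispatch the same fetch.
const fetchMiddleware = (store) => (next) => (action) => {
  if (action.type !== 'API_FETCH') return next(action);

  const key = action.url; // fingerprint of the request
  if (!inflight.has(key)) {
    const promise = fetch(action.url)
      .then((res) => res.json())
      .then((payload) => {
        inflight.delete(key);
        store.dispatch({ type: action.successType, payload });
      });
    inflight.set(key, promise);
  }
  return inflight.get(key);
};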
Recently I needed to do an impact analysis for changing a DB column definition in a widely used table (like PRODUCT, USER, etc.). I find it a very time-consuming, boring and difficult task. Is there any known methodology for doing this?
The question also applies to changes in the application, the file system, the search engine, etc. At first I thought this kind of functional relationship should be pre-documented or somehow tracked, but then I realized that everything can change; it would be impossible to document it all.
Sure. One can at least technically know what code touches the DB column (reads or writes it) by computing program slices.
Methodology: find all SQL code elements in your sources. Determine which ones touch the column in question (careful: SELECT * may touch your column, so you need to know the schema). Determine which variables read or write that column. Follow those variables wherever they go, and determine the code and variables they affect; follow all those variables too (this amounts to computing a forward slice). Likewise, find the sources of the variables used to fill the column; follow them back to their code and sources, and follow those variables too (this amounts to computing a backward slice).
All the elements of the slice are potentially affecting, or affected by, the change. There may be conditions in the slice-selected code that are clearly outside the conditions expected by your new use case, and you can eliminate that code from consideration. Everything else in the slices you may have to inspect or modify to make your change.
Now, your change may affect some other code (e.g., a new place that uses the DB column, or that combines the value from the DB column with some other value). You'll want to inspect the up- and downstream slices of the code you change, too.
You can apply this process for any change you might make to the code base, not just DB columns.
Doing this manually in a big code base is not easy, and it certainly isn't quick. There is some automation for this for C and C++ code, but not much for other languages.
You can get a rough approximation by running test cases that involve your desired variable or action and inspecting the test coverage. (The approximation gets better if you also run test cases you are sure do NOT cover your desired variable or action, and eliminate all the code they cover.)
Ultimately this task cannot be fully automated or reduced to an algorithm; otherwise there would be a tool that could preview refactored changes. The better the code was written in the first place, the easier the task.
Let me explain how to reach the answer: isolation is the key. Mapping everything to object properties can help you automate your review.
I can give you an example. If you can manage to map your specific case to the below, it will save your life.
The OR/M change pattern
(Using Hibernate or Entity Framework, for example...)
A change to a database column can be previewed simply by analysing what code uses the corresponding object property. Since all DB columns are mapped to object properties, and assuming no code uses raw SQL, you are good to go for your estimates.
This is a very simple pattern for change management.
To reduce a file system, network, or data file issue to the above pattern, you need other software patterns in place. I mean, if you can reduce a complex scenario to a change in your objects' properties, you can leverage your IDE to detect the changes for you, including code that needs a slight modification to compile or needs to be rewritten entirely.
If you want to manage a change in a remote service when you initially write your software, wrap that service in an interface, so you will only have to modify its implementation.
If you want to manage a possible change in a data file format (e.g. a field length change in a positional format, or column reordering), write a service that maps that file to an object (for instance using the BeanIO parser).
If you want to manage a possible change in file system paths, design your application to use more runtime variables.
If you want to manage a possible change in cryptography algorithms, wrap them in services (e.g. HashService, CryptoService, SignService); see the sketch after this list.
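As a minimal Java sketch of the wrapping idea (HashService and Sha256HashService are illustrative names, not from the answer):

public interface HashService {
    String hash(String input);
}

public final class Sha256HashService implements HashService {
    // Callers depend only on HashService; swapping the algorithm is a
    // change to this one class, which the IDE/compiler can verify.
    @Override
    public String hash(String input) {
        try {
            var digest = java.security.MessageDigest.getInstance("SHA-256");
            byte[] bytes = digest.digest(input.getBytes(java.nio.charset.StandardCharsets.UTF_8));
            return java.util.HexFormat.of().formatHex(bytes);
        } catch (java.security.NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }
}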
If you do the above, your manual requirements review will be easier: the overall task is manual, but it can be aided by automated tools. For example, you can try changing the name of a class property and watch the compiler report the side effects.
Worst case
Obviously, if you need to change the name, type and length of a specific column in a database, in software with plain SQL hardcoded and scattered all over the code, where many tables have similar column names, and with no project documentation (I did say worst case, right?) across 10,000+ classes, you have no option other than exploring your project manually, using find tools but not relying on them.
And if you don't have a test plan, which is the document from which you can hope to originate a software test suite, it will be time to make one.
Just adding my 2 cents. I'm assuming you're working in a production environment, so there must already be some form of unit tests, integration tests and system tests written.
If yes, then a good way to validate your changes is to run all these tests again and create any new tests which might be necessary.
And to state the obvious, do not integrate your code changes into the main production code base without running these tests.
Then again, changes which worked fine in a test environment may not work in a production environment.
Use some form of source code management system such as Git, Subversion or CVS; this enables you to roll back your changes.
I read Bob Martin's brilliant article on how "Given-When-Then" can actually be compared to a finite state machine. It got me thinking: is it OK for a BDD test to have multiple "When"s?
For example:
GIVEN my system is in a defined state
WHEN an event A occurs
AND an event B occurs
AND an event C occurs
THEN my system should behave in this manner
I personally think these should be 3 different tests for good separation of intent. But other than that, are there any compelling reasons for or against this approach?
When multiple steps (WHEN) are needed before you do your actual assertion (THEN), I prefer to group them in the initial-condition part (GIVEN) and keep only one event in the WHEN section. This shows that the event that really triggers the "action" of my SUT is that one, and that the previous ones are merely steps to get there.
Your test would become:
GIVEN my system is in a defined state
AND an event A occurs
AND an event B occurs
WHEN an event C occurs
THEN my system should behave in this manner
but this is more of a personal preference I guess.
If you truly need to test that a system behaves in a particular manner under those specific conditions, it's a perfectly acceptable way to write a test.
I found that another limiting factor can be wanting to reuse a statement multiple times in an E2E testing scenario. The BDD framework of my choice (pytest-bdd) is implemented so that a given statement has a single return value, which is mapped to the then step's input parameters automagically by the name of the function bound to the given step. This design prevents reusability, which in my case I wanted: I needed to create objects and add them to a sequence object provided by another given statement. I worked around the limitation with a test fixture (which I named test_context), a Python dictionary (a hashmap), and with when statements, which don't have the same single-return-value requirement. The '(when) add object to sequence' step looked up the sequence in the context and appended the object in question to it, so I could reuse the 'add object to sequence' action multiple times.
This requirement was tricky because BDD aims to be descriptive. I could have used a single given statement with a pickled memory map of the sequence object I wanted to perform the test action on, but would that have been useful? I think not. I needed to get the sequence constructed first, and that required reusable statements. Although this is not in the BDD bible, I think it is in the end a practical and pragmatic solution to a very real E2E descriptive-testing problem.
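A minimal pytest-bdd sketch of that workaround, with illustrative step texts and objects (the original suite's names are not shown here):

import pytest
from pytest_bdd import given, when, then, parsers

@pytest.fixture
def test_context():
    # One dictionary shared by all steps of a scenario.
    return {}

@given("an empty sequence", target_fixture="sequence")
def sequence(test_context):
    test_context["sequence"] = []
    return test_context["sequence"]

@when(parsers.parse('I add object "{name}" to the sequence'))
def add_object(test_context, name):
    # Reusable: this when step can appear any number of times.
    test_context["sequence"].append({"name": name})

@then(parsers.parse("the sequence contains {count:d} objects"))
def sequence_contains(test_context, count):
    assert len(test_context["sequence"]) == count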
My app has 5 modules, and each module's data is stored in a different managed object. I created a search page where the user can type a keyword to search across all 5 modules. For each text change in the search bar, I refresh the search result table to show the matching record count along with the module name.
So for each keypress in the search bar I need to fetch the matching record counts from all 5 modules. This is simple if the app has a small number of records, but in my case the total record count is so large that fetching data from all 5 modules for the typed search word takes a long time and freezes the app.
I had no idea how to implement threading in iOS, so I tried to learn by reading Apple's Threading Programming Guide and Concurrency Programming Guide. I hope I have gained some knowledge about threads, but it gets complex when I try to implement those concepts in code.
My requirement is: I have 5 functions which should be called concurrently. Consider the functions:
function1() {...}
function2() {...}
function3() {...}
function4() {...}
function5() {...}
I want to call these 5 functions at the same time, so that none of them has to wait for the others to complete. One more point: while these functions are fetching records, if the user types or erases text in the search bar, I want to cancel/stop all the threads and issue a fresh set of 5 calls to these functions.
I'm looking for suggestions, any kind of working sample codes, examples. Any help would be greatly appreciated.
Thanks
There's a good post about Core Data and background threading here! With all the GCD stuff this is not so hard to accomplish anymore, so it is maybe a good entry point for further research on this topic.
And here is another post on this topic.
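For a rough sketch of the cancel-and-restart pattern with GCD (written in Swift; fetchCount stands in for the per-module Core Data fetch, and cancellation is cooperative):

import Foundation

final class SearchCoordinator {
    private let queue = DispatchQueue(label: "search", attributes: .concurrent)
    private var workItems: [DispatchWorkItem] = []

    // Placeholder for the per-module fetch described in the question.
    private func fetchCount(module: Int, keyword: String) -> Int { return 0 }

    // Call on every keystroke: cancels pending work, starts 5 new fetches.
    // cancel() prevents not-yet-started items from running; items already
    // executing would need to poll isCancelled to stop early.
    func search(for keyword: String, onResult: @escaping (Int, Int) -> Void) {
        workItems.forEach { $0.cancel() }
        workItems = (1...5).map { module in
            DispatchWorkItem { [weak self] in
                guard let self = self else { return }
                let count = self.fetchCount(module: module, keyword: keyword)
                DispatchQueue.main.async { onResult(module, count) }
            }
        }
        workItems.forEach { queue.async(execute: $0) }
    }
}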
I've just finished the implementation of my software system and now I have to document whether it has satisfied its requirements. What sort of information should I include and how should I lay it out?
My initial functional and non-functional requirements were in a two-column table and looked something like this:
FN-01   The system should allow users to send private messages to each other.
NFN-03  The setup/configuration form should contain sensible default values for most fields.
I would use the requirement numbering scheme already in place rather than creating a new one. I would document the following items for each requirement:
Requirement status: this can be phrased in many different ways, but you are trying to communicate whether the requirement was completed as listed, completed in a modified variant of what was listed, or simply not completed at all.
Requirement comment: explains the previously listed requirement status. This is the "why" for those items that did not fully meet their requirements.
Date completed: this is mostly for future product planning, but it also serves as a historical reference.
A couple of other points to remember:
Requirements may be reviewed by the customer, especially if the customer was the source of the requirements. Hence, this document needs to be as accurate and as informative as possible. (It's also another reason you don't change the requirement numbering scheme unless you have to.)
Your testing department (assuming you have one) should be using these documents for their test planning, and they need to know which requirements were met, which ones weren't and, most importantly, which ones changed and how.
Lastly, unless you're putting on a dog and pony show for someone you shouldn't need screenshots as part of requirement documentation. You also shouldn't need to provide "proof" of completion. The testing department will do that for you.
There are some techniques for converting your requirements into test cases, but they depend on how your requirements are documented.
If you have already done a scenario-based requirements analysis, it is very easy: just create a sequence diagram for every path of your scenario, then write and run a test -> done.
Besides, documentation created that way should also impress your lecturer.
If you don't have scenarios, you should create some from your use cases. The downside is that this is very work-intensive and should only be used in cases that justify it (a thesis, for example ;))
List the requirement numbers one by one, each with its requirement line, followed by text and/or screenshots proving the software does what it says.
Have the requirement number on the left in bold, then the requirement text indented and italicized. Align the proof text/screenshots with the requirement text, leaving the left column clear for just the requirement numbers. E.g.:
REQ-1 italicized requirement text
text discussing how the software has
fulfilled the requirements, possibly
with a picture:
-----------------------
| |
| |
| |
| |
| |
-----------------------
REQ-2 italicized requirement text
etc...
You should group the requirements into chapters or sections based on logical program areas, and start each section or chapter with a blurb about how the whole program area meets the requirements (keep it general).
I would keep it simple and add the following columns:
Delivery satisfied requirement - with a drop-down list containing Yes, No, Open
Comment - any comment regarding the delivery, such as 'need to define message size' or 'does not fully satisfy the message layout, but accepted by the client'
Date completed - when the change was delivered
Date satisfied - when the change was accepted
Since you use requirement IDs, I'm assuming they point back to the docs containing more detailed info, including layouts, screenshots, etc.
We would normally have a test plan in place in which each item can be ticked off if satisfactory. The plan would be based on the original requirements (functional or non-functional), for example:
Requirement: the user's account should be locked after three attempts to log in with an incorrect password.
Test: attempt to log in more than three times with an incorrect password. Is the user account now locked?
We would do this for each requirement and re-run the plans for each Release Candidate. Some of the tests are automated, but we have the luxury of a test team to perform manual testing as well!
Based on the results of running these test plans and of User Acceptance Testing, we would sign off the RC as correct and fit for release.
Note that sometimes we will sign off for release even if some items in the test plan do not pass; it all depends on the nature of the items!
The formal way to validate requirements is with testing - usually acceptance testing.
The idea is: for every requirement, there should be one or more tests that validate the requirement. In a formal development situation, the customer would sign off on the acceptance tests at the same time they sign off on the requirements.
Then, when the product is complete, you present the results of the acceptance tests and the customer reviews them before accepting the final product.
If you have requirements that cannot be tested, then they are probably badly written.
E.g. don't say "loading files shall be fast"; say "a file of size X shall be loaded in no more than Y milliseconds on hardware Z", or something like that.
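Such a measurable requirement maps directly onto an automated acceptance check; a minimal JUnit 5 sketch in Java (FileLoader, the test file and the 500 ms budget are hypothetical):

import static org.junit.jupiter.api.Assertions.assertTrue;
import java.nio.file.Path;
import org.junit.jupiter.api.Test;

class LoadPerformanceTest {
    private final FileLoader fileLoader = new FileLoader(); // hypothetical class under test

    @Test
    void loadsTestFileWithinTimeBudget() throws Exception {
        long start = System.nanoTime();
        fileLoader.load(Path.of("testdata/10mb.bin"));
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        // The threshold comes straight from the written requirement.
        assertTrue(elapsedMs <= 500, "load took " + elapsedMs + " ms, budget is 500 ms");
    }
}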