I'm trying to find out what exactly happens the first time Percy is run against Storybook. I've looked through the docs and can't spot a clear answer. Is it:
1. The current state is taken as the default baseline for images, without human intervention
2. A human is required to approve the current state for each image
3. It depends: you can configure Percy so it behaves as either 1 or 2 (a link to the docs would be great)
4. Something else
Help would be appreciated.
After speaking to someone at Percy the answer is:
1 - Current state is taken as the default baseline for images without human intervention
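For reference, a first run against a static Storybook build looks roughly like the following, assuming the @percy/storybook SDK is installed and a PERCY_TOKEN for the project is exported (check the Percy docs for the exact commands of your SDK version). The snapshots taken on that first run are accepted as baselines automatically; later runs are diffed against them, and only changed snapshots are queued for review:

```sh
# Assumes @percy/cli and @percy/storybook are installed and PERCY_TOKEN is set.
npm run build-storybook                  # produce the static build in ./storybook-static
npx percy storybook ./storybook-static   # first run: snapshots become the baselines
```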
I followed the example steps to create my own Android KNIFT template-matching app, like the three-dollar-bills example on the MediaPipe website. Did any of you build this and know how it really works? I can't find clear documentation.
My approach to running my own example, as suggested by MediaPipe: I have three example pics in my folder (and I did all the build steps with them) that are indeed detected and framed, but not as often and not as correctly as in the dollar-bill example (which works fine for me; every bill is detected, framed, and labeled as expected).
My labeling doesn't work properly either. Sometimes it works partly, sometimes a detection is labeled but incorrectly. What does the framework do with my pics and labels, and how can I optimize my own example?
Any help is appreciated...
Regards, Fabian
I have recently downloaded MT4 and MT5. In both of these platforms, the historical data section that should be in the dropdown of the Tools menu is missing, and I cannot find a way to access this function.
It just doesn't seem to be in the platform at all.
My intention is to carry on with my research on backtesting data.
Step 1) define the problem:
Given the text above, it seems that your MetaTrader Terminal downloads have installed, but do not let you open (Menu) -> [Tools] -> [History Center]. If this is the case, check it with the Support personnel of the Broker company you downloaded these platforms from, as some Brokers adapt the platform in ways that can include exactly the behaviour you describe.
Step 2) explain the target behaviour:
Your post mentions that your intention is to gain access to data for "research on backtesting data".
If this is the actual target, your goal can also be achieved by taking an MT4 platform from any other Broker, with or without data, and importing { PERIOD_M1 | PERIOD_M5 | ... } records via the MT4 [History Center] import interface (F2). Following the product documentation is enough; a rough sketch of preparing such an import file is shown at the end of this step.
If your quantitative modelling requires tick-based data with Market-Access-Venue fidelity, there has so far been no way for an end user to import and resample externally collected tick data into the MetaTrader Terminal platform.
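As an illustration only (the file names and the input layout below are assumptions), a small script can reshape externally collected M1 bars into the plain CSV layout the [History Center] import dialog typically accepts, i.e. date, time, open, high, low, close, volume; verify the expected column order against the import dialog of your own build:

```python
# prepare_m1_csv.py -- a rough sketch; source file name and column names are assumed.
# Target layout commonly accepted by MT4 [History Center] -> Import:
#   YYYY.MM.DD,HH:MM,Open,High,Low,Close,Volume
import csv

def convert(src_path="external_m1.csv", dst_path="EURUSD_M1_import.csv"):
    with open(src_path, newline="") as src, open(dst_path, "w", newline="") as dst:
        reader = csv.DictReader(src)  # assumed input columns: timestamp,open,high,low,close,volume
        writer = csv.writer(dst)
        for row in reader:
            date, time = row["timestamp"].split(" ")   # e.g. "2023-05-01 13:37:00"
            writer.writerow([
                date.replace("-", "."),                # 2023.05.01
                time[:5],                              # 13:37
                row["open"], row["high"], row["low"], row["close"],
                row["volume"],
            ])

if __name__ == "__main__":
    convert()
```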
Step 3) demonstrate your research efforts + steps already performed:
This community always welcomes active members, rather than "shouting" things like "Any idea?", "ASAP", or "I need this and that, help me!".
Showing the effort you have already spent on the root cause is warmly welcomed, as Stack Overflow strongly encourages members to post high-quality questions, formulated as a Minimal, Complete, Verifiable Example of code that can be re-run to reproduce the problem under test. Screenshots of UI states are fine for your type of issue, to show the blocking state and explain the context.
Say there are user stories 1-10. All tested okay -> to production. Then comes a CR with 5 more user stories. All tested okay -> to production.
Then come 5 more user stories. Tested okay -> to production. Now a user story or two from the first 1-10 breaks. Obviously the testers will have to carry the blame for this.
Developers have direct access to the QA environment's build path; any developer can drop a code file there. It is just a simple folder structure.
How do we fix this and keep 'our' hands clean?
Please also note that we do ad hoc testing due to stringent timelines.
The situation where something new breaks something old is rather common; I cannot see the problem in that as such. The QA environment is exactly the place to catch such a regression.
What I can suggest is:
1. Having Development / QA / Production environments
Try to set up a proper process: once something new has been coded and developer-tested it can go to QA, and only once it has been QA-tested can it go to Production;
2. Continuous Build Integration
It is also good to have the key features covered by unit tests and/or a suite of automated tests. One button click can then show you the general state of your app, and even whose check-in broke the build.
3. Regression testing
Ensure you have a thorough regression suite. It is run mainly to avoid exactly this kind of problem and to verify that no critical issues leak into production (a minimal sketch follows this list).
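As a rough, self-contained illustration (all function and story names below are made up), a regression suite can start as a handful of automated checks that are re-run on every build, so an old user story breaking is caught in QA rather than in production:

```python
# test_regression.py -- hypothetical example; run with `pytest test_regression.py`.
# In a real project the functions under test live in the application code;
# they are inlined here only so the sketch runs as-is.

def apply_discount(total, percent):
    """Pretend 'old' user story: a percentage discount on an order total."""
    return round(total * (1 - percent / 100.0), 2)

def add_shipping(total, flat_fee=4.99):
    """Pretend 'new' user story delivered in a later change request."""
    return round(total + flat_fee, 2)

def test_old_story_discount_still_correct():
    # Regression check covering one of the original user stories.
    assert apply_discount(100.0, 10) == 90.0

def test_new_story_does_not_break_the_old_one():
    # The new functionality must not change the behaviour of the old one.
    assert add_shipping(apply_discount(100.0, 10)) == 94.99
```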
Hope this helps a little.
There is a program out there that I can't for the life of me remember the name of. I'm not sure if this is the correct forum to ask this but thought I'd give it a shot and see if anyone can help me out.
Program Description:
This program is for writing and running test automation. It allows you to write the steps of the test in English.
Example (scenario: posting a Stack Overflow question):
Enter title 'XXX'
Enter Description
Enter tag 'X'
Click Post button
After entering the test steps as English sentences you can run the test. The test will come back yellow, saying that not all of the steps are automated yet, and it will create stub methods for each of the four steps mentioned above.
You can then go through the work of automating the different steps and running the tests.
Sorry if this sounds vague, but I remember looking at this piece of software last year and can't seem to find it or remember the name anymore.
Any ideas?
There's the Ruby Gem Cucumber.
That's Cucumber.
It will warn you if you have undefined scenarios or steps, and it will suggest some code snippets for those steps, as shown in this "10 min tutorial".
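For reference, the scenario described in the question would look roughly like this as a Cucumber feature file (the step wording is illustrative, not taken from any particular project):

```gherkin
Feature: Posting a Stack Overflow question

  Scenario: Post a new question
    Given I am on the "Ask Question" page
    When I enter the title "XXX"
    And I enter the description
    And I enter the tag "X"
    And I click the Post button
    Then my question should be published
```

Running `cucumber` before any step definitions exist reports the scenario as undefined (the "yellow" state described in the question) and prints suggested step-definition stubs, which you then fill in with the actual automation code.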
I have a manual test result with various iterations. Some iterations passed the test but others didn't. I need to create and link a bug to a specific test result iteration (the one that didn't pass), but when I choose to create and link a bug it always defaults to the first iteration. I can't see how to choose the specific iteration I want to link the bug to.
Screenshots (before and after the click) omitted.
On http://msdn.microsoft.com/en-us/library/dd380693.aspx they show how to link a bug to a test result. However, they don't specify how to link a bug to a particular iteration.
Thanks in advance!
You'd select it in the Iteration dropdown under Classification, above.
Apparently there is no way to do this once the test run has finished. It's a usability bug that Microsoft needs to fix; at least that's what a Microsoft Testing Partner says on the MSDN Forums. The only way to do this is from the Test Runner itself.