Multiple login tests on mobile app with UFT

I am trying to test the Login feature of my Android app with multiple user-password entries that I have in an Excel sheet. I have already been able to import that data from Excel successfully and run the same test with each row (using the "Run on all rows" option), but now I am facing a problem that I am not able to solve.
After the test runs with one row, once it starts over with a new row, it does not restart the app; it starts at the same point where the previous iteration finished. I think this is not the expected behaviour in general, since most GUI testing tools restart the app when testing a feature with parametrization (usually data from Excel). Anyway, I "fixed" this by logging out in my app.
In this case there was an "easy solution": logging out. But what if I were testing a different feature where I cannot simply log out? In those cases I would have to navigate back or do something else that might fail and has nothing to do with the feature I am testing.
I am not sure whether I am using the right approach. Is there a good general solution for this issue?

I would suggest the following two ways to solve your problem if you cannot simply use logout as the last step.
Use the App.Launch method. You can add one line at the top of your script, for example:
Device("iPhone 7").App("myApp").Launch NotInstall, Restart
Here the device and the app can be test objects in the object repository, or they can be identified using descriptive programming, e.g. Device("id:=123456").
Check the options in Test Settings. In recent UFT versions (12.53 or later), check whether Test Settings offers an option to restart or reinstall the app between iterations.
Thanks

Related

Workaround to seeing data factory v2 debug runs

I realise that a debug run is normally not visible in the Data Factory v2 UI after closing the browser window; unfortunately, I had to restart my machine unexpectedly and it's a long-running pipeline.
I thought maybe the runs might be available via powershell, but I haven't had any luck.
The pipeline is likely still running.
We do have external logging, however ideally I'd like to see how long each activity is taking as I'm load testing.
And more importantly I do not want to do another run until I'm sure it's finished.... notably I'll run it from a trigger next time (just in case!).
EDIT:
It looks like a sandbox ID is used, which is stored in the browser's local storage, and there appear to be undocumented API endpoints for gathering info using the sandbox ID. But there doesn't appear to be a way of getting old sandbox IDs, so I'm probably out of luck.
There is a "View all debug runs" button.
Taken from Microsoft documentation:
To view a historical view of debug runs or see a list of all active debug runs, you can go into the Monitor experience.

Wrong Auto Login with Automation Anywhere

I have a problem trying to schedule a task with Automation Anywhere 10.5.
I've activated the Auto Login option in the AAE Client, but when the computer is locked and the task is supposed to start, the computer doesn't log in and the task runs as if in the background, without unlocking the computer.
The problem with this is that we can only run tasks that would normally run in the background (tasks that don't need to activate windows and perform operations like clicks or object cloning).
Example: I schedule a task that shows a simple message box, but apparently it doesn't run. Then, when I log in to the computer, I can see the message box active.
Do you know how to solve it?
My team has hit a few snags on this previously. If you haven't solved this yet - please utilize the following link:
http://www.automationanywhere.com/techsupport/Customers/Support/Utility/Autologin_Diagnose_Fix_Utility.zip
I received this from an AA employee as well as a best practice document. The utility tool should alert you of any practices you're not currently adhering to and resolve them if it can.
Please let me know if you need additional assistance.

Wanting to get rid of the Test Hub in TFS and integrate with Test Rail

Just wondering if there is a way to integrate TFS with TestRail so as to replace (get rid of entirely) the Test Hub within TFS and use TestRail to record Test Plans?
My concern with removing the Test Hub is whether TestRail can still reference the IDs of Bugs and Stories within TFS, and vice versa.
You currently cannot remove the standard hubs in Visual Studio Team Services / TFS or replace them with something else. You can enable an extension that either adds its own functionality under the existing hub as a separate tab or adds another TestRail hub at the top level (if such an extension exists), or you can write your own. Extensions currently cannot leave their sandbox to overwrite standard functionality.
There is nothing preventing an external tool from keeping track of work item numbers, so as for the second part of your question, whether removing the hub would break that kind of integration: that's unlikely.
If you are on TFS, you could try creating a custom process template that doesn't have the Test Case, Shared Step, Test Suite and Test Plan work item types; this will likely cripple the existing Test Hub functionality. In the on-premises version you can also customize the files on disk. I've never tried it, but you could probably hack the Test Hub away. That would be a totally unsupported scenario, though.

IndexedDB in Metro, domain changed after running WACK tool?

I am trying to get this IndexedDB stuff working in a Metro (Windows 8) app, using JS.
I thought I was good, but then I ran the WACK tool a couple of times, just to see if I ran into any issues.
After these tests, the IndexedDB.open call no longer opens my database (which has 7 entries in it); instead it fires onupgradeneeded and gives me a blank (new) database (since I create an object store in the onupgradeneeded handler).
I did not change my version number, I did not change the database name. So I am guessing the applications domain somehow changed during the WACK tests.
Does anyone know how to get my database domain back?
One of the things the WACK test probably does is a fresh install of the app, to check that everything goes fine. So when the app is installed for the first time, you have to provide for the creation of the database; this is done in the onupgradeneeded event.
I think you forgot to provide this, and that is why it creates a new blank database instead of a new database with the required structure.
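As a rough sketch of that pattern (the database name "myDatabase" and store name "entries" are just placeholders, not names from the question), the open call can create every store it needs inside onupgradeneeded, so a fresh install still ends up with the expected structure:
// Hedged sketch: "myDatabase" and "entries" are placeholder names.
var request = indexedDB.open("myDatabase", 1);

request.onupgradeneeded = function (event) {
    // Fires on a fresh install (or when the version number increases):
    // create the object stores the app expects here.
    var db = event.target.result;
    if (!db.objectStoreNames.contains("entries")) {
        db.createObjectStore("entries", { keyPath: "id" });
    }
};

request.onsuccess = function (event) {
    var db = event.target.result;
    // any previously stored data (if the database survived) is available here
};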

How to run code during the "firstrun" of a Xulrunner Application

I am writing a custom xulrunner-based app and I wish to have some files deployed in the user profile the first time the application is run.
I placed the files in my application's defaults/profile directory but they did not get copied to user's profile during the first run of the application.
Should I write some additional code or this should happen automatically?
The one thing that does get copied for sure is the application's default preferences.
Is there a "standard" way offered by Firefox or some of the many mozilla applications?
Any link to some reading will be helpful.
Any hint is valuable.
Thanks in advance.
Unfortunately the standard way of running first-run code is to use the pref system to determine whether you have already done something. There are a few gotchas though:
Make sure this code only runs once. If your firstrun code is in an overlay or the main browser window, it can be run multiple times (once per window).
After you run the code and set the pref, make sure you flush the prefs, since prefs are normally only written to disk when the application closes:
Components.classes['@mozilla.org/preferences-service;1']
    .getService(Components.interfaces.nsIPrefService)
    .savePrefFile(null);   // write prefs to disk immediately instead of waiting for shutdown
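Putting both gotchas together, a minimal sketch of the whole pattern might look like this (the pref name extensions.myapp.firstRunDone is a hypothetical placeholder, not something XULRunner defines):
// Minimal sketch of the first-run pattern described above.
// "extensions.myapp.firstRunDone" is a hypothetical pref name; choose your own.
var prefService = Components.classes['@mozilla.org/preferences-service;1']
                            .getService(Components.interfaces.nsIPrefService);
var branch = prefService.getBranch("extensions.myapp.");

var firstRunDone = false;
try {
    firstRunDone = branch.getBoolPref("firstRunDone");
} catch (e) {
    // pref does not exist yet, so this is the first run
}

if (!firstRunDone) {
    // ... copy your files into the user profile here ...
    branch.setBoolPref("firstRunDone", true);
    prefService.savePrefFile(null);   // flush immediately, don't wait for shutdown
}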
You could also use the preferences system in concert with querying for your application's or extension's version number. When the version changes, call your function. That would give you the flexibility to call the function again later if you want, but only at a version change.
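A hedged sketch of that variant, assuming a hypothetical stored pref named extensions.myapp.lastRunVersion and reading the running application's version from nsIXULAppInfo:
// Sketch: run the code whenever the stored version differs from the current one.
var appInfo = Components.classes['@mozilla.org/xre/app-info;1']
                        .getService(Components.interfaces.nsIXULAppInfo);
var prefService = Components.classes['@mozilla.org/preferences-service;1']
                            .getService(Components.interfaces.nsIPrefService);
var branch = prefService.getBranch("extensions.myapp.");

var lastVersion = "";
try {
    lastVersion = branch.getCharPref("lastRunVersion");
} catch (e) {
    // pref missing: the code has never run before
}

if (lastVersion != appInfo.version) {
    // ... first-run or upgrade code goes here ...
    branch.setCharPref("lastRunVersion", appInfo.version);
    prefService.savePrefFile(null);
}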