Power Automate Desktop - Creating a UI Selector From an Accessibility Insights for Windows Report

I am using the Accessibility Insights for Windows tool to determine the most effective selector for Power Automate Desktop. I have a Pulse VPN app that I can launch, but I cannot click the Connect button. I can use the UI automation recorder to click the button, but after a reboot the selector no longer works. While I understand web and jQuery selectors, I don't know how to write UI element selectors. Any insight is appreciated.

Unfortunately, there's no generally accepted Windows analog to web selectors. WinAppDriver includes a concept called XPath that attempts to solve this problem, but it's tightly bound to a specific version and language of an app, and only a few platforms (mainly WinAppDriver) use it.
Accessibility Insights support for XPath was requested in this issue. We triaged it and decided that we'd leave it for a community contribution. Nobody ever picked it up, and it was eventually closed after someone observed that WinAppDriver was no longer in active development.
Sorry I don't have a better answer for you.
Dave Tryon, Accessibility Insights team
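That said, a practical workaround that often helps is to match the control by stable UIA properties (AutomationId, Name, ControlType) rather than a recorded element path, which tends to survive reboots and window re-creation. A rough Python/pywinauto sketch of the idea; the window title pattern and button name are assumptions about the Pulse client, not confirmed values:

```python
# Sketch: match a control by stable UIA properties instead of a recorded path.
# ".*Pulse.*" and "Connect" are assumptions about the Pulse VPN client.
from pywinauto import Desktop

# Attach to the already-running VPN client via the UIA backend
vpn = Desktop(backend="uia").window(title_re=".*Pulse.*")

# Find the button by its Name and ControlType rather than its position in the tree
connect = vpn.child_window(title="Connect", control_type="Button")
connect.wait("visible", timeout=30)
connect.click_input()
```

Inspecting the button in Accessibility Insights for Windows will show which of these properties (AutomationId is usually the most stable) are actually populated, and the same property names can then be used in a Power Automate Desktop custom selector.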

Related

Does Karate.robot support CI/CD with Bamboo, and does it need a visible desktop like Sikuli to run the script?

I used image locators to locate some desktop elements, and the following questions came to mind regarding Karate.robot.
Can a desktop script run on a VM in a CI/CD pipeline? Does it need a physical desktop?
Previously I worked with Sikuli, which needed a physical desktop; if I minimized the window, the script stopped working. Is it the same with Karate.robot?
As long as you can install Karate on the VM, it should be fine. Yes, having to set up an RDP session can get complicated. You will need to spend some time to figure this out, but we know of teams that have done this. It is also an opportunity for you to contribute some reference material, and hopefully code, back to the community. For example, getting different resolutions to work can be a challenge.
If you use the Element.invoke() method (not documented) on elements that support that automation method (e.g. buttons), you don't need the UI to be visible.
All available information can be found in this answer: https://stackoverflow.com/a/65187737/143475
If you have more questions, the best option is to figure this out on your own - and report your findings back here for the benefit of others.
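For what it's worth, the invoke idea isn't specific to Karate: the underlying UI Automation Invoke pattern fires a control's default action without simulating mouse movement, which is why the UI doesn't need to be in the foreground (a fully disconnected session with no desktop at all is still a separate problem). A rough Python/pywinauto analogue, with the window and button titles made up for illustration:

```python
# Sketch: trigger a button via the UIA Invoke pattern instead of a mouse click.
# "My App" and "Submit" are placeholder titles, not from the original question.
from pywinauto import Application

# Attach to a running application by its window title
app = Application(backend="uia").connect(title="My App")
button = app.window(title="My App").child_window(title="Submit", control_type="Button")

# Fire the control's default action directly - no cursor movement,
# so the element does not need to be on-screen under the mouse
button.wrapper_object().iface_invoke.Invoke()
```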

What language (or framework) can I use to automate some specific tasks on a website?

I want to write a program to do some specific tasks on a website (i.e. auto-order, auto-login, and posting some comments), but I don't know what language or framework can help me do that.
If you know, please help me.
It sounds like you want to automate some user actions on a website. For that, Selenium/WebDriver is the best library/framework if you want to do it on desktop, or Appium if you want to do it on mobile.
Seeing as you are a beginner, I would also recommend using Python, as it's not only easy to get started with but also one of the better languages these days for anything related to automation.
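For example, a login flow in Selenium with Python looks roughly like this; the URL and field names below are placeholders, not a real site:

```python
# Sketch of automating a login form with Selenium (Python).
# The URL, field names, and credentials are placeholders.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com/login")  # hypothetical login page

driver.find_element(By.NAME, "username").send_keys("my_user")
driver.find_element(By.NAME, "password").send_keys("my_password")
driver.find_element(By.CSS_SELECTOR, "button[type='submit']").click()

# ... continue with ordering / posting comments here ...
driver.quit()
```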
I actually have multiple playlists that teach:
beginner to senior concepts in Selenium
beginner Appium concepts
how to build an advanced framework in Appium

UiPath unattended automation

I was just curious how a UiPath process renders the GUI to interact with various applications in unattended mode, without a screen. I am trying to build my own RPA system for a few specific use cases, but I am stuck at running those processes unattended, because interacting with an application (clicking, etc.) requires the GUI to render.
Thanks
According to this article (and simplified a bit), they either use the console session (which is a well-known solution/workaround) or they create RDP sessions programmatically using the FreeRDP framework. (I have tried my luck with FreeRDP, but most of its features are disabled in corporate environments.)
If you really want to dig into the whole thing, Microsoft provides a framework for implementing your own remoting solutions. Theoretically, you could implement your own protocol with lower security boundaries that does not destroy the GUI when the remote session is not active (disconnected but not closed).
It's based on the coordinates of the controls and the text they contain. It recognizes graphical objects by their platform-specific attributes. In very particular scenarios where object recognition is not available, such as over RDP, it uses image- and OCR-based automation.
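For the image/OCR fallback mentioned above, the mechanics look roughly like this in Python with pyautogui; the screenshot file name is a placeholder you would capture yourself, and note that this only works while a desktop is actually being rendered, which is exactly why the console-session/RDP tricks matter:

```python
# Sketch of image-based clicking with pyautogui.
# "connect_button.png" is a placeholder reference screenshot.
# The confidence parameter requires opencv-python to be installed.
import pyautogui

try:
    point = pyautogui.locateCenterOnScreen("connect_button.png", confidence=0.9)
except pyautogui.ImageNotFoundException:
    point = None

if point:
    pyautogui.click(point)  # move the real cursor to the match and click
else:
    print("Button not found - no rendered GUI, or the layout/theme changed")
```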

Testing a desktop application

I need to open an .exe application and test all of its functions, UI, etc.
I was working with WatiN and NUnit for testing a web application, but I think WatiN is useless for this. I found NUnitForms, but I don't think that will be enough.
I have to open the application and test all the windows, buttons, etc. that appear. The application also starts minimized in the taskbar and has a drop-down menu.
How can I handle it? Thanks!
I believe you are referring to a WinForms application. Please check the links below:
The Microsoft UI Automation Library - http://msdn.microsoft.com/en-us/site/cc163288
UI Automation with Windows PowerShell - http://msdn.microsoft.com/en-us/site/cc163301
Lightweight UI Test Automation with .NET - http://msdn.microsoft.com/en-us/site/cc163864
Ideally, you want as little code in the forms as possible. If you move the functionality to separate classes, those can easily be tested using NUnit. If you must test the forms directly, NUnitForms is a reasonable tool.
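If you are not tied to .NET tooling, the same UI Automation API can also be driven from Python with pywinauto inside an ordinary test; a minimal sketch, where the executable path and window/button titles are placeholders for the application under test:

```python
# Sketch of a UI smoke test driving a desktop .exe via pywinauto (UIA backend).
# The path, window title, and button name are placeholders.
import unittest
from pywinauto import Application


class SmokeTest(unittest.TestCase):
    def test_main_window_opens(self):
        app = Application(backend="uia").start(r"C:\path\to\YourApp.exe")
        main = app.window(title="Your App")
        main.wait("visible", timeout=20)

        # Click a button and check the window is still there afterwards
        main.child_window(title="OK", control_type="Button").click_input()
        self.assertTrue(main.exists())

        app.kill()


if __name__ == "__main__":
    unittest.main()
```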

Tools for automating mouse and keyboard events sent to a windows application

What tools are useful for automating clicking through a Windows Forms application? Is this even useful? I see the testers at my company doing this a great deal and it seems like a waste of time.
Check out https://github.com/TestStack/White and http://nunitforms.sourceforge.net/. We've used the White project with success.
Though they're mostly targeted at automating administration tasks or shortcuts for users, AutoHotkey and AutoIt let you automate nearly anything you want as far as mouse/keyboard interaction.
Some of the mouse stuff can get tricky when the only way to tell it what to click is an X,Y coordinate, but for automating entirely arbitrary tasks on a Windows machine, they do the trick.
Like I said, they're not necessarily intended for testing, so they're not instrumented for unit test conventions. However, I use them all the time to automate things that aren't testing-related.
You can do it programmatically via the Microsoft UI Automation API; there's an MSDN Magazine article about it.
It integrates well with unit test frameworks and is a better option than coordinate-based script runners because you don't have to rewrite scripts when layouts change.
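For instance, the UI Automation tree can also be walked from Python via the third-party uiautomation package; a rough sketch, where the window and button names are made up for illustration:

```python
# Sketch: find a top-level window and click a button by its UIA Name property.
# "My App" and "OK" are placeholder names, not from the original question.
import uiautomation as auto

window = auto.WindowControl(searchDepth=1, Name="My App")
if window.Exists(maxSearchSeconds=5):
    window.ButtonControl(Name="OK").Click()
else:
    print("Window not found")
```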
There are a couple out there. They all hook into the Windows API to log clicks and then replay them for testing.
We're now mostly web-based (using WatiN), but we used to use Mercury QuickTest.
Don't use QuickTest; it's awful for a tremendously long list of reasons.
This is what I was looking for.