In my test I want to click on an object of type WebArea, which opens a popup web element containing some fields that I need to test.
The problem is that the popup does not open after I click on the WebArea object through code.
The code I use is below:
Browser("WW").page("assessment").WebArea("areaassessment").Click
Nothing happens after the above line is executed.
Look into the HTML of the WebArea and see what action is triggering the popup. Normally it has something like onclick='showPopup();', but in other cases it is onmousedown or onmouseup.
If this is the case, you have to set up QTP accordingly. There are multiple roads to walk here; one is to look at how your advanced web settings are configured. Go to Tools > Options > Web > Advanced and look in the Run Settings.
The Replay Type can be set to Event, which replays your scripts by firing events (by default mousedown, mouseup and then click), or to Mouse, where you'll see the mouse pointer moving: QTP replays by sending WM_* messages through the Windows API to move to the correct screen location and trigger the click.
Although replay is a bit faster when "Run only click" is checked, it is better to uncheck it so that all events/messages are triggered.
Events can also be fired by the FireEvent method:
Browser("WW").page("assessment").WebArea("areaassessment").FireEvent("onclick")
or through the object native methods:
call Browser("WW").page("assessment").WebArea("areaassessment").Object.click()
call Browser("WW").page("assessment").WebArea("areaassessment").Object.FireEvent("onclick")
As #AutomateChaos said, there is probably an event that QTP isn't simulating. One way to work around this is to do as #AutomateChaos suggests and simulate the needed event. A simpler way is to switch to device replay (as I described here and here).
Related
I want to achieve something that I have seen multiple times on various sites. I'll provide a scenario so it will be easier to understand what I'm trying to achieve. Let's say I have a button which triggers a file download. Sometimes this button might have an additional event that opens a modal with some information; after clicking "continue" in this modal, the download event should fire as it would have without this interception. There might also be a cancel button in that modal which cancels the download action. How might I achieve this? I already had this implemented using jQuery, but now I'm reworking my project to use Vue, and the way I achieved it with jQuery was very dirty, hackish and unstable.
The scenario that produces my question goes something like this:
I enter a webpage by normal means, then press a button to start an HTML5 application on the page; this application is inside an iframe. On application start I am prompted to turn the sound either on or off. At this point there are two possible outcomes:
1. When I answer this prompt manually, new buttons appear in the application window, as expected.
2. When I answer this prompt through automation via Appium, new buttons do not appear.
Now to the question:
To answer the prompt I use the click() method from Selenium. Is it possible that this click() is not considered to be executed by a human and therefore doesn't trigger the necessary things? And since I don't have access to the source of the application, can I force the Selenium click() to look exactly like a human click?
Here is the code I use to execute the mentioned click:
//Application loading up, hence the sleep
Thread.sleep(5000);
AppiumTestBase.getDriver().switchTo().frame("e_iframe");
Thread.sleep(5000);
WebElement soundOff = AppiumTestBase.getDriver().findElement(By.id("soundOff"));
AppiumTestBase.getStandardWaitTime().until(elementToBeClickable(soundOff));
soundOff.click();
The program is able to find and switch into the iframe, and there are no cross-origin issues either. The AppiumTestBase is just there for initializing the driver, setting up capabilities, etc. I also tried clicking the element via Actions and JavaScript, but there was no change in behavior.
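For reference, those Actions and JavaScript attempts looked roughly like this (a sketch; it reuses the driver and the element id from the snippet above and assumes the iframe has already been switched to):
import org.openqa.selenium.By;
import org.openqa.selenium.JavascriptExecutor;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.interactions.Actions;

WebElement soundOff = AppiumTestBase.getDriver().findElement(By.id("soundOff"));
// Click through the Actions API: move to the element, then click.
new Actions(AppiumTestBase.getDriver()).moveToElement(soundOff).click().perform();
// Click by dispatching a DOM click via JavaScript.
((JavascriptExecutor) AppiumTestBase.getDriver()).executeScript("arguments[0].click();", soundOff);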
In C#, a workaround I've found to actually take control of the mouse and move/click with it is to use "Microsoft.VisualStudio.TestTools.UITesting" for the Keyboard/Mouse libraries. From there, you can tell it "Mouse.Click(new Point(X, Y));" and it will move your mouse to that location and click.
Sample Code:
using Microsoft.VisualStudio.TestTools.UITesting;
using OpenQA.Selenium;
using System.Drawing;
// Element coordinates are page-relative; add the browser window offset if needed.
var soundOff = AppiumTestBase.getDriver().FindElement(By.Id("soundOff"));
Mouse.Click(new Point(soundOff.Location.X, soundOff.Location.Y));
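Since the question's code is Java, a comparable OS-level click can be done with java.awt.Robot. This is only a sketch, not part of the original answer; it assumes the element's page coordinates roughly match screen coordinates (in practice you would add the browser window offsets) and it omits the AWTException handling that Robot's constructor requires.
import java.awt.Robot;
import java.awt.event.InputEvent;
import org.openqa.selenium.By;
import org.openqa.selenium.Point;
import org.openqa.selenium.WebElement;

WebElement soundOff = AppiumTestBase.getDriver().findElement(By.id("soundOff"));
Point p = soundOff.getLocation();
Robot robot = new Robot(); // throws AWTException
// Move the real mouse pointer to the element and perform a physical click.
robot.mouseMove(p.getX(), p.getY());
robot.mousePress(InputEvent.BUTTON1_DOWN_MASK);
robot.mouseRelease(InputEvent.BUTTON1_DOWN_MASK);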
I've been trying for hours to use global hotkeys and "consume" the key event so that it is no longer forwarded to the application the key event originally came from.
So what I want to do is:
- a user presses a shortcut with application A in front, e.g. Cmd+F3
- my application (application B) receives this shortcut through the global event handler and sends mouse and keyboard events to application A
It's probably easiest to think of it as a macro.
I'm using DDHotkey and it works quite well. The problem I have is that DDHotkey doesn't "consume" the key events and modifiers. That means that when my application starts sending mouse and keyboard events, the Cmd key from the actual global shortcut is still pressed.
This leads to erroneous behavior in my case (for example, I'm double-clicking a text field programmatically, and that doesn't open while Cmd is pressed).
So what I'd like to do is really consume the key event and the modifier keys so that they are not forwarded to application A. Alternatively, I would "flush" the event queue before sending the key events to application A.
Is there any way to achieve this easily?
An even more reliable approach, if it works for your use case, would likely be not to script the UI by triggering mouse events and key presses, but instead to use the Accessibility API to trigger the higher-level actions (like using Accessibility to tell a button it has been pressed). Unless the app contains some unfortunate code, those should not look at the modifier keys.
Telling the OS from an event monitor to remove key states would probably cause lots of issues: It would be confusing if the user then actually released the physical keys and a second keyUp came in. Even if the OS tries to avoid that, it is just asking for other edge case bugs - what if the user pressed a modifier key while your code is scripting the UI?
But if the applications you are scripting do not support Accessibility, AppleScript, or any other higher-level approach to automation, what you could do is wait for the user to release your hotkey (i.e. wait for the keyUp events) and only then trigger your scripted actions. It might be necessary to use performSelector:withObject:afterDelay: with a delay of 0.0 to get out of the keyUp handler before you do that.
In my current project I am working with a specific implementation of the JFace ProjectionViewer attached to a TextEditor nested in a MultiPageEditor.
My task is now to implement a custom reaction to Ctrl-Z, and from what I gather this is best done by attaching a specific implementation of IUndoHandler to the viewer; all of that would be no problem.
But pressing Ctrl-Z while that TextEditor has focus fails to cause any of the expected reactions. While clicking "Undo Typing" in the context menu (which displays the associated key combination Ctrl-Z) causes the TextViewerUndoManager.DocumentUndoListener's notification method to be called, no line of code in the TextViewerUndoManager is touched when pressing Ctrl-Z.
As a possible source of this problem I assumed that a handler might be defined for this key combination in an extension point, since I had previously experimented with this mechanism, but the plugin.xml does not define any key combinations or undo handlers, apart from one that is associated with a special context menu for a different widget.
It might be worth noting that Ctrl-C and Ctrl-V work as intended.
I need to find out what happens when Ctrl-Z is pressed and why nothing is relayed to the TextViewerUndoManager.
It would be very helpful if someone could describe the process by which Eclipse normally handles these key combinations and decides which command is appropriate.
Thanks in advance
Ctrl+Z undo is processed using the operation history support; look at the UndoActionHandler class.
Key binding support is implemented with a keydown event filter in WorkbenchKeyboard (all keydown events are first filtered through this class; this is how the BindingService is implemented). It figures out the corresponding command for the key binding.
DocumentUndoManager.UndoableTextChange is where the undo operation is handled.
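A common reason for exactly this symptom in a nested editor is that the page's global action handlers for undo/redo were never wired up, so the workbench Ctrl-Z binding has nothing to delegate to while the context-menu action still works. Below is a minimal sketch of that wiring, assuming you can reach the viewer's document and the editor site from your MultiPageEditor; names like "viewer" are placeholders.
import org.eclipse.core.commands.operations.IUndoContext;
import org.eclipse.jface.text.IDocument;
import org.eclipse.text.undo.DocumentUndoManagerRegistry;
import org.eclipse.ui.IActionBars;
import org.eclipse.ui.actions.ActionFactory;
import org.eclipse.ui.operations.RedoActionHandler;
import org.eclipse.ui.operations.UndoActionHandler;

// Fetch the undo context of the document shown in the ProjectionViewer.
// (The document must already be connected to the document undo manager,
// which the TextEditor normally does for you.)
IDocument document = viewer.getDocument();
IUndoContext undoContext =
        DocumentUndoManagerRegistry.getDocumentUndoManager(document).getUndoContext();

// Route the workbench undo/redo commands (Ctrl-Z / Ctrl-Y) to that context.
IActionBars actionBars = getEditorSite().getActionBars();
actionBars.setGlobalActionHandler(ActionFactory.UNDO.getId(),
        new UndoActionHandler(getSite(), undoContext));
actionBars.setGlobalActionHandler(ActionFactory.REDO.getId(),
        new RedoActionHandler(getSite(), undoContext));
actionBars.updateActionBars();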
I have an issue with SetWindowsHookEx in VB.NET. If my PC is overloaded, especially from 3D rendering, Windows automatically disconnects my keyboard hook and my hotkeys stop working. I searched around and it seems that there is no way to detect whether a hook is active or disconnected. So I tried the method presented by "moodforaday" in
Is it possible to detect when a low-level keyboard hook has been automatically disconnected by Windows?
He suggests calling GetLastInputInfo periodically, storing its result in another variable whenever a key is used, and comparing the two. If the tick count is much newer than your stored variable, it is likely that the hook is disconnected. It's a great method, but the tick count also goes up from other input, like the mouse. In my Hook class there is no mouse hook, so I cannot store the tick count when the mouse is moved. So now I ended up having it create a new instance of the hook class and hook again; it checks every second whether the stored tick is more than 10000 ticks older than the new tick.
Is it alright to keep creating new instances of Hooks? It will keep Hooking/Unhooking constantly and I'm wondering if that is going to be a problem for Windows.
Also, if anyone has another method to detect whether a hook is disconnected, please let me know; that would fix this whole hassle.
Do your 3D rendering in a background thread. Use Control.Invoke only for code where you directly access UI controls.
Alternatively, you could split the rendering into very small pieces and post them to yourself as messages, to be handled on the main thread. This way you will be able to handle both internal and external messages.
In both cases, your application will be responding in a timely fashion, Windows will have no reason to consider it non-responding, and your keyboard shortcuts will stay in place.