I am getting the following error message from the browser during both recording and replay: 'This session is invalid. You are logged in through other browser.'
This issue does not appear when logging in manually, nor when using TruClient Firefox.
I am using the following:
VuGen: 12.02
IE: 11
Please help me with this, and let me know if you require more information.
You need to correlate all dynamic session variables. Some elements in your script are hard-coded, so once you replay multiple users together you get a session violation, because the same browser session cannot exist in two different places at the same time.
Deleting all Java Temporary files solved this issue.
Control Panel -> All Control Panel Items -> Java -> Temporary Internet Files [Settings] -> Delete Files.
If possible, please try LR 12.02 patch 3, which fixes some known TruClient issues on IE 11, mainly around user separation.
Related
I recorded a login script in BlazeMeter and imported it into JMeter. I noticed that the browsing URL is repeated with numeric suffixes, like example.com/0, example.com/1, example.com/2, and so on. Please suggest how to fix this. Thanks.
We cannot "help" without knowing what your expectations are.
When it comes to performance testing of web applications, you need to ensure that JMeter is properly configured to behave exactly like a real browser.
It means that JMeter should send the same requests, in the same manner, as the real browser does.
If the network footprint generated by JMeter matches the one the real browser produces, you don't need any "help" there. If it doesn't, we need to see:
the dump of requests from the "Network" tab of your browser's developer tools
how you configured the BlazeMeter Chrome Extension, e.g. choosing "Only top level requests" might "help" you
Normally these numeric suffixes come from the Transaction Controller naming convention, so that all nested redirects, embedded resources, and so on are considered an integral part of the parent "transaction".
I'm implementing a Python script mainly based on pyautogui. One of the things the script does is open a Chrome webpage. After that, I need to access the DOM of this currently open webpage.
Since I've not opened the browser with Selenium, I can't use it to analyze the DOM.
However, my question is: is this currently open Chrome page available/saved somewhere on the hard drive so that I can access it with Selenium? Like an .html file?
I checked many other questions here, and users talk about the Chrome cache, but there are no HTML files there.
I just need to be able to access the currently open page, not all the historical data in the cache.
Opening the web browser directly with Selenium is not an option either, since most of the websites analyzed have CAPTCHAs and Distil technology.
Thanks.
If you start the original Chrome with the --remote-debugging-port=PORT_NR argument and visit localhost:PORT_NR from another browser, you will have access to the full content of the browser, including the dev console.
Once you have this, you have multiple ways to go:
You can visit http://localhost:PORT_NR with any other browser (or even with the same browser), and you should have full access to the content of the original Chrome. With Selenium you should have a relatively easy time getting by; see the sketch below.
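A minimal sketch of attaching Selenium to that already-running Chrome, assuming the selenium package and a matching chromedriver are installed, and that 9222 is the example port Chrome was started with:

from selenium import webdriver
from selenium.webdriver.chrome.options import Options

# Attach to the Chrome instance already running with
# --remote-debugging-port=9222 (example value).
opts = Options()
opts.add_experimental_option("debuggerAddress", "localhost:9222")
driver = webdriver.Chrome(options=opts)

print(driver.title)        # title of the tab Chrome is currently showing
html = driver.page_source  # full DOM of the currently open page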
You can also use the DevTools API (the documentation... is... well... there is room for improvement; search for "Chrome DevTools Protocol" to be amazed by the lack of docs). As an example, you can request http://localhost:PORT_NR/json to get the available debugging URIs. Grab the relevant websocket endpoint (webSocketDebuggerUrl), open a websocket connection, and issue a command like {"method": "DOM.getDocument", "id": 12}. You can find the available DOM-related commands here: https://chromedevtools.github.io/devtools-protocol/1-3/DOM
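For instance, a rough Python sketch of that flow, assuming the requests and websocket-client packages are installed and 9222 is the example port:

import json
import requests
from websocket import create_connection  # pip install websocket-client

# List the debuggable targets; entries with type "page" are normal tabs.
targets = requests.get("http://localhost:9222/json").json()
page = next(t for t in targets if t["type"] == "page")

# Open a websocket to the tab and request its DOM.
ws = create_connection(page["webSocketDebuggerUrl"])
ws.send(json.dumps({"id": 12, "method": "DOM.getDocument"}))
print(ws.recv())  # JSON reply containing the root DOM node
ws.close()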
Since I had to reinvent the wheel, I may as well give some extra info that I couldn't find anywhere:
Start the browser with remote debugging enabled (see the previous posts)
Connect to the given port on localhost and use these HTTP GET requests to get very limited control of your browser (a Python sketch follows the list):
https://chromedevtools.github.io/devtools-protocol/#endpoints
Most important:
GET /json/new?{url}
GET /json/activate/{targetId}
GET /json/close/{targetId}
GET /json or /json/list
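For example, driving those endpoints from Python with requests (a sketch; 9222 and the URL are assumed example values, and note that newer Chrome builds require PUT instead of GET for /json/new):

import requests

base = "http://localhost:9222"  # example port from the browser start-up

tabs = requests.get(base + "/json/list").json()                     # list open targets
tab = requests.get(base + "/json/new?https://example.com").json()   # open a new tab
requests.get(base + "/json/activate/" + tab["id"])                  # bring it to front
requests.get(base + "/json/close/" + tab["id"])                     # close it again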
To gain full control over the browser, you need to use a "websocket" connection. Each object in the GET /json or /json/list output has its own ID. Use this ID to interact with the tab. Btw: entries of type "page" are normal tabs; the other entries are extensions and so on. Once you know which tab you want to influence, get its "webSocketDebuggerUrl".
Use this URL and connect with something that speaks the WebSocket protocol.
Once connected, you must craft valid JSON with the following structure:
{
  "id": 0,
  "method": "Page.navigate",
  "params": {"url": "http://google.com"}
}
Notes:
id is a simple counter (int) that you increment with each command - it is not the ID of the tab(!)
method is the method described in the docs; params is also described in the docs.
The return values are always JSONs.
From now on you can use the official docs:
https://chromedevtools.github.io/devtools-protocol/tot/Page/#method-navigate
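Putting the pieces together in Python (a sketch under the same assumptions as above: Chrome started with --remote-debugging-port=9222, and the requests and websocket-client packages installed):

import json
import requests
from websocket import create_connection  # pip install websocket-client

tabs = requests.get("http://localhost:9222/json/list").json()
tab = next(t for t in tabs if t["type"] == "page")

ws = create_connection(tab["webSocketDebuggerUrl"])
ws.send(json.dumps({
    "id": 1,  # your own running counter, not the tab's ID
    "method": "Page.navigate",
    "params": {"url": "http://google.com"},
}))
print(ws.recv())  # the return value is always JSON
ws.close()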
I don't know how other people found out about this, but it took me a few hours to get it working. Probably because everyone is just using Python's Selenium to do it.
My scenario is as follows:
I have an Excel workbook that does a lot of stuff, like creating users and such.
Part of this "create user" procedure is to edit some info on a website.
BUT this website needs to be launched as another user, one which has access to it.
I have no problem controlling IE if I launch it normally as the current user. But when I launch IE as another user, I can't take control of it from Excel.
I have done a lot of googling and tried a lot of stuff.
E.g. code from the following sources:
http://www.makeuseof.com/tag/using-vba-to-automate-internet-explorer-sessions-from-an-excel-spreadsheet/
http://www.excely.com/excel-vba/ie-automation.shtml
http://www.mvps.org/emorcillo/en/code/vb6/iedom.shtml
None of these worked in my scenario.
I found a very good workaround here: https://superuser.com/a/49462
Basically, I found out that it's possible to force a login prompt on pages using Windows Authentication, and then log in as a different user.
That means that the IE process still runs as my user, thus making it possible to take control of the window from Excel.
I have a few simple XPages where I test and learn the newest features in Domino 8.5.3.
Now, after some recent changes, I'm somehow not able to delete documents. The application asks me for the user name and password, which I enter and which are correct. However, nothing happens (well, the system thinks for a few seconds) and I'm asked for my credentials again... and again. If I press "cancel", I get the expected result: error 401.
The strange thing as well: even if I give "Anonymous" Editor rights with "delete documents" checked, I'm still asked for credentials...
Well, I think I now need some ideas and tips on where to look in order to solve the "undeletable documents" problem.
The "Delete" button is made using the Simple Action "Delete Selected Documents".
Update: After looking into the logs (thanks Simon O'Doherty for the hint below!), I found the following message:
28.02.12 19:20: Exception Thrown
com.ibm.xsp.acl.NoAccessSignal: NotesException: Notes error: Document locking is enabled. You must lock the document before deleting.
After removing the "Allow document locking" setting, everything works fine.
The next question, however, and it seems to be an interesting one: if I want to use this setting, how do I make the standard actions work properly (it looks like at least "Delete Selected Documents" has some problems)?
Or do I have to use SSJS only?
In the ACL settings, click the Advanced tab. Check that "Maximum Internet name and password access" is at least a level that allows you to edit documents.
You may need to restart your browser for that to be registered.
If it is still an issue at that point, then the following debug settings may give more hints.
Check the XPages logs in the IBM_TECHNICAL_SUPPORT folder of the server.
Check the elements on the page are not being pulled from another location that would require access.
Check for Authors/Readers fields.
The following debug on the server will allow you to see when an ACL call is made, what is asked for and what it got.
Warning: this is very verbose debug output, so it should only be activated for the test. Also, do not paste the results anywhere externally without first sanitizing them (as they would be confidential to you).
DEBUG_THREADID=1
DEBUG_SERVER_ACL=2
I have also seen this behaviour in our application.
The issue is caused by the "Allow document locking" option.
Either you do not need this feature, in which case just uncheck it in the application properties, or, if you intend to use the feature, you have to lock the document in your code prior to deleting it.
Add an "Execute Script" simple action before the delete simple action and write the following code:
// Lock the backend document so the subsequent delete action succeeds
var doc:NotesDocument = currentDocument.getDocument();
doc.lock();
or
// Equivalent one-liner working directly on the data source
dataSource.getDocument().lock();
It might be as easy as:
- check your ACL: do you have delete rights? The default is off.
How does one write a script to download one's Google web history?
I know about
https://www.google.com/history/
https://www.google.com/history/lookup?hl=en&authuser=0&max=1326122791634447
the feed: https://www.google.com/history/lookup?month=1&day=9&yr=2011&output=rss
but they fail when called programmatically rather than through a browser.
I wrote up a blog post on how to download your entire Google Web History using a script I put together.
It all works directly within your web browser on the client side (i.e. no data is transmitted to a third-party), and you can download it to a CSV file. You can view the source code here:
http://geeklad.com/tools/google-history/google-history.js
My blog post has a bookmarklet you can use to easily launch the script. It works by accessing the same feed, but it iterates through the entire history 1000 records at a time, converts it into a CSV string, and makes the data downloadable at the touch of a button.
I ran it against my own history, and successfully downloaded over 130K records, which came out to around 30MB when exported to CSV.
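For readers who would rather do the same thing outside the browser, here is a rough Python sketch of the same feed-to-CSV idea. The cookie value and the feed parameters are assumptions (the feed required a logged-in Google session, and the real script paged onward by passing the max= timestamp of the last item), so treat this as illustrative only:

import csv
import requests
import xml.etree.ElementTree as ET

# Hypothetical session cookie copied from a logged-in browser.
cookies = {"SID": "PASTE_YOUR_SESSION_COOKIE_HERE"}

# Fetch one batch of the history feed (parameter names are assumptions).
resp = requests.get(
    "https://www.google.com/history/lookup",
    params={"output": "rss", "num": 1000},
    cookies=cookies,
)

# Convert the RSS items into CSV rows.
root = ET.fromstring(resp.content)
with open("history.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["title", "link", "pubDate"])
    for item in root.findall(".//item"):
        writer.writerow([item.findtext("title"),
                         item.findtext("link"),
                         item.findtext("pubDate")])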
EDIT: It seems that a number of folks who have used my script have run into problems, likely due to some oddities in their history data. Unfortunately, since the script does everything within the browser, I cannot debug it when it encounters histories that break it. If you're a JavaScript developer and it appears your history has caused my script to break, please feel free to help me fix it and send me any updates to the code.
I tried GeekLad's system; unfortunately, two breaking changes have occurred: #1, the URL has changed (I modified and hosted my own copy), which led me to #2: the type=rss argument no longer works.
I only needed the timestamps... so began the best/worst hack I've written in a while.
Step 1 - https://stackoverflow.com/a/3177718/9908 - using Chrome, disable ALL security protocols.
Step 2 - https://gist.github.com/devdave/22b578d562a0dc1a8303
Using contentscript.js and manifest.json, make a Chrome extension, and host ransack.js locally with whatever service you want (PHP, Ruby, Python, etc.). Go to https://history.google.com/history/ after installing your content-script extension in developer mode (unpacked). It will automatically inject ransack.js + jQuery into the DOM, harvest the data, and then move on to the next "Later" link.
Every 60 seconds or so, Google will randomly force you to re-login, so this is not a start-and-walk-away process, BUT it does work, and if they up the obfuscation ante, you can always resort to chaining Ajax calls and sending the page back to the backend for post-processing. At full tilt, my abomination of a script collected one page of data per second.
On moral grounds I will not help anyone modify this script to get search terms and results, as this process is not sanctioned by Google (though apparently not blocked), and I recommend it only to sufficiently motivated individuals who can make it work for themselves. By my estimate, it took me 3-4 hours to get all 9 years of data (90K records) at 1 page every 900 ms or faster.
While this thing is going, DO NOT browse the rest of the web, because Chrome is running with no safeguards in place; most of those safeguards exist for a reason.
One can download one's search logs directly from Google (in case downloading them using a script is not the primary purpose):
Steps:
1) Log in and go to https://history.google.com/history/
2) Just below your profile picture, towards the right side, you can find a settings icon. The second option is called "Download". Click on that.
3) Then click "Create Archive"; Google will mail you the log within minutes.
Maybe, before issuing a request to get the feed, the script should add a User-Agent HTTP header of a well-known browser, so that Google decides the request came from that browser.
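For example, in Python (a sketch; the header string is just an illustrative value, and the feed URL is the one quoted in the question):

import requests

headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
                  "AppleWebKit/537.36 (KHTML, like Gecko) "
                  "Chrome/120.0.0.0 Safari/537.36"
}
resp = requests.get(
    "https://www.google.com/history/lookup",
    params={"month": 1, "day": 9, "yr": 2011, "output": "rss"},
    headers=headers,
)
print(resp.status_code)  # 200 if Google accepted the browser-like request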