Screen recorder tools - open source / integrated with test management

I have a two-part question:
1) What open-source 'screen recorder' tools are available? Here is what I have so far, but I have not evaluated them yet, as I'm still collecting the list:
http://camstudio.org/
http://code.google.com/p/zscreen/
http://shutter-project.org/
http://getgreenshot.org/
http://autoscreen.sourceforge.net/
http://sourceforge.net/projects/capturedit/
2) Is there any test management/manual testing software that uses a screen recorder during manual testing to record what the tester does in each test case (instead of requiring them to take a print-screen one step at a time)?
The intent is to find a better way to see, screen by screen, what gets entered on the 'screens' throughout the test case cycle, to help with problem identification if a problem occurs. 'Screens' could be either web-based (traditional or Ajax) or thick-client (Java, .NET, whatever).

Try recordMyDesktop and gtk-recordMyDesktop. They're simple and lightweight, and will give you a nice video screencast. Just install them and press the play or stop button in your panel; videos are saved in the *.ogv format.
$ sudo apt-get install recordmydesktop gtk-recordmydesktop


If the client keeps changing the requirements every now and then, what testing method should be followed?

I always perform regression testing as soon as changes come up. The problem is that the client comes up with changes or additional requirements every now and then, and that makes things messy: I test something, and then the whole thing gets changed. Then I have to test the changed module again and perform integration testing with the other modules linked to it.
How do I deal with such cases?
1) First, ask for the complete client requirements and note every small point in a document.
2) Understand the total functionality.
3) Use your default testing method.
4) You didn't mention which type of testing you're doing (app or portal).
5) Continue with whichever testing approach you are comfortable with and find easy.
6) If you want automation testing, use Appium for apps or Selenium for the web.
I hope this is helpful for you.
I would suggest the following things:
-> Initially, gather all the requirements and check with the client by email if you have any queries.
-> Document everything in minutes of meeting (MoM) whenever you have a client call, and share them with everyone who attended the call (dev team, client, business, QA).
-> Prepare a test plan/strategy document and test cases, share them with the client, and request sign-off.
-> Once you are all set, start with smoke testing, then check the major functionalities in that release, and then proceed further.
-> You could automate the regression test cases, as you are going to execute them for every release (I would suggest Selenium, or UFT if it's a desktop application; see the sketch below).
Kindly let me know if you have any queries.
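As a rough illustration of that last suggestion, here is a minimal Selenium WebDriver smoke-test sketch in Python. The URL, element IDs, and page titles are hypothetical placeholders for whatever your application actually exposes:
# Minimal smoke-test sketch (Python + Selenium). URL, element IDs, and
# titles below are hypothetical placeholders.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait

driver = webdriver.Chrome()
try:
    driver.get("https://your-app.example.com/login")  # hypothetical URL
    assert "Login" in driver.title, "Login page did not load"
    driver.find_element(By.ID, "username").send_keys("testuser")
    driver.find_element(By.ID, "password").send_keys("secret")
    driver.find_element(By.ID, "submit").click()
    # Wait for the landing page as a quick sanity check that login worked.
    WebDriverWait(driver, 10).until(lambda d: "Dashboard" in d.title)
finally:
    driver.quit()
A script along these lines can run as the first step of every release's regression suite, before the longer functional checks.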

About testing techniques

I want some testing information about my application.
Which testing method is suitable for my application's page?
My page contains 200 checkboxes; clicking a checkbox opens a new page with a different URL.
*Note: all the checkboxes have different URLs.
So, can anyone please help me find which testing method is suitable, and how I can test this page with less effort?
For testing techniques, I think there are a few expectations here.
The techniques should focus on functional testing and reduce regression-testing effort. From my experience, you should combine manual and automation techniques.
Manual Efforts
If the page has more than 200 checkboxes, the first question is whether it is necessary to have 200 checkboxes on one page; it will not be a good user experience. You can file a defect against the requirement with the product development team and start that discussion.
To verify the look and feel of the 200 checkboxes and the page, I would always write some QA notes that help the whole team understand the testing effort, including the browser specification and, if the page is responsive, which screen sizes you are going to test.
I would prefer to write Cucumber scenarios in the Gherkin language using Given/When/Then (see the Cucumber docs).
Scenarios should be written in a way that helps you automate them.
Automation Efforts
I would recommend you use Selenium and choose any programming language (C#, PHP, Java, JavaScript, Perl, Ruby, Python, and many others) for automation.
You already have the scenarios, so you can automate them easily.
A few things should be part of the automation, such as deep verification that the page loaded successfully and that the page title matches, plus taking a snapshot if the page fails to load or an exception occurs. The automation code should be able to run against any browser.
This is a good starting point.
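To make that concrete, here is a hedged sketch in Python with Selenium of the loop described above. The page URL, the data-url attribute used to find each checkbox's expected target, and the assumption that each click opens a new window are all hypothetical, not taken from the question:
# Sketch: click each checkbox, switch to the window it opens, verify the
# URL, and take a screenshot on failure. Locators and the data-url
# attribute are hypothetical.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait

driver = webdriver.Chrome()
driver.get("https://your-page.example.com")  # hypothetical URL

original = driver.current_window_handle
checkboxes = driver.find_elements(By.CSS_SELECTOR, "input[type=checkbox]")

for box in checkboxes:
    expected = box.get_attribute("data-url")  # hypothetical attribute holding the target URL
    box.click()
    WebDriverWait(driver, 10).until(lambda d: len(d.window_handles) > 1)
    new_handle = [h for h in driver.window_handles if h != original][-1]
    driver.switch_to.window(new_handle)
    if driver.current_url != expected:
        driver.save_screenshot("mismatch.png")  # snapshot on failure
        print("Unexpected URL:", driver.current_url, "expected:", expected)
    driver.close()
    driver.switch_to.window(original)

driver.quit()
Because the 200 checkboxes are handled by one data-driven loop rather than 200 hand-written cases, this also keeps the effort low, which was the original ask.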
One possibility for you is to use RSpec and capybara-webkit, but I don't know if you are familiar with the Ruby language, as you didn't mention any programming language you'd like to use.
In order to achieve this workflow (click on a checkbox and check the URL), you should do something like this:
describe "A test to", :js => true do
it "click on a checkbox and check for the url" do
visit("http://your_url") //to visit your page
page.check('the ID or the NAME of the checkbox') //to click on the checkbox
within_window(switch_to_window(windows.last)) do //to focus on the new opened page
expect(current_url).to eq('http://the_expected_url') //to check the url
end
end
end

How to prevent your Cydia tweaks from getting cracked?

I'm working on a new Cydia tweak, and it'll be paid. The problem is that I don't want it to be cracked/pirated/redistributed. Are there any scripts I can use in my tweak?
A few examples:
A script that will securely mail you a log, so you know whether the user bought the tweak or got it cracked.
A script that will check for your payment in a database; if it exists, the tweak will be activated on the device, and if not, the tweak will be disabled.
A script with a UIAlertView that checks whether you bought the package from the Cydia Store; if so, it activates the tweak. Otherwise, it opens a UIAlertView pop-up telling you "You got this package in an illegal way, blah blah blah...", and when you click the "OK" button it takes you into Safe Mode until you uninstall the tweak or buy it from Safe Mode. (This is the most important one.)
You can't really protect software from being cracked. It is always a race between stronger and stronger DRM (usually more annoying to legitimate users) and the crackers, who see the challenge and do the cracking for fun (or fame, or both).
What I would suggest, instead of investing a large amount of time in coming up with DRM schemes and then implementing and testing them, is to invest that time in the functionality of your product and in testing it more thoroughly. If your product is good and priced right, people will buy it. Sure, some will pirate it; that is inevitable, but you will be rewarded for your work.
You could put an MD5 or SHA1 check within your tweak, in various locations in your functions, just to make sure that if someone modifies your tweak, the hashes won't match. Also, instead of packaging your tweak in a deb file, you could have it downloaded remotely after verification of some sort.
For example:
Example.tweak
- /var/root/downloadscript (can be any path; this also prevents someone from simply extracting the file in question and installing it)
- DEBIAN/control (config file for the deb)
- postinst (post-installation script; see https://wiki.ubuntu.com/PackagingGuide/Basic#postinst_and_prerm)
Download Example.tweak from Cydia, then use the postinst to execute /var/root/downloadscript.
You can use /usr/bin/deviceinfo -s, which will print out the serial number, and POST that serial number to your server to download some kind of licence; inside the licence you can include the UDID, MAC address, and other values that your tweak would then check. (A sketch of this idea follows.)
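As a hedged illustration only: a real tweak would do this in native code, but the logic of the two ideas above (an integrity hash check plus a serial-number licence lookup) could look roughly like this. The expected hash and the licence-server URL are hypothetical placeholders:
# Sketch of an integrity check plus a licence lookup. EXPECTED_SHA1 and the
# licence-server URL are placeholders; deviceinfo -s is the tool mentioned above.
import hashlib
import subprocess
import urllib.parse
import urllib.request

EXPECTED_SHA1 = "replace-with-hash-of-unmodified-binary"  # placeholder

def binary_is_intact(path):
    # Compare the on-disk binary's SHA1 against the known-good value.
    with open(path, "rb") as f:
        return hashlib.sha1(f.read()).hexdigest() == EXPECTED_SHA1

def fetch_licence():
    # POST the device serial to a (hypothetical) licence server.
    serial = subprocess.check_output(["/usr/bin/deviceinfo", "-s"]).decode().strip()
    data = urllib.parse.urlencode({"serial": serial}).encode()
    with urllib.request.urlopen("https://licence.example.com/check", data) as resp:
        return resp.read()  # e.g. a signed licence blob the tweak verifies locally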

Script to download Google web history

How does one write a script to download one's Google web history?
I know about
https://www.google.com/history/
https://www.google.com/history/lookup?hl=en&authuser=0&max=1326122791634447
feed:https://www.google.com/history/lookup?month=1&day=9&yr=2011&output=rss
but they fail when called programmatically rather than through a browser.
I wrote up a blog post on how to download your entire Google Web History using a script I put together.
It all works directly within your web browser on the client side (i.e. no data is transmitted to a third party), and you can download it to a CSV file. You can view the source code here:
http://geeklad.com/tools/google-history/google-history.js
My blog post has a bookmarklet you can use to easily launch the script. It works by accessing the same feed, but it iterates through the entire history 1,000 records at a time, converts it into a CSV string, and makes the data downloadable at the touch of a button.
I ran it against my own history, and successfully downloaded over 130K records, which came out to around 30MB when exported to CSV.
EDIT: It seems that a number of folks who have used my script have run into problems, likely due to some oddities in their history data. Unfortunately, since the script does everything within the browser, I cannot debug it when it encounters histories that break it. If you're a JavaScript developer, have used my script, and it appears your history has caused it to break, please feel free to help me fix it and send me any updates to the code.
I tried GeekLad's system; unfortunately, two breaking changes have occurred. #1: the URL has changed (I modified and hosted my own copy), which led to #2: the type=rss argument no longer works.
I only needed the timestamps... so began the best/worst hack I've written in a while.
Step 1 - https://stackoverflow.com/a/3177718/9908 - using Chrome, disable ALL security protocols.
Step 2 - https://gist.github.com/devdave/22b578d562a0dc1a8303
Using contentscript.js and manifest.json, make a Chrome extension, and host ransack.js locally with whatever service you want (PHP, Ruby, Python, etc.). Go to https://history.google.com/history/ after installing your content-script extension in developer mode (unpacked). It will automatically inject ransack.js plus jQuery into the DOM, harvest the data, and then move on to the next "Later" link.
Every 60 seconds, Google will randomly force you to re-login, so this is not a start-and-walk-away process, BUT it does work, and if they up the obfuscation ante, you can always resort to chaining Ajax calls and sending the page back to the backend for post-processing. At full tilt, my abomination of a script collected one page of data per second.
On moral grounds I will not help anyone modify this script to get search terms and results, as this process is not sanctioned by Google (though apparently not blocked), and I recommend it only to individuals sufficiently motivated to make it work for them. By my estimate it took me 3-4 hours to get all 9 years of data (90K records) at 1 page every 900ms or faster.
While this thing is going, DO NOT browse the rest of the web, because Chrome is running with none of its safeguards in place, and most of them exist for a reason.
One can download their search logs directly from Google (in case downloading them via a script is not the primary purpose).
Steps:
1) Log in and go to https://history.google.com/history/
2) Just below your profile picture, towards the right side, you will find a settings icon. Its second option is called "Download"; click on that.
3) Then click "Create Archive", and Google will mail you the log within minutes.
Maybe, before issuing the request to get the feed, the script should add a User-Agent HTTP header of a well-known browser, so that Google decides the request came from that browser.
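For instance, here is a minimal sketch of that idea in Python, reusing the lookup URL quoted earlier in the thread; the cookie/authentication handling Google also requires is omitted:
# Sketch: fetch the history RSS feed with a browser-like User-Agent header.
import urllib.request

url = "https://www.google.com/history/lookup?month=1&day=9&yr=2011&output=rss"
req = urllib.request.Request(url, headers={
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
                  "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0 Safari/537.36"
})
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode("utf-8", errors="replace"))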

Automate adding entries to a wiki

Once I have my renamed files, I need to add them to my project's wiki page. This is a fairly repetitive manual task, so I guess I could script it, but I don't know where to start.
The process is:
Go to the appropriate page on the wiki
for each team member (DeveloperA, DeveloperB, DeveloperC)
{
    for each of the two files ('*_current.jpg', '*_lastweek.jpg')
    {
        Select the 'Attach' link on the page
        Select the 'manage' link next to the file to be updated
        Click the 'Browse' button
        Browse to the relevant file (which has the same name as the previous version)
        Click the 'Upload file' button
    }
}
Not necessarily looking for the full solution as I'd like to give it a go myself.
Where to begin? What language could I use to do this and how difficult would it be?
Check whether the wiki you mean to talk to supports XML-RPC, because if it does, it should be a snap. I wrote a tool called WikiUp to solve a similar problem (updating a delineated section of a wiki page).
If you're writing in C#, the WebClient classes might be a good place to start. I bet people could give more specific advice if you mentioned which wiki platform you are using and whether it requires authentication, though.
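As a hedged illustration of the XML-RPC route, here is a sketch in Python. The endpoint and method names assume a DokuWiki-style API (dokuwiki.login / wiki.putAttachment); other wiki engines expose different methods, so check your wiki's own documentation:
# Sketch of attaching a file over XML-RPC, assuming a DokuWiki-style
# endpoint and method names; other engines differ. Depending on the wiki's
# auth setup, you may also need HTTP basic auth or a cookie-aware transport.
import xmlrpc.client

wiki = xmlrpc.client.ServerProxy("https://wiki.example.com/lib/exe/xmlrpc.php")
wiki.dokuwiki.login("username", "password")  # placeholder credentials

with open("DeveloperA_current.jpg", "rb") as f:
    data = xmlrpc.client.Binary(f.read())

# Overwrite the existing attachment of the same name ("ow" = overwrite).
wiki.wiki.putAttachment("team:DeveloperA_current.jpg", data, {"ow": True})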
I'd probably start by downloading Fiddler and watching the HTTP requests as you do it manually. Then you could use some simple scripts and regexes to build your HTTP requests and automate the process.
Of course, if you're wildly lucky, your wiki will have a backend simple enough that you could just plug into its database directly. :)
You might find CoScripter useful -- it's a Firefox extension that allows you to automate tasks you perform on websites. I'm not certain how you'd integrate this with the list of files you're changing on your local system, but it can certainly handle the file uploading through a web form.
A better bet is probably to use cURL or a similar HTTP library with your programming language of choice. If you're on *nix, you can use the cURL command-line program inside your shell script to get this done fairly easily. (As jsight said, you will need to analyze the actual forms you're using on the web page, using Fiddler or just by looking at the form elements, and re-create the POST through cURL; a sketch follows.)
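For example, here is a hedged sketch of re-creating such a form POST with Python's requests library instead of the cURL command line. The login and upload URLs and the form field names are hypothetical; copy the real ones from the requests you captured in Fiddler:
# Sketch: replicate the wiki's upload form POST with Python requests.
# URLs and field names are hypothetical -- take the real ones from Fiddler.
import requests

session = requests.Session()
# If the wiki requires a login first, POST the login form and reuse the session.
session.post("https://wiki.example.com/login",
             data={"user": "me", "password": "secret"})

with open("DeveloperA_current.jpg", "rb") as f:
    resp = session.post(
        "https://wiki.example.com/attach",  # hypothetical upload URL
        data={"page": "TeamStatus"},        # hypothetical form field
        files={"file": ("DeveloperA_current.jpg", f, "image/jpeg")},
    )
resp.raise_for_status()
Wrapping that in a loop over the developers and the two filename patterns mirrors the manual process described in the question.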