I recorded a script using Selenium IDE that clicks on a link, and now I want to add a loop to run the same script multiple times. For this I am converting the script to Python, but I am unable to add the loop. Please help me in this regard.
Here's some text taken directly from the Selenium docs:
Data Driven Testing:
Data Driven Testing refers to using the same test (or tests) multiple times with varying data. These data sets often come from external sources, e.g. a .csv file, a text file, or perhaps a database. Data driven testing is a commonly used test automation technique for validating an application against many varying inputs. When the test is designed for varying data, the input data can expand, essentially creating additional tests, without requiring changes to the test code.
# Collection of String values
source = open("input_file.txt", "r")
values = source.readlines()
source.close()
# Execute For loop for each String in the values array
for search in values:
    sel.open("/")
    sel.type("q", search)
    sel.click("btnG")
    sel.waitForPageToLoad("30000")
    self.failUnless(sel.is_text_present("Results * for " + search))
Hope it helps. More info at: Selenium Documentation
Best Regards,
Paulo Bueno.
Try a loop similar to this example, using "for x in range(0, 5):" to set the number of times you wish it to iterate.
def test_py2webdriverselenium(self):
    for x in range(0, 5):
        driver = self.driver
        driver.get("http://www.bing.com/")
        driver.find_element_by_id("sb_form_q").click()
        driver.find_element_by_id("sb_form_q").clear()
        driver.find_element_by_id("sb_form_q").send_keys("testing software")
        driver.find_element_by_id("sb_form_go").click()
        driver.find_element_by_link_text("Bing").click()
I tried this for situations where I have little information about the data:
items = [''' list containing all items ''']
index = 0
while True:
    try:
        # do what you want with items[index]
        index += 1
    except IndexError:
        # index exception occurred
        break
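For reference, the same iteration is more commonly written as a plain for loop, which stops by itself when the list is exhausted (the values below are just placeholders):
items = ["first", "second", "third"]  # placeholder values
for item in items:
    # do what you want with item
    print(item)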
In Java you can do it as below:
// import packages or classes
public class TestClassName {

    @Before
    public void setUp() {
        // before-test setup
    }

    @Test
    public void testMethod() {
        for (int i = 0; i <= 5; i++) {
            WebElement element = driver.findElement(By.id("link_ID"));
            element.click();
            waitForPageLoaded(5);
        }
    }

    @After
    public void tearDown() {
        // after-test cleanup
    }
}
I am using Sphinx for documentation and pytest for testing.
I need to generate a test plan but I really don't want to generate it by hand.
It occurred to me that a neat solution would be to embed test metadata in the tests themselves, within their respective docstrings. This metadata would include things like % complete, time remaining, etc. I could then run through all of the tests (which would at this point be mostly placeholders) and generate a test plan from them. This would guarantee that the test plan and the tests themselves stay in sync.
I was thinking of making either a pytest plugin or a sphinx plugin to handle this.
Using pytest, the closest hook I can see looks like pytest_collection_modifyitems which gets called after all of the tests are collected.
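For reference, a minimal sketch of what that hook looks like in a conftest.py (illustrative only; the docstring handling here is just a placeholder):
def pytest_collection_modifyitems(session, config, items):
    for item in items:
        doc = item.function.__doc__ or ""
        # the :plan_...: metadata could be parsed out of doc here, e.g. with a regex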
Alternatively, I was thinking of using Sphinx, perhaps copying/modifying the todolist plugin, as it seems like the closest match to this idea. Its output would be more useful, since it would slot nicely into the existing Sphinx based docs I have, though there is a lot going on in that plugin and I don't really have the time to invest in understanding it.
The docstrings could have something like this within them:
:plan_complete: 50 #% indicator of how complete this test is
:plan_remaining: 2 #the number of hours estimated to complete this test
:plan_focus: something #what is the test focused on testing
The idea is to then generate a simple markdown/rst or similar table based on the function's name, docstring and embedded plan info and use that as the test plan.
Does something like this already exist?
In the end I went with a pytest based plugin as it was just so much simpler to code.
If anyone else is interested, below is the plugin:
"""Module to generate a test plan table based upon metadata extracted from test
docstrings. The test description is extracted from the first sentence or up to
the first blank line. The data which is extracted from the docstrings are of the
format:
:test_remaining: 10 #number of hours remaining for this test to be complete. If
not present, assumed to be 0
:test_complete: #the percentage of the test that is complete. If not
present, assumed to be 100
:test_focus: The item the test is focusing on such as a DLL call.
"""
import pytest
import re
from functools import partial
from operator import itemgetter
from pathlib import Path
whitespace_re = re.compile(r'\s+')
cut_whitespace = partial(whitespace_re.sub, ' ')
plan_re = re.compile(r':plan_(\w+?):')
plan_handlers = {
    'remaining': lambda x: int(x.split('#')[0]),
    'complete': lambda x: int(x.strip().split('#')[0]),
    'focus': lambda x: x.strip().split('#')[0]
}
csv_template = """.. csv-table:: Test Plan
:header: "Name", "Focus", "% Complete", "Hours remaining", "description", "path"
:widths: 20, 20, 10, 10, 60, 100
{tests}
Overall hours remaining: {hours_remaining:.2f}
Overall % complete: {complete:.2f}
"""
class GeneratePlan:
    def __init__(self, output_file=Path('test_plan.rst')):
        self.output_file = output_file

    def pytest_collection_modifyitems(self, session, config, items):
        #breakpoint()
        items_to_parse = {i.nodeid.split('[')[0]: i for i in self.item_filter(items)}
        #parsed = map(parse_item, items_to_parse.items())
        parsed = [self.parse_item(n, i) for (n, i) in items_to_parse.items()]
        complete, hours_remaining = self.get_summary_data(parsed)
        self.output_file.write_text(csv_template.format(
            tests='\n'.join(self.generate_rst_table(parsed)),
            complete=complete,
            hours_remaining=hours_remaining))

    def item_filter(self, items):
        return items  # override me

    def get_summary_data(self, parsed):
        completes = [p['complete'] for p in parsed]
        overall_complete = sum(completes) / len(completes)
        overall_hours_remaining = sum(p['remaining'] for p in parsed)
        return overall_complete, overall_hours_remaining

    def generate_rst_table(self, items):
        "Use CSV type for simplicity"
        sorted_items = sorted(items, key=lambda x: x['name'])
        quoter = lambda x: '"{}"'.format(x)
        getter = itemgetter(*'name focus complete remaining description path'.split())
        for item in sorted_items:
            yield 3 * ' ' + ', '.join(map(quoter, getter(item)))

    def parse_item(self, path, item):
        "Process a pytest provided item"
        data = {
            'name': item.name.split('[')[0],
            'path': path.split('::')[0],
            'description': '',
            'remaining': 0,
            'complete': 100,
            'focus': ''
        }
        doc = item.function.__doc__
        if doc:
            desc = self.extract_description(doc)
            data['description'] = desc
            plan_info = self.extract_info(doc)
            data.update(plan_info)
        return data

    def extract_description(self, doc):
        first_sentence = doc.split('\n\n')[0].replace('\n', ' ')
        return cut_whitespace(first_sentence)

    def extract_info(self, doc):
        plan_info = {}
        for sub_str in doc.split('\n\n'):
            cleaned = cut_whitespace(sub_str.replace('\n', ' '))
            splitted = plan_re.split(cleaned)
            if len(splitted) > 1:
                i = iter(splitted[1:])  #splitter starts at index 1
                while True:
                    try:
                        key = next(i)
                        val = next(i)
                    except StopIteration:
                        break
                    assert key
                    if key in plan_handlers:
                        plan_info[key] = plan_handlers[key](val)
        return plan_info
In my conftest.py file, I have a command line argument configured within a pytest_addoption function: parser.addoption('--generate_test_plan', action='store_true', default=False, help="Generate test plan")
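For context, that option registration lives in a hook shaped like this (a minimal sketch built around the exact parser.addoption call shown above):
def pytest_addoption(parser):
    # registers the --generate_test_plan flag checked below in pytest_configure
    parser.addoption('--generate_test_plan', action='store_true',
                     default=False, help="Generate test plan")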
And I then configure the plugin within this function:
def pytest_configure(config):
    output_test_plan_file = Path('docs/source/test_plan.rst')

    class CustomPlan(GeneratePlan):
        def item_filter(self, items):
            return (i for i in items if 'tests/hw_regression_tests' in i.nodeid)

    if config.getoption('generate_test_plan'):
        config.pluginmanager.register(CustomPlan(output_file=output_test_plan_file))
        #config.pluginmanager.register(GeneratePlan())
Finally, in one of my Sphinx documentation source files I then just include the output rst file:

Autogenerated test_plan
=======================

The test data below is extracted from the individual tests in the suite.

.. include:: test_plan.rst
We have done something similar in our company by using Sphinx-needs and Sphinx-Test-Reports.
Inside a test file we use the docstring to store our test case, including metadata:
def my_test():
    """
    .. test:: My test case
       :id: TEST_001
       :status: in progress
       :author: me

       This test case checks for **awesome** stuff.
    """
    a = 2
    b = 5
    # ToDo: check if a + b == 7
Then we document the test cases by using autodoc.
My tests
========
.. automodule:: test.my_tests
   :members:
This results in some nice test-case objects in Sphinx, which we can filter, link and present in tables and flowcharts. See Sphinx-Needs.
With Sphinx-Test-Reports we are loading the results into the docs as well:
.. test-report:: My Test report
   :id: REPORT_1
   :file: ../pytest_junit_results.xml
   :links: [[tr_link('case_name', 'signature')]]
This will create objects for each test case, which we can also filter and link.
Thanks to tr_link, the result objects get automatically linked to the test-case objects.
After that we have all the needed information in Sphinx and can use e.g. .. needtable:: to get custom views of it.
I have created a feature file that will contain lots of JavaScript functions.
From within a DIFFERENT feature file I want to use ONE of those functions (and pass in a value).
How do I do this please?
My feature file is called SystemSolentraCustomKarateMethods.feature
Here is the current content (it currently contains just one function):
Feature: System Solentra Status Test
Background:
* def checkreturneddatetimeiscorrect =
#The following code compares the passed-in datetime with the current system datetime and
#makes sure they are within 2 seconds of each other
"""
function(datetime) {
var datenow = new Date();
karate.log("***The Date Now = " + datenow.toISOString() + " ***");
var timenow = datenow.getTime();
karate.log("***The Time Now in Milliseconds = " + timenow+ " ***");
karate.log("***The Passedin Date = " + datetime + " ***");
var passedintime = new Date();
passedintime = Date.parse(datetime);
karate.log("***The Passed in Time = " + passedintime+ " ***");
var difference = timenow - passedintime;
karate.log("***The Time Difference = " + difference + " milliseconds ***");
return (difference < 2000)
}
"""
Thanks Peter, I have figured out how to do this now.
(1) The feature file that contains the functions MUST have the Feature, Background and Scenario tags - even if your file does NOT contain any scenarios. (*see my example file below)
(2) In the feature file that you are calling FROM add the following code to the Background section:
* call read('yourfilename.feature')
(3) You can now use the functions within the called feature file
Here is the structure of the feature file I am calling:
Feature: Custom Karate Methods
This feature file contains Custom Karate Methods that can be called and used from other Feature Files
Background:
* def *nameofyourfunction* =
#Comment describing the function
"""
function() {
*code*
}
"""
****Scenario: This line is required please do not delete - or the functions cannot be called****
I think you've already seen the answer here, and this question is an exact duplicate: https://stackoverflow.com/a/47002604/143475 (edit: ok, maybe not)
Anyway, I'll repeat what I posted there:
There is no problem when you define multiple functions in one feature and call it from multiple other features.
You will anyway need a unique name for each function.
When you use call for that feature, all the functions will be available, yes, but if you don't use them, that's okay. If you are worrying about performance and memory, IMHO that is premature optimization.
If that does not sound good enough, one way to achieve what you want is to define a Java class Foo with a bunch of static methods. Then you can do Foo.myMethodOne(), Foo.myMethodTwo() to your heart's content. I would strongly recommend this approach in your case, because you seem to be expecting an explosion of utility methods, and in my experience that is better managed in Java, simply because you can maintain that code better: IDE support, unit tests, debugging and all.
Hope that makes sense !
I've got a Pig Latin script that takes in some XML, uses the XPath UDF to pull out some fields and then stores the resulting fields:
REGISTER udf-lib-1.0-SNAPSHOT.jar;
DEFINE XPath com.blah.udfs.XPath();
docs = LOAD '$input' USING com.blah.storage.XMLLoader('root') as (content:chararray);
results = FOREACH docs GENERATE XPath(content, 'root/id'), XPath(content, 'root/otherField'), content;
store results into '$output';
Note that we're using pig-0.12.0 on our cluster, so I ripped the XPath/XMLLoader classes out of pig-0.14.0 and put them in my own jar so that I could use them in 0.12.
The above script works fine and produces the data I'm looking for. However, it generates over 1,900 part-files with only a few MB in each. I learned about the default_parallel option, so I set that to 128 to try and get 128 part-files. I ended up having to add a piece to force a reduce phase to achieve this. My script now looks like:
set default_parallel 128;
REGISTER udf-lib-1.0-SNAPSHOT.jar;
DEFINE XPath com.blah.udfs.XPath();
docs = LOAD '$input' USING com.blah.storage.XMLLoader('root') as (content:chararray);
results = FOREACH docs GENERATE XPath(content, 'root/id'), XPath(content, 'root/otherField'), content;
forced_reduce = FOREACH (GROUP results BY RANDOM()) GENERATE FLATTEN(results);
store forced_reduce into '$output';
Again, this produces the expected data, and I now get 128 part-files. My problem now is that the data is not evenly distributed among the part-files. Some have 8 GB, others have 100 MB. I should have expected this when grouping them by RANDOM() :).
My question is: what would be the preferred way to limit the number of part-files yet still have them evenly sized? I'm new to Pig/Pig Latin and assume I'm going about this in completely the wrong way.
p.s. the reason I care about the number of part-files is that I'd like to process the output with Spark, and our Spark cluster seems to do a lot better with a smaller number of files.
I'm still looking for a way to do this directly from the Pig script, but for now my "solution" is to repartition the data within the Spark process that works on the output of the Pig script. I use the RDD.coalesce function to rebalance the data.
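For illustration, a minimal PySpark sketch of that rebalancing step; the paths and partition count are placeholders, and repartition() could be used instead of coalesce() if the data is skewed enough to need a full shuffle:
from pyspark import SparkContext

sc = SparkContext(appName="rebalance-pig-output")
# read whatever the Pig script wrote to '$output' (placeholder path)
rdd = sc.textFile("hdfs:///path/to/pig/output")
# coalesce() merges existing partitions without a full shuffle
rdd.coalesce(128).saveAsTextFile("hdfs:///path/to/balanced/output")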
From the first code snippet, I am assuming it is a map-only job, since you are not using any aggregates.
Instead of using reducers, set the property pig.maxCombinedSplitSize:
REGISTER udf-lib-1.0-SNAPSHOT.jar;
DEFINE XPath com.blah.udfs.XPath();
docs = LOAD '$input' USING com.blah.storage.XMLLoader('root') as (content:chararray);
results = FOREACH docs GENERATE XPath(content, 'root/id'), XPath(content, 'root/otherField'), content;
store results into '$output';
exec;
set pig.maxCombinedSplitSize 1000000000; -- 1 GB (size given in bytes)
x = load '$output' using PigStorage();
store x into '$output2' using PigStorage();
Setting the pig.maxCombinedSplitSize property makes sure each mapper reads around 1 GB of data, and the code above then works as an identity-mapper job, which helps you write the data out in roughly 1 GB part-file chunks.
The application I test has some areas where it requires unique data. Specifically, the application will generate a request number that can only be used once. After my test runs I must manually update my datapool reference for this number. Is there any way, using Java, that I can get the information stored in my datapool, increase the value by one, and then save the data back to the datapool? This way I can keep RFT in sync with my application in regard to this number.
Here is an example how to read a value from the datapool, increment it by 1, and save it back to the datapool. It is an adapted example from the book Software Test Engineering with IBM Rational Functional Tester. The original source code is from chapter 5 (and can be downloaded from the book's homepage).
// some imports
import org.eclipse.hyades.edit.datapool.IDatapoolCell;
import org.eclipse.hyades.edit.datapool.IDatapoolEquivalenceClass;
import org.eclipse.hyades.execution.runtime.datapool.IDatapool;
import org.eclipse.hyades.execution.runtime.datapool.IDatapoolRecord;
int value = dpInt("value");
value++;
java.io.File dpFile = new java.io.File((String) getOption(IOptionName.DATASTORE), "SomeDatapool.rftdp");
IDatapool dp = dpFactory().load(dpFile, true);
IDatapoolEquivalenceClass equivalenceClass = (IDatapoolEquivalenceClass) dp.getEquivalenceClass(dp
.getDefaultEquivalenceClassIndex());
IDatapoolRecord record = equivalenceClass.getRecord(0);
IDatapoolCell cell = (IDatapoolCell) record.getCell(0);
cell.setCellValue(value);
DatapoolFactory factory = DatapoolFactory.get();
factory.save((org.eclipse.hyades.edit.datapool.IDatapool) dp);
I think it is quite a lot of code to simply change one value—maybe it is easier to use some other method like writing the value to a normal text file.
I am using JMeter v2.5.
I need to get data from the responses of the test and extract data from it (which I am doing using regular exp extractor). How do I store this extracted data to a file?
Just solved a similar problem. After getting the data using a regular expression extractor, add a BeanShell PostProcessor element. Use the code below to write the variables to a file:
name = vars.get("name");
email = vars.get("email");
log.info(email); // if you want to log something to jmeter.log file
// Pass true if you want to append to existing file
// If you want to overwrite, then don't pass the second argument
f = new FileOutputStream("/my/file/path/result.csv", true);
p = new PrintStream(f);
this.interpreter.setOut(p);
print(name + "," + email);
f.close();
import org.apache.jmeter.services.FileServer;
String path=FileServer.getFileServer().getBaseDir();
name1= vars.get("user_Name_value");
name2= vars.get("UserId_value");
f = new FileOutputStream("E://csvfile/result.csv", true); //spec-ify true if you want to overwrite file. Keep blank otherwise.
p = new PrintStream(f);
this.interpreter.setOut(p);
p.println(name1+"," +name2);
f.close();
This worked for me; I hope it will work for you also.
If you just want to write extracted variables to the CSV results file, then just add the variables you want to user.properties:
sample_variables=name,email
As per the documentation:
https://jmeter.apache.org/usermanual/properties_reference.html#results_file_config
They will be appended as the last column of the CSV results file.
You have a couple of options:
You can tally the results by adding an Aggregate Report listener to your thread group => Add Listener => Aggregate Report.
You can get raw results by adding a Simple Data Writer listener to your thread group => Add Listener => Simple Data Writer.
Hope this helps.
You may use https://jmeter-plugins.org/wiki/FlexibleFileWriter/ with sample variables set up,
or with the fake Dummy Sampler.
Either way, Flexible File Writer is good for writing data into a file.