Running an external Selenium script from Groovy [SoapUI]

I see online that the way to run a simple Python file from Groovy is:
def cmdArray2 = ["python", "/Users/test/temp/hello.py"]
def cmd2 = cmdArray2.execute()
cmd2.waitForOrKill(1000)
log.info cmd2.text
If hello.py contains print("Hello"), it seems to work fine.
But when I try to run a .py file containing the Selenium code below, nothing happens.
from selenium import webdriver
from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver.common.keys import Keys
import time
driver = webdriver.PhantomJS(executable_path=r'C:\phantomjs\bin\phantomjs.exe')
driver.get("http://www.google.com") # Load page
# 1 & 2
title = driver.title
print title, len(title)
driver.quit()
Any help would be appreciated.
FYI: I have tried all the browsers, including headless ones, but no luck.
Also, I am able to run the Selenium script fine from the command line. But when I run it from SoapUI, I get no errors; the script runs and I do not see anything in the log.

Most likely you do not see any errors from your Python script because they are printed to stderr, not stdout (which is all cmd2.text reads). Try this Groovy script to check the error messages from the Python script's stderr:
def cmdArray2 = ["python", "/Users/test/temp/hello.py"]
def process = new ProcessBuilder(cmdArray2).redirectErrorStream(true).start()
process.inputStream.eachLine {
    log.warn(it)
}
process.waitFor()
return process.exitValue()
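You can see the same stdout/stderr split from the Python side; a minimal sketch (Python 3, nothing Selenium-specific) of which stream each kind of output lands on:
import sys

print("this goes to stdout")               # captured by cmd2.text
sys.stderr.write("this goes to stderr\n")  # invisible to cmd2.text
raise RuntimeError("uncaught exceptions print their traceback to stderr too")
Also note that waitForOrKill(1000) kills the process after one second; starting a browser usually takes far longer than that, so a more generous timeout is worth trying as well.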
Another thing you might want to try is using Selenium directly from Groovy, without calling an external Python script.

Related

Robot Framework: running test cases in batch makes them fail, while running a single test case works

I have 22 Robot Framework test cases in a folder. All the test cases work perfectly if run individually.
If I try to run them all together with the command robot . (from the TestCases directory), most of them fail.
The ones that fail make use of a Python script to get the performance log of the webpage, with the following code:
d['goog:loggingPrefs'] = {'performance': 'ALL'}
The error I get is:
InvalidArgumentException: Message: invalid argument: log type 'performance' not found
The strange thing is that this happens only when I run all the test cases together; otherwise the error does not appear and the log is retrieved correctly.
How can I fix this so the test cases also work when run in batch?
This is the file structure (screenshot not included): each .robot file contains only one test case. The custom keywords are defined in driver.py and called at the beginning of each test case with:
*** Settings ***
Library    SeleniumLibrary
Library    driver.py

*** Variables ***

*** Keywords ***
Get Logs2
    [Arguments]    ${arg1}
    Get Chrome Browser Logging Capability
    Find String
The code that throws the error is in the file "driver.py":
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities

def get_chrome_browser_logging_capability():
    d = DesiredCapabilities.CHROME
    d['goog:loggingPrefs'] = {'browser': 'ALL'}
    d['goog:loggingPrefs'] = {'performance': 'ALL'}
    return d

def get_logs2(driver):
    f = open("demofile2.txt", "w")
    a = driver.get_log('performance')
    for entry in a:
        f.write(str(entry))
        f.write('\n')
    f.close()

def find_string(mystring):
    f = open("demofile2.txt", "r")
    for x in f:
        for element in mystring:
            if element in x:
                print("found")
                return True
    f.close()
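Two things in get_chrome_browser_logging_capability are worth flagging, though neither is confirmed as the cause of the batch failures: the second assignment to d['goog:loggingPrefs'] replaces the first, and DesiredCapabilities.CHROME is a class-level dict shared by every test running in the same process. A hedged sketch of a more defensive variant:
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities

def get_chrome_browser_logging_capability():
    # Copy the shared class-level dict so one test case cannot leak
    # capabilities into the next, and set both log types in a single
    # assignment instead of overwriting the key.
    d = DesiredCapabilities.CHROME.copy()
    d['goog:loggingPrefs'] = {'browser': 'ALL', 'performance': 'ALL'}
    return d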

using scrapy creating a spider and unable to store data to csv

I'm pretty new to Scrapy. I created a spider for the Amazon URL below, but I am unable to get the output into the CSV.
Here is my code:
import scrapy

class AmazonMotoMobilesSpider(scrapy.Spider):
    name = "amazon"
    start_urls = ['https://www.amazon.in/Samsung-Mobiles/b/ref=amb_link_47?ie=UTF8&node=4363159031&pf_rd_m=A1VBAL9TL5WCBF&pf_rd_s=merchandised-search-leftnav&pf_rd_r=NGA52N9RAWY1W103MPZX&pf_rd_r=NGA52N9RAWY1W103MPZX&pf_rd_t=101&pf_rd_p=1ce3e975-c6e8-479a-8485-2e490b9f58a9&pf_rd_p=1ce3e975-c6e8-479a-8485-2e490b9f58a9&pf_rd_i=1389401031',]

    def parse(self, response):
        product_name = response.xpath('//h2[contains(@class,"a-size-base s-inline s-access-title a-text-normal")]/text()').extract()
        product_price = response.xpath('//span[contains(@class,"a-size-base a-color-price s-price a-text-bold")]/text()').extract()
        yield {'product_name'product_name,'product_price': product_price}
My shell is showing this result:
len(response.xpath('//h2[contains(@class,"a-size-base s-inline s-access-title a-text-normal")]/text()'))
24
Do I need to change any settings?
To generate results in CSV you need to run the crawler with an output option:
scrapy crawl -o results.csv spidername
Only when you activate an output are the results sent to the file; otherwise they are just processed by your pipelines. If you are not saving them anywhere through a pipeline, they will only appear in the terminal's console logs.
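If you would rather not pass -o on every run, the same export can be configured in the project's settings.py; a sketch using the FEEDS setting (available since Scrapy 2.1; older versions use FEED_FORMAT and FEED_URI):
# settings.py (Scrapy 2.1+)
FEEDS = {
    'results.csv': {'format': 'csv'},
}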
I think it's because your yield has a syntax error in the dictionary.
Change this
yield {'product_name'product_name,'product_price': product_price}
to
yield {'product_name':product_name,'product_price': product_price}
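One more hedged note: extract() returns lists, so even with the colon fixed this yields a single item whose fields are whole lists. If the goal is one CSV row per product, a sketch that pairs the two lists (selectors unchanged from the question):
    def parse(self, response):
        product_name = response.xpath('//h2[contains(@class,"a-size-base s-inline s-access-title a-text-normal")]/text()').extract()
        product_price = response.xpath('//span[contains(@class,"a-size-base a-color-price s-price a-text-bold")]/text()').extract()
        # zip pairs each name with its price; one dict per CSV row
        for name, price in zip(product_name, product_price):
            yield {'product_name': name, 'product_price': price}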

User input produces blank file with no result

I am playing around with Selenium to get screenshots of websites to view them safely.
The original code I found and tweaked is like this, and it works:
from selenium import webdriver
br = webdriver.PhantomJS()
br.get('http://www.google.com')
br.save_screenshot('screenshot.png')
br.quit
It gives you a screenshot of the website.
I wanted to get user input so that I did not have to vi the file every time I need a screenshot of a URL. This is what I changed the code to:
#!/usr/bin/python
import re
import sys
from selenium import webdriver
br = webdriver.PhantomJS()
br.get_user_input =raw_input('Enter URL:')
br.save_screenshot('screenshot.png')
br.quit
Now it asks for the URL, you input it, the program runs and finishes, and it even creates the screenshot.png file, but it's blank.
Try using Chrome Canary (if you need a headless browser); you don't have to use Selenium.
make_screen = '''#!/usr/bin/env bash
test -f ./chrome-linux/chrome && echo "chrome exists" || unzip chrome-linux.zip
./chrome-linux/chrome --headless --disable-gpu --virtual-time-budget=7000 --hide-scrollbars --screenshot=dir/screens/{screen_prod}.png --window-size=1200,2000 {link_prod}'''.format(screen_prod = screen_prod, link_prod = link_prod)
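If you would rather stay in Python, the same headless idea works through Selenium's own Chrome driver; a minimal sketch, assuming chromedriver is installed and on your PATH:
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

opts = Options()
opts.add_argument('--headless')
opts.add_argument('--window-size=1200,2000')
br = webdriver.Chrome(options=opts)  # older Selenium versions use chrome_options=
br.get('http://www.google.com')
br.save_screenshot('screenshot.png')
br.quit()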
Here is the answer to your question:
There is a small bug in your code: you haven't called the get() method on your WebDriver instance br.
As a solution (and to avoid violating the Law of Demeter), I have broken that line into two. The first line takes the URL from the user as input; the next line passes the URL as an argument to the get() method. The script then takes a proper snapshot and saves it within the Screenshots sub-directory. Here is the working code block:
from selenium import webdriver
br = webdriver.PhantomJS(executable_path="C:\\Utility\\phantomjs-2.1.1-windows\\bin\\phantomjs.exe")
user_input = input('Enter URL : ')
br.get(str(user_input))
br.save_screenshot('./Screenshots/my_next_screenshot.png')
br.quit()
The output on my console is:
Enter URL : http://google.com
Process finished with exit code 0
Let me know if this answers your question.

Update spider code controlled by scrapyd

What is the proper way to install/activate a spider that is controlled by scrapyd?
I install a new spider version using scrapyd-deploy; a job is currently running. Do I have to stop the job using cancel.json, then schedule a new job?
Answering my own question:
I wrote a little python script that stops all running spiders. After running this script, I run scrapyd-deploy, then relaunch my spiders.
I am still not sure if this is the way Scrapy pros would do it, but it looks sensible to me.
This is the script (replace the value of PROJECT to suit yours); it requires the requests package (pip install requests):
import requests
import sys
import time

PROJECT = 'crawler'  # replace with your project's name

resp = requests.get("http://localhost:6800/listjobs.json?project=%s" % PROJECT)
list_json = resp.json()
failed = False
count = len(list_json["running"])
if count == 0:
    print("No running spiders found.")
    sys.exit(0)
for sp in list_json["running"]:
    # cancel this spider
    r = requests.post("http://localhost:6800/cancel.json", data={"project": PROJECT, "job": sp["id"]})
    print("Sent cancel request for %s %s" % (sp["spider"], sp["id"]))
    print("Status: %s" % r.json())
    if r.json()["status"] != "ok":
        print("ERROR: Failed to stop spider %s" % sp["spider"])
        failed = True
if failed:
    sys.exit(1)
# poll running spiders and wait until all spiders are down
while count:
    time.sleep(2)
    resp = requests.get("http://localhost:6800/listjobs.json?project=%s" % PROJECT)
    count = len(resp.json()["running"])
    print("%d spiders still running" % count)

Android MonkeyRunner unable to enter text

I recently started using MonkeyRunner to test the UI of my Android application (I am also using Espresso but wanted to play around with MonkeyRunner). The problem that I am having is that I am unable to enter text into EditText fields using the automation script.
The script navigates through my app perfectly, but it doesn't seem to actually enter any text when the device.type() command is called.
Please find my script below.
from com.android.monkeyrunner import MonkeyRunner, MonkeyDevice
from com.android.monkeyrunner.easy import EasyMonkeyDevice, By
import commands
import sys
import os

# starting the application and test
print "Starting the monkeyrunner script"

# connect to the current device and return a MonkeyDevice object
device = MonkeyRunner.waitForConnection()
easy_device = EasyMonkeyDevice(device)

apk_path = device.shell('pm path com.mysample')
if apk_path.startswith('package:'):
    print "application installed."
else:
    print "not installed, install APK"
    device.installPackage('/MySample/MySample.apk')

package = "com.mysample"
activity = ".SampleActivity"
print "Package: " + package + " Activity: " + activity

print "starting application...."
device.startActivity(component=package + '/' + activity)
print "...component started"

device.touch(205, 361, "DOWN_AND_UP")
device.type("This is sample text")
MonkeyRunner.sleep(1)

result = device.takeSnapshot()
result.writeToFile("images/testimage.png", 'png')
As you can see from the script above, the text This is sample text should be placed in the EditText box. Both the emulator and the screenshot that is taken show no text in the text field.
Am I missing a step or just doing something incorrectly?
Any help would be greatly appreciated!
I would rather use AndroidViewClient/culebra to simplify the task.
Basically, you can connect your device with adb and then run
culebra -VC -d on -t on -o myscript.py
The script obtains references to all of the visible Views. Edit the script and add at the end:
no_id10.type('This is sample text')
no_id10.writeImageToFile('/tmp/image.png')
No need to worry about View coordinates, no need to touch and type, no need to add sleeps, etc.
NOTE: this uses no_id10 as an example; the id for your EditText could be different.
First of all, I would not use the MonkeyRunner.sleep command; I would rather use the time package and the time.sleep command. Just import the package
import time
and you should be good to go.
Moreover, I suggest you wait some time between device.touch and device.type. Try:
device.touch(205,361, "DOWN_AND_UP")
time.sleep(1)
device.type("This is sample text")