Robot Framework: How do I direct my test results to an external file?

I have a functioning Robot Framework test that searches for identified elements in a list and then logs their presence or absence to the console. This has been working just fine for me so far. But now I need that console output directed to a file.
I have tried the Log keyword, the Log Many keyword, and Append To File. I'm wondering at this point if my issue is the list/search logic itself. I can log or append individual commands no problem, and even in the generated log.html file only those basic log calls show up, not the console output. Here's the keyword in question. Just to note, the search logic is sound; my problem is how to log what normally shows in the console to a file.
*** Keywords ***
Test Keyword
    Log    "TEST MENU ----"
    ${StaList}=    Create List    test1    test2    test3    test4    test5    test6
    FOR    ${a}    IN    @{StaList}
        ${p}=    Run Keyword And Return Status    Page Should Contain Element    xpath=//*[contains(text(), "${a}")]
        Run Keyword If    ${p}    Log    "(${a}) X"    ELSE    Log    "(${a}) "
    END
When I run this with Log To Console, this is what I get: a running list showing me if an element is present (with X) or absent (without X).
"TEST MENU ----"
"(test1) X"
"(test2) "
"(test3) X"
This works fine if it's just me running it, but I need this output sent to a text file to deliver to my team. I've been at this for a while now and need some help. Anybody have any ideas? Thanks so much!

Log To Console does just that: it shows in the console but not in log.html.
What you want is to duplicate your step with Append To File, so you have the output both in the console and in the file.
You need to add \n (or the built-in ${\n}) to your output strings.
EDIT: Here is a fully working example:
*** Settings ***
Suite Setup       Create and Open Page
Suite Teardown    Close All Browsers
Library           SeleniumLibrary
Library           OperatingSystem

*** Test Cases ***
Example Test
    Verify Elements

*** Keywords ***
Verify Elements
    Log To Console    "TEST MENU ----"\n
    Create File    test_menu.txt    TEST MENU ----\n
    ${StaList}=    Create List    test1    test2    test3    test4    test5    test6
    FOR    ${a}    IN    @{StaList}
        ${p}=    Run Keyword And Return Status    Page Should Contain Element    xpath=//*[contains(text(), "${a}")]
        Run Keyword If    ${p}    Log To Console    "(${a}) X"\n
        ...    ELSE    Log To Console    "(${a}) "\n
        Run Keyword If    ${p}    Append To File    test_menu.txt    (${a}) X\n
        ...    ELSE    Append To File    test_menu.txt    (${a}) \n
    END
    Log    The result is test_menu.txt    html=True

Create and Open Page
    Create File    my_mock.html
    Append To File    my_mock.html    <HTML>\n \ \ \ <BODY>\n \ \ \ \ \ \ \ <p>test2</p>\n \ \ \ \ \ \ \ <p>test4</p>\n \ \ \ \ \ \ \ <p>test5</p>\n \ \ \ </BODY>\n</HTML>
    Open Browser    file://${CURDIR}/my_mock.html
You should replace Run Keyword If with IF/ELSE/END blocks.
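For example, the loop body above could become something like this (a sketch, untested; native IF/ELSE/END requires Robot Framework 4.0 or newer):
FOR    ${a}    IN    @{StaList}
    ${p}=    Run Keyword And Return Status    Page Should Contain Element    xpath=//*[contains(text(), "${a}")]
    # IF/ELSE/END replaces the two Run Keyword If ... ELSE ... pairs
    IF    ${p}
        Log To Console    "(${a}) X"\n
        Append To File    test_menu.txt    (${a}) X\n
    ELSE
        Log To Console    "(${a}) "\n
        Append To File    test_menu.txt    (${a}) \n
    END
END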

Related

Finding broken links script (using Selenium Robot Framework): in Get Request, URLs are appended twice

I am not sure what's wrong with the Get Request below; when I run the script, the link in the Get Request gets appended twice.
Issue:
GET Request : url=http://127.0.0.1:5000//http://127.0.0.1:5000/index.html
Please see the code below and the report screenshot.
I am stuck here! I'd really appreciate the help.
*** Variables ***
${url3}       http://127.0.0.1:5000/
${BROWSER}    chrome

*** Test Cases ***
BrokenLinksTest-ForPracticeSelenium-2ndPage
    Open Browser    ${url3}    ${BROWSER}
    Maximize Browser Window
    VerifyAllLinksOn2ndPage
    Close Browser

*** Keywords ***
VerifyAllLinksOn2ndPage
    Comment    Count Number Of Links on the Page
    ${AllLinksCount}=    Get Element Count    xpath://a
    Comment    Log links count
    Log    ${AllLinksCount}
    Comment    Create a list to store link texts
    @{LinkItems}    Create List
    Comment    Loop through all links and store links value that has length more than 1 character
    : FOR    ${INDEX}    IN RANGE    1    ${AllLinksCount}-1
    \    Log    ${INDEX}
    \    ${link_text}=    Get Text    xpath=(//a)[${INDEX}]    #<-- for what ? -->
    \    ${href}=    Get Element Attribute    xpath=(//a)[${INDEX}]    href
    \    Log    ${link_text}
    \    Log To Console    ("The link text is "${link_text}" & href is "${href}" ${INDEX})
    \    ${linklength}    Get Length    ${link_text}    #<-- you are checking text not href ? -->
    \    Run Keyword If    ${linklength}>1    Append To List    ${LinkItems}    ${href}
    Log Many    ${LinkItems}
    Remove Values From List    ${LinkItems}    javascript:void(0)    #<-- don't forget checking content on list -->
    ${linkitems_length}    Get Length    ${LinkItems}
    Log Many    ${LinkItems}
    @{errors_msg}    Create List
    Create Session    secondpage    http://127.0.0.1:5000/
    : FOR    ${INDEX}    IN RANGE    ${linkitems_length}
    \    Log Many    ${LinkItems[${INDEX}]}
    \    ${ret}    Get Request    secondpage    ${LinkItems[${INDEX}]}
    \    Log To Console    ${ret}
    \    Log    ${ret}
    \    ${code}    Run Keyword And Return Status    Should Be Equal As Strings    ${ret.status_code}    200
    #\    Log To Console    "Gonna link" ${LinkItems[${INDEX}]}
    #\    Click Link    ${LinkItems[${INDEX}]}
    #\    Capture Page Screenshot
    #\    Click Link    link=${LinkItems[${INDEX}]}
    \    Run Keyword Unless    ${code}    Append To List    ${errors_msg}    error :${LinkItems[${INDEX}]} | ${ret.status_code}
    ${check}    Run Keyword And Return Status    Lists Should Be Equal    ${errors_msg}    ${EMPTY}
    Run Keyword Unless    ${check}    Fail    Link assertion failed with msg:\n@{errors_msg}
    Sleep    1
Ok, the "problem" is with these two lines:
Create Session secondpage http://127.0.0.1:5000/
and:
${ret} get request secondpage ${LinkItems[${INDEX}]}
As your screens show, your list of items (@{LinkItems}) already contains full URL links, e.g. http://127.0.0.1:5000/index.html, but the Create Session keyword adds another http://127.0.0.1:5000/ in front of each list item.
Think about it as a BASE_URL set up by the Create Session keyword plus an endpoint, e.g. /index.html. Create Session and Get Request are used together, the former setting up the BASE_URL, the latter the endpoint part of the URL. See the documentation for the Create Session keyword; it explains its second parameter:
url Base url of the server
To solve this, you'd need to store in @{LinkItems} only the part after the last / (it seems so in your case), so for example only /index.html or /shop.html.
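For illustration, a minimal sketch of trimming each stored link to an endpoint before the request (Fetch From Right from the standard String library is my suggestion, not part of the original code; Get Request as in the question's RequestsLibrary version):
*** Settings ***
Library    String
Library    RequestsLibrary

*** Keywords ***
Check Endpoint
    [Arguments]    ${full_url}
    # Keep only the part after the last "/" so that Create Session's
    # base url is not prefixed to a full url a second time
    ${endpoint}=    Fetch From Right    ${full_url}    /
    ${ret}=    Get Request    secondpage    /${endpoint}
    Should Be Equal As Strings    ${ret.status_code}    200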

BigQuery CLI: load commands stays pending

I have a csv file on my computer. I would like to load this CSV file into a BigQuery table.
I'm using the following command from a terminal:
bq load --apilog=./logs --field_delimiter=$(printf ';') --skip_leading_rows=1 --autodetect dataset1.table1 mycsvfile.csv myschema.json
The command in my terminal doesn't give any output. In the GCP interface, I see no job being created, which makes me think the request doesn't even reach GCP.
In the logs file (from the --apilog parameter) I get information about the request being made, and it ends with this:
INFO:googleapiclient.discovery:URL being requested: POST https://bigquery.googleapis.com/upload/bigquery/v2/projects/myproject/jobs?uploadType=resumable&alt=json
and that's it. No matter how long I wait, nothing happens.
You are mixing --autodetect with myschema.json; something like the following should work:
bq load --apilog=logs \
--source_format=CSV \
--field_delimiter=';' \
--skip_leading_rows=1 \
--autodetect \
dataset.table \
mycsvfile.csv
If you continue having issues, please post the content of the apilog; the line you shared doesn't seem to be an error. There should be more than one line, and it normally contains the error in a JSON structure, for instance:
"reason": "invalid",
"message": "Provided Schema does not match Table project:dataset.table. Field users is missing in new schema"
I'm not sure why you are using
--apilog=./logs
I did not find this in the bq load documentation; please clarify.
Based on that, maybe the bq load command itself could be the issue; you can try something like:
bq load \
--autodetect \
--source_format=CSV \
--skip_leading_rows=1 \
--field_delimiter=';' \
dataset1.table1 \
gs://mybucket/mycsvfile.csv \
./myschema.json
If it fails, please check your job list to find the job that was created, then use bq show to view the information about that job; there you should find an error message which can help you determine the cause of the issue.
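For example (the job ID below is a placeholder):
# List the most recent jobs to find the ID of the load job
bq ls -j -n 10
# Show the details of that job, including any error message
bq show -j bqjob_r1234_00000001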

Export test results to excel using Robot Framework and Excellibrary

I'm new to Robot Framework and I've been trying to export the results of my test to Excel, but I couldn't get the loop for writing the data to Excel right.
The logic works so that every time the element is present on the page, it is logged to the console. But at the same time, I want it written to Excel.
With the current code it only fails: it can't recognize ${my_data}.
I just put ..... to represent the code that is not shown.
*** Test Cases ***
Check data of the web
    Open Browser    ${url}    chrome
    : FOR    ${url}    IN    @{url_list}
    \    Go To    ${url}
    \    ${searched_script}=    Get Source
    \    Run Keyword And Continue On Failure    Should Contain    ${searched_script}    ${sample}
    \    Log To Console    ${url}
    \    @{site_data}=    Get WebElements
    \    Loop data    @{site_data}
    \    Push all result to excel
    Close Browser

*** Keywords ***
Loop data
    [Arguments]    @{site_data}
    : FOR    ${site_data}    IN    @{site_data}
    \    Log    ${site_data}
    \    ${my_data}=    Get Element Attribute    ${site_data}    my_data_sample
    \    Continue For Loop If    $my_data is None
    \    Run Keyword And Continue On Failure    Should Contain    ${my_data}    hello_world
    \    Log To Console    ${my_data}

Push all result to excel
    Create Excel Document    doc_id=docname
    Write Excel Rows    1    0    @{my_data}    sheet    # my_data here is not passing the data from the loop
    Save Excel Document    test.xlsx
The problem in your example is that the variable is local to the keyword that creates it, not to the test suite.
You need to add the following command to promote the value to suite scope:
Set Suite Variable    ${my_data}
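A minimal sketch of where that call could go, based on the keywords from the question (collecting the values into a list is my assumption; Collections is a standard library):
*** Settings ***
Library    Collections
Library    SeleniumLibrary

*** Keywords ***
Loop data
    [Arguments]    @{site_data}
    @{collected}=    Create List
    : FOR    ${element}    IN    @{site_data}
    \    ${value}=    Get Element Attribute    ${element}    my_data_sample
    \    Continue For Loop If    $value is None
    \    Append To List    ${collected}    ${value}
    # Promote the list so that "Push all result to excel" can read it
    Set Suite Variable    @{my_data}    @{collected}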

How to store a command output in OpenVMS

I'm having an issue writing a DCL procedure in OpenVMS: I need the DCL to call a command and capture its output (without echoing the output to the screen). Later on in the DCL I then need to print that stored output.
Here's an example:
ICE SET FASTER !This command sets my environment to the "Faster" environment.
The above command outputs this if executed directly in OpenVMS:
Initialising TEST Environment to FASTER
--------------------------------------------------------------------------------
Using Test Search rules FASTER
Using Test Search rules FASTER
--------------------------------------------------------------------------------
dcl>
So I created a DCL procedure in an attempt to wrap this output and display something more simplified. Here's my code so far:
$ !************************************************************************
$ !* Wrapper for setting ICE account. Outputs Environment
$ !************************************************************************
$ on error then goto ABORT_PROCESS
$ICE_DCL_MAIN:
$ ice set 'P1'
$ ICE SHOW
$ EXIT
$ABORT_PROCESS:
$ say "Error ICING to: " + P1
$ EXIT 2
[End of file]
In the lines above, ICE SET 'P1' is setting the ICE environment, but I don't want this output echoed to the terminal. What I do want is to write the output of ICE SHOW into a variable and then echo that out later on in the DCL (most of which I've omitted for simplification purposes).
So what should be outputted should be:
current Test Environment is DISK$DEVELOPERS:[FASTER.DEVELOP]
Instead of:
Initialising TEST Environment to FASTER
--------------------------------------------------------------------------------
Using Test Search rules FASTER
Using Test Search rules FASTER
--------------------------------------------------------------------------------
current Test Environment is DISK$DEVELOPERS:[FASTER.DEVELOP]
I've had a look through the manual and I'm getting a bit confused, so I figured I'd try here. I'd appreciate any pointers. Thanks.
EDIT
Here is what I've come up with after the comments. The problem I'm having is that when I connect to VMS using an emulator such as SecureCRT, the correct output is echoed. But when I run the DCL via my SSH2 library in .NET it doesn't output anything. I guess that's because it closes the SYS$OUTPUT stream temporarily or something?
$ !************************************************************************
$ !* Wrapper for setting ICE account. Outputs Environment
$ !************************************************************************
$ on error then goto ABORT_PROCESS
$ICE_DCL_MAIN:
$ DEFINE SYS$OUTPUT NL:
$ ice set 'P1'
$ DEASSIGN SYS$OUTPUT
$ ice show
$ EXIT
$ABORT_PROCESS:
$ say "Error ICING to: " + P1
$ EXIT 2
[End of file]
EDIT 2
So I guess I really need to clarify what I'm trying to do here. Blocking the output doesn't matter so much; I'm merely trying to capture it into a symbol, for example.
In C#, for example, you can have a method that returns a string: you'd have string myResult = vms.ICETo("FASTER"); and it would return the output and store it in the variable.
I guess I'm looking for a similar thing in VMS, so that once I've ICEd to the environment I can call:
$ environment == $ICE SHOW
But of course I get errors with that statement.
The command $ assign/user_mode Thing Sys$Output will cause output to be redirected to Thing until you $ deassign/user_mode Sys$Output or the next executable image exits. An assignment without the /USER_MODE qualifier will persist until deassigned.
Thing can be a logical name, a file specification (LOG.TXT) or the null device (NLA0:) if you simply want to flush the output.
When a command procedure is executed the output can be redirected using an /OUTPUT qualifier, e.g. $ @FOO/output=LOG.TXT.
And then there is piping ... .
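For example, to capture the single line that ICE SHOW prints into a symbol (a sketch combining the redirection above with DCL file reads; ICE is the site-specific command from the question):
$ ! Redirect the next image's output to a file; the /USER_MODE
$ ! assignment is dropped automatically when the image exits
$ ASSIGN/USER_MODE ICE.TMP SYS$OUTPUT
$ ICE SHOW
$ ! Read the captured line back into a symbol and show it
$ OPEN/READ infile ICE.TMP
$ READ infile environment
$ CLOSE infile
$ SHOW SYMBOL environment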
You can redirect the output to a temp file and then print its content later:
$ pipe write sys$output "hi" > tmp.tmp
$ ty tmp.tmp
VMS is not Unix and DCL is not Bash: you cannot easily set a DCL symbol from the output of a command.
Your ICE SHOW prints one line, correct? The first word is always "current", correct?
So you can create a hack.
First let me fake your ICE command:
$ create ice.com
$ write sys$output "current Test Environment is DISK$DEVELOPERS:[FASTER.DEVELOP]"
^Z
$
and I define a dcl$path pointing to the directory where this command procedure is,
so that I can use/fake the command ICE:
$ define dcl$path sys$disk:[]
$ ice show
current Test Environment is DISK$DEVELOPERS:[FASTER.DEVELOP]
$
Now, for what you need, create a command procedure which sets a job logical:
$ cre deflog.com
$ def/job/nolog mylog "current''p1'"
^Z
$
And I define a command "current" to run that command procedure:
$ current="#deflog """
Yes, you need three of the double quotes at the end of the line!
And finally:
$ pipe (ice show | @sys$pipe) && mysym="''f$log("mylog")'"
$ sh symb mysym
MYSYM = "current Test Environment is DISK$DEVELOPERS:[FASTER.DEVELOP]"
$
On the other hand, I don't know what you are referring to with C# and Java. Can you elaborate on that and tell us what runs where?
You can try using DEFINE /USER SYS$OUTPUT NL:.
It works only for the next command (image) and you don't need to deassign.
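For example (ICE being the site-specific command from the question):
$ ! Discard only the output of ICE SET; /USER lasts for a single image
$ DEFINE /USER SYS$OUTPUT NL:
$ ICE SET FASTER
$ ! SYS$OUTPUT is back to the terminal here, no DEASSIGN needed
$ ICE SHOW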
Sharing some of my experience here. I used the methods below to redirect output to files.
Define/assign the user output and then execute the required command/script afterwards. Output will be written to <file_path>.
$define /user sys$output <file_path>
execute your command/script
OR
assign /user <file_path> sys$output
execute your command/script
deassign sys$output
To redirect to the null device like in Unix (mentioned in the answers above), you can use 'nl:' instead of <file_path>:
define /user sys$output nl:
or
assign /user nl: sys$output

How can I view all comments posted by users in a Bitbucket repository?

On the repository home page, I can see comments posted in Recent Activity at the bottom, but it only shows 10 comments.
I want to see all the comments posted since the beginning.
Is there any way?
Comments on pull requests, issues and commits can be retrieved using Bitbucket's REST API.
However, it seems there is no way to list all of them in one place, so the only way to get them is to query the API for each PR, issue or commit of the repository.
Note that this takes a long time, since Bitbucket has seemingly set a limit on the number of API accesses to repository data: I got Rate limit for this resource has been exceeded errors after retrieving around a thousand results; after that I could retrieve only about one entry per second elapsed since the last rate-limit error.
Finding the API URL to the repository
The first step is to find the URL to the repo. For private repositories, it is necessary to get authenticated by providing username and password (using curl’s -u switch). The URL is of the form:
https://api.bitbucket.org/2.0/repositories/{repoOwnerName}/{repoName}
Running git remote -v from the local git repository should provide the missing values. Check the constructed URL (below referred to as $url) by verifying that repository information is correctly retrieved from it as JSON data: curl -u username $url.
Fetching comments of commits
Comments of a commit can be accessed at $url/commit/{commitHash}/comments.
The resulting JSON data can be processed by a script. Beware that the results are paginated.
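As a rough sketch of what following the pagination could look like (jq is my assumption, not part of the original script; $pw and the URL placeholders are as elsewhere in this answer):
# Follow Bitbucket's "next" links until the last page
url="https://api.bitbucket.org/2.0/repositories/{repoOwnerName}/{repoName}/commit/{commitHash}/comments"
while [ -n "$url" ] && [ "$url" != "null" ]; do
    page=$(curl -s -u username:"$pw" "$url")
    printf '%s\n' "$page"                       # process one page of results here
    url=$(printf '%s' "$page" | jq -r '.next')  # the field is absent on the last page
done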
Below I simply extract the number of comments per commit. It is indicated by the value of the member size of the retrieved JSON object; I also request a partial response by adding the GET parameter fields=size.
My script getNComments.sh:
#!/bin/sh
pw=$1
id=$2
json=$(curl -s -u username:"$pw" \
https://api.bitbucket.org/2.0/repositories/{repoOwnerName}/{repoName}/commit/$id/comments'?fields=size')
printf '%s' "$json" | grep -q '"type": "error"' \
&& printf "ERROR $id\n" && exit 0
nComments=$(printf '%s' "$json" | grep -o '"size": [0-9]*' | cut -d' ' -f2)
: ${nComments:=EMPTY}
checkNumeric=$(printf '%s' "$nComments" | tr -dc 0-9)
[ "$nComments" != "$checkNumeric" ] \
&& printf >&2 "!ERROR! $id:\n%s\n" "$json" && exit 1
printf "$nComments $id\n"
To use it, taking into account the possibility for the error mentioned above:
A) Prepare input data. From the local repository, generate the list of commits as wanted (run git fetch -a first if the local git repo needs updating); check out git help rev-list for how it can be customised.
git rev-list --all | sort > sorted-all.id
cp sorted-all.id remaining.id
B) Run the script. Note that the password is passed here as a parameter – so first assign it to a variable safely using stty -echo; IFS= read -r passwd; stty echo, in one line; also see security considerations below. The processing is parallelised onto 15 processes here, using the option -P.
< remaining.id xargs -P 15 -L 1 ./getNComments.sh "$passwd" > commits.temp
C) When the rate limit is reached, that is when getNComments.sh prints !ERROR!, kill the above command (Ctrl-C) and execute the commands below to update the input and output files. Wait a while for the request limit to increase, then re-run the step B) command and repeat until all the data is processed (that is, until wc -l remaining.id returns 0).
cat commits.temp >> commits.result
cut -d' ' -f2 commits.result | sort | comm -13 - sorted-all.id > remaining.id
D) Finally, you can get the commits which received comments with:
grep '^[1-9]' commits.result
Fetching comments of pull requests and issues
The procedure is the same as for fetching commits’ comments, but for the following two adjustments:
Edit the script to replace in the URL commit by pullrequests or by issues, as appropriate;
Let $n be the number of issues/PRs to search. The git rev-list command above becomes: seq 1 $n > sorted-all.id
The total number of PRs in the repository can be obtained with:
curl -su username $url/pullrequests'?state=&fields=size'
and, if the issue tracker is set up, the number of issues with:
curl -su username $url/issues'?fields=size'
Hopefully, the repository has few enough PRs and issues so that all data can be fetched in one go.
Viewing comments
They can be viewed normally via the web interface on their commit/PR/issue page at:
https://bitbucket.org/{repoOwnerName}/{repoName}/commits/{commitHash}
https://bitbucket.org/{repoOwnerName}/{repoName}/pull-requests/{prId}
https://bitbucket.org/{repoOwnerName}/{repoName}/issues/{issueId}
For example, to open all PRs with comments in firefox:
awk '/^[1-9]/{print "https://bitbucket.org/{repoOwnerName}/{repoName}/pull-requests/"$2}' PRs.result | xargs firefox
Security considerations
Arguments passed on the command line are visible to all users of the system, via ps ax (or /proc/$PID/cmdline). Therefore the bitbucket password will be exposed, which could be a concern if the system is shared by multiple users.
There are three commands getting the password from the command line: xargs, the script, and curl.
It appears that curl tries to hide the password by overwriting its memory, but it is not guaranteed to work, and even if it does, it leaves it visible for a (very short) time after the process starts. On my system, the parameters to curl are not hidden.
A better option could be to pass the sensitive information through environment variables. They should be visible only to the current user and root via ps axe (or /proc/$PID/environ); although it seems that there are systems that let all users access this information (do a ls -l /proc/*/environ to check the environment files’ permissions).
In the script simply replace the lines pw=$1 id=$2 with id=$1, then pass pw="$passwd" before xargs in the command line invocation. It will make the environment variable pw visible to xargs and all of its descendent processes, that is the script and its children (curl, grep, cut, etc), which may or may not read the variable. curl does not read the password from the environment, but if its password hiding trick mentioned above works then it might be good enough.
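With that change, the step B) invocation could look like this (a sketch under the assumptions above):
# pw is placed in the environment of xargs and its children,
# so it does not appear in the ps output of the command line
< remaining.id pw="$passwd" xargs -P 15 -L 1 ./getNComments.sh > commits.temp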
There are ways to avoid passing the password to curl via the command line, notably via standard input using the option -K -. In the script, replace curl -s -u username:"$pw" with printf -- '-s\n-u "%s"\n' "$authinfo" | curl -K - and define the variable authinfo to contain the data in the format username:password. Note that this method needs printf to be a shell built-in to be safe (check with type printf), otherwise the password will show up in its process arguments. If it is not a built-in, try with print or echo instead.
A simple alternative to an environment variable that will not appear in ps output in any case is via a file. Create a file with read/write permissions restricted to the current user (chmod 600), and edit it so that it contains username:password as its first line. In the script, replace pw=$1 with IFS= read -r authinfo < "$1", and edit it to use curl’s -K option as in the paragraph above. In the command line invocation replace $passwd with the filename.
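Putting those two edits together, the top of the script might look like this (a sketch; $1 is now the credentials file rather than the password):
#!/bin/sh
# $1: mode-600 file whose first line is username:password; $2: commit id
IFS= read -r authinfo < "$1"
id=$2
# Pass -u via curl's stdin config (-K -) instead of the command line
json=$(printf -- '-s\n-u "%s"\n' "$authinfo" | curl -K - \
    "https://api.bitbucket.org/2.0/repositories/{repoOwnerName}/{repoName}/commit/$id/comments?fields=size")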
The file approach has the drawback that the password will be written to disk (note that files in /proc are not on the disk). If this too is undesirable, it is possible to pass a named pipe instead of a regular file:
mkfifo pipe
chmod 600 pipe
# make sure printf is a builtin, or use an equivalent instead
(while :; do printf -- '%s\n' "username:$passwd"; done) > pipe&
pid=$!
exec 3<pipe
Then invoke the script passing pipe instead of the file. Finally, to clean up do:
kill $pid
exec 3<&-
This will ensure the authentication info is passed directly from the shell to the script (through the kernel), is not written to disk and is not exposed to other users via ps.
You can go to Commits and see the top line for each commit; you will need to click on each one to see further information.
If I find a way to see all without drilling into each commit, I will update this answer.