Functional/acceptance testing of multilingual websites

Are there any best practices for creating acceptance and functional tests for multilingual sites? I would like to test content and functionality in multiple languages without duplicating the tests.
If we take Behat, how should the test environment be arranged from the very beginning?

You could use Scenario Outlines (http://behat.readthedocs.org/en/v2.5/guides/1.gherkin.html#scenario-outlines) and drive the same steps from a per-language Examples table:
| language | label1 | label2 |
| es       | Hola   | Adios  |
| en       | Hello  | Bye    |
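To make this concrete, here is a minimal sketch of a feature file. The URL pattern and the steps are assumptions on my side (they rely on Mink's generic step definitions rather than anything from your project); each row of the Examples table runs the scenario once for that language:

# features/labels.feature -- hypothetical example, adjust paths and steps to your app
Feature: Translated labels
  Scenario Outline: The home page shows labels in the visitor's language
    Given I am on "/<language>/"
    Then I should see "<label1>"
    And I should see "<label2>"

    Examples:
      | language | label1 | label2 |
      | es       | Hola   | Adios  |
      | en       | Hello  | Bye    |

This way the scenario is written once and Behat repeats it for every language row; adding a new language is just a matter of adding a row (and, if needed, a translated fixture).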

IntelliJ IDEA 2021.2 cannot render a Markdown table currently?

I use IDEA 2021.2 and the corresponding Markdown plugin, but a simple table does not display in the preview pane:
Why? I found that somebody related the problem to JavaFX; however, I think that occurred in older versions of IDEA, and I cannot find a JavaFX renderer option in my version.
How can I solve it?
You are using the wrong Markdown syntax.
The following code works fine (the separator row needs a minimum of three - characters under each header cell):
| Column 1 | Column 2 |
| :--- | :--- |
| AAA | BBB |

Chrome Selenium IDE 3.4.5 extension "execute script" command not storing result as a variable

I am trying to create a random number generator:
Command        | Target                           | Value
store          | tom                              | tester
store          | dominic                          | envr
execute script | Math.floor(Math.random()*11111); | number
type           | id=XXX                           | ${tester}.${dominic}.${number}
Expected result:
tom.dominic.0 <-- random number
Instead I get this:
tom.dominic.${number}
I looked through all the resources, and it seems the recent Selenium update has changed the approach; I cannot find a solution.
I realize this question is two years old, but it's a fairly common one, so I'll answer it here.
If you want to assign the result of a script run by the "execute script" command in Selenium IDE to a Selenium variable, you have to return the value from the JavaScript. So instead of
execute script | Math.floor(Math.random()*11111); | number
you need
execute script | return Math.floor(Math.random()*11111); | number
Also, in your final type command that puts the three pieces together, you need ${envr} instead of ${dominic}.

Can GraphDB load 10 million statements with OWL reasoning?

I am struggling to load most of the Drug Ontology OWL files and most of the ChEBI OWL files into a GraphDB Free v8.3 repository with Optimized OWL-Horst reasoning on.
Is this possible? Should I do something other than "be patient"?
Details:
I'm using the loadrdf offline bulk loader to populate an AWS r4.16xlarge instance with 488.0 GiB of RAM and 64 vCPUs.
Over the weekend, I played around with different pool buffer sizes and found that most of these files individually load fastest with a pool buffer of 2,000 or 20,000 statements instead of the suggested 200,000. I also added -Xmx470g to the loadrdf script. Most of the OWL files would load individually in less than one hour.
Around 10 pm EDT last night, I started loading all of the files listed below simultaneously. Now it's 11 hours later, and there are still millions of statements to go. The load rate is around 70 statements per second now. It appears that only 30% of my RAM is being used, but the CPU load is consistently around 60.
Are there websites that document other people doing something at this scale?
Should I be using a different reasoning configuration? I chose this configuration because it was the fastest-loading OWL configuration, based on my experiments over the weekend. I think I will need to look for relationships that go beyond rdfs:subClassOf.
Files I'm trying to load:
+-------------+------------+---------------------+
| bytes | statements | file |
+-------------+------------+---------------------+
| 471,265,716 | 4,268,532 | chebi.owl |
| 61,529 | 451 | chebi-disjoints.owl |
| 82,449 | 1,076 | chebi-proteins.owl |
| 10,237,338 | 135,369 | dron-chebi.owl |
| 2,374 | 16 | dron-full.owl |
| 170,896 | 2,257 | dron-hand.owl |
| 140,434,070 | 1,986,609 | dron-ingredient.owl |
| 2,391 | 16 | dron-lite.owl |
| 234,853,064 | 2,495,144 | dron-ndc.owl |
| 4,970 | 28 | dron-pro.owl |
| 37,198,480 | 301,031 | dron-rxnorm.owl |
| 137,507 | 1,228 | dron-upper.owl |
+-------------+------------+---------------------+
@MarkMiller, you can take a look at the Preload tool, which is part of the GraphDB 8.4.0 release. It's specially designed to handle large amounts of data at a constant speed. Note that it works without inference, so you'll need to load your data and then change the ruleset and re-infer the statements.
http://graphdb.ontotext.com/documentation/free/loading-data-using-preload.html
Just typing out @Konstantin Petrov's correct suggestion with tidier formatting. All of these queries should be run in the repository of interest... at some point in working this out, I misled myself into thinking that I should be connected to the SYSTEM repo when running these queries.
All of these queries also require the following prefix definition:
prefix sys: <http://www.ontotext.com/owlim/system#>
This doesn't directly address the timing/performance of loading large datasets into an OWL reasoning repository, but it does show how to switch to a higher level of reasoning after loading lots of triples into a no-inference ("empty" ruleset) repository.
You could start by querying for the current reasoning level/ruleset, and then run this same SELECT statement after each INSERT.
SELECT ?state ?ruleset {
?state sys:listRulesets ?ruleset
}
Add a predefined ruleset
INSERT DATA {
_:b sys:addRuleset "rdfsplus-optimized"
}
Make the new ruleset the default
INSERT DATA {
_:b sys:defaultRuleset "rdfsplus-optimized"
}
Re-infer... could take a long time!
INSERT DATA {
[] <http://www.ontotext.com/owlim/system#reinfer> []
}

Get the version of the RichEdit library

ALL,
Is it possible to get the version of the RichEdit control the program uses?
| Version | Class name    | Library      | Shipped with   | New features                                                                      |
|---------|---------------|--------------|----------------|------------------------------------------------------------------------------------|
| 1.0     | "RICHEDIT"    | Riched32.dll | Windows 95     |                                                                                   |
| 2.0     | "RichEdit20W" | Riched20.dll | Windows 98     | ITextDocument                                                                     |
| 3.0     | "RichEdit20W" | Riched20.dll | Windows 2000   | ITextDocument2                                                                    |
| 3.1     | "RichEdit20W" | Riched20.dll | Server 2003    |                                                                                   |
| 4.1     | "RICHEDIT50"  | Msftedit.dll | Windows XP SP1 | tomApplyTmp                                                                       |
| 7.5     | "RICHEDIT50"  | Msftedit.dll | Windows 8      | ITextDocument2 (new), ITextDocument2Old, Spell checking, Ink support, Office Math |
| 8.5     | "RICHEDIT50"  | Msftedit.dll | Windows 10     | LocaleName, more image formats                                                    |
I know I can just have some variable and assign it appropriately depending on whether the Msftedit.dll library is loaded or not. However, if I load RichEd20.dll, I can get either the RichEdit 2 or the RichEdit 3 implementation, and they are quite different; a lot of stuff was added in the latter.
If I load Msftedit.dll, there are features in 7.5 that would not be available in earlier versions (e.g. automatic spell checking).
It's even possible for the same process to have all three DLLs loaded, and to use all three versions of RichEdit at the same time:
"RICHEDIT" → 1.0
"RichEdit20W" → 2.0, 3.0
"RICHEDIT50" → 4.1, 7.5, 8.5
Given a RichEdit control (e.g. WinForms RichTextBox, WPF RichTextBox, WinRT RichEditBox, VCL TRichEdit), is there a way to determine its version?
Or maybe I can somehow differentiate them by Windows version where it is available?
If you are using C++, you may find the following snippet useful for reading out the class name:
// className will receive "RICHEDIT", "RichEdit20W" or "RICHEDIT50" (see the table above)
TCHAR className[MAX_PATH];
GetClassName(GetRichEditCtrl().GetSafeHwnd(), className, _countof(className));
GetRichEditCtrl() is a function on another control; you may need to substitute whatever gives you an HWND for the control.
Another method is to use a tool like Spy++ to inspect the class name.
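The class name alone cannot tell 2.0 from 3.0 (both register "RichEdit20W"), or 4.1 from 7.5/8.5 (all "RICHEDIT50"), so one further option is to check the file version of whichever RichEdit DLL is actually loaded in the process. The sketch below is my own illustration using the standard Win32 version-info APIs, not something from the RichEdit documentation; note that the DLL's file version does not map one-to-one onto the control versions in the table, so you would still combine it with the class name and/or the Windows version:

#include <windows.h>
#include <vector>
#pragma comment(lib, "Version.lib")

// Reads the major.minor file version of a module that is already loaded in this
// process (e.g. L"Riched20.dll" or L"Msftedit.dll"). Returns false if the module
// is not loaded or has no version resource.
bool GetLoadedDllVersion(const wchar_t* dllName, WORD& major, WORD& minor)
{
    HMODULE hMod = GetModuleHandleW(dllName);   // only finds an already-loaded DLL, does not load it
    if (!hMod) return false;

    wchar_t path[MAX_PATH];
    if (!GetModuleFileNameW(hMod, path, MAX_PATH)) return false;

    DWORD handle = 0;
    DWORD size = GetFileVersionInfoSizeW(path, &handle);
    if (!size) return false;

    std::vector<BYTE> buffer(size);
    if (!GetFileVersionInfoW(path, 0, size, buffer.data())) return false;

    VS_FIXEDFILEINFO* info = nullptr;
    UINT len = 0;
    if (!VerQueryValueW(buffer.data(), L"\\", reinterpret_cast<void**>(&info), &len) || !info)
        return false;

    major = HIWORD(info->dwFileVersionMS);
    minor = LOWORD(info->dwFileVersionMS);
    return true;
}

// Hypothetical usage:
//   WORD major, minor;
//   if (GetLoadedDllVersion(L"Msftedit.dll", major, minor)) { /* newer control is in use */ }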

Running Robot Framework tests on Sauce Labs - error with timing out

I have existing Selenium tests written with Robot Framework (in RIDE) that I'm trying to run on Sauce Labs.
I'm using the sample test from this tutorial to see if I can get at least one test running. http://datakurre.pandala.org/2014/03/cross-browser-selenium-testing-with.html
The test passes locally, and all of its steps pass on Sauce Labs, but then it times out with the error "Test did not see a new command for 90 seconds. Timing out." because the remote WebDriver is not being disconnected.
I've tried all of these, together and separately, at the end of the "Close test browser" keyword:
Close all browsers
Process close
Stop selenium server
I've also tried adding ((RemoteWebDriver) getCurrentWebDriver()).quit() in one of the Python functions that runs during the closing process. I'm new to Selenium and Robot Framework, so I'm not sure how to grab the remote WebDriver.
Here is the code, in case that helps:
*** Settings ***
Test Setup        Open test browser
Test Teardown     Close test browser
Resource          ../../Keywords/super.txt
Library           Selenium2Library
Library           ../../Library/SauceLabs.py

*** Variables ***
${LOGIN_FAIL_MSG}          Incorrect username or password.
${COMMAND_EXECUTOR}        http://username:key@ondemand.saucelabs.com:80/wd/hub
${REMOTE_URL}              http://username:key@ondemand.saucelabs.com:80/wd/hub
${DESIRED_CAPABILITIES}    username:name,access-key:key,name:Testing RobotFramework,platform:Windows 8.1,version:26,browserName:CHROME,javascriptEnabled:True

*** Test Cases ***
Incorrect username or password
    [Tags]    Login
    Go to    https://saucelabs.com/login
    Page should contain element    id=username
    Page should contain element    id=password
    Input text    id=username    anonymous
    Input text    id=password    secret
    Click button    id=submit
    Page should contain    ${LOGIN_FAIL_MSG}
    [Teardown]

*** Keywords ***
Open test browser
    Open browser    http://www.google.com    ${BROWSER}    \    remote_url=${REMOTE_URL}    desired_capabilities=${DESIRED_CAPABILITIES}

Close test browser
    Run keyword if    '${REMOTE_URL}' != ''    Report Sauce status    ${SUITE_NAME} | ${TEST_NAME}    ${TEST_STATUS}    ${TEST_TAGS}    ${REMOTE_URL}
    Close all browsers
    Process close
    Stop selenium server
You shouldn't need to do anything special to close down the connection. My guess is that there's something in your test that is preventing the browser from being closed. My recommendation is to start with a simpler example, and start from the command line. Get that working, and then work your way up to running something more complex from within RIDE.
Here is a working example where I removed all of the extra stuff in the test. I am able to run this both from the command line and via RIDE on Windows. You'll have to add in your own key, however:
*** Settings ***
| Library | Selenium2Library
*** Variables ***
| @{_tmp}
| ... | name:Testing RobotFramework Selenium2Library,
| ... | browserName:internet explorer,
| ... | platform:Windows 8,
| ... | version:10
| ${CAPABILITIES} | ${EMPTY.join(${_tmp})}
| ${KEY} | <put your username:key here>
| ${REMOTE_URL} | http://${KEY}@ondemand.saucelabs.com:80/wd/hub
| ${URL} | https://saucelabs.com/login
| ${LOGIN_FAIL_MSG} | Incorrect username or password.
*** Test cases ***
| Example of connecting to saucelabs via robot
| | [Setup]
| | ... | Open Browser
| | ... | ${URL}
| | ... | remote_url=${REMOTE_URL}
| | ... | desired_capabilities=${CAPABILITIES}
| |
| | Page should contain element | id=username
| | Page should contain element | id=password
| |
| | Input text | id=username | anonymous
| | Input text | id=password | secret
| | Click button | id=submit
| |
| | Page should contain | ${LOGIN_FAIL_MSG}
| |
| | [Teardown] | Close all browsers