Setting up Sahi, Behat & PhantomJS on Vagrant

I'm trying to set up automated testing with PhantomJS, Behat and Sahi on my vagrant machine.
I'm getting the following output when trying to run a test with Behat:
[Behat\SahiClient\Exception\ConnectionException]
Exception has been thrown in "afterStep" hook, defined in FeatureContext::afterStep()
Connection time limit reached
Here is my userdata.properties:
# dirs. Relative paths are relative to userdata dir. Separate directories with semi-colon
scripts.dir=scripts;
# default log directory.
logs.dir=logs
# Directory where auto generated ssl certificates are stored
certs.dir=certs
# Use external proxy server for http
ext.http.proxy.enable=false
ext.http.proxy.host=
ext.http.proxy.port=
ext.http.proxy.auth.enable=false
ext.http.proxy.auth.name=kamlesh
ext.http.proxy.auth.password=password
# Use external proxy server for https
ext.https.proxy.enable=false
ext.https.proxy.host=
ext.https.proxy.port=
ext.https.proxy.auth.enable=false
ext.https.proxy.auth.name=kamlesh
ext.https.proxy.auth.password=password
# There is only one bypass list for both secure and insecure.
ext.http.both.proxy.bypass_hosts=localhost|127.0.0.1|*.internaldomain.com
# Mark this property true to disable the proxy alert
proxy_alert.disabled=false
And my browser_types.xml:
<browserTypes>
  <browserType>
    <name>phantomjs</name>
    <displayName>PhantomJS</displayName>
    <icon>safari.png</icon>
    <path>/usr/bin/phantomjs</path>
    <options>--ignore-ssl-errors=yes --proxy=localhost:9999 --ssl-protocol=any /usr/local/sahi/phantomjs-sahi.js</options>
    <processName>phantomjs</processName>
    <capacity>100</capacity>
    <force>true</force>
  </browserType>
</browserTypes>
behat.yml:
default:
  extensions:
    Behat\MinkExtension\Extension:
      javascript_session: sahi
      browser_name: phantomjs
      goutte: ~
      sahi:
        host: localhost
        port: 9999
Sahi run output:
--------
SAHI_HOME: ..
SAHI_USERDATA_DIR: ../userdata
SAHI_EXT_CLASS_PATH:
--------
Sahi properties file = /usr/local/sahi/config/sahi.properties
Sahi user properties file = /usr/local/sahi/userdata/config/userdata.properties
Added shutdown hook.
>>>> Sahi OS v5.0 started. Listening on port: 9999
>>>> Configure your browser to use this server and port as its proxy
>>>> Browse any page and CTRL-ALT-DblClick on the page to bring up the Sahi Controller
-----
Reading browser types from: /usr/local/sahi/userdata/config/browser_types.xml
-----
I've tried reinstalling a bunch of things and played around with the ports, processes and proxy settings; nothing helped.

Your Vagrant box comes with an empty database or none at all, so when you try to connect to your app, e.g. log in with a known user, it will crash because it won't find that user!
All the best ;)

Since version 4.3.2 the BrowserType settings have changed: there is no <force> tag anymore. Please check:
https://sahipro.com/docs/using-sahi/sahi-headless-execution-with-phantomjs.html#Documentation since Sahi Pro V4.3.2
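For illustration, a browserType entry without the <force> tag might look like this (a sketch based on the configuration above; check the linked docs for the exact tags your Sahi version supports):
<browserType>
  <name>phantomjs</name>
  <displayName>PhantomJS</displayName>
  <icon>safari.png</icon>
  <path>/usr/bin/phantomjs</path>
  <options>--ignore-ssl-errors=yes --proxy=localhost:9999 --ssl-protocol=any /usr/local/sahi/phantomjs-sahi.js</options>
  <processName>phantomjs</processName>
  <capacity>100</capacity>
</browserType>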

Related

Jibri recording issues behind reverse proxy

I'm trying to run Jibri as part of a Jitsi-Meet installation (all on one server) behind a reverse SSL proxy. Jitsi works out of the box, but as soon as Jibri tries to log in to the session to record it, the corresponding Chrome session times out. Here's an excerpt from the jibri log:
2021-04-04 09:09:42.546 FINE: [890] org.jitsi.jibri.selenium.pageobjects.CallPage.visit() Visiting url https://example.com/room#config.iAmRecorder=true&config.externalConnectUrl=null&config.startWithAudioMuted=true&config.startWithVideoMuted=true&interfaceConfig.APP_NAME="Jibri"&config.analytics.disabled=true&config.p2p.enabled=false&config.prejoinPageEnabled=false&config.requireDisplayName=false
2021-04-04 09:09:42.633 FINE: [890] org.jitsi.jibri.selenium.pageobjects.CallPage.apply() Not joined yet: APP is not defined
...
2021-04-04 09:10:12.945 INFO: [890] org.jitsi.jibri.selenium.JibriSelenium.onSeleniumStateChange() Transitioning from state Starting up to Error: FailedToJoinCall SESSION Failed to join the call
2021-04-04 09:10:12.947 INFO: [890] org.jitsi.jibri.service.impl.FileRecordingJibriService.onServiceStateChange() File recording service transitioning from state Starting up to Error: FailedToJoinCall SESSION Failed to join the call
The reverse proxy is configured to watch out for this login string on port 443 (normal SSL traffic per the URL above) and forward this to the Jitsi instance. Prosody accepts the request on its http-bind interface but then the invocation times out.
The web server logs are inconclusive: where / in which logs can I check what happens afterwards? I can see Jicofo picking up the invocation but don't know what happens next (Jicofo 2021-04-04 09:09:42.130 INFO: [461] org.jitsi.jicofo.recording.jibri.JibriSession.log() Updating status from JIBRI: <iq to='focus#auth.example.com/focus647288887711795' from='jibribrewery#internal.auth.example.com/jibri-nickname' id='5iurC-49012' type='result'><jibri xmlns='http://jitsi.org/protocol/jibri' status='pending'/></iq> for room#conference.example.com).
More than happy to provide more info as required.

CSV Data Set : Parameterize URL variables in JMeter - wrong CSV file

I am testing a backend application written in NodeJS and Java.
The communication protocols are:
WebSocket in the NodeJS part
and HTTP in the Java part.
In JMeter, I must parameterize the URL to switch between the development, production and preproduction URLs.
I did this using CSV files.
I created a folder containing the CSVs in the folder where I have JMeter 5.0.
I prepared three CSV files in JMeter's bin folder:
development.csv,
production.csv,
prepod.csv
My CSV files are the following:
development.csv:
protocol, host
http, 10.219.227.66
ws, 10.219.227.66
prepod.csv:
protocol, host
https, prepod.myprepod.io
ws, prepod.myprepod.io
production.csv:
protocol, host
https, production.myproduction.io
ws, production.myproduction.io
and I have set in JMeter:
WebSocket Open Connection
Server URL - ws
Server name or IP - ${host}
CSV Data Set Config
Filename - ${__P(environment,development)}.csv
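For reference, the remaining CSV Data Set Config fields would look roughly like this (the variable names are an assumption based on the CSV header above, not something stated in the original post):
Variable Names: protocol,host
Delimiter: ,
Ignore first line: True
Recycle on EOF?: True
Sharing mode: All threads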
This project doesn't work; in the log I have:
Caused by: java.lang.IllegalArgumentException: File development.csv
must exist and be readable at
org.apache.jmeter.services.FileServer.createBufferedReader(FileServer.java:424)
~[ApacheJMeter_core.jar:5.0 r1840935] at
org.apache.jmeter.services.FileServer.readLine(FileServer.java:340)
~[ApacheJMeter_core.jar:5.0 r1840935] at
org.apache.jmeter.services.FileServer.readLine(FileServer.java:324)
~[ApacheJMeter_core.jar:5.0 r1840935] at
org.apache.jmeter.services.FileServer.reserveFile(FileServer.java:272)
~[ApacheJMeter_core.jar:5.0 r1840935] ... 8 more 2018-10-19
14:29:30,727 INFO o.a.j.t.JMeterThread: Thread finished: Authorize
success 1-1 2018-10-19 14:29:30,728 INFO o.a.j.e.StandardJMeterEngine:
Notifying test listeners of end of test 2018-10-19 14:29:30,728 INFO
o.a.j.g.u.JMeterMenuBar: setRunning(false, local)
What is wrong?
As per message:
java.lang.IllegalArgumentException: File development.csv must exist and be readable at ...
It seems the test is using the default value "development", so JMeter looks for development.csv.
So I guess you're running this in another environment; in that case you should run JMeter with this additional parameter:
-Jenvironment=production
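For example (a sketch; the test plan file name is hypothetical), a non-GUI run against production would be:
jmeter -n -t testplan.jmx -Jenvironment=production
With that, ${__P(environment,development)} resolves to "production" and the CSV Data Set Config reads production.csv.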

How do we send a canvas image data as an attachment to a server on Pharo?

How do we send or upload a data file to a server in Pharo? I saw some examples of sending a file from a directory on the machine.
It works fine.
ZnClient new
    url: MyUrl;
    uploadEntityFrom: FileLocator home / 'path-to-the-file';
    put
In my case I don't want to send/upload a file stored on the machine; instead I want to send/upload a file hosted somewhere, or data I retrieved over the network, as an attachment to another server.
How can we do that?
Based on your previous questions I presume you are using Linux. The issue here is not within Smalltalk/Pharo, but in the network mapping.
FTP
If you want to use FTP, don't forget it sends the password in plaintext; set it up in a way you can mount it. There are probably plenty of ways to do this, but you can try using curlftpfs. You need the kernel module fuse for that, so make sure you have it loaded. If it is not loaded, you can do so via modprobe fuse.
The usage would be:
curlftpfs ftp.yoursite.net /mnt/ftp/ -o user=username:password,allow_other
where you fill in username/password. The option allow_other allows other users on the system to use your mount.
(for more details see the Arch wiki and its curlftpfs page)
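To unmount it later, use the standard FUSE unmount command:
fusermount -u /mnt/ftp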
Webdav
For webdav I would use the same approach, this time using davfs2.
You would manually mount it via mount command:
mount -t davfs https://yoursite.net:<port>/path /mnt/webdav
There are two reasonable ways to set it up - systemd or fstab. The information below is taken from the davfs2 Arch wiki:
For systemd:
/etc/systemd/system/mnt-webdav-service.mount
[Unit]
Description=Mount WebDAV Service
After=network-online.target
Wants=network-online.target
[Mount]
What=http(s)://address:<port>/path
Where=/mnt/webdav/service
Options=uid=1000,file_mode=0664,dir_mode=2775,grpid
Type=davfs
TimeoutSec=15
[Install]
WantedBy=multi-user.target
You can create a systemd automount unit to set a timeout:
/etc/systemd/system/mnt-webdav-service.automount
[Unit]
Description=Mount WebDAV Service
After=network-online.target
Wants=network-online.target
[Automount]
Where=/mnt/webdav/service
TimeoutIdleSec=300
[Install]
WantedBy=remote-fs.target
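With both unit files in place you would reload systemd and enable the automount (unit names assumed to match the files above):
systemctl daemon-reload
systemctl enable --now mnt-webdav-service.automount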
For the fstab way, it is easy if you have edited fstab before (it behaves the same as any other fstab entry):
/etc/fstab
https://webdav.example/path /mnt/webdav davfs rw,user,uid=username,noauto 0 0
For webdav you can even store the credentials securely:
Create a secrets file to store credentials for a WebDAV service, using ~/.davfs2/secrets for user mounting and /etc/davfs2/secrets for root:
/etc/davfs2/secrets
https://webdav.example/path davusername davpassword
Make sure the secrets file has the correct permissions; for root mounting:
# chmod 600 /etc/davfs2/secrets
# chown root:root /etc/davfs2/secrets
And for user mounting:
$ chmod 600 ~/.davfs2/secrets
Back to your Pharo/Smalltalk code:
I presume you read the above and have either /mnt/ftp or /mnt/webdav mounted.
For e.g. FTP, your code would simply take the file from the mounted directory:
ZnClient new
    url: MyUrl;
    uploadEntityFrom: '/mnt/ftp/your_file_to_upload' asFileReference;
    put
Edit: based on the comments.
The issue is that the configuration of the ZnClient is in Pharo itself and the JSON file is also generated there.
One quick and dirty solution would be to do the above-mentioned mount with a shell command:
With ftp for example:
| commandOutput |
commandOutput := (PipeableOSProcess command: 'curlftpfs ftp.yoursite.net /mnt/ftp/ -o user=username:password,allow_other') output.
Transcript show: commandOutput.
The other approach is more sensible: use Pharo's FTP or WebDav support via FileSystemNetwork.
To load ftp only:
Gofer it
    smalltalkhubUser: 'UdoSchneider' project: 'FileSystemNetwork';
    configuration;
    load.
#ConfigurationOfFileSystemNetwork asClass project stableVersion load: 'FTP'
To load Webdav only:
Gofer it
    smalltalkhubUser: 'UdoSchneider' project: 'FileSystemNetwork';
    configuration;
    load.
#ConfigurationOfFileSystemNetwork asClass project stableVersion load: 'Webdav'
To get everything including tests:
Gofer it
    smalltalkhubUser: 'UdoSchneider' project: 'FileSystemNetwork';
    configuration;
    loadStable.
With that you should be able to get a file, for example over FTP:
| ftpConnection wDir file |
"Open a connection"
ftpConnection := FileSystem ftp: 'ftp://ftp.sh.cvut.cz/'.
"Get the working directory"
wDir := ftpConnection workingDirectory.
"Resolve the file against the FTP working directory (not the local disk)"
file := wDir / 'Arch' / 'lastsync'.
"Close the connection - do this always!"
ftpConnection close.
Then your upload via FTP would look like this:
| ftpConnection wDir file |
"Open a connection"
ftpConnection := FileSystem ftp: 'ftp://your_ftp'.
"Get the working directory"
wDir := ftpConnection workingDirectory.
file := wDir / 'your_file_path'.
ZnClient new
    url: MyUrl;
    uploadEntityFrom: file;
    put.
"Close the connection - do this always!"
ftpConnection close.
The Webdav would be similar.
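For illustration, a Webdav upload might look like this (a sketch, assuming FileSystemNetwork provides a class-side webdav: constructor analogous to ftp:, and using a hypothetical URL):
| webdavConnection file |
"Open a connection"
webdavConnection := FileSystem webdav: 'https://yoursite.net/path/'.
"Resolve the file against the Webdav working directory"
file := webdavConnection workingDirectory / 'your_file_path'.
ZnClient new
    url: MyUrl;
    uploadEntityFrom: file;
    put.
"Close the connection - do this always!"
webdavConnection close.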

Is there a way to dynamically define and register new Dgraphs in Endeca

As far as my knowledge of Endeca goes, any time you want to add a new Dgraph definition to your Endeca configuration, you have to run initializeServices.sh to set the updated configuration in the EAC.
I was wondering if there is any way I can do that without running initializeServices.sh (since it does a lot more than just update the list of Dgraphs registered in the EAC, and I want to avoid that).
I found that the command ./runcommand.sh --update-definition lets you make configuration changes to a Dgraph that has already been registered with the EAC, but if I add a new Dgraph in the config and run the command, it fails with the error below:
[11.17.16 16:00:07] INFO: Setting definition for host 'MDEXLiveHost2'.
[11.17.16 16:00:07] SEVERE: Caught an exception while checking provisioning
Caused by com.endeca.soleng.eac.toolkit.exception.EacCommunicationException
com.endeca.soleng.eac.toolkit.host.Host setDefinition - Caught exception while setting host definition.
Caused by com.endeca.eac.client.ProvisioningFault
sun.reflect.NativeConstructorAccessorImpl newInstance0 - null
I can't find any detailed logs of this error anywhere in the PlatformServices logs to debug further.
I could, however, see in the request log that /eac/ProvisioningService gave an HTTP code of 500, which leads me to believe that the script is trying to find the current configuration of MDEXLiveHost2 and is unable to find it.
EDITED TO ADD: Configuration for:
New host:
<host id="MDEXLiveHost2" hostName="${mdexLive.host2}" port="${mdexLive.eac.port}" useSsl="false" />
New Dgraph:
<dgraph id="DgraphLive2" host-id="MDEXLiveHost2" port="${dgraphLive1.port}"
        post-startup-script="LiveDgraphPostStartup">
  <properties>
    <property name="restartGroup" value="A" />
    <property name="updateGroup" value="a" />
    <property name="DgraphContentGroup" value="Live" />
  </properties>
  <log-dir>./logs/dgraphs/DgraphLive</log-dir>
  <input-dir>./data/dgraphs/DgraphLive/dgraph_input</input-dir>
  <update-dir>./data/dgraphs/DgraphLive/dgraph_input/updates</update-dir>
</dgraph>
EDITED TO ADD: Errors after manually adding the host using eaccmd.sh
Host definition file:
<host host-id="MDEXLiveHost2" host-name="172.18.0.7" port="9999" useSsl="false"/>
The host is added successfully (validated via describe-app)
$./eaccmd.sh describe-app --app myapp | grep MDEXLiveHost2
<host host-name="172.18.0.7" port="9999" host-id="MDEXLiveHost2" useSsl="false">
But, running any command I get this error:
[11.18.16 11:00:58] INFO: Updating provisioning for host 'MDEXLiveHost2'.
[11.18.16 11:00:58] INFO: Host name of host 'MDEXLiveHost2' has changed from 172.18.0.7 to 172.18.0.7 . Components on this host will be re-provisioned.
[11.18.16 11:00:58] INFO: Updating definition for host 'MDEXLiveHost2'.
[11.18.16 11:00:58] SEVERE: Caught an exception while checking provisioning.
Caused by com.endeca.soleng.eac.toolkit.exception.EacCommunicationException
com.endeca.soleng.eac.toolkit.host.Host updateEacDefinition - Caught exception while updating host definition.
Caused by com.endeca.eac.client.ProvisioningFault
sun.reflect.NativeConstructorAccessorImpl newInstance0 - null
If only this error could be made more verbose, that might help.
You do not have to run initializeServices.sh for every configuration change you make. When you execute the other scripts in the control folder, they first check whether there are any configuration changes and apply them.
As far as the error is concerned, I suspect you either didn't specify MDEXLiveHost2 in your LiveDGraphCluster.xml or the host you specified is not reachable. Verify your configuration.
Lastly, your approach of dynamically adding more Dgraphs into the cluster is not standard practice. When you configure your environment you should do a load test using ENEPerf to simulate the load and then create as many Dgraphs and hosts as required. If you add more hosts and Dgraphs dynamically, you also need to ensure that you add them, dynamically, to your load balancer configuration as well.
My first guess was that maybe MDEX host 2 didn't have Platform Services/MDEX installed and Platform Services running, but it may be that the port you specified is incorrect.
<host host-id="MDEXLiveHost2" host-name="172.18.0.7" port="9999" useSsl="false"/>
Is your EAC port 9999 and not 8888 (the out-of-the-box value)? If it is 9999 on your ITL server, make sure that it is also set to 9999 on your new Dgraph server.
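A quick way to verify which port the EAC agent on the new host actually listens on (a sketch; the /eac/ProvisioningService path is taken from the request log above, 8888 is the default, and whether the agent serves a WSDL there is an assumption):
curl -s http://172.18.0.7:8888/eac/ProvisioningService?wsdl
curl -s http://172.18.0.7:9999/eac/ProvisioningService?wsdl
Whichever call returns a WSDL rather than an error indicates the port to use in the host definition.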

Integrate http apache with kaazing gateway

I am running an app on Apache HTTP Server 2.4 through which I am trying to connect to a Kaazing gateway. I have followed the
instructions found on the Kaazing site in the "setup-guide.html#webserver_integrate" section, but the connection keeps failing; the Mozilla console prints:
TypeError: this._socket is undefined, 4146 XmppClient.js
I changed the <allow-origin> to *. I would like to ask whether I should make any changes to the configuration file of Apache.
Finally, I managed to get it to work. I did a fresh install of the Kaazing gateway. In gateway-config.xml at GATEWAY_HOME/conf/ I changed the value of gateway.hostname to my internal IP and set
<cross-site-constraint>
  <allow-origin>*</allow-origin>
</cross-site-constraint>
in the service with type xmpp.proxy, and this time it worked! I also replaced the * with
http://localhost:80
(the http apache origin) and that worked as well. I don't know why it didn't work before.
Thanks for trying to help!