I have a website with a Let's Encrypt SSL cert. When I run Codeception acceptance tests against it, the test stalls until I press Ctrl+Z. When I run the same test against a non-SSL site, there is no problem.
This is my setup in acceptance.suite.yml. The phantomjs.cli.args parameter is from this site: http://szdredd.blogspot.de/2013/10/codeception-phantomjs-setup-for.html
class_name: AcceptanceTester
modules:
    enabled: [WebDriver]
    config:
        WebDriver:
            url: https://www.domain.de/
            browser: phantomjs
My selenium log looks like this:
17:07:15.681 INFO - Executing: [new session: Capabilities [{browserName=phantomjs}]])
17:07:15.682 INFO - Creating a new session for Capabilities [{browserName=phantomjs}]
17:07:15.682 INFO - executable: /usr/bin/phantomjs
17:07:15.683 INFO - port: 27757
17:07:15.683 INFO - arguments: [--webdriver=27757, --webdriver-logfile=/phantomjsdriver.log]
17:07:15.683 INFO - environment: {}
PhantomJS is launching GhostDriver...
[INFO - 2016-02-20T17:07:15.754Z] GhostDriver - Main - running on port 27757
[INFO - 2016-02-20T17:07:15.765Z] Session [64316920-d7f4-11e5-a0c5-8954be0ea076] - CONSTRUCTOR - Desired Capabilities: {"browserName":"phantomjs"}
[INFO - 2016-02-20T17:07:15.765Z] Session [64316920-d7f4-11e5-a0c5-8954be0ea076] - CONSTRUCTOR - Negotiated Capabilities: {"browserName":"phantomjs","version":"1.9.0","driverName":"ghostdriver","driverVersion":"1.0.3","platform":"linux-unknown-64bit","javascriptEnabled":true,"takesScreenshot":true,"handlesAlerts":false,"databaseEnabled":false,"locationContextEnabled":false,"applicationCacheEnabled":false,"browserConnectionEnabled":false,"cssSelectorsEnabled":true,"webStorageEnabled":false,"rotatable":false,"acceptSslCerts":false,"nativeEvents":true,"proxy":{"proxyType":"direct"}}
[INFO - 2016-02-20T17:07:15.765Z] SessionManagerReqHand - _postNewSessionCommand - New Session Created: 64316920-d7f4-11e5-a0c5-8954be0ea076
17:07:15.771 INFO - Done: [new session: Capabilities [{browserName=phantomjs}]]
17:07:15.774 INFO - Executing: [implicitly wait: 0])
17:07:15.777 INFO - Done: [implicitly wait: 0]
17:07:15.790 INFO - Executing: [get: https://www.waldhelden.de/])
[INFO - 2016-02-20T17:07:33.916Z] SessionManagerReqHand - _cleanupWindowlessSessions - Asynchronous Sessions clean-up phase starting NOW
[INFO - 2016-02-20T17:08:55.442Z] SessionManagerReqHand - _cleanupWindowlessSessions - Asynchronous Sessions clean-up phase starting NOW
[INFO - 2016-02-20T17:09:02.008Z] SessionManagerReqHand - _cleanupWindowlessSessions - Asynchronous Sessions clean-up phase starting NOW
17:09:13.204 INFO - Session 7c5ef02c-9361-49c8-894d-234989179189 deleted due to client timeout
[INFO - 2016-02-20T17:09:13.211Z] ShutdownReqHand - _handle - About to shutdown
I found advice on this site, but when I add that configuration I get an error:
capabilities:
    phantomjs.cli.args: ['--ignore-ssl-errors=true']
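For context, this is roughly how the whole suite config looks with that block added (a sketch; nesting capabilities under the WebDriver module config is my reading of where it belongs):
class_name: AcceptanceTester
modules:
    enabled: [WebDriver]
    config:
        WebDriver:
            url: https://www.domain.de/
            browser: phantomjs
            capabilities:
                phantomjs.cli.args: ['--ignore-ssl-errors=true']
With that in place, the run fails with: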
Caused by: org.openqa.selenium.WebDriverException: The best matching driver provider org.openqa.selenium.htmlunit.HtmlUnitDriver can't create a new driver instance for Capabilities [{phantomjs.cli.args=[--ignore-ssl-errors=true], browserName=phantom}]
Does anyone know how to set up Codeception to ignore SSL errors? Any help appreciated!
Thanks
Udo
For testing my site I use Phantoman to automatically start and stop PhantomJS. In codeception.yml I have:
config:
    Codeception\Extension\Phantoman:
        path: 'vendor/bin/phantomjs'
        port: 4444
        debug: true
        ignoreSslErrors: true
        sslProtocol: any
    Codeception\Extension\Recorder:
        delete_successful: true
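Note that this block sits under the extensions key of codeception.yml, next to an enabled list; a minimal sketch of the surrounding structure (assuming the Phantoman extension is installed via Composer):
extensions:
    enabled:
        - Codeception\Extension\Phantoman
        - Codeception\Extension\Recorder
    config:
        Codeception\Extension\Phantoman:
            path: 'vendor/bin/phantomjs'
            port: 4444
            debug: true
            ignoreSslErrors: true
            sslProtocol: any
        Codeception\Extension\Recorder:
            delete_successful: true
With ignoreSslErrors and sslProtocol set, PhantomJS stops rejecting the certificate; PhantomJS 1.x defaults to SSLv3 and refuses certificates it cannot verify, which commonly shows up as exactly this kind of hang on modern HTTPS sites.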
I have my docker-compose.override.yml set up as shown below in Visual Studio 2022:
elasticsearch:
  container_name: elasticsearch
  environment:
    - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    - discovery.type=single-node
  restart: always
  ports:
    - "9200:9200"
    - "9300:9300"
  networks:
    elastic:
kibana:
  container_name: kibana
  restart: always
  environment:
    - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
  depends_on:
    - elasticsearch
  ports:
    - "5601:5601"
  networks:
    elastic:
networks:
  elastic:
    driver: bridge
Every time I try to configure Kibana, after supplying a token, I get the following errors from the Kibana container, and Kibana never gets fully configured.
2023-02-06 21:40:38 [2023-02-07T04:40:38.210+00:00][WARN ][plugins.alerting] APIs are disabled because the Encrypted Saved Objects plugin is missing encryption key. Please set xpack.encryptedSavedObjects.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command.
2023-02-06 21:40:38 [2023-02-07T04:40:38.366+00:00][INFO ][plugins.ruleRegistry] Installing common resources shared between all indices
2023-02-06 21:40:38 [2023-02-07T04:40:38.508+00:00][INFO ][plugins.cloudSecurityPosture] Registered task successfully [Task: cloud_security_posture-stats_task]
2023-02-06 21:40:39 [2023-02-07T04:40:39.605+00:00][INFO ][plugins.screenshotting.config] Chromium sandbox provides an additional layer of protection, and is supported for Linux Ubuntu 20.04 OS. Automatically enabling Chromium sandbox.
2023-02-06 21:40:39 [2023-02-07T04:40:39.741+00:00][ERROR][elasticsearch-service] Unable to retrieve version information from Elasticsearch nodes. socket hang up - Local: 192.168.128.3:60856, Remote: 192.168.128.2:9200
2023-02-06 21:40:40 [2023-02-07T04:40:40.773+00:00][INFO ][plugins.screenshotting.chromium] Browser executable: /usr/share/kibana/x-pack/plugins/screenshotting/chromium/headless_shell-linux_x64/headless_shell
2023-02-06 21:40:42 [2023-02-07T04:40:42.194+00:00][ERROR][elasticsearch-service] Unable to retrieve version information from Elasticsearch nodes. socket hang up - Local: 192.168.128.3:44378, Remote: 192.168.128.2:9200
I am a bit at a loss as to why I can't get it configured. Any help appreciated.
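One thing worth checking, based on the log above: the "socket hang up" happens while Kibana calls http://elasticsearch:9200, and on 8.x Elasticsearch images security and TLS are enabled by default, so plain HTTP on port 9200 gets its connection dropped. A minimal sketch of one workaround for local development, assuming the base compose file pulls an 8.x image (the image and version are not shown above):
# Sketch only: assumes an 8.x elasticsearch image in the base compose file.
# Disabling security keeps Elasticsearch on plain HTTP, matching the
# ELASTICSEARCH_HOSTS=http://elasticsearch:9200 setting above.
# Do not use this outside local development.
elasticsearch:
  environment:
    - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    - discovery.type=single-node
    - xpack.security.enabled=false
The alternative is to keep security enabled and point Kibana at https://elasticsearch:9200 together with the cluster's CA certificate and credentials, which is what the enrollment-token flow expects.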
I have a Selenium Standalone Server on my local machine (Mac), and it works fine every time I run a test (WebdriverIO).
09:27:06.951 INFO [ActiveSessionFactory.apply] - Capabilities are: {
"browserName": "chrome",
"goog:chromeOptions": {
"args": [
"--headless",
"--disable-gpu",
"--window-size=1024,768",
"--no-sandbox"
]
}
}
09:27:06.962 INFO [ActiveSessionFactory.lambda$apply$11] - Matched factory org.openqa.selenium.remote.server.ServicedSession$Factory (provider: org.openqa.selenium.chrome.ChromeDriverService)
Starting ChromeDriver 2.42.591059 (a3d9684d10d61aa0c45f6723b327283be1ebaad8) on port 42652
Only local connections are allowed.
09:27:08.168 INFO [ProtocolHandshake.createSession] - Detected dialect: OSS
09:27:08.314 INFO [RemoteSession$Factory.lambda$performHandshake$0] - Started new session 3a6c1206b6cd99a762007069868cad2f (org.openqa.selenium.chrome.ChromeDriverService)
09:27:19.053 INFO [ActiveSessions$1.onStop] - Removing session 3a6c1206b6cd99a762007069868cad2f (org.openqa.selenium.chrome.ChromeDriverService)
Now I am trying to move the Selenium server to a Linux machine. I configured and installed all the necessary packages. However, the test just hangs.
Selenium log from the Linux machine:
[dnguyen#test tmp]$ java -jar selenium-server-standalone-3.141.59.jar
09:24:02.305 INFO [GridLauncherV3.parse] - Selenium server version: 3.141.59, revision: e82be7d358
09:24:02.373 INFO [GridLauncherV3.lambda$buildLaunchers$3] - Launching a standalone Selenium Server on port 4444
2019-05-03 09:24:02.413:INFO::main: Logging initialized #289ms to org.seleniumhq.jetty9.util.log.StdErrLog
09:24:02.604 INFO [WebDriverServlet.<init>] - Initialising WebDriverServlet
09:24:02.697 INFO [SeleniumServer.boot] - Selenium Server is up and running on port 4444
09:24:16.387 INFO [ActiveSessionFactory.apply] - Capabilities are: {
"browserName": "chrome",
"goog:chromeOptions": {
"args": [
"--headless",
"--disable-gpu",
"--window-size=1024,768",
"--no-sandbox"
]
}
}
09:24:16.388 INFO [ActiveSessionFactory.lambda$apply$11] - Matched factory org.openqa.selenium.grid.session.remote.ServicedSession$Factory (provider: org.openqa.selenium.chrome.ChromeDriverService)
Starting ChromeDriver 72.0.3626.7 (efcef9a3ecda02b2132af215116a03852d08b9cb) on port 29488
Only local connections are allowed.
[1556889856.409][SEVERE]: CreatePlatformSocket() returned an error, errno=0: Address family not supported by protocol (97)
[1556889856.714][SEVERE]: CreatePlatformSocket() returned an error, errno=0: Address family not supported by protocol (97)
09:24:16.791 INFO [ProtocolHandshake.createSession] - Detected dialect: OSS
09:24:17.078 INFO [RemoteSession$Factory.lambda$performHandshake$0] - Started new session 86ea9b4bd11c3d2d8a994e893440087e (org.openqa.selenium.chrome.ChromeDriverService)
Log from WebdriverIO
Timeout of 60000ms exceeded. For async tests and hooks, ensure "done()" is called; if returning a Promise, ensure it resolves. (/path-to-test.js)
I don't get that error when I run on my local machine. I'm not sure what the difference is between the Selenium server on the Mac and the one on Linux.
Update: Port 443 on the Linux server is not open, so it cannot reach the site. That's all it was.
Port 443 on the Linux server was not open, so it could not reach the site. That's all it was.
If you are in the same situation, try using wget to reach the site first and confirm the machine can actually get to it.
I'm having a very hard time getting two WildFly Swarm apps (based on version 2017.9.5) to communicate with each other over a standalone ActiveMQ 5.14.3 broker. Everything is done using YAML config, as I can't have a main method in my case.
After reading hundreds of outdated examples and inaccurate documentation pages, I settled on the following settings for both the producer and the consumer apps:
swarm:
  messaging-activemq:
    servers:
      default:
        jms-topics:
          domain-events: {}
  messaging:
    remote:
      name: remote-mq
      host: localhost
      port: 61616
      jndi-name: java:/jms/remote-mq
      remote: true
Now, it seems at least part of the configuration is correct, since the apps start, except for the following warning:
2017-09-16 14:20:04,385 WARN [org.jboss.activemq.artemis.wildfly.integration.recovery] (MSC service thread 1-2) AMQ122018: Could not start recovery discovery on XARecoveryConfig [transportConfiguration=[TransportConfiguration(name=, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?port=61616&localAddress=::&host=localhost], discoveryConfiguration=null, username=null, password=****, JNDI_NAME=java:/jms/remote-mq], we will retry every recovery scan until the server is available
Also, when the producer tries to send messages it just times out, and I get the following exception (just the last part):
Caused by: javax.jms.JMSException: Failed to create session factory
at org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory.createConnectionInternal(ActiveMQConnectionFactory.java:727)
at org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory.createXAConnection(ActiveMQConnectionFactory.java:304)
at org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory.createXAConnection(ActiveMQConnectionFactory.java:300)
at org.apache.activemq.artemis.ra.ActiveMQRAManagedConnection.setup(ActiveMQRAManagedConnection.java:785)
... 127 more
Caused by: ActiveMQConnectionTimedOutException[errorType=CONNECTION_TIMEDOUT message=AMQ119013: Timed out waiting to receive cluster topology. Group:null]
at org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.createSessionFactory(ServerLocatorImpl.java:797)
at org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory.createConnectionInternal(ActiveMQConnectionFactory.java:724)
... 130 more
I suspect the problem is that ActiveMQ has security turned on, but I found no place in the Swarm config to provide a username and password.
The ActiveMQ instance is running in Docker, using the following compose file:
version: '2'
services:
  activemq:
    image: webcenter/activemq
    environment:
      - ACTIVEMQ_NAME=amqp-srv1
      - ACTIVEMQ_REMOVE_DEFAULT_ACCOUNT=true
      - ACTIVEMQ_ADMIN_LOGIN=admin
      - ACTIVEMQ_ADMIN_PASSWORD=your_password
      - ACTIVEMQ_WRITE_LOGIN=producer_login
      - ACTIVEMQ_WRITE_PASSWORD=producer_password
      - ACTIVEMQ_READ_LOGIN=consumer_login
      - ACTIVEMQ_READ_PASSWORD=consumer_password
      - ACTIVEMQ_JMX_LOGIN=jmx_login
      - ACTIVEMQ_JMX_PASSWORD=jmx_password
      - ACTIVEMQ_MIN_MEMORY=1024
      - ACTIVEMQ_MAX_MEMORY=4096
      - ACTIVEMQ_ENABLED_SCHEDULER=true
    ports:
      - "1883:1883"
      - "5672:5672"
      - "8161:8161"
      - "61616:61616"
      - "61613:61613"
      - "61614:61614"
Any idea what's going wrong?
I had a bad time trying to get this working too. The following YAML solved my problem:
swarm:
  network:
    socket-binding-groups:
      standard-sockets:
        outbound-socket-bindings:
          myapp-socket-binding:
            remote-host: localhost
            remote-port: 61616
  messaging-activemq:
    servers:
      default:
        remote-connectors:
          myapp-connector:
            socket-binding: myapp-socket-binding
        pooled-connection-factories:
          myAppRemote:
            user: username
            password: password
            connectors:
              - myapp-connector
            entries:
              - 'java:/jms/remote-mq'
I have a Selenium Grid hub running on Jenkins using the Selenium Plugin.
I have a Selenium grid node running on the same machine and it is successfully connected to the Hub.
From an external machine I can't seem to reach port 4444, on which the hub is running through Jenkins.
I can reach the port if the hub is started separately from the command line.
I have firewalls disabled on both machines, so it's not a network issue.
java -jar selenium-server-standalone-2.46.0.jar -role node -hub http://<IP>:4444/grid/register -timeout 10000 -browserTimeout 10000 -sessionMaxIdleTimeInSeconds 10000
16:34:58.122 INFO - Launching a Selenium Grid node
16:34:59.982 WARN - error getting the parameters from the hub. The node may end up with wrong timeouts.Connect to <IP>:4444 [<IP>] failed: Connection refused: connect
16:35:00.029 INFO - Java: Oracle Corporation 25.51-b03
16:35:00.029 INFO - OS: Windows 8.1 6.3 amd64
16:35:00.044 INFO - v2.46.0, with Core v2.46.0. Built from revision 87c69e2
16:35:00.107 INFO - Driver class not found: com.opera.core.systems.OperaDriver
16:35:00.107 INFO - Driver provider com.opera.core.systems.OperaDriver is not registered
16:35:00.154 INFO - Version Jetty/5.1.x
16:35:00.154 INFO - Started HttpContext[/selenium-server,/selenium-server]
16:35:00.154 INFO - Started org.openqa.jetty.jetty.servlet.ServletHandler#76a4d6c
16:35:00.154 INFO - Started HttpContext[/wd,/wd]
16:35:00.154 INFO - Started HttpContext[/selenium-server/driver,/selenium-server/driver]
16:35:00.154 INFO - Started HttpContext[/,/]
16:35:00.154 INFO - Started SocketListener on 0.0.0.0:5555
16:35:00.154 INFO - Started org.openqa.jetty.jetty.Server#1f7030a6
16:35:00.154 INFO - Selenium Grid node is up and ready to register to the hub
16:35:00.185 INFO - Starting auto registration thread. Will try to register every 5000 ms.
16:35:00.200 INFO - Registering the node to the hub: http://<IP>/grid/register
16:35:01.232 INFO - Couldn't register this node: Error sending the registration request: Connect to <IP>:4444 [IP] failed: Connection refused: connect
16:35:07.232 INFO - Couldn't register this node: The hub is down or not responding: Connect to <IP>:4444 [IP] failed: Connection refused: connect
Any help is appreciated.
Could it be that the hub has a max connection limit set and you have already reached that limit?
You can debug this by shutting down the hub, restarting it, and registering the node again. Hopefully it will register.
It is probably the connection limit; if you don't supply anything, I think a hub can handle 5 instances by default.
Please try it and share your results.
1. Start the Selenium server:
sudo java -jar selenium-server-standalone-2.25.0.jar -trustAllSSLCertificates -port 4444
........
09:02:06.523 INFO - Version Jetty/5.1.x
09:02:06.526 INFO - Started HttpContext[/selenium-server/driver,/selenium-server/driver]
09:02:06.533 INFO - Started HttpContext[/selenium-server,/selenium-server]
09:02:06.537 INFO - Started HttpContext[/,/]
09:02:06.571 INFO - Started org.openqa.jetty.jetty.servlet.ServletHandler#7df17e77
09:02:06.571 INFO - Started HttpContext[/wd,/wd]
09:02:06.574 INFO - Started SocketListener on 0.0.0.0:4444
09:02:06.577 INFO - Started org.openqa.jetty.jetty.Server#798fd984
2. Run the test case in Perl:
my $sel = Test::WWW::Selenium->new( host => "localhost",
                                    port => 4444,
                                    browser => "*googlechrome",
                                    browser_url => "http://fns-IP/" );
$sel->start;
$sel->open_ok("http://fns-IP/login");
3. Test log:
09:09:25.146 INFO - Command request: getNewBrowserSession[*googlechrome, http://fns-IP/] on session null
09:09:25.146 INFO - creating new remote session
09:09:25.147 INFO - Allocated session 553a1b30a1dd4f8a889fa4dfb7a6ae8a for http://fns-IP/, launching...
09:09:25.147 INFO - Launching Google Chrome...
09:09:30.336 INFO - Got result: OK,553a1b30a1dd4f8a889fa4dfb7a6ae8a on session 553a1b30a1dd4f8a889fa4dfb7a6ae8a
09:09:30.340 INFO - Command request: open[http://fns-IP/login, ] on session 553a1b30a1dd4f8a889fa4dfb7a6ae8a
09:09:30.610 INFO - Got result: OK on session 553a1b30a1dd4f8a889fa4dfb7a6ae8a
09:09:30.623 INFO - Command request: isElementPresent[id=id_username, ] on session 553a1b30a1dd4f8a889fa4dfb7a6ae8a
09:09:32.393 INFO - Couldn't proxy to http://qlriakmdkm/ because host not found
09:09:32.395 INFO - Couldn't proxy to http://wkdujqsymi/ because host not found
09:09:32.393 INFO - Couldn't proxy to http://rkjzjvpsbx/ because host not found
4. The URL in the Google Chrome browser:
http://fns-IP/selenium-server/core/RemoteRunner.html?sessionId=324f251e0de64cfeafa157de0c33ed41&multiWindow=true&baseUrl=http%3A%2F%2Ffns-4%2F&debugMode=false
and
http://fns-IP/selenium-server/core/Blank.html?start=true
5. I tried the same test against the website 'ca.msn.com' and it worked well.
Any feedback is appreciated.
I too faced this problem earlier, but in my case trustAllSSLCertificates was sufficient.
I suggest you move to Selenium WebDriver, which has good support for Google Chrome.