I've been using an EC2 instance to run a Python script with cron every day for a month or so. The script uses Selenium.
Everything was working correctly until today, when my script did not run.
I have tried to run it manually but it's not working either. The error message says that
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element: {"method":"css selector","selector":"#ctl00_ctl00_moteurRapideOffre_ctl01_EngineCriteriaCollection_Contract > option:nth-child(5)"}
(Session info: headless chrome=90.0.4430.85)
However, the same script is running fine on my computer (ie on my Macbook, not on AWS EC2).
As the problem seems to come from Chrome, I uninstalled it on AWS EC2 using:
sudo yum remove google-chrome-stable
Then I reinstalled it using:
curl https://intoli.com/install-google-chrome.sh | bash
sudo mv /usr/bin/google-chrome-stable /usr/bin/google-chrome
google-chrome --version && which google-chrome
If I try to run Chrome on the EC2 instance using /usr/bin/google-chrome, it does not work and displays the following error message:
ERROR:browser_main_loop.cc(1386)] Unable to open X display.
I don't know if it was working before as I have never used it this way. But it seems to be a problem.
I have seen on the web that it might come from the fact that there is no screen, and that I should use a package named Xvfb. I have tried to install it with the following command:
sudo yum install xorg-x11-server-Xvfb
I guess the package was correctly installed, but it is not working any better.
To sum up, I think the problem in my Python code is linked to the fact that Google Chrome is not working correctly, and this might be linked to Xvfb. But I am not sure at all; it is just what I have tried so far.
Could you please help me? Thanks!
You can simply set up your cron job like this, so it runs every 30 minutes:
*/30 * * * * export DISPLAY=:0 && <do whatever you want>
If this does not work, and google-chrome or firefox is not found, simply run the command below in your shell (bash, fish, zsh, etc.) to get your PATH:
echo $PATH
Whatever the result of the above command is, copy it and paste it above your cron job, like this:
*/30 * * * * export DISPLAY=:0 && <your selenium script>
You can remove the export DISPLAY=:0 part if you want to run this in the background or make your driver headless.
The reason for doing this is that you might have installed the browser from snapd or some other separate source, which is why its path is not defined in cron's environment.
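If you go the headless route instead of exporting DISPLAY, a minimal sketch in Python might look like the following (this is not the asker's actual script; it assumes chromedriver is on the PATH, uses a placeholder URL, and the extra flags are ones commonly needed on EC2 rather than anything this particular setup is known to require):

from selenium import webdriver
from selenium.webdriver.chrome.options import Options

options = Options()
options.add_argument("--headless")               # no X display (and no Xvfb) needed
options.add_argument("--no-sandbox")             # commonly needed when running as root on EC2
options.add_argument("--disable-dev-shm-usage")  # avoids /dev/shm exhaustion on small instances
options.add_argument("--window-size=1920,1080")

driver = webdriver.Chrome(options=options)       # assumes chromedriver is on the PATH
try:
    driver.get("https://example.com")            # placeholder URL
    print(driver.title)
finally:
    driver.quit()

With the driver headless, the cron entry does not need the export DISPLAY=:0 part at all.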
I'm trying to load a 3D model, stored locally on my computer, into Three.js with JSONLoader, and that 3D model is in the same directory as the entire website.
I'm getting the "Cross origin requests are only supported for HTTP." error, but I don't know what's causing it nor how to fix it.
My crystal ball says that you are loading the model using either file:// or C:/, which is consistent with the error message, as they are not http://
So you can either install a webserver on your local PC, or upload the model somewhere else, use JSONP, and change the URL to http://example.com/path/to/model
Origin is defined in RFC-6454 as
...they have the same
scheme, host, and port. (See Section 4 for full details.)
So even though your file originates from the same host (localhost), as long as the scheme is different (http / file), they are treated as different origins.
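To make that concrete, here is a tiny illustration (Python only because it is handy for splitting URLs; the file paths are made up): the host part may even match, but the schemes differ, so the origins differ.

from urllib.parse import urlsplit

for url in ("http://localhost/model.json", "file:///Users/me/site/model.json"):
    parts = urlsplit(url)
    # An origin is the (scheme, host, port) triple; a file:// URL has no
    # host or port, so it can never share an origin with an http:// URL.
    print(parts.scheme, parts.hostname, parts.port)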
Just to be explicit - Yes, the error is saying you cannot point your browser directly at file://some/path/some.html
Here are some options to quickly spin up a local web server to let your browser render local files
Python 2
If you have Python installed...
Change directory into the folder where your file some.html or file(s) exist using the command cd /path/to/your/folder
Start up a Python web server using the command python -m SimpleHTTPServer
This will start a web server to host your entire directory listing at http://localhost:8000
You can use a custom port python -m SimpleHTTPServer 9000 giving you link: http://localhost:9000
This approach is built in to any Python installation.
Python 3
Do the same steps, but use the following command instead python3 -m http.server
VSCode
If you are using Visual Studio Code you can install the Live Server extension which provides a local web server environment.
Node.js
Alternatively, if you demand a more responsive setup and already use nodejs...
Install http-server by typing npm install -g http-server
Change into your working directory, where your some.html lives
Start your http server by issuing http-server -c-1
This spins up a Node.js httpd which serves the files in your directory as static files accessible from http://localhost:8080
Ruby
If your preferred language is Ruby ... the Ruby Gods say this works as well:
ruby -run -e httpd . -p 8080
PHP
Of course PHP also has its solution.
php -S localhost:8000
In Chrome you can use this flag:
--allow-file-access-from-files
Read more here.
Ran in to this today.
I wrote some code that looked like this:
app.controller('ctrlr', function($scope, $http){
$http.get('localhost:3000').success(function(data) {
$scope.stuff = data;
});
});
...but it should've looked like this:
app.controller('ctrlr', function($scope, $http){
$http.get('http://localhost:3000').success(function(data) {
$scope.stuff = data;
});
});
The only difference was the lack of http:// in the second snippet of code.
Just wanted to put that out there in case there are others with a similar issue.
Just change the URL to http://localhost instead of localhost. If you open the HTML file locally, you should create a local server to serve that HTML file; the simplest way is to use Web Server for Chrome. That will fix the issue.
I'm going to list 3 different approaches to solve this issue:
Using a very lightweight npm package: Install live-server using npm install -g live-server. Then go to that directory, open the terminal, type live-server and hit enter; the page will be served at localhost:8080. BONUS: It also supports hot reloading by default.
Using a lightweight Google Chrome app developed by Google: Install the app, then go to the apps tab in Chrome and open the app. In the app point it to the right folder. Your page will be served!
Modifying Chrome shortcut in windows: Create a Chrome browser's shortcut. Right-click on the icon and open properties. In properties, edit target to "C:\Program Files (x86)\Google\Chrome\Application\chrome.exe" --disable-web-security --user-data-dir="C:/ChromeDevSession" and save. Then using Chrome open the page using ctrl+o. NOTE: Do NOT use this shortcut for regular browsing.
Note: Use http://, like http://localhost:8080, in case you face an error.
Use http:// or https:// to create the URL
error: localhost:8080
solution: http://localhost:8080
In an Android app — for example, to allow JavaScript to have access to assets via file:///android_asset/ — use setAllowFileAccessFromFileURLs(true) on the WebSettings that you get from calling getSettings() on the WebView.
The fastest way for me was:
For Windows users: run your file in Firefox, problem solved; or,
if you want to use Chrome, the easiest way for me was to install Python 3, then from the command prompt run python -m http.server, go to http://localhost:8000/, and navigate to your files:
python -m http.server
An easy solution for those using VS Code
I've been getting this error for a while. Most of the answers work. But I found a different solution. If you don't want to deal with Node.js or any other solution here, and you are working with an HTML file (calling functions from another JS file or fetching JSON APIs), try to use the Live Server extension.
It allows you to open a live server easily. And because it creates a localhost server, the problem is resolved. You can simply start the localhost server by opening an HTML file, right-clicking in the editor, and clicking Open with Live Server.
It basically loads the files using http://localhost/index.html instead of using file://....
EDIT
It is not necessary to have a .html file. You can start the Live Server with shortcuts.
Hit (alt+L, alt+O) to Open the Server and (alt+L, alt+C) to Stop the server. [On MAC, cmd+L, cmd+O and cmd+L, cmd+C]
Hope it will help someone :)
If you use an old version of Mozilla Firefox (pre-2019), it will work as expected without any issues.
P.S. Surprisingly, old versions of Internet Explorer & Edge work absolutely fine too.
For those on Windows without Python or Node.js, there is still a lightweight solution: Mongoose.
All you do is drag the executable to wherever the root of the server should be, and run it. An icon will appear in the taskbar and it'll navigate to the server in the default browser.
Also, Z-WAMP is a 100% portable WAMP that runs in a single folder; it's awesome. That's an option if you need a quick PHP and MySQL server, though it hasn't been updated since 2013. A modern alternative would be Laragon or WinNMP. I haven't tested them, but they are portable and worth mentioning.
Also, if you only want the absolute basics (HTML+JS), here's a tiny PowerShell script that doesn't need anything to be installed or downloaded:
# Create an HTTP listener on port 8080 and open index.html in the default browser.
$Srv = New-Object Net.HttpListener;
$Srv.Prefixes.Add("http://localhost:8080/");
$Srv.Start();
Start-Process "http://localhost:8080/index.html";
While($Srv.IsListening) {
    # Wait for a request, then stream the requested file from the current
    # directory back to the browser as text/html.
    $Ctx = $Srv.GetContext();
    $Buf = [System.IO.File]::OpenRead((Join-Path $Pwd($Ctx.Request.RawUrl)));
    $Ctx.Response.ContentLength64 = $Buf.Length;
    $Ctx.Response.Headers.Add("Content-Type", "text/html");
    $Buf.CopyTo($Ctx.Response.OutputStream);
    $Buf.Close();
    $Ctx.Response.Close();
};
This method is very barebones; it cannot show directory listings or other fancy stuff, but it handles these CORS errors just fine.
Save the script as server.ps1 and run in the root of your project. It will launch index.html in the directory it is placed in.
I suspect it's already mentioned in some of the answers, but I'll slightly modify this to make it a complete working answer (easier to find and use).
Go to: https://nodejs.org/en/download/. Install nodejs.
Install http-server by running command from command prompt npm install -g http-server.
Change into your working directory, where your index.html (or some.html) resides.
Start your http server by running command http-server -c-1
Open web browser to http://localhost:8080
or http://localhost:8080/yoursome.html - depending on your html filename.
I was getting this exact error when loading an HTML file in the browser that was using a JSON file from the local directory. In my case, I was able to solve this by creating a simple Node server that allowed serving static content. I left the code for this at this other answer.
It simply says that the application should be run on a web server. I had the same problem with Chrome; I started Tomcat, moved my application there, and it worked.
I suggest you use a mini-server to run these kinds of applications on localhost (if you are not using some inbuilt server).
Here's one that is very simple to setup and run:
https://www.npmjs.com/package/tiny-server
Experienced this when I downloaded a page for offline view.
I just had to remove the integrity="*****" and crossorigin="anonymous" attributes from all <link> and <script> tags
If you insist on running the .html file locally and not serving it with a webserver, you can prevent those cross origin requests from happening in the first place by making the problematic resources available inline.
I had this problem when trying to serve .js files through file://. My solution was to update my build script to replace <script src="..."> tags with <script>...</script>.
Here's a gulp approach for doing that:
1. Run npm install --save-dev gulp gulp-inline del to install the packages gulp, gulp-inline, and del.
2. After creating a gulpfile.js in the root directory, add the following code (just change the file paths to whatever suits you):
let gulp = require('gulp');
let inline = require('gulp-inline');
let del = require('del');

gulp.task('inline', function (done) {
  gulp.src('dist/index.html')
    .pipe(inline({
      base: 'dist/',
      disabledTypes: 'css, svg, img'
    }))
    .pipe(gulp.dest('dist/').on('finish', function () {
      done();
    }));
});

gulp.task('clean', function (done) {
  del(['dist/*.js']);
  done();
});

gulp.task('bundle-for-local', gulp.series('inline', 'clean'));
Either run gulp bundle-for-local or update your build script to run it automatically.
You can see the detailed problem and solution for my case here.
For all y'all on macOS... set up a simple LaunchAgent to enable these glamorous capabilities in your own copy of Chrome...
Save a plist, named whatever (launch.chrome.dev.mode.plist, for example) in ~/Library/LaunchAgents with similar content to...
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
  <dict>
    <key>Label</key>
    <string>launch.chrome.dev.mode</string>
    <key>ProgramArguments</key>
    <array>
      <string>/Applications/Google Chrome.app/Contents/MacOS/Google Chrome</string>
      <string>-allow-file-access-from-files</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
  </dict>
</plist>
It should launch at startup, but you can force it to do so at any time with the terminal command
launchctl load -w ~/Library/LaunchAgents/launch.chrome.dev.mode.plist
TADA! 😎 💁🏻 🙊 🙏🏾
It's not possible to load static local files (e.g. SVG) without a server. If you have npm/Yarn installed on your machine, you can set up a simple HTTP server using http-server:
npm install http-server -g
http-server [path] [options]
Or open a terminal in that project folder and type hs. It will automatically start an HTTP live server.
er. I just found some official words "Attempting to load unbuilt, remote AMD modules that use the dojo/text plugin will fail due to cross-origin security restrictions. (Built versions of AMD modules are unaffected because the calls to dojo/text are eliminated by the build system.)" https://dojotoolkit.org/documentation/tutorials/1.10/cdn/
One way I got local files to load was to keep them inside the project folder instead of outside it. Create a folder under your project (for example, files), similar to the way we do for images, replace the part that uses the complete local path outside the project, and use a relative URL to the file under the project folder.
It worked for me.
Install a local web server for Java, e.g. Tomcat; for PHP you can use LAMP, etc.
Drop the JSON file into the publicly accessible app server directory.
Start the app server, and you should be able to access the file from localhost.
For Linux Python users:
import webbrowser

# Register a browser that launches Chrome with the flag allowing file:// pages
# to read other local files, then open the target URL with it.
browser = webbrowser.get('google-chrome --allow-file-access-from-files %s')
browser.open(url)
url should be like:
createUserURL = "http://www.localhost:3000/api/angular/users"
instead of:
createUserURL = "localhost:3000/api/angular/users"
There are many causes for this; in my case the problem was a missing '/'. Example:
jquery-1.10.2.js:8720 XMLHttpRequest cannot load http://localhost:xxxProduct/getList_tagLabels/
It must be: http://localhost:xxx/Product/getList_tagLabels/
I hope this helps anyone who meets this problem.
I have also been able to recreate this error message when using an anchor tag with the following href:
Example a tag
In my case an a tag was being used to get the 'Pointer Cursor', and the event was actually controlled by a jQuery click event. I removed the href and added a class that applies:
cursor:pointer;
Cordova achieves this. I still cannot figure out how Cordova did it; it does not even go through shouldInterceptRequest.
Later I found out that the key to loading any file from local storage is: myWebView.getSettings().setAllowUniversalAccessFromFileURLs(true);
And when you want to access any http resource, the WebView will do a check with an OPTIONS request, which you can grant access to through WebViewClient.shouldInterceptRequest by returning a response; for the following GET/POST requests, you can just return null.
If you are searching for a solution for Firebase Hosting, you can run the
firebase serve --only hosting command from the Firebase CLI
That's what I came here for, so I thought I'd just leave it here to help others like me.
If you're using VS Code, just try loading a live server in there. It fixed my problem immediately.
I'm trying to find some clues on the following issues and am not able to find good help online.
I'm running Xvfb (X virtual framebuffer) and Firefox on a Linux machine in headless mode. The main Xvfb service is up and running and the DISPLAY variable is set.
/usr/bin/Xvfb :99 -ac -screen 0 1600x1200x16
I have some automated Selenium-based tests which I'm running using Gradle (gradle test). They run successfully, and in Jenkins I'm able to get this working using the Xvfb plugin. The JUnit post-publish report/result info and Gradle's reports/test/index.html file are showing a successful test run.
I just run the following to run tests in Gradle:
gradle test -DsomePropConfigFileForEnv=SomeSourceConfigFilewithPathvalue
My questions:
1. How can I get screenshots of all the pages that this automated test run is rendering (i.e. the login page, the application main page after login, user clicks here and there on the main page (opening/clicking on various tabs, links, tables, buttons, etc.), and finally the logout page)?
I'm able to get the screenshot from the Xvfb_screen<N> file, which is getting created under the -fbdir folder (what we specify while running Xvfb via a Jenkins job), but the screenshot is a black page if the test runs successfully (this can be due to the 2nd bullet I mention below), or it's a valid single-page image screenshot (if an error is encountered during the test run).
I'm trying to get all the pages which the automated Selenium tests are rendering (the config file I passed to Gradle as a -D parameter has URLs / user name / browser, version etc info in it). PS: It's not just for some random URL that I'm trying to get an image screenshot using Xvfb DISPLAY virtual frame buffer.
During the test, I see there's a valid virtual framebuffer file, with a valid size.
For ex: While Jenkins job is in progress and running Gradle test task and Xvfb plugin has started a new xvfb instance, I see:
/production/JSlaves/kobaloki2_1/xvfb-2015-02-04_01-16-37-6170319257811815857.fbdir/Xvfb_screen0
but as soon as the test is complete (or errors out), this file is getting deleted from this xxxx.fbdir folder and there's no file at all.
Why is this file getting deleted?
If it remained there, then I could use the xwd/xwud commands and other tools (imagemagick convert, etc.) to create an image file as a post-build action, or even within the BUILD section after the "Invoke Gradle" step.
The following command will create a .png image file of the Firefox screen (just a single-page screenshot), assuming Xvfb is running on DISPLAY=:107:
xwd -root -display :107 | convert xwd:- /tmp/capture2.png
and the following is the Xvfb process (which is still running, with a valid Xvfb_screen**** file in it) that was created by the Jenkins job, where the Xvfb plugin is configured with offset base 100 and 7 is the node/build number, thus making :107 the DISPLAY number:
u10002 30717 19950 1 01:16 ? 00:00:00 Xvfb :107 -screen 0 1024x768x8 -fbdir /production/JSlaves/kobaloki2_1/xvfb-2015-02-04_01-16-37-6170319257811815857.fbdir
I'm not running Xvfb / imagemagick etc. just to get an image of a URL (e.g. www.google.com); I'm trying to get screenshots of all the pages that a test renders behind the Xvfb in-memory virtual framebuffer/file during the test run.
Are there any other tools (simple enough to install without messing up the Linux server) which can achieve the same thing (capturing screenshots of all the pages that a test renders behind Xvfb/Firefox on a Linux server in a headless way)?
I also tried Selenium Grid server, but FF is acting up there (for some reason), so I'm trying to run these tests using Jenkins, Gradle, and the Xvfb plugin on a Linux server (headless mode) with the Firefox browser, and planning to have N executors to run multiple runs of these tests and finally capture the results per run.
I'm archiving the artifacts (if any) and using Image Gallery plugin as well, but don't have the images for all the rendered pages which ran in Selenium behind Xvfb/firefox.
Any inputs are greatly appreciated.
Thanks.
If you're running with Selenium then you could use driver.getScreenshotAs()
http://docs.seleniumhq.org/docs/04_webdriver_advanced.jsp
Set this at the end of a step or method where you want a screenshot, and write the output to disk.
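For reference, here is a minimal sketch of the same idea in Python (the tests in the question are driven by Gradle/JUnit, where the equivalent call is TakesScreenshot.getScreenshotAs(); the helper name, directory, and label below are made up for illustration):

import os
import time

def capture(driver, out_dir="screenshots", label="step"):
    # Save a PNG of whatever the driver is currently rendering; call this at
    # the end of each step you want captured.
    os.makedirs(out_dir, exist_ok=True)
    filename = os.path.join(out_dir, "%s_%d.png" % (label, int(time.time())))
    driver.save_screenshot(filename)
    return filename

Archiving the resulting directory as a Jenkins artifact then gives you one image per step instead of relying on the framebuffer file.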
OK, this is what I did. This approach doesn't require any change to the source code of the project.
Installed imagemagick, i.e. yum install imagemagick on RHEL.
Created a script on the target server, and it works now. All I do is this: in the Jenkins job, once the Xvfb instance has been started (using the Xvfb plugin in Jenkins), a second before running the Selenium GUI tests via Gradle (or any build tool), I call the following script and pass the parameters (the DISPLAY variable value is available to the Jenkins job since we are using the Xvfb plugin in it). At the end of the tests, the script exits automatically (as the xwd command doesn't get any more input, so it exits gracefully), and finally I publish the images and the .mp4 (video) file on Jenkins (as a sidebar link to show the test results/video) and archive the artifacts (.png image files using the Image Gallery plugin, and the .mp4 file).
NOTE: This requires that your machine has imagemagick, xwd, and ffmpeg installed. If the options passed to any commands differ according to your OS, then tweak them accordingly. The framerate value in the ffmpeg command can be a fraction, i.e. 1/5 or 0.5 or 15 or anything you want (try it and see what you get).
It's up to you whether you want to archive this big amount of data or not. You can do it if you have enough space and if your Jenkins job has good old-build cleanup/retention policies.
#!/bin/bash
##
## This script will capture Screenshot (every 0.1 seconds) of an automated GUI (for ex: Selenium tests) tests running behind a HEADLESS Xvfb display instance.
## Then, it'll create a mp4 format movie using the captured screenshots.
##
## Machine where you run this script, should have: Xvfb service running, a session started by Xvfb plugin via Jenkins, xwd,ffmpeg OS commands and imagemagick (utilities).
## - For ex, try this on RHEL to install imagemagick: yum install imagemagick
##
## Variables
ws=$1; ## Workspace folder location
d=$2; d=$(echo $d | tr -d ':'); ## Display number associated with the Xvfb instance started by Xvfb plugin from a Jenkins job
wscapdir=${ws}/capturebrowserss; ## Workspace capture browser's screen shot folder
if [[ -n $3 ]]; then wscapdir=${wscapdir}/$3; fi ## If a user pass a 3rd parameter i.e. a Jenkins BUILD_NUMBER, then create a child directory with that name to archive that specific run.
i=1;
rm -fr ${wscapdir} 2>/dev/null || ( echo - Oh Oh.. Cant remove ${wscapdir} folder; echo -e "-- Still exiting gracefully! \n"; exit 0);
mkdir -p ${wscapdir}
while : ; do
xwd -root -display :$d 2>/dev/null | convert xwd:- ${wscapdir}/capFile_${d}_dispId`printf "%08d" $i`.png 2>/dev/null;
if [[ ${PIPESTATUS[0]} -gt 0 || ${PIPESTATUS[1]} -gt 0 ]]; then echo -e "\n-- Something bad happened during xwd or imagemagick convert command, manually check it.\n"; exit 0; fi
((i++)); sleep 0.1;
done
## Stitch the captured frames into an mp4; the input pattern must match the filenames written by the loop above.
ffmpeg -r 5 -i ${wscapdir}/capFile_${d}_dispId%08d.png ${wscapdir}/out_byRateOf5.mp4 2>/dev/null || echo -e "\n-- Some error occurred (may be too many files opened), exiting gracefully!\n";
Can I get an interactive JS debugger working on PhantomJS and/or CasperJS?
I didn't solve this entirely, but I definitely reduced the pain.
PhantomJS provides a command line argument to enable webkit's remote debugger. AFAIK, PhantomJS launches a server and dumps the script into the <head> of a webpage with the familiar in-browser debugger. It's actually pretty nice, with breakpoints, etc. However, switching to manually digging around in the terminal for a random command line parameter and the path to your script is seriously irritating.
So, I used IntelliJ's "external tools" feature to launch a Bash script that kills any previous debugging session, launches PhantomJS, and then opens the page up in Chrome.
#!/bin/bash
lsof -i tcp@0.0.0.0:9000 #list anything bound to port 9000
if [ $? -eq 0 ] #if something was listed
then
killall 'phantomjs'
fi
/usr/local/Cellar/phantomjs/2.0.0/bin/phantomjs --remote-debugger-port=9000 $1 &
# --remote-debugger-autorun=yes <- use if you have added 'debugger;' break points
# replace $1 with full path if you don't pass it as a variable.
sleep 2; #give phantomJS time to get started
open -a /Applications/Google\ Chrome.app http://localhost:9000 & #linux has a different 'open' command
# alt URL if you want to skip the page listing
# http://localhost:9000/webkit/inspector/inspector.html?page=1
#see also
#github.com/ariya/phantomjs/wiki/Troubleshooting
The next few lines are settings for IntelliJ, although the above code works just as well on any platform/IDE.
program: $ProjectFileDir$/path/to/bash/script.sh
parameters: $FilePath$
working dir: $ProjectFileDir$
PhantomJS has a remote-debugger-port option you can use to debug your casper script in Chrome dev tools. To use it, simply execute your casper script with this argument:
casperjs test script.js --remote-debugger-port=9000
Then, open up http://localhost:9000 in Chrome and click on the about:blank link that presents itself. You should then find yourself in familiar Chrome dev tools territory.
Since this is a script and not a web page, in order to start debugging, you have to do one of two things before your script will execute:
In the Chrome dev tools page, open the console and execute __run() to actually start your script.
Insert a debugger; line in your code, and run your casper script with an additional --remote-debugger-autorun=yes argument. Doing so with the remote debug page open will run the script until it hits your debugger; line.
There's a great tutorial that explains this all very nicely.