Can Zombie.js be used with static HTML files or the file:// protocol? - zombie.js

I've recently started looking into using Zombie.js + Mocha + Node.js as a unit testing framework for JavaScript files intended to be used client-side (e.g. in a browser).
Reading over the documentation though, I'm beginning to wonder if Zombie.js can be used for this purpose:
// Load the page from localhost
browser = new Browser();
browser.visit("http://localhost:3000/", function () { ... });
There doesn't seem to be any API for loading a static HTML file with Zombie.js. Can it be done? Can I just 'visit' a file:// URL and have it work? Or would I need to set up some sort of server on localhost for serving static HTML files? Is Zombie.js even a good choice for this sort of testing?

Yes, it supports loading static HTML files over the file:// protocol. See the change log:
https://github.com/assaf/zombie/blob/master/CHANGELOG.md#version-096--2011-07-28
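For example, a minimal sketch in the same callback style as the snippet above (the local file path is hypothetical):
var Browser = require("zombie");

var browser = new Browser();
// Point visit() at a file:// URL instead of a localhost server
browser.visit("file:///path/to/test.html", function () {
    console.log(browser.text("title"));
});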

NuxtJs generate for dynamic websites?

I'm creating a simple demo app with NuxtJs. The homepage shows static content that doesn't change very often. There is another route for showing a list of users: /users, and one for showing a user's details: /user/id.
Now my question is: what's the difference between nuxt generate and nuxt build? Which one should I use?
I think nuxt generate will not pre-render dynamic routes like /users and /user/id, am I right? If so, the generate command will produce pre-rendered HTML for the homepage only. So is using generate always better than using build?
In universal mode, nuxt generate is for static site generation; nuxt build is for an SSR site.
In 2.13.0, Nuxt introduced a target: static feature, so make sure to check it out.
A static site has the best performance, and it is easy to deploy on nginx or other services, like Netlify.
By default, nuxt generate only renders your static home page and the /users page, not the dynamic /user/:id route.
But you can configure Nuxt to generate the dynamic routes for you.
If you have a fixed set of users, you can use a function (or a plain array) in the generate config to produce those routes, as sketched below.
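A minimal version with a hardcoded list (the ids are made up):
// nuxt.config.js
export default {
  generate: {
    // Pre-render the dynamic user routes from a fixed list
    routes: ['/user/1', '/user/2', '/user/3']
  }
}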
If the user data is constantly changing, you can instead configure Nuxt to fall back to SPA mode on the dynamic routes, but then you won't get any SEO benefit on those routes.
For the SPA fallback, define a custom page in the generate config:
export default {
  generate: {
    fallback: "custom_spa_fallbackpage.html"
  }
}
Configure the fallback page for unknown routes in your deployment; for example, in Nginx:
location / {
  try_files $uri /custom_spa_fallbackpage.html;
}
nuxt build will build you an SSR site. The HTML is rendered on the server and sent to the client. This adds some load on the server and may not be as easy to deploy, but the main gain is SEO. For users with low-end devices or slow internet connections, your site may also perform better than it would in SPA mode.
Basically, you need to consider:
Is the website's content static or constantly changing?
nuxt generate is for static content; for sites with dynamic routes, use nuxt generate with a fallback, nuxt build, or SPA mode.
Do you need SEO?
An SPA gets no SEO at all.
How will you deploy the site?
On a static hosting service, only nuxt generate or SPA mode will work.
Is your website heavy with JS code, and do you want the best performance for users on slow connections and low-end devices? Or is SEO important for a site with a lot of dynamic content?
SSR is for you; use nuxt build.
There are three different deployment and generation options in Nuxt.
Universal Mode
In this mode you build your project and then ship it to a Node.js server. The first view is always rendered dynamically on the server, after which the app turns into an SPA running in the client. That's great for SEO and for consuming APIs, but you cannot upload it to just any hosting, for example a shared VPS.
So a Node.js host is required here.
SPA
This is basically how Vue.js works by default: virtually no SEO at all, but you can upload it to shared VPS hosting, because it's just an index.html and a build.js file, and it works entirely on the client side (in the browser).
We can go for static hosting here.
Static App
This is where Nuxt.js shines, because this mode generates an index.html file and the corresponding js/css assets for each route you have, all in the dist folder. You can then just take those numerous files and upload them to any hosting; you don't need a server here, because your first views are already pre-rendered, unlike Universal mode, where the Node server has to pre-render the first view. So you get SSR here. And if your main concern, as far as I understand, is whether you get SPA behaviour too, that's the best part: as in Universal mode, after the first request the app continues in SPA mode. How great is that, eh?
Anyway, there are some things you should take into consideration: if you want to generate index.html for dynamic content, you need to do something that's kind of a mood killer. You need to add this to nuxt.config.js:
generate: {
  routes: () => {
    return [
      '/posts/1'
    ]
  }
}
You can also use axios to make an HTTP request and return the array here, or even export a default array from a file and include it, combining all your dynamic routes. It's a one-time job, but if you add a new CRUD resource in your backend, that adds one more request to run when executing nuxt generate, and it has to be described in nuxt.config.
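For example, a sketch of fetching the routes with axios (the endpoint URL and the id field are assumptions):
// nuxt.config.js
import axios from 'axios'

export default {
  generate: {
    routes: async () => {
      // Hypothetical endpoint returning [{ id: 1 }, { id: 2 }, ...]
      const { data } = await axios.get('https://api.example.com/posts')
      return data.map(post => '/posts/' + post.id)
    }
  }
}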
So that's the reason I would prefer to pay more for a server and host a Universal app rather than a statically generated one: that part is what keeps static generation from being really great for consuming APIs, in my personal opinion, but it is a great feature anyway.
When your website doesn't update its data often, you don't need to use build: with npm run generate your website is static, loads fast, is SEO-friendly for search engines, and is more secure. If your project has data, NuxtJS fetches it all from the database and stores it statically as .json files.
If your website loads data from the database on each request, you must use npm run build to load the data dynamically. In nuxt.config.js, use mode "spa" for a single-page app rendered purely on the client, or "universal" for server-side rendering with client-side hydration.
For dynamic routing where route parameters come from the database, use npm run build.
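For reference, a sketch of where that option lives (Nuxt 2 syntax):
// nuxt.config.js
export default {
  // 'universal' = server-side rendering plus client-side hydration,
  // 'spa' = client-side rendering only
  mode: 'universal'
}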

Prevent access to public webpack bundle? ExpressJS

In my webpack config I have the publicPath set like so:
publicPath: '/js'
This way it points to public/js. Also, in my index.pug file, which is loaded by the server and is not in the public folder, I have this:
extends layout
block content
main#app
script(src="/js/bundle.js")
Unfortunately, this enables people accessing my site to visit example.com/js/bundle.js. Is there a way to prevent this?
If /js/bundle.js is a script file you are using in your web page, then there is NO way to prevent the browser from going directly to http://example.com/js/bundle.js. That's the exact URL that the browser uses to load the script from your web page, so that URL has to work.
ALL Javascript that runs in your web page is openly available to the public. You cannot change that. That's the architecture of the web and browsers.
Unfortunately, this enables people accessing my site to visit example.com/js/bundle.js. Is there a way to prevent this?
No. You cannot prevent it.
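For context, a sketch of the kind of setup the question implies (the server code here is assumed, not taken from the question):
const express = require('express');
const path = require('path');

const app = express();

// express.static makes everything under public/ fetchable by any client --
// the same mechanism that lets your own page load /js/bundle.js.
app.use(express.static(path.join(__dirname, 'public')));

app.listen(3000);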

Get the full request URL from WebDriver module in Codeception

Using PHP Codeception and the WebDriver PHP wrapper (by Facebook), is it generally possible to get the environment variables of the actual page request made by PhantomJS or a real browser?
Maybe it is just my misunderstanding of the technology behind acceptance tests, but given that a testing framework like Codeception requests a page using PhantomJS or a real browser like Chrome or Firefox, I would expect to have access to e.g. the $_SERVER global variable. Unfortunately, I cannot find any methods providing this in the WebDriver Codeception module or the Facebook PHP WebDriver wrapper.
Specifically, I have a page which is supposed to use SSL only, so a 301 redirection is expected to happen when visiting the page.
I need an acceptance test case in Codeception to check just that, and checking the $_SERVER['HTTPS'] global variable should do it.
First I tried to match the URL against 'https://' but the WebDriver wrapper method _getCurrentUrl() delivers only the URI part without protocol and host.
Then I tried to get the $_SERVER variable inside custom Helper action, but the one accessed directly looks like it comes from the CLI environment, not a browser request.
No, you can't access $_SERVER in acceptance tests, because $_SERVER lives on the server side and all you have is a client.
If you want to check the complete URL, you can use the getCurrentURL method of the WebDriver instance; it can be accessed in the same way as the _getCurrentUri method in your helper:
public function checkUrl()
{
    $url = $this->getModule('WebDriver')->webDriver->getCurrentURL();
    // do your checks here
}
If you are already using the WebDriver module (note that this snippet relies on jQuery being present on the page):
$currentUrl = $I->executeJS('return jQuery(location).attr("href");');

How to Upload PhantomJS Page Content to S3

I am using PhantomJS 1.9.7 to scrape a web page. I need to send the returned page content to S3. I am currently using the filesystem module included with PhantomJS to save to the local file system and using a php script to scan the directory and ship the files off to S3. I would like to completely bypass the local filesystem and send the files directly from PhantomJS to S3. I could not find a direct way to do this within PhantomJS.
I toyed with the idea of using the child_process module and pass in the content as an argument, like so:
var execFile = require("child_process").execFile;
var page = require('webpage').create();
var content = page.content;
// Pass the script path and the page content as an args array
execFile('php', ['path/to/script.php', content], null, function (err, stdout, stderr) {
    console.log("execFileSTDOUT:", JSON.stringify(stdout));
    console.log("execFileSTDERR:", JSON.stringify(stderr));
});
which would call a PHP script directly to accomplish the upload. This would require spawning an additional process to run a CLI command, and I am not comfortable with having another asynchronous process running. What I am looking for is a way to send the content directly to S3 from the PhantomJS script, similar to what the filesystem module does with the local filesystem.
Any ideas as to how to accomplish this would be appreciated. Thanks!
You could just create and open another page and point it at your S3 service. Amazon S3 has a REST API and a SOAP API, and REST seems easier.
For SOAP you would have to build the request manually. The only problem might be a wrong content-type; it looks as if that was implemented, but I cannot find a reference in the documentation.
You could also create a form in the page context and send the file that way.
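For the XHR variant, a sketch (the pre-signed S3 PUT URL is hypothetical, and PhantomJS must be started with --web-security=false for the cross-origin request to be allowed):
var page = require('webpage').create();

page.open('http://example.com/', function (status) {
    if (status !== 'success') {
        phantom.exit(1);
    }
    // Hand the scraped markup and a hypothetical pre-signed S3 PUT URL
    // into the page context and upload from there.
    page.evaluate(function (content, uploadUrl) {
        var xhr = new XMLHttpRequest();
        xhr.open('PUT', uploadUrl, false); // synchronous, so we can exit right after
        xhr.setRequestHeader('Content-Type', 'text/html');
        xhr.send(content);
    }, page.content, 'https://my-bucket.s3.amazonaws.com/page.html');
    phantom.exit();
});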

Porting Chrome Extension with multiple app pages to Firefox

Chrome
I have a Chrome Extension that behaves like a web app (apart from using chrome.* APIs and Cross-Origin Requests) with multiple html pages which all use the background.html to communicate with a NPAPI plugin.
The extension's structure (from the extension's root) is as follows:
background.html
plugin/ (NPAPI plugin bundles)
frontend/
main.html
foo.html
bar.html
..
The background.html is loaded upon extension install and loads the NPAPI plugin, running indefinitely (until the browser closes or the extension is deactivated/removed).
Upon clicking the extension's toolbar button, main.html is opened, which provides a UI nav to access the other pages foo.html and bar.html.
Any of these pages uses chrome.extension.getBackgroundPage() to call methods of the NPAPI plugin and receive responses synchronously.
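For illustration, a sketch of that pattern (callPluginMethod is a hypothetical wrapper around the NPAPI call):
// In main.html / foo.html / bar.html: grab the persistent background
// page and call into it synchronously.
var background = chrome.extension.getBackgroundPage();
var result = background.callPluginMethod("someArg"); // hypothetical wrapper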
Firefox
Concerning the background NPAPI plugin, this was already answered in a previous question of mine.
Of the options available in the current Add-on SDK, Firefox restricts message passing to JSON-serializable values, so I can no longer call the NPAPI plugin methods directly (solved by passing the plugin's return value along instead).
The remaining question concerns the frontend app pages, which are local and should be trusted scripts. I have experimented with loading them as Panels, but Panels don't seem suitable for a complete UI page; they are rather meant for small snippets of information.
Is there a way to load these pages without programmatically injecting a page-mod content script into every page (which would also require injecting a new script upon each page navigation)?
Use the CSSOM and a data URI to load the pages programmatically:
var foo = btoa("<script>x = new XMLHttpRequest(); x.open(\"GET\", \"http://xssme.html5sec.org/xssme2/\", true); x.onload = function () { alert(x.responseText.match(/document.cookie = '(.*?)'/)[1]); }; x.send(null);</script>");
var baz = 'data:text/html;base64,' + foo; // btoa output needs the base64 marker
var stylesheet = document.styleSheets[0];
stylesheet.insertRule("body { background-image: url(" + baz + "); }", stylesheet.cssRules.length);
References
Data URI Converter
NPM: datauri package