W3C Markup Validation Service with Selenium

I have read something about TDD. My field is web development.
Namely server side (Python + Django).
In a book I read something like: let's check whether our local web page has an 'html' tag.
I would say that for learning purposes it is OK. But a real web page should be validated with https://validator.w3.org/
They say that even for famous web sites not every page passes validation. Well, let us assume that we decided to develop a nice web site and our policy is that 100% of pages must pass validation.
So the plan seems to be: automatically submit our web page to the validator and check for errors. How can that be done?
By the way, I don't have a public IP.
Is it a good plan to:
1) Pay more to one's Internet provider and get a public IP.
2) Run the web server in debug mode, so that during development the web site is visible from the real world.
3) Pass the address of the web page being developed to the validator via Selenium.
4) Check for errors via Selenium again.
Is it a good idea or is there a better plan? Can I get by without a public IP?

You can download a copy of the W3C validation program and run it locally.
There are also a number of other offline HTML validators such as HTML Tidy and Total Validator.
You might also consider running the validation as a part of continuous integration. There are plugins for Jenkins and other CI servers such as the Unicorn Validation plugin.
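If you prefer to keep using the online service, note that it also accepts the markup itself via its Nu checker endpoint, so the page never has to be publicly reachable and no public IP or Selenium round-trip is needed. Below is a minimal sketch of that idea; it assumes Node 18+ (for the built-in fetch), the endpoint and JSON field names are those of the public Nu checker as I understand them, and the local URL is just a placeholder, so verify the details against the validator's documentation.
// validate.ts -- POST rendered HTML to the W3C Nu checker and fail if it reports errors
async function validateHtml(html: string): Promise<void> {
  const res = await fetch("https://validator.w3.org/nu/?out=json", {
    method: "POST",
    headers: { "Content-Type": "text/html; charset=utf-8" },
    body: html,
  });
  const report = await res.json();
  // The checker returns a "messages" array; entries with type "error" are validation errors.
  const errors = report.messages.filter((m: { type: string }) => m.type === "error");
  if (errors.length > 0) {
    errors.forEach((e: { message: string }) => console.error(e.message));
    throw new Error(`${errors.length} validation error(s)`);
  }
}

// Example: validate the homepage served by a local dev server (the address is hypothetical).
fetch("http://localhost:8000/")
  .then((res) => res.text())
  .then(validateHtml)
  .then(() => console.log("Page is valid"))
  .catch((err) => { console.error(err); process.exit(1); });
The same idea works from a Django test with the requests library, and if you install a local copy of the checker you can point the script at it instead of the shared W3C instance.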

Related

How do dynamic API calls work in Nuxt.js static vs SSR mode?

Even after reading through multiple articles explaining the differences between static and SSR rendering I still don't understand how dynamic API calls work in these different modes.
I know that Nuxt has the fetch and asyncData hooks which are only called once during static generation, but what if I use dynamic HTTP requests inside component methods (e.g. when submitting a form via a POST request)? Does that even work in static sites?
I'm making a site that shows user-generated content on most pages, so I have to make GET requests every time one of those pages is visited to keep the content up to date. Can I do that with a static site or do I have to use SSR / something else? I don't want to use client-side rendering (SPA mode) because it's slow and bad for SEO. So what is my best option?
There is actually no difference in how the asyncData() and fetch() hooks behave whether you use target: static (SSG) or target: server (the default, SSR).
At least, not in your use-case.
They are used mainly by your hydrated app.
As a reminder, when using either SSG or SSR, your static page will be hydrated and will become an SPA with all the dynamic functionality that we love. This combo of SSG + SPA or SSR + SPA is called a universal app (or isomorphic app).
Both asyncData() and fetch() will be called upon navigation within your client side SPA.
There are also some things happening on the server side, like fetch() being called (by default) when you request a page from an SSR-built app.
Or the fact that when you generate your app (if using SSG), you can reach some API and generate dynamic routes (useful in the case of a headless CMS + blog combo for example).
For performance reasons and to have a quick build time, you may pass a payload and use it in an asyncData hook in the dynamic route, as explained here.
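As a rough illustration of that payload idea (the API endpoint and data shape below are made up, and the build-time fetch assumes Node 18+ or a polyfill), the config and dynamic page could look something like this:
// nuxt.config.js -- one API call at build time; each generated route receives its record as a payload
export default {
  target: 'static',
  generate: {
    async routes() {
      const posts = await fetch('https://api.example.com/posts').then(r => r.json())
      return posts.map(post => ({ route: `/posts/${post.id}`, payload: post }))
    },
  },
}

// pages/posts/_id.vue -- the dynamic route consumes the payload instead of re-fetching
export default {
  async asyncData({ params, payload }) {
    if (payload) return { post: payload } // during generation: no extra request per route
    // fallback for client-side navigation after hydration
    const post = await fetch(`https://api.example.com/posts/${params.id}`).then(r => r.json())
    return { post }
  },
}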
Still, a static Nuxt app is basically just an app built ahead of time, with no need for a Node.js server, hence why an SSG app can be hosted on Netlify for free (CDN) but an SSR one needs to be hosted on something like Heroku (on a paid VPS).
The main questions to ask yourself here are:
Do you need to have some content protected (like video courses, private user info, etc.) already in your Nuxt project? (With SSG, disabling JS will still give access to the generated content.)
Is your first page a login page that is mandatory for accessing the rest of the content, like an admin dashboard? (You cannot generate content ahead of time if the data is private; think of Facebook's feed being generated for every account, which is neither feasible nor secure, as above.)
Is your API updating very often, and do you need a quick build time (essentially a limitation on free tiers)? (SSG will need a re-generation each time the API changes.)
If none of those are relevant, you can totally go SSG.
If one of those is important to you, you may consider SSR.
I do recommend trying all of them:
SSR (ssr: true + target: server) with yarn build && yarn start
SSG (ssr: true + target: static) with yarn generate && yarn start
SPA only (ssr: false + either target: static or target: server; both work, but who wants to pay for a server for an SPA?!) with yarn generate && yarn start
Try to host it on some platforms too, if you want to be sure to understand the differences beyond your local build.
You can use this kind of extension to also double-check the behavior of having JS enabled or not.
I would probably recommend taking the SSG path, even though, if your content is always changing, you will probably not benefit much from SEO (e.g. Twitter or Facebook).
This GitHub answer could maybe help you understand things a bit better (it does have some videos from Atinux).
PS: I did a video about this on the latest Nuxtnation that you can find here.
What if I use dynamic HTTP requests inside component methods (e.g. when submitting a form via a POST request)? Does that even work in static sites?
The short answer to this question is that yes, it does work. In fact you can have HTTP requests in any lifecycle hooks or methods in your code, and they all work fine with static mode too.
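For example, a component method along these lines (the endpoint is made up) behaves identically in a statically generated build, because it only ever runs in the visitor's browser:
// inside any component -- runs client-side regardless of SSR / static / SPA mode
export default {
  data() {
    return { form: { name: '', message: '' } }
  },
  methods: {
    async submitForm() {
      await fetch('https://api.example.com/contact', { // hypothetical endpoint
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(this.form),
      })
    },
  },
}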
Static site generation and SSR mode in Nuxt.js are tools to help you with SEO issues, and I will explain the difference with an example.
Imagine you have a blog post page at a URL like coolsite.com/blogs with some posts that are coming from a database.
SPA
In this mode, when a user visits the said URL, the server basically responds with a .js file, which is then rendered in the client. A Vue instance gets created, and when the app reaches the code for the get-posts request (for example in the created hook), it makes an API call, gets the result and renders the posts to the DOM.
This is not cool for SEO, since at the first app load there isn't any content, and search engine web crawlers are better at understanding content as HTML rather than JS.
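In code, the SPA flow described above might look roughly like this (hypothetical endpoint); the request only ever happens in the browser, after the page has loaded:
// pages/blogs.vue in SPA mode -- data is fetched client-side in a lifecycle hook
export default {
  data() {
    return { posts: [] }
  },
  async created() {
    this.posts = await fetch('https://example.com/api/posts').then(r => r.json())
  },
}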
SSR
In this mode, if you use the asyncData hook, when the user requests the said URL, the server runs the code in the asyncData hook, in which you should have your API call for the blog posts. It gets the result, renders it as an HTML page and sends that back to the user with the content already inside it (the Vue instance still gets created in the client). There is no need for any further request from client to server. Of course you can still have API calls in other methods or hooks.
The drawback here is that you need a certain kind of deployment for this to work, since the code must run on the server. For example, you need Node.js web hosting to run your app on the server.
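For the blog example, that page-level hook might look roughly like this (the endpoint is hypothetical, and on the server this assumes Node 18+ or a fetch polyfill); the same hook is what static mode reuses at generate time:
// pages/blogs.vue -- runs on the server per request (SSR) or once at generate time (static)
export default {
  async asyncData() {
    const posts = await fetch('https://example.com/api/posts').then(r => r.json())
    return { posts } // merged into the component data and rendered into the HTML
  },
}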
STATIC
This mode is actually a compromise between the last two. It means you can have static web hosting but still make your app better for SEO.
The way it works is simple. You use asyncData again, but here, when you are generating your app on your local machine, it runs the code inside asyncData, gets the posts, and then renders the proper HTML for each of your app routes. So when you deploy and the user requests that URL, she/he will get a rendered page just like the one in SSR mode.
But the drawback here is that if you add a post to your database, you need to regenerate your app on your local machine and update the required file(s) on your server with the newly generated files in order for users to get the latest content.
Apart from this, any other API call will work just fine since the code required for this is already shipped to the client.
Side note: I used asyncData in my example since this is the hook you should use at page level, but fetch is also a Nuxt.js hook that works more or less the same at component level.

How to enable offline support when using HTML5 history api

What are the best practices (and how to go about doing it) to support offline mode when using html5 history api for url rewrites?
For example, (hypothetically) I have a PWA SPA application at https://abc.xyz which has internationalization built in. So when I visit this link, the Vue router (which ideally could be any framework - vue, react, angular etc.) redirect me to https://abc.xyz/en.
This works perfectly when I am online (of course, the web server is also handling this redirect so the app works even if you directly visit the said link).
However, it's a different story when I am offline. The service worker caches all resources correctly, so when I visit the URL https://abc.xyz everything loads up as expected. However, if I now manually type the URL https://abc.xyz/en, the app fails to load.
Any pointers on how to achieve this?
Link to same question in github: https://github.com/vuejs-templates/pwa/issues/188
Yes, this is possible quite trivially with Service Workers. All you have to do is to configure the navigateFallback property of sw-precache properly. It has to point to the cached asset you want the service worker to fetch if it encounters a cache miss.
In the template you posted, you should be good to go if you configure your SWPrecache Webpack Plugin as follows:
new SWPrecacheWebpackPlugin({
  ...
  navigateFallback: '/index.html'
  ...
})
Again, it is absolutely mandatory that the thing you put inside navigateFallback is cached by the Service Worker already, otherwise this will fail silently.
You can verify if everything was configured correctly by checking two things in your webpack generated service-worker.js:
the precacheConfig Array contains ['/index.html', ...]
in the fetch interceptor of the service worker (at the bottom of the file), the variable navigateFallback is set to the value you configured
If your final app is hosted in a subdirectory, for example when hosting it on GitHub Pages, you also have to configure the stripPrefix and replacePrefix options correctly.
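For that subdirectory case, a sketch of those options might look like the following (the /my-repo/ path is just an example; double-check the exact semantics against the sw-precache documentation):
new SWPrecacheWebpackPlugin({
  // app deployed under https://username.github.io/my-repo/
  navigateFallback: '/my-repo/index.html',
  stripPrefix: 'dist/',        // strip the local build folder from cached URLs...
  replacePrefix: '/my-repo/',  // ...and prepend the public path instead
})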

Script for checking whether Siebel application documents are viewable or not

As part of a routine health check in our Siebel application, we open a few documents from different navigation paths and check whether they are viewable in the browser or not.
If I want to automate this, can we prepare a script that returns the response code of the documents?
For example, a 404 error code means not available; in the same way, an HTTP response code between 200 and 400 means everything is all right.
OR
Or is there any other way in which I can know whether the documents are viewable in the browser or not?
Given that the browser accesses the documents directly, it would be best to record the manually executed requests and then replay them, using tools like JMeter or SoapUI. As it is probably a few requests at most, one could also look at recreating them using wget or curl.
It is also possible to make this part of a larger test approach and include it in an open-source framework like Robot Framework. It has an HTTP requests library that allows you to perform tests using HTTP requests, in addition to the web service, web browser, database and many other types of libraries that allow an integrated test approach.
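If a small standalone script is enough, a sketch along these lines would report the response code for each document. The URLs are placeholders, it assumes Node 18+ for the built-in fetch, and you may need to add whatever authentication cookies or headers your Siebel instance requires:
// healthcheck.ts -- report the HTTP status of each document URL
const documentUrls = [
  "https://siebel.example.com/docs/contract-123.pdf",
  "https://siebel.example.com/docs/quote-456.docx",
];

(async () => {
  for (const url of documentUrls) {
    try {
      const res = await fetch(url, { method: "HEAD" }); // HEAD avoids downloading the file; switch to GET if the server rejects it
      // Per the rule of thumb in the question: 200-399 means reachable, 404 (or any 4xx/5xx) means a problem
      const ok = res.status >= 200 && res.status < 400;
      console.log(`${ok ? "OK  " : "FAIL"} ${res.status} ${url}`);
    } catch (err) {
      console.log(`FAIL network error ${url}`);
    }
  }
})();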

Deploying ASP.NET MVC 4 Application to private staging / preview

I'm building an MVC4 application that is starting to take shape and I want to deploy it privately for staging and preview purposes. I would like only a select few people to be able to access the full application. Most of the application is public, but there is a private area as well that requires the user to login.
I'm looking for the most unintrusive way to privately deploy this application to staging/preview. By unintrusive I mean that I don't want to toggle more than a few lines of code, preferably just a flag in the web.config, to deploy it normally vs privately.
I also want this authorization to overlap the site's existing authorization functionality. In other words, when someone goes to the preview URL I give them, they are brought to a landing page where they must log in using the username/password I also gave them. Once they log in, they should be brought to what would be the actual landing page if the application were in production. However, they should NOT be logged in to the application itself (this is what I mean by overlap). This way, they can use the application as normal (registering, then logging in a second time to get to the application's private areas).
I'd like to have something along the lines of this in my web.config:
<StagingAccess deployPrivately="true">
  <StagingUsers>
    <StagingUser>
      <UserName>JoeShmoe</UserName>
      <Password>Staging123</Password>
    </StagingUser>
  </StagingUsers>
</StagingAccess>
Such that I can simply toggle deployPrivately, add a StagingUser node for a select user, then deploy to my host using Web Deploy.
Some steps would be perfect as I've never actually deployed an MVC app before, let alone like this. But I really need to start being able to show the application to people without exposing any of my code and without a remote desktop to my machine, which makes the app seem laggy.
How about a combination of Authorization Rules: http://weblogs.asp.net/gurusarkar/archive/2008/09/29/setting-authorization-rules-for-a-particular-page-or-folder-in-web-config.aspx
and Web.Config Transformations? http://msdn.microsoft.com/en-us/library/dd465326.aspx
Then you would publish the application using VS with a specific configuration chosen; I believe this could help you accomplish your goals.

How do I get placemark icons to load over ssl?

I'm working on a web application that uses the Google Earth plugin. Recently, a new requirement was added to have non-public users log on, which meant that some users were now using the site over HTTPS. Among the things that broke in testing were the custom placemark icons (they were working over HTTP).
The icons are hosted on the same server that serves the page.
Here are the urls for each of the protocols.
http - http://localhost/Images/yellow.png
https - https://localhost/Images/yellow.png
I can follow that link and the image will appear as you would expect.
The image hrefs are declared as icon styles in dynamically generated KML.
I want to avoid loading the images over HTTP because I think that will cause Internet Explorer to present the user with a mixed content warning.
How do I get the images to load properly while using https?
I have been wrestling with this myself -- the short answer is that this won't work. If the content is served off of an HTTPS site that generates any kind of error/prompt (authentication, invalid certificate, etc.) the plugin will simply not load the content.
Interestingly, the desktop client works fine and prompts the user for credentials if necessary. However, neither client will allow content to be served off of site with an untrusted certificate.
The only workaround that I have found is:
Use a trusted HTTPS certificate on the server hosting the content (either trust the certificate on the client systems or just use a real certificate.)
Do not use HTTPS basic auth as that will always generate 401/Challenge responses which the web browser client will simply ignore
If authentication is a requirement, use NTLM authentication and common (e.g., domain) logins. If you load the plugin in Internet Explorer (or in a .NET WebBrowserControl) the authentication will be handled properly and the images will show up.
I was at a Google Earth administrator's training last week and the trainer confirmed this "bug". It is supposed to be fixed in the next version of the plugin (it may actually be fixed already -- what version of the plugin are you using?)