I'm reading about JS engines in modern browsers (V8, SpiderMonkey, Chakra...)
There doesn't seem to be any information (or maybe I missed it) about Ajax (XHR request) optimization.
Is there such a thing?
P.S. Maybe there isn't any space for the optimization itself, and I would like to know if that is the case.
When should one develop a HATEOAS server RESTful API instead of using HTML (resource links, forms, etc.)?
Isn't HTML and a browser good enough as a hypermedia engine?
HTML + HTTP + URI + Browser === The world wide web. So it's pretty good, no joke.
It's not without fault.
HTML's understanding of links is disappointingly limited. No support for idempotent writes. URI Template support for GET only. I'm not super keen on how many different spellings there are for "link".
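For instance, there is no way to express an idempotent write in plain markup; a minimal illustration (the URL and fields are made up):

    <!-- HTML forms only support method="get" and method="post".
         An unknown method silently falls back to GET, so an
         idempotent write (PUT) can't be expressed in markup alone: -->
    <form action="/orders/42" method="put">
      <input type="text" name="status" value="shipped">
      <button type="submit">Update</button>
    </form>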
It's kind of verbose for a hypermedia format; don't get me wrong, built-in text markup is brilliant when you are trying to document what is going on for a human being. But my impression so far is that the same structure starts to get in the way when, as a human being, you want to quickly review the semantic content that your automated agent is consuming.
I call your attention to this quote from RFC 4287:
The primary use case that Atom addresses is the syndication of Web content such as weblogs and news headlines to Web sites as well as directly to user agents.
So a bunch of really smart guys, specifically trying to address use cases directly related to the web, decided to invest a bunch of effort into standardizing a new hypermedia format rather than using the one that was already ubiquitous in their problem domain.
And over the past 10+ years, that format has been widely adopted.
Without adoption, I'm not sure that HATEOAS has much benefit. You don't need a hypermedia API if you control both sides of the conversation (example: JavaScript on the web, i.e. hypermedia with code-on-demand capability, downloading a client that has learned the protocol of a web API via some out-of-band channel).
Evidence would seem to suggest that HTML is not nearly as convenient a format as, for example, any of the JSON-based hypermedia formats.
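To make the comparison concrete, here is roughly what a resource looks like in HAL, one of those JSON-based formats (the resource and its fields are made up):

    {
      "_links": {
        "self":     { "href": "/orders/42" },
        "customer": { "href": "/customers/7" }
      },
      "status": "shipped",
      "total": 19.99
    }

The links and the semantic content sit side by side, with none of HTML's text-markup overhead.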
In conclusion: no, it's not good enough. It might be an acceptable placeholder for the moment, but the JSON hypermedia toolsets are soon going to be sufficiently mature that HTML will be seen as a giant step in the wrong direction.
I'm interested in using Firebase as a data store for the creation of largely traditional, occasionally updated websites, and I am concerned about the SEO implications of rendering content using client-side JavaScript.
I know Google has made headway into indexing some JavaScript content, but am wondering what my best course of action is. I know I have some options:
Render content using 100% client-side JS, and probably suffer some indexing trouble
Build static HTML files on the server side (using Node, most likely) and serve them instead
First, I'm not sure how bad the problem actually is when doing everything client-side (am I solving something that needs solving?). And second, I just wonder if I'm missing some other obvious way to approach this.
Unfortunately, rendering data on the client side generally makes it difficult to do SEO. Firebase is really intended for use with dynamic data, such as user account info, game data, etc., where SEO is not a goal.
That being said, there are a few things you can do to optimize for SEO. First, you can render as much of your site as possible at compile time using a templating tool like Mustache. This is what we did on the Firebase.com website (the entire site is static except for the tutorial and examples).
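A minimal sketch of that approach as a Node build script using mustache.js (the file names and data are hypothetical):

    // build.js - render templates once at build time, serve static HTML.
    var fs = require('fs');
    var Mustache = require('mustache'); // npm install mustache

    var template = fs.readFileSync('page.mustache', 'utf8');
    var data = { title: 'My Site', articles: [{ heading: 'Hello' }] };

    // Crawlers get plain HTML and never need to run any JavaScript.
    fs.writeFileSync('page.html', Mustache.render(template, data));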
Second, if your app uses hash fragments in the URL for navigation (anything after the "#!"), you can provide a separate set of static or server-generated pages that correspond to your dynamic pages so that crawlers can read the data. Google has a spec for doing this, which you can see here:
https://developers.google.com/webmasters/ajax-crawling/docs/specification
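In short: for a URL like example.com/#!products the crawler requests example.com/?_escaped_fragment_=products, and your server answers with a static HTML snapshot. A rough sketch with Node/Express (paths and file names are hypothetical):

    // server.js - serve crawler snapshots per the AJAX crawling spec.
    var express = require('express');
    var app = express();

    app.get('/', function (req, res) {
      var fragment = req.query._escaped_fragment_;
      if (fragment !== undefined) {
        // The crawler's request stands in for example.com/#!<fragment>.
        // (Sanitize `fragment` in real code before building a path.)
        res.sendFile(__dirname + '/snapshots/' + fragment + '.html');
      } else {
        // Normal visitors get the client-rendered page.
        res.sendFile(__dirname + '/index.html');
      }
    });

    app.listen(3000);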
I have to decide on a technique to prevent spam bots from registering on my site. In this question I am mainly asking about negative captchas.
I came to know about many weaknesses of bots but want to know more. I read somewhere that the majority of bots do not render/support JavaScript. Why is that? How do I test that the visiting program can't evaluate JavaScript?
I started with this question: Need suggestions/ideas for easy-to-use but secure captchas.
Please answer that question if you have some good captcha ideas.
Then I got ideas about negative captchas here:
http://damienkatz.net/2007/01/negative_captch.html
But Damien has written that though this technique likely won't work on big community sites (for long), it will work just fine for most smaller sites.
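The basic shape is simple enough to sketch: add a field that humans never see (hidden with CSS) but that a naive bot, filling in every input it finds, will populate. The field names here are made up:

    <style> .hp { display: none; } </style>

    <form action="/register" method="post">
      <input type="text" name="username">
      <!-- Honeypot: invisible to humans, tempting to bots. -->
      <div class="hp">
        <label>Leave this field empty:</label>
        <input type="text" name="email_confirm">
      </div>
      <button type="submit">Register</button>
    </form>
    <!-- Server side: reject any submission where email_confirm
         is non-empty. -->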
So, what are the chances of somebody making site-specific bots? I assume my site will be a very popular one. How safe will this technique be, considering that?
Negative captchas using complex honeypot implementations are described here:
http://nedbatchelder.com/text/stopbots.html
Does anybody know how easily it can be implemented? Are there plugins available?
I read somewhere that the majority of bots do not render/support JavaScript. Why is that?
Simplicity of implementation: you can read a web page's source and post forms with just a dozen lines of code in a high-level language. I've seen bots that are ridiculously bad, e.g. parsing HTML with regular expressions and getting ../ in URLs wrong. But apparently it works well enough.
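To give a sense of scale, a naive form-posting script really is only a dozen lines; a sketch in Node (the host and field names are made up):

    // A deliberately dumb "bot": never parses or runs any JavaScript,
    // just POSTs the form it expects to find. Host/fields are made up.
    var http = require('http');
    var querystring = require('querystring');

    var body = querystring.stringify({ username: 'bob', comment: 'spam' });
    var req = http.request({
      host: 'example.com',
      path: '/register',
      method: 'POST',
      headers: {
        'Content-Type': 'application/x-www-form-urlencoded',
        'Content-Length': Buffer.byteLength(body)
      }
    });
    req.end(body);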
However, running a JavaScript engine and implementing a DOM library is a much more complex task. You have to deal with scripts that do while(1);, that depend on timers, external resources, and CSS, that sniff browsers and do lots of crazy stuff. The amount of work you need to do quickly starts looking like writing a full browser engine.
It's also computationally much more expensive, so it's probably not as profitable for spammers: they can run a dumb bot that silently spams 100 pages/second, or a fully-featured one that spams 2 pages/second and hogs the victim's computer like a typical web browser would.
There's a middle ground: implementing a simple site-specific hack, like filling in a certain form field if a known script pattern is noticed in the page.
So, what are the chances of somebody making site-specific bots? I assume my site will be a very popular one. How safe will this technique be, considering that?
It's a cost/benefit trade-off. If you have high PageRank, lots of visitors, something of monetary value, or something useful for spamming, then some spammer might notice you and decide a workaround is worth his time. OTOH if you just have a personal blog or a small forum, there are a million other unprotected sites waiting to be spammed.
How do I test that the visiting program can't evaluate JavaScript?
Create a hidden field with some fixed value, then write some JS that increments or changes it; you will see the result in the response.
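A minimal sketch of that check (the field name and value are made up):

    <form action="/register" method="post">
      <!-- Starts at a fixed value; a real browser runs the script
           below and changes it before the form is submitted. -->
      <input type="hidden" name="js_check" id="js-check" value="0">
      <input type="text" name="username">
      <button type="submit">Register</button>
    </form>
    <script>
      // A bot without a JS engine never executes this line, so its
      // submission still carries the original value "0".
      document.getElementById('js-check').value = '1';
    </script>

On the server, treat any submission arriving with js_check=0 as a non-JavaScript client. Note that a fixed expected value is trivial for a site-specific bot to hard-code, so in practice you'd derive it per request.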
So this question came up today and I didn't have a specific or scientific answer.
What are the costs associated with using JSF (or Tomahawk, Facelets, etc.) tags in place of traditional HTML tags? My gut reaction is that you should use JSF tags in situations where you need the additional functionality they provide, and use traditional tags when you don't. I also feel like JSF tags would require more resources (since the server has to take them and re-render them as HTML anyway) than plain HTML. Does anybody know what the cost actually is (as far as time and memory)? Also useful: what is the convention in use, pure JSF or a mixture of the two?
Sure there is a cost. Whether that is noticeable or negligible depends on the hardware and the load of the server in question. Profile it and upgrade the server if necessary.
You should however realize that, on the other hand, you save time and cost compared to implementing the same without the help of a component-based MVC framework. That's going to be a lot of boilerplate code gathering the parameters, doing validations, conversions, and updating model values, which is possibly not written as efficiently as in existing and widely used MVC frameworks.
The Sun JSF development team puts performance as high priority and Mojarra is already optimized as much as possible.
Our site http://www.skill-guru.com runs on JSF / Tomahawk / RichFaces on a Tomcat server. We do not see any speed issues here.
As Jeff pointed out, it all gets compiled, so there is not much noticeable difference unless you use a lot of RichFaces or other fancy stuff.
JSF does make your life easier.
A JSF page gets compiled upon first request (or pre-compiled if you specify that in the config). Thus, it's not like the page needs to be parsed every time it's requested. I don't have any specific numbers relating to time/CPU/memory cost, but I'm sure it's negligible.
I'm working on a large site's rewrite and redesign. I have been reading up on HTML 5 and wanted to know what the cons are before adopting it for this design implementation.
The design needs to work in A-grade browsers (yes, including IE6 :( ), so I'm wondering how <footer>, <section>, etc. will be rendered (inline/block, etc.).
I'd also like to know the pros so that I can sell it to any conservatives within the business.
If we disregard the things which are unchanged since HTML 4.01…
Pros? Not a lot. There are a few things which work in a minority of browsers, and a few more which work in a minority of browsers but which, with added JavaScript, can be supported by most browsers in a relatively sensible way.
As for cons…
The whole spec is still a draft, and subject to change.
Practically nothing in the spec is supported consistently across browsers (and faking it with JS fails when JS isn't around)
QA tools are immature and often lag behind the specification
It's useful as something to experiment with, but I wouldn't build a mainstream website with it.
Pros:
The more sites use it, the faster we'll have a reliable spec and support across browsers. So just by building your new site with HTML 5, you help speed up the advancement of web technologies for all of us.
Cons:
Incomplete QA tools
Incomplete native browser support
The argument that the whole spec is still a draft doesn't really count. Just look at CSS. Even the latest changes to the CSS 2.1 recommendation still have draft status.
If you want to use the HTML 5 specific elements, take a look at http://ejohn.org/blog/html5-shiv/. This approach allows you to use the new elements in browsers that don't support them yet.
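The core trick (roughly what the shiv does) is that calling document.createElement once per new element makes old IE's parser recognize and style them:

    <!--[if lt IE 9]>
    <script>
      // Roughly what html5-shiv does: creating each unknown element
      // once makes IE < 9 treat it as a styleable element.
      var els = 'header nav section article aside footer figure'.split(' ');
      for (var i = 0; i < els.length; i++) {
        document.createElement(els[i]);
      }
    </script>
    <![endif]-->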
HTML5 isn’t one thing. There are some parts of HTML5 you can use right now.
For instance, you can change your doctype to the HTML5 one (<!doctype html>). Boom. Your document is now HTML5. Because the HTML5 spec was based on a lot of work figuring out what browsers already do, things like this just work. So, if you prefer the HTML5 syntax, feel free to do that now.
As for the new elements, as has been mentioned, they’re lacking support in IE. You can shim quite a lot of support for HTML5 into IE with JavaScript, if you’re happy with that. Note that unknown HTML elements are displayed as inline by all browsers, so you’d need to add display: block; for new block-level elements yourself for older browsers.
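A small stylesheet rule covers the common cases, e.g.:

    /* Unknown elements default to inline, so explicitly make the
       new sectioning elements block-level for older browsers. */
    header, nav, section, article, aside, footer, figure, figcaption {
      display: block;
    }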
Dive into HTML5 is well worth a read to get you up to speed, particularly Chapter 3.
There are no cons; most of the things will work just as they do in XHTML 1.0 or HTML 4.01. Pros will slowly come over the next few years, bringing more semantics (and somewhat easier understanding of the content by search engine bots, from an SEO point of view). HTML 5 moreover enables designers to use any web fonts (not just the limited basic set such as Arial/Helvetica, Verdana, Times New Roman, etc.).
See these as well:
http://www.alistapart.com/articles/semanticsinhtml5/
http://www.zeldman.com/2009/07/13/html-5-nav-ambiguity-resolved/
http://www.zeldman.com/2009/07/20/web-fonts-html-5-roundup/