12 / Aug / 2013 by Shreyance Jain

Recently, programmers have become more inclined to build web applications with AJAX than with static HTML pages, which makes applications faster and richer. In doing so, however, an application loses its indexability, resulting in modern, user-friendly yet less searchable apps.

Let’s take an example of a website: for the browser (and the user), the site is full of content generated by executing the relevant .js files, but for a crawler it’s just a few lines, because the crawler cannot execute JavaScript. To make content visible to crawlers in the same way it is to browsers, the server needs to provide the crawler with an HTML snapshot (HTML created from the static content pieces as well as the output of the JavaScript).

Currently, webmasters create a “parallel universe” of content: users of JavaScript-enabled browsers see content that is created dynamically, whereas users of non-JavaScript browsers, as well as crawlers, see static content created offline.

Another way to do this is with the Crawlme middleware, which makes your JavaScript-driven pages indexable by search engines. Crawlme is developed by Optimal Bits Sweden AB.

Crawlme ($ npm install crawlme)

Crawlme generates static HTML snapshots of a JavaScript web application on the fly and has a built-in, periodically refreshing in-memory cache, so even though snapshot generation may take a second or two, search engines receive the snapshots quickly. This is a great help for SEO, since response time is one of the factors in the page-rank algorithm. To render snapshots, Crawlme uses the excellent headless browser Zombie.js, which does not require an external browser.

To read more about how to install and implement Crawlme, read our blog post: How to Use CRAWLME
