Drupal Module: Import_HTML

Synopsis

Facility to import an existing, static HTML site structure into Drupal Nodes.

This is done by allowing an admin to define a source directory of a traditional HTML website, and importing (as much as possible) the content and structure into a Drupal site.

Files will be absorbed completely, and their existing cross-links should be maintained, whilst the standard headers, chrome and navigation blocks should be stripped and replaced with Drupal equivalents. The old structure will be inferred and imported from the old folder hierarchy.

Requirements

Before you begin

See the setup section for details. Because of the number of settings, this is not just a point-and-go module.

Usage

This module uses no database tables of its own. It requires XML support on the server; this can be tricky if it's not already enabled.

Given a working system, the process is thus:

  1. Visit the admin/settings/import_html page and check the settings.
  2. If all values look OK for now, you can try a test run by visiting admin/import_html/demo. Choose a 'page' sort of page, not a portal or layout-rich sort of thing. The demo will scrape the given file and import it into the system. Some of the new navigation features will not be apparent yet, as they apply only to large-scale imports, or at least imports that have a defined siteroot.
  3. Try opening the 'admin/import_html' main page and defining a source folder. Enter the root path of the site you wish to import and continue. The UI should display a treeview of the files, which you can select individually for import.
    It's recommended to just try one page at a time to begin with.
    Note: If your server has PHP open_basedir restrictions in effect, the webserver/PHP process may be prevented from accessing files outside of webroot. See the Trouble section below.
  4. Upon importing a page, a new node should be created. The object of the import templates is to trim the content block down to its unique value. This will probably require some template tuning, so make a new template (copy the existing html2drupal.xsl), select it (enter the new name in the admin page), tweak the XSL and try again.
    If you are extremely lucky, or don't care too much about the extras, you can go straight to bulk import.
  5. If you need to check how the images are turning up, they can safely be imported as well using the same interface. They will be copied, structured in the same folders they were in originally, into the directory configured in the admin settings. Imported pages will have their links rewritten to find them there.
    Two types of content are imported, depending on file suffix: 'pages' (html), which become nodes, and everything else, which becomes 'files'. File suffixes are not good enough for this; should the suffix list be editable, or should I scan the files themselves?
  6. When you are happy that the body field is as tidy as it's going to get (test several pages), you can try a bulk import. This may fill up your node collection a bit, so be prepared to delete them if things don't work perfectly first time. Many static sites have whole sections that are not structured the same as the rest of the pages.
  7. On import, a menu structure and a bunch of aliases will be auto-generated. These can easily be adjusted manually. For instance, the menu branches will initially be named after the document titles found in the directory structure, which is great if you used a decent folder hierarchy, but some of the labels can probably be tidied up a bit. For that matter, after import you can safely re-arrange the menu structure altogether, shifting whole sections to different places without worrying about links breaking. These changes will show through in the menu, sitemap and breadcrumbs, but not in the path alias, which will remain old-style. There appear to be issues navigating to pages deep in a menu where the parent has not been imported or created yet. This is normal Drupal behaviour when making menu links to non-existent paths.

By following these instructions, you should probably be able to end up with a version of the old content in the new layout. For large sites (200+ pages) some extra tuning may be necessary, e.g. using different templates for different sources.

Incremental imports, processing just a section at a time, or repeated imports as you tune the content or the transformation, should be non-destructive. Re-importing the same file will retain the same node ID and path, and any Drupal-specific additions made so far.

Intent / Theory

This is intended as a run-once sort of tool that, once tuned right on a handful of pages, can churn through a large number of reasonably structured, reasonably formatted pages, doing a lot of the boring copy & paste that would otherwise be required.

The existing file paths of the source content will be used to create an automatic menu, and therefore a hierarchical structure identical to the source URLs. With path.module, appropriate aliases will also be created, such that this will enable a Drupal instance to TRANSPARENTLY REPLACE an existing static site without breaking any bookmarks!

Methodology Overview / Tasks

A peek under the hood at what happens, and in what order.

Notes

The more valid and more homogeneous the source site is, the better. A site built with strict XHTML and useful semantic tags like #title and #content could be imported swiftly. One with a variety of table structures may not...
Of course, this tool is supposed to be useful when dealing with messy, non-homogeneous legacy sites that need a makeover. Sometimes regular expression parsing may come to the rescue for content extraction, but that's not implemented yet.

I'm choosing XSL because I know it, it's powerful for converting content out of (well-structured) HTML, and I've had success with this approach in the past. Others may object to this abstract technology (XSL is NOT an easy learning curve), but the alternative options include RegExp weirdness or cut and paste (which I may patch on as alternative methods - or someone else can have a go). I've also used both of those approaches successfully in bulk site templating (over THOUSANDS of pages), but it's my call. Making your own XSL import template is non-trivial.

In the interests of good housekeeping, imported files with spaces in their filenames will be renamed to use underscores. Although spaces can be worked around, they just cause trouble in website URLs. Thus, references to the spaced (or %20-encoded) versions of the files may break. This rewrite can be disabled in the settings.
Filenames are assumed to be, and will remain, case-sensitive.

Guide

Installation/setup

XML Support

The module can use either the PHP4 or PHP5 XSLT implementations (which are quite different), but the relevant PHP modules do have to be enabled somehow. This can be tricky, as they often require extra libraries to be put in your path somewhere. Please don't ask me for instructions; every time I've done it, it has hurt my head.

If you can see the words XSL or XSLT in your phpinfo() output, you should be fine. The module will test and warn you anyway.

PHP 4.3 has at least one known bug.

HTMLTidy Setup

The module also uses the famous HTMLTidy tool. There is now a PHP module that implements HTMLTidy natively, but that needs to be installed and enabled. If you don't have access to that, we can run it from the command line. Find the appropriate binary release of HTMLTidy for your system and place it in your PATH, in the module's install directory, or wherever you like, then define the path to the executable in the settings. This works fine under Windows too.

If this sounds complicated, and you have only limited access to a Unix host but need to use it, there is an auto-installer that can attempt to set up Tidy even on a box you don't have login access to.

Import Templates

An import template defines the mapping between existing HTML content and our node values. It uses the XSL language because of the power it has to select bits of a structured document. For example, select="//*[@id='content']" will find the block anywhere in the page, of any type, with the id 'content', and select="//table[@class='main']//td[3]" will locate the third TD in the table with class 'main'. Both these examples would be common when trying to extract the actual text from a legacy site.
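
For illustration, a template rule built around those expressions might look like the sketch below. This is an assumption-laden example, not the shipped html2drupal.xsl; if Tidy hands the stylesheet namespaced XHTML, the selects may need an xhtml: prefix, as in the customtemplate2simplehtml.xsl example further down.

<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">

  <!-- Sketch only: build the minimal intermediate HTML doc the importer
       expects, using the example XPath expressions above. -->
  <xsl:template match="/">
    <html>
      <head>
        <title><xsl:value-of select="//title"/></title>
      </head>
      <body>
        <!-- Grab the block with id 'content', wherever and whatever it is. -->
        <xsl:copy-of select="//*[@id='content']"/>
        <!-- For a table-based layout, a select like
             //table[@class='main']//td[3] would be used here instead. -->
      </body>
    </html>
  </xsl:template>

</xsl:stylesheet>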

You can begin with the example XSL template; it contains code that attempts to translate a page containing the usual HTML structures (either title or h1, and either the div called 'content' or the entire body tag) into a standard, minimal, vanilla, semantically-tagged HTML doc.

It's likely that whatever site you are importing will NOT be shaped exactly as this default template expects. You have to identify the parts of your existing pages that can reliably be recognised as content, then come up with an XPath expression to represent this.

If your source, for example, didn't use nice H1 tags to denote the page title, but instead always looked like

<font size='+2'><B>my page</B></font>

... your template could be made to find it wherever it is in the page, using select="//font[@size='+2']/B", and proceed to use that as the node title.
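
A hedged sketch of what that could look like as a template rule (illustrative only, not one of the shipped templates; it leans on the id-to-field mapping described in the CCK section below):

<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">

  <!-- Sketch: lift the text out of the old <font size='+2'><B>...</B></font>
       construct and present it as an h1 with id 'title', which the importer
       treats as the node title. -->
  <xsl:template match="/">
    <html>
      <body>
        <h1 id="title">
          <xsl:value-of select="//font[@size='+2']/B"/>
        </h1>
        <!-- content extraction rules would follow here -->
      </body>
    </html>
  </xsl:template>

</xsl:stylesheet>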

No, the code is not pretty, and if Regular Expressions are a foreign language to you, this is worse.
But this is why developers have been ranting for the last ten years about using semantic markup!
The uniformity and usefulness of the metadata detected in the source files will play a big part here.

It's easier to develop and test the XSLT using a third-party tool; I recommend Cooktop. Be sure to set the XSL engine to 'Sablotron', which is the one that PHP uses under the hood.

Although it would be possible to configure a logical mapping system to select different import templates based on different content, at this stage the administrator is expected to be doing a bit of hand-tweaking, and predicting all possible inputs is impossible. Some of this sort of logic can, however, be built into the powerful XSL template, if you are good at XSL.
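
For example, the 'use the content div if there is one, otherwise take the whole body' behaviour can be expressed directly in XSL with xsl:choose. A minimal sketch of that shape of logic (not the shipped template):

<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">

  <xsl:template match="/">
    <html>
      <body>
        <div id="content">
          <xsl:choose>
            <!-- Prefer a dedicated content block when the page has one. -->
            <xsl:when test="//*[@id='content']">
              <xsl:copy-of select="//*[@id='content']/node()"/>
            </xsl:when>
            <!-- Otherwise fall back to the whole body. -->
            <xsl:otherwise>
              <xsl:copy-of select="//body/node()"/>
            </xsl:otherwise>
          </xsl:choose>
        </div>
      </body>
    </html>
  </xsl:template>

</xsl:stylesheet>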

Once importing is taking place, you can filter the input even further to improve its structure, for example by removing all redundant FONT tags, or by ensuring that every H1/H2/H3 tag has an associated #ID for anchoring. Yay XSL.
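
A hedged sketch of such a cleanup pass - an identity transform with two extra rules, shown purely as an illustration:

<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">

  <!-- Identity rule: copy everything through unchanged by default. -->
  <xsl:template match="@*|node()">
    <xsl:copy>
      <xsl:apply-templates select="@*|node()"/>
    </xsl:copy>
  </xsl:template>

  <!-- Drop redundant font wrappers but keep whatever they contain. -->
  <xsl:template match="font">
    <xsl:apply-templates select="node()"/>
  </xsl:template>

  <!-- Give each heading a generated id so it can be used as an anchor
       (an existing id, copied afterwards, simply takes precedence). -->
  <xsl:template match="h1|h2|h3">
    <xsl:copy>
      <xsl:attribute name="id">
        <xsl:value-of select="generate-id()"/>
      </xsl:attribute>
      <xsl:apply-templates select="@*|node()"/>
    </xsl:copy>
  </xsl:template>

</xsl:stylesheet>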

Import to CCK

The base functionality supports placing the found content into the $node->body field; placing it into arbitrary CCK fields does not happen naturally, but it is also possible.

If you have a CCK node with (e.g.) fields:

field_text, field_byline, field_image
and your input pages are nice and semantically tagged, e.g.
<body>
  <h1 id='title'>the title</h1>
  <div id='image'><img src='this.gif'/></div>
  <h3 id='byline'>By me</h3>
  <div id='text'>the content html etc</div>
</body>

A mapping from HTML ids to CCK fields will be done automatically, and the content should just fall into place.

  $node->title = "the title";
  $node->field_image = "<img src='this.gif'/>";
  $node->field_byline = "By me";
  $node->field_text = "the content html etc";

In fact, ANY element found in the source text with an ID or class gets added to the $node object during import, although most data found this way is immediately discarded again if the content type doesn't know how to serialize it.
A special case demonstrated here prepends field_ to known CCK field names; normally they get labelled as-is.

If the source data is NOT tagged, you'll have to develop a bit of custom XSL to produce the same effect.

customtemplate2simplehtml.xsl

... xsl preamble ...
  <xsl:template name="html_doc" match="/">
    <html>
      <body>
        ... other extractions ...
        <h3 id="byline">
          <xsl:value-of select="./descendant::xhtml:img[2]/@alt" />
        </h3>
      </body>
    </html>
  </xsl:template>

In this example, the byline we wanted to extract was the alt value of the second image found in the page (a real-world example). This has now been extracted and wrapped in an ID-ed h3 during an early phase of the import process, and should now turn up in the CCK field_byline as desired.
XSL is complex, but magic.

Settings

On the admin/settings/import_html screen, you can (if you wish):

Notes on the Treeview Interface

Files and folders beginning with _ or . are nominally 'hidden', so they are skipped and do not show up in this listing. While it's possible to list a thousand or so files, it may be a good idea to make the listing more selective, to scale to larger sites. Do this by entering the subsection to list before clicking 'list', rather than waiting for every file on the server to be enumerated.

Development / TODO

As mentioned in Usage, this module uses no database tables of its own. Pages are read straight into 'page' nodes. I guess it could feed into flexinode if your import files had extra parsable content blocks, and I've successfully used it to import other random XML formats (RecipeML), although the advantages of doing so are limited.

It's easy to imagine this system set up as a synchroniser that could re-fetch and refresh local nodes when remote content changes. This would involve recording exactly what the source URL was (which isn't currently done), but it would be a fun feature.

I may fork off the page-parsing into a pluggable method, so that a regexp version can be developed alongside, and be used for folk without XSL support.

How to leverage this to import a local site to a remote server? You must either unpack the source files somewhere on that machine and provide the absolute path where the server can find them, or upload a zip package and I'll try to take it from there. (TODO)

Also TODO is a 'Spidering' method to try to import URL sites. Way in the future!

TODO: Allow settings to set the import content type to something other than 'page'. (Done.)

TODO: Find a way to map more metadata from the original page (assuming there is any to be extracted) to Drupal properties, e.g. get the contents of META keywords into Taxonomy associations.
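
The extraction half of this could already be sketched in an import template. Purely as an illustration (the div id 'keywords' is an assumption, and nothing currently maps it onto taxonomy terms), a rule like the following would surface the keywords in the intermediate document, where the ID mapping described above should pick them up, though such data is discarded unless something knows what to do with it:

<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">

  <!-- Sketch: expose the legacy META keywords in the intermediate doc. -->
  <xsl:template match="/">
    <html>
      <body>
        <div id="keywords">
          <xsl:value-of select="//meta[@name='keywords']/@content"/>
        </div>
        <!-- the usual title / content extraction would sit alongside this -->
      </body>
    </html>
  </xsl:template>

</xsl:stylesheet>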

TODO: There are issues when a page links directly, via an href, to a file that would be regarded as a resource. Most hrefs are re-written to point to the new node, but things like large images or Word docs get imported under 'files'. The stylesheet rewrite_href_and_src.xsl attempts to correct for this, but there may be some side effects. Always run a link checker after import.
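
For the curious, the kind of rule such a rewrite relies on looks roughly like the sketch below. This is an illustration only, not the actual rewrite_href_and_src.xsl; the $file_prefix parameter and the naive '.doc' test are invented for the example.

<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">

  <!-- Hypothetical prefix for the configured import directory. -->
  <xsl:param name="file_prefix" select="'/files/imported/'"/>

  <!-- Identity rule: copy everything through unchanged. -->
  <xsl:template match="@*|node()">
    <xsl:copy>
      <xsl:apply-templates select="@*|node()"/>
    </xsl:copy>
  </xsl:template>

  <!-- Naively treat any href containing '.doc' as a file resource and
       point it at its new home under the files directory. -->
  <xsl:template match="a/@href[contains(., '.doc')]">
    <xsl:attribute name="href">
      <xsl:value-of select="concat($file_prefix, .)"/>
    </xsl:attribute>
  </xsl:template>

</xsl:stylesheet>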

Trouble

The PHP4 XML parser (Sablotron) has trouble with duplicate attributes - if any are found in a tag (as in old, bad HTML), all subsequent input will be flattened to plaintext. Older versions of HTMLTidy, however, do not detect and fix these for us. Make sure that your Tidy supports the repeated-attributes option. It seems the command-line version fixed this somewhere between the 2000 and 2004 releases. (Not sure about the PHP module version - it's PHP5, so it should be OK.)

An issue has been found running under

PHP 4.3.11
xsltproc 1.0.16
libxml 2.6.2
libxslt 1.1.0

(possibly other similar configurations) whereby &lt; and &gt; encoded entities in the input are prematurely converted to literal < and > characters, producing unencoded output. We are pretty sure this is a limitation of the old, incomplete XML implementations of the time. An upgrade to PHP 4.4 solved it in one case, so good luck.

If your server has PHP open_basedir restrictions in effect, the webserver/PHP process may be prevented from accessing files outside of webroot. This is a good security measure, but may stop import_html from reading your source data (even though browsing the source directories may still appear to work). The open_basedir setting can be seen in your phpinfo.
An error like "Local file copy failed (/tmp/1fixed/simple.htm to files/imported/simple.htm)", when you are sure the source file does exist and its permissions allow reading, may be symptomatic of good security on your server. A reasonable fix is to place your source data inside webroot/files (even if just temporarily) to run the import process, then delete it later. Alternatively, copy your data in over the top of webroot (as described in walkthrough.htm) to do an in-place import. Disabling open_basedir is not recommended, and probably requires root privileges anyway. See the Drupal.org issue discussion.

I've gone to great lengths to rewrite the links from the new node locations as relative links to the resources that moved over into /files/, but there are problems. When a/long/path/index.html links to its image by going ../../../files/a/long/path/pic.jpg, it works, which is good. But as a/long/path/index.html is also aliased to a/long/path, that up-and-over path is wrong now that the page is being served from what looks to the browser like a different place.

I don't favour embedding anything that hard-codes the Drupal base_url, and we don't want to use HTML BASE. I want to continue to support portable subsites, so embedding site-rooted links (/files/etc) is not great either.

Currently, by happy chance, going up one ../ too far gets ignored by most browsers, so if you are not running Drupal in a subdirectory, the requests for both styles of page will just work. This means that 80% of cases should get by OK. The rest may need an output filter of some sort, developed some day.

Reference

Long ago, I started building this with reference to the existing import/export module, but I couldn't find many common features. The transitional format the XSL templates convert into is a 'microformat' of XHTML (basically XHTML, but with strictly controlled classes and IDs). This is how I see a platform-agnostic dump of content being exported, when this eventually morphs into import_export_HTML.