02.17.02

Linkrot Combat

Posted in General at 10 pm

It seems to me that the way we link to pages is a problem. You see, the whole URL thing is far too specific and doesn’t allow any room for error. A link is either right or it’s wrong, and these days linking across sites feels like a coin flip, and it’s getting worse.

What I’d like to see is an expansion of the A HREF tag, or perhaps of the URL format, that would allow for a more ‘fuzzy’ link.

Step one should be the ability to point to an alternate URL. If you link directly to some content in the archives of another site and your primary link breaks, the secondary link should point to an alternate location for the content, whether on the calling server’s own website or on another site altogether. A third option would be to link to a higher level in the target site: for example, the archives section of a site, which may still exist even after the primary target’s URL has rotted away.
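The fallback chain in step one could be sketched roughly like this (the function and its behavior are my illustration, not an existing mechanism; the “climb one level up” rule is one possible reading of the higher-level fallback):

```python
# Hypothetical fallback chain for a "fuzzy" link: try the primary URL,
# then an alternate copy, then a higher level of the target site.
from urllib.parse import urlsplit, urlunsplit

def fallback_candidates(primary, alternate=None):
    """Return the ordered list of URLs a fuzzy-link resolver would try."""
    candidates = [primary]
    if alternate:
        candidates.append(alternate)
    # Last resort: climb one level up the primary URL's path, e.g. the
    # archives section that may outlive the exact page.
    scheme, host, path, query, frag = urlsplit(primary)
    parent = path.rsplit("/", 1)[0] + "/"
    candidates.append(urlunsplit((scheme, host, parent, "", "")))
    return candidates

# fallback_candidates("http://www.anybrowser.org/bbedit/grep.shtml",
#                     "http://www.ling.udel.edu/colin/tools/BBEdit/grep_tips.html")
# tries the primary page, then the alternate copy, then
# http://www.anybrowser.org/bbedit/
```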

Step two would be a search phrase. This phrase could be used to find the content in any search engine. Google might be useful for it, though the target site’s own search engine would be better. If I’m looking for the lyrics to a song I have no title for, I’ll search Google for a quoted phrase of whatever words I can make out. (Of course, that doesn’t always work out.)

The search phrase could also be a subject category like ‘deep sea bass’ or ‘flavored leather products’. It would provide additional context for the link, potentially extending its usefulness by surfacing more up-to-date information than the original target provided.
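As a sketch of step two, the search phrase could be turned into a search-engine URL as a last resort (the function name and the choice of Google’s plain web-search URL are my assumptions; a local site search would slot in the same way):

```python
# Hypothetical last-resort fallback: turn the link's search phrase into
# a query against a search engine.
from urllib.parse import quote_plus

def search_fallback(phrase, engine="https://www.google.com/search?q="):
    # Quote the phrase so multi-word searches match as one unit, the way
    # you'd search for half-remembered song lyrics.
    return engine + quote_plus('"%s"' % phrase)
```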

To implement this system, some definitions would need to be written. As an example of what I have in mind, take a link like:

<http://www.anybrowser.org/bbedit/grep.shtml>

and alter it to include a secondary link:

<http://www.anybrowser.org/bbedit/grep.shtml?altlink://www.ling.udel.edu/colin/tools/BBEdit/grep_tips.html>

The URL really looks a bit ungainly. But it’s an option, yes?

Okay, let’s try step two:

<http://www.anybrowser.org/bbedit/grep.shtml?keywords://bbedit%20grep%20tips//>

The nice thing about piggybacking the options into a single URL is that the target web server could handle the processing needed, whether that means a redirect to another page, another server, or a search engine, local or remote. Better yet, the processing could be handled by the 404 error page that the server calls when the primary link fails in the first place. And finally, the request would appear in the server’s log files, alerting the webmaster to the users’ needs and identifying what they were attempting to obtain.
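That 404-side processing might look something like this (a minimal sketch; the `altlink://` and `keywords://` syntax is this post’s own proposal, not an existing standard, and the redirect targets are illustrative):

```python
# Sketch of the 404-side processing: pull the piggybacked altlink or
# keywords out of the failed request's query string and decide where to
# redirect.
from urllib.parse import unquote

def resolve_404(query):
    """Given the query string of a failed request, return a redirect URL
    for the 404 handler, or None if no fuzzy-link data was piggybacked."""
    if query.startswith("altlink://"):
        # An alternate copy of the content, possibly on another host.
        return "http://" + query[len("altlink://"):]
    if query.startswith("keywords://"):
        # Fall back to a search; a local site search would do equally well.
        phrase = unquote(query[len("keywords://"):].rstrip("/"))
        return "https://www.google.com/search?q=" + phrase.replace(" ", "+")
    return None
```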

As well, I think that most sites would be able to ignore the additional information without worrying about harming the flow of their current site.

[Please note: The above text was written while I was under the influence of heavy cold medicines.]
